Hi, everybody! (I hope you hear Dr. Nick's voice when you read this line.)
My SDD boot . . . isn't booting.
Here's how it unfolded: I popped the side off the case to install a couple of top-facing exhaust fans, controlled by the motherboard, bringing the total case fan count to five (two intake, three exhaust), because with that many fans none ever really spins up, so the computer is next to silent. That's all I did in there. I came nowhere near anything having to do with any of the drives.
When I fired up the computer again I got the usual GRUB menu, the second item of which was the 20.04-LTS installation on /dev/sdc1. I chose it.
And it booted forthwith -- to the 20.04-LTS installation on /dev/sda1. No errors, nothing. Just booted the wrong drive.
The SSD (sdc) is working fine -- I can mount it, navigate in it, read and write to it.
Any troubleshooting ideas that don't involve disassembly? This was working fine yesterday and isn't today. -- dep
Pictures: http://www.ipernity.com/doc/depscribe/album Column: https://ofb.biz/author/dep/
said dep:
A correction to what I wrote: it's not SDD, it's the SSD, which is /dev/sdc.
-- dep
dep composed on 2021-08-19 16:52 (UTC):
| Any troubleshooting ideas that don't involve disassembly?
No. You touched things inside. Something must have been disturbed. Are any SATA cables old *and* the same red color as the attachment? If yes, replace it/them.
On Thursday 19 August 2021 18.46:45 dep wrote:
| When I fired up the computer again I got the usual GRUB menu, the second
| item of which was the 20.04-LTS installation on /dev/sdc1. I chose it.
|
| And it booted forthwith -- to the 20.04-LTS installation on /dev/sda1.
| No errors, nothing. Just booted the wrong drive.
|
| The SSD (sdc) is working fine -- I can mount it, navigate in it, read
| and write to it.
|
| Any troubleshooting ideas that don't involve disassembly? This was
| working fine yesterday and isn't today.
| -- dep
BIOS? I suggest this because I just had something like that: I installed Devuan chimaera and... my computer stopped booting.
The BIOS thought it "logical" to change the boot disk from my nvme to the SSD I had installed on (this actually happened after installing TDE, an installation that apparently triggered some recreation of the initrd.img...)
Thierry
said Thierry de Coulon:
| BIOS? I suggest this because I just had something like that: I installed
| Devuan chimaera and... my computer stopped booting.
|
| The BIOS thought it "logical" to change the boot disk from my nvme to
| the SSD I had installed on.
You were right, it appears. I've been annoyed by literally hundreds of BIOSes over the last decades, but I've never totally hated one as much as I hate this Asus UEFI BIOS, which in addition to being all over the place is the most poorly documented piece of computer hardware I've ever encountered, limiting itself to stating the blindingly obvious. -- dep
Hi dep!
Anno domini 2021 Thu, 19 Aug 16:46:45 +0000 dep scripsit:
| Hi, everybody! (I hope you hear Dr. Nick's voice when you read this
| line.)
:)
| My SSD boot . . . isn't booting.
|
| Here's how it unfolded: I popped the side off the case to install a
| couple of top-facing exhaust fans, controlled by the motherboard,
| bringing the total case fan count to five (two intake, three exhaust),
| because with that many fans none ever really spins up, so the computer
| is next to silent. That's all I did in there. I came nowhere near
| anything having to do with any of the drives.
|
| When I fired up the computer again I got the usual GRUB menu, the second
| item of which was the 20.04-LTS installation on /dev/sdc1. I chose it.
|
| And it booted forthwith -- to the 20.04-LTS installation on /dev/sda1.
| No errors, nothing. Just booted the wrong drive.
|
| The SSD (sdc) is working fine -- I can mount it, navigate in it, read
| and write to it.
|
| Any troubleshooting ideas that don't involve disassembly? This was
| working fine yesterday and isn't today.
This usually happens when you update grub from the wrong OS. Can you unplug sda and boot again? If it works, double-check the boot entry in the BIOS. If it's EFI and you have two EFI partitions on your system (each on a different drive), try to get rid of the one you don't use - it should be sufficient to change the partition type.
And move your boot device from sdc to sda --> swap SATA ports.
Nik
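Nik's two-ESP check can be sketched with plain text tools. This is a minimal illustration, not dep's real layout: the pasted lsblk-style sample and the second ESP (sdc3) are invented for the example, and the privileged commands are only shown in comments.

```shell
# Sketch: find every EFI System Partition from `lsblk -nr -o NAME,FSTYPE,PARTTYPE`
# style output. Sample text is hypothetical; partition type GUID
# c12a7328-f81f-11d2-ba4b-00a0c93ec93b marks an ESP.
sample='sda4 vfat c12a7328-f81f-11d2-ba4b-00a0c93ec93b
sdc3 vfat c12a7328-f81f-11d2-ba4b-00a0c93ec93b'
# Print the name of each partition whose type GUID says "EFI System":
echo "$sample" | awk '$3 == "c12a7328-f81f-11d2-ba4b-00a0c93ec93b" {print $1}'
# Retyping the unused one so the firmware ignores it could be done with,
# e.g. (not run here):  sgdisk -t 3:8300 /dev/sdc
```

With two ESPs on different drives, the firmware may pick either one, which is why Nik suggests keeping only the ESP you actually use.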
Hi Nik,
| This usually happens when you update grub from the wrong OS. .....
| And move your boot device from sdc to sda --> swap sata ports.
dep only fiddled around with the fans, nothing else (he said)! As he has a running system - only from the wrong disk - the foremost and simplest approach is to reorder the boot sequence in the BIOS. And then edit grub.cfg on *both* sda and sdc to have the correct disk nomination (UUID, label...) at startup.
Regards, Peter.
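Peter's grub.cfg check can be sketched as follows, assuming the stock Ubuntu form of the generated config. The `line` sample mirrors what grub-mkconfig typically emits; the commented commands are what one would run on the live system.

```shell
# Sketch only: confirm which filesystem UUID each grub.cfg will search for.
# On the live system one would run (not executed here):
#   blkid -s UUID -s LABEL /dev/sda1 /dev/sdc1
#   grep 'search --no-floppy --fs-uuid' /boot/grub/grub.cfg
# A stock Ubuntu grub.cfg menu entry locates its root like this (sample):
line='search --no-floppy --fs-uuid --set=root 00c71d45-5bc2-40a7-b52f-cd73c82d294f'
# Extract the 36-character UUID the entry will search for:
echo "$line" | grep -o '[0-9a-f-]\{36\}'
```

If two partitions carry the same filesystem UUID, GRUB's search may settle on whichever it probes first, so a menu entry meant for sdc1 can silently land on sda1.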
said phiebie@drei.at:
| As he has a running system - only from the wrong disk - there's the
| foremost simplest approach, reorder the boot-sequence in the BIOS. And
| then edit grub.cfg *both* on sda and sdc to have the correct
| disk-nomination (UUID, Label....) as startup.
I did indeed install fans and nothing more.
The cables are fine.
This BIOS-from-hell doesn't have the kind of boot-order configuration to which I (and I suspect most others) am accustomed, as in floppy, usb, cd-rom, hd1, hd2, etc., so half the time in the BIOS is spent writing down which configurations didn't work before trying another one. The one that seems likeliest renders only the infamous black screen with blinking cursor. The way that previously worked was to let the UEFI drive boot GRUB, whence I'd hand it all off to the GRUB entry for the Linux installation on /dev/sdc1. It would then boot, quickly, and checking mount confirmed that I was in fact running off the / on sdc1.
But now it does not do that, at least not entirely. I can select the sdc1 installation at the grub menu -- but it goes ahead and boots sda1 anyway.
I've gone so far as to chroot into /dev/sdc1 and run update-grub there. The result is as expected, with the first item on the menu being the /dev/sdc1 Linux.
the sda1 fstab:
/dev/sda1       /          ext4  errors=remount-ro  0  1
/dev/sda3       /home      ext4  defaults           0  2
/dev/sda2       none       swap  sw                 0  0
UUID=8C5E-D456  /boot/efi  vfat  defaults           0  1
the sdc1 fstab:

LABEL=BOOT      /          ext4  errors=remount-ro  0  1
LABEL=HOME      /home      ext4  defaults           0  2
LABEL=swap      none       swap  sw                 0  0
UUID=8C5E-D456  /boot/efi  vfat  defaults           0  1
The labels point to the same devices/partitions except for the first one -- LABEL=BOOT is /dev/sdc1 -- and the swap, which are on sda2 and sdc2 respectively. (sda1 has the label BOOT-HD, fwiw.) -- dep
dep composed on 2021-08-19 20:43 (UTC):
| the sda1 fstab:
|
| /dev/sda1       /          ext4  errors=remount-ro  0  1
| /dev/sda3       /home      ext4  defaults           0  2
| /dev/sda2       none       swap  sw                 0  0
| UUID=8C5E-D456  /boot/efi  vfat  defaults           0  1
|
| the sdc1 fstab:
|
| LABEL=BOOT      /          ext4  errors=remount-ro  0  1
| LABEL=HOME      /home      ext4  defaults           0  2
| LABEL=swap      none       swap  sw                 0  0
| UUID=8C5E-D456  /boot/efi  vfat  defaults           0  1
|
| The labels point to the same devices/partitions except for the first
| one -- LABEL=BOOT is /dev/sdc1 -- and the swap, which are on sda2 and
| sdc2 respectively. (sda1 has the label BOOT-HD, fwiw.)
I'd like to see:
lsblk -f
grep GRUB_DISTRIBUTOR= (on/from both sda1 and sdc1)
grep GRUB_DISABLE_OS_PROBER= (on/from both sda1 and sdc1)
tree -D /boot/efi/ (on/from both sda1 and sdc1)
efibootmgr -v
said Felix Miata:
| I'd like to see:
|
| lsblk -f
NAME     FSTYPE LABEL    UUID                                 FSAVAIL FSUSE% MOUNTPOINT
sda
├─sda1   ext4   BOOT-HD  00c71d45-5bc2-40a7-b52f-cd73c82d294f   89.5G    17% /
├─sda2   swap            aa889171-f211-48a0-8922-2230feb99b2d                [SWAP]
├─sda3   ext4   HOME     f33fcb46-c800-4a83-acef-3952e6f813d7    3.5T    29% /home
└─sda4   vfat            8C5E-D456                              286.4M     4% /boot/efi
sdb
└─sdb1   ext4   Pictures 0c42b57d-9331-4865-9972-e1bfa44c3cc0
sdc
├─sdc1   ext4   BOOT     00c71d45-5bc2-40a7-b52f-cd73c82d294f
└─sdc2   swap   swap     100a95b5-178d-4356-a95d-808d9310e5d6
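Worth noting in the lsblk output above: sda1 and sdc1 report the same filesystem UUID (00c71d45-5bc2-40a7-b52f-cd73c82d294f), which is typical of a cloned drive and would let anything that finds filesystems by UUID land on either partition. A small self-contained sketch that flags such duplicates from `lsblk -nr -o NAME,UUID`-style text (the sample lines are pasted in from the output above):

```shell
# Flag any UUID that appears on more than one partition.
lsblk_text='sda1 00c71d45-5bc2-40a7-b52f-cd73c82d294f
sda2 aa889171-f211-48a0-8922-2230feb99b2d
sdc1 00c71d45-5bc2-40a7-b52f-cd73c82d294f
sdc2 100a95b5-178d-4356-a95d-808d9310e5d6'
echo "$lsblk_text" | awk 'NF == 2 {devs[$2] = devs[$2] " " $1; n[$2]++}
  END {for (u in n) if (n[u] > 1) print "duplicate UUID " u ":" devs[u]}'
# Giving one clone a fresh UUID could be done (with the fs unmounted!)
# with, e.g.:  tune2fs -U random /dev/sdc1
# followed by updating that system's fstab and re-running update-grub.
```

This is a sketch, not a prescription; the tune2fs step is only shown as a comment because changing a UUID out from under fstab/grub without updating them would make things worse.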
| grep GRUB_DISTRIBUTOR= (on/from both sda1 and sdc1)
This returns nothing, not even the command prompt, when grep GRUB_DISTRIBUTOR= is run from an sda boot by either user or root; likewise grep GRUB_DISTRIBUTOR=/dev/sda. and /dev/sdc.
| grep GRUB_DISABLE_OS_PROBER= (on/from both sda1 and sdc1)
Same as the command above. Is it supposed to take a very long time?
| tree -D /boot/efi/ (on/from both sda1 and sdc1)
/boot/efi/
├── [May 18 12:35] boot
│   └── [May 18 12:35] grub
│       ├── [May 18 12:35] fonts
│       │   └── [May 18 19:38] unicode.pf2
│       ├── [May 18 12:35] grubenv
│       └── [May 18 19:38] i386-pc
│           ├── [May 18 19:38] 915resolution.mod
│           ├── [May 18 19:38] acpi.mod
│           ├── [May 18 19:38] adler32.mod
│           ├── [... further .mod/.lst/.img GRUB files, all May 18 19:38 ...]
│           └── [May 18 19:38] zstd.mod
├── [May 18 19:42] chrootdir
│   ├── [May 18 19:42] bin
│   ├── [May 18 19:42] boot
│   ├── [May 18 19:42] dev
│   ├── [May 18 19:42] etc
│   ├── [May 18 19:42] lib
│   ├── [May 18 19:42] lib64
│   ├── [May 18 19:42] proc
│   ├── [May 18 19:42] sbin
│   ├── [May 18 19:42] sys
│   ├── [May 18 19:42] tmp
│   ├── [May 18 19:42] usr
│   └── [May 18 19:42] var
├── [May 18 14:36] EFI
│   ├── [Jun  2 14:45] BOOT
│   │   ├── [May 19 12:55] bkpbootx64.efi
│   │   ├── [Aug 19 14:24] bootx64.efi
│   │   ├── [Aug 19 14:24] fbx64.efi
│   │   ├── [May 18 14:36] grubx64.efi
│   │   └── [Aug 19 14:24] mmx64.efi
│   └── [May 18 14:36] ubuntu
│       ├── [Aug 19 14:24] BOOTX64.CSV
│       ├── [Aug 19 14:24] grub.cfg
│       ├── [Aug 19 14:24] grubx64.efi
│       ├── [Aug 19 14:24] mmx64.efi
│       └── [Aug 19 14:24] shimx64.efi
├── [May 18 19:46] pulse-acTIQ9prDraK
├── [May 18 19:46] pulse-BO8rab7rY4lB
├── [May 18 19:46] pulse-EuAfZDG8WTl5
├── [May 18 19:46] pulse-IdmWC5udLPbv
├── [May 18 19:46] pulse-itonj14tqzSI
├── [May 18 19:47] pulse-jkCqoIbQVFZo
├── [May 18 19:47] pulse-kPBq8yWh5Jee
├── [May 18 19:46] pulse-lzFzupb0zgEy
├── [May 18 19:47] pulse-niXKFFp0bJJb
├── [May 18 19:46] pulse-NSxaW1wjVbvs
├── [May 18 19:46] pulse-PKdhtXMmr18n
├── [May 18 19:47] pulse-syiX9ctikWzx
├── [May 18 19:46] pulse-x6OmHDaiNHeB
├── [May 18 19:47] pulse-X9yRZDRMYSQx
└── [May 18 19:48] sh-thd.nV4iZ7

34 directories, 298 files
When I run the command on /dev/sdc1 it runs through 44086 directories and 447378 files which when piped to a file runs 42 megs, so I'll not attach it.
| efibootmgr -v
BootCurrent: 0000
Timeout: 0 seconds
BootOrder: 0000,0002,0006,0005
Boot0000* ubuntu	HD(4,GPT,f3d5923e-cff9-4b47-beb9-be3a8442e1a0,0x800,0x96000)/File(\EFI\ubuntu\shimx64.efi)
Boot0002* UEFI OS	HD(4,GPT,f3d5923e-cff9-4b47-beb9-be3a8442e1a0,0x800,0x96000)/File(\EFI\BOOT\BOOTX64.EFI)
Boot0005* Hard Drive	BBS(HD,,0x0)AMGOAMNO........o.W.D.C. . .W.D.S.5.0.0.G.1.R.0.A.-.6.8.A.4.W.0....................A...........................>..Gd-.;.A..MQ..L.1.2.4.2.0.2.0.A.0.0.A.1. . . . . . . . ......AMBOAMNO........o.W.D.C. .W.D.6.0.0.3.F.R.Y.Z.-.0.1.F.0.D.B.0....................A...........................>..Gd-.;.A..MQ..L.9.V.E.H.0.L.L.2. . . . . . . . . . . . ......AMBOAMNO........o.W.D.C. .W.D.1.0.1.K.R.Y.Z.-.0.1.J.P.D.B.0....................A...........................>..Gd-.;.A..MQ..L.P.7.X.G.W.0.G.M. . . . . . . . . . . . ......AMBOAMNO........q.T.S.-.R.D.F.5. .S.D. .T.r.a.n.s.c.e.n.d....................A.............................>..Gd-.;.A..MQ..L.T.S.-.R.D.F.5. .S.D. .T.r.a.n.s.c.e.n.d......AMBO
Boot0006* ubuntu	HD(4,GPT,f3d5923e-cff9-4b47-beb9-be3a8442e1a0,0x800,0x96000)/File(EFI\Ubuntu\grubx64.efi)
Has this cast any light on the situation? -- dep
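One way to read the efibootmgr output above: BootOrder gives the order in which the firmware tries entries. A self-contained parse of the pasted header lines (the efibootmgr command at the end is illustrative only and not run here):

```shell
# Which entry does the firmware try first? Parse the pasted output.
efi='BootCurrent: 0000
Timeout: 0 seconds
BootOrder: 0000,0002,0006,0005'
first=$(echo "$efi" | awk -F': ' '/^BootOrder/ {split($2, o, ","); print o[1]}')
echo "firmware tries Boot$first first"
# Promoting a different entry, say Boot0006 (the grubx64.efi one), would
# be done with, e.g. (not run here):
#   sudo efibootmgr -o 0006,0000,0002,0005
```

Note all three named entries (0000, 0002, 0006) point at partition 4 of the same GPT disk (f3d5923e-...), i.e. the one ESP on sda; none of them references the SSD.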
said dep:
| When I run the command tree -D /boot/efi/ on /dev/sdc1 it runs through | 44086 directories and 447378 files which when piped to a file runs 42 | megs, so I'll not attach it.
Looking, however, in a file browser, I see that /dev/sdc1/boot/efi is stone empty. It is, as mentioned, populated in /dev/sda1/. -- dep
dep composed on 2021-08-19 23:23 (UTC):
| tree -D /boot/efi/ (on/from both sda1 and sdc1)
| ├── [May 18 14:36] EFI
| │   ├── [Jun  2 14:45] BOOT
| │   │   ├── [May 19 12:55] bkpbootx64.efi
| │   │   ├── [Aug 19 14:24] bootx64.efi
| │   │   ├── [Aug 19 14:24] fbx64.efi
| │   │   ├── [May 18 14:36] grubx64.efi
| │   │   └── [Aug 19 14:24] mmx64.efi
| │   └── [May 18 14:36] ubuntu
| │       ├── [Aug 19 14:24] BOOTX64.CSV
| │       ├── [Aug 19 14:24] grub.cfg
| │       ├── [Aug 19 14:24] grubx64.efi
| │       ├── [Aug 19 14:24] mmx64.efi
| │       └── [Aug 19 14:24] shimx64.efi
... Above is approximately all I expected to see. This is from one of mine:

# tree -f /boot/efi/
/boot/efi
├── /boot/efi/EFI
│   ├── /boot/efi/EFI/BOOT
│   │   ├── /boot/efi/EFI/BOOT/BOOTX64.EFI
│   │   └── /boot/efi/EFI/BOOT/fbx64.efi
│   ├── /boot/efi/EFI/debian10
│   │   └── /boot/efi/EFI/debian10/grubx64.efi
│   ├── /boot/efi/EFI/opensuse
│   │   └── /boot/efi/EFI/opensuse/grubx64.efi
│   ├── /boot/efi/EFI/opensusetw
│   │   └── /boot/efi/EFI/opensusetw/grubx64.efi
│   └── /boot/efi/EFI/tubuntu
│       ├── /boot/efi/EFI/tubuntu/BOOTX64.CSV
│       ├── /boot/efi/EFI/tubuntu/grub.cfg
│       ├── /boot/efi/EFI/tubuntu/grubx64.efi
│       ├── /boot/efi/EFI/tubuntu/mmx64.efi
│       └── /boot/efi/EFI/tubuntu/shimx64.efi
├── /boot/efi/grub2
│   └── /boot/efi/grub2/custom.cfg
| When I run the command on /dev/sdc1 it runs through 44086 directories
| and 447378 files which when piped to a file runs 42 megs, so I'll not
| attach it.
That I totally would not have expected. I suppose it may be that tree mishandles chroot environments.
| efibootmgr -v
| BootCurrent: 0000
| Timeout: 0 seconds
| BootOrder: 0000,0002,0006,0005
| Boot0000* ubuntu HD(4,GPT,f3d5923e-cff9-4b47-beb9-be3a8442e1a0,0x800,0x96000)/File(\EFI\ubuntu\shimx64.efi)
| Has this cast any light on the situation?
I was hopeful, but the GRUB_DISTRIBUTOR=s and GRUB_DISABLE_OS_PROBER=s are missing, and I may need blkid output as well to confirm where I think f3d5923e-cff9-4b47-beb9-be3a8442e1a0 comes from.
Felix Miata composed on 2021-08-19 18:11 (UTC-0400):
| grep GRUB_DISTRIBUTOR= (on/from both sda1 and sdc1)
| grep GRUB_DISABLE_OS_PROBER= (on/from both sda1 and sdc1)
I left out the file to grep:
/etc/default/grub
e.g.
grep GRUB_DISTRIBUTOR= /etc/default/grub
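For what it's worth, run without a filename grep reads standard input, which would explain the earlier "returns nothing, not even the command prompt" behavior. With the file supplied, the result might look like this sketch; the file content below is an assumed sample, not dep's actual /etc/default/grub:

```shell
# Sample /etc/default/grub content (assumed, for illustration only):
sample='GRUB_DEFAULT=0
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=Ubuntu
GRUB_DISABLE_OS_PROBER=false'
# The two greps Felix asked for, run against the sample:
echo "$sample" | grep -E 'GRUB_DISTRIBUTOR=|GRUB_DISABLE_OS_PROBER='
```

On the real machines the command would be run once per installation, against each root filesystem's own /etc/default/grub.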
So, then. I popped into the bios yet again. It lists four boot options. One is UEFI-something; two are Ubuntu-something (and based on what appear to be hard drive serial numbers, those two seem to be the same); and one is Ubuntu-something that is the SSD. Booting to the last of these produces the black screen and flashing cursor. Booting to the UEFI-something seems to boot okay, producing the grub menu but booting me to sda1 no matter what I choose from the menu. But -- AHA? -- choosing one of the plain Ubuntu choices (I don't know which one, because they appear identical) allows me to choose from the grub menu to boot the Linux on sdc1, and when I make that choice I appear to actually boot sdc1. From "mount" output:
/dev/sdc1 on / type ext4 (rw,relatime,errors=remount-ro)
What I can't figure out is how this got changed from what worked before, and how to keep it from changing again. -- dep
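A quick way to confirm which device is / is to parse a mount line; the sample below is the line quoted above, so the snippet is self-contained. On a live system `findmnt -n -o SOURCE /` answers the same question directly.

```shell
# Parse a `mount`-style line to report the device behind /.
mount_line='/dev/sdc1 on / type ext4 (rw,relatime,errors=remount-ro)'
echo "$mount_line" | awk '$2 == "on" && $3 == "/" {print "root is on " $1}'
```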
Anno domini 2021 Fri, 20 Aug 00:02:18 +0000 dep scripsit:
| But -- AHA? -- choosing one of the plain Ubuntu choices (I don't know
| which one, because they appear identical) allows me to choose from the
| grub menu to boot the Linux on sdc1, and when I make that choice I
| appear to actually boot sdc1. From "mount" output:
|
| /dev/sdc1 on / type ext4 (rw,relatime,errors=remount-ro)
|
| What I can't figure out is how this got changed from what worked
| before, and how to keep it from changing again.
As said, swap the 2 sata ports.
And get rid of the old OS on sda1 :)
Nik
said Dr. Nikolaus Klepp:
| > What I can't figure out is how this got changed from what worked
| > before, and how to keep it from changing again.
|
| As said, swap the 2 sata ports.
If it boots reliably from sdc1, as it does now, why would I want to do that?
| And get rid of the old OS on sda1 :)
I may not have been clear as to my purpose. The OS on sda1 is *identical* to the one on the SSD, sdc1. I hope to take advantage of the improved speed of the SSD, but I do not utterly trust SSDs (nor hard drives, but I come closer to trusting those). By having both, should the SSD fail I can simply boot an identical system by making that choice in the GRUB menu. So in this case it really is a feature, not a bug. -- dep
Pictures: http://www.ipernity.com/doc/depscribe/album Column: https://ofb.biz/author/dep/
Anno domini 2021 Fri, 20 Aug 13:40:03 +0000 dep scripsit:
| If it boots reliably from sdc1, as it does now, why would I want to do that?
Good question :) Let's call it habit. When you have more than one computer with more than one hard drive, and each of them may or may not be able to boot from each hard drive, it's good to ensure that each machine can boot from the first device on SATA 1; otherwise you start cursing your younger self for being such a moron, playing the find-the-boot-drive guessing game with your older self.
| And get rid of the old OS on sda1 :)
| I may not have been clear as to my purpose. The OS on sda1 is *identical* to the one on the SSD, sdc1. I hope to take advantage of the improved speed of the SSD, but I do not utterly trust SSDs (nor hard drives, but I come closer to trusting those). By having both, should the SSD fail I can simply boot an identical system by making that choice in the GRUB menu. So in this case it really is a feature, not a bug.
Sure. But why not put that (old) sda in cold storage? In your scenario you won't need it till things get "interesting". Or do you plan to keep both devices in sync, e.g. make every config change twice and repeat every OS upgrade?
Nik
said Dr. Nikolaus Klepp: | Anno domini 2021 Fri, 20 Aug 13:40:03 +0000 | | dep scripsit:
| > If it boots reliably from sdc1, as it does now, why would I want to do | > that? | | Good question :) Let's call it habit. When you have more than one | computer with more than one hard drive and each of them may or may not be | able to boot from each hard drive it's good to ensure that each machine | can boot from the first device on SATA 1, otherwise you start cursing | your younger self for being such a moron playing the | find-the-boot-drive guessing game with your older self.
I have this thing on my hard drives called GRUB that presents upon reboot a selection from which I can choose.<g> And I'm not running a computer farm here -- keeping track of which is which is pretty easy.
| > | And get rid of the old OS on sda1 :) | > | > I may not have been clear as to my purpose. The OS on sda1 is | > *identical* to the one on the SSD, sdc1. I hope to take advantage of | > the improved speed of the SSD, but I do not utterly trust SSDs (nor | > hard drives, but I come closer to trusting those). By having both, | > should the SSD fail I can simply boot an identical system by making | > that choice in the GRUB menu. So in this case it really is a feature, | > not a bug. | | Sure. But why not put that (old) sda in cold storage? In your scenario | you won't need it till things get "interesting". Or do you plan to keep | both devices in sync? e.g. do any config twice, repeat any OS upgrade?
There are several reasons. The first is that the machine would be limited in its usefulness without a /home directory, which resides on the 6 TB sda1. Yes, I could move cables around, after which I could also spend hours changing links that currently rely on its being on sda1, and on the other drives being where they currently are.
And yes, I do intend to keep them in sync, which simply requires booting to the hard drive once every couple of weeks and doing an apt update and upgrade. Most configuration I need to do is in userspace, and since both boot partitions point to the same ~/, the configurations will work on either one. The switch to the SSD for daily use as / is a calculated tradeoff of reliability for speed, and maintaining a boot partition on sda1 -- as you note, the default for most everybody -- is a way of mitigating the risk. Since I'm often on deadline, the ability to keep going despite an SSD failure, and to deal with the failed SSD at my later convenience, is preferable to going to cold storage, taking the computer apart, and then updating the Linux onboard before I can do what I need to do right now. That was my design and, with the help of you and others here, I've been able to achieve it. (The problem at the top of this thread, it's now pretty clear, was due to the BIOS being . . . awful.)
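For what it's worth, the every-couple-of-weeks sync routine described above amounts to just this (a sketch; it assumes the fallback install on sda1 has been booted from the GRUB menu first):

```shell
# Sketch of the sync routine: run after booting the fallback install
# on sda1 (device roles are as described in this thread).
sudo apt update     # refresh the package lists
sudo apt upgrade    # apply the same upgrades already applied on the SSD side
```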
And I don't do a full OS upgrade all that often, every couple of years at most. I used to get all excited and burn every new kernel that Linus released -- I remember the exciting Christmas of 2000, which I spent building and testing a release candidate for 2.4.0 (and which I didn't mind -- the in-laws were visiting) -- but I haven't done that for many years. Though having a machine like the one I have now would have made that easier, because I'd have had a pretty easy out when the damned new kernel blew up! -- dep
Pictures: http://www.ipernity.com/doc/depscribe/album Column: https://ofb.biz/author/dep/
Anno domini 2021 Fri, 20 Aug 15:16:02 +0000 dep scripsit:
| There are several reasons. The first is that the machine would be limited in its usefulness without a /home directory, which resides on the 6 TB sda1. Yes, I could move cables around, after which I could also spend hours changing links that currently rely on its being on sda1, and on the other drives being where they currently are.
Ah, ok, I somehow missed the part with /home still on sda1 ... sounds like a good reason to keep that drive :)
Nik
said Dr. Nikolaus Klepp:
| Ah, ok, I somehow missed the part with /home still on sda1 ... sounds | like a good reason to keep that drive :)
As well you should have! I mistyped -- /home is on sda3, not sda1. Its being on sda3, as you note, justifies keeping the drive; had it been on sda1, that would have been begging for trouble, and I would have deserved any ill that befell me!
Drive organization is, it seems to me, a science and an art unto itself. One of the reasons I hated-hated-hated KDE 4.x was the boys' insistence on sprinkling the (at first just terrible) KDE all over the place instead of maintaining the tradition of putting it where it belongs, in /opt/kde. (That was another time when many of us were compiling the whole thing every week or so. I did a Linux Planet piece in which I tested whether one could compile all of KDE -- 3.x at the time, I think -- in the time it took to drive from Newtown, Connecticut, to Key West, Florida. The compile, on a P-133, took longer! But in /opt it was easy to rename the old, working directory, compile the new KDE, and if it didn't work switch back to the old version.) -- dep
Pictures: http://www.ipernity.com/doc/depscribe/album Column: https://ofb.biz/author/dep/
Felix Miata composed on 2021-08-19 18:11 (UTC-0400):
grep GRUB_DISTRIBUTOR= (on/from both sda1 and sdc1)
grep GRUB_DISABLE_OS_PROBER= (on/from both sda1 and sdc1)
I left out the file to grep:
/etc/default/grub
e.g.
grep GRUB_DISTRIBUTOR= /etc/default/grub
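To check both installs in one sitting, something like the following should work from the running sda1 system; the mount point /mnt/sdc1 is made up for illustration:

```shell
# Compare the GRUB defaults on the running system (sda1) with those on
# the sdc1 install; /mnt/sdc1 is a hypothetical mount point.
sudo mkdir -p /mnt/sdc1
sudo mount /dev/sdc1 /mnt/sdc1
grep GRUB_DISTRIBUTOR= /etc/default/grub /mnt/sdc1/etc/default/grub
grep GRUB_DISABLE_OS_PROBER= /etc/default/grub /mnt/sdc1/etc/default/grub
```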
said Dr. Nikolaus Klepp: | Hi dep! | | Anno domini 2021 Thu, 19 Aug 16:46:45 +0000 | | dep scripsit: | > Hi, everybody! (I hope you hear Dr. Nick's voice when you read this | > line.) | > | :) | : | > My SDD boot . . . isn't booting. | > | > Here's how it unfolded: I popped the side off the case so as to | > install a couple of top-facing exhaust fans, controlled by the | > motherboard, bring the total case fan count to five (two intake, three | > exhaust), because with that many fans none ever really spins up, so | > the computer is next to silent. That's all I did in there. I came | > nowhere near anything having to do with any of the drives. | > | > When I fired up the computer again I got the usual GRUB menu, the | > second item of which was the 20.04-LTS installation on /dev/sdc1. I | > chose it. | > | > And it booted forthwith -- to the 20.04-LTS installation on /dev/sda1. | > No errors, nothing. Just booted the wrong drive. | > | > The SDD (sdc) is working fine -- I can mount it, navigate in it, read | > and write to it. | > | > Any troubleshooting ideas that don't involve disassembly? This was | > working fine yesterday and isn't today. | | This usually happens when you update grub from the wrong OS. Can you | unplug sda, boot again. If it works, doublecheck the boot entry in the | bios. If it's efi and you have 2 efi partions on your system (each on a | different drive) try to get rid of the one you don't use - should be | sufficient to change the partition type.
That's a problem: sda is on a sled that is permanently located; I could *conceivably* switch the cables but it would not be pretty. And I haven't been updating grub.
As to the EFI partition -- to the best of my knowledge I have only one, sda4, the purpose of which has never been even slightly clear to me, except that this can't possibly be the best way of doing things. There is none on the SSD, which is only 500 gigs. And again, this was working, in its current condition, until yesterday, when I shut down to add a couple of case fans.
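Two read-only commands can confirm how many EFI System Partitions and firmware boot entries actually exist -- a sketch, assuming stock Ubuntu 20.04 tooling:

```shell
# List partition types to spot any EFI System Partitions, then show the
# firmware boot entries and which disk/partition each one points at.
# Neither command changes anything.
lsblk -o NAME,SIZE,FSTYPE,PARTTYPENAME,MOUNTPOINT
sudo efibootmgr -v
```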
I had previously changed the partition references in /etc/fstab to LABEL= entries; if that had any effect at all anywhere, it is certainly well hidden. -- dep
Pictures: http://www.ipernity.com/doc/depscribe/album Column: https://ofb.biz/author/dep/
Anno domini 2021 Thu, 19 Aug 18:03:07 +0000 dep scripsit:
| I had previously changed the partition labels to LABEL= in /etc/fstab; if that had any effect at all anywhere it is certainly well hidden.
Can you boot /dev/sdc1 from the GRUB command line?
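For reference, a hand boot of sdc1 from the GRUB prompt looks roughly like this -- a sketch only, since the (hdX,gptY) numbering GRUB uses may not match the kernel's sda/sdc naming, and the paths assume Ubuntu's /boot/vmlinuz and /boot/initrd.img symlinks; `ls` and tab completion at the prompt will show what is actually there:

```text
grub> ls                               # see which (hdX,gptY) partitions exist
grub> set root=(hd2,gpt1)              # guess at the partition holding sdc1
grub> linux /boot/vmlinuz root=/dev/sdc1 ro
grub> initrd /boot/initrd.img
grub> boot
```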