Greetings;
I have a weird problem in that I seem to be able to access the remote machine, except for the most important directory on it, /home. An ls /net/shop/home gets me:
ls: cannot access /net/shop/home: Stale NFS file handle
This, on the other hand, is expected:
sudo ls /net/shop/root
permission denied
But every other directory that actually exists on the drive is ls'able. I've restarted all the NFS-related stuff; the only thing I haven't done is reboot this machine.
Subject says it all. I need to find the experts.
Cheers, Gene Heskett
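For reference, a stale handle on an automounted path like this can often be cleared from the client without a reboot, by forcing the stale mount away and letting the automounter redo it. A minimal sketch, assuming the stale entry really is /net/shop/home:
# see what is actually mounted under /net/shop
mount | grep /net/shop
# force the stale mount off; fall back to a lazy unmount if it is busy
sudo umount -f /net/shop/home || sudo umount -l /net/shop/home
# touching the path again should make the automounter remount it
ls /net/shop/home
Whether that sticks depends on what made the handle go stale on the server side in the first place.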
Aside from the usual cricket havens (kernel mailing lists, where no one seems able to answer your questions ;-)) you might try the Arch Linux boards of all places. There are a lot of knowledgeable people there, and if you can distill the issue down enough you might get by even though you are using a different distro. :-)
That said, a stale handle normally means the client has been disconnected from the server without knowing why; since you can access other directories on that server I'd be looking at:
1.) permissions on the server side
2.) stale / broken portmapper / RPC services
3.) any firewalls between the two boxes, possibly blocking only certain ports
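A few quick commands cover the first two of those checks from the client side; a sketch, assuming the server answers to the name shop and stock Debian NFS tooling:
# is the portmapper answering, and are mountd / nfsd registered with it?
rpcinfo -p shop
# what does the server say it is exporting?
showmount -e shop
# and on the server itself: what the kernel actually has exported right now
sudo exportfs -v
# re-export after any change to /etc/exports
sudo exportfs -ra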
If it isn't any of the easy things like that you're going to have to use Wireshark and get packet traces.
Tim
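If it does come down to packet traces, a capture limited to NFS traffic keeps the noise down; a sketch, assuming the client's interface is eth0 and the server is shop (2049 is the NFS port, 111 the portmapper):
sudo tcpdump -i eth0 -s 0 -w nfs-shop.pcap 'host shop and (port 2049 or port 111)'
The resulting nfs-shop.pcap can then be opened in Wireshark and narrowed further with the display filter nfs || portmap.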
On Wednesday 23 September 2015 11:28:53 Timothy Pearson wrote:
1.) permissions on the server side
Define "server" please Timothy. I am under the impression that any machine with an /etc/exports file is the "server" at that instant. On GO704: drwxr-xr-x 3 root root 4096 Jun 21 15:24 home On lathe: drwxr-xr-x 3 root root 4096 May 8 09:16 home On shop: drwxr-xr-x 5 root root 4096 May 19 10:31 home
Which might be a clue: link counts of 3 on the two machines that work, and a 5, plus a shorter line format, on the machine that doesn't. All were installed from the same DVD image.
2.) Stale / broken portmapper / RPC services
Which I can force reinstall...
3.) Any firewalls between the two boxes, possibly blocking only certain ports
None.
If it isn't any of the easy things like that you're going to have to use Wireshark and get packet traces.
Tim
'Twasn't installed; it is now. It gave me hell for running it as root (via sudo), but it won't trace eth0 (no permissions) if I don't. I let it capture about 500 packets, including a couple of directory listings on "shop", but even though I expanded every interchange between here and shop, there isn't a thing there to identify the fact that I am trying to do, or just did, a directory listing. So I can't connect what I'm seeing in the capture with what I typed, and the response I received, in a different workspace/terminal.
IOW, I am lost. BTDT, several times now. ;-)
I have to assume the man page will tell me what to do, so I'll see if I can suss it out.
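For what it's worth, a directory listing over NFSv3 shows up in Wireshark as READDIR or READDIRPLUS calls and their replies, so typing the display filter
nfs
into the filter bar and watching for those while re-running the ls in the other terminal should make the exchange stand out. (If the mount negotiated NFSv4 instead, the same operations are wrapped inside COMPOUND calls, which is harder to eyeball.)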
A query on the debian list made last night has so far been ignored. And I am not subbed to the arch list, but it sounds like I should be.
Thank you Timothy.
Cheers, Gene Heskett
On Wednesday 23 September 2015 06:11:27 am Gene Heskett wrote:
Subject says it all. I need to find the experts.
I use my local Linux User Group, full service.
I use NFS on my local network; it just works, so I am far from an expert. I export my "/home/<user>" dir and manually mount it, via the CLI, on the clients.
Debian stable.
On Wednesday 23 September 2015 13:55:32 Greg Madden wrote:
On Wednesday 23 September 2015 06:11:27 am Gene Heskett wrote:
I've restarted all the NFS-related stuff; the only thing I haven't done is reboot this machine.
I have done that now; no really meaningful change. The stale file handle error seems to be gone, but the return now is a null byte.
I use my local Linux User Group, full service.
My local linux user group. Chuckle. I am 1 of a group of 3. Not too many linux users in these here parts. I am quite likely 100 miles from the nearest "user group" that numbers 10 or more.
Debian stable.
Debian Wheezy. With TDE.
Cheers, Gene Heskett
Hi Gene!
I dropped NFS on Linux ages ago, due to issues similar to the ones you describe. Now I use SSHFS and haven't had any issues since. So, what about using SSHFS instead of NFS?
Nik
On Thursday 24 September 2015 03:03:10 Dr. Nikolaus Klepp wrote:
Never heard of it till now. So I installed it, along with sshmenu, which pulled in a dozen other Ruby-ish packages.
Silly question, though: does mc understand sshfs? Or do I need to find a new two-pane file manager that does?
One thing's for sure: NFS, even v4, is old enough to have bit rot.
Thanks Nik. Off to read some man pages.
Cheers, Gene Heskett
On Thursday, 24 September 2015, Gene Heskett wrote:
Hi Gene!
SSHFS is just like any other filesystem, so mc has no problem with it. But may I ask, what do you do with sshmenu? Usually I just mount remote directories like this:
sshfs nik@somehost:/just/the/path /where/i/want/it
and unmount:
fusermount -u /where/i/want/it
Nik
On Thursday 24 September 2015 05:27:44 Dr. Nikolaus Klepp wrote:
sshfs nik@somehost:/just/the/path /where/i/want/it
And that works, once I had done a "sudo chown -R gene:gene /sshnet"; ls -l now works on /sshnet/shop, with my pw. As expected, I am limited to what I own on shop, but that's a heck of a lot better than before. I'll get all the details together in a script. I assume we still have an expect util I can feed my pw to? I haven't looked lately, as in 5+ years and several installs ago. Too late in the night here, or too early in the morning, to spend a lot of time on it until I've found some more sleep.
Thanks Nik.
and unmount:
fusermount -u /where/i/want/it
I'll let reboots do that. :)
Cheers, Gene Heskett
Hi Gene!
There's something better than expect:
$ ssh-copy-id nik@remotehost
and from there I can log into nik@remotehost without a password. "ssh-copy-id" appends your local "~/.ssh/id_rsa.pub" to the remote user's "~/.ssh/authorized_keys".
Nik
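If ssh-copy-id complains that it has no identities to copy, a key pair simply doesn't exist yet on that client; a minimal sketch of generating one and pushing it out, with gene@shop as an example target and an empty passphrase so unattended sshfs mounts can work:
# generate ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub, accepting the defaults
ssh-keygen -t rsa
# copy the public half to the remote authorized_keys (asks for the password one last time)
ssh-copy-id gene@shop
# this should now log in without a password prompt
ssh gene@shop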
On Thursday 24 September 2015 07:34:41 Dr. Nikolaus Klepp wrote:
$ ssh-copy-id nik@remotehost
Problem:
gene@coyote:/etc$ ssh-copy-id gene@shop
/usr/bin/ssh-copy-id: ERROR: No identities found
I must be even more a-nany-mouse than most. :) All 3 live machines report the same error.
Do I need to somehow generate this "id" file with a different utility?
and unmount:
fusermount -u /where/i/want/it
Which does restore my owner:group to the unmounted directory.
Many thanks, Nik.
Cheers, Gene Heskett
On Thu, 24 Sep 2015 05:18:52 Gene Heskett wrote:
Silly question, though: does mc understand sshfs? Or do I need to find a new two-pane file manager that does?
On the MC command line, do:
cd fish://user_name@machine_name
This makes one of your panels a virtual file system on the other machine. There is also a fish KPart that you can use from Konqueror in file-browser mode.
NFS these days is a hairball of epic proportions. Try getting the NFS daemons to bind to a specific address per the man pages...
RG
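For the record, on Debian the usual knobs for that live in the /etc/default files: rpc.nfsd itself can be bound to an address with its -H option, and the floating helper daemons can be pinned to fixed ports so a firewall can be opened precisely. A sketch, with the port numbers being arbitrary examples:
# /etc/default/nfs-common
STATDOPTS="--port 32765 --outgoing-port 32766"
# /etc/default/nfs-kernel-server
RPCMOUNTDOPTS="--port 32767"
# then restart nfs-common and nfs-kernel-server, and confirm with:
rpcinfo -p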
On Thu, 24 Sep 2015 19:02:53 Richard Glock wrote:
oops, correction - in MC do: cd sh://user_name@machine_name
RG
On Thu, 24 Sep 2015 19:13:54 Richard Glock wrote:
You have to have an SSH server daemon running on the remote machine, of course. Also, the transfer rates will be slower than NFS, if that matters.
Apologies for the multiple postings.
RG
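A quick way to confirm that, assuming Debian's service name for OpenSSH:
# on the remote machine
service ssh status
# or probe it from the client; -v shows exactly where a connection attempt dies
ssh -v gene@shop true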
On Thursday 24 September 2015 05:43:54 Richard Glock wrote:
oops, correction - in MC do: cd sh://user_name@machine_name
Which does work, thank you Richard. Hmm, but lemme see if that works from lathe to shop. No: big red failure message, cannot cd, I/O error 5.
Cheers, Gene Heskett
On Thursday 24 September 2015 05:32:53 Richard Glock wrote:
On the MC command line, do cd fish://user_name@machine_name
Resource temporarily unavailable, fish might not be installed?
But a "cd /sshnet/shop", after using Niks's example sshfs command, a mount point I created, then chown'd to me, works just fine. Since I also have an ssh -Y session into each of those machines, if I need to muck around out of my home dir, sudo is always available.
To summarize, I added these lines to /etc/fstab:
shop.coyote.den:/ /sshnet/shop fuse.sshfs defaults,idmap=user 0 0
lathe.coyote.den:/ /sshnet/lathe fuse.sshfs defaults,idmap=user 0 0
GO704.coyote.den:/ /sshnet/GO704 fuse.sshfs defaults,idmap=user 0 0
Which I suspect can be nuked, but there it is. The mount points were created and chown'd to me.
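For what it's worth, a more hands-off fstab line for this kind of mount usually looks something like the one below. A sketch only: it assumes key-based ssh login (which gets set up later in this thread) so no password prompt is needed at mount time, and noauto,user lets the ordinary user mount it on demand instead of the boot scripts trying it as root:
gene@shop.coyote.den:/ /sshnet/shop fuse.sshfs noauto,user,idmap=user,reconnect,IdentityFile=/home/gene/.ssh/id_rsa 0 0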
Now I will see if the fstab entries are surplus by doing the same thing to each of the other 3 machines currently alive on this local network. Then I can hopefully reach across the net from any machine to any machine, which was my target in the first place.
According to my results of doing the mkdir yadda, followed by the sshfs login, it works just fine on GO704. I can look at /home/gene on this machine from the ssh -Y session into that machine. Two more machines to go... But first, clean up the mess in my fstab.
Oh, and sshmenu is broken, needs a ruby dependency the deb didn't list. I don't have a heck of a lot of ruby stuffs in use here. I'll nuke it.
Nik's example sshfs command line was then executed once for each of the mount points.
Hmm, on GO704 it all works, and here it all works, BUT the sshfs session converts the individual subdir used from gene:gene to root:root. If that's permanent it will be a problem.
So I go over to the ssh -Y session into the lathe and do the mkdir tapdance again. But while it will connect to both this machine and GO704, it will not connect to "shop", "connection reset by peer", so once again that shop machine is being a spoiled brat.
Doing that same tapdance on machine "shop" works as expected. Now why can't lathe access shop?
gene@lathe:~$ sshfs gene@shop:/ /sshnet/shop
read: Connection reset by peer
However, gcode written for the lathe (only a 2 axis machine) is not usable on shop or GO704 which are 4 axis milling machines, so that is not a showstopper loss. Besides, I can go to the other machine and do the file copies if I need it bad enough.
What does bother me, though, is that if the ownership of /sshnet/machinename being changed to root is permanent, it will mean I have to do the "sudo chown -R gene:gene /sshnet" dance on 4 machines whenever they have been rebooted. That is the only way I know to get around the target machines asking me for a non-existent root pw.
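If the ownership shown under the mount point is the only issue, sshfs can be told what owner and group to present for everything on the mount; a sketch, assuming gene's local uid and gid are 1000 (check with id):
sshfs -o idmap=user,uid=1000,gid=1000 gene@shop:/ /sshnet/shop
And the mount point directory itself showing root:root only while the filesystem is mounted is normal; as noted earlier in the thread, the original gene:gene ownership of the empty directory comes back as soon as it is unmounted.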
NFS these days is a hairball of epic proportions. Try getting the NFS daemons to bind to a specific address per the man pages...
And it has been so for at least 5 years; the level of neglect seems rampant. The man pages haven't been touched in 9 years.
But till now I have used it if I could because there wasn't a usable alternative. I long ago got tired of the constant perms fixing that CIFS needs, too many roots in the M$ world.
Many, many thanks to both Nik and Richard for supplying the clues and examples that made it work. And to Timothy for pointing out that it might be a problem with rpcbind. But yesterday's huge update included rpcbind, all machines have now been updated, and that did not fix NFS.
Cheers, Gene Heskett
On Thursday, 24 September 2015, Gene Heskett wrote:
Hi Gene!
Please check that the file /etc/ssh/sshd_config is identical for all 3 machines.
On rebooting: you could add the "sshfs ..." lines to /etc/rc.local:
su - gene -c "sshfs gene@shop:/ /sshnet/shop"
.. then you should have the user of the remote files set to "gene".
Also check that the subfolders of /sshnet/* are owned by "gene" when no filesystem is mounted there.
Nik
On Thursday 24 September 2015 12:59:45 Dr. Nikolaus Klepp wrote:
Please check that the file /etc/ssh/sshd_config is identical for all 3 machines.
It is not. GO704 and shop have this line in that file:
HostKey /etc/ssh/ssh_host_ecdsa_key
While lathe and coyote do not. And I have no clue whether that key is available on those 2 machines.
On rebooting: you could add the "sshfs ..." lines to /etc/rc.local:
su - gene -c "sshfs gene@shop:/ /sshnet/shop"
And this bypasses the request for my pw from each machine as it's called to duty?
Thank you Nik.
Cheers, Gene Heskett
On Friday, 25 September 2015, Gene Heskett wrote:
Hi Gene!
Ok, this is the workflow you need to go through for each machine - all steps, including new key generation etc. Please make sure that you do not use a passphrase, or you'll need to enter it each time you use sshfs:
# walk-the-shop, round 1: recreate SSH keys for all machines
rm ~/.ssh
ssh-keygen
# round 2: distribute the ssh public keys; you'll need this for all 3 machines, not just the other 2:
ssh-copy-id -i ~/.ssh/*.pub gene@this_one
ssh-copy-id -i ~/.ssh/*.pub gene@the_other
ssh-copy-id -i ~/.ssh/*.pub gene@the_next_one
# round 3: check if you can log into any of these machines without a password:
ssh gene@this_one
ssh gene@the_other
ssh gene@the_next_one
# round 4: reboot, try round 3 again for all machines.
# round 5: try mounting sshfs for all 3 machines:
sudo bash
su - gene -c "sshfs gene@shop:/ /sshnet/shop"
su - gene -c "sshfs gene@coyote:/ /sshnet/coyote"
su - gene -c "sshfs gene@lathe:/ /sshnet/lathe"
# round 6: add the lines above to /etc/rc.local, reboot and check if it worked
Now, does this procedure fail anywhere? If yes, what linux versions are running where?
Nik
On Friday 25 September 2015 05:11:29 Dr. Nikolaus Klepp wrote:
# walk-the-shop, round 1: recreate SSH keys for all machines
rm ~/.ssh
It won't let me; it complains that it's a directory, which it is, and it contains known_hosts and known_hosts.old.
ssh-keygen
# round 2: distribute the ssh public keys; you'll need this for all 3 machines, not just the other 2:
ssh-copy-id -i ~/.ssh/*.pub gene@this_one
ssh-copy-id -i ~/.ssh/*.pub gene@the_other
ssh-copy-id -i ~/.ssh/*.pub gene@the_next_one
Done, had to use my login pw for this.
# round 3: check if you can log into any of these machines without a password:
ssh gene@this_one
ssh gene@the_other
ssh gene@the_next_one
Works!
# round 6: add the lines above to /etc/rc.local, reboot and check if it worked
It should, I already added that stuff in my rc.local.
Now, does this procedure fail anywhere?
No. :)
Thank you very much Nik.
Cheers, Gene Heskett
On Thursday 24 September 2015 05:18:52 Gene Heskett wrote:
Well, as for the sample fstab entry, I have gotten as far as it asking for a root password, something that does not exist on any of my Debian installs. And of course my pw, used with sudo, is no good.
How do I go about telling it that I am the user doing the mounting and, if needed, the unmounting? "defaults,idmap=user" in the fstab apparently does nothing; as a test, misspelling "user" does get bounced.
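One likely cause: mount(8) only lets root mount an fstab entry unless that entry carries the user option, and "defaults" does not include it. A sketch of an entry an ordinary user can mount and unmount, with the same caveat as before that key-based ssh login has to be in place:
gene@shop.coyote.den:/ /sshnet/shop fuse.sshfs noauto,user,idmap=user 0 0
# then, as gene, no sudo needed:
mount /sshnet/shop
fusermount -u /sshnet/shop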
Thanks Nik.
Cheers, Gene Heskett