On Friday 25 September 2015 05:11:29 Dr. Nikolaus Klepp wrote:
> On Friday, 25 September 2015, Gene Heskett wrote:
> > On Thursday 24 September 2015 12:59:45 Dr. Nikolaus Klepp wrote:
> > > On Thursday, 24 September 2015, Gene Heskett wrote:
> > > > On Thursday 24 September 2015 05:32:53 Richard Glock wrote:
> > > > > On Thu, 24 Sep 2015 05:18:52 Gene Heskett wrote:
> > > > > > On Thursday 24 September 2015 03:03:10 Dr. Nikolaus Klepp wrote:
> > > > > >
> > > > > > Subject says it all. I need to find the experts.
> > > > > >
> > > > > > Cheers, Gene Heskett
> > > > >
> > > > > I use my local Linux User Group, full service.
> > > >
> > > > My local linux user group. Chuckle. I am 1 of a group of
> > > > 3. Not too many linux users in these here parts. I am
> > > > quite likely 100 miles from the nearest "user group"
> > > > that numbers 10 or more.
> > > >
> > > > > I use nfs on my local network, it just works so I am
> > > > > far from an expert. I export my "/home/<user>" dir and
> > > > > manually mount, cli, on the clients.
> > > > >
> > > > > Debian stable.
> > > >
> > > > Debian Wheezy. With TDE.
> > > >
> > > > Cheers, Gene Heskett
> > >
> > > Hi Gene!
> > >
> > > I dropped NFS on linux ages ago, due to issues similar to
> > > the ones you describe. Now I use SSHFS and haven't had any
> > > issues since. So, what about using SSHFS instead of NFS?
> > >
> > > Nik
> >
> > Never heard of it till now. So I installed it, along with
> > sshmenu which pulled in a dozen other rubyish packages.
> >
> > Silly Q though, does mc understand sshfs? Or do I need to
> > find a new 2 pane file manager that does understand it?
> >
> > One thing's for sure: NFS, even V4, is old enough to have
> > bit rot.
> >
> > Thanks Nik. Off to read some man pages.
> >
> > Cheers, Gene Heskett
>
> On the MC command line, do cd fish://user_name@machine_name
Resource temporarily unavailable; fish might not be installed?
But a "cd /sshnet/shop", after running Nik's example sshfs
command against a mount point I had created and then chown'd to
me, works just fine. Since I also have an ssh -Y session into
each of those machines, sudo is always available if I need to
muck around outside my home dir.
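For anyone following along, the tapdance per machine was roughly
this (a sketch using my host names; the chown owner is whatever
user will be doing the mounting):

sudo mkdir -p /sshnet/shop
sudo chown gene:gene /sshnet/shop
sshfs gene@shop.coyote.den:/ /sshnet/shop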
To summarize, I added these lines to /etc/fstab:
shop.coyote.den:/   /sshnet/shop   fuse.sshfs  defaults,idmap=user  0  0
lathe.coyote.den:/  /sshnet/lathe  fuse.sshfs  defaults,idmap=user  0  0
GO704.coyote.den:/  /sshnet/GO704  fuse.sshfs  defaults,idmap=user  0  0
Which I suspect can be nuked, but there it is. The mount points
were created and chown'd to me.
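If the entries stay, any one of them can be tested without a
reboot, since mount(8) will look a bare mount point up in fstab:

sudo mount /sshnet/shop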
Now I will see if the fstab entries are surplus by doing the
same thing to each of the other 3 machines currently alive on
this local network. Then I can hopefully reach across the net
from any machine to any machine, which was my target in the
first place.
Judging by the results of doing the mkdir yadda followed by the
sshfs login, it works just fine on GO704. I can look at
/home/gene on this machine from the ssh -Y session into that
machine. Two more machines to go... but first, clean up the
mess in my fstab.
Oh, and sshmenu is broken; it needs a ruby dependency the deb
didn't list. I don't have a heck of a lot of ruby stuff in use
here. I'll nuke it.
Nik's example sshfs command line was then executed once for each
of the mount points.
Humm, on GO704 it all works, and here it all works, BUT the sshfs
session converts the individual subdir used from gene:gene to
root:root. If that's permanent it will be a problem.
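If it does turn out to be permanent, sshfs has knobs for the
ownership it reports; idmap=user is already in the fstab lines
above, and the uid/gid can be pinned outright. A guess at a
workaround, not something I have tried yet:

sshfs -o idmap=user,uid=$(id -u),gid=$(id -g) gene@shop:/ /sshnet/shop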
So I go over to the ssh -Y session into the lathe and do the
mkdir tapdance again. But while it will connect to both this
machine and GO704, it will not connect to "shop": "connection
reset by peer". So once again that shop machine is being a
spoiled brat.
Doing that same tapdance on machine "shop" works as expected.
Now why can't lathe access shop?
gene@lathe:~$ sshfs gene@shop:/ /sshnet/shop
read: Connection reset by peer
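Next step in the diagnosis, I suppose: run the underlying ssh by
hand with -v to see which side drops the connection, and sshfs
has a debug option of its own:

ssh -v gene@shop
sshfs -o debug gene@shop:/ /sshnet/shop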
However, gcode written for the lathe (only a 2 axis machine) is
not usable on shop or GO704, which are 4 axis milling machines,
so that is not a showstopper loss. Besides, I can go to the
other machine and do the file copies if I need them badly enough.
What does bother me, though, is that if the ownership
of /sshnet/machinename being changed to root is permanent, it
will mean doing the "sudo chown -R gene:gene /sshnet" dance on 4
machines whenever they have been rebooted. That is the only way
I know to get around the target machines asking me for a
non-existent root pw.
> NFS these days is a hairball of epic proportions. Try getting
> the NFS daemons to bind to a specific address per the man
> pages...
And it has been so for at least 5 years; the level of neglect
seems rampant. The manpages haven't been touched in 9 years.
But till now I have used it when I could, because there wasn't a
usable alternative. I long ago got tired of the constant perms
fixing that CIFS needs; too many roots in the M$ world.
> RG
Many, many thanks to both Nik and Richard for supplying the clues
and examples that made it work. And to Timothy for pointing out
that it might be a problem with rpcbind. But yesterday's huge
update included rpcbind, with which all machines have now been
updated, and that did not fix nfs.
Cheers, Gene Heskett
Hi Gene!
Please check that the file /etc/ssh/sshd_config is identical on
all 3 machines.
It is not. GO704 and shop have this line in that file:
HostKey /etc/ssh/ssh_host_ecdsa_key
while lathe and coyote do not. And I have no clue whether that
key is available on those 2 machines.
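If it is missing, and if the ssh-keygen here knows the -A switch,
any absent host keys can be created in one go on the machine in
question; failing that, the one key type can be generated
explicitly:

sudo ssh-keygen -A
sudo ssh-keygen -t ecdsa -f /etc/ssh/ssh_host_ecdsa_key -N ''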
On rebooting: you could add the "sshfs ..." lines to /etc/rc.local:
su - gene -c "sshfs gene@shop:/ /sshnet/shop"
And this bypasses the request for my pw from each machine as it's
called to duty?
... then you should have the user of the remote files set to
"gene". Also check that the subfolders of /sshnet/* are owned by
"gene" when no filesystem is mounted there.
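Putting the three mounts together, /etc/rc.local would end up
looking something like this (a sketch, using the host names from
your fstab):

#!/bin/sh -e
su - gene -c "sshfs gene@shop.coyote.den:/ /sshnet/shop"
su - gene -c "sshfs gene@lathe.coyote.den:/ /sshnet/lathe"
su - gene -c "sshfs gene@GO704.coyote.den:/ /sshnet/GO704"
exit 0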
Nik
Thank you Nik.
Cheers, Gene Heskett
Hi Gene!
Ok, this is the workflow you need to go through for each machine,
all steps, including new key generation etc. Please make sure
that you do not use any passphrase, or you'll need to enter it
each time you use sshfs:
# walk-the-shop, round 1: recreate SSH keys for all machines
rm ~/.ssh
It won't let me; it complains that it's a directory, which it is,
and contains known_hosts and known_hosts.old. (rm -r ~/.ssh
would do it, but removing just the old key pair files is enough.)
ssh-keygen
# round 2: distribute ssh public keys; you'll need this for all 3 machines, not just the other 2:
ssh-copy-id -i ~/.ssh/*.pub gene@this_one
ssh-copy-id -i ~/.ssh/*.pub gene@the_other
ssh-copy-id -i ~/.ssh/*.pub gene@the_next_one
Done, had to use my login pw for this.
# round 3: check if you can log into any of these machines without a password:
ssh gene@this_one
ssh gene@the_other
ssh gene@the_next_one
Works!
# round 4: reboot, try round 3 again for all machines.
# round 5: try mounting sshfs for all 3 machines:
sudo bash
su - gene -c "sshfs gene@shop:/ /sshnet/shop"
su - gene -c "sshfs gene@coyote:/ /sshnet/coyote"
su - gene -c "sshfs gene@lathe:/ /sshnet/lathe"
# round 6: add the lines above to /etc/rc.local, reboot and check if
it worked
It should, I already added that stuff in my rc.local.
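A quick check after the reboot, assuming the mount points from
round 5:

mount | grep sshfs
ls /sshnet/shop /sshnet/coyote /sshnet/lathe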
Now, does this procedure fail anywhere?
No. :)
If yes, what linux versions are running where?
Nik
Thank you very much Nik.
Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page <http://geneslinuxbox.net:6309/gene>