After what appears to be an equipment failure, I am having trouble getting a new server connected to a client. After trying whatever I can think of, the client still reports:
[root@bobg bobg]# mount 192.168.2.128 /mnt/testb
mount: /mnt/testb: special device 192.168.2.128 does not exist.
While ssh shows:
[bobg@localhost-live ~]$ ll /etc/exports
-rw-r--r--. 1 root root 41 Aug 20 12:30 /etc/exports
Any suggestions/thoughts?
Bob
On 8/21/19 6:36 PM, Bob Goodwin wrote:
After what appears to be an equipment failure, I am having trouble getting a new server connected to a client. After trying whatever I can think of, the client still reports:
[root@bobg bobg]# mount 192.168.2.128 /mnt/testb
mount: /mnt/testb: special device 192.168.2.128 does not exist.
While ssh shows:
[bobg@localhost-live ~]$ ll /etc/exports
-rw-r--r--. 1 root root 41 Aug 20 12:30 /etc/exports
Any suggestions/thoughts?
Aren't you missing what on 192.168.2.128 you wish to mount?
Like....
[root@meimei ~]# mount ds6 /syntegra
mount: /syntegra: special device ds6 does not exist.
[root@meimei ~]# mount ds6:/volume1/syntegra /syntegra
[root@meimei ~]#
On Wed, Aug 21, 2019 at 12:47 PM Bob Goodwin bobgoodwin@fastmail.us wrote:
After what appears to be an equipment failure, I am having trouble getting a new server connected to a client. After trying whatever I can think of, the client still reports:
[root@bobg bobg]# mount 192.168.2.128 /mnt/testb
mount: /mnt/testb: special device 192.168.2.128 does not exist.
You need to run "mount 192.168.2.128:/path/2/exported/fs /mnt/testb"
On 8/21/19 7:19 AM, Tom H wrote:
[root@bobg bobg]# mount 192.168.2.128 /mnt/testb
mount: /mnt/testb: special device 192.168.2.128 does not exist.
You need to run "mount 192.168.2.128:/path/2/exported/fs /mnt/testb"
. If I understand correctly, I think the "path" is complete?
-rw-r--r--. 1 root root 41 Aug 20 12:30 /etc/exports
[bobg@localhost-live ~]$ cat /etc/exports
/home 192.168.2.0/24(rw,no_root_squash)
but now I am wondering where that "/home" is coming from? I didn't notice it before ...
/mnt/testb is on the client from which the mount command is being issued, 192.168.2.153 not the server 192.168.2.128.
The more I look at this, the more confused I get; I'm simply doing what I would normally do to mount the server ...
However, now when I look at it this way I see what my mount command should be seeing.
[bobg@localhost-live ~]$ cat /home /etc/exports
cat: /home: Is a directory
/home 192.168.2.0/24(rw,no_root_squash)
I am still confused; is my mistake on the server or the client?
On 8/23/19 8:13 AM, Bob Goodwin wrote:
[bobg@localhost-live ~]$ cat /etc/exports
/home 192.168.2.0/24(rw,no_root_squash)
but now I am wondering where that "/home" is coming from? I didn't notice it before ...
It has to be there. That tells the NFS server what it should be sharing and to whom. See "man exports". I am wondering why there is "live" in the hostname. Is that a live boot? Or do you have it backwards, with the exports file on the client you're trying to mount on?
Assuming you are trying to mount an external /home and the server is 192.168.2.128, the command on the client would be:
mount 192.168.2.128:/home /home
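If that works interactively and you want the mount to come back after a reboot, a minimal /etc/fstab sketch on the client (just a sketch, using the server and paths from this thread; adjust to taste) would be:

192.168.2.128:/home  /home  nfs4  defaults,_netdev  0 0

and then "mount -a" to test it before rebooting.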
On 8/23/19 1:31 PM, Samuel Sieb wrote:
On 8/23/19 8:13 AM, Bob Goodwin wrote:
[bobg@localhost-live ~]$ cat /etc/exports
/home 192.168.2.0/24(rw,no_root_squash)
but now I am wondering where that "/home" is coming from? I didn't notice it before ...
It has to be there. That tells the NFS server what it should be sharing and to whom. See "man exports". I am wondering why there is "live" in the hostname. Is that a live boot? Or do you have it backwards, with the exports file on the client you're trying to mount on?
. The "live" is the result of a Fedora 30 L.live install which replaces a failed server. I should have some new drives when UPS gets here Monday and then I will start rebuilding from scratch. This has been a disaster, I rely heavily on that server and the latest back-up is no good, fortunately I still have some older stuff saved ... The disk quit spinning up to normal speed, sounded like cranking an engine with low battery. In desperation the last thing I tried was chilling it, it sounded better but still would not make full speed so it appears my data is lost.
Assuming you are trying to mount an external /home and the server is 192.168.2.128, the command on the client would be:
mount 192.168.2.128:/home /home
. Adding to my misery, while experimenting with some variations on the above suggestion it looks like I wiped out the home directory on my main computer; I am writing this on another one. Your message disappeared and I had nothing to respond to. /home appears to have been replaced with the "/etc/exports" from the server, not what I wanted to do! This computer, also Fedora 30, works and the temporary NFS server is still there, so I will continue with this.
On 23Aug2019 17:53, Bob Goodwin bobgoodwin@fastmail.us wrote:
On 8/23/19 1:31 PM, Samuel Sieb wrote:
Assuming you are trying to mount an external /home and the server is 192.168.2.128, the command on the client would be:
mount 192.168.2.128:/home /home
Also, you can check from the client what a server is prepared to export to you:
showmount -e 192.168.2.128
Adding to my misery, while experimenting with some variations on the above suggestion it looks like I wiped out the home directory on my main computer; I am writing this on another one. Your message disappeared and I had nothing to respond to. /home appears to have been replaced with the "/etc/exports" from the server, not what I wanted to do! This computer, also Fedora 30, works and the temporary NFS server is still there, so I will continue with this.
That is ... odd. The "mount" command should show you what is attached where on the client.
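For example, to see only the NFS mounts on the client (a sketch; both are standard util-linux commands):

mount -t nfs4
findmnt -t nfs4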
BTW, what were you using for backups? An issue for another thread and topic though.
Cheers, Cameron Simpson cs@cskk.id.au
On 8/23/19 7:07 PM, Cameron Simpson wrote:
Also, you can check from the client what a server is prepared to export to you:
showmount -e 192.168.2.128
. [root@box6 bobg]# mount 192.168.2.128:/home /mnt/testb
[root@box6 bobg]# showmount -e 192.168.2.128
Export list for 192.168.2.128:
/home 192.168.2.0/24
then:
[bobg@box6 ~]$ ls -al /mnt/testb/bobg/
shows nothing that leads to /etc/exports, so my mount command is not right.
That is ... odd. The "mount" command should show you what is attached where on the client.
. mount shows:
192.168.2.128:/home on /mnt/testb type nfs4 (rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.2.6,local_lock=none,addr=192.168.2.128)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
and this is what I see in /etc/exports:
[bobg@NFS ~]$ cat /etc/exports
/home 192.168.2.0/24(rw,no_root_squash)
Where should the stored files be? It looks like I have it set up wrong.
I'm still chipping away at this. I should have new hard drives tomorrow, and then I will repeat the setup process from scratch once more, but I would like to see this work first.
On 25Aug2019 12:28, Bob Goodwin bobgoodwin@fastmail.us wrote:
On 8/23/19 7:07 PM, Cameron Simpson wrote:
Also, you can check from the client what a server is prepared to export to you: showmount -e 192.168.2.128
. [root@box6 bobg]# mount 192.168.2.128:/home /mnt/testb
[root@box6 bobg]# showmount -e 192.168.2.128
Export list for 192.168.2.128:
/home 192.168.2.0/24
Looks normal. .128 exports /home.
[bobg@box6 ~]$ ls -al /mnt/testb/bobg/
shows nothing that leads to /etc/exports, so my mount command is not right.
I don't understand what you expect here. /mnt/testb/bobg should show the contents of /home/bobg from the NFS server. Does it?
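A quick sanity check (a sketch) is to compare the two ends directly:

on the client:  ls -al /mnt/testb
on the server:  ls -al /home

If the mount is working, both listings should show the same user directories (bobg, etc.).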
That is ... odd. The "mount" command should show you what is attached where on the client.
. mount shows:
192.168.2.128:/home on /mnt/testb type nfs4 (rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.2.6,local_lock=none,addr=192.168.2.128)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
Looks normal, matches what was in your "mount" command above.
and this is what I see in /etc/exports:
[bobg@NFS ~]$ cat /etc/exports
/home 192.168.2.0/24(rw,no_root_squash)
Which looks normal: this exports /home to your subnet.
Where should the stored files be? It looks like I have it set up wrong.
Can you explain this sentence in more detail? Nothing above looks weird from my point of view.
Cheers, Cameron Simpson cs@cskk.id.au
On 8/26/19 6:57 AM, Cameron Simpson wrote:
mount shows: 192.168.2.128:/home on /mnt/testb type nfs4 (rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.2.6,local_lock=none,addr=192.168.2.128)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
Looks normal, matches what was in your "mount" command above.
I would say it looks sorta normal yet a bit weird.
On straightforward NFS mounts I've not seen any gvfs references or user_id or group_id parameters unless there was an fstab entry.
[root@f30g ~]# grep nfs /etc/fstab
[root@f30g ~]#
No fstab nfs entries.
[root@f30g ~]# showmount -e ds6
Export list for ds6:
/volume1/syntegra *.greshko.com,192.168.1.0/24,2001:470:66:cce::2,2001:B030:112F:0000::/56
/volume1/video    *.greshko.com,2001:470:66:cce::2,2001:B030:112F:0000::/56
/volume1/misty    *.greshko.com,2001:470:66:cce::2,2001:B030:112F:0000::/56
/volume1/music    *.greshko.com,2001:B030:112F:0000::/56
[root@f30g ~]#
[root@f30g ~]# mount ds6:/volume1/video /mnt
[root@f30g ~]#
[root@f30g ~]# mount | grep ds6
ds6:/volume1/video on /mnt type nfs4 (rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp6,timeo=600,retrans=2,sec=sys,clientaddr=2001:b030:112f::50,local_lock=none,addr=2001:b030:112f::1bd6)
So, I would ask the OP if he has an fstab entry for that mount point.
On 26Aug2019 10:06, Ed Greshko ed.greshko@greshko.com wrote:
On 8/26/19 6:57 AM, Cameron Simpson wrote:
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
Looks normal, matches what was in your "mount" command above.
I would say it looks sorta normal yet a bit weird.
On straightforward NFS mounts I've not seen any gvfs references or user_id or group_id parameters unless there was an fstab entry.
I miscut. I was referring only to the nfs4 line (preceding this one). I have no opinion about the gvfs stuff, presuming it is some virtual tree presented by some app.
Cheers, Cameron Simpson cs@cskk.id.au
On 8/25/19 7:06 PM, Ed Greshko wrote:
On 8/26/19 6:57 AM, Cameron Simpson wrote:
mount shows: 192.168.2.128:/home on /mnt/testb type nfs4 (rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.2.6,local_lock=none,addr=192.168.2.128)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
Looks normal, matches what was in your "mount" command above.
I would say it looks sorta normal yet a bit weird.
On straightforward NFS mounts I've not seen any gvfs references or user_id or group_id parameters unless there was an fstab entry.
The gvfs line is completely separate from the nfs one and irrelevant to this issue. Gnome sets up the gvfs mount at login.
On 8/26/19 11:36 AM, Samuel Sieb wrote:
On 8/25/19 7:06 PM, Ed Greshko wrote:
On 8/26/19 6:57 AM, Cameron Simpson wrote:
mount shows: 192.168.2.128:/home on /mnt/testb type nfs4 (rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.2.6,local_lock=none,addr=192.168.2.128)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
Looks normal, matches what was in your "mount" command above.
I would say it looks sorta normal yet a bit weird.
On straightforward NFS mounts I've not seen any gvfs references or user_id or group_id parameters unless there was an fstab entry.
The gvfs line is completely separate from the nfs one and irrelevant to this issue. Gnome sets up the gvfs mount at login.
Ah, Cameron already explained that he mis-cut and I mistook that mis-cut to imply there was a single entry in the output of the "mount" command. :-) :-)
On 8/26/19 12:02 AM, Ed Greshko wrote:
On 8/26/19 11:36 AM, Samuel Sieb wrote:
On 8/25/19 7:06 PM, Ed Greshko wrote:
On 8/26/19 6:57 AM, Cameron Simpson wrote:
mount shows: 192.168.2.128:/home on /mnt/testb type nfs4 (rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.2.6,local_lock=none,addr=192.168.2.128)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
Looks normal, matches what was in your "mount" command above.
I would say it looks sorta normal yet a bit weird.
On straightforward NFS mounts I've not seen any gvfs references or user_id or group_id parameters unless there was an fstab entry.
The gvfs line is completely separate from the nfs one and irrelevant to this issue. Gnome sets up the gvfs mount at login.
Ah, Cameron already explained that he mis-cut and I mistook that mis-cut to imply there was a single entry in the output of the "mount" command. :-) :-)
. I edited to remove the "/home ". I believe that was a mistype when configuring /etc/exports ...
presently it is:
[root@NFS bobg]# cat /etc/exports
192.168.2.0/24(rw,soft,intr,fg,comment=systemd.automount)
Now this morning I am seeing a new error:
[root@box83 bobg]# showmount -e 192.168.2.128
clnt_create: RPC: Program not registered
What does that mean and how do I fix it?
On 2019-08-26 11:34, Bob Goodwin wrote:
[ ... ]
I edited to remove the "/home ". I believe that was a mistype when configuring /etc/exports ...
presently it is:
[root@NFS bobg]# cat /etc/exports
192.168.2.0/24(rw,soft,intr,fg,comment=systemd.automount)
No, that's wrong now. Please see man 5 exports. It explains the mandatory format of the exports definition and comes with examples.
Your definition lacks what to export. The "/home ..." was correct, at least syntactically. Although you may want to export /home/bob from the NFS server?
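For example, a syntactically valid line (the one you had originally) looks like:

/home 192.168.2.0/24(rw,no_root_squash)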
Now this morning I am seeing a new error:
[root@box83 bobg]# showmount -e 192.168.2.128
clnt_create: RPC: Program not registered
What does that mean and how do I fix it?
First fix the exports file. Then make sure the NFS server runs the required RPC services: portmapper, nfs and mountd. You can check that by
rpcinfo -p <NFS server>
Alexander
On 8/26/19 5:49 AM, Alexander Dalloz wrote:
What does that mean and how do I fix it?
First fix the exports file. Then make sure the NFS server runs the required RPC services: portmapper, nfs and mountd. You can check that by
rpcinfo -p <NFS server>
Alexander
. Before fixing exports -
[root@NFS bobg]# rpcinfo -p 192.168.2.128
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  49201  status
    100024    1   tcp  50431  status
[root@NFS bobg]# rpcinfo -p 192.168.2.128
On 8/26/19 5:49 PM, Alexander Dalloz wrote:
On 2019-08-26 11:34, Bob Goodwin wrote:
[ ... ]
I edited to remove the "/home ". I believe that was a mistype when configuring /etc/exports ...
presently it is:
[root@NFS bobg]# cat /etc/exports
192.168.2.0/24(rw,soft,intr,fg,comment=systemd.automount)
No, that's wrong now. Please see man 5 exports. It explains the mandatory format of the exports definition and comes with examples.
Your definition lacks what to export. The "/home ..." was correct, at least syntactically. Although you may want to export /home/bob from the NFS server?
Also, he needs to look at the exports man page a bit more closely.
There are no "soft", "intr", or "fg" options in the exports. These are options on the nfs *client* side.
It is very possible the nfs server is choking on both no definition of what to export as well as invalid parameters.
On 8/26/19 6:11 PM, Bob Goodwin wrote:
On 8/26/19 5:49 AM, Alexander Dalloz wrote:
What does that mean and how do I fix it?
First fix the exports file. Then make sure the NFS server runs the required RPC services: portmapper, nfs and mountd. You can check that by
rpcinfo -p <NFS server>
Alexander
. Before fixing exports -
[root@NFS bobg]# rpcinfo -p 192.168.2.128
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  49201  status
    100024    1   tcp  50431  status
[root@NFS bobg]# rpcinfo -p 192.168.2.128
Your server isn't running properly. There is no mountd.
A good output would be something like....
[egreshko@meimei ~]$ rpcinfo -p f30k
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100024    1   udp  41692  status
    100024    1   tcp  34507  status
    100005    2   udp  20048  mountd
    100005    2   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100021    1   udp  48749  nlockmgr
    100021    3   udp  48749  nlockmgr
    100021    4   udp  48749  nlockmgr
    100021    1   tcp  41697  nlockmgr
    100021    3   tcp  41697  nlockmgr
    100021    4   tcp  41697  nlockmgr
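If mountd is missing like that, a sketch of getting the Fedora NFS services going (assuming the stock nfs-utils unit names) would be:

systemctl enable --now nfs-server
rpcinfo -p localhost | grep -E 'mountd|nfs'

The second command should then show mountd and nfs lines like the output above.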
On 8/26/19 5:34 PM, Bob Goodwin wrote:
[root@NFS bobg]# cat /etc/exports
192.168.2.0/24(rw,soft,intr,fg,comment=systemd.automount)
Here is a sample of a good/working exports file on a test nfs server here. I'm testing IPv6 only at the moment but the format is identical when using IPv4 addresses.
[root@f30-k etc]# cat exports
/home/egreshko 2001:B030:112F:0000::/56(rw,async,no_wdelay,no_root_squash,insecure_locks,sec=sys)
I am exporting "/home/egreshko"
to the IPv6 network "2001:B030:112F:0000::/56"
with options "rw,async,no_wdelay,no_root_squash,insecure_locks,sec=sys"
On 8/26/19 6:47 PM, Ed Greshko wrote:
[root@f30-k etc]# cat exports
/home/egreshko 2001:B030:112F:0000::/56(rw,async,no_wdelay,no_root_squash,insecure_locks,sec=sys)
FYI, this output of the cat should have shown on one line. I think my email client is wrapping too soon.
On 8/26/19 6:30 AM, Ed Greshko wrote:
Your server isn't running properly. There is no mountd.
. [root@box83 bobg]# showmount -e 192.168.2.128
clnt_create: RPC: Program not registered
[root@NFS bobg]# systemctl start nfs-mountd
And it still shows Dead:
[root@NFS bobg]# systemctl status nfs-mountd
● nfs-mountd.service - NFS Mount Daemon
   Loaded: loaded (/usr/lib/systemd/system/nfs-mountd.service; static; vendor preset: disabled)
   Active: inactive (dead)
Shouldn't this show as "active"?
On 2019-08-26 16:01, Bob Goodwin wrote:
On 8/26/19 6:30 AM, Ed Greshko wrote:
Your server isn't running properly. There is no mountd.
. [root@box83 bobg]# showmount -e 192.168.2.128
clnt_create: RPC: Program not registered
[root@NFS bobg]# systemctl start nfs-mountd
And it still shows Dead:
[root@NFS bobg]# systemctl status nfs-mountd
● nfs-mountd.service - NFS Mount Daemon
   Loaded: loaded (/usr/lib/systemd/system/nfs-mountd.service; static; vendor preset: disabled)
   Active: inactive (dead)
Shouldn't this show as "active"?
Yes.
[root@storage ~]# systemctl status -l nfs-mountd.service
● nfs-mountd.service - NFS Mount Daemon
   Loaded: loaded (/usr/lib/systemd/system/nfs-mountd.service; static; vendor preset: disabled)
   Active: active (running) since Do 2019-08-22 17:33:20 CEST; 3 days ago
 Main PID: 11426 (rpc.mountd)
   CGroup: /system.slice/nfs-mountd.service
           └─11426 /usr/sbin/rpc.mountd
Aug 22 17:33:20 storage.ocp.lab systemd[1]: Starting NFS Mount Daemon...
Aug 22 17:33:20 storage.ocp.lab rpc.mountd[11426]: Version 1.3.0 starting
Aug 22 17:33:20 storage.ocp.lab systemd[1]: Started NFS Mount Daemon.
The long status request might show you what's wrong. Else look into the journal or syslog logfile.
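For example (a sketch, assuming the standard Fedora unit names):

journalctl -u nfs-server -u nfs-mountd --since today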
Alexander
On 8/26/19 10:01 PM, Bob Goodwin wrote:
On 8/26/19 6:30 AM, Ed Greshko wrote:
Your server isn't running properly. There is no mountd.
. [root@box83 bobg]# showmount -e 192.168.2.128
clnt_create: RPC: Program not registered
[root@NFS bobg]# systemctl start nfs-mountd
And it still shows Dead:
[root@NFS bobg]# systemctl status nfs-mountd
● nfs-mountd.service - NFS Mount Daemon
   Loaded: loaded (/usr/lib/systemd/system/nfs-mountd.service; static; vendor preset: disabled)
   Active: inactive (dead)
Shouldn't this show as "active"?
Yes, it should show "active (running)". It is actually started by virtue of nfs-server starting.
Again....
What is your /etc/exports file now on the server? If it is not in the proper format you can't expect anything to work properly.
On 8/26/19 10:25 AM, Ed Greshko wrote:
Again....
What is your /etc/exports file now on the server? If it is not in the proper format you can't expect anything to work properly.
. [root@NFS bobg]# cat /etc/exports
# 192.168.2.0/24(rw,no_root_squash)
192.168.2.0/24(rw,async,no_wdelay,no_root_squash,insecure_locks,sec=sys)
On 8/26/19 10:43 PM, Bob Goodwin wrote:
On 8/26/19 10:25 AM, Ed Greshko wrote:
Again....
What is your /etc/exports file now on the server? If it is not in the proper format you can't expect anything to work properly.
. [root@NFS bobg]# cat /etc/exports
# 192.168.2.0/24(rw,no_root_squash)
192.168.2.0/24(rw,async,no_wdelay,no_root_squash,insecure_locks,sec=sys)
That is *wrong*.
You need to specify the directory path to be exported!!
/home 192.168.2.0/24(rw,async,no_wdelay,no_root_squash,insecure_locks,sec=sys)
Would be an example of a valid entry.
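As a sketch, once /etc/exports has a line like that, reload it on the server and confirm from the client:

on the server:  exportfs -ra
on the client:  showmount -e 192.168.2.128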
On 8/26/19 10:46 AM, Ed Greshko wrote:
You need to specify the directory path to be exported!!
/home 192.168.2.0/24(rw,async,no_wdelay,no_root_squash,insecure_locks,sec=sys)
Would be an example of a valid entry.
. Well, this is what I see. I dunno which one I need to use; /etc/exports looked like it to me?
[root@NFS bobg]# locate exports
/boot/test1/nfs4exports
/etc/exports
/etc/exports.d
/usr/share/augeas/lenses/dist/exports.aug
/usr/share/man/man5/exports.5.gz
/usr/share/vim/vim81/syntax/exports.vim
On 8/26/19 11:18 PM, Bob Goodwin wrote:
On 8/26/19 10:46 AM, Ed Greshko wrote:
You need to specify the directory path to be exported!!
/home 192.168.2.0/24(rw,async,no_wdelay,no_root_squash,insecure_locks,sec=sys)
Would be an example of a valid entry.
. Well, this is what I see. I dunno which one I need to use; /etc/exports looked like it to me?
[root@NFS bobg]# locate exports
/boot/test1/nfs4exports
/etc/exports
/etc/exports.d
/usr/share/augeas/lenses/dist/exports.aug
/usr/share/man/man5/exports.5.gz
/usr/share/vim/vim81/syntax/exports.vim
?????
What have you been editing up until now?
/etc/exports is the file you should be concerned with.
On 8/26/19 11:21 AM, Ed Greshko wrote:
What have you been editing up until now?
/etc/exports is the file you should be concerned with.
. Yes, /etc/exports is what I have been using. You suggested it should be subordinate to something, /home or /home/bobg perhaps. I can't find that.
I have been using the Fedora document https://fedoraproject.org/wiki/Administration_Guide_Draft/NFS in trying to set this up. It does not seem to be what I have used in the past, but it is what I found searching with Google. Next I will replace the NFS hard drive and start over with a new one; I would like to get it right without a ton of troubleshooting.
Perhaps someone can suggest a better set of instructions to follow?
On 8/26/19 11:53 PM, Bob Goodwin wrote:
I have been using the Fedora document https://fedoraproject.org/wiki/Administration_Guide_Draft/NFS in trying to set this up. It does not seem to be what I have used in the past, but it is what I found searching with Google. Next I will replace the NFS hard drive and start over with a new one; I would like to get it right without a ton of troubleshooting.
Perhaps someone can suggest a better set of instructions to follow?
You can get TMI by reading https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/htm...
But the most important thing that you're missing is the format of the /etc/exports file. That document describes it with an example (not quite as good as I would like). But the most important part is....
8.7.1. The /etc/exports Configuration File
The /etc/exports file controls which file systems are exported to remote hosts and specifies options. It follows the following syntax rules:
- Blank lines are ignored.
- To add a comment, start a line with the hash mark (#).
- You can wrap long lines with a backslash (\).
- Each exported file system should be on its own individual line.
- Any lists of authorized hosts placed after an exported file system must be separated by space characters.
- Options for each of the hosts must be placed in parentheses directly after the host identifier, without any spaces separating the host and the first parenthesis.
Each entry for an exported file system has the following structure:

export host(options)

The aforementioned structure uses the following variables:

export
    The directory being exported
host
    The host or network to which the export is being shared
options
    The options to be used for host
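Putting those rules together, a small illustrative exports file (a sketch; the hostnames are made up) might look like:

# comment lines start with a hash mark
/srv/share  client1.example.com(rw,sync) \
            client2.example.com(ro)
/home       192.168.2.0/24(rw,no_root_squash)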
On Tue, 27 Aug 2019 05:58:59 +0800 Ed Greshko wrote:
host The host or network to which the export is being shared
I'd point out that the host names need to be able to be found by a gethostbyname() call at the time the NFS server starts. It has been my experience that the slightest error in even one export line causes the entire set of exports to fail. Helpful people at work "cleaning up" no longer used names from the DNS database often make NFS stop working the next time some machine reboots because a host it was exporting to is gone.
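Run on the NFS server, a quick sketch of checking that a host named in exports is still resolvable (client1.example.com is a made-up name):

getent hosts client1.example.com

If that prints nothing, the entry for that host can break the exports the next time the server starts.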
On 26Aug2019 11:28, Bob Goodwin bobgoodwin@fastmail.us wrote:
On 8/26/19 11:21 AM, Ed Greshko wrote:
What have you been editing up until now?
/etc/exports is the file you should be concerned with.
. Yes, /etc/exports is what I have been using. You suggested it should be subordinate to something, /home or /home/bobg perhaps. I can't find that.
No, he said that you're missing the leftmost field of that file: _what_ to export. He suggested /home or /home/bobg as reasonable things you may have wanted to export.
All you have in your file is a netmask and options. Every noncomment line must start with a directory to be exported.
Please read "man 5 exports".
Cheers, Cameron Simpson cs@cskk.id.au
On 8/27/19 6:15 AM, Tom Horsley wrote:
On Tue, 27 Aug 2019 05:58:59 +0800 Ed Greshko wrote:
host The host or network to which the export is being shared
I'd point out that the host names need to be able to be found by a gethostbyname() call at the time the NFS server starts. It has been my experience that the slightest error in even one export line causes the entire set of exports to fail. Helpful people at work "cleaning up" no longer used names from the DNS database often make NFS stop working the next time some machine reboots because a host it was exporting to is gone.
Yes, it would have been "nice" if the quoted document mentioned that. And, it would have been "nice" if the document also gave a few more examples. :-)
On Mon, 2019-08-26 at 11:28 -0400, Bob Goodwin wrote:
Yes, /etc/exports is what I have been using. You suggested it should be subordinate to something, /home or /home/bobg perhaps. I can't find that.
Your exports file is missing the filepath(s) you want to export. You list the directories that you want to share out. And try simplifying the options you have listed for the exports, until you know which ones you really should add.
This is from one of my machines:
$ cat /etc/exports
/home    *.example.com(rw,sync)
/var/www *.example.com(rw,sync)
e.g. [what] [where to] [how]
On that machine, it shares out its /home directory and the /var/www directory. It shares them out to other computers on the example.com network (a fictional example); you can also use numerical IP addresses. It allows read & write access. Sync is an option about how writes are done; it's supposedly the default, but you never know when a program's default differs from Fedora's defaults.
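And on a client, the matching mount command for one of those exports would be something like this (a sketch; server.example.com is a made-up hostname, and the mount point must exist first):

mkdir -p /mnt/www
mount server.example.com:/var/www /mnt/www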