7.1.2 Client node still boots the old kernel after installation
If a client node does not boot the new kernel after the HP SFS client software has been installed on the node,
it may be because the new kernel has not been defined as the default kernel for booting.
To correct this problem, edit the appropriate bootloader configuration file so that the new kernel is selected
as the default for booting and then reboot the client node.
Alternatively, if your boot loader is GRUB, you can use the /sbin/grubby --set-default command.
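To illustrate how GRUB's default entry selects a kernel, the following sketch examines a made-up grub.conf fragment (the kernel titles and sample file path are hypothetical, not taken from an actual HP SFS installation). On a real client node you would edit the actual GRUB configuration file, or use /sbin/grubby --set-default, rather than a sample file:

```shell
# Hypothetical grub.conf fragment. In GRUB 0.9x, "default=N" selects
# the Nth "title" entry, counted from zero.
cat > /tmp/grub.conf.sample <<'EOF'
default=1
title Red Hat Enterprise Linux AS (2.6.9-42.ELsmp)
title Red Hat Enterprise Linux AS (2.6.9-42.EL_lustre)
EOF

# Read the default index, then print the title it points at, to
# confirm which kernel will boot:
idx=$(awk -F= '/^default=/ { print $2 }' /tmp/grub.conf.sample)
awk -v n="$idx" '/^title/ { if (c++ == n+0) print }' /tmp/grub.conf.sample
```

Here `default=1` selects the second title (the Lustre-patched kernel in this hypothetical example); if the stock kernel were booting instead, the index would point at the wrong title and would need to be changed.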
7.2 File system mounting issues
This section deals with issues that may arise when client nodes mount Lustre file systems. It is
organized as follows:
• Client node fails to mount or unmount a Lustre file system (Section 7.2.1)
• The sfsmount command reports device or resource busy (Section 7.2.2)
• Determine whether Lustre is mounted on a client node (Section 7.2.3)
• The SFS service is unable to mount a file system (SELinux is not supported) (Section 7.2.4)
• Troubleshooting stalled mount operations (Section 7.2.5)
7.2.1 Client node fails to mount or unmount a Lustre file system
If the sfsmount(8) or sfsumount(8) command hangs or returns an error, check the
/var/log/messages file on the client node, or the relevant console log, for error messages. In
addition, consider the following possible causes for failing to mount a file system:
• The Lustre modules are not configured on the client node.
If you built your own client kernel, you must run the depmod command after rebooting the client
node into the correct kernel, so that the Lustre modules are correctly registered with the
operating system (see Section 3.3.2).
• The client node is not configured correctly.
Make sure that the client node is configured as described in Chapter 2 or Chapter 3. To check if the
client configuration is correct, enter the following command, where server is the name of the
HP SFS server that the client node needs to access to mount the file system:
# sfsconfig -s server
If the client configuration is not correct, enter the following command to update the configuration files:
# sfsconfig -s server conf
• The client is experiencing difficulties in communicating with the HP SFS services.
Use the information provided in Section 4.10 to determine whether the client node is experiencing
difficulty in communicating with the HP SFS services. Note that it may take up to 100 seconds for
some of the messages described in that section to be recorded in the logs.
If the client node is experiencing difficulty in communicating with the HP SFS services, ensure that all
the MDS and OST services that make up the file system in question are actually available. Check that
the servers required by the services are booted and running, and determine whether a failover
operation is taking place; that is, whether a server has failed and its services are being failed over to
the peer server. Refer to Chapter 4 of the HP StorageWorks Scalable File Share System User Guide
for details of how to view file system information.
While any of the file system services are in the recovering state, new client nodes cannot mount
the file system and existing client nodes cannot unmount it; the services must complete the
recovery process before mount and unmount operations can proceed.
When the failover operation completes, the client nodes normally recover access to the file system.
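A quick way to determine whether a Lustre file system is currently mounted on a client node is to look for entries of type lustre in /proc/mounts. The sketch below runs against a made-up sample file so that it is self-contained; the device string and mount point are hypothetical, and on a live client node you would read /proc/mounts itself:

```shell
# Hypothetical /proc/mounts contents; field 3 is the file system type.
cat > /tmp/proc-mounts.sample <<'EOF'
/dev/sda2 / ext3 rw 0 0
south1:/data /mnt/lustre lustre rw 0 0
EOF

# Print the mount point of every file system whose type is "lustre":
awk '$3 == "lustre" { print $2 }' /tmp/proc-mounts.sample
```

If the command prints nothing, no Lustre file system is mounted; this check can help distinguish a failed mount from a mount that succeeded but is hanging on I/O.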