During a scrub the system reads all blocks in the storage pool and compares their contents against the checksums. The scrub runs in the background, but it produces a very high I/O load. Sun recommends scrubbing desktop-class disks once a week and server-class disks once a month.
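A scrub is started by hand with zpool; as a sketch, using the pool name date from the examples in this article:

```shell
# start checking all blocks of the pool "date" in the background
zpool scrub date
# report progress and any checksum errors found
zpool status date
```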
When a disk in a RAID array fails, the array changes its state from online to degraded; the Fault Manager daemon fmd also writes a corresponding entry (drive offline) to /var/adm/messages. Data can still be read and written, but now of course without redundant storage. In this case you should therefore provide a replacement soon:
Code:
zpool replace date c4t1d0 c4t2d0
replaces the failed disk c4t1d0 with c4t2d0. ZFS automatically mirrors the data of the remaining disk onto the new one, which can take anywhere from several minutes to hours depending on the amount of data. The data remains accessible the whole time; afterwards the RAID array is fully functional again.
Sharing
On the storage pool you can now create another file system specifically intended for sharing via SMB:
Code:
zfs create -o casesensitivity=mixed date/public
zfs set sharesmb=on date/public
With the option casesensitivity=mixed, OpenSolaris emulates the behavior of Windows when handling file names for access via SMB: names may contain uppercase and lowercase letters, but when accessing a file the system does not distinguish between them. Otherwise Solaris would behave just like any other Unix and treat foo, Foo and FOO as three different files, all of which may exist in the same directory.
The file-system property sharesmb ensures that the file system, mounted locally at /date/public, is shared via SMB - and remains so after the next reboot, without any entries in configuration files being needed: the SMB share is a property of the file system, just like the mount point.
The name of the share is also a property of the file system. A name other than the default date_public, such as Alle, is set with:
Code:
zfs set sharesmb=name=Alle date/public
The somewhat odd-looking syntax with the two equals signs is indeed correct. Additional shares can be set up following the same pattern, and the existing shares can be listed with a single command. Incidentally, the Shared Folders tool in the System Administration menu cannot be used for these shares: it only supports Samba, not Sun's SMB server.
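The listing command itself is missing in the source; on OpenSolaris this is presumably done with sharemgr (an assumption, not stated above):

```shell
# show all share groups and their shares, including properties
sharemgr show -vp
```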
If the OpenSolaris server is to appear in a workgroup other than the default WORKGROUP, that can be configured as well:
Code:
smbadm join -w Workgroup-Name
User Accounts
ZFS file systems can be used as points of administrative control: they let you view usage data, manage properties, make backups, take snapshots and so on. With ZFS it is therefore easy to create one file system per user for the home directories on the server. That ZFS does not tie quotas to individual users is intentional, because the file systems themselves are the points of administrative control.
Quotas on ZFS file systems can represent users, projects, groups and even entire portions of a file-system hierarchy, and such quotas can be combined with classic per-user quotas. Per-user quotas were added because several users can share the same file system. ZFS quotas are flexible and easy to apply; a quota can even be set when the file system is created.
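As a sketch of such a quota, assuming a hypothetical per-user file system date/home/test:

```shell
# set a 10 GB quota while creating the file system
zfs create -o quota=10G date/home/test
# change it later
zfs set quota=20G date/home/test
# inspect the current value
zfs get quota date/home/test
```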
Access to a share always requires the user name and password of a user created on OpenSolaris; anonymous shares or a guest account do not exist.
To allow OpenSolaris users access via SMB, you have to add to the other password lines in the file /etc/pam.conf the line:
Code:
other password required pam_smb_passwd.so.1 nowarn
If you would rather not deal with vi, open the file as the main user created during installation with the command:
Code:
pfexec gedit /etc/pam.conf
in a graphical editor. After that, every new user created via the graphical user management (in the System Administration menu) is allowed access via SMB. For existing accounts you have to change the password once with the command passwd. A new user smb with a password known to everyone suffices for access to the public share; for each user who is to get a directory of his own on the file server, a separate account must be created.
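On the command line, the user smb mentioned above could be created roughly like this (a sketch; the home-directory path is an assumption):

```shell
# create the account with a home directory under /export/home
pfexec useradd -d /export/home/smb -m smb
# set the password; with pam_smb_passwd active in /etc/pam.conf
# this also generates the SMB password hash
pfexec passwd smb
```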
Whoever accesses a share as a particular user then has exactly the rights that this user would have locally on the system. On the OpenSolaris machine you therefore use the commands chmod (changes the permissions) and chown (changes the owner) to grant the appropriate access rights. For example,
Code:
chmod a+rwx /date/public
gives all users read and write access to the ZFS file system date/public and thus also to the share (man chmod reveals the details of chmod).
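If a directory is meant to belong to one particular user instead, chown sets the owner; user name and path here are only examples:

```shell
# make the user "test" the owner of his home directory
pfexec chown test /export/home/test
```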
If, in addition to this share that is readable and writable by everyone, you want to set up user-specific shares, create a separate account for each user with the "Users and Groups" tool in the System Administration menu. For every new user a home directory named after the user is created under /export/home. /export/home is the mount point of a separate file system, rpool/export/home, as a listing of all file systems shows.
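The listing is produced with zfs list (the command is not spelled out in the source, but this is the standard way):

```shell
# show all ZFS file systems with their space usage and mount points
zfs list
```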
Transfer
It would seem natural to share the home directories en bloc:
Code:
zfs set sharesmb=on rpool/export/home
On our DIY NAS this is not very useful: the storage pool rpool, which contains the system data and the home directories in separate file systems, was created during installation on the 4-GB SSD - so there is not much room to store anything there.
But the contents of a ZFS file system can easily be relocated to another storage pool. For this you first need a snapshot of the current state of the file system to be transferred:
Code:
zfs snapshot -r rpool/export/home@transfer
With this snapshot you can now fill a different file system, which is created in the process:
Code:
zfs send rpool/export/home@transfer | zfs receive date/public/home
The zfs send command produces an image of the snapshot of the file system rpool/export/home and writes it to standard output (with an output redirection it could also be saved to a file). zfs receive takes this image and creates the new file system date/public/home from it.
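The output redirection mentioned above would look like this; the file name is an arbitrary example:

```shell
# save the snapshot stream to a file ...
zfs send rpool/export/home@transfer > /tmp/home.zfs
# ... and feed it into zfs receive later
zfs receive date/public/home < /tmp/home.zfs
```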
The same is necessary for the home directory of the main user created during installation, for which OpenSolaris 2008.11 sets up a file system of its own. The -r option used when taking the snapshot of rpool/export/home has already made sure that its snapshot exists too:
Code:
zfs send rpool/export/home/test@transfer | zfs receive date/public/home/test
To replace the old home directories with the new ones, you must first unmount the old ones; no applications that keep files open there should be running:
Code:
zfs umount -f rpool/export/home/test
zfs umount -f rpool/export/home
Now you give the two newly created file systems the correct mount points
Code:
zfs set mountpoint=/export/home date/public/home
zfs set mountpoint=/export/home/test date/public/home/test
and delete the old file systems with
Code:
zfs destroy -r rpool/export/home