
Thread: Building your own ZFS file server

  1. #1
    Join Date
    Nov 2005
    Posts
    344

    Building your own ZFS file server

ZFS, short for Zettabyte File System, is a file system developed by Sun for its Solaris OS. Its peculiarity is that it is a 128-bit file system, allowing it to store up to 2^48 files per file system, roughly 16 billion times what a 64-bit file system can offer. The maximum file size and the maximum size of a volume are also impressive: the limit is 16 exabytes (an exabyte being 10^18 bytes). According to one of the creators of ZFS, Jeff Bonwick, filling a 128-bit file system would exceed the quantum limits of earth-based data storage: you could not fill a 128-bit storage pool without boiling the oceans.
    These figures show that the limits of ZFS are impressive, but this file system has other interesting features. First, command-line administration is simplified. Forget mount, format and fsck: the only two commands to know in ZFS are zpool and zfs, together with their subcommands. With these commands you can create pools, create file systems and set their mount points, set quotas, and so on.

    Key Features :

    • Simple administration from the command line or a web console (forget format, newfs, mount, etc.).
    • Copy-on-write: ZFS does not overwrite data in place; new data is written to a new block and the pointers are then switched over in a final step. This method always guarantees the integrity of the data, and utilities such as fsck are not needed.
    • Snapshots: you can quickly take a picture of an entire file system. You can install a package on the system and, if it does not meet your expectations, perform a rollback to return to the previous state.
    • Compression: you can define a file system on which all information is stored compressed.
    • Mirroring and RAID-Z: you can easily set up mirroring between disks as well as RAID-Z.

    ZFS Server

    The biggest advantage of OpenSolaris for a file server is the ZFS file system, which enormously simplifies the management of mass storage. Pooling multiple disks into a RAID-5-like array, creating a file system on it and permanently mounting it into the directory tree takes only a single command on the command line: no fiddling with partitions, physical and logical volumes, fdisk, file-system and RAID tools, configuration files and scripts. Further file systems can be created in the storage pool and integrated into the system instantly; if a disk is added, the new space immediately becomes available to all file systems. Built-in functions for creating space-efficient snapshots and clones provide additional data security.

    Checksums guarantee data integrity all the way from the operating system to the physical storage; errors are detected and corrected automatically, whether a SATA cable is flaky or a disk occasionally flips a few bits. As a 128-bit file system, ZFS will run into practically no relevant limits in the foreseeable future: the maximum size for files and file systems is 16 exabytes (16 million terabytes). A storage pool, which aggregates the physical storage devices into one virtual disk, can manage up to 256 zettabytes; IDC estimated the storage capacity available worldwide in 2007 at 250 exabytes, roughly a thousandth of that.


    ZFS uses copy-on-write: changed data is not overwritten in place but stored in new, free blocks on the disk. On the one hand, this guarantees a consistent file system at all times, because the old data remains accessible until the new data has been written completely. On the other hand, it makes creating snapshots simple: if the old data blocks are kept instead of being released after the write operation, you can always return to an older version of a file; this is what makes resource-friendly snapshots possible.
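    A minimal sketch of what a snapshot and rollback look like on the command line, assuming a hypothetical pool named tank with a file system tank/home (both names are only examples):

    Code:
    # take a snapshot before making a risky change
    zfs snapshot tank/home@before-upgrade
    # list existing snapshots
    zfs list -t snapshot
    # if the change went wrong, return to the saved state
    zfs rollback tank/home@before-upgrade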

    ZFS in Snow Leopard

    Before we look at ZFS, let us first examine the features of HFS+, the file system currently used in Mac OS X Leopard. HFS+ is a 32-bit file system that has been used in Apple computers since 1998. It incorporates the features of HFS, from which it is directly derived. It was created to offset the main problem of HFS, the inefficient allocation of disk space, and to add some new features; it was also meant as an evolution of HFS better suited to large volumes.

    ZFS introduces the new concept of a storage pool, which completely eliminates the notion of volumes and with it the problems of partitions and wasted bandwidth. The principle is that every file system created in a common storage pool uses only the space it actually needs. ZFS also introduces a new RAID scheme, RAID-Z, which is essentially RAID 5 enhanced ZFS-style. RAID-Z uses copy-on-write where RAID 5 uses read-modify-write: rather than writing new data over the old, RAID-Z writes the new data to a new location and then updates the pointers to it. This makes RAID-Z more efficient than RAID 5.

    ZFS also handles backups and system recovery via snapshots; one can well imagine Apple using this feature by tying it directly to Time Machine. Finally, among the many features ZFS offers, data compression is also present, as it is with HFS+; ZFS compression can reduce the disk space used by a factor of 2 to 3. But make no mistake, this file system is not perfect and has its shortcomings. It does not provide data encryption, and journaling is also much heavier with ZFS than with HFS+. One of the big open questions concerns the use of ZFS as the root file system and the ability to boot from a ZFS file system. The handling does not seem to be within reach of novices, because it requires a certain amount of knowledge and sometimes the use of specific commands.

  2. #2
    Join Date
    Nov 2005
    Posts
    344

    Re: Building your own ZFS file server

    ZFS Commands :

    To use ZFS it is recommended to have at least 1 GB of memory (on any architecture), and more is safer, because ZFS makes extensive use of caching. The amd64 architecture is preferable because of its larger address space. Even on machines with enough RAM, some kernel settings may have to be adjusted to ensure stable operation, especially with only 1 GB of memory.

    Example: a 4 GB pool named ZFSSERVER covering the disks c1d0 and c1d1, each 2 GB in size; a combined example follows the list below.
    • Create a pool : zpool create -f data_pool c1d0
    • Destroy a pool : zpool destroy data_pool
    • Pool with RAID 1 (mirror) : zpool create -f data_pool mirror c1t1d0s3 c1t2d0s3
    • Pool with RAID-Z : zpool create -f data_pool raidz c1t1d0s3 c1t2d0s3 c1t3d0s3
    • Attach a disk to a pool : zpool attach -f rpool c1t0d0s0 c1t1d0s0
    • Detach a disk from a pool : zpool detach rpool c1t0d0s0
    • List pools : zpool list
    • Scrub a pool : zpool scrub rpool
    • Create ZFS file systems :
    • zfs create Babylonian/applications
    • zfs create Babylonian/data
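    Putting these commands together, here is a hedged end-to-end sketch; the disk names and the pool name data_pool are placeholders taken from the list above and will differ on your hardware:

    Code:
    # create a mirrored pool from two disks
    zpool create -f data_pool mirror c1d0 c1d1
    # create a file system, enable compression and set a quota
    zfs create data_pool/applications
    zfs set compression=on data_pool/applications
    zfs set quota=10G data_pool/applications
    # verify the result
    zpool status data_pool
    zfs list -r data_pool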

    How to Install :

    OpenSolaris contains two SMB/CIFS servers: Samba, well known from the Linux world, and Sun's own SMB server. The latter is better integrated into the system and into ZFS, and offers options for joining the Solaris server to an Active Directory, for mapping Windows users to Solaris users, and for regulating access to shares. All the finer points are described in the Solaris CIFS Administration Guide; here we only want to give a guide on how to turn an OpenSolaris machine into a simple file server for a small LAN.

    The Sun SMB server must first be installed on OpenSolaris. It is contained in the software packages SUNWsmbs and SUNWsmbskr, which can be found in the graphical package manager listed in the System Administration menu. To register the SMB server with the system you could simply reboot; to put it into service without a reboot, a few extra steps are necessary.
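    If you prefer the command line over the graphical package manager, the two packages can also be installed with the IPS pkg tool; a small sketch, assuming pkg is available as on OpenSolaris 2008.05/2008.11:

    Code:
    # install the userland and kernel parts of the Sun SMB server
    pfexec pkg install SUNWsmbs SUNWsmbskr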

    Network

    Now you have to switch to the command line, where

    Code:
    su -
    gives you root access. The command

    Code:
    svcadm enable -r smb/server
    starts the SMB server;

    Code:
    svcs smb/server
    then shows the status of the service as online.

    You will also need svcadm (service administration) in OpenSolaris 2008.05 if the OpenSolaris machine is to get a fixed network address (not essential for operating it as a file server, but convenient): the commands

    Code:
    svcadm disable network/physical:nwam 
    svcadm enable network/physical:default
    disable Network Auto-Magic, which by default obtains an IP address via DHCP, and allow the configuration of a fixed address instead. The easiest way to do this is the GUI tool for network configuration found under System Administration. In the current OpenSolaris 2008.11 the network tool handles this itself and switches Network Auto-Magic off when you set a fixed IP address.
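    If you would rather set the fixed address by hand than through the GUI tool, the classic Solaris approach is to describe the interface in a few files; the following is only a sketch under assumptions: the interface name e1000g0, the address 192.168.1.10 and the gateway 192.168.1.1 are made up and must be adapted to your network.

    Code:
    # address for the interface, picked up by network/physical:default
    echo "192.168.1.10" > /etc/hostname.e1000g0
    # netmask for the subnet
    echo "192.168.1.0 255.255.255.0" >> /etc/netmasks
    # default gateway
    echo "192.168.1.1" > /etc/defaultrouter
    # restart the network service to apply the settings
    svcadm restart network/physical:default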

    RAID - Next comes setting up the RAID-5-like array. The command

    Code:
    format
    shows the names of the disks in the system and can be quit with Ctrl-C. Then

    Code:
    zpool create date raidz1 c4t0d0 c4t1d0 c4t2d0 c4t3d0
    combines the four disks of our DIY file server (the respective first disks, d0, on target devices 0 to 3 of the fifth mass-storage controller in the system: c4 is the SATA controller, c0 to c3 are the IDE controllers) into a RAID-5-style storage pool (RAID-Z1 in ZFS terminology) named date, creates a file system of the same name on it and mounts it at /date.

    If you want something smaller and are happy with two disks mirroring each other (RAID 1), use this command instead:

    Code:
    zpool create date mirror c4t0d0 c4t1d0
    When creating a RAID array you can also specify hot spares, disks that the system automatically brings in when one of the disks in the RAID stops working; a sketch follows below. The man page for zpool reveals more.
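    As a hedged example, a RAID-Z1 pool with one hot spare could be created like this, reusing the device names from above plus an assumed fifth disk c4t4d0:

    Code:
    # four data disks in RAID-Z1 plus one hot spare
    zpool create date raidz1 c4t0d0 c4t1d0 c4t2d0 c4t3d0 spare c4t4d0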

    The command :
    Code:
    zpool list
    shows an overview of the storage pools together with their usage;
    Code:
    zpool iostat
    provides information about reads and writes, with the option -v broken down by the individual devices in the pool.
    Monitoring - Important for a file server, however, is the state of the disks:
    Code:
    zpool status
    reveals whether the disks in the array are healthy and whether checksum errors occurred when reading files. If the tool reports a disk error,
    Code:
    zpool status -v
    gives more detailed information. With
    Code:
    zpool scrub date

  3. #3
    Join Date
    Nov 2005
    Posts
    344

    Re: Building your own ZFS file server

    the system reads all blocks of the storage pool and compares their contents against the checksums. Scrubbing runs in the background, but it produces a very high I/O load. Sun recommends scrubbing desktop disks once a week and server disks once a month.

    If a disk fails, the RAID array changes its state from online to degraded, and the fault manager daemon fmd writes a corresponding entry (drive offline) to /var/adm/messages. Data can still be read and written, but of course now without redundant storage. You should therefore arrange a replacement soon:
    Code:
    zpool replace date c4t1d0 c4t2d0
    replaces the failed disk c4t1d0 with c4t2d0. ZFS then automatically mirrors the data from the remaining disks onto the new one, which can take anywhere from several minutes to hours depending on the amount of data. Access to the data remains possible the whole time; afterwards the RAID array is fully functional again.
    Sharing
    On the storage pool you can now create another file system specifically for sharing via SMB:
    Code:
    zfs create -o casesensitivity=mixed date/public 
    zfs set sharesmb=on date/public
    With the option casesensitivity=mixed, OpenSolaris emulates, for access via SMB, the Windows behaviour when handling file names: names may contain upper- and lowercase letters, but when accessing a file the system does not distinguish between them. Otherwise Solaris would behave just like any other Unix and treat foo, Foo and FOO as three different files, all of which may sit in the same directory.

    The file-system property sharesmb ensures that the file system, mounted locally at /date/public, is shared via SMB, and that it remains shared after the next reboot without any entries in configuration files: the SMB share is a property of the file system, just like the mount point.
    The name of the share is also a property of the file system. A name other than the default date_public, for example Alle, is set with :
    Code:
    zfs set sharesmb=name=Alle date/public
    The somewhat odd-looking syntax with the two equals signs is indeed what Sun intends.
    Additional shares can be set up following the same pattern; the command :
    Code:
    sharemgr show -vp
    shows the existing shares. By the way, the Shared Folders tool in System Administration cannot be used for these shares: it only supports Samba, not the Sun SMB server.
    If the OpenSolaris server should appear in a workgroup other than the default WORKGROUP, that can be configured as well:
    Code:
    smbadm join -w Workgroup-Name


    User Account

    ZFS file systems can be used as points of administrative control. This lets you view usage, manage properties, make backups, take snapshots and so on. With the ZFS model you can easily define one file system per user for the server's home directories. Tying quotas to file systems rather than to individual users is intentional, because the file systems are the points of administrative control.
    You can set quotas on ZFS file systems representing users, projects, groups or entire portions of a file-system hierarchy, and you can combine such quotas with classic per-user quotas. Per-user quotas exist because several users may share the same file system. ZFS quotas are flexible and easy to apply; you can even set a quota when creating the file system, as sketched below.
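    A minimal sketch of such per-user file systems with quotas, assuming the pool date from this guide and made-up user names alice and bob:

    Code:
    # one file system per user, quota applied at creation time
    # (assumes the parent file system date/home already exists)
    zfs create -o quota=20G date/home/alice
    # or set (and later change) the quota on an existing file system
    zfs create date/home/bob
    zfs set quota=50G date/home/bob
    # show quota and current usage
    zfs get quota,used date/home/alice date/home/bob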

    Access to a share always requires the user name and password of a user created on OpenSolaris; anonymous shares or a guest account do not exist.

    To allow OpenSolaris users access via SMB, you have to add the following line to the "other password" section of the file /etc/pam.conf:
    Code:
    other password required pam_smb_passwd.so.1 nowarn
    If you do not want to deal with vi, open the file as the main user created during installation with the command :
    Code:
    pfexec gedit /etc/pam.conf
    in a GUI editor. After that, every new user created via the graphical user management (in the System Administration menu) is allowed to access shares via SMB. For existing accounts you have to change the password once with the passwd command. A single user, for example smb, with a password known to everyone can be used to access the public share; for each user who is to get a directory of his own on the file server, create a separate account.
    Whoever accesses a share as a particular user then has the same rights that user would have when working locally on the system. On the OpenSolaris machine you therefore use the commands chmod (change permissions) and chown (change owner) to set the appropriate access rights. For example,
    Code:
    chmod a+rwx /date/public
    gives all users read and write access to the ZFS file system date/public and thus to this share (man chmod reveals the details); a matching chown example for per-user directories follows below.
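    For per-user directories, the corresponding chown call might look like this; the user name alice is just an example:

    Code:
    # make the user the owner of her directory so she can write to it via SMB
    chown alice /export/home/alice
    # optionally keep other users out entirely
    chmod 700 /export/home/alice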
    If, in addition to this share that all users can read and write, you want to set up individual shares, create a separate account for each user with the "Users and Groups" tool in the System Administration menu. For every new user a home directory named after the user is created under /export/home. /export/home is the mount point of a separate file system, rpool/export/home; the command
    Code:
    zfs list
    displays all file systems.

    Transfer
    It would be easy to share the home directories en bloc:
    Code:
    zfs set sharesmb=on rpool/export/home
    On our DIY NAS that is not very useful: the storage pool rpool, which holds the system data and the home directories in separate file systems, was created during installation on the 4 GB SSD, so there is not much room to store anything there.
    But the contents of a ZFS file system can easily be relocated to another storage pool. For this you first need a snapshot of the current state of the file system to be transferred:
    Code:
    zfs snapshot -r rpool/export/home@transfer
    With it you can now fill a different, newly created file system:
    Code:
    zfs send rpool/export/home@transfer | zfs receive date/home
    The zfs send command produces an image of the snapshotted file system rpool/export/home and writes it to standard output (from where it could also be saved to a file with an output redirection, as sketched below); zfs receive takes that image and creates the new file system date/home from it.
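    The output redirection mentioned above could look like this; the file name is arbitrary and only meant as a sketch:

    Code:
    # save the snapshot stream in a file, for example as a backup
    zfs send rpool/export/home@transfer > /tmp/home-transfer.zfs
    # and restore it later into a (new) file system
    zfs receive date/home < /tmp/home-transfer.zfs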
    The same is needed for the home directory of the main user created during installation, which in OpenSolaris 2008.11 gets a file system of its own. The -r option used when snapshotting rpool/export/home has already made sure that a snapshot of it exists as well:
    Code:
    zfs send rpool/export/home/test@transfer | zfs receive date/home/test
    To replace the old home directories with the new ones, you must first unmount the old ones; no applications that keep files open there should be running:
    Code:
    zfs umount -f rpool/export/home/test 
    zfs umount -f rpool/export/home
    Now give the two newly created file systems the correct mount points:
    Code:
    zfs set mountpoint=/export/home date/home
    zfs set mountpoint=/export/home/test date/home/test
    and delete the old home file systems with
    Code:
    zfs destroy -r rpool/export/home

  4. #4
    Join Date
    Nov 2005
    Posts
    344

    Re: Building your own ZFS file server

    Server sharing
    The CIFS/SMB server in OpenSolaris supports a useful feature called autohome: if you create a file named smbautohome in /etc with the entry
    Code:
    * /export/home/&
    then each user gets SMB access to his home directory at login, until he logs off again. The home directories on the server are thus only shared when they are really needed. The downside: they do not show up in the network neighbourhood.

    A more convenient solution is therefore to use
    Code:
    zfs set sharesmb=name=Home date/home
    to simply share the entire file system with all the home directories. By default, the access rights under OpenSolaris are set so that every user can read the contents of all home directories but cannot write to them. If you want to hide the contents of the home directories from the other users,
    Code:
    chmod go-r /export/home/*
    removes the read permission for all users except the owner of the respective directory.
    If you want to prevent individual users from using up the entire space of the storage pool, you can set quotas:
    Code:
    zfs set quota=600GB date/home
    zfs set quota=600GB date/public
    ensure that neither of the two file systems residing on the RAID array can grow beyond 600 GB.

    Conclusion :

    ZFS is a young file system; it only came into use on Solaris in 2005. HFS has been used in Apple computers since 1986, and UFS (Unix File System) has existed for over 20 years. This shows that Apple is looking to the future in deciding to integrate ZFS into the server edition of its OS.

    The future indeed, because the limits of ZFS are simply unreachable with the technologies in use today. The future also because of the new concept of storage pools, which system administrators will certainly have to get used to. But ZFS also provides solutions for the use and maintenance of network infrastructure.

    Currently the role of ZFS in Leopard is still to be defined, and HFS+ remains the reference file system today. According to AppleInsider we now have more precise information on this: Apple has in fact distributed to some developers a "ZFS on Mac OS X Preview 1.1" together with documentation. This documentation states that Leopard will officially support ZFS, but only read-only (as suggested in the press). With the 1.1 preview that has been released, however, it already seems possible to read and write disks in this format.

    It would seem, then, that Apple will introduce more comprehensive ZFS support, but only once it is more mature in OS X; in future releases it could even replace HFS+. This step is not easy, because the new file system requires quite different software, and operating-system history teaches us that this kind of migration is very complicated.

    Well, I would say that ZFS is neither perfect nor about to shake the hegemony of ext3 and ext4 on Linux, or of the increasingly advanced Btrfs (pronounced "Butter FS"). As with any technology, it has pros and cons, and work left to do. Recall that ZFS has been partly usable on Linux through FUSE, which lets file systems run outside the kernel and inevitably costs performance. Lawrence Livermore has since produced a direct implementation tailored to its own needs, so for ZFS to run normally on any Linux an additional layer still has to be developed.

    On the other hand, ZFS was run through FUSE precisely to avoid infringing the Linux GPL. In this sense, using ZFS directly in Linux remains a problem, because it is not clear that the two can be distributed together; it is only possible if both are compiled by the end user.
