OpenSolaris 2009.06 Automated Installer
Introduction:
OpenSolaris 2009.06 is the first release that is also available for the SPARC platform. Currently there is no installation CD for SPARC, so one has to rely on the Automated Installer. For both the SPARC and the x86 version there are AI ISO images (x86.iso and sparc.iso). For the x86 version there are, as usual, additional live CD images. The Automated Installer is a new installation mechanism that has nothing to do with the traditional Solaris installation. Nor is a JumpStart (and therefore no JET) installation possible. Nevertheless, the configuration is simple and easy to keep track of.
The installation server currently must run on an OpenSolaris (any platform) machine; a client must be WAN boot capable. In my case I will probably have to update some machines with fresh firmware/PROM, since not all servers support WAN boot. If you have Solaris running, you can find out with the following command whether the OpenPROM supports WAN boot:
Code:
# eeprom | grep network-boot-arguments
network-boot-arguments: data not available
#
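The check above boils down to: if grep finds the `network-boot-arguments` variable at all, the PROM knows about WAN boot. A minimal sketch of that decision as a shell helper (the function name is my own invention, not from any Solaris tool; on a real machine you would feed it the captured output of `eeprom | grep network-boot-arguments`):

```shell
#!/bin/sh
# Hypothetical helper (not a Solaris command): decide from the captured
# grep output whether the OpenPROM knows the network-boot-arguments
# variable, i.e. whether WAN boot is supported at all.
wanboot_supported() {
    case "$1" in
        network-boot-arguments*) echo yes ;;
        *)                       echo no ;;
    esac
}

# On a real machine:  out=`eeprom | grep network-boot-arguments`
# Here the two possible outcomes are fed in by hand:
wanboot_supported "network-boot-arguments: data not available"   # prints: yes
wanboot_supported ""                                             # prints: no
```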
If the grep shown above produces no output, the PROM must be updated (or, in the worst case, WAN boot is simply not supported). On the OpenSolaris installation server, the command
Code:
# pfexec pkg install SUNWinstalladm-tools
installs the Automated Installer (along with the accompanying Apache and other tools). The "ai" ISO image should then be placed somewhere on disk. It may be useful to create a separate ZFS filesystem for it:
Code:
# zfs create mypool/install/images
In addition, a filesystem for the configuration files (per architecture or per client) is needed:
Code:
# zfs create mypool/install/config
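Taken together, the layout could be prepared with something like the following sketch. It only echoes the zfs commands so they can be reviewed before running them; the pool name `mypool` is taken from the examples in this post, and `-p` is used so parent datasets are created along the way:

```shell
#!/bin/sh
# Sketch only: print the zfs commands for the image/config layout
# instead of running them, so they can be reviewed first.
POOL=mypool   # pool name assumed from the examples in the post

for fs in install/images install/config; do
    echo zfs create -p "$POOL/$fs"
done
```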
To perform the default installation (which uses the root user with a default password and installs the default package set) and to get a feel for how the installation proceeds, the following commands are all that is needed:
Code:
# installadm create-service -n 0906_x86 -s \
    /usr_pool/install/images/osol-0906-ai-x86.iso \
    /usr_pool/install/config/osol-0906-ai-x86
Or
Code:
# installadm create-service -n 0906_sparc -s /mypool/install/images/osol-0906-ai-sparc.iso /mypool/install/config/osol-0906-ai-sparc
The former is for x86, the latter for SPARC. This command sets up all the necessary files and configs in the config directory and prints the Sun DHCP command that has to be run on the DHCP server (in my case the DHCP server is a separate machine). However, the macro it prints does not specify a DNS server; you should add that yourself. Here is an example of the output:
Code:
# installadm create-service -n ... -s ..............................
You will see far more output than this. The command for the DHCP server would then read:
Code:
/usr/sbin/dhtadm -g -A -m dhcp_macro_0906_sparc -d :BootSrvA=131.xxx:BootFile=\"http://131.xxx:5555/cgi-bin/wanboot-cgi\":DNSserv=131.xxx:
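Since that macro line is easy to mistype, it can help to assemble it from variables, which also makes it harder to forget the DNS entry. A sketch, where all addresses are placeholders (`131.xxx` stands in for the real install and DNS server addresses, just as in the output above):

```shell
#!/bin/sh
# Sketch: assemble the Sun DHCP macro definition from its parts so the
# DNSserv entry is not forgotten. All addresses are placeholders.
BOOTSRV="131.xxx"                                   # install server IP (placeholder)
WANBOOT="http://131.xxx:5555/cgi-bin/wanboot-cgi"   # wanboot-cgi URL (placeholder)
DNS="131.xxx"                                       # DNS server IP (placeholder)

DEF=":BootSrvA=$BOOTSRV:BootFile=\"$WANBOOT\":DNSserv=$DNS:"
echo "/usr/sbin/dhtadm -g -A -m dhcp_macro_0906_sparc -d '$DEF'"
```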
As I said, include a DNS server entry yourself. On a SPARC machine, all that remains to be done at the OpenPROM to boot from the network is:
Code:
(0) ok boot net:dhcp
Rebooting with command: boot net:dhcp
Boot device: ...................
It should perhaps be noted that the default installation does not put the ZFS rpool on the whole disk, but on slice 0 of the first disk. That slice must therefore be large enough, or simply cover the entire disk. In addition, a SPARC machine probably does not like an EFI-labeled boot disk (although I have not tried that; in my first test installation the disk label was SMI, the default).

After the installation (about an hour, since a large part of the packages is downloaded over the network) and the reboot, one should not forget to change the default root password ("opensolaris") and to get rid of the default user "jack" (password: jack). Although root cannot log in directly (as long as it is registered as a role user in /etc/user_attr), you should still change the password. Best to log in right after the reboot as jack/jack, change the root password, create a new user (do not forget to give this new user the corresponding admin rights in /etc/user_attr!), and then delete the user jack.
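The post-reboot cleanup just described, as a sequence of commands. This is only a sketch: it echoes the commands rather than running them, the user name `admin` is a placeholder of my own, and I assume the "Primary Administrator" profile (the one jack carries in /etc/user_attr) is what you want to hand to the new user:

```shell
#!/bin/sh
# Sketch of the post-reboot cleanup. Echo only -- drop the `echo`
# prefixes to actually run the commands (as root, after su or pfexec).
NEWUSER=admin   # placeholder name for the new administrative user

echo "passwd root"                                  # change the default root password
echo "useradd -m -d /export/home/$NEWUSER $NEWUSER" # create the replacement user
echo "passwd $NEWUSER"
# hand the new user the admin rights jack had (the Primary Administrator
# profile, which ends up in /etc/user_attr)
echo "usermod -P 'Primary Administrator' $NEWUSER"
echo "userdel -r jack"                              # finally remove the default user
```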
By default the installation puts the rpool on a single disk. If, as is common for servers, there are two disks in the system, the pool should be mirrored:
Code:
# zpool attach rpool c6t0d0s0 c6t1d0s0
(Here c6t0d0 is the disk holding the rpool, and c6t1d0 is the second mirror half.) A "zpool add" would not do the job, because the pool would then have no redundancy; the second disk would merely be added as a concatenation. After attaching the second mirror half, the boot block still has to be written to the new disk:
Code:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c6t1d0s0
The resilvering now takes a while, but in the meantime the system is already fully operational.
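The progress of the resilver can be watched with `zpool status rpool`. A small sketch that pulls the progress line out of such output; the helper function and the sample text are my own for illustration, only the `zpool status` pipeline at the top reflects real usage:

```shell
#!/bin/sh
# Sketch: extract the resilver progress line from `zpool status` output.
# On the real system you would run:  zpool status rpool | resilver_progress
resilver_progress() {
    grep 'resilver'
}

# Made-up sample output, for illustration only:
SAMPLE=" scrub: resilver in progress for 0h12m, 34.56% done, 0h23m to go"
echo "$SAMPLE" | resilver_progress
```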