Solaris 10 notes
http://breden.org.uk/2008/03/08/home-fileserver-zfs-setup/
http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
Sun Fire 280R (genecliff)
wipe drives with System Rescue CD
rescuecd
boot: rescuecd console=ttyS0
Service Management Facility
service control list all
svcs -a
ZFS
Add drive to zfs pool on Indiana: (by Luke)
I opened up a root-prompt in the GUI while the installer was running and did a:
zpool attach -f zpl_slim c0d0s0 c7d1
It was slightly funky. The -f was necessary because I still had a header from an old pool on the disk. And the c0d0s0 and c7d1 needed to be in different formats: the first one came from "zpool status" and the second came from the disk inventory in "format".
create zpool raidz2 using XRAID JBODs
zpool create datapool1 raidz2 c4t60003930000214EEd0 c4t60003930000214EEd1 c4t60003930000214EEd2
zpool status
zpool status -v [poolname] zpool list zfs list zpool iostat zpool iostat -v
test zpool creation (Dry run)
zpool create -n tank mirror c1t0d0 c1t1d0
Change the default mount point during creation
'zpool create' defaults to the root directory (/) for mount points. To specify another mount point:
zpool create -m /export/zfs home c1t0d0
deleting (destroy) the zpool
zpool destroy tank
Add devices to the **mirrored** pool
zpool add tank c1t1d0
Attach/detach devices to the pool -- not including raidz
To attach a device to an existing vdev in the pool (name the existing device first, then the new one):
zpool attach tank c1t0d0 c1t2d0
take device offline
zpool offline tank c1t2d0
query specific items
zpool list -o name,size
suppress column headings
zpool list -Ho name,size
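The -H form is handy for scripting, since the header-less, tab-separated output feeds cleanly into awk. A minimal sketch, using simulated output (pool names and sizes are made up) in place of a live "zpool list -Ho name,size":

```shell
# Simulated 'zpool list -Ho name,size' output; real -H output is
# tab-separated with no header line. These values are made up.
list_output=$(printf 'tank\t278G\ndatapool1\t2.73T\n')

# Pull out one pool's size by name:
size=$(printf '%s\n' "$list_output" | awk -F'\t' '$1 == "datapool1" {print $2}')
echo "$size"    # 2.73T
```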
Pool wide statistics
zpool iostat
Pool virtual device statistics
zpool iostat -v
zpool iostat -v 2         (every 2 seconds)
zpool iostat -v 2 3       (every 2 seconds, three times)
zpool iostat -v tank 2    (just for tank)
pool health
zpool status -x
zpool status -v tank
migrating data
export it first
zpool export tank
remove the disks from the old system, install them on the new system, then:
zpool import           (identifies importable pools)
zpool import tank      (or whatever pool you want to import)
filesystems
creation
zfs create pool-name/[filesystem-name/]filesystem-name
All the intermediate file system names must already exist in the pool. The last name in the path identifies the name of the file system to be created.
In the following example, a mount point of /export/zfs is specified and is created for the tank/home file system.
zfs create -o mountpoint=/export/zfs tank/home
destruction
zfs destroy tank/home/tabriz
renaming
This example renames the kustarz file system to kustarz_old.
zfs rename tank/home/kustarz tank/home/kustarz_old
properties
zfs list
zfs list pool/home/marks
zfs list -r pool/home/marks                                 (recursive under marks)
zfs list -r -o name,sharenfs,mountpoint pool/home/marks     (recursive, specific properties)
zfs list -r -Ho name,sharenfs,mountpoint pool/home/marks    (recursive, no header)
zfs set <property=value> tank/home
zfs inherit
You can use the zfs inherit command to clear a property setting, thus causing the setting to be inherited from the parent.
zfs set compression=on tank/home/bonwick
zfs get -r compression tank    (-r for recursion)
NAME               PROPERTY     VALUE   SOURCE
tank               compression  off     default
tank/home          compression  off     default
tank/home/bonwick  compression  on      local
zfs inherit compression tank/home/bonwick
zfs get -r compression tank
NAME               PROPERTY     VALUE   SOURCE
tank               compression  off     default
tank/home          compression  off     default
tank/home/bonwick  compression  off     default
zfs get checksum tank/ws
zfs get all tank
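Since the SOURCE column tells you whether a value is local or inherited, you can mechanically find the datasets that would need a 'zfs inherit' to fall back to the parent. A sketch against canned "zfs get -r" output (this just parses text, it does not touch a pool):

```shell
# Canned 'zfs get -r compression tank' output
# (columns: NAME PROPERTY VALUE SOURCE).
get_out='tank              compression  off  default
tank/home         compression  off  default
tank/home/bonwick compression  on   local'

# List datasets whose compression is set locally rather than inherited:
local_sets=$(printf '%s\n' "$get_out" | awk '$4 == "local" {print $1}')
echo "$local_sets"    # tank/home/bonwick
```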
replacing problem
Hmmm, this looks like a bug to me. The single-argument form of 'zpool replace' should do the trick. What has happened is that there is enough information on the disk to identify it as belonging to 'tank', yet not enough good data for it to be opened. Incidentally, could you send me the contents of /var/fm/fmd/errlog and /var/fm/fmd/fltlog, as well as /var/adm/messages? I'm always trying to collect details of this failure mode.

The 'zpool replace' code should probably allow you to replace a disk with itself provided the original isn't still online. As a workaround, you should be able to dd(1) over the first and last megabyte of the disk. This will prevent zpool(1M) from recognizing it as the same disk in the pool, and should allow you to replace it.

- Eric

On Fri, May 05, 2006 at 03:28:34PM -0700, Richard Broberg wrote:
> I have a raidz pool which looks like this after a disk failure:
>
> # zpool status
>   pool: tank
>  state: DEGRADED
> status: One or more devices could not be used because the label is missing or
>         invalid. Sufficient replicas exist for the pool to continue
>         functioning in a degraded state.
> action: Replace the device using 'zpool replace'.
>    see: http://www.sun.com/msg/ZFS-8000-4J
>  scrub: resilver completed with 0 errors on Fri May 5 18:14:29 2006
> config:
>
>         NAME         STATE     READ WRITE CKSUM
>         tank         DEGRADED     0     0     0
>           raidz      DEGRADED     0     0     0
>             c1t0d0   ONLINE       0     0     0
>             c1t1d0   ONLINE       0     0     0
>             c1t2d0   ONLINE       0     0     0
>             c1t3d0   ONLINE       0     0     0
>             c1t4d0   ONLINE       0     0     0
>             c1t5d0   UNAVAIL      0     0     0  corrupted data
>             c2t8d0   ONLINE       0     0     0
>             c2t9d0   ONLINE       0     0     0
>             c2t10d0  ONLINE       0     0     0
>             c2t11d0  ONLINE       0     0     0
>             c2t12d0  ONLINE       0     0     0
>             c2t13d0  ONLINE       0     0     0
>
> errors: No known data errors
>
> I have physically replaced the failed disk with a new one, but I'm having
> problems using 'zpool replace':
>
> # zpool replace tank c1t5d0
> invalid vdev specification
> use '-f' to override the following errors:
> /dev/dsk/c1t5d0s0 is part of active ZFS pool tank. Please see zpool(1M).
> /dev/dsk/c1t5d0s2 is part of active ZFS pool tank. Please see zpool(1M).
>
> so I follow the advice, and use '-f':
>
> # zpool replace -f tank c1t5d0
> invalid vdev specification
> the following errors must be manually repaired:
> /dev/dsk/c1t5d0s0 is part of active ZFS pool tank. Please see zpool(1M).
> /dev/dsk/c1t5d0s2 is part of active ZFS pool tank. Please see zpool(1M).
>
> What now?

-- Eric Schrock, Solaris Kernel Development  http://blogs.sun.com/eschrock
Did you pull out the old drive and add a new one in its place hot? What does cfgadm -al report? Your drives should look like this:

sata0/0::dsk/c7t0d0    disk    connected    configured    ok
sata0/1::dsk/c7t1d0    disk    connected    configured    ok
sata1/0::dsk/c8t0d0    disk    connected    configured    ok
sata1/1::dsk/c8t1d0    disk    connected    configured    ok

If c0t20d0 isn't configured, use

# cfgadm -c configure sata1/1::dsk/c0t20d0

before attempting the zpool replace.

hth - Bart
-- Bart Smaalders, Solaris Kernel Performance
dd last block
bash-3.00# dd if=/dev/zero of=/dev/rdsk/c5t600039300002155Bd3 bs=16384 count=32767 oseek=30501775
dd: unexpected short write, wrote 15872 bytes, expected 16384
21617+0 records in
21617+0 records out

bash-3.00# dd if=/dev/zero of=/dev/rdsk/c5t600039300002155Bd4 bs=1024x1024 count=1 oseek=476927
dd: unexpected short write, wrote 1048064 bytes, expected 1048576
1+0 records in
1+0 records out

bash-3.00# dd if=/dev/zero of=/dev/rdsk/c5t600039300002155Bd4 bs=1024x1024 count=2 oseek=476926
dd: unexpected short write, wrote 1048064 bytes, expected 1048576
2+0 records in
2+0 records out
If the oseek is past the end it throws this:
write: I/O error
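The oseek values above come from the disk size: seek to (size in bytes / block size), minus a block or two, to land in the last megabyte. A sketch of the arithmetic with an assumed capacity (976773168 512-byte blocks, roughly what prtvtoc might report for a ~500 GB disk); the device path is a placeholder and the final command is only printed, not executed:

```shell
BLOCKS=976773168              # assumed capacity in 512-byte blocks (from prtvtoc)
BS=$((1024 * 1024))           # dd block size: 1 MiB
TOTAL=$((BLOCKS * 512))       # disk size in bytes
OSEEK=$((TOTAL / BS - 1))     # seek to the start of the last full MiB

# The command you would actually run (printed here, not executed):
echo "dd if=/dev/zero of=/dev/rdsk/cXtYdZ bs=$BS oseek=$OSEEK count=2"
```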
What works
zpool create datapool raidz c4t60003930000214EEd0 c4t60003930000214EEd1 c4t60003930000214EEd2 c4t60003930000214EEd3 c4t60003930000214EEd4 c4t60003930000214EEd5 c4t60003930000214EEd6
zpool status
zpool scrub datapool
yank drive
zpool status            <--does not recognize the failure
zpool scrub datapool
zpool status            <--now the disk is unavail
put same drive back
zpool replace datapool c4t60003930000214EEd3
zpool replace -f datapool c4t60003930000214EEd3    <--still throws the error
zpool clear datapool
zpool status            <--now it is okay
zpool scrub datapool
zpool status            <--still okay
zfs/zones
http://www.sun.com/software/solaris/howtoguides/zfshowto.jsp
http://blogs.sun.com/DanX/entry/solaris_zfs_and_zones_simple
Use “~.” to disconnect from the console
Various
clear device list in /dev/rdsk
devfsadm -C
List drives
iostat -En
iostat -xn
In order to see if you have a slow drive, run 'iostat -x' while writing data. If the svc_t field is much higher for one drive than the others, then that drive is likely slow.
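That comparison can be automated: flag any device whose svc_t is far above the rest. A sketch against canned iostat-style lines (device names and numbers are made up, and the real 'iostat -x' report has more columns than the two shown):

```shell
# Canned sample: device name and svc_t (ms). A real 'iostat -x' report also
# carries r/s, w/s, kr/s, kw/s, wait, actv, %w, %b columns.
sample='sd0 5.1
sd1 4.8
sd2 212.7
sd3 5.3'

# Report devices whose svc_t exceeds a threshold (say 50 ms):
slow=$(printf '%s\n' "$sample" | awk '$2 > 50 {print $1}')
echo "$slow"    # sd2
```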
to get the system to recognize disk changes
touch /reconfigure
reboot
zfs debugger (zdb)
zdb -l /dev/dsk/c1d0s0
zdb -vvv zpool1 17
more drive information
fmdump
fmdump -v -u a0e0918f-9de2-ef43-cb49-df625a477b7f
<quote> You get three extra pieces of information in the fmdump output: the device path, the devid and from that the device manufacturer model number and serial number.
The device path is
/pci@0,0/pci1022,7458@2/pci11ab,11ab@1/disk@4,0
which you can grep for in /etc/path_to_inst and then map using the output from iostat -En.
The devid is the unique device identifier and this shows up in the output from prtpicl -v and prtconf -v. Both of these utilities should also then show you the “devfs-path” property which you should be able to use to map to a cXtYdZ number.
Finally, you can see that you've got a Hitachi HDS7250S with serial number KRVN67ZBHDPX3H - this will definitely be reported in your iostat -En output.
cheers, James C. McPherson </quote>
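The grep-and-map step can be sketched as follows; the /etc/path_to_inst line is a fabricated example in the real file's format ("physical-path" instance "driver"), so the sd4 result is illustrative only:

```shell
# Fabricated /etc/path_to_inst entry: "physical-path" instance "driver"
line='"/pci@0,0/pci1022,7458@2/pci11ab,11ab@1/disk@4,0" 4 "sd"'

# Join driver name and instance number to get the instance name (e.g. sd4),
# which also shows up in 'iostat -En' output:
inst=$(printf '%s\n' "$line" | awk '{gsub(/"/, ""); print $3 $2}')
echo "$inst"    # sd4
```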
create rootable account 79b
http://blogs.sun.com/gbrunett/tags/rbac
usermod -P "Primary Administrator" luke
usermod -R root luke      <--this one is all that's required
roles steve luke
profiles -l steve luke
Devices and file systems
usb drive
USB memory stick: mounted at /rmdisk/noname, alias /vol/dev/aliases/rmdisk0, raw device /vol/dev/dsk/cntndn/volume-name:c
floppy
First diskette drive: mounted at /floppy, alias /vol/dev/aliases/floppy0, device /dev/rdiskette, raw device /vol/dev/rdiskette0/volume-name
Create iso from files
mkisofs -r /directory > file.iso
Device Naming conventions
- Physical device name: represents the full device path name in the device information hierarchy. The physical device name is created when the device is first added to the system. Physical device files are found in the /devices directory.
- Instance name: represents the kernel's abbreviated name for every possible device on the system. For example, sd0 and sd1 represent the instance names of two disk devices. Instance names are mapped in the /etc/path_to_inst file.
- Logical device name: created when the device is first added to the system. Logical device names are used with most file system commands to refer to devices. For a list of file commands that use logical device names, see Table 5.3 [of 817-5093.pdf]. Logical device files in the /dev directory are symbolically linked to physical device files in the /devices directory.
TABLE 5.3 Device Interface Type Required by Some Frequently Used Commands
| Command Reference | InterfaceType | Example of Use |
|---|---|---|
| df(1M) | Block | df /dev/dsk/c0t3d0s6 |
| fsck(1M) | Raw | fsck -p /dev/rdsk/c0t0d0s0 |
| mount(1M) | Block | mount /dev/dsk/c1t0d0s7 /export/home |
| newfs(1M) | Raw | newfs /dev/rdsk/c0t0d1s1 |
| prtvtoc(1M) | Raw | prtvtoc /dev/rdsk/c0t0d0s2 |
usb
Use the prtconf command output to identify whether your system supports USB 1.1 or USB 2.0 devices.
For example:
# prtconf -D | egrep "ehci|ohci|uhci"
If your prtconf output identifies an EHCI controller, your system supports USB 2.0 devices.
If your prtconf output identifies an OHCI or UHCI controller, your system supports USB 1.1 devices.
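The controller check reduces to pattern matching on the driver names. A sketch with a canned line standing in for the egrep output above (the instance text is assumed):

```shell
# Canned line standing in for 'prtconf -D | egrep "ehci|ohci|uhci"' output:
ctl='usb, instance #0 (driver name: ehci)'

# EHCI means USB 2.0; OHCI/UHCI mean USB 1.1.
case "$ctl" in
  *ehci*)        usb="USB 2.0" ;;
  *ohci*|*uhci*) usb="USB 1.1" ;;
esac
echo "$usb supported"    # USB 2.0 supported
```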
hot plugging USB Devices
Make sure that vold is running.
# svcs volfs
STATE          STIME    FMRI
online         10:39:12 svc:/system/filesystem/volfs:default
The file system can be mounted from the device if it is valid and vold recognizes it. If it fails to mount, stop vold.
# svcadm disable volfs
Then, try a manual mount.
Before hot-removing the device, find its alias name in the output of the eject -n command, then eject the device's media. If you skip this, vold still releases the device and the port is usable again, but the filesystem on the device might be damaged.
jpg viewer
gnome-open <url>
View usb information
prtconf
rmformat
cfgadm
cfgadm -l -s "cols=ap_id:info"
Managing Disks
prtvtoc
iSCSI
bash-3.00# pkginfo SUNWiscsir SUNWiscsiu
system      SUNWiscsir Sun iSCSI Device Driver (root)
system      SUNWiscsiu Sun iSCSI Management Utilities (usr)
bash-3.00# cd /export/home/
bash-3.00# mkdir sandbox
bash-3.00# iscsitadm modify admin -d /export/home/sandbox
bash-3.00# iscsitadm create target --size 2g sandbox
bash-3.00# iscsitadm list target -v sandbox
Target: sandbox
iSCSI Name: iqn.1986-03.com.sun:02:0206870f-6503-49ef-e23e-ad1bc458c5ea.sandbox
Connections: 0
ACL list:
TPGT list:
LUN information:
LUN: 0
GUID: 01000019b9f9386300002a0047ab0bbb
VID: SUN
PID: SOLARIS
Type: disk
Size: 2.0G
Status: offline
for testing, I set up
targetzone hostname=targetiscsi 192.168.2.98
initiatorzone hostname=initiatoriscsi 192.168.2.97
http://www.opensolaris.org/os/project/crossbow/Docs/ipinstances-sug1.pdf
release
bash-3.00# cat /etc/release
Solaris 10 8/07 s10x_u4wos_12b X86
Copyright 2007 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 16 August 2007
bash-3.2# cat /etc/release
Solaris Express Community Edition snv_85 X86
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 10 March 2008
bash-3.2# isalist -b
amd64 pentium_pro+mmx pentium_pro pentium+mmx pentium i486 i386 i86
bash-3.2# isainfo -v
64-bit amd64 applications
ssse3 cx16 mon sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
32-bit i386 applications
ssse3 ahf cx16 mon sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu
bash-3.2# psrinfo -v
Status of virtual processor 0 as of: 04/11/2008 08:45:28
on-line since 04/10/2008 17:50:11.
The i386 processor operates at 2667 MHz,
and has an i387 compatible floating point processor.
Status of virtual processor 1 as of: 04/11/2008 08:45:28
on-line since 04/10/2008 17:50:14.
The i386 processor operates at 2667 MHz,
and has an i387 compatible floating point processor.
Status of virtual processor 2 as of: 04/11/2008 08:45:28
on-line since 04/10/2008 17:50:14.
The i386 processor operates at 2667 MHz,
and has an i387 compatible floating point processor.
Status of virtual processor 3 as of: 04/11/2008 08:45:28
on-line since 04/10/2008 17:50:14.
The i386 processor operates at 2667 MHz,
and has an i387 compatible floating point processor.
bash-3.2# isainfo -kv
64-bit amd64 kernel module
- Solaris 10 3/05 = Solaris 10 RR 1/05
- Solaris 10 1/06 = Update 1
- Solaris 10 6/06 = Update 2
- Solaris 10 11/06 = Update 3
- Solaris 10 8/07 = Update 4
- Solaris 10 5/08 = Update 5
zfs example commands
Global# zpool create mypool mirror c2t5d0 c2t6d0
Global# zpool list
Global# mkdir /zones
Global# zonecfg -z myzone
myzone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:myzone> create
zonecfg:myzone> set zonepath=/zones/myzone
zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit
Global# zoneadm -z myzone install
Global# zoneadm -z myzone boot
Global# zlogin -C myzone
Global# zlogin myzone init 5
Run Levels
init 0    shutdown, do not power off
init 3    normal (multi-user)
init 5    shutdown, power off
Fiber Channel card driver install
Download driver zip file from: LSI 7404EP Driver
and unzip.
Starting with the itmptfc_install.tar.Z file:
1. Uncompress and un-tar the itmptfc_install.tar.Z file by typing the
following commands to create a directory named install:
uncompress itmptfc_install.tar.Z
tar -xvf itmptfc_install.tar
cd install
2. Start the installation by invoking the pkgadd command as:
pkgadd -d .
3. Follow the prompts to perform the installation.
4. The itmptfc device driver is now installed. Reboot the
machine to reconfigure the system and to recognize the new devices.
NOTE: If you change the disk drive configuration of your machine, it
may be necessary to issue the command:
touch /reconfigure
and then reboot the system in order for the system to detect and
correctly install your new disks.
ended up with 2.73 TB on each RAID from half the XRAID
TODO
- name cname for filestore?
- nic configure for world
- nic configure for management
- firewall
- smb
- nfs
- authentication via ldap