In my career I have gone from building volumes from the bottom up, to using vxassist, to VEA (with CFS clusters), and back to the command line. My Symantec reps have been raving about a new management console to replace VEA, but I'm leery of new software that comes with warnings about triggering kernel panics on existing older CFS clusters.
In the last year I have switched back to using the command line almost exclusively and I’m now going to illustrate the easiest and quickest way to add luns and grow filesystems.
In this example I'm going to be using LUNs 7 and 8 to grow my filesystem by 300GB. Although this resize can be done in one step, I'm splitting it out into two commands for illustration purposes.
First thing I’m going to check is to make sure that there is space left over in the filesystem for the new inodes.
If the filesystem is 100% full with no space left, do not proceed: if you attempt the resize, the operation will likely freeze and you'll need a reboot to complete the grow. If you are close to 100% but still have some space left, you can try growing slowly, in chunks of a few MB at a time, until you comfortably have enough free space to grow the volume.
I had read somewhere that this behavior should be gone by now, but members of my team still encounter it on occasion.
root@testserver # df -h /zones/.zonemounts/testzone-01/niqa3
Filesystem                 size  used  avail capacity  Mounted on
/dev/vx/dsk/blddbdg/niqa3  200G  5.0G  183G      3%    /zones/.zonemounts/testzone-01/niqa3
In this case I have plenty of space so I’m going to proceed.
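The command output for the grow itself didn't survive here, but a minimal sketch of the two-step procedure on Veritas Volume Manager would look roughly like the following. The disk group `blddbdg` and volume `niqa3` come from the df output above; the disk access names (`emc0_7`, `emc0_8`) and media names are placeholders, not from the original post — check `vxdisk list` for yours.

```shell
# Rescan so the OS and VxVM see the newly presented LUNs 7 and 8
vxdisk scandisks

# Initialize the new disks and add them to the disk group
# (emc0_7 / emc0_8 are assumed access names)
vxdisksetup -i emc0_7
vxdisksetup -i emc0_8
vxdg -g blddbdg adddisk blddbdg07=emc0_7 blddbdg08=emc0_8

# Grow the volume and the VxFS filesystem together;
# vxresize resizes both in one operation, online
vxresize -g blddbdg -F vxfs niqa3 +300g
```

Splitting it into `vxassist growby` followed by a separate filesystem grow works too, but `vxresize` keeps the volume and filesystem sizes in lockstep, which is the safer default.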
Unless you spend your time managing only database servers, or have dedicated an entire disk to swap, you are eventually going to need to increase swap to handle an overzealous Java application.
I had to do this the other night and figured it would be good to post the process.
If you are on a ZFS root, while you can technically resize the zfs volume, Solaris will not pick up the changes without a reboot.
You could always drop the swap volume and re-add it, but if you are having to increase swap, that is probably a really bad idea.
So here is a quick and dirty procedure for adding ZFS swap to a system. This system had less than 2GB of swap left and was starting to get sluggish.
root@testserver # swap -s
total: 47704096k bytes allocated + 10022424k reserved = 57726520k used, 1726824k available
root@testserver # swap -l
swapfile                   dev    swaplo   blocks     free
/dev/zvol/dsk/rpool/swap   256,1      16 16779248 16779248
root@testserver # zfs create -V 20G rpool/swap_1
root@testserver # swap -a /dev/zvol/dsk/rpool/swap_1
root@testserver # swap -l
swapfile                   dev    swaplo   blocks     free
/dev/zvol/dsk/rpool/swap   256,1      16 16779248 16779248
/dev/zvol/dsk/rpool/swap_1 256,3      16 41943024 41943024
root@testserver # swap -s
total: 47697768k bytes allocated + 10022152k reserved = 57719920k used, 22704208k available
root@testserver # echo "/dev/zvol/dsk/rpool/swap_1 - - swap - no -" >> /etc/vfstab
Here is the procedure on UFS:
root@testserver # mkfile 20G /var/DO_NOT_DELETE_swapfile1
root@testserver # swap -a /var/DO_NOT_DELETE_swapfile1
root@testserver # swap -l
swapfile                     dev    swaplo   blocks     free
/dev/md/dsk/d10              85,10      16 16780208 16780208
/var/DO_NOT_DELETE_swapfile1 -          16 41943024 41943024
root@testserver # echo "/var/DO_NOT_DELETE_swapfile1 - - swap - no -" >> /etc/vfstab
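If the extra swap was only needed to ride out a spike, a sketch of backing it out later, once `swap -l` shows the space is no longer in use:

```shell
# Remove the swap file from the running configuration, then delete it
# (also remove the line you appended to /etc/vfstab)
swap -d /var/DO_NOT_DELETE_swapfile1
rm /var/DO_NOT_DELETE_swapfile1

# ZFS variant: detach the zvol from swap, then destroy it
swap -d /dev/zvol/dsk/rpool/swap_1
zfs destroy rpool/swap_1
```

`swap -d` will refuse (or block) if the pages can't be migrated back, so run it during a quiet period.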
Configuring IPMP on Solaris 11 has become very straightforward and simple.
However, most of the examples I have seen online assume that both IPMP interfaces are unused and don't already carry your system's IP, which probably isn't the case.
To get past this, you just need to run an ipadm delete-addr on the existing interface(s) that already have IPs assigned.
Here are the steps for configuring IPMP active/standby, with test addresses.
In this example testserver-nic0 and testserver-nic1 are the DNS names for the test addresses on each network card and are defined in the hosts file.
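For reference, the relevant /etc/hosts entries would look something like this — the addresses here are made-up placeholders, not from the original post:

```shell
# /etc/hosts (example addresses are assumptions)
10.0.0.10   testserver        # data address, ends up on ipmp0
10.0.0.11   testserver-nic0   # test address for net0
10.0.0.12   testserver-nic1   # test address for net2
```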
1. Identify the net devices to be used. In this case I will be using bge0 and bge2, which map to net0 and net2.
root@testserver-01:/# dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
net1 Ethernet unknown 0 unknown bge1
net3 Ethernet unknown 0 unknown bge3
net0 Ethernet up 1000 full bge0
net2 Ethernet unknown 0 unknown bge2
2. Remove any addresses if defined
ipadm delete-addr net0/v4
3. Create IPMP device and assign both network cards to it
ipadm create-ipmp ipmp0
ipadm add-ipmp -i net0 -i net2 ipmp0
4. Configure probe-based failure detection and the address for the card.
For this you will either assign test addresses to the adapters in the IPMP group (like in Solaris 10) or enable transitive probing, which doesn't require test addresses.
Using test addresses:
ipadm create-addr -T static -a testserver-nic0/23 net0/test
ipadm create-addr -T static -a testserver-nic1/23 net2/test
ipadm set-ifprop -p standby=on -m ip net2
ipadm create-addr -T static -a testserver/23 ipmp0/v4
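At this point you can check the group's health with ipmpstat, for example:

```shell
# Show group state and which interfaces are active vs standby
ipmpstat -g

# Show which interface currently hosts each data address
ipmpstat -an

# Confirm probe traffic is flowing from the test addresses
ipmpstat -pn
```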
Using Transitive probing:
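The commands for this variant didn't make it into the post; to my knowledge transitive probing is switched on through the IPMP SMF service, roughly as follows, after which no test addresses are needed:

```shell
# Enable transitive probing in the IPMP service
svccfg -s svc:/network/ipmp setprop config/transitive-probing=true
svcadm refresh svc:/network/ipmp:default

# Mark the standby interface and put the data address on the group
ipadm set-ifprop -p standby=on -m ip net2
ipadm create-addr -T static -a testserver/23 ipmp0/v4
```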
Oracle RAC 11gR2 added the requirement that slewalways yes and disable pll be set in the ntp.conf.
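That is, these two xntpd-era directives at the top of /etc/inet/ntp.conf, which the NTPv4 daemon shipped with Solaris 11 no longer accepts:

```shell
# /etc/inet/ntp.conf -- lines 1 and 2 flagged by ntpd below
slewalways yes
disable pll
```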
However, on Solaris 11 if you have those lines in your ntpd.conf, you’ll get the following error:
Apr 8 14:54:50 testserver ntpd: [ID 702911 daemon.error] syntax error in /etc/inet/ntp.conf line 1, ignored
Apr 8 14:54:50 testserver ntpd: [ID 702911 daemon.error] syntax error in /etc/inet/ntp.conf line 2, ignored
Searching through Metalink, it turns out that the SMF service for NTP now has an option to configure slewing.
Here is how to enable the setting:
svccfg -s svc:/network/ntp:default setprop config/slew_always = true
svcadm refresh ntp
svcadm restart ntp
You can verify it with:
root@testserver:~# svcprop -p config/slew_always svc:/network/ntp:default
true
Despite this being set, the Oracle installer will still flag it as missing, but you can now safely ignore it (Metalink note 1373255.1).
I have been working on finishing up my Solaris 11 baseline and I noticed a weird issue where my server would no longer have a default route after the first reboot.
Digging into my logs, I found the following error on the initial boot.
Error creating default route: "/usr/sbin/route get default 10.0.0.1 -ifp net0"
Looking into the log file for the service at /var/svc/log/network-install\:default.log
[ Apr 8 16:55:06 Executing start method ("/lib/svc/method/net-install"). ]
add net default: gateway 10.0.0.1
   route to: default
destination: default
       mask: default
    gateway: 10.0.0.1
  interface: net0
      flags: <UP,GATEWAY,DONE,STATIC>
 recvpipe  sendpipe  ssthresh    rtt,ms rttvar,ms  hopcount      mtu     expire
       0         0         0         0         0         0      1500         0
Error creating default route: "/usr/sbin/route get default 10.0.0.1 -ifp net0"
Everything seems correct, except for the error.
So I dug into the service method to see what it was doing: