How do you unfreeze a service group (enable onlining and offlining)?
#haconf -makerw
#hagrp -unfreeze SG -persistent
#haconf -dump -makero
Get the MAC address from both nodes:
#getmac /dev/qfe:0
-sv runs the server side, -cv the client side:
#./dlpiping -sv /dev/qfe:0 macaddress
#./dlpiping -cv /dev/qfe:0 macaddress
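A hypothetical dry-run helper for the test above: dlpiping ships with VCS/LLT and needs real NICs, so this sketch only builds the command strings instead of executing them.

```shell
# Build the dlpiping command line for either end of the link test.
# The device path and MAC address are placeholders.
dlpiping_cmd() {
  role=$1; device=$2; mac=$3
  case $role in
    server) echo "./dlpiping -sv $device $mac" ;;
    client) echo "./dlpiping -cv $device $mac" ;;
  esac
}

dlpiping_cmd server /dev/qfe:0 0:3:ba:29:4c:11
# -> ./dlpiping -sv /dev/qfe:0 0:3:ba:29:4c:11
```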
Stop the cluster on the local server but leave the application(s) running; do not fail over the application(s):
#hastop -local -force
Stop the cluster on the local server but evacuate (fail over) the application(s) to another node within the cluster:
#hastop -local -evacuate
Stop the cluster on all nodes but leave the application(s) running:
#hastop -all -force
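The three shutdown variants above can be sketched as a small lookup; this is a dry run that prints the command string rather than executing hastop.

```shell
# Map a shutdown intent to the matching hastop invocation (dry run).
hastop_cmd() {
  case $1 in
    local-keep)     echo "hastop -local -force" ;;     # stop HAD, apps keep running
    local-evacuate) echo "hastop -local -evacuate" ;;  # fail apps over first
    all-keep)       echo "hastop -all -force" ;;       # whole cluster, apps keep running
    *)              echo "unknown mode: $1" >&2; return 1 ;;
  esac
}

hastop_cmd local-evacuate
# -> hastop -local -evacuate
```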
Verify the resources are online on tgui1-svc:
# scstat
Take all resources offline:
# scswitch -F -g smsweb-rg
Verify the resources are offline on both tgui1-svc and tgui2-svc:
# scstat
Bring the resource group online (on its primary node, tgui1-svc):
# scswitch -Z -g smsweb-rg
Verify the resources are online on tgui1-svc:
# scstat
Switch the resources from tgui1-svc to tgui2-svc:
# scswitch -z -g smsweb-rg -h tgui2-svc
Verify the resources are online on tgui2-svc:
# scstat
Switch the resources back from tgui2-svc to tgui1-svc:
# scswitch -z -g smsweb-rg -h tgui1-svc
Verify the resources are online on tgui1-svc:
# scstat
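The drill above can be reviewed as a dry-run script before touching a real Sun Cluster; the resource-group and node names come from the text, and commands are echoed rather than executed.

```shell
# Dry-run of the offline/online/switch drill for one resource group.
RG=smsweb-rg

drill() {
  target=$1
  echo "scstat"                        # verify current state
  echo "scswitch -F -g $RG"            # take the resource group offline
  echo "scstat"
  echo "scswitch -Z -g $RG"            # bring it online on its primary
  echo "scswitch -z -g $RG -h $target" # move it to the named node
  echo "scstat"
}

drill tgui2-svc
```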
There are 4 types of service group dependencies:
Online local
Online remote
Online global
Offline local
#haconf -makerw
#hagrp -add groupname
#hagrp -modify groupname SystemList -add node1 0 node2 1
#haconf -dump -makero
1. Check the service group's dependencies on other service groups:
#hagrp -dep <Service Group>
2. Make the cluster configuration read-write:
#haconf -makerw
3. Switch the online service group to the other system:
#hagrp -switch <Service Group> -to <host2>
4. Keep the service group stable on the other host by freezing it:
#hagrp -freeze <Service Group>
5. Disable the service group on host1 so it cannot come online there:
#hagrp -disable <Service Group> -sys <host1>
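The five steps above can be sketched as one dry-run function; the group and host names are placeholders and every command is echoed, not executed.

```shell
# Dry-run of a planned service group switchover from one node to another.
planned_switch() {
  sg=$1; from=$2; to=$3
  echo "hagrp -dep $sg"                # 1. check group dependencies
  echo "haconf -makerw"                # 2. open the configuration read-write
  echo "hagrp -switch $sg -to $to"     # 3. switch the group to the other host
  echo "hagrp -freeze $sg"             # 4. freeze the group on the new node
  echo "hagrp -disable $sg -sys $from" # 5. disable it on the old host
}

planned_switch websg hostA hostB
```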
A corrupted main.cf configuration file can be recreated using the two files below:
1) main.cf.previous
2) main.cmd
If running on Solaris, use the following command to get the VxVM version:
#pkginfo -l VRTSvxvm
/usr/cluster/bin/scconf is the path in suncluster
Whenever we add a user, the files /etc/passwd, /etc/shadow and /etc/group are updated, and when we assign or change the password, /etc/shadow is updated.
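This can be illustrated with getent(1), which consults those same databases; we check root here because it exists on any Unix system (reading /etc/shadow itself requires root privileges, so it is left out).

```shell
# Show that a user has entries in both the passwd and group databases.
user_entries() {
  name=$1
  getent passwd "$name" | cut -d: -f1   # entry from /etc/passwd
  getent group  "$name" | cut -d: -f1   # entry from /etc/group
}

user_entries root
```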
At the OK prompt, we can use the printenv command:
ok printenv boot-device
From a shell prompt:
#eeprom | grep boot-device
or
#prtconf -vp
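eeprom prints name=value pairs, so the value is easy to extract in the shell; the sample line below is an assumed capture, since we may not be on a SPARC box.

```shell
# Parse the boot-device value out of a captured eeprom line.
sample='boot-device=/pci@1f,4000/scsi@3/disk@0,0:a disk net'
boot_dev=${sample#*=}   # strip everything up to the first '='
echo "$boot_dev"
# -> /pci@1f,4000/scsi@3/disk@0,0:a disk net
```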
Up to Solaris 9, run levels are used: 0 1 2 3 4 5 6.
Solaris 10 introduces milestones, an improved form of run levels.
With run levels we use init 1 to bring the system to single-user mode; with milestones, the command # svcadm milestone single-user is used instead. Similarly, init 2 is multi-user (more than one user) and init 3 is multi-user-server, which provides multi-user mode along with NFS. Run level 4 is currently unused; init 5 shuts down and powers off; init 6 shuts down and restarts.
After the kernel initializes itself, the init phase starts. In Solaris 9 the kernel reads the init level from the /etc/inittab file, but in Solaris 10 the svc.startd daemon starts the milestone services.
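The mapping above can be sketched as a small lookup. The single-user and multi-user mappings follow the text; init 5 and 6 have no milestone equivalent and still use init/shutdown.

```shell
# Map a classic init run level to its nearest Solaris 10 milestone command.
milestone_for() {
  case $1 in
    1|s|S) echo "svcadm milestone single-user" ;;
    2)     echo "svcadm milestone multi-user" ;;
    3)     echo "svcadm milestone multi-user-server" ;;
    *)     echo "no milestone equivalent for run level $1" ;;
  esac
}

milestone_for 3
# -> svcadm milestone multi-user-server
```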
To view the current values of the resource controls, enter the following commands:
#id -p    (to verify the project ID)
uid=0(root) gid=0(root) projid=1(user.root)
#prctl -n project.max-shm-memory -i project user.root
#prctl -n project.max-sem-ids -i project user.root
If the /etc/system file is deleted, restore it from a backup. If no backup exists, boot interactively from the OK prompt with boot -a and supply /dev/null as the system file; the system then boots with default settings and /etc/system can be recreated.
1) Check the available metadevices: #metastat -p
2) Detach the submirror with errors: #metadetach d1 d2
If you get an error here: #metadetach -f d1 d2
3) Now check the status again: #metastat -p
We usually use metareplace when we have faulty submirrors; once the failed component is replaced, it resyncs with the mirror.
1) Find the meta state databases on the slice:
#metadb -i
2) If any meta state database replicas exist on the failed disk, remove them:
#metadb -d c0t0d0sX    (where X is the slice number)
3) Once the meta state database replicas are removed, use cfgadm to unconfigure the device:
#cfgadm -c unconfigure diskname
Once unconfigured, replace the disk and configure it as below:
4) #cfgadm -c configure diskname
5) Now copy the Volume Table Of Contents (VTOC) from a surviving disk to the new disk:
#prtvtoc /dev/rdsk/gooddisk | fmthard -s - /dev/rdsk/newdisk
6) Once the VTOC is in place, use the metareplace command to replace the faulty metadevices:
#metareplace -e d11 devicename    (where d11 is the metadevice)
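The six steps above can be collected into one dry-run function. The slice numbers are assumptions (s7 for replicas and s2 for the whole disk are merely conventional), the disk names are placeholders, and the commands are echoed rather than executed.

```shell
# Dry-run of the failed-disk replacement procedure under SVM.
replace_disk() {
  good=$1; new=$2; md=$3
  echo "metadb -i"                      # 1. locate state database replicas
  echo "metadb -d ${new}s7"             # 2. remove replicas on the failed disk
  echo "cfgadm -c unconfigure $new"     # 3. unconfigure the device
  echo "cfgadm -c configure $new"       # 4. configure the replacement
  echo "prtvtoc /dev/rdsk/${good}s2 | fmthard -s - /dev/rdsk/${new}s2"  # 5. copy VTOC
  echo "metareplace -e $md ${new}s0"    # 6. re-enable the metadevice
}

replace_disk c0t0d0 c0t1d0 d11
```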
A state database is a collection of multiple, replicated database copies, and each copy is considered a state database replica.
If the Solaris box loses a state database replica, SVM must figure out which replicas still contain valid data and boot using those. This is achieved with a majority consensus algorithm: SVM requires half + 1 of the state database replicas to agree before it accepts their data as valid.
For this reason we create at least three state database replicas when we set up a disk configuration; if all three replicas are corrupted, we lose all data stored on SVM volumes.
Hence it is good practice to create as many replicas as possible on separate drives across controllers.
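The majority-consensus rule above fits in one line of arithmetic: with N replicas, floor(N/2) + 1 valid copies are needed, which is why three replicas is the sensible minimum.

```shell
# Replicas needed for a majority consensus, given N total replicas.
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 3   # -> 2 (one replica can be lost and the system still boots)
quorum 4   # -> 3
```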
1) Disk failures (boot device failures)
2) Insufficient state database replicas
3) Wrong entries in the /etc/vfstab file