Monday, August 5, 2013

Hard links in Unix: what are they?


The article below gives you a picture of what hard links are.

A hard link is essentially a label or name assigned to a file. Conventionally, we think of a file as consisting of a set of information that has a single name. However, it is possible to create a number of different names that all refer to the same contents. Commands executed upon any of these different names will then operate upon the same file contents.

To make a hard link to an existing file, enter:

  ln thisfile sameasthisfile

Replace thisfile with the original filename, and sameasthisfile with the additional name you would like to use to refer to the original file.

This will create a new item in your working directory, sameasthisfile, which is linked to the contents of thisfile. The new link will show up along with the rest of your file names when you list them using the ls command. This new link is not a separate copy of the old file, but rather a different name for exactly the same file contents as the old file. Consequently, any changes you make to thisfile will be visible in sameasthisfile as well.

You can use the standard Unix rm command to delete a link. After a link has been removed, the file contents still exist as long as at least one name references the file. Thus, if you use the rm command on a file name and a separate link exists to the same file contents, you have not really deleted the file; you can still access it through the other link. Consequently, hard links can make it difficult to keep track of files. Furthermore, hard links cannot span file systems (for example, they cannot refer to files on another computer mounted via NFS), nor can they refer to directories. For all of these reasons, you should consider using a symbolic link, also known as a soft link, instead of a hard link.
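You can see this for yourself with ls -li, which prints the inode number and link count for each name. A minimal sketch, using the hypothetical file names from above:

  ln thisfile sameasthisfile
  ls -li thisfile sameasthisfile

Both names will show the same inode number and a link count of 2, confirming that they refer to the same file contents.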

The link below has good information on how to see hard links with ls:

http://superuser.com/questions/12972/how-can-you-see-the-actual-hard-link-by-ls

A good article on hard links and soft links:

http://linuxgazette.net/105/pitcher.html


Friday, August 2, 2013

Starting udev hangs while installing SUSE Linux on Hyper-V

When the installation hangs at "starting udev", it is almost always due to a hardware incompatibility. On a physical machine, try changing the hardware; on a virtual machine, try changing the network adapter. Using a legacy network adapter solved my problem on Hyper-V.

Tuesday, July 30, 2013

Install Bash on AIX 7.1

     How to install Bash on AIX 7.1: just follow the steps below.

 XYZ# ftp ftp.software.ibm.com
      Name> ftp
      Password> anything@xyz.com
          ftp> cd aix/freeSoftware/aixtoolbox/RPMS/ppc/wget
          ftp> binary
          ftp> get wget-1.9.1-1.aix5.1.ppc.rpm
          ftp> quit
      XYZ# rpm -hUv wget-1.9.1-1.aix5.1.ppc.rpm
      XYZ# wget -r -nd ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/ezinstall/ppc

You will now have the following files in the directory that you created:


XYZ # ls
getapp-dev.sh       getgnome.base.sh    getkde3.all.sh
Xsession.kde        getbase.sh          getkde2.all.sh      getkde3.base.sh
Xsession.kde2       getdesktop.base.sh  getkde2.base.sh     getkde3.opt.sh
getgnome.apps.sh    getkde2.opt.sh   

XYZ # chmod +x get*.sh

Run the script getbase.sh; this will create a directory called base and FTP the RPMs into it.


XYZ # ./getbase.sh
XYZ # cd base
XYZ # ls
bash-4.2-1.aix6.1.ppc.rpm          info-4.6-1.aix5.1.ppc.rpm          rpm-build-3.0.5-52.aix5.3.ppc.rpm  zip-2.3-3.aix4.3.ppc.rpm
bzip2-1.0.5-3.aix5.3.ppc.rpm       patch-2.5.4-4.aix4.3.ppc.rpm       rpm-devel-3.0.5-52.aix5.3.ppc.rpm
gettext-0.10.40-8.aix5.2.ppc.rpm   popt-1.7-2.aix5.1.ppc.rpm          tar-1.22-1.aix6.1.ppc.rpm
gzip-1.2.4a-10.aix5.2.ppc.rpm      rpm-3.0.5-52.aix5.3.ppc.rpm        unzip-5.51-1.aix5.1.ppc.rpm


XYZ#

Install the rpms that you need:

XYZ# rpm -hUv unzip-5.51-1.aix5.1.ppc.rpm
XYZ# rpm -hUv zip-2.3-3.aix4.3.ppc.rpm
XYZ# rpm -hUv bash-4.2-1.aix6.1.ppc.rpm

There we go; we now have Bash on AIX 7.1:
XYZ# bash
bash-4.2#
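To confirm the installation, you can query the RPM database and locate the binary; note that the exact package name and install path may differ on your system:

XYZ# rpm -qa | grep bash
XYZ# which bash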

Wednesday, July 24, 2013

Simple Linux Cluster.

     How to build a simple Linux cluster (an attempt to make a drop out of an ocean).

Things you will require before starting to build the cluster, with one virtual IP and storage fail-over from one node to the other:

1) SUSE 11 Package DVD with High Availability Package.
2) Two virtual or physical machines on which the SUSE packages can be installed.
3) Three static IP addresses that can be used (one for each node, plus one for the cluster's virtual IP).
4) ISCSI storage.
5) Understand Linux network configuration, specifically multicast and broadcast.
     Why understand this? We need to configure the network and cluster communication settings, and it is very important to know which multicast IP address can be used in your network.
http://en.wikipedia.org/wiki/Multicast
http://en.wikipedia.org/wiki/Multicast_address
6) Understand the resource management of the cluster with Pacemaker.
http://lcmc.sourceforge.net/1.1-lcmc/pdf/Clusters_from_Scratch/Pacemaker-1.1-Clusters_from_Scratch-en-US.pdf
7) Understand how password-less SSH communication can be achieved between two Linux machines (see the sketch after this list).
http://www.linuxproblem.org/art_9.html
8) Have a list of the packages that need to be installed for your requirement.
In the present use case the following packages are installed:

a) OpenAIS
b) OCFS2 (including the file system package)
https://oss.oracle.com/projects/ocfs2/dist/documentation/v1.4/ocfs2-1_4-usersguide.pdf
c) o2cb
d) Corosync
e) iSCSI initiator
f) Pacemaker
g) Heartbeat
h) crm
i) hawk

9) Last but not least, it helps to read the complete documentation at the following link before you start the step-by-step process for creating a basic cluster.

https://www.suse.com/documentation/sle_ha/singlehtml/book_sleha/book_sleha.html
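For the password-less SSH mentioned in point 7, the usual approach is to generate a key pair and copy the public key to the other node. A minimal sketch, assuming hypothetical node names node1 and node2 and a root login:

ssh-keygen -t rsa
ssh-copy-id root@node2
ssh root@node2

Repeat in the other direction so that both nodes can reach each other without a password.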

Step By Step:

1) Install the OS with all the above-mentioned packages on a virtual or physical machine.
2) Configure the machine name (hostname) and a static IP on each machine.
3) Make sure that communication between both machines works and that each machine's IP address and hostname are resolvable from the other.
    It is good to configure the machine names and IPs properly in DNS. The other way is to configure the hostname and IP address in the hosts file on each of these machines/nodes, as in the sketch below.
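A minimal /etc/hosts sketch; the IP addresses and node names here are hypothetical, so replace them with your own:

192.168.1.11   node1
192.168.1.12   node2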
4) Configure the iSCSI storage in such a way that the same storage is shared across both machines and is visible to both of them, with read and write permissions for both machines.
http://www.linuxtopia.org/online_books/suse_linux_guides/SLES10/suse_enterprise_linux_server_installation_admin/sec_inst_system_iscsi_initiator.html
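With the open-iscsi initiator, the discovery and login steps look roughly like the sketch below; the portal address and target name here are hypothetical, so use the values from your iSCSI target:

iscsiadm -m discovery -t st -p 192.168.1.20
iscsiadm -m node -T iqn.2013-07.com.example:cluster-disk -p 192.168.1.20 --login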

5) Configure the password for the high-availability user that is used to log in to Pacemaker.
To log in to the cluster from the Pacemaker GUI, the respective user must be a member of the haclient group. The installation creates a Linux user named hacluster and adds it to the haclient group.
Before using the Pacemaker GUI, either set a password for the hacluster user or create a new user who is a member of the haclient group.
Do this on every node you will connect to with the Pacemaker GUI.
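For example, run the following as root on each node:

passwd hacluster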
6) Perform the initial cluster configuration as specified in the steps at the link below, with or without redundancy in the network configuration.
https://www.suse.com/documentation/sle_ha/singlehtml/book_sleha/book_sleha.html#sec.ha.installation.setup.manual.

The above step completes the configuration of your cluster, without any storage yet.

You can check this with the crm_mon command, which gives you the details of the configured nodes and resources along with the configuration information.
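For example, to print the cluster status once and exit:

crm_mon -1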

7) Once the above steps are done, the cluster node configuration is complete and we need to look into the resource configuration. (You need to understand which combination of resources works for you.) For my setup I have chosen the disk and a virtual IP as the cluster resources, and I am configuring them as part of a group, which makes it easier to move them together from one node to the other in case of a fail-over.

8) Configure the file system once the above is done. Make sure that the iSCSI storage is available to both nodes, and format it with the OCFS2 file system. Once the file system is configured, configure the cluster nodes in ocfs2console, and configure o2cb and ocfs2 to start at boot time on both nodes.

Configure o2cb with the following command on both nodes:
/etc/init.d/o2cb configure

Don't forget to provide the cluster name in the above configuration; it's a must.
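Formatting the shared disk might look like the sketch below; the device name /dev/sdb is hypothetical, so substitute the shared iSCSI device as it appears on your systems, and format from one node only:

mkfs.ocfs2 -N 2 -L clusterfs /dev/sdb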

9) Open crm_gui from the node on which the cluster configuration is being done, select the resources node, and then select the add option. Select the group radio button, give the group a name, and then configure the primitive IP address and primitive storage configuration.

ip : ocf:heartbeat:IPaddr2
You need to specify the IP address and netmask for this to work properly.

storage : ocf:heartbeat:Filesystem
You need to specify the block device that is formatted and available on both nodes, the mount path on the local machine, and the correct file system type parameter.
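The same group can also be created from the crm shell instead of the GUI. A minimal sketch, assuming a hypothetical virtual IP of 192.168.1.100, the shared device /dev/sdb, and a mount point of /mnt/shared (substitute your own values):

crm configure primitive cluster_ip ocf:heartbeat:IPaddr2 params ip=192.168.1.100 cidr_netmask=24
crm configure primitive cluster_fs ocf:heartbeat:Filesystem params device=/dev/sdb directory=/mnt/shared fstype=ocfs2
crm configure group cluster_group cluster_ip cluster_fs

Putting both primitives in one group keeps them on the same node and lets them move together during a fail-over.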

Once again, last but not least: there are things that are still confusing or not fully understood. You can send me an email to get answers for anything that is not clear, and I can append the same to this blog.