Sunday 27 July 2014

autoyast configuration for PXE boot #OpenSuSE #SLES11

Objective: autoyast configuration and boot via PXE

Environment: OpenSuSE 11/SuSE 11

With reference to my earlier post on PXE boot for SLES ( click here ), I will continue with how AutoYaST can be configured and combined with the PXE environment.

Yast -> Miscellaneous -> Autoinstallation

In the Groups and their corresponding Modules shown above, clone the Modules so that the current system's settings are copied to the destination host.
One such example is shown below:

Pic -1:
 Autoinstallation - Configuration
 ┌Groups────────────────────────────┐┌Modules───────────────────────
 │Hardware                          ││Add-On Products               
 │High Availability                 ││Image deployment              
 │Miscellaneous                     ││Online Update Configuration   
 │Network Services                  ││Package Selection             
 │Network Devices                   ││                              
 │Security and Users                ││                              
 │Software                          ││                              
 │Support                           ││                              
 │System                            ││                              
 │Virtualization                    ││                              
 │                                  ││                              

Pic -2:

Details
┌───────────────────────────────────────────────────────────────────┐

│Selected Patterns                                                  │
│                                                                   │
│ *  Minimal                                                        │
│ *  WBEM                                                           │
│ *  apparmor                                                       │
│ *  base                                                           │
│ *  dhcp_dns_server                                                │
│ *  documentation                                                  │
│ *  file_server                                                    │
│ *  gnome                                                          │
│ *  lamp_server                                                    │
│ *  print_server                                                   │
│ *  x11                                                            │
│                                                                   │
│Individually Selected Packages                                     │
│                                                                   │
│149                                                                │
│                                                                   │
│Packages to Remove                                                 │
│                                                                   │
│20                                                                 │
│                                                                   │
│                                                                   │
└───────────────────────────────────────────────────────────────────┘
      [Clone]                                                [Edit]
 [Apply to system]                                           [Clear]


Once the package cloning and all other Groups are completed, save the file (XML format); by default it resides in the directory /var/lib/autoinstall/repository.

Note: During the User and Group Management selection, make sure you de-select 'gdm', as it is created during the installation and hence can be omitted. You may receive the error below in case you have selected that user and group.

Error: Could not update ICEauthority file /var/lib/gdm/.ICEauthority

# ls -l /var/lib/autoinstall/repository/*.xml
-rw-r--r-- 1 root root 47703 Jul 25 13:55 /var/lib/autoinstall/repository/autoyast_pxe.xml

- Make a directory inside Apache's default DocumentRoot /srv/www/htdocs/:

# mkdir /srv/www/htdocs/autoyast
#

- Copy the default XML file from /var/lib/autoinstall/repository/autoyast_pxe.xml to /srv/www/htdocs/autoyast, as below:
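
# cp /var/lib/autoinstall/repository/autoyast_pxe.xml /srv/www/htdocs/autoyast/
#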
- Make sure that PXE hands the AutoYaST configuration to the installer at boot: append the autoyast parameter to the APPEND line in the PXE default config, as below.

APPEND initrd=sles/11/x86_64/initrd splash=silent showopts install=http://192.168.56.116/sles/11/x86_64/ autoyast=http://192.168.56.116/autoyast/autoyast_pxe.xml

The AutoYaST config file can be downloaded from http://goo.gl/1uWoHz
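
Before booting a client, it is worth verifying that the profile is actually reachable over HTTP (a quick check, assuming curl is available on the server):

# curl -I http://192.168.56.116/autoyast/autoyast_pxe.xml

An HTTP 200 response confirms that Apache is serving the file at the URL used in the APPEND line.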

With this in place, your installations are automated end to end.

Friday 25 July 2014

PXE Installation on SLES 11

Objective: PXE installation for autoyast

In an effort to help automate OS installation, I had set up a Preboot Execution Environment (PXE) server.

"The Preboot eXecution Environment (PXE, also known as Pre-Execution Environment, or 'pixie') is an environment to boot computers using a network interface independently of available data storage devices (like hard disks) or installed operating systems."

Environment: SLES 11

I have already discussed how PXE works in my earlier posts, where I set up a PXE environment for kick-starting the Red Hat/CentOS flavors. If the reader is interested in how PXE is configured on Red Hat/CentOS - click here

Change Plan:

1. Create an ISO from the DVD installation media.
2. Mount the ISO permanently (/etc/fstab) on a mount point that is served over HTTP, instead of extracting the images; this is more efficient in storage utilization (see the fstab sketch after this list).
3. Add a software repository for the web-server/ISO image which you have created.
4. Install packages such as TFTP, DHCP, Apache, and SYSLINUX if they are not installed by default.
5. Modify the TFTP and DHCP configurations so that IP addresses are leased according to the environment in which you are building your enterprise servers.
6. Power on the destination host and boot from the LAN: the NIC sends a request to the DHCP server, which in turn provides information such as IP, subnet, and gateway, and additionally provides the TFTP location from which to fetch the boot image.
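
For step 2, a loop-mount entry along these lines keeps the media available across reboots (a sketch; the ISO path is an assumption, adjust it to wherever you stored the image):

/isos/SLES-11-SP3-DVD-x86_64.iso  /srv/www/htdocs/sles/11/x86_64  iso9660  loop,ro  0 0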

I assume the reader knows how to create an ISO image and mount it permanently, and I will also skip the package installations.

I will provide the configuration details along with screenshots, which could be helpful in case you are configuring from "YAST".

Executions :

I mounted my ISO image on /srv/www/htdocs/sles/11/x86_64 and added it to my repositories as shown below:

Repository Additions :

(Yast -> Software Repositories -> Add -> HTTP -> Server and Directory )


 Repository Name
 sles11sp3 
                           (x) Edit Parts of the URL  
  ┌Protocol────────────────────────────────────────
  │            ( ) FTP            (x) HTTP            
  └─────────────────────────────────────────────
 Server Name                                          
 192.168.56.116 
 Directory on Server
 /sles/11/x86_64 
  ┌Authentication────────────────────────────────────
  │[x] Anonymous                                      
  │User Name                                          
  │ 
  │Password                                           
  │ 
  └─────────────────────────────────────────────
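
The same repository can also be added from the command line with zypper, as an alternative to the YaST dialog (using the values above):

# zypper ar http://192.168.56.116/sles/11/x86_64 sles11sp3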

TFTP Enable/Configurations :

Install and enable TFTP, and set the boot image directory (/tftpboot), as below:

(Yast -> Network Services -> TFTP Server )



  ( ) Disable
  (x) Enable

  Boot Image Directory

  /tftpboot                  [Browse...]

  [ ] Open Port in Firewall  [Firewall Details...]
  Firewall is disabled


                     [View Log]
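
Behind the scenes, YaST enables TFTP through xinetd; the resulting /etc/xinetd.d/tftp looks roughly like this (a sketch; the exact file YaST writes may differ between releases):

service tftp
{
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /tftpboot
        disable         = no
}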


DHCP configurations :

Once the DHCP server is installed, use the DHCP server wizard (Yast -> Network Services -> DHCP Server).

Pic 1 :

Domain Name

Primary Name server IP
192.168.56.116


                                            [ Next ]

Pic 2 :


IP Address Range
First IP Address           Last IP Address
192.168.56.175             192.168.56.180


                                             [ Next ]

Pic 3 :

Service start
[X] When Booting
[ ] Manually


Pic 4: 

Global Options                        

    ┌────────────────────────────────────────────────────
    │Option                               │Value                    
    │ddns-update-style                    │none                     
    │ddns-updates                         │Off                      
    │authoritative                        │On                       
    │log-facility                         │local7                   
    │default-lease-time                   │14400                    
    │option domain-name                   │"suselnx.com"            
    │option domain-name-servers           │192.168.56.116           
                                                     
Pic 5 :

Subnet Configuration                                        

    Network Address                                 Network Mask
    192.168.56.0                                     255.255.255.0  

    ┌──────────────────────────────────────────────────────────
    │Option          │Value                                                      
    │range           │192.168.56.175 192.168.56.180                 
    │next-server     │192.168.56.116                                        
    │filename        │"pxelinux.0"                                    
    │option routers  │192.168.56.1              
    

                                                    Click OK and then finish
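
The wizard writes these settings to /etc/dhcpd.conf; reconstructed from the values in the screens above, the relevant part looks roughly like this (a sketch, not a verbatim dump):

ddns-update-style none;
ddns-updates off;
authoritative;
log-facility local7;
default-lease-time 14400;
option domain-name "suselnx.com";
option domain-name-servers 192.168.56.116;

subnet 192.168.56.0 netmask 255.255.255.0 {
    range 192.168.56.175 192.168.56.180;
    next-server 192.168.56.116;
    filename "pxelinux.0";
    option routers 192.168.56.1;
}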

- Create the directory structure for the TFTP server:

mkdir -p /tftpboot/pxelinux.cfg
mkdir -p /tftpboot/sles/11/x86_64

- Copy necessary files for boot to the TFTP server directory structure:

# cd /srv/www/htdocs/sles/11/x86_64/boot/x86_64/loader/
# cp linux initrd message biostest memtest /tftpboot/sles/11/x86_64/
# cp /usr/share/syslinux/pxelinux.0 /tftpboot/
# cp /usr/share/syslinux/menu.c32 /tftpboot/

- Create a default menu as below :

#  cat /tftpboot/pxelinux.cfg/default 
default menu.c32
prompt 0
timeout 100

LABEL sles11sp3
MENU LABEL SLES 11 SP3 x86_64
KERNEL sles/11/x86_64/linux
APPEND initrd=sles/11/x86_64/initrd splash=silent showopts install=http://192.168.56.116/sles/11/x86_64 ramdisk_size=65536 

- Below is the skeleton of our configured TFTP server.


# ls -lar /tftpboot/*

-rw-r--r-- 1 root root 16462 Jul 24 18:14 /tftpboot/pxelinux.0
-rw-r--r-- 1 root root 57140 Jul 24 18:14 /tftpboot/menu.c32

/tftpboot/sles:
total 12
drwxr-xr-x 3 root root 4096 Jul 24 18:23 11
drwxr-xr-x 4 root root 4096 Jul 24 18:23 ..
drwxr-xr-x 3 root root 4096 Jul 24 18:23 .

/tftpboot/pxelinux.cfg:
total 12
-rw-r--r-- 1 root root  669 Jul 25 11:07 default
drwxr-xr-x 4 root root 4096 Jul 24 18:23 ..
drwxr-xr-x 2 root root 4096 Jul 25 11:07 .
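
Before booting a client, you can verify that the TFTP server answers (assuming the tftp client is installed; the -c syntax below is tftp-hpa's):

# tftp 192.168.56.116 -c get pxelinux.0

A successful download confirms both the service and any firewall settings.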

- On BIOS boot, press F12 and select LAN to continue booting from the network.

Hence we can conclude that the PXE installation is successful. In further posts I will configure an AutoYaST file for PXE, which will fully automate SLES installation.
Thank you for reading and re-sharing.

Tuesday 15 July 2014

Storage replication with DRBD

Objective: Storage replication with DRBD

Environment : CentOS 6.5 (32-bit)


DRBD Version : 8.3.16

Introduction:
In this article, I am using DRBD (Distributed Replicated Block Device), a replicated storage solution that mirrors the content of block devices (hard disks) between servers. Not everyone can afford network-attached storage, but the data still needs to be kept in sync; DRBD can be thought of as network-based RAID-1.

DRBD's position within the Linux I/O stack (diagram omitted).

Below are some of the basic requirements.

- Two disks, preferably of the same size (/dev/sdb on both nodes)
- Networking between the machines (drbd-node1 & drbd-node2)
- Working DNS resolution
- NTP-synchronized time on both nodes

Install DRBD packages :

drbd-node1# yum install -y  drbd83-utils kmod-drbd83
drbd-node2# yum install -y  drbd83-utils kmod-drbd83

Load the DRBD module on both nodes, either by rebooting or by running:

ALL# /sbin/modprobe drbd
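
To confirm the module is loaded (a quick check):

ALL# lsmod | grep drbd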

Partition the disk:
drbd-node1# fdisk /dev/sdb
drbd-node2# fdisk /dev/sdb
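
Any single partition spanning the disk will do (the resource file below uses /dev/sdb1). A non-interactive sketch, assuming an empty /dev/sdb and accepting the fdisk defaults; it destroys existing data, and the interactive session is the safer route:

ALL# echo -e "n\np\n1\n\n\nw" | fdisk /dev/sdb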

Create the Distributed Replicated Block Device resource file.

Readers should adjust the host names, disks, and IP addresses below to match their own environment.


drbd-node1# cat /etc/drbd.d/drbdcluster.res
resource drbdcluster {
  startup {
    wfc-timeout 30;
    outdated-wfc-timeout 20;
    degr-wfc-timeout 30;
  }
  net {
    cram-hmac-alg sha1;
    shared-secret sync_disk;
  }
  syncer {
    rate 10M;
    al-extents 257;
    on-no-data-accessible io-error;
  }
  on drbd-node1 {
    device /dev/drbd0;
    disk /dev/sdb1;
    address 192.168.1.XXX:7788;
    flexible-meta-disk internal;
  }
  on drbd-node2 {
    device /dev/drbd0;
    disk /dev/sdb1;
    address 192.168.1.YYY:7788;
    meta-disk internal;
  }
}
drbd-node1#

Copy the DRBD configuration to the secondary node (drbd-node2):

drbd-node1# scp /etc/drbd.d/drbdcluster.res root@192.168.1.YYY:/etc/drbd.d/drbdcluster.res
drbd-node1#

Initialize DRBD on both nodes and start the service (drbd-node1 & drbd-node2):

ALL# drbdadm create-md drbdcluster
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success

ALL# service drbd start
Starting DRBD resources: [ d(drbdcluster) s(drbdcluster) n(drbdcluster) ]........

- Since both disks contain garbage at this point, we need to tell DRBD which node's data set to use as the primary copy.

drbd-node1# drbdadm -- --overwrite-data-of-peer primary drbdcluster
drbd-node1#

- The device starts an initial sync; wait until it completes.

drbd-node1# cat /proc/drbd
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build32R6, 2013-09-27 15:59:12
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:123904 nr:0 dw:0 dr:124568 al:0 bm:7 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:923580
[=>..................] sync'ed: 12.2% (923580/1047484)K
finish: 0:01:29 speed: 10,324 (10,324) K/sec
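
You can watch the progress until the sync completes (assuming 'watch' is available):

drbd-node1# watch -n2 cat /proc/drbd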

Create a file system on the device and populate some data.

drbd-node1# mkfs.ext4 /dev/drbd0 
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
65536 inodes, 261871 blocks
13093 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
32768, 98304, 163840, 229376

Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
drbd-node1#

drbd-node1# mkdir -p /data
drbd-node1# mount /dev/drbd0 /data
drbd-node1# touch /data/file1
drbd-node1#

You do not need to mount the disk on the secondary machine; all data written to /data is synced to the secondary server.

To verify this, unmount /data on drbd-node1, demote drbd-node1 to secondary, promote drbd-node2 to primary, and mount /data on drbd-node2; you will see the same contents of /data.

drbd-node1# umount /data
drbd-node1# drbdadm secondary drbdcluster
drbd-node2# drbdadm primary drbdcluster
drbd-node2# mount /dev/drbd0 /data

We have successfully set up storage replication with DRBD.

As a further improvement, now that DRBD is functioning, I will configure a cluster with the file system as a resource. In addition to the Filesystem definition, we also need to tell the cluster where the resource can run (only on the DRBD Primary) and when it is allowed to start (after the Primary has been promoted).

I would publish an article in near future for the same.

Saturday 5 July 2014

Identify Open Files #Linux

System administrators often ask how to identify open files in a Linux environment.

Here we will identify the open files.

Environment : SuSE 11 

For this exercise we use the /var file system. Below we see that /var is its own file system and is 100% full; let's check whether any processes are holding files open under "/var".

linux:~ # df -h /var
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/vg00-lvvar 1008M 1008M     0 100% /var
linux:~ # 

linux:~ # fuser -c /var 2>/dev/null
  1570  1585  1603  1691  1694  2626  2628  2663  3127  3142  3232  3257  3258  3299  3300  3301  3328  3349  3350  3351  3352  3353  3354  3486  3517  3518  3521  3525  3526  3528  3531  3540  3541  3549  3560  3563  3596  3599  5611  5614  6883
linux:~ #

Since we now know that there are open files under "/var", let's see which particular files are open. We can do this by running a "for" loop over the output of 'fuser', using each PID as part of a "/proc" path that we list with 'ls'.


linux:~ # for i in `fuser -c /var 2>/dev/null` ; do echo "${i}:  `cat /proc/${i}/cmdline`" ; ls -ld /proc/${i}/fd/* | awk '/\/var/ {print "\t"$NF}' ; done

1570:  /sbin/acpid
1585:  /bin/dbus-daemon--system
1603:  /sbin/syslog-ng
/var/log/mail
/var/log/acpid
/var/log/warn
/var/log/mail.info
.
.
.
.
.
.
.
3560:  sshd: root@pts/0

3563:  -bash
3596:  sshd: root@pts/1
3599:  -bash
(deleted)
5611:  sshd: root@pts/2
5614:  -bash
6883:  /usr/lib/gdm/gdm-session-worker
/var/log/gdm/:0-slave.log
/var/log/gdm/:0-slave.log
cat: /proc/6928/cmdline: No such file or directory
6928:  
ls: cannot access /proc/6928/fd/*: No such file or directory
linux:~ # 

Of interest, PID 3599 is holding an open file: a subsequent 'ls' on '/proc/3599/fd/*' shows that its file descriptor 1 (stdout) is writing to a now-deleted file.


linux:/var # ls -ld /proc/3599/fd/* | grep /var
l-wx------ 1 root root 64 Jul  5 10:47 /proc/3599/fd/1 -> /var/tmp/openfile.txt (deleted)

linux:/var # 

linux:/var # ls -l /var/tmp/openfile.txt 
ls: cannot access /var/tmp/openfile.txt: No such file or directory
linux:/var # 

By killing or restarting the offending process, we can reclaim the space on the disk.
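
If the process cannot be killed or restarted, a common alternative (use with care) is to truncate the deleted file through its /proc file descriptor entry, which frees the space while leaving the process running:

linux:/var # > /proc/3599/fd/1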

You can also find open deleted files using 'lsof':


linux:/var # for u in `fuser -c /var 2>/dev/null`;do lsof -p $u | awk '{ if ( $7 > 100000000 ) print $0 }' ; done | grep del

bash    3599 root    1w   REG  253,7 924344320  49159 /var/tmp/openfile.txt (deleted)
linux:/var # 
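
A shorter alternative, assuming your lsof supports it, is the +L1 flag, which lists open files with a link count below one (i.e. deleted), here restricted to the /var file system:

linux:/var # lsof +L1 /var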

linux:~ # kill -9 3599
linux:~ #

linux:~ # df -h /var
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/vg00-lvvar 1008M  126M  832M  14% /var
linux:~ # 

The open file has been identified and removed, and the space reclaimed successfully.