
Storage CentOS Server (Phoenix)

Posted on July 23, 2013 by Vitaly in CentOS, Linux, Networking, Tech, Virtualization

The primary storage server (I don't really want to call it a SAN, since it's just one server rather than a filer with added modules) has the following specs:


Dell R510 Server 1 (Phoenix)

  • Primary Purpose: Storage
  • CPU: Single E5520 Xeon CPU (quad core)
  • RAM: 8GB DDR3 ECC Registered
  • First RAIDset: Two 160GB SATA drives in RAID1 (Operating System)
  • Second RAIDset: Three 2TB SATA drives in RAID5 (NFS Share)
  • Third RAIDset: Three 2TB SATA drives in RAID5 (iSCSI Share)
  • Ethernet: Bonded quad NIC (bond0) for storage network
  • Ethernet: Bonded dual NIC (bond1) for management network
  • Ethernet: Single NIC for Dell iDRAC/IPMI

The volumes on the server are set up in the following way:

Operating system – CentOS 6.4 x64
First drive (/dev/sda) – a RAID1 mirror of the two 160GB drives
Primary partition (/dev/sda1) – the /boot partition, which is just 100MB
Second partition (/dev/sda2) – the swap partition, which is 8GB (matching the RAM)
Third partition (/dev/sda3) – the / partition, where the operating system is loaded
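To double-check that layout after the install, a couple of standard commands do the job (the output will differ from machine to machine):

fdisk -l /dev/sda # Lists the partitions on the first drive (boot, swap, and root)
swapon -s # Confirms the swap partition is active and shows its size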

NFS RAIDset (/dev/sdb)
Created via:

parted /dev/sdb
(parted) mklabel gpt
(parted) unit TB
(parted) mkpart primary 0.00TB 4.00TB
(parted) print
~~~
Model: DELL PERC 6/i (scsi)

Disk /dev/sdb: 4397GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  4397GB  4397GB               primary  lvm
~~~

iSCSI RAIDset (/dev/sdc)

Created via:

parted /dev/sdc
(parted) mklabel gpt
(parted) unit TB
(parted) mkpart primary 0.00TB 4.00TB
(parted) print
~~~
Model: DELL PERC 6/i (scsi)

Disk /dev/sdc: 4397GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  4397GB  4397GB               primary  lvm
~~~

The steps above create the two partitions we will be using for storage. The next step is to create physical volumes, which is done via pvcreate /dev/sdb1 and pvcreate /dev/sdc1. I can then create volume groups (these can span other drives as well; if you add more drives later, say another RAID array, you can grow a volume group to include it). In my case, I will be creating two volume groups, each with a different purpose:

vgcreate vg_nfs_share /dev/sdb1 #Create a volume group called vg_nfs_share using all of device sdb1
vgcreate vg_iscsi_share /dev/sdc1 #Create volume group called vg_iscsi_share using all of device sdc1
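To confirm the physical volumes and volume groups came out as expected, the pvs and vgs commands give a quick one-line-per-item summary (the exact names and sizes will reflect your own setup):

pvs # Shows each physical volume, the volume group it belongs to, and its size
vgs # Shows each volume group with its total and free space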

The reason I use the vg_ prefix above is that it's easy to identify later. If you use other names, you have to remember what you named what (or use the vgscan command, but for me the nomenclature makes things easy to remember as long as I use it consistently). You can use volume groups to span multiple physical volumes; for example, you could have run vgcreate vg_nfs_share /dev/sdb1 /dev/sdc1, which would have combined both volumes into one large share, which is how some of the NAS and SAN vendors operate. A sketch of growing an existing volume group is shown below.
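If I add another RAID array down the road, the existing volume group can be grown rather than rebuilt. A minimal sketch, assuming the new array shows up as a hypothetical /dev/sde with a single partition on it:

pvcreate /dev/sde1 # Turn the new partition into a physical volume (hypothetical device, for illustration only)
vgextend vg_nfs_share /dev/sde1 # Add it to the existing volume group, growing the space available for logical volumes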

Keep in mind that nothing has been formatted yet (we are just creating the storage groups and volume structure). The next step is to create a logical volume that we can actually format to ext4 and use. I will create one logical volume for storing NFS images, inside the volume group I created earlier called vg_nfs_share:

lvcreate -L 1000g -n images vg_nfs_share # This creates a 1TB logical volume called “images” in the vg_nfs_share volume group

Similarly, I will create a logical volume that will be used to store ISO images (so I can load an OS onto the virtual machines I will create):

lvcreate -L 60g -n ISO vg_nfs_share # This creates a 60GB logical volume called ISO in the vg_nfs_share volume group

Although jumping the gun a little bit, I will create one more logical volume called Punisher in my iSCSI volume group (I will explain why later when I discuss this machine):

lvcreate -L 80G -n punisher vg_iscsi_share # This creates an 80GB logical volume called punisher inside the volume group vg_iscsi_share

You can then take a look at everything you have created from the logical volume side by running the lvdisplay command:

[root@Phoenix ~]# lvdisplay
--- Logical volume ---
LV Path /dev/vg_iscsi_share/punisher
LV Name punisher
VG Name vg_iscsi_share
LV UUID LpUq0Q-Np0I-mtZ9-KURe-3kmg-unV0-GQ5agF
LV Write Access read/write
LV Creation host, time Phoenix.storage, 2013-05-07 16:22:46 -0400
LV Status available
# open 1
LV Size 80.00 GiB
Current LE 20480
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2

--- Logical volume ---
LV Path /dev/vg_nfs_share/images
LV Name images
VG Name vg_nfs_share
LV UUID fpG4bP-p6qV-ZVLU-oK5G-61m2-og1x-8OQpLq
LV Write Access read/write
LV Creation host, time Phoenix.storage, 2013-05-03 05:18:40 -0400
LV Status available
# open 1
LV Size 1000.00 GiB
Current LE 256000
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3

--- Logical volume ---
LV Path /dev/vg_nfs_share/ISO
LV Name ISO
VG Name vg_nfs_share
LV UUID t8vmYQ-x4Fl-005X-N0Ad-W1Co-Ate9-kOf53p
LV Write Access read/write
LV Creation host, time Phoenix.storage, 2013-05-06 07:06:35 -0400
LV Status available
# open 1
LV Size 60.00 GiB
Current LE 15360
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4
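For a more compact view than lvdisplay, the lvs command summarizes the same volumes one per line (output will vary with your setup):

lvs # One-line summary per logical volume: name, volume group, attributes, and size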

The last step is to put the ext4 filesystem on the images and ISO logical volumes we created. The Punisher volume will be used as an iSCSI target and does not need to be formatted here, since it will be exported as a raw block device (more on that when discussing the Punisher virtual machine). The commands to format the logical volumes with ext4 are:

mkfs.ext4 /dev/vg_nfs_share/images
mkfs.ext4 /dev/vg_nfs_share/ISO

Lastly, so I can browse what's inside these shares, I will add them to my /etc/fstab (file system table) file so they are mounted on bootup. I create three folders using the mkdir command: one called /virtualmachines, and inside that folder one called ISO and another called nfsimages (the commands are sketched after the fstab entries below). In my /etc/fstab file, I add these lines:

/dev/mapper/vg_nfs_share-ISO /virtualmachines/ISO ext4 defaults 0 0
/dev/mapper/vg_nfs_share-images /virtualmachines/nfsimages ext4 defaults 0 0
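For reference, a quick sketch of the mount-point creation and an easy way to test the new fstab entries without rebooting:

mkdir -p /virtualmachines/ISO /virtualmachines/nfsimages # -p creates /virtualmachines and both subfolders in one shot
mount -a # Mounts everything listed in /etc/fstab that is not already mounted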

Now I can just browse the /virtualmachines folder and see what's mounted inside. To check what is mounted beyond the default operating system config, just run the df -h command:

[root@Phoenix mapper]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_phoenix-lv_root
127G 1.3G 119G 2% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
/dev/sda1 485M 52M 408M 12% /boot
/dev/mapper/vg_phoenix-lv_home
12G 158M 11G 2% /home
/dev/mapper/vg_nfs_share-ISO
60G 22G 38G 37% /virtualmachines/ISO
/dev/mapper/vg_nfs_share-images
985G 25G 960G 3% /virtualmachines/nfsimages
/dev/sdd1 2.7T 861G 1.9T 32% /backups

I'll include a picture that best describes physical volumes, volume groups, and logical volumes here once I get a couple of minutes to put everything together.

For the networking side, I have the following configuration:
bond0 (NICs p1p1 to p1p4, which is all four NICs on my quad gigabit card) – 10.0.0.2/24
bond1 (NICs em1 and em2, the two NICs that come onboard with the Dell) – 192.168.2.2/24
IPMI port (the management port on the Dell) – 192.168.2.202/24

I'll go into the networking side of things and how I set all this up in the Networking section of the home.lab project; a rough sketch of a bonded interface config is included below. The above IPs are the most important part. The bond0 address has no gateway, since that subnet is just the storage network; all the hypervisors have one NIC on it to reach their NFS and iSCSI shares. The bond1 interface is for managing Phoenix when I need to get into it. The IPMI port is for when the machine is frozen and I need to do a hard reboot (generally the IPMI port stays available even if the OS on the machine freezes).
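As a rough sketch of how a bonded interface is defined on CentOS 6 (the bonding mode and exact file contents here are assumptions for illustration, not my actual config; the full walkthrough will be in the Networking section):

# /etc/sysconfig/network-scripts/ifcfg-bond0 (the bond master; mode=balance-alb is an assumed example)
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.0.0.2
NETMASK=255.255.255.0
BONDING_OPTS="mode=balance-alb miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-p1p1 (one slave; repeat for p1p2 through p1p4)
DEVICE=p1p1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes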
