Sunday 26 January 2014

Memcached

If you've read anything about scaling large websites, you've probably heard about memcached. memcached is a high-performance, distributed memory object caching system. Here at Facebook, we're likely the world's largest user of memcached. We use memcached to alleviate database load. memcached is already fast, but we need it to be faster and more efficient than most installations. We use more than 800 servers supplying over 28 terabytes of memory to our users. Over the past year as Facebook's popularity has skyrocketed, we've run into a number of scaling issues. This ever increasing demand has required us to make modifications to both our operating system and memcached to achieve the performance that provides the best possible experience for our users. 

Because we have thousands and thousands of computers, each running a hundred or more Apache processes, we end up with hundreds of thousands of TCP connections open to our memcached processes. The connections themselves are not a big problem, but the way memcached allocates memory for each TCP connection is. memcached uses a per-connection buffer to read and write data over the network. When you get into hundreds of thousands of connections, this adds up to gigabytes of memory that could be better used to store user data. To reclaim this memory for user data, we implemented a per-thread shared connection buffer pool for TCP and UDP sockets. This change enabled us to reclaim multiple gigabytes of memory per server. 

Although we improved the memory efficiency with TCP, we moved to UDP for get operations to reduce network traffic and implement application-level flow control for multi-gets (gets of hundreds of keys in parallel). We discovered that under load on Linux, UDP performance was downright horrible. This is caused by considerable lock contention on the UDP socket lock when transmitting through a single socket from multiple threads. Fixing the kernel by breaking up the lock is not easy. Instead, we used separate UDP sockets for transmitting replies (with one of these reply sockets per thread). With this change, we were able to deploy UDP without compromising performance on the backend. 
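For context, a multi-get in the plain memcached text protocol is simply one request that names many keys. A minimal hedged sketch over TCP with telnet is shown below; the host, port and key names are illustrative assumptions, and this shows the generic text protocol rather than the UDP path described here.
$ telnet 127.0.0.1 11211
get user:1 user:2 user:3
The server answers with one VALUE <key> <flags> <bytes> block per key it holds, followed by a single END line.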

Another issue we saw in Linux is that under load, one core would get saturated doing network soft interrupt handling, throttling network IO. In Linux, a network interrupt is delivered to one of the cores, and consequently all receive soft interrupt network processing happens on that one core. Additionally, we saw an excessively high rate of interrupts for certain network cards. We solved both of these by introducing “opportunistic” polling of the network interfaces. In this model, we do a combination of interrupt-driven and polling-driven network IO. We poll the network interface any time we enter the network driver (typically for transmitting a packet) and from the process scheduler’s idle loop. In addition, we still take interrupts (to keep latencies bounded) but far fewer of them (typically by setting interrupt coalescing thresholds aggressively). Since we do network transmission on every core and since we poll for network IO from the scheduler’s idle loop, we distribute network processing evenly across all cores. 

Finally, as we started deploying 8-core machines, our testing uncovered new bottlenecks. First, memcached's stat collection relied on a global lock. A nuisance with 4 cores, the lock accounted for 20-30% of CPU usage with 8 cores. We eliminated this bottleneck by moving stats collection per-thread and aggregating results on demand. Second, we noticed that as we increased the number of threads transmitting UDP packets, performance decreased. We found significant contention on the lock that protects each network device’s transmit queue. Packets are enqueued for transmission and dequeued by the device driver. This queue is managed by Linux’s “netdevice” layer that sits in between IP and the device drivers. Packets are added and removed from the queue one at a time, causing significant contention. One of our engineers changed the dequeue algorithm to batch dequeues for transmit, drop the queue lock, and then transmit the batched packets. This change amortizes the cost of the lock acquisition over many packets and reduces lock contention significantly, allowing us to scale memcached to 8 threads on an 8-core system. 

Since we’ve made all these changes, we have been able to scale memcached to handle 200,000 UDP requests per second with an average latency of 173 microseconds. The total throughput achieved is 300,000 UDP requests/s, but the latency at that request rate is too high to be useful in our system. This is an amazing increase from 50,000 UDP requests/s using the stock version of Linux and memcached.

We’re hoping to get our changes integrated into the official memcached repository soon, but until that happens, we’ve decided to release all our changes to memcached on github.

Thursday 2 January 2014

Install Cacti (Network Monitoring) on RHEL/CentOS 6.3/5.8 and Fedora 17-12

Cacti is an open-source, web-based network and system monitoring and graphing solution for IT businesses. Cacti enables a user to poll services at regular intervals and graph the resulting data using RRDtool. It is generally used to graph time-series metrics such as network bandwidth utilization, CPU load, running processes, disk space etc.
In this how-to we are going to show you how to install and set up a complete network monitoring application called Cacti, using the Net-SNMP tool, on RHEL 6.3/6.2/6.1/6/5.8, CentOS 6.3/6.2/6.1/6/5.8 and Fedora 17,16,15,14,13,12 systems using the YUM package manager.

Cacti Required Packages

Cacti requires the following packages to be installed on your Linux operating system, such as RHEL / CentOS / Fedora.
  1. Apache : A Web server to display network graphs created by PHP and RRDTool.
  2. MySQL : A Database server to store cacti information.
  3. PHP : A script module to create graphs using RRDTool.
  4. PHP-SNMP : A PHP extension for SNMP to access data.
  5. NET-SNMP : SNMP (Simple Network Management Protocol) is used to manage the network.
  6. RRDTool : A database tool to manage and retrieve time-series data like CPU load, network bandwidth etc.

Installing Cacti Required Packages on RHEL / CentOS / Fedora

First, we need to install the following dependency packages one-by-one using the YUM package manager tool.

Install Apache

# yum install httpd httpd-devel

Install MySQL

# yum install mysql mysql-server

Install PHP

# yum install php-mysql php-pear php-common php-gd php-devel php php-mbstring php-cli

Install PHP-SNMP

# yum install php-snmp

Install NET-SNMP

# yum install net-snmp-utils net-snmp-libs php-pear-Net-SMTP

Install RRDTool

# yum install rrdtool

Starting Apache, MySQL and SNMP Services

Once you’ve installed all the required software for the Cacti installation, let's start the services one-by-one using the following commands.
Starting Apache
# /etc/init.d/httpd start
OR
# service httpd start
Starting MySQL
# /etc/init.d/mysqld start
OR
# service mysqld start
Starting SNMP
# /etc/init.d/snmpd start
OR
# service snmpd start
Configure Start-up Links
Configuring Apache, MySQL and SNMP services to start on boot.
# /sbin/chkconfig --levels 345 httpd on
# /sbin/chkconfig --levels 345 mysqld on
# /sbin/chkconfig --levels 345 snmpd on

Install Cacti on RHEL / CentOS / Fedora

Here, you need to install and enable the EPEL repository (a hedged sketch is shown below). Once you’ve enabled the repository, type the following command to install the Cacti application.
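A minimal sketch for enabling EPEL on a RHEL/CentOS 6 x86_64 system; the epel-release version and URL are assumptions that may have changed, so check the EPEL project page for the current package.
# rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm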
# yum install cacti

Sample Output:

Loaded plugins: fastestmirror, refresh-packagekit
Resolving Dependencies
--> Running transaction check
---> Package cacti.noarch 0:0.8.8a-2.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package    Arch  Version    Repository  Size
================================================================================
Installing:
 cacti                  noarch  0.8.8a-2.el6  epel            2.0 M

Transaction Summary
================================================================================
Install       1 Package(s)

Total download size: 2.0 M
Installed size: 5.4 M
Is this ok [y/N]: y
Downloading Packages:
cacti-0.8.8a-2.el6.noarch.rpm                           | 2.0 MB     00:40
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : cacti-0.8.8a-2.el6.noarch      1/1
  Verifying  : cacti-0.8.8a-2.el6.noarch      1/1

Installed:
  cacti.noarch 0:0.8.8a-2.el6

Complete!

Configuring MySQL Server for Cacti Installation

We need to configure MySQL for Cacti. To do this, we will set a password for our newly installed MySQL server and then create the Cacti database with the user cacti. If your MySQL server is already password protected, then you don't need to set it again.

Set MySQL Password

To set a new password for the MySQL server, use the following command. (Note: this is for a new MySQL installation only.)
# mysqladmin -u root password YOUR-PASSWORD-HERE

Create MySQL Cacti Database

Log in to the MySQL server with the newly created password, then create the Cacti database with the user cacti and set a password for it.
# mysql -u root -p
mysql> create database cacti;
mysql> GRANT ALL ON cacti.* TO cacti@localhost IDENTIFIED BY 'your-password-here';
mysql> FLUSH privileges;
mysql> quit;

Install Cacti Tables to MySQL

Find the database file path using the RPM command; to install the Cacti tables into the newly created Cacti database, use the following command.
# rpm -ql cacti | grep cacti.sql
Sample Output:
/usr/share/doc/cacti-0.8.8a/cacti.sql
Now that we have the location of the cacti.sql file, type the following command to install the tables; you will need to enter the cacti user's password.
mysql -u cacti -p cacti < /usr/share/doc/cacti-0.8.8a/cacti.sql

Configure MySQL settings for Cacti

Open the file called /etc/cacti/db.php with any editor.
# vi /etc/cacti/db.php
Make the following changes and save the file. Make sure you set password correctly.
/* make sure these values reflect your actual database/host/user/password */
$database_type = "mysql";
$database_default = "cacti";
$database_hostname = "localhost";
$database_username = "cacti";
$database_password = "your-password-here";
$database_port = "3306";
$database_ssl = false;

Configuring Apache Server for Cacti Installation

Open file called /etc/httpd/conf.d/cacti.conf with your choice of editor.
# vi /etc/httpd/conf.d/cacti.conf
You need to enable access to the Cacti application for your local network or at the per-IP level. For example, we've enabled access for our local LAN network 172.16.16.0/20; in your case it will be different.
Alias /cacti    /usr/share/cacti
 
<Directory /usr/share/cacti/>
        Order Deny,Allow
        Deny from all
        Allow from 172.16.16.0/20
</Directory>
Finally, restart the Apache service.
# /etc/init.d/httpd restart
OR
# service httpd restart

Setting Cron for Cacti

Open file /etc/cron.d/cacti.
# vi /etc/cron.d/cacti
Uncomment the following line. The poller.php script runs every 5 minutes and collects data for known hosts, which the Cacti application uses to display graphs.
#*/5 * * * *    cacti   /usr/bin/php /usr/share/cacti/poller.php > /dev/null 2>&1

Running Cacti Installer Setup

Finally, Cacti is ready; just go to http://YOUR-IP-HERE/cacti/ and follow the installer instructions through the following screens. Click the Next button.
Please choose installation Type as “New Install“.
Make sure all the following values are correct before continuing. Click Finish button.
On the Cacti login screen, enter the username admin and the password admin.
Once you’ve entered username and password, it will ask you to enter a new password for cacti.
After logging in, you will see the Cacti console screen.

How to Create New Graphs

To create graphs, click on New Graphs –> select a Host –> select SNMP – Interface Statistics and select the graph type In/Out Bits. Click the Create button.
For more information and usage please visit the Cacti Page.

Sunday 30 June 2013

How to configure NIS Server in Linux

NIS, or Network Information Service, is a network service that allows authentication and login information to be stored on a centrally located server. This includes the username and password database for login authentication, the database of user groups, and the locations of home directories.

RHCE exam questions

One NIS domain named rhce is configured in your lab; the server is 192.168.0.254. The users nis1, nis2 and nis3 are created on the domain server. Make your system a member of the rhce domain, and make sure that when an NIS user logs in to your system, their home directory is mounted automatically. The home directory is shared on the server as /rhome/nis1.
The RHCE exam doesn't ask the candidate to configure an NIS server; it tests only the NIS client-side configuration, as you can see in the example question. But in this article we will configure both the server and the client side for testing purposes, so you can get more in-depth knowledge of the NIS server.

Configure NIS server

In this example we will configure an NIS server, and a user nis1 will log in from the client side.
For this example we are using two Linux systems, one server and one client. To complete the prerequisites of the NIS server, follow this link:
Network configuration in Linux
  • A linux server with ip address 192.168.0.254 and hostname Server
  • A linux client with ip address 192.168.0.1 and hostname Client1
  • Updated /etc/hosts file on both Linux systems
  • Running portmap and xinetd services
  • Firewall should be off on server
We suggest you review that article before starting the configuration of the NIS server. Once you have completed the necessary steps, follow this guide. Seven RPMs are required to configure the NIS server: ypserv, cach, nfs, make, ypbind, portmap and xinetd. Check whether they are installed (a hedged check is sketched below) and install any that are missing.
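A hedged check for the main server packages (exact package names can vary between releases); install any that are reported as not installed:
# rpm -q ypserv ypbind yp-tools portmap xinetd make nfs-utils
# yum install ypserv ypbind yp-tools portmap xinetd make nfs-utils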
Now check the nfs, ypserv, yppasswdd, ypbind, portmap and xinetd services in the system services list; they should be on.
#setup
Select System services from the list
[*]portmap
[*]xinetd
[*]nfs
[*]ypserv
[*]yppasswdd
[*]ypbind
Now open the /etc/sysconfig/network file, set the hostname and the NIS domain name (a hedged sketch is shown below), and save the file.
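A hedged sketch of the relevant entries; the hostname matches the server name used in this article, and the NIS domain must be rhce:
NETWORKING=yes
HOSTNAME=server
NISDOMAIN=rhce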
Now create a user named nis1 with his home directory under /rhome and give it full permissions (a hedged sketch follows).
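A hedged sketch of the commands; the useradd options and the chmod mode used for “full permission” are assumptions:
# mkdir -p /rhome
# useradd -d /rhome/nis1 -m nis1
# chmod 777 /rhome/nis1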
Now open the /etc/exports file and share the /rhome/nis1 directory for the network (a hedged entry is sketched below).
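A hedged sketch of the export entry; the network range and options are assumptions based on the 192.168.0.0 lab network used in this article:
/rhome/nis1   192.168.0.0/24(rw,sync)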
save this with :wq and exit
Now open the /var/yp/Makefile file and locate line number 109 [ use the ESC + : + set nu command to show line numbers, or read our vi editor article to know more about vi command-line options ]. Remove the other entries from this line, keeping only passwd group hosts netid \ (a hedged sketch of the resulting line is shown below).
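A hedged sketch of how the resulting target line typically looks on RHEL/CentOS (the surrounding entries differ slightly between versions):
all:  passwd group hosts netid \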
save this with :wq and exit
Now restart these services
#service portmap restart
#service xinetd restart
#service nfs restart
#service ypserv restart
#service yppasswdd restart
Don't restart the ypbind service at this time, as we haven't updated our database yet
Now change directory to /var/yp and run the make command to create the database (as shown below).
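For reference, the commands are:
# cd /var/yp
# make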
Now update this database by running the ypinit command (a hedged sketch follows) [ first add the server and then add all client machines one by one; after adding, press CTRL+D to save and confirm by pressing y ].
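A hedged sketch; on 64-bit systems the script may live under /usr/lib64/yp instead of /usr/lib/yp:
# /usr/lib/yp/ypinit -m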
Now once again restart all these services; this time there should be no errors
#service portmap restart
#service xinetd restart
#service nfs restart
#service ypserv restart
#service yppasswdd restart
#service ypbind restart
Now set all these services to on with chkconfig so they come up after a restart
#chkconfig portmap on
#chkconfig xinetd on
#chkconfig nfs on
#chkconfig ypserv on
#chkconfig yppasswdd on
#chkconfig ypbind on

Client configuration

Before you start the client configuration, we suggest you check for proper connectivity between the server and the client. First try to log in to the NIS server via telnet. If you can successfully log in via telnet, then try to mount the /rhome/nis1 directory via the NFS server. If you get any errors with telnet or NFS, resolve them first. You can read our previous articles for configuration-related help.
To learn how to configure an NFS server, read:
How to configure nfs server in Linux
To learn how to configure a telnet server, read:
How to configure telnet server in Linux
Once you have successfully completed the necessary tests, start the configuration of the client side.
Two RPMs are required to configure the client: yp-tools and ypbind. Check whether they are installed (a hedged check is sketched below) and install them if necessary.
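A hedged check:
# rpm -q ypbind yp-tools
# yum install ypbind yp-tools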
Now open the /etc/sysconfig/network file and add the NIS domain name (a hedged sketch is shown below).
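A hedged sketch of the line to append:
NISDOMAIN=rhce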
save the file with :wq and exit
Now run the setup command and select Authentication configuration from the list.
#setup
Check the box for NIS and select Next, then set the domain name to rhce and the server to 192.168.0.254 and click OK.
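As a hedged, non-interactive alternative to the setup tool, the authconfig command can apply the same NIS settings (assuming it is available on the client):
# authconfig --enablenis --nisdomain=rhce --nisserver=192.168.0.254 --update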
No errors should occur here; if you see any error, recheck all the configuration.
Now open the /etc/auto.master file and, at the end of the file, add an entry for /rhome (a hedged sketch is shown below).
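A hedged sketch of the entry; the map file name and timeout value are assumptions:
/rhome   /etc/auto.misc   --timeout=60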
save the file with :wq and exit
Now open the /etc/auto.misc file and, at the end of the file, add an entry for the user nis1 (a hedged sketch is shown below).
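A hedged sketch of the map entry; the mount options are assumptions:
nis1   -rw,soft,intr   192.168.0.254:/rhome/nis1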
save the file with :wq and exit
Now restart the autofs and ypbind services (as shown below).
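For reference, the restart commands are:
# service autofs restart
# service ypbind restart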
Set these services to on via the chkconfig commands so they start after a reboot
#chkconfig autofs on
#chkconfig ypbind on
now restart the system
 #reboot -f 
Log in as the nis1 user on the client system; the home directory shared from the server should be mounted automatically.

Monday 1 April 2013

Linux Network bonding – setup guide

Linux network bonding is the creation of a single bonded interface by combining two or more Ethernet interfaces. This provides high availability for your network interface and offers a performance improvement. Bonding is the same as port trunking or teaming.

Bonding allows you to aggregate multiple ports into a single group, effectively combining the bandwidth into a single connection. Bonding also allows you to create multi-gigabit pipes to transport traffic through the highest-traffic areas of your network. For example, you can aggregate three one-megabit ports into a three-megabit trunk port; that is equivalent to having one interface with three megabits of speed.

The steps for bonding in Oracle Enterprise Linux and Red Hat Enterprise Linux are as follows.

Step 1.

Create the file ifcfg-bond0 with the IP address, netmask and gateway. Shown below is my test bonding config file.

$ cat /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
IPADDR=192.168.1.12
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
USERCTL=no
BOOTPROTO=none
ONBOOT=yes

Step 2.
Modify eth0, eth1 and eth2 configuration as shown below. Comment out, or remove the ip address, netmask, gateway and hardware address from each one of these files, since settings should only come from the ifcfg-bond0 file above. Make sure you add the MASTER and SLAVE configuration in these files.

$ cat /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
# Settings for Bond
MASTER=bond0
SLAVE=yes

$ cat /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
BOOTPROTO=none 
ONBOOT=yes
USERCTL=no
# Settings for bonding
MASTER=bond0
SLAVE=yes

$ cat /etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE=eth2
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes


Step 3.
Set the parameters for the bond0 bonding kernel module. Select the network bonding mode based on your need, documented at http://unixfoo.blogspot.com/2008/02/network-bonding-part-ii-modes-of.html. The modes are:
  • mode=0 (Balance Round Robin)
  • mode=1 (Active backup)
  • mode=2 (Balance XOR)
  • mode=3 (Broadcast)
  • mode=4 (802.3ad)
  • mode=5 (Balance TLB)
  • mode=6 (Balance ALB)



Add the following lines to /etc/modprobe.conf
# bonding commands
alias bond0 bonding
options bond0 mode=1 miimon=100

Step 4.

Load the bond driver module from the command prompt.

$ modprobe bonding

Step 5.

Restart the network, or restart the computer.

$ service network restart # Or restart computer

When the machine boots up check the proc settings.

$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.0.2 (March 23, 2006)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:13:72:80:62:f0


Look at ifconfig -a and check that your bond0 interface is active. You are done! For more details on the different modes of bonding, please refer to unixfoo’s modes of bonding.



To verify that failover bonding works (a hedged test sketch follows this list):
  • Do an ifdown eth0, then check /proc/net/bonding/bond0 and look at the “Currently Active Slave”.
  • Do a continuous ping to the bond0 IP address from a different machine and ifdown the active interface. The ping should not break.
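A hedged sketch of this failover test, assuming eth0 is the currently active slave and 192.168.1.12 is the bond0 address configured above:

$ ifdown eth0
$ grep "Currently Active Slave" /proc/net/bonding/bond0

Then, from a different machine, keep a ping running against the bond0 address; it should continue to get replies while the slave is down:

$ ping 192.168.1.12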

Modes of bonding


The steps for creating network bonding in Linux are available at http://unixfoo.blogspot.com/search/label/networking. RHEL bonding supports 7 possible "modes" for bonded interfaces. These modes determine the way in which traffic sent out of the bonded interface is actually dispersed over the real interfaces. Modes 0, 1, and 2 are by far the most commonly used among them.
  • Mode 0 (balance-rr)
    This mode transmits packets in a sequential order from the first available slave through the last. If two real interfaces are slaves in the bond and two packets arrive destined out of the bonded interface the first will be transmitted on the first slave and the second frame will be transmitted on the second slave. The third packet will be sent on the first and so on. This provides load balancing and fault tolerance.
  • Mode 1 (active-backup)
    This mode places one of the interfaces into a backup state and will only make it active if the link is lost by the active interface. Only one slave in the bond is active at any instant in time. A different slave becomes active only when the active slave fails. This mode provides fault tolerance.
  • Mode 2 (balance-xor)
    Transmits based on an XOR formula (source MAC address XOR'd with destination MAC address, modulo slave count). This selects the same slave for each destination MAC address and provides load balancing and fault tolerance.
  • Mode 3 (broadcast)
    This mode transmits everything on all slave interfaces. This mode is least used (only for specific purpose) and provides only fault tolerance.
  • Mode 4 (802.3ad)
    This mode is known as Dynamic Link Aggregation mode. It creates aggregation groups that share the same speed and duplex settings. This mode requires a switch that supports IEEE 802.3ad dynamic link aggregation.
  • Mode 5 (balance-tlb)
    This is called adaptive transmit load balancing. The outgoing traffic is distributed according to the current load and queue on each slave interface. Incoming traffic is received by the current slave.
  • Mode 6 (balance-alb)
    This is Adaptive load balancing mode. It includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP replies sent by the server on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, so that different clients use different hardware addresses for the server.

Monday 11 February 2013

Booting Process in Solaris

1. Boot prom phase
2. Boot program phase
3. Kernel initialization phase
4. Init initialization phase
Power on –> POST –> Boot device (sectors 1-15) –> ufs boot loader –> Kernel –> / file system –> /sbin/init –> /lib/svc/bin/svc.startd
1. Boot prom phase
When we power on the server it displays the banner. The banner includes the host ID, MAC address, PROM chip release & version, and physical memory size.
After displaying the banner it runs the POST (Power-On Self Test) and starts the boot program phase.
2. Boot program phase
Here it starts reading the boot program, which is stored in sectors 1-15 of the hard disk.
These sectors contain the primary boot program, which is responsible for loading the secondary boot program called the ufs boot loader.
3. Kernel initialization phase
The ufs boot loader loads the kernel into memory. After loading the kernel, it unmaps the ufs boot loader, loads the operating system (OS) modules, and starts mounting the root file system.
4. Init initialization phase
Here it starts the /sbin/init process, which invokes /lib/svc/bin/svc.startd, which is responsible for the following:
a. configuring all network devices
b. mounting all file systems
c. starting all network services
d. running rc scripts, which bring the machine to multi-user mode
NOTE:
In Solaris 10, svc.startd acts as a separate boot phase.
The common process that starts at boot time in all flavors of Unix is init. In Solaris, however, another process called swapper starts before init; its process ID is 0, but only up to Solaris 9. From Solaris 10 it is renamed to the sched (scheduler) process, keeping the same process ID.
Daemon
A daemon is a continuous process that runs in the background and provides a service as per client requests. The daemon responsible for starting and stopping services is init.d, but from Solaris 10 it is replaced with svc.startd.
Stand Alone Services
Services that start at boot time and stop at shutdown are called stand-alone services. Their scripts are stored under the /etc/init.d directory.
PORT NO
Every service has an address called a port number; these port numbers are stored in the /etc/services file. The /etc/services file and the service scripts under /etc/init.d are symbolically linked.
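For example, a service's port number can be looked up directly in /etc/services; a minimal sketch (the exact lines vary between systems):
# grep -w telnet /etc/services
telnet    23/tcp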