Thursday, July 29, 2010

Loop device information

You can see what is being used by a loop device with losetup:
# losetup /dev/loop0
/dev/loop0: [fd06]:234921356 (/linux/isos/backtrack.iso)

To detach an image from a loop device:
# losetup -d /dev/loop0    ## detach the image associated with loop0
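
For reference, here is a typical attach/inspect/detach cycle (a sketch: it assumes losetup -f reported /dev/loop1, and /mnt/iso is just an example mount point):

# losetup -f                                      ## print the first unused loop device
# losetup /dev/loop1 /linux/isos/backtrack.iso    ## attach the image to that device
# losetup /dev/loop1                              ## show what is attached
# mount /dev/loop1 /mnt/iso                       ## mount the attached device
# umount /mnt/iso
# losetup -d /dev/loop1                           ## detach when done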

It is possible to increase the number of available loop devices. Free
all loop devices, then add the following line to
/etc/modprobe.conf:
options loop max_loop=64

(maximum is 256)

Then, do: rmmod loop && modprobe loop

If you get an error that the module couldn't be removed, you still have
loop devices in use.

Newer kernels (2.6.21 or 2.6.22 onward) allocate loop devices
dynamically, so you only have to create the device nodes for them:
for ((i=8;i<64;i++)); do
[ -e /dev/loop$i ] || mknod -m 0600 /dev/loop$i b 7 $i
done
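
To verify the result afterwards (a quick sanity check; the exact device count will vary):

# ls /dev/loop* | wc -l    ## count the loop device nodes now present
# losetup -a               ## list the loop devices currently in use
# losetup -f               ## confirm a free device is still available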

Thursday, July 22, 2010

10 Steps to Configure tftpboot Server in UNIX / Linux (For installing Linux from Network using PXE)

In this article, let us discuss how to set up tftpboot, including installation of the necessary packages and the tftpboot configuration.


The TFTP boot service is primarily used to perform an OS installation on a remote machine to which you don’t have physical access. In order to perform the OS installation successfully, there should be a way to reboot the remote server, either using wakeonlan, having someone reboot it manually, or some other method.


In those scenarios, you can set up the tftpboot services accordingly and the OS installation can be done remotely (you need an autoyast configuration file to automate the OS installation steps).



A step-by-step procedure is presented in this article for SLES10 SP3 on the 64-bit architecture. However, these steps are very similar on other Linux distributions.


Required Packages


The following packages need to be installed for the tftpboot setup.




  • dhcp services packages: dhcp-3.0.7-7.5.20.x86_64.rpm and dhcp-server-3.0.7-7.5.20.x86_64.rpm

  • tftpboot package: tftp-0.48-1.6.x86_64.rpm

  • pxeboot package: syslinux-3.11-20.14.26.x86_64.rpm


Package Installation


Install the packages for the dhcp server services:


$ rpm -ivh dhcp-3.0.7-7.5.20.x86_64.rpm
Preparing... ########################################### [100%]
1:dhcp ########################################### [100%]

$ rpm -ivh dhcp-server-3.0.7-7.5.20.x86_64.rpm
Preparing... ########################################### [100%]
1:dhcp-server ########################################### [100%]

$ rpm -ivh tftp-0.48-1.6.x86_64.rpm

$ rpm -ivh syslinux-3.11-20.14.26.x86_64.rpm

After installing the syslinux package, the pxelinux.0 file will be created under the /usr/share/syslinux/ directory. This is required to load the install kernel and initrd images on the client machine.


Verify that the packages are successfully installed.



$ rpm -qa | grep dhcp
$ rpm -qa | grep tftp

Download the appropriate tftp server package from the repository of your respective Linux distribution.


Steps to setup tftpboot


Step 1: Create /tftpboot directory


Create the tftpboot directory under the root directory ( / ) as shown below.


# mkdir /tftpboot/

Step 2: Copy the pxelinux image


The pxelinux image will be available once you have installed the syslinux package. Copy it to the /tftpboot path as shown below.


# cp /usr/share/syslinux/pxelinux.0 /tftpboot


Step 3: Create the mount point for ISO and mount the ISO image


Let us assume that we are going to install the SLES10 SP3 Linux distribution on a remote server. If you have the SLES10 SP3 DVD, insert it in the drive, or mount the ISO image you have. Here, the ISO image has been mounted as follows:


# mkdir /tftpboot/sles10_sp3

# mount -o loop SLES-10-SP3-DVD-x86_64.iso /tftpboot/sles10_sp3

Refer to our earlier article on How to mount and view ISO files.


Step 4: Copy the vmlinuz and initrd images into /tftpboot


Copy the install kernel (linux) and the initrd to the tftpboot directory as shown below.


# cd /tftpboot/sles10_sp3/boot/x86_64/loader

# cp initrd linux /tftpboot/

Step 5: Create pxelinux.cfg Directory



Create the pxelinux.cfg directory under /tftpboot and define the PXE boot definitions for the client.


# mkdir /tftpboot/pxelinux.cfg

# cat >/tftpboot/pxelinux.cfg/default
default linux
label linux
kernel linux
append initrd=initrd showopts instmode=nfs install=nfs://192.168.1.101/tftpboot/sles10_sp3/

The options used above are:



  • kernel – specifies where to find the Linux install kernel on the TFTP server.

  • append – specifies the boot arguments passed to the install kernel; here the install=nfs://... argument points the installer at the NFS installation source.


As per the entries above, the NFS install mode is used for serving the install RPMs and configuration files. So, set up NFS on this machine with the /tftpboot directory in the exported list, as shown in the sketch below. You can add the “autoyast” option pointing to an autoyast configuration file to automate the OS installation steps; otherwise you need to run through the installation steps manually.
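
As a sketch (the export options here are an assumption; adjust them to your environment), the NFS export could look like this:

# cat /etc/exports
/tftpboot 192.168.1.0/255.255.255.0(ro,sync)

# exportfs -ra    ## re-export after editing /etc/exports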


Step 6: Change the owner and permission for /tftpboot directory



Assign nobody:nobody ownership to the /tftpboot directory.


# chown nobody:nobody /tftpboot

# chmod 777 /tftpboot

Step 7: Modify /etc/dhcpd.conf


Modify the /etc/dhcpd.conf as shown below.


# cat /etc/dhcpd.conf

ddns-update-style none;
default-lease-time 14400;
filename "pxelinux.0";

# next-server: IP address of the TFTP server (this machine, which is also the dhcp server)
next-server 192.168.1.101;
subnet 192.168.1.0 netmask 255.255.255.0 {
# range of IP addresses handed out to clients: 192.168.1.1 to 192.168.1.100
range 192.168.1.1 192.168.1.100;
default-lease-time 10;
max-lease-time 10;
}
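
Before restarting dhcpd, it is worth testing the configuration syntax; with ISC dhcpd you can do a parse-only run (shown here as a quick sketch):

# dhcpd -t -cf /etc/dhcpd.conf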

Specify the interface in /etc/sysconfig/dhcpd on which to listen for DHCP requests coming from clients.


# cat /etc/sysconfig/dhcpd | grep DHCPD_INTERFACE
DHCPD_INTERFACE="eth1"

Here, this machine has the ip address of 192.168.1.101 on the eth1 device. So, specify eth1 for the DHCPD_INTERFACE as shown above.


On a related note, refer to our earlier article about 7 examples to configure network interface using ifconfig.



Step 8: Modify /etc/xinetd.d/tftp


Modify the /etc/xinetd.d/tftp file to reflect the following. By default the disable parameter is set to “yes”; make sure you change it to “no”, and set the server_args entry to -s /tftpboot.


# cat /etc/xinetd.d/tftp
service tftp {
socket_type = dgram
protocol = udp
wait = yes
user = root
server = /usr/sbin/in.tftpd
server_args = -s /tftpboot
disable = no
}

Step 9: No changes in /etc/xinetd.conf


There is no need to modify the /etc/xinetd.conf file. Use the default values specified in the xinetd.conf file.


Step 10: Restart xinetd, dhcpd and nfs services


Restart these services as shown below.


# /etc/init.d/xinetd restart

# /etc/init.d/dhcpd restart

# /etc/init.d/nfsserver restart

After restarting the NFS services, you can view the exported directory list (/tftpboot) with the following command:



# showmount -e

Finally, the tftpboot setup is ready, and the client machine can now be booted after setting the first boot device to “network” in the BIOS settings.


If you encounter any tftp errors, you can troubleshoot by retrieving a file through the tftpd service.


Retrieve a file from the TFTP server using the tftp client to make sure the tftp service is working properly. Let us assume that a sample.txt file is present under the /tftpboot directory.


 $ tftp -v 192.168.1.101 -c get sample.txt
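
If the transfer fails, also confirm that xinetd actually started in.tftpd and that it is listening on UDP port 69 (a rough check; log locations vary by distribution):

# netstat -lun | grep :69       ## tftp listens on UDP port 69
# tail /var/log/messages        ## xinetd and in.tftpd usually log errors here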

Monday, July 19, 2010

Using the DBI Framework




Here are the basic steps for using DBI. For
more information on DBI, see Programming the Perl
DBI
by Alligator Descartes and Tim Bunce
(O'Reilly).




Step 1: Load the necessary Perl module



Nothing special here; you just need to:



use DBI;




Step 2: Connect to the database and receive a connection handle


The Perl code to establish a DBI connection to a MySQL database and
return a database handle looks like this:



# connect to the database named $database using the given

# username and password, return a database handle
$database = "sysadm";
$dbh = DBI->connect("DBI:mysql:$database",$username,$pw);
die "Unable to connect: $DBI::errstr\n" unless (defined $dbh);


DBI will load the low-level DBD driver for us
(DBD::mysql) prior to actually connecting to the
server. We then test if the connect( ) succeeded
before continuing. DBI provides RaiseError and
PrintError options for connect( ), should we want
DBI to perform this test or automatically complain
about errors when they happen. For example, if
we used:



$dbh = DBI->connect("DBI:mysql:$database",

$username,$pw,{RaiseError => 1});


then DBI would call die for us if the
connect( ) failed.




Step 3: Send SQL commands to the server


With our Perl module loaded and a connection to the database server
in place, it's showtime! Let's send some SQL commands to
the server. We'll use some of the SQL tutorial queries from
Appendix D, "The Fifteen-Minute SQL Tutorial" for examples. These queries will use the
Perl q convention for quoting (i.e.,
something is written as
q{something}), just so we don't have to
worry about single or double quotes in the actual queries themselves.
Here's the first of the two DBI methods for sending commands:



$results=$dbh->do(q{UPDATE hosts 

SET bldg = 'Main'
WHERE name = 'bendir'});
die "Unable to perform update:$DBI::errstr\n" unless (defined $results);



$results will receive either the number of rows
updated or undef if an error occurs. Though it
is useful to know how many rows were affected, that's not going
to cut it for statements like SELECT where we
need to see the actual data. This is where the second method comes
in.


To use the second method you first prepare a SQL
statement for use and then you ask the server to
execute it. Here's an example:



$sth = $dbh->prepare(q{SELECT * from hosts}) or 

die "Unable to prep our query:".$dbh->errstr."\n";
$rc = $sth->execute or
die "Unable to execute our query:".$dbh->errstr."\n";



prepare( ) returns a new
creature we haven't seen before: the statement handle. Just
like a database handle refers to an open database connection, a
statement handle refers to a particular SQL statement we've
prepare( )d. Once we have this statement handle,
we use execute to actually send the query to our
server. Later on, we'll be using the same statement handle to
retrieve the results of our query.


You might wonder why we bother to prepare( ) a
statement instead of just executing it directly.
prepare( )ing a statement gives the DBD driver (or more likely the
database client library it calls) a chance to parse the SQL query.
Once a statement has been prepare( )d, we can execute it repeatedly
via our statement handle without parsing it over and over. Often this
is a major efficiency win. In fact, the default do( ) DBI method does
a prepare( ) and then an execute( ) behind the scenes for each
statement it is asked to execute.


Like the do call we saw earlier,
execute( ) returns the number of rows affected.
If the query affects zero rows, the string 0E0 is
returned to allow a Boolean test to succeed. -1 is
returned if the number of rows affected is unknown by the driver.
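
Since "0E0" is true in a Boolean context but zero numerically, you can distinguish "no rows matched" from an actual error and still report an accurate count. Here is a small sketch (the DELETE statement is purely illustrative):

$rows = $dbh->do(q{DELETE FROM hosts WHERE bldg = 'Annex'});
die "Unable to perform delete: $DBI::errstr\n" unless (defined $rows);
# "0E0" is true, so reaching this point means the statement succeeded,
# possibly affecting zero rows; adding 0 numifies "0E0" to plain 0
print "rows affected: ", $rows + 0, "\n";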



Before we
move on to ODBC, it is worth mentioning one more twist supported by
most DBD modules on the prepare( ) theme:
placeholders. Placeholders, also called positional markers, allow you
to prepare( ) an SQL statement that has holes in
it to be filled at execute( ) time. This allows
you to construct queries on the fly without paying most of the parse
time penalty. The question mark character is used as the placeholder
for a single scalar value. Here's some Perl code to demonstrate
the use of placeholders:



@machines = qw(bendir shimmer sander);

$sth = $dbh->prepare(q{SELECT name, ipaddr FROM hosts WHERE name = ?});
foreach $name (@machines){
$sth->execute($name);
do-something-with-the-results
}


Each time we go through the foreach loop, the
SELECT query is executed with a different
WHERE clause. Multiple placeholders are
straightforward:



$sth = $dbh->prepare(

q{SELECT name, ipaddr FROM hosts
WHERE (name = ? AND bldg = ? AND dept = ?)});
$sth->execute($name,$bldg,$dept);


Now that we know how to retrieve the number of rows affected by
non-SELECT SQL queries, let's look into
retrieving the results of our SELECT requests.




Step 4: Retrieve SELECT results


The mechanism here is similar to our brief discussion of cursors
during the SQL tutorial in Appendix D, "The Fifteen-Minute SQL Tutorial". When we send
a SELECT statement to the server using
execute( ), we're using a mechanism that
allows us to retrieve the results one line at a time.


In DBI, we call one of the methods in Table 7-1 to
return data from the result set.



Table 7-1. DBI Methods for Returning Data

  • fetchrow_arrayref( ) – returns an array reference to an anonymous array whose values are the columns of the next row in the result set; returns undef if there are no more rows.

  • fetchrow_array( ) – returns an array whose values are the columns of the next row in the result set; returns an empty list if there are no more rows.

  • fetchrow_hashref( ) – returns a hash reference to an anonymous hash whose keys are the column names and whose values are the values of the columns of the next row in the result set; returns undef if there are no more rows.

  • fetchall_arrayref( ) – returns a reference to an array-of-arrays data structure; returns a reference to an empty array if there are no more rows.



Let's see these methods in context. For each of these examples,
assume the following was executed just prior:



$sth = $dbh->prepare(q{SELECT name,ipaddr,dept from hosts}) or

die "Unable to prepare our query: ".$dbh->errstr."\n";
$sth->execute or die "Unable to execute our query: ".$dbh->errstr."\n";


Here's fetchrow_arrayref( ) in action:



while ($aref = $sth->fetchrow_arrayref){

print "name: " . $aref->[0] . "\n";
print "ipaddr: " . $aref->[1] . "\n";
print "dept: " . $aref->[2] . "\n";
}


The DBI documentation mentions that fetchrow_hashref( ) is less
efficient than fetchrow_arrayref( ) because of the extra processing
it entails, but it can yield more readable code. Here's an example:



while ($href = $sth->fetchrow_hashref){

print "name: " . $href->{name} . "\n";
print "ipaddr: " . $href->{ipaddr}. "\n";
print "dept: " . $href->{dept} . "\n";
}


Finally, let's take a look at the "convenience"
method, fetchall_arrayref( ). This method sucks
the entire result set into one data structure, returning a reference
to an array of references. Be careful to limit the size of your
queries when using this method because it does pull the entire result
set into memory. If you have a 100GB result set, this may prove to be
a bit problematic.


Each reference returned looks exactly like something we would receive
from fetchrow_arrayref( ). See Figure 7-2.




Figure 7-2. The data structure returned by fetchrow_arrayref


Here's some code that will print out the entire query result set:



$aref_aref = $sth->fetchall_arrayref;

foreach $rowref (@$aref_aref){
print "name: " . $rowref->[0] . "\n";
print "ipaddr: " . $rowref->[1] . "\n";
print "dept: " . $rowref->[2] . "\n";
print '-'x30,"\n";
}


This code sample is specific to our particular data set because it assumes a certain number of columns in a certain order. For instance, we assume the machine name is returned as the first column in the query ($rowref->[0]).

We can use some magic attributes (often called metadata) of statement handles to rewrite our result retrieval code to make it more generic. Specifically, if we look at $sth->{NUM_OF_FIELDS} after a query, it will tell us the number of fields (columns) in our result set. $sth->{NAME} contains a reference to an array with the names of each column. Here's a more generic way to write the last example:



$aref_aref = $sth->fetchall_arrayref;

foreach $rowref (@$aref_aref){
for ($i=0; $i < $sth->{NUM_OF_FIELDS}; $i++){
print $sth->{NAME}->[$i].": ".$rowref->[$i]."\n";
}
print '-'x30,"\n";
}


Be sure to see the DBI documentation for more metadata attributes.




Step 5: Close the connection to the server


In DBI this is simply:



# tells server you will not need more data from statement handle

# (optional, since we're just about to disconnect)
$sth->finish;
# disconnects handle from database
$dbh->disconnect;
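
Putting steps 1 through 5 together, a minimal end-to-end sketch might look like the following. It assumes the same sysadm database and hosts table used above, and that $username and $pw have already been set:

use DBI;

# steps 1 and 2: load DBI and connect, raising an exception on any DBI error
$dbh = DBI->connect("DBI:mysql:sysadm", $username, $pw, {RaiseError => 1});

# step 3: prepare and execute a query
$sth = $dbh->prepare(q{SELECT name, ipaddr, dept FROM hosts});
$sth->execute;

# step 4: retrieve the results one row at a time
while ($aref = $sth->fetchrow_arrayref){
    print join(" ", @$aref), "\n";
}

# step 5: clean up
$sth->finish;
$dbh->disconnect;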





7.2.1. DBI Leftovers


There are two remaining DBI topics worth mentioning before we move on
to ODBC. The first is a set of methods I call "shortcut"
methods. The methods in Table 7-2 combine steps 3
and 4 from above.



Table 7-2. DBI Shortcut Methods

  • selectrow_arrayref($stmnt) – combines prepare($stmnt), execute( ), and fetchrow_arrayref( ) into a single method.

  • selectcol_arrayref($stmnt) – combines prepare($stmnt), execute( ), and (@{fetchrow_arrayref( )})[0] (i.e., returns the first column for each row).

  • selectrow_array($stmnt) – combines prepare($stmnt), execute( ), and fetchrow_array( ) into a single method.



The second topic worth mentioning is DBI's ability to bind variables to query results. The methods bind_col() and bind_columns( ) are used to tell DBI to automatically place the results of a query into a specific variable or list of variables. This usually saves a step or two when coding. Here's an example using bind_columns( ) that makes its use clear:



$sth = $dbh->prepare(q{SELECT name,ipaddr,dept from hosts}) or

die "Unable to prep our query:".$dbh->errstr".\n";
$rc = $sth->execute or
die "Unable to execute our query:".$dbh->errstr".\n";

# these variables will receive the 1st, 2nd, and 3rd columns
# from our SELECT
$rc = $sth->bind_columns(\$name,\$ipaddr,\$dept);

while ($sth->fetchrow_arrayref){
# $name, $ipaddr, and $dept are automagically filled in from
# the fetched query results row
do-something-with-the-results
}





Tuesday, July 6, 2010

SSL Acceleration and Offloading: What Are the Security Implications?

Secure Sockets Layer (SSL) is a popular method for encrypting data transferred over the Internet. It is commonly used to provide secure transfer of credit card information and other sensitive data in an e-commerce situation. SSL can also be used to create a virtual private networking (VPN) tunnel, as an alternative to “old standbys” IPSec and PPTP. I will discuss SSL VPNs in next month’s article titled VPN Options.

SSL uses symmetric encryption (a single shared key for both encryption and decryption) to provide data confidentiality. Although this is considered less secure than asymmetric (public key) encryption that uses a matched key pair, that disadvantage is offset somewhat by the fact that symmetric encryption is much faster (something that is important in e-commerce transactions) and requires less processing. SSL encryption is strengthened by the use of a longer key; it can use DES, 3DES, RC2 and RC4, with key length up to 168 bits.


Note
Transport Layer Security (TLS) is an extension of and the successor to SSL and you will often see them discussed as “SSL/TLS.” However, the two are not interoperable. Most modern Web browsers support both.


Despite the fact that it uses faster symmetric encryption for confidentiality, SSL still causes a performance slowdown. That’s because there is more to SSL than the data encryption. The “handshake” process, whereby the server (and sometimes the client) is authenticated, uses digital certificates based on asymmetric or public key encryption technology. Public key encryption is very secure, but also very processor-intensive and thus has a significant negative impact on performance. E-commerce sites are especially prone to SSL bottlenecks, and companies may lose business when customers encounter slow response and long waits.


In this article, we will take a look at some of the solutions that can be implemented to address this performance and processor-load problem. Specifically, we’ll discuss the concepts of SSL acceleration and two different SSL offloading techniques: SSL termination and SSL bridging (also referred to as SSL initiation). We’ll also look at possible security implications involved in deploying these solutions.


How SSL Works



SSL uses a “handshake” protocol to negotiate and establish a session between the client and server computers. During the handshake sequence, digital certificates are used to authenticate identity, and the communicating computers agree upon a hash algorithm (such as MD5 or SHA-1) for ensuring data integrity.


An SSL session is initiated by a message sent to the server by the client computer (called a Client Hello message). The server responds with a Server Hello message. These messages establish parameters for the communication, including what version of SSL will be used, a session ID (if the client is continuing a previous session), the “cipher suite” that will be used (this identifies the key exchange algorithm, encryption algorithm and hash function), and the compression algorithm that will be used.


The client is able to authenticate the server’s identity because the server sends its digital certificate containing its public key. In some cases, two-way authentication is necessary. That is, the server must verify the identity of the client in addition to the client verifying the server’s identity. Internet banking is a good example of such a situation. In this case, the server sends a client certificate request.


The client responds with its own certificate in two-way authentication situations. The client also sends a key exchange message with the premaster secret, which it encrypts with the server’s public key.


The server decrypts the message with its private key, which authenticates its identity since only the private key that belongs to the same key pair as the public key used to encrypt the message will be able to decrypt it.


The client may send a hash of the foregoing messages encrypted with its private key (in two-way authentication situations). The server can then verify the client’s identity by decrypting this with the client’s public key. The client will then send a message to let the server know that subsequent messages will be encrypted with the negotiated algorithms, and finally, the client sends a “Client Finished” message, which is encrypted and hashed.


The server will also send a message to tell the client that its subsequent messages will be encrypted, and then sends a “Server Finished” message that is encrypted. If the client is able to decrypt it, the handshake has succeeded. Communications between client and server are encrypted using the keys and algorithms that were negotiated in the handshake process.
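
If you want to observe this negotiation yourself, the OpenSSL command-line client will perform the handshake against an SSL-enabled server and print the certificate it receives, the protocol version and the negotiated cipher suite (a quick illustration; substitute your own server name for the placeholder host):

openssl s_client -connect www.example.com:443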


What is SSL Acceleration?


One of the first methods used to address the SSL performance problem was the hardware accelerator. This is a card that plugs into a PCI slot or SCSI port and contains a co-processor that performs part of the SSL processing, relieving the load on the Web server’s main processor. SSL hardware accelerators are made by a number of vendors, including nCipher (www.ncipher.com/nfast).



Typically, only the RSA operations that use public key cryptography are offloaded to a hardware accelerator. That’s because the symmetric encryption is much faster and doesn’t need to be offloaded; in fact, offloading those operations could actually result in degrading performance. So the accelerator card performs the asymmetric cryptography operations and the symmetric cryptography operations are performed by the server’s main processor.


The level of performance improvement that you get with a hardware accelerator varies from one vendor to another. Some vendors claim an increase in SSL processing capacity of 500% or more. You can add more than one card to the same server to increase capacity even more, and you can install dual cards for high availability and failover. Some cards also include additional functions such as key management.


Some accelerators, called network accelerators as opposed to server-side accelerators, are designed to work with network switches and intercept and decrypt SSL traffic before it reaches the server. This goes beyond mere acceleration and gets into the area of SSL offloading.


What is SSL Offloading?


In a sense, an SSL hardware accelerator is performing SSL offloading, because part of the SSL processing is “offloaded” from the server’s CPU to the card’s co-processor. The term “offloading,” however, is generally used to describe an appliance or a completely separate computer that performs all SSL processing, so that the SSL load is taken off of the Web server completely.


An advantage of an offloader, as opposed to the typical accelerator, is that it can do SSL processing for more than one Web server, whereas the accelerator card is tied to a single server.


SSL offloaders can greatly enhance the effectiveness of intrusion detection systems, virus detection systems, etc. These systems are unable to detect attack signatures and virus signatures that are contained in data that’s SSL encrypted, but the offloader can decrypt the data so the IDS, virus software or application layer firewall can examine its contents and block suspicious packets.


There are two basic ways of doing this: the offloader can perform SSL termination or SSL bridging (sometimes called SSL initiation).



SSL Termination


An SSL offloader that acts as an SSL terminator decrypts the SSL-encrypted data and then sends it on to the server in an unencrypted state, so that the server does not have to perform decryption and the burden on its processor is relieved.


The unencrypted data may pass through an IDS, virus detection system and/or application layer firewall on its way to the server.


SSL termination increases the performance at the server level, but also poses a security problem: data is traveling from the offloader to the server without the protection of encryption.


SSL Bridging (Initiation)


There is a method for allowing inspection of SSL-encrypted data before it reaches the server to prevent application layer attacks hidden inside, without compromising the end-to-end security of the data. Microsoft calls this technology SSL bridging. Other vendors use different terminology; for example, SonicWall calls it SSL initiation.


Regardless of the name, here’s how it works: the application layer aware firewall intercepts and decrypts SSL-encrypted traffic, examines the contents to ensure that it doesn’t contain malicious code, then re-encrypts it before sending it on to the server. Although the data is temporarily in a decrypted state at the firewall, it is protected when it is sent across the network.


However, this means that the server will have to decrypt the data again, thus negating the performance advantage of SSL offloading.


What are the Security Implications of Offloading SSL?



SSL offloading can greatly increase the performance of your secure Web servers, thus increasing customer satisfaction. However, offloading means the SSL connection extends only from client to offloader, not from client to server. Data passes across the network unencrypted from offloader to server.


Granted, this data is moving across your internal network, not the public Internet. Thus the question becomes: how secure is that internal network? That depends to a great extent on your network topology. If you have the offloader and server deployed behind a department firewall on a secure subnet where only critical servers are located and to which users don’t have direct access, you might be confident in allowing unencrypted data to pass from offloader to server. If you are offloading SSL processing to a firewall located on the network edge, the exposure and risk of compromise after the data is decrypted is much greater.


Finally, you should consider customer perception and expectations. Do customers expect that, when they are told they are making a secure encrypted connection, the secure tunnel extends all the way from client to server? That would be a logical assumption for the typical user, who isn’t aware of technologies such as SSL offloading. What liability issues would you be subject to if confidential customer data were accessed by unauthorized persons while traveling over the network in a decrypted state, and then misused?


Summary


SSL encryption presents a dilemma wherein performance and security are at opposite ends of a continuum and more of one results in less of the other. Ultimately, you must evaluate your own network structure, the nature of the data that travels over it, and how much of a performance tradeoff is worth extra security or vice versa. The purpose of this article is to make you aware that once again the old maxim holds true: TANSTAAFL (“There Ain’t No Such Thing As A Free Lunch”). While SSL offloading offers distinct advantages to your business, those advantages come with a price.