Some quick kernel TCP tuning

Most Linux distributions ship with buffers and other Transmission Control Protocol (TCP) parameters conservatively defined. You should change these parameters to allocate more memory and enhance network performance. Kernel parameters are set through the proc interface by reading and writing values in /proc. Fortunately, the sysctl program manages these in a somewhat easier fashion by reading values from /etc/sysctl.conf and populating /proc as necessary. Listing 2 shows some more aggressive network settings that should be used on Internet servers.

/etc/sysctl.conf showing more aggressive network settings

# Use TCP syncookies when needed
net.ipv4.tcp_syncookies = 1
# Enable TCP window scaling
net.ipv4.tcp_window_scaling = 1
# Increase TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# Increase Linux autotuning TCP buffer limits
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# Increase number of ports available
net.ipv4.ip_local_port_range = 1024 65000

Add these settings to whatever is already in /etc/sysctl.conf. The first setting enables TCP SYN cookies. When a new TCP connection comes in from a client by means of a packet with the SYN bit set, the server creates an entry for the half-open connection and responds with a SYN-ACK packet. In normal operation, the remote client responds with an ACK packet that moves the half-open connection to fully open. An attack called the SYN flood ensures that the ACK packet never returns, so the server runs out of room to process incoming connections. The SYN cookie feature recognizes this condition and starts using an elegant method that preserves space in the queue (see the Resources section for full details). Most systems have this enabled by default, but it’s worth making sure it is configured.

Enabling TCP window scaling allows clients to download data at a higher rate. TCP allows for multiple packets to be sent without an acknowledgment from the remote side, up to 64 kilobytes (KB) by default, which can be filled when talking to higher latency peers. Window scaling enables some extra bits to be used in the header to increase this window size.
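To see why the default window matters, remember that a TCP sender can have at most one window of unacknowledged data in flight per round trip, so throughput is capped at roughly the window size divided by the round-trip time. A quick back-of-the-envelope calculation (the 100 ms RTT is an assumed value for illustration):

```shell
# Max throughput of an unscaled 64 KB (65535-byte) window over a 100 ms RTT:
awk 'BEGIN {
    win = 65535      # bytes in flight per round trip
    rtt = 0.1        # seconds (assumed for this example)
    printf "%.0f bytes/s (~%.1f Mbit/s)\n", win / rtt, win * 8 / rtt / 1e6
}'
# prints: 655350 bytes/s (~5.2 Mbit/s)
```

So without window scaling, a single connection to a 100 ms-away peer cannot exceed about 5 Mbit/s no matter how fast the link is.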

The next four configuration items increase the TCP send and receive buffers. This allows the application to get rid of its data faster so it can serve another request, and it also improves the remote client’s ability to send data when the server gets busier.

The final configuration item increases the number of local ports available for use, which increases the maximum number of connections that can be served at a time.

These settings take effect at the next boot or the next time sysctl -p /etc/sysctl.conf is run.
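To confirm the kernel actually picked the values up, you can compare each key in the file against the live value under /proc/sys (a dotted key maps to a path by replacing the dots with slashes). The function below is only a sketch; check_sysctl is a hypothetical helper, not a standard tool, and it does simple string comparison, so multi-value keys that /proc prints with tabs may show as differing:

```shell
# Sketch: compare keys in a sysctl.conf-style file with the live values.
# check_sysctl is a hypothetical helper, not part of any distribution.
check_sysctl() {    # usage: check_sysctl <conf-file> [<proc-sys-root>]
    conf=$1; root=${2:-/proc/sys}
    grep -E '^[a-z]' "$conf" | while IFS='=' read -r key want; do
        key=$(echo "$key" | tr -d '[:space:]')        # e.g. net.core.rmem_max
        want=$(echo "$want" | sed 's/^ *//; s/ *$//')
        have=$(cat "$root/$(echo "$key" | tr . /)" 2>/dev/null)
        [ "$have" = "$want" ] && status=ok || status=DIFFERS
        printf '%-35s %s\n' "$key" "$status"
    done
}
```

Running check_sysctl /etc/sysctl.conf right after sysctl -p should print ok for every key.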

Using cURL to measure the response time of a Web site

$ curl -o /dev/null -s -w '%{time_connect}:%{time_starttransfer}:%{time_total}\n' http://news.example.com/
0.081:0.272:0.779

The listing above shows the curl command being used to look up a popular news site. The output, which would normally be the HTML code, is sent to /dev/null with the -o parameter, and -s turns off any status information. The -w parameter tells curl to write out some status information, such as the timers described in Table 1:

Table 1. Timers used by curl
Timer Description
time_connect The time it takes to establish the TCP connection to the server
time_starttransfer The time it takes for the Web server to return the first byte of data after the request is issued
time_total The time it takes to complete the request

Each of these timers is relative to the start of the transaction, even before the Domain Name Service (DNS) lookup. Thus, after the request was issued, it took 0.272 – 0.081 = 0.191 seconds for the Web server to process the request and start sending back data. The client spent 0.779 – 0.272 = 0.507 seconds downloading the data from the server.
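The subtraction above is easy to automate; this snippet splits the colon-separated timer string into per-phase durations (using the sample numbers from the text):

```shell
# Turn curl's cumulative timers into per-phase durations.
timers="0.081:0.272:0.779"   # time_connect:time_starttransfer:time_total
echo "$timers" | awk -F: '{
    printf "connect:  %.3f s\n", $1        # TCP handshake
    printf "server:   %.3f s\n", $2 - $1   # request processing, first byte
    printf "download: %.3f s\n", $3 - $2   # transferring the body
}'
```

Piping the -w output of each curl run through this gives you per-phase numbers you can log and trend.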

By watching curl data and trending it over time, you get a good idea of how responsive the site is to users.

Of course, a Web site is more than just a single page. It has images, JavaScript code, CSS, and cookies to deal with. curl is good at getting the response time for a single element, but sometimes you need to see how fast the whole page loads.

iostat for Linux

One tool that isn’t very well known is called iostat, and it monitors I/O
(Input/Output) performance related to disk drives. Solaris users have
had this tool for years.

Some Linux distributions ship with iostat. If your distribution doesn’t include it,
simply download the iostat source code and build it as follows:

$ tar xvzf iostat-2.2.tar.gz
$ cd iostat-2.2
$ make

When the build is complete, you’ll have the tool iostat and the
manpage iostat.8 in the current directory. To install iostat and its
manpage in the /usr/local directory tree, simply perform a "make install":
$ make install

You can run iostat with a number of options and two optional
arguments: "interval" and "count."
To view disk activity over time, provide it with an interval
of 15 (seconds) and a count of 10 (samples). Here’s how:
$ ./iostat 15 10

However, this will give you only basic information on the installed
physical disk drives. To obtain more information, pass iostat a few
more options, such as printing statistics per disk (-d), printing CPU
activity stats (-c), including per-partition stats (-p), and including
extended statistics (-x). For a good sampling of data, enter this code snippet:
$ ./iostat -dpxc

Since iostat prints out a lot of information, you’ll want to keep the
manpage handy so that you can identify what each column means. Some of
the information includes total transfer rate per second, total number
of requests per second, number of reads (and writes) per second, percentage
of CPU time spent in user, system, and idle modes, and much more.

If your system is slowing down and you’re having a hard time finding the bottleneck,
iostat may clue you in on some problem areas. Even before that slowdown, iostat
can tell you what disks are over- and under-utilized, which allows you to plan
ahead and balance the I/O load.

Linux Reference Guide

FILE AND DIRECTORY BASICS This category also includes utilities
that change file/directory properties and permissions.
ls List files/directories in a directory, comparable to dir in DOS.
ls -la Shows all files (including ones that start with a period),
directories, and details attributes for each file.
cd Change directory (e.g., cd /usr/local/bin)
cd ~ Go to your home directory
cd - Go to the last directory you were in
cd .. Go up a directory
cat Print file contents to the screen
cat filename.txt Print the contents of filename.txt to your screen
tail Similar to cat, but only reads the end of the file
tail /var/log/messages See the last 10 (by default) lines of /var/log/messages
tail -f /var/log/messages Watch the file continuously, while it’s being updated
tail -200 /var/log/messages Print the last 200 lines of the file to the screen
head Similar to tail, but only reads the top of the file
head /var/log/messages See the first 10 (by default) lines of /var/log/messages
head -200 /var/log/messages Print the first 200 lines of the file to the screen
more Like cat, but opens the file one screen at a time rather
than all at once
more /etc/userdomains Browse through the userdomains file. Hit Space to go to the
next page, q to quit
less Page through files
od View binary files and data
xxd Also view binary files and data
gv View Postscript/PDF files
xdvi View TeX DVI files
nl Number lines
touch Create an empty file
touch /home/burst/public_html/404.html Create an empty file called 404.html in the directory /home/burst/public_html/
file Attempts to guess what type of file a file is by looking at
its content.
file * Prints the guessed file type of every file in the current directory.
cp Copy a file
cp filename filename.bak Copies filename to filename.bak
cp -a /etc/* /root/etc/ Copies all files, retaining permissions, from one directory
to another.
cp -av * ../newdirectory Copies all files and directories recursively from the current
directory INTO newdirectory
mv Move (or rename) a file
mv oldfilename newfilename Move a file or directory from oldfilename to newfilename
rm Delete a file
rm filename.txt Deletes filename.txt; will more than likely ask if you really
want to delete it
rm -f filename.txt Deletes filename.txt; will not ask for confirmation before deleting
rm -rf tmp/ recursively deletes the directory tmp, and all files in it,
including subdirectories.

chmod Changes file access permissions. The three digits apply, from left
to right, to the user (owner), the group, and everyone else:

0 = --- No permission
1 = --x Execute only
2 = -w- Write only
3 = -wx Write and execute
4 = r-- Read only
5 = r-x Read and execute
6 = rw- Read and write
7 = rwx Read, write and execute

chmod 000 No one can access
chmod 644 Usually for HTML pages
chmod 755 Usually for CGI scripts
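A quick demonstration of the numeric modes above on a throwaway file (the file name is made up for the demo):

```shell
# Create a file and apply modes from the table; ls -l shows the
# rwx triplets for user, group, and other.
touch demo.html
chmod 644 demo.html
ls -l demo.html     # permissions column: -rw-r--r--  (6=rw-, 4=r--, 4=r--)
chmod 755 demo.html
ls -l demo.html     # permissions column: -rwxr-xr-x  (7=rwx, 5=r-x, 5=r-x)
rm -f demo.html
```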
chown Changes file ownership. The two fields apply, from left to right,
to the owner and the group:

chown root myfile.txt Changes the owner of the file to root
chown root.root myfile.txt Changes the owner and group of the file to root
stat Display file attributes
grep Looks for patterns in files
grep root /etc/passwd Shows all matches of root in /etc/passwd
grep -v root /etc/passwd Shows all lines that do not match root
ln Creates "links" between files and directories
ln -s /usr/local/apache/conf/httpd.conf /etc/httpd.conf Now you can edit /etc/httpd.conf rather than the original.
Changes will affect the original; however, you can delete the link and it
will not delete the original.
wc Word count
wc -l filename.txt Tells how many lines are in filename.txt
find Utility to find files and directories on your server.
find / -name "filename" Find the file called "filename" on your filesystem
starting the search from the root directory "/".
locate filename Find files whose name or path contains the string "filename".
Run ‘updatedb’ to build the index.
EDITORS Most popular editors available on UNIX
pico Friendly, easy to use file editor
pico /home/burst/public_html/index.html Edit the index page for the user’s website.
vi Popular editor, tons of features, harder to use at first than pico
vi filename.txt

Edit filename.txt. All commands in vi are preceded by pressing the Escape
key; each time a different command is to be entered, press Escape again.
Except where indicated, vi is case sensitive.

H — Upper left corner (home)

M — Middle line
L — Lower left corner
h — Back a character
j — Down a line
k — Up a line
^ — Beginning of line

$ — End of line
l — Forward a character
w — Forward one word
b — Back one word
fc — Find c
; — Repeat find (find next c)

:q! — Force quits the file without saving and exits vi
:w — Writes the file to disk (saves it)
:wq — Saves the file to disk and exits vi
:N (e.g., :25) — Takes you to line 25 within the file
:$ — Takes you to the last line of the file
:0 — Takes you to the first line of the file


emacs Another popular editor.

C-h t — Tutorial, suggested for new emacs users.
C-x C-c — Exit emacs

emacs filename.txt

Edit filename.txt. While you’re in emacs, use the following quickies
to get around:

C-x C-f — read a file into emacs

C-x C-s — save a file back to disk
C-x i — insert contents of another file into this buffer
C-x C-v — replace this file with the contents of file you want
C-x C-w — write buffer to specified file

C-f — move forward one character
C-b — move backward one character

C-n — move to next line
C-p — move to previous line
C-a — move to beginning of line
C-e — move to end of line
M-f — move forward one word
M-b — move backward one word

C-v — move forward one screen
M-v — move backward one screen
M-< — go to beginning of file
M-> — go to end of file

NETWORK Some of the basic networking utilities.
w Shows who is currently logged in and where they are logged
in from.
who This also shows who is on the server in a shell.
netstat Shows all current network connections.
netstat -an Shows all connections to the server, with the source and destination
IPs and ports.
netstat -rn Shows the routing table for all IPs bound to the server.
netstat -an |grep :80 |wc -l Shows how many active connections there are to Apache (httpd
runs on port 80)

top Shows live system processes in a formatted table, memory information,
uptime, and other useful info.

While in top, Shift + M to sort by memory usage or Shift + P to sort
by CPU usage

top -u root Show processes running by user root only.
route -n Shows the routing table for all IPs bound to the server.
nslookup Query your default domain name server (DNS) for an Internet
name (or IP number) host_to_find.
traceroute Shows the route your packets take to reach a destination host.
ifconfig Display info on the network interfaces.
ifconfig -a Display info on all network interfaces on the server, active or inactive.
ping Sends test packets to a specified server to check if it is
responding properly
tcpdump Print all the network traffic going through the network.
arp Command mostly used for checking existing Ethernet connectivity
and IP address
SYSTEM TOOLS Many of the basic system utilities used
to get things done.
ps ps is short for process status, which is similar to the top
command. It’s used to show currently running processes and their PID.
A process ID is a unique number that identifies a process, with that you
can kill or terminate a running program on your server (see kill command).
ps U username Shows processes for a certain user
ps aux Shows all system processes
ps aux --forest Shows all system processes like the above, but organized in
a hierarchy that’s very useful!
kill Terminate a system process
kill -9 PID Immediately kill process ID
killall program_name Kill program(s) by name. For example to kill instances of
httpd, do ‘killall httpd’
du Shows disk usage.
du -sh Shows a summary of total disk space used in the
current directory, including subdirectories.
du / -bh | more Print detailed disk usage for each subdirectory starting at
the "/".
last Shows who logged in and when
last -20 Shows only the last 20 logins
last -20 -a Shows last 20 logins, with the hostname in the last field
pwd Print working directory, i.e., display the name of my current
directory on the screen.
hostname Print the name of the local host. Use netconf (as root) to
change the name of the machine.
whoami Print my login name.
date Print or change the operating system date and time
time Determine the amount of time that it takes for a process to
complete + other info.
uptime Shows how long the server has been up, including system load
uname -a Displays info about your server, such as kernel version.
free Memory info (in kilobytes).
lsmod Show the kernel modules currently loaded. Run as root.
dmesg | less Print kernel messages.
man topic Display the contents of the system manual pages (help) on
the topic. Do ‘man netstat’ to find all details of netstat command including
options and examples.
reboot / halt Halt or reboot the machine.
mount Mount local drive or remote file system.
mount -t auto /dev/fd0 /mnt/floppy Mount the floppy. The directory /mnt/floppy must exist.
mount -t auto /dev/cdrom /mnt/cdrom Mount the CD. The directory /mnt/cdrom must exist.
sudo The super-user do command that allows you to run specific
commands that require root access.
fsck Check a disk for errors
COMPRESSION UTILITIES There are many other compression utilities
but these are the default and most widely utilized.
tar Creating and Extracting .tar.gz and .tar files
tar -zxvf file.tar.gz Extracts the file
tar -xvf file.tar Extracts the file
tar -cf archive.tar contents/ Takes everything from contents/ and puts it into archive.tar
gzip Compress files in gzip format. gzip filename
gzip -d filename.gz Decompresses filename.gz
zip Compress files
unzip Extracting .zip files shell command
compress Compress files. compress filename
uncompress Uncompress compressed files. uncompress filename.Z
bzip2 Compress files in bzip2 format
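A minimal round trip with the tar and gzip commands above, done in a scratch directory so nothing real is touched (file names are made up for the demo):

```shell
cd "$(mktemp -d)"                    # work in a scratch directory
mkdir -p contents
echo "hello" > contents/note.txt
tar -cf archive.tar contents/        # create archive.tar from contents/
mkdir -p elsewhere
tar -xf archive.tar -C elsewhere     # extract a copy under elsewhere/
cat elsewhere/contents/note.txt      # prints: hello
gzip archive.tar                     # compress to archive.tar.gz
tar -zxvf archive.tar.gz             # extract the .tar.gz in place
```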
THE (DOT) FILES The good old dot files. Let’s clear
up some confusion here by defining each.
.bash_login Treated by bash like .bash_profile if that doesn’t exist.
.bash_logout Sourced by bash login shells at exit.
.bash_profile Sourced by bash login shells after /etc/profile
.bash_history The list of commands executed previously.
.profile Treated by bash like ~/.bash_profile if that and
.bash_login don’t exist.
.vimrc Default "Vim" configuration file.
.emacs Read by emacs at startup
CONFIGURATION FILES Listing everything is beyond the scope
of this article.
/etc This directory contains most of the basic Linux system-configuration files.
/etc/init.d Contains the permanent copies of System V–style run-level
scripts. These scripts are often linked to files in the /etc/rc?.d directories
to have each service associated with a script started or stopped for the
particular run level. The ? is replaced by the run-level number (0 through
6). (Slackware puts its run-level scripts in the /etc/rc.d directory.)
/etc/cron* Directories in this set contain files that define how the
crond utility runs applications on a daily (cron.daily), hourly (cron.hourly),
monthly (cron.monthly), or weekly (cron.weekly) schedule.
/etc/cups Contains files used to configure the CUPS printing service.
/etc/default Contains files that set default values for various utilities.
For example, the file for the useradd command defines the default group
number, home directory, password expiration date, shell, and skeleton directory
/etc/skel Any files contained in this directory are automatically copied
to a user’s home directory when that user is added to the system.
/etc/mail Contains files used to configure your sendmail mail service.
/etc/security Contains files that set a variety of default security conditions
for your computer.
/etc/sysconfig Contains important system configuration files that are created
and maintained by various services (including iptables, samba, and most
networking services).
/etc/passwd Holds some user account info, including passwords (when not shadowed).
/etc/shadow Contains the encrypted password information for users’ accounts
and optionally the password aging information.
/etc/xinetd.d Contains a set of files, each of which defines a network service
that the xinetd daemon listens for on a particular port.
/etc/syslogd.conf The configuration file for the syslogd daemon. syslogd is
the daemon that takes care of logging (writing to disk) messages coming
from other programs to the system.
/var Contains variable data like system logging files, mail and
printer spool directories, and transient and temporary files.
/var/log Log files from the system and various programs/services, especially
login (/var/log/wtmp, which logs all logins and logouts into the system)
and syslog (/var/log/messages, where all kernel and system program messages
are usually stored).
/var/log/messages System logs. The first place you should look at if your system
is in trouble.
/var/log/utmp Active user sessions. This is a data file and as such it
cannot be viewed normally.
/var/log/wtmp Log of all users who have logged into and out of the system.
The last command can be used to access a human readable form of this file.
Apache Shell Commands Some of the basic and helpful Apache commands.
httpd -v Outputs the build date and version of the Apache server.
httpd -l Lists compiled-in Apache modules
httpd status Only works if mod_status is enabled; shows a page of active connections
service httpd restart Restarts the Apache web server
MySQL Shell Commands Some of the basic and helpful MySQL commands.
mysqladmin processlist Shows active mysql connections and queries
mysqladmin processlist |wc -l Show how many current open connections there are to mysql
mysqladmin drop database Drops/deletes the selected database
mysqladmin create database Creates a mysql database
mysql -u username -ppassword databasename < data.sql Restores a MySQL database from data.sql
mysqldump -u username -ppassword database > data.sql Backs up a MySQL database to data.sql
echo "show databases" | mysql -u root -ppassword | grep -v Database Shows all databases in the MySQL server.
mysqldump -u root -ppassword database > /tmp/database.exp Dumps the database, including all data and structure, into /tmp/database.exp

Note that there is no space between -p and the password; with a space, mysql prompts for a password and treats the following word as a database name.

Getting to the system console in Linux

People who work with Sun or Cisco network equipment need to be able to connect to the console port on their devices. In Windows, you can simply fire up HyperTerminal to get basic access to your devices. If you are using Linux, then you need to know how this can be done with an application called Minicom.

The Connection Kit

Start with a Cisco rollover cable (DB9 to RJ45) and add an RJ45-to-DB9 adapter, an RJ45-to-DB25 adapter, and a null modem adapter.

If you do not have a serial port (like most new laptops), then you need to purchase a USB to Serial adapter that supports Linux. This device will allow you to use the standard Cisco cable, which has a serial port on one end.

Install Minicom

You can easily install Minicom by using "System > Administration > Synaptic Package Manager". Search for "minicom" and choose to install the package. Click "Apply" and Minicom should be installed within a few seconds. On Red Hat and Fedora, use yum; on Ubuntu, use apt-get.

Find the name of your serial port

The first thing you need to find out is which device your serial port is mapped to. The easiest way to do this is to connect the console cable to a running Cisco device. Now open up a Terminal using "Applications > Accessories > Terminal" and type this command:

dmesg | grep tty

Look in this output for words that contain "tty". In this case, it is "ttyS0". That means the name of the device that corresponds to your serial port is "ttyS0". Now we are ready to configure Minicom to use this information. A USB adapter will probably show up as /dev/ttyUSB0.

Configure Minicom

Open a terminal using "Applications > Accessories > Terminal". Now type this command to enter the configuration menu of Minicom:

sudo minicom -s

Use the keyboard arrow keys to select the menu item labeled "Serial Port Setup" and then hit "Enter". This opens the serial port settings window.

Change your settings to match the following. Here is what I had to change:

* Change the line speed (press E) to "9600"
* Change the hardware flow control (press F) to "No"
* Change the serial device (press A) to "/dev/ttyS0"
o Be sure to use the device name that you learned in the previous step

Once your settings are correct, hit "Escape" to go back to the main menu. Next, select "Save setup as dfl" and hit "Enter" to save these settings to the default profile. Then select "Exit Minicom" to exit Minicom.

To find out if you have configured Minicom correctly, type this command in the terminal:

sudo minicom

After entering your Ubuntu user password, you should be connected to your device.

Note: You may want to delete the Minicom init string if you see a bunch of gibberish every time you connect to a device. To do this, enter Minicom configuration with:

sudo minicom -s

Then select "Modem and dialing". Press "A" to edit the Init string, and delete all characters so that it becomes empty. Make sure you save this to the default profile with "Save setup as dfl". You should no longer see gibberish when you connect to devices.

Create a desktop launcher

If you want to have quicker access to Minicom, you can create a desktop launcher.

1. Right-click on the desktop and choose "Create launcher"
2. Click on "Icon" and choose the picture you want to use
3. Use the "Type" pull-down menu and select "Application in terminal"
4. Create a name like "Cisco Console" in the field labeled "Name"
5. Enter this command into the field labeled "Command"
* sudo minicom
6. Hit "OK" and your desktop launcher is ready for you to use.

Excluding files from tar

A solution is to use the X flag to tar. This flag specifies that the matching argument to tar is the name of a file that lists files to exclude from the archive. Here is an example:

% find project ! -type d -print | \
egrep '/,|%$|~$|\.old$|SCCS|/core$|\.o$|\.orig$' > Exclude
% tar cvfX project.tar Exclude project

In this example, find lists all files in the directories but does not print the directory names explicitly. If you have a directory name in an exclude list, tar will also exclude all the files inside that directory. egrep is then used as a filter to exclude certain files from the archive. Here, egrep is given several regular expressions to match certain files. This expression seems complex but is simple once you understand a few special characters.

A breakdown of the patterns and examples of the files that match these patterns is given here:

Instead of specifying which files are to be excluded, you can specify which files to archive using the -I option. As with the exclude flag, specifying a directory tells tar to include (or exclude) the entire directory. You should also note that the syntax of the -I option is different from the typical tar flag. The next example archives all C files and makefiles. It uses egrep’s () grouping operators to make the $ anchor character apply to all patterns inside the parentheses:

% find project -type f -print | \
egrep '(\.[ch]|[Mm]akefile)$' > Include
% tar cvf project.tar -I Include

I suggest using find to create the include or exclude file. You can edit it afterward, if you wish. One caution: extra spaces at the end of any line will cause that file to be ignored.

One way to debug the output of the find command is to use /dev/null as the output file:

% tar cvfX /dev/null Exclude project
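Here is the whole exclude workflow run on a throwaway tree, so you can watch the exclusion working before pointing it at real data (all file names are made up for the demo):

```shell
cd "$(mktemp -d)"                # work in a scratch directory
# Build a tiny project tree with some files we want and some we don't.
mkdir -p project/src
echo 'int main(void) { return 0; }' > project/src/main.c
touch project/src/main.o project/core
# List candidate files, filter the junk into an exclude file, then archive.
find project ! -type d -print | egrep '\.o$|/core$' > Exclude
tar cvfX project.tar Exclude project
tar tf project.tar               # main.c is in; main.o and core are not
```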

Including Other Directories

There are times when you want to make an archive of several directories. You may want to archive a source directory and another directory like /usr/local. The natural, but wrong, way to do this is to use the command:

% tar cvf /dev/rmt8 project /usr/local


When using tar, you must never specify a directory name starting with a slash (/). This will cause problems when you restore a directory.

The proper way to handle the incorrect example above is to use the -C flag:

% tar cvf /dev/rmt8 project -C /usr local

This will archive /usr/local/… as local/….
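You can verify the effect of -C with a scratch tree standing in for /usr (the paths here are illustrative):

```shell
cd "$(mktemp -d)"                    # work in a scratch directory
# Simulate /usr/local to see how -C stores relative paths.
mkdir -p fakeusr/local
echo "data" > fakeusr/local/file.txt
tar cf out.tar -C fakeusr local      # like: tar cf /dev/rmt8 -C /usr local
tar tf out.tar                       # lists local/ and local/file.txt
```

The archive contains local/file.txt with no leading slash or fakeusr/ prefix, which is exactly what makes later restores safe.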
Type Pathnames Exactly

For the above options to work when you extract files from an archive, the pathname given in the include or exclude file must exactly match the pathname on the tape.

Here’s a sample run. I’m extracting from a file named appe.tar. Of course, this example applies to tapes, too:

% tar tf appe.tar

Next, I create an exclude file, named exclude, that contains the lines:


Now, I run the following tar command:

% tar xvfX appe.tar exclude
x appe, 6421 bytes, 13 tape blocks
x code/appendix/font_styles.c, 3457 bytes, 7 tape blocks
x code/appendix/xmemo.c, 10920 bytes, 22 tape blocks
x code/appendix/xshowbitmap.c, 20906 bytes, 41 tape blocks
code/appendix/zcard.c excluded
code/appendix/zcard.icon excluded

Exclude the Archive File!

If you’re archiving the current directory (.) instead of starting at a subdirectory, remember to start with two pathnames in the Exclude file: the archive that tar creates and the Exclude file itself. That keeps tar from trying to archive its own output!

% cat > Exclude
% find . -type f -print | \
egrep ‘/,|%$|~$|\.old$|SCCS|/core$|\.o$|\.orig$’ >>Exclude
% tar cvfX somedir.tar Exclude .

In that example, we used cat > to create the file quickly; you could use a text editor instead. Notice that the pathnames in the Exclude file start with ./; that’s what the tar command expects when you tell it to archive the current directory (.). The long find/egrep command line uses the >> operator to add other pathnames to the end of the Exclude file.

Or, instead of adding the archive and exclude file’s pathnames to the exclude file, you can move those two files somewhere out of the directory tree that tar will read.

Setup an FTP user account minus shells

It’s important to give your FTP-only users no real shell account on the Linux system. That way, if for any reason someone manages to get out of the FTP chrooted environment, they will not have the possibility of executing any user tasks, since they don’t have a bash shell. First, create new users for this purpose.

These users will be the users allowed to connect to your FTP server.

This has to be separate from a regular user account with unlimited access because of how the chroot environment works. Chroot makes it appear from the user’s perspective as if the level of the file system you’ve placed them in is the top level of the file system.

Use the following command to create users in the /etc/passwd file. This step must be done for each additional new user you allow to access your FTP server.

[root@deep ] /# mkdir /home/ftp
[root@deep ] /# useradd -d /home/ftp/ftpadmin/ -s /dev/null ftpadmin > /dev/null 2>&1
[root@deep ] /# passwd ftpadmin

Changing password for user ftpadmin
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully


The mkdir command will create the ftp directory under the /home directory to handle all FTP users’ home directories we’ll have on the server.

The useradd command will add the new user named ftpadmin to our Linux server.

Finally, the passwd command will set the password for this user ftpadmin.

Once the /home/ftp/ directory has been created, you don’t have to use this command again for additional FTP users.


Edit the /etc/shells file (vi /etc/shells) and add a non-existent shell name, like null. This fake shell will limit access on the system for FTP users.

[root@deep ] /# vi /etc/shells


/dev/null, This is our added non-existent shell. With Red Hat Linux, a special device name, /dev/null, exists for purposes such as these.

Now, edit your /etc/passwd file and manually add /./ to divide the /home/ftp directory from the /ftpadmin directory, where the user ftpadmin should be automatically chdir’d to. This step must be done for each FTP user you add to your passwd file.


To read:


The account is ftpadmin, but you’ll notice the path to the home directory is a bit odd. The first part, /home/ftp/, indicates the filesystem that should be considered their new root directory. The dot (.) divides that from the directory they should be automatically chdir’d (change directory’d) into: /ftpadmin/.

Once again, the /dev/null part disables their login as a regular user. With this modification, the user ftpadmin now has a fake shell instead of a real shell resulting in properly limited access on the system.
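Putting it together, the resulting /etc/passwd entry for ftpadmin would look something like this (the UID, GID, and comment field are illustrative; only the home-directory and shell fields matter here):

```
ftpadmin:x:502:502:FTP admin:/home/ftp/./ftpadmin/:/dev/null
```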

Cryptographic File System under Linux HOW-TO

March 14, 1996
Copyright (C) 1996 Alexander O. Yuriev
CIS Laboratories

This document describes how to compile, install and setup CFS
that was written by Matt Blaze of AT&T, under Linux. The following
copyright statement copied directly from CFS 1.12 describes
the restrictions on the CFS usage:

* The author of this software is Matt Blaze.
* Copyright (c) 1992, 1993, 1994 by AT&T.
* Permission to use, copy, and modify this software without fee
* is hereby granted, provided that this entire notice is included in
* all copies of any software which is or includes a copy or
* modification of this software and in all copies of the supporting
* documentation for such software.
* This software is subject to United States export controls. You may
* not export it, in whole or in part, or cause or allow such export,
* through act or omission, without prior authorization from the United
* States government and written permission from AT&T. In particular,
* you may not make any part of this software available for general or
* unrestricted distribution to others, nor may you disclose this software
* to persons other than citizens and permanent residents of the United
* States and Canada.

Although the information in this document is believed to be
correct, neither the Author, nor CIS Laboratories, nor Temple University
provides any kind of WARRANTY, and none of them is responsible for
what happens if you follow these guidelines. The information in this
document is provided AS IS!


CFS provides application-independent encryption/decryption at the
filesystem layer that requires no modification of the underlying
filesystem code nor any kind of modification of the kernel source.
The symmetric cipher implemented in the mainstream version of CFS is
based on a modified DES cipher running in CBC mode, which makes a
brute-force attack against the usual 56-bit DES key-space
unrealistic. The structure of CFS makes replacing the mainstream DES
cipher with Fast-DES or any other symmetric cipher an extremely
straightforward process. Please refer to the "white" paper about CFS
for more information.


CFS does not compile "out of the box" under Linux. Follow these
instructions to get CFS running on your Linux system. There are
several methods to make CFS work under Linux, the cleanest of
which is based on the modifications performed by Olaf Kirch. His
version of CFS is available from:

Olaf signed the modified archive. The PGP signature for the modified
version of the cfs-1.1.2 can be obtained from

In single-user mode, compile CFS by using the "make" command.

After compilation is complete, install "cfsd", "cdetach", "ccat",
"cmkdir", "cname" and "cattach" into the /usr/local/sbin directory
with ownership "root:wheel" and access mode "551".
Generate a list of MD5 hashes of the clean binaries. Now copy these
files, together with "md5sum", to media such as a CD image or a
floppy, and make the media write-protected.
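The install-and-hash steps above can be sketched as follows. This is only a sketch: it uses stand-in files in scratch directories so it can run unprivileged. On a real system, SRC would be the CFS build directory, DEST would be /usr/local/sbin (with "-o root -g wheel" added to install), and the hash list would end up on write-protected media rather than in a temporary directory.

```shell
# Scratch directories standing in for the build tree, /usr/local/sbin,
# and the write-protected media
SRC=$(mktemp -d); DEST=$(mktemp -d); MEDIA=$(mktemp -d)

# Stand-ins for the six compiled CFS binaries
for f in cfsd cdetach ccat cmkdir cname cattach; do
    printf 'stand-in for %s\n' "$f" > "$SRC/$f"
done

# Install with access mode 551 (add -o root -g wheel when run as root)
install -m 551 "$SRC"/* "$DEST"

# Record MD5 hashes of the clean binaries for later verification
( cd "$DEST" && md5sum cfsd cdetach ccat cmkdir cname cattach > "$MEDIA/cfs.md5" )
cat "$MEDIA/cfs.md5"
```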

Create the directory /.cfsfs, which will be used as a hook for the
CFS server. Make that directory owned by root:root and protected
with access mode "000". Create the directory /securefs, which will
become the root of the CFS tree.
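A sketch of those two steps, run here under a scratch prefix so it works unprivileged. On the real system the prefix is simply /, and /.cfsfs must also be chown'd to root:root:

```shell
PREFIX=$(mktemp -d)              # stands in for / on the real system
mkdir "$PREFIX/.cfsfs" "$PREFIX/securefs"
chmod 000 "$PREFIX/.cfsfs"       # hook directory for the CFS server
# chown root:root "$PREFIX/.cfsfs"   # needs root; shown for completeness
stat -c '%a' "$PREFIX/.cfsfs"
```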

Add the following lines into your /etc/rc.d/rc.local:

echo -n "Initializing secure filesystem: "
if [ -x /usr/local/sbin/cfsd ]; then
        /usr/local/sbin/cfsd > /dev/null
        echo -n "cfsd "
        /bin/mount -o port=3049,intr localhost:/.cfsfs /securefs
        echo -n "loopback "
        echo "done"
else
        echo "Cryptographic Filesystem is not installed"
fi

Users of the Caldera Network Desktop and Red Hat Commercial Linux
distributions should add the file "cfsfs" that is attached at the end
of this document to their /etc/rc.d/init.d directory. Then symlink
the file "S65cfsfs" to it in the appropriate run-level directories
using the command:

ln -s ../init.d/cfsfs S65cfsfs

in /etc/rc.d/rcX.d, where X is the run-level number. Next, add the line:

/.cfsfs localhost

to /etc/exports. Finally, add the line:


to the /etc/hosts.allow file.
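The run-level symlink step above can be sketched as a loop. The run levels used below (3 and 5) are an assumption to be adjusted for your setup, and the demonstration runs against a scratch copy of the /etc/rc.d layout so it can execute unprivileged:

```shell
RCD=$(mktemp -d)                       # stands in for /etc/rc.d
mkdir -p "$RCD/init.d" "$RCD/rc3.d" "$RCD/rc5.d"
touch "$RCD/init.d/cfsfs"              # the script attached at the end of this document

# Create the S65cfsfs link in each run-level directory
for X in 3 5; do
    ( cd "$RCD/rc$X.d" && ln -s ../init.d/cfsfs S65cfsfs )
done
readlink "$RCD/rc3.d/S65cfsfs"
```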

You should now restart your computer. When it comes back up in
multiuser mode, issue the mount command to verify that CFS is running.
If everything was successful, you should see a new line in the list of
mounted filesystems:
localhost:/.cfsfs on /securefs type nfs (rw,port=3049,intr,addr=


To create a CFS-protected directory called "secret", use the command:

cmkdir secret

You will be asked to supply and verify a passphrase. If you
succeed, a new directory named "secret" will appear in the current
directory. This directory contains encrypted information, which
is accessible only in encrypted form unless the directory is
attached to the CFS tree.

In order to add the "secret" directory to the list of directories
managed by CFS, it has to be attached to the CFS tree using the
command:
cattach secret Big-Secret

CFS will ask you to type the access passphrase. If it matches
the passphrase supplied to the "cmkdir" command that originally
created the directory, the information in the secret directory
becomes accessible in non-encrypted form under /securefs/Big-Secret
to the user who supplied the correct passphrase. Please note that
it usually takes about a minute to attach a protected directory to
the CFS tree. When the user is finished manipulating the information,
they should issue the command:

cdetach Big-Secret

to destroy the access key. This command removes the directory
"secret" from the list of directories managed by CFS making it
impossible to access cleartext information in that directory until
it is again attached using the "cattach" command.


In order to grant a user access to encrypted parts of the directory
tree, CFS requires the user to supply a passphrase that is used to
generate a set of access keys. A compromised passphrase allows an
intruder to access the encrypted information through the normal Unix
security model, so it is extremely important to protect access
passphrases. There are two basic ways intruders can gain access to
your passphrase: (1) sniffer attacks and (2) attacks against the
protocol. The following simple guidelines can be used to minimize
the possibility of a successful attack against CFS:

1. Make sure that the CFS binaries are not compromised in
any form.

* Ensure that "cattach", "ccat", "cmkdir", "cname",
the CFS server "cfsd" and, finally, "cdetach"
are not replaced with Trojan versions that record
access passphrases or, in the case of "cfsd",
access keys.

* Ensure that the CFS server is not compromised in a
way that it does not perform the encryption
procedure correctly.

* An attack against "cdetach" usually involves a
small modification that prevents correct
destruction of access keys, allowing an intruder
to gain access to a supposedly detached part of
the directory tree.

The simplest way to verify that the binaries are not
compromised is to statically link them and place them on
a CD. Another way is, again, to statically link the
binaries, use the "md5sum" message-digest calculator, and
write their MD5 hashes onto write-protected media.
Prior to using any CFS programs on a system, mount
the floppy disk and compare the MD5 hashes of the binaries
on the system with the hashes of the clean statically
linked copies located on the floppy disk, replacing any
compromised versions.

2. Keyboard grabbers capture passphrases as they are being
typed; they rely on the fact that most users are careless
enough to ignore the following simple guidelines:

1. When typing a passphrase in an xterm, make sure
that the xterm program is not compromised and use
the "Secure Keyboard" option while typing the
passphrase. This prevents keystrokes from being
intercepted by X grabbers.

2. Type passphrases from a terminal attached directly
to a serial port of the system when such a terminal
is available.

3. Make sure that your pty and tty permissions
disallow others from reading your keystrokes
directly from the device node.

3. Never type your passphrase across the network, even if
the network is located behind a firewall and you trust
everybody who is connected to your network not to use
sniffers. This also applies to networks that use
scrambling routers, because there is absolutely no
guarantee that routers use strong encryption or do not
have a back door or a loophole that could potentially
allow an intruder to defeat the encryption used by a
router. If you have to type your passphrase across the
network, do it only over an encrypted tunnel between
systems, such as the one created by the deslogin(8)
protocol.

4. Always detach CFS-protected trees from the filesystem when
not using them, even when you are leaving your system for
"only" a couple of minutes.
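The integrity check from guideline 1 can be sketched with "md5sum -c". The directory, hash list, and stand-in "binary" below are fabricated so the sketch is runnable unprivileged; on a real system the hash list comes from the write-protected media created at install time and covers all six CFS binaries:

```shell
BIN=$(mktemp -d)                            # stands in for /usr/local/sbin
echo 'clean binary' > "$BIN/cattach"
( cd "$BIN" && md5sum cattach > cfs.md5 )   # hash list from the clean install

# Verification pass: prints "cattach: OK" while the binary is intact
( cd "$BIN" && md5sum -c cfs.md5 )

# Simulate a Trojaned replacement; the check now reports FAILED
echo 'trojan' > "$BIN/cattach"
( cd "$BIN" && md5sum -c cfs.md5 ) || echo "MISMATCH: restore from clean media"
```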


At the moment, there is only one problem that can be reproduced:
a "Permission denied" error is generated when a user attempts to
access files located on a compact disc.


The following people helped in the preparation of this
document: Topher Hughes of Dickinson College, Elie Rosenblum of
Montgomery Blair High School, Mario D. Santana of Florida
State University, Daniel P. Zepeda, and Olaf Kirch.

# $Header: /Secure/secure-doc/linux/CFS/RCS/CFS-Doc,v 1.4 1996/03/15 04:49:37 alex Exp alex $
# cfsfs Crypto filesystem
# Author: Alexander O. Yuriev
# Derived from cron

# Source function library.
. /etc/rc.d/init.d/functions

# See how we were called.
case "$1" in
  start)
        echo -n "Starting Crypto Filesystem: "
        if [ -x /usr/local/sbin/cfsd ]; then
                /usr/local/sbin/cfsd > /dev/null
                /bin/mount -o port=3049,intr localhost:/.cfsfs /securefs
                echo "done"
                touch /var/lock/subsys/cfsfs
        else
                echo "Crypto Filesystem is not installed"
        fi
        ;;
  stop)
        echo -n "Stopping Crypto filesystem: "
        umount /securefs
        killproc cfsd
        rm -f /var/lock/subsys/cfsfs
        echo "done"
        ;;
  *)
        echo "Usage: cfsfs {start|stop}"
        exit 1
        ;;
esac

exit 0
====================[end of cfsfs]======================

Partitioning Scheme for Linux

In Linux, there are a few main directories that can be placed on separate partitions.
/ (root FS)

Putting directories such as /etc and /bin on their own partitions doesn’t work too well. The reason is that /etc contains /etc/fstab, which tells the computer how and where the partitions should be mounted, and /bin contains the programs that mount them. /etc and /bin could not be mounted without already being mounted, so unless you like paradoxes, don’t try putting /etc and /bin on their own partitions.

The number of partitions you want varies from computer to computer. Generally, more partitions (formatted correctly) means faster read/write times for files, but having many partitions may not be reasonable on some computers (as it may cause problems).

In my opinion, you should always have these three partitions:

The highest number of (sane) partitions anyone should have is as follows:
/media (I use a separate partition for media)

If you format each partition correctly, either form (to any degree) will work well.

I use XFS for everything but my /boot partition. It works wonders with all sorts of files. To make an XFS partition, just use mkfs.xfs -f /dev/hdXY, where X is the hard drive letter and Y is the partition number.

Making an ext2 partition is as simple as mke2fs /dev/hdXY, once again, where X is the hard drive letter and Y is the partition number.
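To experiment with either command without touching a real disk, you can point mkfs at a file-backed image instead of /dev/hdXY. The sketch below uses mke2fs from e2fsprogs (mkfs.xfs works the same way but requires a much larger image); the -F flag tells mke2fs to accept a regular file instead of a block device:

```shell
IMG=$(mktemp)            # file-backed stand-in for /dev/hdXY
truncate -s 16M "$IMG"   # 16 MB disk image
mke2fs -F -q "$IMG"      # -F: allow a regular file, -q: quiet
echo "ext2 filesystem created in $IMG"
```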