Parallel Processing with Perl for Bioinformatics

Hi guys,

Here is a small tutorial on how to make the best use of multiple processors for bioinformatics analysis. One good way is using Perl threads and forks. It is important to understand how threads and forks work before implementing them, and that background will make this tutorial easier to follow.

Many times in bioinformatics we need to deal with huge datasets, often more than 100GB in size. The traditional way to analyze a file is a while loop:

while (my $line = <FILE>) {
    # do something with $line
}

This is very slow (we are using only one processor), and if the dataset has 500 million lines it can take more than a day to iterate through the whole thing. So how do we make the best use of all our processors and get the work done quickly?

Here is a very simple and efficient technique in Perl which I have been using. I am more inclined towards using Perl forks than Perl threads.

One of the oldest ways to fork is:

my @children;
my $pid = fork();
if (!defined $pid) {
    die "Couldn't fork: $!";
}
elsif ($pid == 0) {
    # child process: your code here
    exit(0);
}
else {
    # parent process
    push(@children, $pid);
}

## wait for the child processes to finish
foreach my $child (@children) {
    waitpid($child, 0);
}

What fork does is create a child process that gets its own copy of the parent's variables and code and runs detached from the parent, so a separate process is created (which the operating system can schedule on a separate processor). That's it!! One big disadvantage of forking is that it is very difficult to share variables among the different processes: each child only sees its own copy, so changes made in a child never reach the parent. I will show you an easy way to pass results back later, but it still has its own drawbacks.
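
Here is a minimal sketch (the $counter variable is just made up for illustration) showing that a child only changes its own copy:

use strict;
use warnings;

my $counter = 0;

my $pid = fork();
die "Couldn't fork: $!" unless defined $pid;

if ($pid == 0) {
    $counter = 42;    # only the child's copy changes
    exit(0);
}

waitpid($pid, 0);
print "parent still sees counter = $counter\n";   # prints 0, not 42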

Okay, and if you really do not want to write the fork logic yourself, that's okay too. There are many useful modules which do it for you very efficiently. One really useful module is Parallel::ForkManager. You can use Parallel::ForkManager to limit the number of forks you generate (i.e. the number of processors you want to use).

Simple usage:
use Parallel::ForkManager;

my $max_processors = 8;
my $fork = Parallel::ForkManager->new($max_processors);
foreach (@dna) {
    $fork->start and next;   # do the fork
    # your code here
    $fork->finish;           # do the exit in the child process
}
$fork->wait_all_children;

So a child process is forked for each element of the array, with at most 8 running at any one time. When one child finishes, Parallel::ForkManager starts a new one, and thus you keep all your processors busy analyzing the data. Now, suppose you have 8 child processes and want them all to write to one file. You need to lock the file to do this, because otherwise the buffered output from different children will get interleaved. You can lock the file with flock.

use Fcntl qw(:flock);

open(my $QUAL, '>>', 'myfile.txt') or die "can't open file: $!";
flock($QUAL, LOCK_EX) or die "can't lock file: $!";
print $QUAL $output;
flock($QUAL, LOCK_UN) or die "$!";
close $QUAL;

I would not suggest using flock when dealing with multiple processes, because it decreases efficiency (each child process must wait for the lock to be released by another child). Instead, I would suggest having each fork write to a separate file and concatenating them after the processing.
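
Here is a hedged sketch of that pattern; the @chunks array, the analyze() subroutine and the file names are placeholders for whatever your real pieces of work are:

use strict;
use warnings;
use Parallel::ForkManager;

my @chunks = ('chunk_a', 'chunk_b', 'chunk_c');   # placeholder pieces of work
my $pm = Parallel::ForkManager->new(8);

foreach my $i (0 .. $#chunks) {
    $pm->start and next;

    # each child writes to its own file, so no locking is needed
    open my $out, '>', "result_part_$i.txt" or die "can't write: $!";
    print {$out} analyze($chunks[$i]);
    close $out;

    $pm->finish;
}
$pm->wait_all_children;

# join the pieces in order afterwards
system("cat result_part_*.txt > final_result.txt") == 0
    or die "concatenation failed: $?";

sub analyze { my ($chunk) = @_; return "analyzed $chunk\n"; }   # placeholder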

Putting it all together, if you have 100GB of data you can do this:

Step 1: split the dataset into equal pieces according to the number of processors you have. This may take a few hours (about 2-3 hrs for a 100GB file). You can use the unix "split" command for this, for example:

my $lines_per_file = int($number_of_lines_in_your_dataset / $max_processors);
my $split_files = `split -l $lines_per_file your_file.fasta file_name`;

(If the input is FASTA, pick a line count that does not cut an entry in half, e.g. a multiple of the number of lines per record.)

Step 2: open the directory containing your split files and start Parallel::ForkManager. For example:

opendir(my $DIR, $split_files_directory) or die $!;   ### open the directory
my $super_fork = Parallel::ForkManager->new($max_processors);
while (my $file = readdir($DIR)) {                    ### read the directory
    next if $file =~ /^\./;
    print $file, "\n";
    ########## start fork ##########
    my $pid = $super_fork->start and next;
    # whatever you want to do with the split file goes here,
    # e.g. analyze my piece of $file
    ######### end fork #############
    $super_fork->finish;
}
$super_fork->wait_all_children;

So basically each processor will be busy with its own piece of the data (its split file), and you have 8 processes running at once without interfering with each other. Again, I would not suggest writing the output from every child process to one file (for the reasons above). Write the output from each fork to a separate file and concatenate them at the end. That's it, you have just increased your program's speed by roughly 8 times!! Isn't it easy?
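
If what you need back from each child is only a small summary (a count, say) rather than a big output file, Parallel::ForkManager can also hand a Perl data structure back to the parent through its run_on_finish callback. A minimal sketch, assuming the split files sit in a hypothetical split_dir/ directory:

use strict;
use warnings;
use Parallel::ForkManager;

my @files = glob("split_dir/*");          # hypothetical split-file directory
my $pm    = Parallel::ForkManager->new(8);

my %line_counts;                          # filled in by the parent
$pm->run_on_finish(sub {
    my ($pid, $exit_code, $ident, $signal, $core_dump, $data) = @_;
    $line_counts{ $data->{file} } = $data->{lines} if $data;
});

foreach my $file (@files) {
    $pm->start and next;

    open my $fh, '<', $file or die "can't read $file: $!";
    my $lines = 0;
    $lines++ while <$fh>;
    close $fh;

    # the second argument to finish is passed back to the parent's callback
    $pm->finish(0, { file => $file, lines => $lines });
}
$pm->wait_all_children;

print "$_: $line_counts{$_} lines\n" for sort keys %line_counts;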

Note:
You may worry about concatenating the output each child generates, since that also takes some time (remember, 100GB). You could use MySQL's LOAD DATA LOCAL INFILE command to load all the files into a single table (this should take about 3 hrs for a 100GB dataset) and then export the whole table into one file. This may be faster than simply concatenating them with the "cat" command (correct me if I am wrong).

Or, a much simpler way is to skip the separate concatenation step and pipe the combined output straight into the next tool:

cat output_dir/* | my_pipe > final_file

(my_pipe here stands for whatever program consumes the combined output.)

That's it guys!! Enjoy programming and please do comment. I am not a computer scientist, so forgive me for any mistakes, and if you find any please report them. Thank you.

Sandeep


Genetics Tutorial

I feel it's the best animated genetics tutorial ever made. It explains even very minute details. Thanks to 23andMe for providing this beautiful video.

https://www.23andme.com/gen101/genes/


RNA Structure Prediction

Should be the best introductory paper on RNA structure prediction

Click to access 20080102_Capriotti_Marti-Renom_CBIO08.pdf

 


The Allergy Gene – The Scientist – Magazine of the Life Sciences

The Allergy Gene – The Scientist – Magazine of the Life Sciences.


Benchmark perl !!

The Benchmark module is a great tool for measuring how long code takes to run. The output is usually reported in CPU time. This module gives us a way to profile and optimize our code. With the advent of petascale computing and multicore processors, it is becoming a necessity to know the CPU time taken by our Perl programs.

This is the simplest way to use the module.

Example 1:

use Benchmark;

my $first_time = Benchmark->new;

# ... our code ...

my $second_time = Benchmark->new;

my $final_difference = timediff($second_time, $first_time);

print "the code took: ", timestr($final_difference), "\n";

That was a very simple way to get the time difference; we can use it to measure the time taken by a particular part of the program.

A more sophisticated way:

use Benchmark;

sub first_sub {
    my (@arguments) = @_;
    # ... code to benchmark ...
}

timethese(100, { first => sub { first_sub('some', 'arguments') } });

The first argument to timethese is the number of iterations (here each snippet is evaluated 100 times); a negative value tells Benchmark to keep running until at least that many CPU seconds have been used.
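
Benchmark also provides cmpthese, which runs the same kind of timing but prints a table comparing the alternatives side by side. A small sketch comparing two made-up ways of building a string:

use strict;
use warnings;
use Benchmark qw(cmpthese);

my @ids = (1 .. 1000);

# -1 means: run each snippet for at least 1 CPU second
cmpthese(-1, {
    concat => sub { my $s = ''; $s .= "$_," for @ids; },
    join   => sub { my $s = join ',', @ids; },
});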

I hope this very small Benchmark tutorial helps people get started.

-sandeep


ChIP-seq Wow!!!

ChIP-seq and its applications have opened new frontiers in genomics. The applications are totally amazing, and this area of genomics still has a long way to go. Illumina's ChIP-seq offering is a fully automated platform for whole-genome ChIP sequencing (http://www.illumina.com/Documents/products/datasheets/datasheet_chip_sequence.pdf).


CUDA Programming

I think this topic will be the future of bioinformatics. The Computational Algorithms course by Dr. David Bader gives a perfect introduction to the applications of CUDA programming in bioinformatics. The applications are plentiful, e.g. RNA structure prediction, folding, protein modelling, genome assembly and many others. I think this paper gives a good introduction to CUDA programming; here is the link: http://www.springerlink.com/content/er69l40p86777k27/fulltext.pdf . With CUDA we can make some programs run up to 100 times faster than what we do now. This opens a new subfield of high-performance computing in bioinformatics. Bye CPU, welcome CUDA.

-Sandeep


Metabolic Reprogramming therapy for cancer

http://www.nature.com/nrd/journal/v9/n6/box/nrd3137_BX1.html


Microarray Data Analysis: Separating the Curd from the Whey – The Scientist – Magazine of the Life Sciences

Microarray Data Analysis: Separating the Curd from the Whey – The Scientist – Magazine of the Life Sciences.


Bacteria form electric circuits? – The Scientist – Magazine of the Life Sciences

Bacteria form electric circuits? – The Scientist – Magazine of the Life Sciences.
