Linux Utils

= SSH =

Copy files to the server:   scp -r  @:/path
Copy files from the server: scp -r @:/path .

= csplit =

csplit - split a file into sections determined by context lines

csplit myfastafile.fa '/>/' '{*}'
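As a concrete illustration, splitting a small FASTA file into one file per record (seqs.fa is a hypothetical input created on the spot; -s and -z are GNU csplit options added here to silence the byte counts and drop the empty piece before the first header):

```shell
# Create a small FASTA file for illustration (seqs.fa is hypothetical).
printf '>seq1\nACGT\n>seq2\nTTGG\n' > seqs.fa

# '/>/' splits before every header line; '{*}' repeats as long as it matches.
# -s suppresses the size output, -z removes the empty leading piece.
csplit -s -z seqs.fa '/>/' '{*}'

ls xx*        # xx00 holds seq1, xx01 holds seq2
```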

= Process Management =

top              # top consumers of memory and CPU
who              # who is logged into the system
w                # which users are logged in and what they are doing
ps               # processes running by the current user
ps -e            # all processes on the system; see also the '-a' and '-x' arguments
ps aux | grep    # all processes of one user
ps -o %t -p      # how long a particular process has been running
fg               # resume a suspended process and bring it into the foreground
bg               # resume a suspended process but keep it running in the background
Ctrl-z           # suspend (put to sleep) the foreground process
Ctrl-c           # kill the process currently running in the foreground
kill             # kill a specific process
kill -9          # kill with force
renice -n        # change the niceness, which ranges from -20 to 19;
                 # the higher the value, the lower the priority; default is 0
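Putting a few of these together, a minimal sketch that starts a background job, inspects it with ps, and terminates it (sleep 60 stands in for any long-running process):

```shell
# Start a stand-in long-running job in the background.
sleep 60 &
pid=$!

# Show its PID, elapsed time, and command name (custom -o output format;
# the trailing '=' after each field suppresses the header line).
ps -o pid=,etime=,comm= -p "$pid"

# Terminate it politely (SIGTERM), then reap it.
kill "$pid"
wait "$pid" 2>/dev/null
```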

= Xargs =

Delete all *.txt files in a directory:
find . -name "*.txt" | xargs rm

Package all *.pl files in a directory:
find . -name "*.pl" | xargs tar -zcf pl.tar.gz

Kill all processes that match "something":
ps -u `whoami` | awk '/something/{print $1}' | xargs kill

Remove the executable permission on all files, but not on directories:
find . -type f -print0 | xargs -0 chmod a-x
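The -print0/-0 pair passes null-terminated names, so file names containing spaces survive the pipe intact. A small sketch (demo/ is a hypothetical directory created here for illustration):

```shell
# Set up a throwaway directory with an executable file whose name has a space.
mkdir -p demo
printf '#!/bin/sh\n' > "demo/a file.sh"
chmod +x "demo/a file.sh"

# Strip the executable bit from regular files only; directories keep
# theirs, so they remain traversable.
find demo -type f -print0 | xargs -0 chmod a-x

ls -l "demo/a file.sh"
```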

= Lynx browser =

The following command downloads all the web pages of a website by crawling. It creates .dat files and also some log files; the log files are useful if you want to see the errors that occurred.

lynx -crawl -traversal -realm -accept_all_cookies -connect_timeout=7 -nostatus http://websiteurl > /dev/null

Automated HTTP Response Code Checking
while read inputline
do
  url="$(echo $inputline)"
  headers="$(lynx -dump -head $url | grep -e HTTP -e Location)"
  echo "$url $headers"
  sleep 2
done < filename.txt

The basic syntax for processing a file line by line in the shell is:

while read inputline
do
  [some commands here]
done < [input filename]
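For example (items.txt is a hypothetical input file created on the spot):

```shell
# Build a small input file for illustration.
printf 'alpha\nbeta\n' > items.txt

# Read it line by line, doing something with each line.
while read inputline
do
  echo "got: $inputline"
done < items.txt
```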

Get the text from a Web page as well as a list of links
lynx -dump "http://www.example.com/"

Get the source code from a Web page with Lynx
lynx -source "http://www.example.com/"

Get the response headers with Lynx
lynx -dump -head "http://www.example.com/"

List of outgoing links on the web page
lynx -dump "http://www.example.com" | grep -o "http:.*" >file.txt

Post_data piping
With -post_data, properly formatted data for a POST form are read from stdin and passed to the form. Input is terminated by a line that starts with '---'. So you can create a file with the POST data and then pipe it into lynx:

cat datafile | lynx -post_data http://yoursite.com/cgi-bin/script.cgi

Create the data file (end input with Ctrl-d), then feed it into lynx -post_data:

cat > mydata
username=433&password=1835205&id=MP+status
Ctrl-d
lynx -post_data http://www.example.com/cgi_bin/my_cgi.cgi < ./mydata | grep key

= gzip =

To compress a directory:

tar -cvf -  | gzip -c > .tar.gz
tar -cvzf backup.tgz directory

c = create, v = verbose, z = compress with gzip, f = archive name

gzip -c file.txt > file.gz To compress file.txt into file.gz

tar cvf - bin info lib man | gzip > myFile.tar.gz To tar the directories bin, info, lib, and man and gzip them into one compressed file

tar cf - myDirectory | gzip > myDir.tgz To compress a directory: first tar it, then gzip the output.

gzip -cd old.gz | gzip > new.gz To recompress concatenated files to get better compression

gzip -1 myfile.txt To compress myfile.txt with the fastest compression; the resulting file will be larger.

gzip -cd file.gz | wc -c To get the true uncompressed size of a multi-member file. If a compressed file consists of several members, the uncompressed size and CRC reported by the --list option apply to the last member only.

gzip -9 myfile.txt To compress myfile.txt with best compression, will take longer.

cat file1 file2 | gzip > foo.gz To compress two files into foo.gz yielding a better compression by compressing all members at once.

zcat old.gz | gzip > new.gz To recompress concatenated files to get better compression

gzip -c file1 file2 > foo.gz To keep original files unchanged. If there are several input files, the output consists of a sequence of independently compressed members. To obtain better compression, concatenate all input files before compressing them.

gzip myfile.txt To compress myfile.txt to myfile.txt.gz

= alias =

Some useful aliases:

alias hgrep='history | grep '
alias suniq='sort | uniq '
alias server_name='ssh -v -l USERNAME IP_ADDRESS'
alias ll='ls -l'
alias la='ls -a'
alias ff1='/home/mpiroozn/firefox/firefox'
alias chrome='google-chrome --user-data-dir &'
alias brpm='rpm -ivh ~/RPM/*rpm'    # rpm batch install
alias psg='ps -ef | grep -i'        # usage: psg firefox

= grep =

grep for either of two strings:

grep -iE "(jpg|gif)"
egrep -i "text1|text2"

= VIM commands =

= wget vs curl =

= bash math with bc =

= Search block =

gene="A1CF"
fname="carriers.txt"
echo -e "awk 'BEGIN { RS=\"============== \"; } /^$gene/ { print RS \$0; }' $fname " \
  > tmp.$gene.search && sh tmp.$gene.search && rm -f tmp.$gene.search

Sample output:

============== A1CF ==================
LOCUS  POS     ALIAS   NVAR    TEST    P       I       DESC
chr10:52573772 T/C     W=1     9:9
chr10:52596064 T/C     W=1     0:1
chr10:52573772 SID7100 1       CASE    1
chr10:52573772 SID28756        1       CASE    1
chr10:52573772 SID28766        1       CASE    1
chr10:52573772 SID28839        1       CASE    1
chr10:52573772 SID28846        1       CONTROL 1
chr10:52573772 SID29260        1       CASE    1
chr10:52573772 SID29956        1       CASE    1
chr10:52573772 SID30088        1       CONTROL 1
chr10:52573772 SID30095        1       CONTROL 1
chr10:52573772 SID30097        1       CONTROL 1
chr10:52573772 SID30159        1       CONTROL 1
chr10:52573772 SID30192        1       CONTROL 1
chr10:52573772 SID30251        1       CONTROL 1
chr10:52573772 SID30293        1       CONTROL 1
============== ANK1 ==================