Thursday, 5 December 2013

Make an anchor tag with href but do not let user navigate away

This is handy for SEO purposes. The href on the anchor tag will still be crawled by search engines, but when a user clicks it, the browser will not navigate away from the page, and you can handle the click event however you want.

<a href="" onclick="dothis(event); return false;">Click me</a>

function dothis(e) {
    e = e || window.event;              // old-IE fallback
    if (e.preventDefault) e.preventDefault();
    else e.returnValue = false;         // old IE
    // ... handle the click here ...
    return false;                       // also cancels the inline handler
}

Sunday, 14 July 2013

Useful linux commands

OS Details

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 10.04.4 LTS
Release: 10.04
Codename: lucid

System Information

$ uname -a
Linux 2.6.32-46-generic #108-Ubuntu SMP Thu Apr 11 15:56:25 UTC 2013 x86_64 GNU/Linux

$ uname -m
x86_64

(uname -m prints the machine hardware: x86_64 means the system is a 64-bit machine; i686 or i386 means 32-bit.)

User specific resource limit get/set

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

The above command is very important. The number of TCP/IP socket connections can grow a lot; if you sense that kind of problem (too many connections at a time), try raising the "open files" limit to a higher value, 20000 is usually good enough. To change the limit, read this article: How to set ulimit in ubuntu/debian linux systems

Which user has opened how many files in sort order

$ lsof | awk '{if(NR>1) print $3}' | sort | uniq -c | sort -nr
   1256 root
    655 nishal
     16 www-data
      4 syslog
      4 ntp
      4 daemon
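The same count-and-sort pipeline can be tried on canned input; the sketch below uses hypothetical process/user rows in place of real lsof output (note the user sits in column 2 here, column 3 in real lsof output):

```shell
# Simulate a header line plus three rows, then count occurrences per user,
# exactly like the lsof pipeline above: strip header, sort, count, rank.
printf 'COMMAND USER\njava root\nmysqld mysql\nnginx root\n' \
  | awk '{if(NR>1) print $2}' | sort | uniq -c | sort -nr
```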

Check listening ports
$ netstat -nlp

change the open file limit in debian/ubuntu/linux system

Every user in Linux has a resource-limit configuration. If it is not specified explicitly, the default values are used; you can check them with the command
$ ulimit -a

Change Open file limit, ulimit on Debian/Ubuntu/Linux systems

To change it, first note that the pam limits module is not loaded by default in Ubuntu.

$ vi /etc/pam.d/su
Un-comment the following line (remove the leading #)
#session    required   pam_limits.so
so that it reads
session    required   pam_limits.so

$ vi /etc/security/limits.conf

and add the following lines to the end of the file (before the line # End of file)
*       soft    nofile   16000
*       hard    nofile   64000

Save the file and quit vi.

Now, to bring this into effect, you must restart the machine.

The two lines below will change the limit only for the mysql user:
mysql               soft    nofile          10240
mysql               hard   nofile          10240

Usually we need to raise the "open files" limit from the default of 1024 to a higher value. But before that, you should check which user is opening how many files:
$ lsof | awk '{if(NR>1) print $3}' | sort | uniq -c | sort -nr
   1256 root
    655 nishal
     16 www-data
      4 syslog
      4 ntp
      4 memcache
      4 daemon

The result above clearly shows that the root user has opened 1256 files; in such cases the default limit of 1024 will start causing IO-wait and connection-timeout issues. The steps above fix them.


How to change Open file limit, ulimit on CentOS/Fedora/Red Hat

Command to check
ulimit -n
ulimit -a

1. vi /etc/sysctl.conf and add this line: fs.file-max = 65536 (then run sysctl -p to apply it)

2. vi /etc/security/limits.conf
*          soft     nproc          16384
*          hard     nproc          65535
*          soft     nofile         16384
*          hard     nofile         65535

3. Restart the server

4. ulimit -n (to verify the new limit)
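To confirm a limit is actually in effect for a given process (not just your login shell), you can read it from /proc. The sketch below inspects the current shell; replacing "self" with a PID lets you check a running daemon such as mysqld or nginx:

```shell
# Effective resource limits of the current process (Linux-specific).
# The "open files" row shows the soft and hard nofile limits in force.
grep "open files" /proc/self/limits
```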

Mantis is not sending Email, How to fix?

I installed Mantis as a bug-tracking system and it was done quite quickly, maybe in an hour. But the miserable part: it was not sending email notifications for any update, and after more than a couple of hours it was still not fixed.

OK, Chill !!!! I got the solution

Configuring the email settings is a challenging task for most users starting out with Mantis. The confusion is partly caused because PHP (and therefore Mantis) does not give you a precise description of why it cannot deliver emails. You should check the log file /var/log/apache2/error.log and see if you can find some valuable information to proceed further.

In most cases the configuration is very easy, and the good part is that Mantis keeps the mail-related configuration variables as globals, so you just need to change them and it should start working.

1. open your config_inc.php (located in: /var/www/mantis/config_inc.php)
2. copy the following code to the file:

$g_allow_signup    = ON;  //allows the users to sign up for a new account
$g_enable_email_notification = ON; //enables the email messages
$g_phpMailer_method = PHPMAILER_METHOD_SMTP; // this is most important
$g_smtp_host = '';
$g_smtp_connection_mode = 'tls';
$g_smtp_port = 587;
$g_smtp_username = ''; //replace it with your gmail address
$g_smtp_password = '*********'; //replace it with your gmail password
$g_administrator_email = ''; //this will be your administrator email address

3. go to your Mantis homepage
4. click sign up for a new account
5. create a dummy account with your gmail address
6. press Signup
7. check your mail

If you want to use other modes of smtp_connection, you can read about them here.

If you want to use your own SMTP server without a TLS or SSL connection, it would be something like:
$g_smtp_host = '';
$g_smtp_connection_mode = '';
$g_smtp_port = 25;
$g_smtp_username = 'youraccount@example'; //replace it with the email which can access your mail server
$g_smtp_password = '*********'; //replace it with your email account password

/**
 * select the method to mail by:
 * PHPMAILER_METHOD_MAIL - mail()
 * PHPMAILER_METHOD_SENDMAIL - sendmail
 * @global int $g_phpMailer_method
 */
$g_phpMailer_method             = PHPMAILER_METHOD_MAIL;

The above code is present in the "/mantis_home/config_defaults_inc.php" file.
As you can see, Mantis provides several ways of sending email. The default is PHP's mail() function; a better option is "PHPMAILER_METHOD_SMTP", so that you can send email through any mail server.

In the same file you can also change the From address shown in the emails. For that, change these variables:
$g_from_email = ""
$g_from_name = "Mantis Nishal Bug Tracker"

Note : For any further help, use this link

#mantis not sending email notifications
#mantis email system not working
#mantis is not sending email
#mantis "sh: /usr/sbin/sendmail: not found"

Tuesday, 2 July 2013

Using compression, gzip, mod_deflate, amazon cloudfront issue with gzip

If you are not using compression, you are missing a great feature. Please enable it; you'll see a big difference in page load time.

How does gzip work over the HTTP ?

When a browser makes a request to a server, it sends an Accept-Encoding header.
Most browsers send Accept-Encoding: gzip, deflate. The server then knows that this browser accepts data compressed using gzip or deflate. Seeing Accept-Encoding: gzip, deflate, the server sends the response compressed as gzip and marks it with the response header Content-Encoding: gzip.

The server can also optionally send another header Vary: Accept-Encoding. This tells proxies to vary the object in the proxy cache based on the Accept-Encoding header. The result is that the proxy will have a compressed and uncompressed version of the file in cache (and maybe even three: uncompressed, gzip compressed, deflate compressed). Failing to provide the Vary header may result in the wrong encoding going to an incompatible browser. The Vary header was introduced in HTTP/1.1
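To get a feel for the savings gzip brings, you can compress a repetitive payload locally. The filenames below are illustrative; repetitive text like HTML compresses very well:

```shell
# Build a 1200-byte text payload ("hello world " repeated 100 times)
printf 'hello world %.0s' $(seq 1 100) > payload.txt

# Compress it the way a server would, keeping the original for comparison
gzip -c payload.txt > payload.txt.gz

# Compare the byte counts of original and compressed files
wc -c payload.txt payload.txt.gz
```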

How to use gzip/deflate?

Apache ships with the module mod_deflate; any standard installation includes it. If it is not enabled, enable it with:
$ a2enmod deflate
and reload the configuration or restart Apache. After enabling, check the configuration:

<IfModule mod_deflate.c>
          # these are known to be safe with MSIE 6
          AddOutputFilterByType DEFLATE text/html text/plain text/xml

          # everything else may cause problems with MSIE 6
          AddOutputFilterByType DEFLATE text/css
          AddOutputFilterByType DEFLATE application/x-javascript application/javascript application/ecmascript
          AddOutputFilterByType DEFLATE application/rss+xml
</IfModule>

Nginx comes with the gzip module by default.

Edit the file /etc/nginx/nginx.conf:

          gzip on;
          gzip_disable "msie6";
          gzip_vary on;
          gzip_proxied any;
          gzip_comp_level 6;
          gzip_buffers 16 8k;
          gzip_http_version 1.1;
          gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

Issues with Amazon AWS CloudFront, nginx and compression

If you are using CloudFront as a CDN to deliver static content with nginx as the origin, you may suddenly see that CloudFront is serving unzipped static content.

The reason is that CloudFront uses HTTP 1.0 for requests to the origin server, but nginx's "gzip_http_version" directive is set to 1.1 (check the configuration above). So you need to set it to 1.0:

gzip_http_version 1.0;

That will enable the compression for cloudfront requests as well.

Monday, 1 July 2013

Install latest version of Nginx

$ sudo -s
$ echo "deb$nginx/ubuntu lucid main" > /etc/apt/sources.list.d/nginx-$nginx-lucid.list
$ apt-key adv --keyserver --recv-keys C300EE8C
$ apt-get update

At this step you might get an error in upgrading
"W: GPG error: lucid Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY ABF5BD827BD9BF62"

So try this
$ wget
$ cat nginx_signing.key | sudo apt-key add -
$ apt-get update

Now Install
$ apt-get install nginx

$ /usr/sbin/nginx -V

Thursday, 27 June 2013

Redirect in nginx

From example.com to www.example.com (the domains here are placeholders; the originals were lost, so use your own):

server {
          server_name example.com;
          return    301 $scheme://www.example.com$request_uri;
}

From www.example.com to example.com:

server {
          server_name www.example.com;
          return    301 $scheme://example.com$request_uri;
}

Some people do it this way too, but it is a bad way of doing redirects as per the nginx documentation (again with placeholder domains):

server   {
   rewrite  ^/(.*)$ http://www.example.com/$1 permanent;
}

server   {
   rewrite  ^/(.*)$ http://example.com/$1 permanent;
}

Securing your website while using nginx, Deploying SSL certificates in nginx

Deploying certificates and serving HTTPS requests is very simple with nginx. Just copy the server block you wrote for serving HTTP requests and create another server block with the following changes.

server {
        listen 443;
        ssl on;
        ssl_certificate      /etc/ssl/certs/;
        ssl_certificate_key  /etc/ssl/private/server.key;

        ssl_session_cache    shared:SSL:10m;
        ssl_session_timeout  10m;

        location ~* \.(jpg|jpeg|gif|css|png|js|ico|html|txt|pdf)$ {
                root /var/www;
                access_log off;
                expires 365d;
        }

        location / {
                proxy_pass        http://localhost:8181/;

                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

                proxy_set_header X-Forwarded-Proto $scheme;
                add_header Front-End-Https   on;
        }
}

Issues and troubleshoots

1. While installing certificates, you do not necessarily need to configure the intermediate certificate the way you would in Apache. Browsers store intermediate certificates which they receive and which are signed by trusted authorities, so actively used browsers may already have the required intermediates and may not complain about a certificate sent without the chained bundle.
To check that try this URL :

To avoid this possible issue: append the intermediate certificate content to the main certificate file, after the main certificate content.

$ cat bundle.crt >>

2. Here is a known error which you might face
"SSL_CTX_use_PrivateKey_file(" ... /") failed (SSL: error:0B080074:x509 certificate routines: X509_check_private_key:key values mismatch)"

This error means nginx tried to match the private key against the certificate and they did not agree; most likely you copied the intermediate certificate content first and then the main certificate content, in which case the private key will not match. So change the file so that the main certificate content comes first, then the intermediate certificate content.

$ cat main_certificate bundle.crt >

If that is not the case, you should check with the certificate issuing authority, because somehow the private key is not matching. Or try to figure it out by reading the log file "/var/log/nginx/error.log".
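A quick way to confirm whether a certificate and a private key actually belong together is to compare their RSA moduli; if the two digests differ, nginx will report exactly this "key values mismatch" error. The sketch below generates a throwaway self-signed pair (filenames and subject are hypothetical) just to demonstrate the check:

```shell
# Generate a throwaway key + self-signed cert so the check can be demonstrated
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
        -days 1 -subj "/CN=demo.example.com"

# The two md5 digests must be identical for a matching cert/key pair
openssl x509 -noout -modulus -in demo.crt | openssl md5
openssl rsa  -noout -modulus -in demo.key | openssl md5
```

Run the same two modulus commands against your real .crt and .key files to diagnose the mismatch.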

3. One of the most important things is to add these lines to the configuration:
proxy_set_header X-Forwarded-Proto $scheme;
        add_header Front-End-Https   on;

Because proxy_pass forwards over plain HTTP, even if the user is making an HTTPS request, your back-end server won't be aware of that. So pass that information in a header; "X-Forwarded-Proto" is the de-facto standard header for passing protocol information across proxies.

Correspondingly, if you are running a Java-based application in Tomcat, request.isSecure() will no longer work. So write a central API to get the protocol information, something like this:

public static boolean isSecure(HttpServletRequest request) {
    String protocol = request.getHeader("X-Forwarded-Proto");
    if ("https".equalsIgnoreCase(protocol))
        return true;
    return request.isSecure();
}

Saturday, 22 June 2013

nginx proxy_pass configuration, complexity, settings, issues, solutions

Ideally, when you set these parameters for proxy_pass, it's good enough:

location / {
                proxy_pass        http://localhost:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

                proxy_connect_timeout      30;
                proxy_send_timeout         30;
                proxy_read_timeout         600;

                proxy_buffer_size          4k;
                proxy_buffers              4 16k;
                proxy_busy_buffers_size    64k;
                proxy_temp_file_write_size 64k;
}

How to pass the remote address to back-end server while using nginx

With proxy_pass there is a complication: when the back-end server tries to read the client's IP address, it will get the proxy's address instead (typically 127.0.0.1 or a local-subnet IP where nginx is deployed), because nginx is a proxy server and replaces the client's address with its own. The solution is to set an extra request header at the time of proxying; the statement "proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;" is meant for exactly that.

How to get the remote address in your back-end server while using "X-Forwarded-For"

The story is not over yet: now you have to do something at your back-end server to extract the client IP from the request header.

If the back-end is Apache, it is fairly simple; you just need to install a module:
$ sudo apt-get install libapache2-mod-rpaf
And configure the file /etc/apache2/mods-available/rpaf.conf

<IfModule mod_rpaf.c>
RPAFenable On
RPAFsethostname On
</IfModule>

But with Tomcat as the back-end it is a little more complex:
you will never get the client IP via request.getRemoteAddr(). So write a global API to access the remote address, like this:

public static String getRemoteAddress(HttpServletRequest request) {
    String ip = request.getHeader("X-Forwarded-For");
    if (ip == null || "".equals(ip)) {
        return request.getRemoteAddr();
    }
    return ip;
}

So if the address is found in the "X-Forwarded-For" request header, it is returned; otherwise it falls back to request.getRemoteAddr(). This style of programming is good because if tomorrow you switch to Apache proxying over the AJP protocol, you won't need any back-end change: with AJP the remote address comes directly from the request object, and with other proxying it comes from the "X-Forwarded-For" header.

Some other configuration points

1. The proxy_connect_timeout directive assigns a timeout for establishing the connection to the upstream (back-end) server. Its default value is 60s.

This is not the time until the server returns the page; that is the proxy_read_timeout directive. If your upstream server is up but hanging (e.g. it does not have enough threads to process your request, so it puts you in a pool of connections to deal with later), then this directive will not help, as the connection to the server has already been made.

So if you ever hit a proxy_connect_timeout in nginx, check your back-end's connection limit.

2. proxy_read_timeout - this is very important; the default value is 60s.
This directive sets the read timeout for the response of the proxied server: how long nginx will wait for the response to a request. The timeout applies not to the entire response, but between two successive read operations.

In contrast to proxy_connect_timeout, this timeout will catch a server that puts you in its connection pool but then never responds with anything beyond that; this is where proxy_read_timeout comes into the picture. Be careful not to set it too low, though, as your back-end server might legitimately take longer to respond to some requests (e.g. when serving a report page that takes time to compute).

You can also set a different proxy_read_timeout, say a higher value like 10 minutes, for a particular location:

location /admin/reports/ {
    # other proxy_pass settings
    proxy_read_timeout  600;
}

location / {
    # other proxy_pass settings
    proxy_read_timeout  30;
}

3. proxy_send_timeout - the default value is 60s.
This directive assigns a timeout for transmitting the request to the upstream server. The timeout applies not to the entire transfer of the request, but between two successive write operations. If after this time the upstream server does not accept new data, nginx shuts down the connection.

Nginx setup for segregating static and dynamic content from nginx and back-end server using proxy_pass

This configuration serves static content from nginx and dynamic content from the back-end server, maybe Apache (for a PHP-based application) or Tomcat (for a Java-based application).

For this purpose we essentially use the proxy_pass module of nginx. It is very simple: create two different location contexts and serve them differently. One uses root to point at the directory containing the static files; the other uses proxy_pass.

server {
    location ~* \.(jpg|jpeg|gif|css|png|js|ico|html)$ {
        root /var/www;
        access_log off;
        expires 365d;
    }

    location / {
        proxy_pass        http://localhost:8181;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Now for some nginx basics:

1. In nginx you can define as many location blocks as you need; the best match is picked and executed.
2. If you want everything under a path such as /static/ to be served from nginx only, you can do that:
location /static/ {
         alias /var/www/static/;   # alias maps /static/foo to /var/www/static/foo
         access_log off;
         expires 30d;
}

3. "access_log off" means no log record will be created for requests that match that location.

4. "expires 30d" sets an expiry header of 30 days for all requests that match that location. In Apache we use mod_expires to set expiration times on static content so that browsers can cache it for a long time; in nginx it's just one line. :)

5. proxy_pass lets you forward the request to any back-end server.
"proxy_pass        http://localhost:8080;" will forward requests for dynamic content, for example to a back-end Tomcat server.

#How to set expiration time for static contents while using nginx

Friday, 7 June 2013

nginx - (13: Permission denied) while reading upstream

2013/06/07 21:13:38 [crit] 17799#0: *717313 open() "/var/lib/nginx/proxy/2/19/0000000192" failed (13: Permission denied) while reading upstream, client:, server:, request: "GET /web/jsp/example.jsp HTTP/1.1", upstream: "", host: ""

Typically this is a problem with saving the buffered data from the proxied response. When the upstream response is large, nginx keeps part of the data on disk while it starts sending the first received bytes to the browser. It uses a configured directory for this, "/var/lib/nginx/proxy" in my case. So you just need to give the nginx worker user access to that directory.

1. Open /etc/nginx/nginx.conf to find the worker user of nginx
2. Or check which user is running the worker process:
$ ps aux | grep "nginx: worker process" | awk '{print $1}'
3. In my case it is www-data
4. Give access to that directory
$ chown -R www-data.www-data /var/lib/nginx
5. Done

Related article on nginx access: Why nginx usually throws 403, Forbidden?

# Nginx is returning only part of the data in the response
# Nginx response is chunked abnormally

Friday, 31 May 2013

Configure an alternate JAVA

Get the new JDK, put it in /usr/lib/jvm and run these commands:

$ update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk-7u21/bin/java" 1
$ update-alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/jdk-7u21/bin/javac" 1
$ update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/lib/jvm/jdk-7u21/bin/javaws" 1

$ chmod a+x /usr/bin/java
$ chmod a+x /usr/bin/javac
$ chmod a+x /usr/bin/javaws
$ chown -R root:root /usr/lib/jvm/jdk-7u21

Make sure you read the command output below carefully and choose which Java you want.
$ update-alternatives --config java
There are 2 alternatives which provide `java'.

  Selection    Alternative
 +        1    /usr/lib/jvm/java-6-openjdk/jre/bin/java
*         2    /usr/lib/jvm/jdk-7u21/bin/java

Press enter to keep the default[*], or type a selection number (if you want Java 6, type 1; if you want Java 7, type 2).

* These commands work on Debian/Ubuntu systems. I am not sure about Red Hat/CentOS.

Load Balancing using Apache mod_jk

After installing Apache, install the mod_jk module.

1. $ apt-get install libapache2-mod-jk

2. create a file jk.conf (if not present) in the mods-available directory with these lines:
JkWorkersFile   /etc/apache2/
JkLogFile       /var/log/apache2/mod_jk.log
JkShmFile       /var/log/apache2/mod_jk.shm
JkLogLevel      error

After creating the file, you may need to symlink it into the mods-enabled directory; or simply disabling and re-enabling mod_jk will do:

$ a2dismod jk
$ a2enmod jk

3. create a file /etc/apache2/
and write the lines for load balancing: defining the workers, creating a load balancer, and assembling the workers into the load balancer. In the configuration below, server1 will take 60% of the load and server2 will take 40%.

# AJP port of the back-end server the requests are forwarded to
worker.server1.port=8009
# IP address of the back-end server
worker.server1.host=<server1-IP-address>
# with the mod_jk module the protocol is always ajp13
worker.server1.type=ajp13
# how much load server1 is given
worker.server1.lbfactor=60

worker.server2.port=8009
worker.server2.host=<server2-IP-address>
worker.server2.type=ajp13
worker.server2.lbfactor=40

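The load-balancer worker itself must also be declared and exposed through worker.list, or mod_jk will not route anything. A typical sketch, with the worker name "loadbalancer" matching the JkMount lines in the virtual host configuration:

```properties
# expose only the load balancer worker to Apache
worker.list=loadbalancer
# the lb worker distributes requests across its member workers
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=server1,server2
```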

4. In virtual host configuration of Apache
In the example below, all requests starting with /web/static/css (js, images) are JkUnMounted, so they will be served from the Apache document root (which is /var/www), and all other requests are JkMounted, so they are forwarded to the load balancer, which passes each request to either server1 or server2. This uses the AJP protocol, so make sure AJP is configured in your application server, e.g. Tomcat. To configure Tomcat, you can check this url: Configuration Apache and Tomcat to use Mod_jk connector for proxy passing

  DocumentRoot /var/www

  JkMount /* loadbalancer
  JkUnMount /web/static/css/* loadbalancer
  JkUnMount /web/static/js/* loadbalancer
  JkUnMount /web/static/images/* loadbalancer

  ErrorLog /var/log/apache/example-com-error_log
  CustomLog /var/log/apache/example-com-access_log combined

Apache performance tuning and security tuning


MaxKeepAliveRequests

It's actually the maximum number of requests served on one TCP connection. If you set it to 100, clients with keepalive support will be forced to reconnect after downloading 100 items. The Apache default is 100; you can increase it if you have enough memory on the system. If you serve pages containing a large number of images, a higher value is better because the kept-alive connections get reused for the image requests.


KeepAliveTimeout

KeepAliveTimeout determines how long to wait for the next request. Set this to a low value, perhaps between two and five seconds. If it is set too high, child processes are tied up waiting for the client when they could be serving new clients.


MaxRequestsPerChild

The MaxRequestsPerChild directive sets the limit on the number of requests an individual child server process will handle; after that many requests the child process dies. It is set to 0 by default, meaning the child never expires. It is appropriate to set this to a few thousand: it helps contain memory leaks, since the process dies after serving a certain number of requests. Don't set it too low, since creating new processes has overhead.
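Putting the three directives together, an illustrative tuning fragment for the Apache configuration might look like this (the values are examples to adapt, not prescriptions):

```apache
KeepAlive On
# let image-heavy pages reuse a single connection
MaxKeepAliveRequests 200
# release the child quickly once the client goes idle
KeepAliveTimeout 3
# recycle children periodically to contain slow memory leaks
MaxRequestsPerChild 5000
```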

Proper use of MPM (Multi-Processing Module)

I have already explained this at this URL: Configuring Apache/Tomcat for serving Maximum number of requests

Security tweaks

1. ServerTokens
This directive configures what is returned in the Server HTTP response header. The default is 'Full', which sends information about the OS type and compiled-in modules.
# Set to one of:  Full | OS | Minimal | Minor | Major | Prod
Full conveys the most information and Prod the least; you can also write it as "ProductOnly" (the long form of Prod), which is best:

ServerTokens ProductOnly

2. ServerSignature
This optionally adds a line containing the server version and virtual host to server-generated pages.
# Set to one of:  On | Off | EMail
Setting "EMail" also includes a mailto: link to the ServerAdmin; it is better to set it to Off:

ServerSignature Off

3. TraceEnable
This enables or disables the TRACE method.
# Set to one of:  On | Off | extended
Setting "extended" also reflects the request body; it is best to turn it Off:

TraceEnable Off

Monday, 27 May 2013

Why nginx usually throws 403, Forbidden?

A. This problem is mostly because the user running nginx doesn't have access to the resource.

Open the file /etc/nginx/nginx.conf:
user www-data;
worker_processes 4;
pid /var/run/;

events {
        worker_connections 768;
        # multi_accept on;
}

server {
    listen          80;
    access_log  /var/log/nginx/localhost.access.log;
    index           index.html;
    root            /var/www/;
}

Try these:
1. Open nginx.conf and locate the user directive (change it to a user who has access; www-data is usually good).
2. The nginx master process runs as the user who started the nginx service (often root), but nginx uses another user, configured in nginx.conf, for the worker processes that serve content. Note that this user only needs read access to the directories set via the root directive.
3. Go to the directory you set as root in the location context (/var/www in the example above) and check the permissions; ls -al will show them.
4. You can change the ownership of files and directories with the command "chown -R username:groupname directoryName".

Friday, 17 May 2013

Create user in mysql

Create an admin user who can access anything from anywhere

mysql> grant all privileges on *.* to 'admin'@'%' identified by 'password';

Create a user who can access any database from a network location

mysql> grant all privileges on *.* to 'admin'@'192.168.%' identified by 'password';

*If somehow this doesn't work, execute this
update mysql.user set Host='192.168.%' where User='admin';

Create a user who can only connect from localhost
mysql> grant all privileges on *.* to 'admin'@'localhost' identified by 'password';

Create a user who can access only one specific database
mysql> grant all privileges on dbname.* to 'admin'@'localhost' identified by 'password';

Here is the description of every word in the above command

grant all privileges - grants the permissions (and so also creates the user if needed)
on dbname.* - restricts database and table access (*.* means all databases; dbname.* means only that one database)
to 'admin'@'localhost' - the first quoted string is the username; the second is the host restriction, i.e. who may connect to the MySQL server. In this example only localhost users are allowed to connect.
identified by 'password'; - the password required to connect to the MySQL server

Note : 
Here, do not get confused with the "bind-address" setting in the MySQL configuration file, which actually controls binding access; click here to read more about "bind-address" and access points

Wednesday, 8 May 2013

Secure your website with SSL - guidelines and experience

1. First generate the key file
$ openssl genrsa -des3 -out server.key 2048
It will ask for a pass phrase, which will later be needed to start the web server, so keep it safe.

2. Now generate the CSR (Certificate Signing Request) file
$ openssl req -new -key server.key -out server.csr

This asks for information such as location, company name and Common Name. It is better to leave the "challenge password" empty. Be careful with the Common Name: it has to be your domain name.
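The whole prompt sequence can also be skipped with -subj; a sketch with placeholder organization details (the key here is generated without -des3, i.e. unencrypted, so the web server will not prompt for a pass phrase):

```shell
# Non-interactive variants of the two commands above; all subject fields
# are placeholders, so substitute your own country, organization and domain.
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr \
        -subj "/C=IN/ST=State/L=City/O=Example Ltd/CN=www.example.com"

# Inspect the subject recorded in the CSR before sending it to the CA
openssl req -in server.csr -noout -subject
```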

If you serve your users at www.yourdomain.com, the Common Name should be "www.yourdomain.com". Once a certificate is issued for the www name, it won't be valid for the bare domain. If you want to secure the domain both with and without www, there is a specific option you'll have to choose at the time of buying the certificate; securing all subdomains (a wildcard certificate) is a different option again. The cost varies with the number of subdomains you want to secure: as of today VeriSign charges about $400 USD for one domain, $600 for with-and-without www, and around $1500 USD for unlimited subdomains.

3. Now use this CSR to obtain the certificate (a .crt file) from any CA (certificate authority) such as VeriSign (costliest) or GoDaddy (cheapest, maybe $10 USD).

4. Once you buy the SSL certificate, the vendor's process will guide you on how to get the certificates. It's very simple.

5. VeriSign takes on average 2 to 4 days for the entire process, as they validate "CSR Verification", "Proof of Organization" and "Proof of Domain Registration"; they may also ask for company registration certificates as part of the process. But if you buy from GoDaddy there is no verification process: based on the CSR file alone, they issue the certificates within a minute.

6. When downloading the certificates, make sure you also download the intermediate certificate. Intermediate certificates connect the certificate chain; without one, users on a few browsers might face unwanted error messages.

7. Deploy the certificates: copy these 3 files to the following places and restart Apache.

$ cp server.key /etc/ssl/private/
$ cp /etc/ssl/certs/
$ cp intermediate.crt /etc/ssl/certs/

8. Now configure Apache.
Enable the SSL module; on Debian-based systems (e.g. Ubuntu) you can use the command a2enmod ssl.
Go to the virtual host configuration and add these lines:

SSLEngine on
SSLProtocol -all +TLSv1 +SSLv3

SSLCertificateKeyFile /etc/ssl/private/server.key
SSLCertificateFile /etc/ssl/certs/server.crt
SSLCertificateChainFile /etc/ssl/certs/intermediate.crt
$ /etc/init.d/apache2 restart (it will ask for the pass phrase that you created at step 1)
- and it's done :)
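Putting step 8 together, the SSL virtual host ends up looking roughly like this. The domain, DocumentRoot, and certificate file names are illustrative; the ServerName must match the common name in your certificate:

```apache
<VirtualHost *:443>
    ServerName www.example.com
    DocumentRoot /var/www

    SSLEngine on
    SSLProtocol -all +TLSv1 +SSLv3

    SSLCertificateKeyFile   /etc/ssl/private/server.key
    SSLCertificateFile      /etc/ssl/certs/server.crt
    SSLCertificateChainFile /etc/ssl/certs/intermediate.crt
</VirtualHost>
```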
9. To validate whether everything was done properly, there are several websites you can check with.

Monday, 4 March 2013

How to setup replication (Master Slave) in MySQL

I'll start the article by assuming that two MySQL servers are ready and we just need to do the configuration setup to start the replication.

Go to Master Server

1. Make all the tables engine = InnoDB
Binary logging must be enabled on the master, because the binary log is the basis for sending data changes from the master to its slaves; if binary logging is not enabled, replication is not possible. Binary logging itself is a server-level feature and works with any engine, but InnoDB is strongly recommended because it is transactional: it lets you take a consistent dump with --single-transaction (used in step 5) and keeps the slave consistent after a crash, which MyISAM cannot guarantee.

Use following command to convert all the tables to InnoDB

mysql > SELECT CONCAT('ALTER TABLE ', table_name, ' ENGINE=InnoDB;') as ExecuteTheseSQLCommands
FROM information_schema.tables WHERE table_schema = 'db_name' 
ORDER BY table_name DESC;

2. Start binary logging on the master and assign a server id (assigning a server id is necessary: if you omit server-id, or set it explicitly to its default value of 0, a master refuses connections from all slaves).
edit the file /etc/mysql/my.cnf

*For the greatest possible durability and consistency in a replication setup using InnoDB with transactions, you should use innodb_flush_log_at_trx_commit=1 and sync_binlog=1 in the master my.cnf file.
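A sketch of the relevant [mysqld] lines for step 2; the server-id value and log file path are just common conventions (any unique positive id works):

```ini
[mysqld]
server-id        = 1
log_bin          = /var/log/mysql/mysql-bin.log
expire_logs_days = 10

# Durability settings recommended above for InnoDB masters:
innodb_flush_log_at_trx_commit = 1
sync_binlog                    = 1
```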

3. Create a slave user on Master DB
mysql> GRANT REPLICATION SLAVE ON *.* TO 'slave'@'IP_ADDRESS_OF_SLAVE' IDENTIFIED BY 'slavePassword';
mysql> flush privileges;

4. Restart Master DB
and check 
mysql> show master status;

You can also check whether mysql-bin log files are getting created or not in the data directory you configured, maybe at /var/lib/mysql.

5. Take a dump of database
 mysqldump -uroot -proot --single-transaction --master-data --databases db1,db2 > all_db.sql
And transfer the file on slave Machine

Go to slave Machine

6. Assign Server Id on slave DB

7. Restart Slave DB

8. Import the dump file in database
mysql -uroot -proot < all_db.sql

9. Make this slave listen to the master with a CHANGE MASTER TO statement, then
mysql> flush privileges;
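A sketch of the CHANGE MASTER TO statement that points the slave at the master, using the replication user from step 3 (the master IP is a placeholder). Because the dump was taken with --master-data, it already carries the master's binary log file and position:

```sql
CHANGE MASTER TO
  MASTER_HOST     = 'IP_ADDRESS_OF_MASTER',
  MASTER_USER     = 'slave',
  MASTER_PASSWORD = 'slavePassword';
```

If the dump had been taken without --master-data, you would also have to set MASTER_LOG_FILE and MASTER_LOG_POS here, using the values from SHOW MASTER STATUS in step 4.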

10. Start the slave
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G


Some troubleshooting tips and points:
  1. In the slave MySQL configuration you can choose which databases, or even which tables, you want to replicate or ignore (replicate-do-db, replicate-ignore-db and friends)
  2. If replication fails due to a data consistency issue, meaning the data already exists on the slave and the master is still trying to push it (which can happen for several reasons), you can change the slave configuration to skip ahead:
mysql> STOP SLAVE;
mysql> CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin._desired_bin_log_file', MASTER_LOG_POS=desired_position;
mysql> START SLAVE;


Wednesday, 27 February 2013

Using POP on multiple clients or mobile devices

If you have configured the email on Outlook and also on BlackBerry/Android/iPhone/Gmail, and on one of the clients you are not able to receive email, this article is useful to you.

Essentially POP (Post Office Protocol) is a one-way download of your messages that allows you to access your mail with a mail program like Outlook Express or Apple Mail. POP offers only one-way communication, which means that actions you take in the mail program (such as reading or deleting a message) are not synced back to the server. You should know two things: "recent mode" and "Leave a copy of message on server".

What is 'recent mode?'
If you're accessing Gmail on multiple clients through POP, Gmail's 'recent mode' makes sure that all messages are made available to each client, rather than only to the first client to access new mail.
Recent mode fetches the last 30 days of mail, regardless of whether it's been sent to another POP client already.
Setting up 'recent mode'
In your POP client settings, replace 'username@gmail.com' in the 'Username' or 'Email' field with 'recent:username@gmail.com'
Once you enable recent mode, please be sure to configure your POP client to leave messages on the server according to the instructions below:
  • Outlook or Outlook Express: on the Advanced tab, check the box next to 'Leave a copy of messages on the server.'
  • Apple Mail: on the Advanced tab, remove the check next to 'Remove copy from server after retrieving a message.'
  • Thunderbird: on the Server Settings tab, check the box next to 'Leave messages on server.'

* This is an exact copy of the original help article.

Friday, 22 February 2013

Updating and installing package on debian machine

It's very simple :)
APT (Advanced Package Tool) is a free front-end used on Debian machines to install/remove/update any software. The equivalent on Red Hat machines is "yum".

How to search for a package and install it?

Points :

  1. $ apt-cache search <package> - Is the command to search for a package
  2. $ apt-get install <package> - Is the command to install a package
  3. apt-cache - query the APT cache
  4. apt-cache search/madison are two important commands you should know
  5. $ apt-cache search <regex> - "Performs a full text search on all available package lists for the POSIX regex pattern given"
  6. $ apt-cache madison <package> - "Command attempts to mimic the output format and a subset of the functionality of the Debian archive management tool, madison. It displays available versions of a package in a tabular format"
  7. $ sudo apt-get install <package>=<version> - Is the command to install a package with a certain version
  8. $ dpkg -s <package> - Is the command to know about an installed package

Example :

$ apt-cache search mysql-server
cacti - Frontend to rrdtool for monitoring systems and services
phpbb2-conf-mysql - Automatic configurator for phpbb2 on MySQL database
torrentflux - web based, feature-rich BitTorrent download manager
mysql-server - MySQL database server (meta package depending on the latest version)
mysql-server-5.0 - MySQL database server binaries

$ apt-cache madison mysql-server
mysql-server | 5.0.96-0ubuntu3 | hardy-security/main Packages
mysql-server | 5.0.96-0ubuntu3 | hardy-updates/main Packages
mysql-server | 5.0.51a-3ubuntu5 | hardy/main Packages

Now if you want to install a certain version package, you should use
$ apt-get install mysql-server=5.0.96-0ubuntu3

Command to know about the package
$ dpkg -s mysql-server


Sunday, 17 February 2013

Mysql Database Configuration, Access settings, Innodb configuration, Log slow query

To bind the server access point
By default it is bound to localhost, i.e. 127.0.0.1.
Open the /etc/mysql/my.cnf file

So let's say you have 4-5 machines from which you want to access the MySQL DB, but you do not want anyone from outside to access it. Then bind-address should be the local LAN IP address:
bind-address = <local LAN IP address>

If you want only the local machine to access the MySQL DB
bind-address            = 127.0.0.1

If you want to make it public, remove the bind-address line.

Logging the slow queries
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 1

Logging the queries which are not using an index
log-queries-not-using-indexes

Changing the InnoDB configuration
innodb_buffer_pool_size is the memory InnoDB uses to cache data and index pages; on a dedicated database server it is commonly set to 50-70% of the available RAM.
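A hedged sketch of typical InnoDB lines in my.cnf; the values are illustrative and depend on how much RAM the machine can spare:

```ini
[mysqld]
# Cache for InnoDB data and index pages.
innodb_buffer_pool_size = 1G
# Redo log buffer; larger values help write-heavy workloads.
innodb_log_buffer_size  = 8M
# One .ibd file per table instead of a single shared tablespace.
innodb_file_per_table   = 1
```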


max_allowed_packet :
The max_allowed_packet variable limits the size of a single communication packet, which in practice caps the largest single query or row the server will handle. In the [mysqld] section, any normal connection can only transfer that much data in a single packet. mysqldump typically produces "extended INSERT" queries, where multiple rows are listed within the same INSERT command, so it's better to have this variable set high. In mysqld, max_allowed_packet could be 16M (safe, because the memory isn't allocated until it's required); in mysqldump it could be 128M or maybe 512M, depending on your machine and requirement.

If you want mysqldump to work fast

max_allowed_packet  =  64M (Increase this value, default is 16M)

* You can also make the dump faster by passing it as a command argument
$ mysqldump -u root -p --max_allowed_packet=512M dbname > dbname.sql

More Ideas on MySQL performance tuning

Wednesday, 16 January 2013

After setting javaagent in classpath tomcat is not starting

Q. I am trying to use New Relic as a javaagent in Tomcat on an Ubuntu machine. When I set the javaagent in the classpath, Tomcat fails to start. I tried with tomcat5 and tomcat7; it's not starting, and I am not getting any log to check anything.
I tried the same with the Tracelytics javaagent and it's still the same: Tomcat is not starting. No log, no clue. Someone please help.
A. It was happening due to the -Xss setting in CATALINA_OPTS at server start. I had set it to 128K and Tomcat was failing to start; when I removed the setting it started, and when I changed it to 256K it also started. Most likely 128K is below the minimum thread stack size the 64-bit JVM will accept, so the JVM dies before it can write any log. In any case, it is working now :)
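For reference, this is roughly how the working setting looks in $CATALINA_HOME/bin/setenv.sh; the agent path is illustrative, and the point is keeping -Xss at 256k or above:

```shell
# Sketch of bin/setenv.sh; catalina.sh picks CATALINA_OPTS up at startup.
# 128k was too small a thread stack for the 64-bit JVM; 256k works.
CATALINA_OPTS="$CATALINA_OPTS -Xss256k"
CATALINA_OPTS="$CATALINA_OPTS -javaagent:/opt/newrelic/newrelic.jar"
export CATALINA_OPTS
echo "CATALINA_OPTS=$CATALINA_OPTS"
```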