
Installing Node.js on CentOS 6.6.

Installing Node.js and npm on CentOS is very simple.

 [nodejs-admin@nodejs ~]$ sudo su
 [nodejs-admin@nodejs ~]# curl -sL https://rpm.nodesource.com/setup | bash -
 [nodejs-admin@nodejs ~]# yum install -y nodejs
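
To confirm the packages landed, we can print the versions (the exact numbers will depend on what the setup script pulled in at the time):

 [nodejs-admin@nodejs ~]# node -v
 [nodejs-admin@nodejs ~]# npm -v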

Installing gcc-c++ and make.

 [nodejs-admin@nodejs ~]$ sudo yum install gcc-c++ make
 [sudo] password for nodejs-admin: 
 Loaded plugins: fastestmirror, refresh-packagekit, security
 Setting up Install Process
 Loading mirror speeds from cached hostfile
  * base: mirrors.123host.vn
  * epel: ftp.cuhk.edu.hk
  * extras: centos-hn.viettelidc.com.vn
  * updates: mirrors.vonline.vn
 Package 1:make-3.81-20.el6.x86_64 already installed and latest version
 Resolving Dependencies
 ...           

 Complete!

Later on we will need kafka-node, so let's install that as well.

 [nodejs-admin@nodejs ~]$ sudo npm install kafka-node
 [sudo] password for nodejs-admin: 

 > snappy@3.0.6 install /home/nodejs-admin/node_modules/kafka-node/node_modules/snappy
 > node-gyp rebuild

 gyp WARN EACCES user "root" does not have permission to access the dev dir "/root/.node-gyp/0.10.36"
 gyp WARN EACCES attempting to reinstall using temporary dev dir "/home/nodejs-admin/node_modules/kafka-node/node_modules/snappy/.node-gyp"
 make: Entering directory `/home/nodejs-admin/node_modules/kafka-node/node_modules/snappy/build'
   CXX(target) Release/obj.target/snappy/deps/snappy/snappy-1.1.2/snappy-sinksource.o
   CXX(target) Release/obj.target/snappy/deps/snappy/snappy-1.1.2/snappy-stubs-internal.o
   CXX(target) Release/obj.target/snappy/deps/snappy/snappy-1.1.2/snappy.o
   AR(target) Release/obj.target/deps/snappy/snappy.a
   COPY Release/snappy.a
   CXX(target) Release/obj.target/binding/src/binding.o
   SOLINK_MODULE(target) Release/obj.target/binding.node
   SOLINK_MODULE(target) Release/obj.target/binding.node: Finished
   COPY Release/binding.node
 make: Leaving directory `/home/nodejs-admin/node_modules/kafka-node/node_modules/snappy/build'
 kafka-node@0.2.18 node_modules/kafka-node
 ├── buffer-crc32@0.2.5
 ├── retry@0.6.1
 ├── node-uuid@1.4.1
 ├── async@0.7.0
 ├── lodash@2.2.1
 ├── debug@2.1.1 (ms@0.6.2)
 ├── binary@0.3.0 (buffers@0.1.1, chainsaw@0.1.0)
 ├── node-zookeeper-client@0.2.0 (async@0.2.10, underscore@1.4.4)
 ├── buffermaker@1.2.0 (long@1.1.2)
 └── snappy@3.0.6 (bindings@1.1.1, nan@1.5.3)
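
As a quick check that the module loads, here is a minimal kafka-node producer sketch. It assumes Kafka is reachable through a ZooKeeper instance at localhost:2181 and that a topic named 'test' already exists; both are placeholders for your own setup.

 //    Minimal kafka-node producer sketch (broker/topic are assumptions).
 var kafka = require('kafka-node');

 //    kafka-node 0.2.x connects through ZooKeeper.
 var client = new kafka.Client('localhost:2181');
 var producer = new kafka.Producer(client);

 producer.on('ready', function () {
     //    Send one message to the 'test' topic and print the result.
     producer.send([{ topic: 'test', messages: ['hello from nodejs'] }], function (err, data) {
         console.log(err || data);
         client.close();
     });
 });

 producer.on('error', function (err) {
     console.log('Producer error: ' + err);
 });

If the broker and topic are in place, running this should print the partition/offset information returned for the message.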

Let's do a test.

Create a script called example.js with the code below.
 // Load the http module.
 var http = require('http');
 // Create a server that answers every request with "Hello World".
 http.createServer(function (req, res) {
   res.writeHead(200, {'Content-Type': 'text/plain'});
   res.end('Hello World\n');
 }).listen(1337, '127.0.0.1');
 console.log('Server running at http://127.0.0.1:1337/');
Let's start the server in a terminal.
 [nodejs-admin@nodejs nodejs]$ node example.js 
 Server running at http://127.0.0.1:1337/
Hit the URL from the browser and we should see Hello World.
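If a browser is not handy, the same check can be done with curl; the output is simply the string the server writes back in res.end:
 [nodejs-admin@nodejs nodejs]$ curl http://127.0.0.1:1337/
 Hello World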
So we are all set.
Node.js is ready.

Let's make some simple changes to the existing script to handle JSON.

Here is a simple script (save it as node_recv_json.js) to handle JSON data.
 //    Getting some 'http' power
 var http = require('http');

 //    We expect requests to arrive at http://localhost:8125/upload

 //    Let's create a server to wait for requests.
 http.createServer(function (request, response)
 {
     //    Tell the client we are going to respond with JSON.
     response.writeHead(200, {"Content-Type": "application/json"});

     //    request.on('data') fires as the request body arrives.
     request.on('data', function (chunk)
     {
         //    Chunk received from the client.
         //    For our test we assume it is JSON data
         //    and simply print it on the console.
         console.log(chunk.toString('utf8'));
     });

     //    End the response once the whole request body has been read.
     request.on('end', function ()
     {
         response.end();
     });

 //    Listen on port 8125
 }).listen(8125);
Let's fire up the script.
 [nodejs-admin@nodejs nodejs]$ node node_recv_json.js 
In a new terminal, send a request to our script. The script is listening on port 8125.
 [nodejs-admin@nodejs nodejs]$ curl -H "Content-Type: application/json" -d '{"username":"xyz","password":"xyz"}' http://localhost:8125/upload
You will see the message received in the script's terminal.
 [nodejs-admin@nodejs nodejs]$ node node_recv_json.js 
 {"username":"xyz","password":"xyz"}
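
Instead of curl, the same JSON can also be pushed from Node itself. Below is a minimal client sketch using http.request with the same host, port, and path the server listens on; the file name node_send_json.js is just an example.

 //    Send a JSON payload to the server above.
 var http = require('http');

 var payload = JSON.stringify({ username: 'xyz', password: 'xyz' });

 //    Same host, port and path the server is listening on.
 var options = {
     hostname: 'localhost',
     port: 8125,
     path: '/upload',
     method: 'POST',
     headers: {
         'Content-Type': 'application/json',
         'Content-Length': Buffer.byteLength(payload)
     }
 };

 var req = http.request(options, function (res) {
     console.log('Response status: ' + res.statusCode);
 });

 //    Write the JSON body and finish the request.
 req.write(payload);
 req.end();

Running it should print the same JSON in the server's terminal, just like the curl test above.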
Now we are all set to do some R&D.
