How to install a Node.js server on an AWS EC2 CentOS 6 instance

Hey,

Download the Node.js source from
http://nodejs.org/download/
Look for the "Source Code" entry and copy the link.

cd /usr/src
wget http://nodejs.org/dist/v0.10.26/node-v0.10.26.tar.gz
tar -xzvf node-v0.10.26.tar.gz
Now, before compiling, let's install the relevant dependencies.
yum -y groupinstall "Development Tools"
yum -y install screen

Now let's enable the EPEL repository:
vim /etc/yum.repos.d/epel.repo
# change enabled=0 to enabled=1
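Alternatively, here is a one-liner sketch (note that it flips every enabled=0 in the file, so edit by hand if you only want the main [epel] section enabled):

sed -i 's/enabled=0/enabled=1/g' /etc/yum.repos.d/epel.repo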
yum install js-devel
yum install curl-devel
yum install autoconf-archive
yum install libicu-devel





Now let's configure, compile and install:
cd node-v0.10.26
./configure
make
make install
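A quick sanity check that the build landed where we expect (with the default prefix, node -v should print v0.10.26 and which node should point at /usr/local/bin/node):

node -v
which node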
 
Now let's test it with a small 'test' Node.js app.


cd /root
vim test.js



and paste this code:



// Load the http module to create an http server.
var http = require('http');

// Configure our HTTP server to respond with Hello World to all requests.
var server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end("Hello World\n");
});

// Listen on port 8000, IP defaults to 127.0.0.1
server.listen(8000);

// Put a friendly message on the terminal
console.log("Server running at http://127.0.0.1:8000/");
 



and now run

nohup node test.js > node.log 2>&1 &

check it with 

curl http://127.0.0.1:8000/

and check node.log for the "Server running" message.

You can also manage the Node.js process with supervisord.
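A minimal sketch of what that could look like, assuming supervisord is installed (e.g. from EPEL) and reads /etc/supervisord.conf; the program section name, paths and restart command below are illustrative and vary between supervisor versions:

cat >> /etc/supervisord.conf <<'EOF'

[program:node-test]
command=/usr/local/bin/node /root/test.js
autorestart=true
EOF
# reload supervisord so it picks up the new program section
service supervisord restart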


Amiram.

How to create an Impala table using Parquet file format (Cloudera Impala)

Hey,

The main idea behind Impala tables is to create them from the impala-shell; since Impala uses the Hive metastore service, you will also be able to access those tables from Hive / Pig.

It is recommended to
run INSERT statements using HIVE (it is also possible via impala-shell)
run SELECT statements using IMPALA

So, suppose you want to create an Impala table:
DO NOT try to create the table from the Hive interface / command line.

The procedure should be:

1. Create the table from the impala-shell.
The general CREATE TABLE syntax is:

CREATE [EXTERNAL] TABLE table_name (
col1 type1,
col2 type2,
...
)
PARTITIONED BY (colx typex, ...)
[ROW FORMAT ...]
STORED AS <file_format>
LOCATION '<hdfs_path>';

For example:

CREATE EXTERNAL TABLE IF NOT EXISTS table_name (
col1 DOUBLE,
col2 int
)
PARTITIONED BY (batch_id INT, date_day STRING )
STORED AS PARQUETFILE
LOCATION '/mnt/my_table';

Please make sure you follow this high-level syntax.

2. After the table has been created successfully, you will be able to access it via Hive / Impala / Pig:

hive> show tables;

impala-shell> show tables;

OR

impala-shell> show table stats table_name ;

3. Insert data from Hive / impala-shell.
4. Refresh the Impala table so it picks up the new data:

refresh table_name

OR

invalidate metadata table_name

5. Now you can enjoy SELECTing your data from the impala-shell. A combined sketch of steps 3-5 follows below.
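As a rough end-to-end sketch of steps 3-5, run from the shell and based on the example table above (the partition values and the source table my_staging_table are purely illustrative, and this assumes your Hive build can write Parquet):

# 3. add a partition and insert data through Hive
hive -e "ALTER TABLE table_name ADD IF NOT EXISTS PARTITION (batch_id=1, date_day='2014-03-01');"
hive -e "INSERT INTO TABLE table_name PARTITION (batch_id=1, date_day='2014-03-01') SELECT col1, col2 FROM my_staging_table;"

# 4. make Impala aware of the new data
impala-shell -q "refresh table_name"

# 5. query it from Impala
impala-shell -q "SELECT COUNT(*) FROM table_name WHERE batch_id = 1"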

Amiram.


ElasticSearch 1.0, Logstash changes – how to complete the upgrade

Hey,

ElasticSearch has recently announced its new version 1.0, which includes many new features as well as stability and availability fixes.

Those who work with Logstash shippers or Logstash indexers that ship logs into ElasticSearch have probably encountered this error:

warn: org.elasticsearch.discovery.zen.ping.unicast: [agent zero] failed to send ping to [[#zen_unicast_1#

or perhaps

org.elasticsearch.discovery.MasterNotDiscoveredException

Now, please take a look at your Logstash config file; it may contain something like:

input {
  file { path => "/<your_path>/yourlog.log" codec => <your_codec> }
}
output {
  stdout { debug => true debug_format => "json" }
  elasticsearch {
    host => "127.0.0.1"
    …
  }
}

The change that needs to be made is renaming the "elasticsearch" output section to "elasticsearch_http".
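Applied to the config above, the output section would look roughly like this (a sketch; only the section name changes, the other options stay as they were):

output {
  stdout { debug => true debug_format => "json" }
  elasticsearch_http {
    host => "127.0.0.1"
    …
  }
}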

You can read about its options here: http://www.logstash.net/docs/1.3.3/outputs/elasticsearch_http

Restart Logstash, and it will work.

* Also, please make sure that when working with ElasticSearch 1.0 you use Logstash 1.3.3; it is highly recommended to upgrade the Logstash version from time to time.

Enjoy,

Amiram.


CouchDB Internals

Here are a few notes about the files on a CouchDB server and what they mean.

Log files:

CouchDB's main log file contains all HTTP requests coming from the Futon (web) interface or from your custom application, which talks to Couch through its API / curl commands:

/usr/local/var/log/couchdb/couchdb.log

This log can be truncated while CouchDB is up and running:

echo " " > /usr/local/var/log/couchdb/couchdb.log

* Tip: if you don't know where your log file is located (maybe you have a custom installation), you can find it here:

cat /usr/local/etc/couchdb/default.ini

Search for the [log] section:

[log]
file = /usr/local/var/log/couchdb/couch.log
level = info
include_sasl = true

* Tip: if you don't know where your configuration file is located, run:

couchdb -c

Datafiles and Views

The data files are always located in the directory set by your "data_dir" variable, which is defined during CouchDB's installation.

You can check where your data dir is via the Futon (web) interface -> Configuration (http://your-server:5984/_utils/config.html).

Search for the data_dir option in the left pane.

The default directory is:

/usr/local/var/lib/couchdb

Inside the data dir, every database is represented as a single file with a .couch extension:

db-name.couch

Example:

[root@my-server couchdb]# ls -l
total 11581032
-rw-r--r-- 1 couchdb couchdb 63103099 May 3 09:29 db1.couch
-rw-r--r-- 1 couchdb couchdb 33235064 May 3 09:29 db2.couch
-rw-r--r-- 1 couchdb couchdb 366506098 Feb 22 06:11 db3.couch
-rw-r--r-- 1 couchdb couchdb 8281 Feb 8 2012 _replicator.couch
-rw-r--r-- 1 couchdb couchdb 8290 Dec 31 14:40 _users.couch

And where are the "views"?

Inside the data dir, for each database CouchDB creates a "hidden" folder named:

 .db-name_design

Notice that the folder name begins with ".", which means that the folder is hidden.

We can see all these folders by typing:

ls -la

inside  /usr/local/var/lib/couchdb

[root@my-server couchdb]# ls -la

total 11581068
drwxr-xr-x 9 couchdb couchdb 4096 May 3 05:57 .
drwxr-xr-x 3 root root 4096 May 10 2012 ..
-rw-r--r-- 1 couchdb couchdb 63103099 May 3 09:29 db1.couch
drwxr-xr-x 2 couchdb couchdb 4096 May 2 2012 .db1_design
-rw-r--r-- 1 couchdb couchdb 172146 Feb 25 15:36 db2.couch
drwxr-xr-x 2 couchdb couchdb 4096 May 2 2012 .db2_design
-rw-r--r-- 1 couchdb couchdb 8290 Dec 31 14:40 _users.couch

Each database has its own _design folder.

The _design folder contains files which are the calculated views on your system.

Each view is represented as <signature>.view.

Example

[root@my-server .db1_design]# ls -la
total 22112
drwxr-xr-x 2 couchdb couchdb 4096 Apr 28 15:01 .
drwxr-xr-x 8 couchdb couchdb 4096 Apr 30 09:56 ..
-rw-r--r-- 1 couchdb couchdb 22630517 May 2 04:56 e2bf9be9033e7101e52655ea1a8088f3.view

If you have multiple views on the same database, you can find out which signature belongs to which view by running:

http://127.0.0.1:5984/your-db/_design/your-view/_info

The output will be in JSON format:

{"name":"your-view","view_index":
{"signature":"e2bf9be9033e7101e52655ea1a8088f3",
"language":"javascript","disk_size":22630517,"data_size":15932013,
"updater_running":false,
"compact_running":false,"waiting_commit":false,"waiting_clients":0,
"update_seq":395782,"purge_seq":0}}

* Tip: by typing

du -h /usr/local/var/lib/couchdb

you will be able to see the size of each view on your system.

Processes:

CouchDB's main process (it should run under the couchdb user; its parent process is init, PID=1):

/usr/local/bin/couchdb

Parameters

-a <config_file>  (you can have as many as you want):

-a config1.ini  -a config2.ini

You can add more config files while CouchDB is running:

couchdb -a  my_new_configfile.ini

The following process is responsible for creating views on your system (the indexer):

/usr/local/bin/couchjs

This means that if you want to stop a view calculation while it's running, just run:

killall couchjs

Warning: this will kill all view calculations, as you cannot kill a single view calculation (yet).
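Before doing that, you may want to see which couchjs processes are actually running (a small sketch; the bracket trick simply keeps grep from matching itself):

ps -ef | grep '[c]ouchjs'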

Best !


CouchDB Error : {badmatch,{error,eacces} {couch_file,init,1} …

Hey,

So today we had the following issue with our CouchDB environment.

First, we were requested to copy an existing CouchDB environment to other servers.

A few basic steps (a consolidated sketch follows the list):

1. Stop CouchDB (Optional)

2. tar -czvf couchdbs.tar.gz /usr/local/var/lib/couchdb

3. Copy the tar file to the destination server.

4. Untar it (tar -xzvf couchdbs.tar.gz) into the same folder, /usr/local/var/lib/couchdb.

5. Make sure that all .couch files are owned by the couchdb user.

6. Make sure of one more critical thing (we will talk about it later).
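Put together, the copy could look roughly like this (a sketch; dest-server is a placeholder, and the service and ownership commands assume a standard couchdb user and init script, so names may differ on your install):

# on the source server
service couchdb stop                                  # optional, per step 1
tar -czvf couchdbs.tar.gz /usr/local/var/lib/couchdb
scp couchdbs.tar.gz root@dest-server:/tmp/

# on the destination server
cd / && tar -xzvf /tmp/couchdbs.tar.gz                # tar stored the paths without the leading /, so this restores /usr/local/var/lib/couchdb
chown -R couchdb:couchdb /usr/local/var/lib/couchdb
service couchdb start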

After copying the data files and setting permissions, we can access the DBs from the web UI:

http://127.0.0.1:5984/_utils/index.html

Now, while trying to access the following view

http://127.0.0.1:5984/_utils/database.html?ads/_design/search/_view/my_view_name

I encountered the error below:

{{badmatch,{error,eacces}},
[{couch_file,init,1},
{gen_server,init_it,6},
{proc_lib,init_p_do_apply,3}]},
[{gen_server,init_it,6},
{proc_lib,init_p_do_apply,3}]}},

So after looking in couchdb.log (which is located in /usr/local/var/log/couchdb),

I saw this line, reporting that something is wrong with access to my view file (which is stored under the hidden directory .<view_name>):

[Sun, 28 Apr 2013 16:07:05 GMT] [error] [<0.4679.0>] Failed to open view file ‘/usr/local/var/lib/couchdb/.<view_name>/e2bf9be9033e7101e52655ea1a8088f3.view‘: unknown POSIX

A quick permissions check on /usr/local/var/lib/couchdb revealed that the hidden folder .<view_name> was owned by root instead of by the couchdb user.

I ran

chown -R couchdb:couchdb /usr/local/var/lib/couchdb/

and then the view worked fine!

To do it carefully, I also deleted the view's design document (by its revision) and recreated the view:

1. Get the document's revision:

curl -X GET http://127.0.0.1:5984/<db_name>/_design/<view_name>

2. curl -X DELETE http://127.0.0.1:5984/<db_name>/_design/<view_name>?rev=REV-ID

3. Recreate the view

curl -X PUT http://127.0.0.1:5984/<db_name>/_design/<view_name> -d @<view_name>.json
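For reference, the <view_name>.json file used here is just the design document itself; a minimal, purely illustrative example (the view name my_view and the map function are made up) might look like:

{
  "_id": "_design/<view_name>",
  "language": "javascript",
  "views": {
    "my_view": {
      "map": "function(doc) { emit(doc._id, null); }"
    }
  }
}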

Enjoy !


What is an NSS and how to use it?

NSS – Named Saved Systems

An NSS is a copy of an operating system's kernel or nucleus that has been saved in a chunk of CP's storage.

Using an NSS to IPL an operating system has several advantages over using disks.

First, only one copy of the operating system will exist in memory no matter how many guests have IPLed it. This can lead to tremendous storage savings if you have numerous guests.

Second, because only one copy of the operating system exists, updating everyone who uses it to a newer version is as simple as replacing that single NSS. To IPL from an NSS, you provide the name of the saved system instead of a device number.

From a performance perspective, there are benefits to this support:
it substantially reduces the amount of real storage that we need, and provides  better performance for a guest.

If viewed from a Linux perspective, having a large part of the kernel resident in storage would speed up the boot of the operating system significantly.

A guest would boot an NSS called, for example:

 IPL LNXTST

instead of using a virtual device number such as

 IPL 500

The most common example of an NSS in z/VM is CMS.

And if you issue the command  (from a privileged user)

Q NSS ALL

you will see many other functions (such as HELP, CMS Pipelines, and NLS, or National Language Support) that are DCSSs and benefit from this support.
Most z/VM installations have a CMS NSS set up.
Example of an IPL of the CMS named saved system:

IPL CMS
z/VM V5.3.0 2007-05-02 16:25
Ready; T=0.01/0.01 09:37:40

Notice that CMS starts exactly the same way that it did when we IPLed the 190 disk.

 

Setting up a Linux NSS

Perform these steps to create a Linux NSS (an example zipl.conf parameters line follows the list):

  1. Boot Linux.
  2. Insert savesys=<nssname> into the kernel parameter file used by your boot configuration, where <nssname> is the name you want to assign to the NSS. The name can be 1-8 characters long and must consist of alphabetic or numeric characters. Examples of valid names include: 73248734, NSSCSITE, or NSS1234. Be sure not to assign a name that matches any of the device numbers used at your installation.
  3.  Issue a zipl command to write the modified configuration to the boot device.
  4. Alternatively, you can also create the NSS via CP: "IPL <devno> PARM savesys=<nssname>".
  5.  Close down Linux.
  6. Issue an IPL command to boot Linux from the device that holds the Linux kernel. During the IPL process, the NSS is created and Linux is actually booted from the NSS.
  7. Now you can boot another machine with your newly created NSS: IPL <nss_name>
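For step 2, the parameters line in /etc/zipl.conf might end up looking something like this (a sketch; the root device, dasd list and the NSS name LNXTST are only illustrative):

parameters = "root=/dev/dasda dasd=200 savesys=LNXTST"

After saving the change, run zipl (step 3) and re-IPL the guest (step 6) so that the NSS actually gets created.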

 

 

 


Understanding where DASD configuration resides

Actually, I wrote this article a long time ago, even before I started this blog, and after a series of extensive improvements I'm glad to finally publish it.
This article was written to clarify how to manage DASD devices on SUSE zLinux (SLES) across its many releases.

SUSE configuration locations:

  • Configured via the parameters line in /etc/zipl.conf

The kernel parameter line uses the "dasd=..." option:
parameters = "root=/dev/disk/by-path/ccw-0.0.322b-part1 dasd=322b,322f TERM=dumb"
– This requires that you run "zipl" after making changes to /etc/zipl.conf.

  • Every DASD device is represented by a corresponding configuration file

/etc/sysconfig/hardware/hwcfg-dasd-bus-ccw-0.0.*
– Example:

/etc/sysconfig/hardware/hwcfg-dasd-bus-ccw-0.0.0200

Example of configuring a new DASD with device number 0200

Format the DASD from z/VM:

cpfmtxa 0200 as perm

Log in as root to the zLinux server and type:

#> lscss

You should see in the output an appropriate line with the 0200 device number, showing that this device is not yet online.

Now we would like to bring that device online with:

#> dasd_configure 0.0.200 1

or

#> chccwdev -e 0.0.200

Check that the disk is using the DIAG module for access:

#> lsdasd
0.0.0201(ECKD) at ( 94:  0) is dasda      : active at blocksize 4096, 601020
blocks, 2347 MB
0.0.0200(DIAG) at ( 94:  4) is dasdb      : active at blocksize 512, 2048000
blocks, 1000 MB

Now, what's left is to format it from Linux:

dasdfmt -b 4096 -y -f /dev/dasdb

Then edit /etc/zipl.conf and add the new device number to the dasd= list:

parameters = "root=/dev/dasda dasd=100-101,300-301,102-103,200 TERM=dumb"

* What is DIAG (the diagnose instruction)?
The diagnose instruction is what some other platforms refer to as a "hypervisor call":
it allows the virtual machine to request a service from the hypervisor.
One such diagnose instruction that z/VM implements is for high-level disk I/O.
