Ask Your Question

alazarev's profile - activity

2015-05-18 18:58:28 -0500 answered a question [Sahara] Run jobs with input multiple files

What type of job are you running? For most job types Sahara expects a single input and a single output data source. For other job types you can provide as many arguments as you like, and the arguments can point to specific data sources.

We are working on a clearer UI for providing arguments/configs/parameters. You can track this effort at

2014-12-09 12:47:00 -0500 commented answer Juno Sahara Spark 1.0.0 Security Group Error
2014-12-08 15:57:08 -0500 answered a question Juno Sahara Spark 1.0.0 Security Group Error

This is exactly . The fix is already in master and backported to Juno.

2014-12-08 15:39:33 -0500 commented answer Juno Sahara Spark 1.0.0 Security Group Error
2014-10-15 08:57:21 -0500 answered a question Sahara Can't login to node

It means that Sahara successfully obtained IP addresses for the VMs but can't SSH to them. Do you use floating IPs? Is port 22 open in the VMs' security group? You can SSH to the controller node and try to SSH to the VMs from the console. Once you can do it manually, Sahara will be able to as well.
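As a quick sanity check before attempting a full SSH login, you can verify from the controller that port 22 on a VM is reachable at all. This is a minimal sketch using plain Python sockets (not part of Sahara); the IP address is a placeholder for the VM's management IP:

```python
import socket

def ssh_port_open(host, port=22, timeout=5):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: replace with the management IP Sahara shows for the VM.
# if not ssh_port_open("10.0.0.5"):
#     print("Port 22 unreachable -- check security groups / floating IPs")
```

If the port is closed, fix the security group or network routing first; a full SSH login additionally requires the correct user name and key.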

2014-10-08 14:44:36 -0500 received badge  Supporter (source)
2014-10-07 11:11:27 -0500 answered a question sahara part of Juno or icehouse?

Sahara graduated on Mar 18, 2014 ( ), which corresponds to late Icehouse ( ). Most documentation and testing procedures are usually prepared in advance; that's why most Icehouse material doesn't mention Sahara as integrated. It is safe to consider Sahara integrated starting from Juno.

2014-10-02 11:13:37 -0500 received badge  Commentator
2014-10-02 11:13:37 -0500 commented answer Enable Spark Plugin

There is no official way, but you can try to backport the code manually. The provisioning part didn't change in Juno, but the EDP part changed a lot. So if you only need provisioning, it should not be hard to backport.

2014-09-22 11:47:27 -0500 commented answer Sahara floating ip issue (floating ip not found)

I've filed to address that.

2014-09-19 12:56:00 -0500 received badge  Nice Answer (source)
2014-09-16 23:14:02 -0500 commented answer the username and password for swift is gone when trying the java type job in sahara

I see "fs.swift.service.sahara.username" option there.

2014-09-15 16:41:02 -0500 answered a question Sahara floating ip issue (floating ip not found)

The floating_ip_pool field is required when use_floating_ips=true. That's why you are getting an error when "Do not assign floating IPs" is selected.

The first error message is trickier. Horizon uses neutron.Client().list_networks({'router:external': True}) to get the list of available pools. Sahara uses nova.Client().networks.find(id=pool_id), which passes the request through to neutron, so the results should be essentially the same. Could you please check whether you really have a floating IP pool with ID 9de9b761-621d-454f-83d9-5f63aa71ca0d? Is it accessible from the current tenant?
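The second error above boils down to a simple validation rule: a pool ID is mandatory whenever floating IPs are enabled. A hypothetical sketch of that logic (this mimics the behavior described, it is not Sahara's actual code):

```python
def validate_floating_ip_config(use_floating_ips, floating_ip_pool):
    """Raise if floating IPs are enabled but no pool ID is given,
    mirroring the validation behavior described above."""
    if use_floating_ips and not floating_ip_pool:
        raise ValueError(
            "floating_ip_pool is required when use_floating_ips=true")

# OK: pool provided while floating IPs are enabled.
validate_floating_ip_config(True, "9de9b761-621d-454f-83d9-5f63aa71ca0d")
# OK: floating IPs disabled, so no pool is needed.
validate_floating_ip_config(False, None)
```

Selecting "Do not assign floating IPs" in the UI corresponds to the missing-pool case, which is exactly where the error fires.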

2014-09-09 16:29:58 -0500 received badge  Nice Answer (source)
2014-09-09 16:29:43 -0500 received badge  Nice Answer (source)
2014-09-09 16:09:28 -0500 answered a question the username and password for swift is gone when trying the java type job in sahara

Sahara sanitizes the swift username/password for security reasons; that's why they are empty in responses. This has nothing to do with the Hadoop error.

Could you please provide the generated Oozie workflow file? It is usually located in HDFS under /user/<user_name>/<job_name>.

2014-08-14 09:59:22 -0500 commented answer Sahara: Error database when reboot server

Actually, there is no single file that restores all processes. We do not recommend rebooting a VM with Hadoop running. Restoring the processes depends on the Hadoop version you use. For vanilla 1.2.1 it is 'sudo su -c "/usr/sbin/ start X" hadoop', where X is the Hadoop process.

2014-08-14 00:38:00 -0500 commented answer no cluster scaling support in sahara client yet?

Oh, I see. You are talking about the sahara shell. It has very limited functionality at the moment, and scaling is not implemented. Here is the TODO for it: I have no information on when it will be implemented.

2014-08-13 10:06:27 -0500 commented answer Sahara: Error database when reboot server

Sahara doesn't add the Hadoop processes to startup scripts; that's why all processes need to be restarted manually after a VM restart. I've filed a blueprint for Sahara to update startup scripts during VM configuration: .

2014-08-13 09:56:50 -0500 commented question Enable Spark Plugin

" -p spark" will create image for spark

2014-08-13 00:39:29 -0500 answered a question no cluster scaling support in sahara client yet?

The scaling command has been in the python client from day one. E.g., here it is for Icehouse:

2014-08-13 00:33:54 -0500 received badge  Enthusiast
2014-08-11 15:55:49 -0500 commented answer Sahara: Can't login to node

I see that Sahara can't log in to the VM. Possible problems:

  1. The VM is not accessible at the network level (no route, or a closed port).

  2. The image is wrong (make sure you took the image for the right version of the plugin).

  3. The credentials are wrong (make sure you specified the correct user name in the image registry).

2014-08-11 15:49:34 -0500 commented answer Sahara: Change Java heap space

OpenStack works well in a virtual machine. Virtualization may have an impact on performance (nested virtualization helps a lot here), but in general there should be no difference.

2014-08-11 15:46:02 -0500 commented answer I got error when Building sahara UI

I've filed to improve error reporting.

2014-06-13 12:07:52 -0500 answered a question Sahara installation on virtual instance

Yes, it is possible. Sahara doesn't have to be installed on the controller. Just make sure that all OpenStack services (keystone, nova, etc.) are accessible from the machine with Sahara installed. Also make sure that Sahara can SSH to the VMs created by OpenStack (floating IPs may be used as Sahara's management network).

To use an external Sahara in Horizon you should either register it in keystone or set the SAHARA_URL config option (note that it will be removed after sahara-dashboard is merged into horizon).
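For the second option, the setting goes into Horizon's local_settings.py. The path and host below are placeholders for illustration; 8386 is Sahara's default API port:

```python
# /etc/openstack-dashboard/local_settings.py  (path varies by distro)
# Point Horizon at an external Sahara endpoint instead of looking it
# up in the keystone catalog. Hostname is a placeholder.
SAHARA_URL = "http://sahara-host.example.com:8386/v1.1"
```

Restart the web server serving Horizon after changing the file so the setting is picked up.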

2014-06-04 10:52:13 -0500 answered a question How to login to Sahara VM via broswer console ?

Actually, there is an undocumented option IMAGE_GENERATION_DEBUG_MODE in our diskimage-builder that allows creating images with a password set.

2014-03-03 17:49:49 -0500 received badge  Teacher (source)
2014-02-28 15:43:14 -0500 answered a question savanna 0.3 excuting pig script encounter error

I see the following line in your logs:

2014-02-28 06:15:05,358 WARN LiteWorkflowAppService:542 - USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] libpath [hdfs://v-ndhoo-v-nn-001:8020/user/hadoop/worldcount/4496286e-d5e0-42fe-8842-800bace75f54/lib] does not exist

The 'lib' folder usually contains all libraries attached to the job. Most probably you forgot to add the .jar file with the classes you use in the pig script (the second tab in the UI during job creation).

2014-02-07 21:09:02 -0500 answered a question can not start cluster
  1. "Can't start cluster 'ostf-test-savanna-cluster' (reason: node ostf-test-savanna-cluster-ostf-test-worker-node-tt-dn-001 has error status)" This means that instance didn't appear in 'active' state. You should check openstack logs on the reason.

  2. "'RemoteCommandException: Error during command execution: "sudo chown -R $USER:$USER /etc/hadoop"" I see you are using vanilla plugin while both images you listed are for HDP plugin. Please use only images supported by plugin you use.

2014-02-07 05:21:32 -0500 answered a question can not start cluster

Hi Gowri,

Could you please provide the savanna stack trace for this error?

2014-02-05 17:12:58 -0500 answered a question Cluster creation state in Starting only

'RuntimeError: node newcluster_vanilla_worker-001 has error status' means that OpenStack moved the instance to the ERROR state. You should check the OpenStack logs (start with nova) to see what happened. The most common reasons I have faced are network misconfiguration and quota limits, but it could be anything else in your particular case.

Also, I see the word 'vanilla' in the node naming while you are using an image for HDP. Please make sure that you are using the image for the right plugin (and version of savanna).

2014-02-05 06:32:30 -0500 answered a question Why does Cluster template create need node group name and id?

As far as I can see, it has been here from day one. Probably it was added initially and no one raised this question before. I don't see any sense in having "name" required if a node group template is provided. Please file a bug on this.

2014-02-05 06:03:11 -0500 answered a question Cluster creation state in Starting only


  1. Fuel 4.0 uses savanna 0.3, so images from savanna 0.2 will not work. See (

  2. "Booting from hard disk" message is a sing that image you are using is broken. Please make sure that it is downloaded correctly and uploaded to glance with "--disk-format qcow2 --container-format=bare". centos-6_64-hdp-1.3.qcow2 should work fine with savanna 0.3.

  3. It is better to use ( for 0.3, but I don't see much difference in your case.

2013-11-04 17:34:33 -0500 answered a question Savanna with Nova-network with Grizlly


According to your config you are using neutron (use_neutron=True) and not using floating IPs (use_floating_ips=False). In this case the host with Savanna needs direct access to the VMs via the private network (usually it does not have it). You can always see the IP address Savanna is trying to use in the cluster details (the management IP column). Please check that you can SSH to the VMs from the host with Savanna after the VMs start up.

What version of savanna do you use? The master branch contains a fix with netns proxy support ( ). Enabling it (use_namespaces=True) could help with the issue.

2013-09-20 15:04:57 -0500 answered a question Error while accessing Savanna Tab in Horizon/Openstack UI

Savanna currently doesn't support PKI as the token format. Could you check the token_format property in the /etc/keystone/keystone.conf file? It should be UUID.
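In keystone releases of that era the option lived in keystone.conf roughly as below. Treat this as a sketch: the section name has moved between releases (newer keystone uses a [token] provider option instead), so check your installed version's sample config:

```ini
# /etc/keystone/keystone.conf
[signing]
token_format = UUID
```

Restart the keystone service after changing the file; existing PKI tokens will no longer validate, so clients need to re-authenticate.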