Aimar47's profile - activity

2016-08-18 13:41:13 -0500 received badge  Famous Question (source)
2016-06-24 01:35:52 -0500 received badge  Famous Question (source)
2016-02-29 15:33:21 -0500 received badge  Famous Question (source)
2016-02-22 15:00:38 -0500 received badge  Notable Question (source)
2016-02-22 15:00:38 -0500 received badge  Famous Question (source)
2016-02-22 12:09:26 -0500 received badge  Notable Question (source)
2016-02-08 10:50:50 -0500 received badge  Notable Question (source)
2016-02-07 05:55:00 -0500 received badge  Popular Question (source)
2016-02-05 15:06:40 -0500 received badge  Popular Question (source)
2016-01-29 09:41:23 -0500 received badge  Popular Question (source)
2016-01-15 19:42:28 -0500 received badge  Popular Question (source)
2016-01-15 19:42:28 -0500 received badge  Notable Question (source)
2015-12-07 13:37:58 -0500 asked a question Sahara HDP Cluster: error_message=Cluster is missing a service: YARN

Hi,

I am running OpenStack Liberty (installed via RDO) and tried to set up an HDP 2.0.6 cluster using Sahara. I defined two node group templates with minimal processes, such as DataNode and Apache Ambari. After creating a cluster template that includes 1 DataNode and 1 Apache Ambari server process, cluster creation failed with a missing-service error, as shown in the sahara-api.log file:

Validation Error occurred: error_code=400, error_message=Cluster is missing a service: YARN
Error ID: 72347556-43a6-224a-a445-a2c125512ab1, error_name=MISSING_SERVICE

I do not have much experience with HDP, but has anyone tried out the recent HDP plugin, and which processes should run?

Thanks !
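For reference, the MISSING_SERVICE error usually means no node group in the cluster template runs the YARN processes. A node group template sketch that includes them might look like the following; the process names here are an assumption based on the Ambari component naming the HDP plugin uses, so check them against your plugin version:

```json
{
  "name": "hdp-master",
  "plugin_name": "hdp",
  "hadoop_version": "2.0.6",
  "flavor_id": "3",
  "node_processes": [
    "AMBARI_SERVER",
    "NAMENODE",
    "RESOURCEMANAGER",
    "HISTORYSERVER"
  ]
}
```

A worker group would then carry "DATANODE" and "NODEMANAGER"; without a RESOURCEMANAGER and NODEMANAGER somewhere in the cluster, the YARN service is considered missing.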

2015-11-23 12:41:01 -0500 received badge  Famous Question (source)
2015-11-23 12:41:01 -0500 received badge  Notable Question (source)
2015-11-23 12:41:01 -0500 received badge  Popular Question (source)
2015-11-23 02:58:55 -0500 received badge  Enthusiast
2015-11-17 16:21:16 -0500 asked a question What are the minimum requirements to run Sahara Hadoop Cluster in Liberty

Dear all,

Lately I have been struggling to run a Hadoop cluster using the Vanilla 2.7.1 plugin and have ended up with different sorts of errors. I am installing OpenStack RDO Liberty from scratch and was wondering what the minimum requirements are for a Hadoop cluster in OpenStack using Sahara, such as flavors (small, medium, large, xlarge?). The Sahara cluster mostly hangs in the Starting phase and returns an error. The instances keep running, but the Hadoop processes do not appear to be running properly when I SSH into the new instances.

I am using a host with 8 CPUs, 14 GB RAM, and 150 GB of disk space. The instance templates use the medium flavor (2 GB RAM, 1 VCPU, and 40 GB of disk space). The cluster has 1 master node and 3 slave nodes. SSHing into the instances is also very slow, which may be why Sahara was not able to start the Hadoop processes on each instance after downloading the packages. Are there any minimum hardware requirements that could be missing in the setup above? Thanks in advance.
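As a rough sanity check on the sizing above, a quick back-of-the-envelope sketch (illustrative only; the `fits` helper and the 2 GB host reserve are my own assumptions, not documented Sahara requirements):

```python
def fits(host_ram_gb, host_vcpus, n_instances, ram_gb, vcpus, reserve_gb=2):
    """Rough check: does the requested cluster fit on one host,
    leaving some RAM reserved for the host OS and OpenStack services?"""
    return (n_instances * ram_gb <= host_ram_gb - reserve_gb
            and n_instances * vcpus <= host_vcpus)

# 1 master + 3 slaves, 2 GB RAM / 1 VCPU each, on a 14 GB / 8-CPU host
print(fits(14, 8, 4, 2, 1))  # True: it fits, but with little headroom
```

By this estimate the cluster fits, but only just, which is consistent with slow SSH and services timing out during the Starting phase.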

2015-11-17 09:23:19 -0500 asked a question Sahara SSHException: Error reading SSH protocol banner

Dear all,

After installing RDO Icehouse successfully, I integrated Sahara. Unfortunately, I ran into many issues, which I resolved using debug mode and the log file. However, I was not able to provision a Hadoop Vanilla cluster using Sahara and kept getting the same error message below. The cluster used 1 master and 3 slaves; I then tried 1 master and 2 slaves, and then 1 slave. The only combination that brought the cluster to the Starting phase in the dashboard was 1 master and 1 slave with 2 GB RAM and 2 VCPUs per instance. The host has 14 GB RAM and 8 CPUs. I ended up with the following debug error trace:

2015-10-06 08:37:38.043 3236 ERROR sahara.service.api [-] Can't start services for cluster 'Hadoop-Cluster' (reason: SSHException: Error reading SSH protocol banner)
2015-10-06 08:37:38.043 3236 TRACE sahara.service.api Traceback (most recent call last):
2015-10-06 08:37:38.043 3236 TRACE sahara.service.api   File "/usr/lib/python2.6/site-packages/sahara/service/api.py", line 220, in _provision_cluster
2015-10-06 08:37:38.043 3236 TRACE sahara.service.api     plugin.start_cluster(cluster)
2015-10-06 08:37:38.043 3236 TRACE sahara.service.api   File "/usr/lib/python2.6/site-packages/sahara/plugins/vanilla/plugin.py", line 60, in start_cluster
2015-10-06 08:37:38.043 3236 TRACE sahara.service.api     cluster.hadoop_version).start_cluster(cluster)
2015-10-06 08:37:38.043 3236 TRACE sahara.service.api   File "/usr/lib/python2.6/site-packages/sahara/plugins/vanilla/v1_2_1/versionhandler.py", line 109, in start_cluster
2015-10-06 08:37:38.043 3236 TRACE sahara.service.api     with remote.get_remote(nn_instance) as r:
2015-10-06 08:37:38.043 3236 TRACE sahara.service.api   File "/usr/lib/python2.6/site-packages/sahara/utils/ssh_remote.py", line 288, in __enter__
2015-10-06 08:37:38.043 3236 TRACE sahara.service.api     _release_remote_semaphore()
2015-10-06 08:37:38.043 3236 TRACE sahara.service.api   File "/usr/lib/python2.6/site-packages/sahara/openstack/common/excutils.py", line 68, in __exit__
2015-10-06 08:37:38.043 3236 TRACE sahara.service.api     six.reraise(self.type_, self.value, self.tb)
2015-10-06 08:37:38.043 3236 TRACE sahara.service.api   File "/usr/lib/python2.6/site-packages/sahara/utils/ssh_remote.py", line 284, in __enter__
2015-10-06 08:37:38.043 3236 TRACE sahara.service.api     self.bulk = BulkInstanceInteropHelper(self.instance)
2015-10-06 08:37:38.043 3236 TRACE sahara.service.api   File "/usr/lib/python2.6/site-packages/sahara/utils/ssh_remote.py", line 419, in __init__
2015-10-06 08:37:38.043 3236 TRACE sahara.service.api     procutils.shutdown_subprocess(self.proc, _cleanup)
2015-10-06 08:37:38.043 3236 TRACE sahara.service.api   File "/usr/lib/python2.6/site-packages/sahara/openstack/common/excutils.py", line 68, in __exit__
2015-10-06 08:37:38.043 3236 TRACE sahara.service.api     six.reraise(self.type_, self.value, self.tb)
2015-10-06 08:37:38.043 3236 TRACE sahara.service.api   File "/usr/lib/python2.6/site-packages/sahara/utils/ssh_remote.py", line 416, in __init__
2015-10-06 08:37:38.043 3236 ...
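One way to narrow down "Error reading SSH protocol banner" is to check from the controller whether the instance's SSH daemon sends a banner at all. A minimal sketch using only the standard library (the host value would be each instance's management IP, which is a placeholder here):

```python
import socket

def read_ssh_banner(host, port=22, timeout=10):
    """Connect to an SSH port and return the server's identification
    string -- the 'banner' the trace above fails to read."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        return s.recv(255).decode(errors="replace").strip()
```

A timeout or empty banner when running this against an instance would point at networking, security groups, or slow instance boot rather than at Sahara itself.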
2015-11-17 02:18:42 -0500 commented question Error Sahara create cluster: 'Error attach volume to instance

Could it be a hardware capability issue? I am running an RDO all-in-one installation with 14 GB RAM and 8 cores.

2015-11-16 10:37:33 -0500 asked a question Error Sahara create cluster: 'Error attach volume to instance

Hi all,

When trying to create a new Hadoop cluster using Sahara in OpenStack, we ended up with the following debug error trace in the sahara-api log file:

2015-10-05 06:31:18.917 3139 ERROR sahara.context [-] Thread 'attach-volumes-for-instance Hadoop-Cluster Hadoop-Master-Node-001' fails with exception: 'Error attach volume to instance Hadoop-Cluster-Hadoop-Master-Node-001'
2015-10-05 06:31:18.953 3139 ERROR sahara.context [-] Thread 'attach-volumes-for-ng-Hadoop-Master-Node' fails with exception: 'An error occurred in thread 'attach-volumes-for-instance-Hadoop-Cluster-Hadoop-Master-Node-001': Error attach volume to instance Hadoop-Cluster-Hadoop-Master-Node-001'

2015-10-05 06:31:39.659 3139 ERROR sahara.context [-] Thread 'attach-volumes-for-ng-Hadoop-Worker-Node' fails with exception: 'An error occurred in thread 'attach-volumes-for-instance-Hadoop-Cluster-Hadoop-Worker-Node-001': Error attach volume to instance Hadoop-Cluster-Hadoop-Worker-Node-001'
2015-10-05 06:31:41.613 3139 ERROR sahara.context [-] Thread 'cluster-creating-d8d21ee9-03f1-4a26-90de-b534a6cb5fc5' fails with exception: 'An error occurred in thread 'attach-volumes-for-ng-Hadoop-Master-Node': An error occurred in thread 'attach-volumes-for-instance-Hadoop-Cluster-Hadoop-Master-Node-001': Error attach volume to instance PP-Hadoop-Cluster-Hadoop-Master-Node-001'

What looks strange is that neither the master nor the worker node templates have been configured to use Cinder volumes; they are supposed to create ephemeral disks as the default storage option.

The implementation is configured to create a Hadoop cluster with Vanilla 2.3.0 on the Icehouse release (RDO installation). The cluster status in Sahara loops on 'Waiting' and finishes with an error. The image used is the Ubuntu Vanilla image from the Mirantis Sahara file repository, with a medium flavor per instance (2 VCPUs, 40 GB disk, and 4 GB RAM).

Googling was not really helping this time. Has anyone faced such an error in Sahara?

Thanks in advance !
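To confirm whether the node group templates really request Cinder volumes, it may help to inspect the stored templates: in a Sahara node group template, Cinder usage is driven by the volumes_per_node (and volumes_size) fields. A sketch of what a volume-free worker template would look like (the values are illustrative, not taken from the actual deployment):

```json
{
  "name": "hadoop-worker",
  "plugin_name": "vanilla",
  "hadoop_version": "2.3.0",
  "node_processes": ["datanode", "nodemanager"],
  "volumes_per_node": 0
}
```

With volumes_per_node at 0, Sahara should not attempt to attach any Cinder volume, so the attach-volumes threads in the trace suggest the templates actually in use differ from the ones described.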