
carlosedp's profile - activity

2017-04-04 12:19:16 -0600 received badge  Famous Question (source)
2017-03-22 05:22:21 -0600 received badge  Notable Question (source)
2017-03-22 05:22:21 -0600 received badge  Popular Question (source)
2017-03-17 14:03:46 -0600 asked a question No network on CentOS Cloud converted from QCOW to VMDK

I have a brand new Mirantis Openstack 9.2 deployment and I'm trying to launch some CentOS 7 images with no success.

I downloaded CentOS-7-x86_64-GenericCloud.qcow2 and converted it to VMDK, since my OpenStack deployment runs on a VMware cluster. After that, I loaded the image into Glance and deployed it, but when the instance is launched it has no network. Here are the commands I used:

qemu-img convert -f qcow2 -O vmdk CentOS-7-x86_64-GenericCloud.qcow2 CentOS-7-x86_64-GenericCloud.vmdk

glance image-create --name CentOS-7-x86_64-GenericCloud.vmdk --container-format bare --disk-format vmdk \
  --min-disk 0 --min-ram 0 --progress \
  --property vmware_disktype=sparse --property vmware_adaptertype=ide --property hypervisor_type="vmware" \
  --file CentOS-7-x86_64-GenericCloud.vmdk
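
For reference, the converted file can also be inspected with qemu-img info before uploading, for example:

qemu-img info CentOS-7-x86_64-GenericCloud.vmdk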

I managed to log into the CentOS console and found that there is no eth0 interface (not present). There is an "eno16777728" interface in "ip a", but it has no IP address and no corresponding config file in network-scripts.

Both CentOS 7 1511 and 1611 have the same problem.

I tried with Cirros, Ubuntu, Debian and CentOS 6 Generic Cloud and they all work perfectly. All of them were converted and loaded into Glance with the same commands and parameters.

Thanks

2017-03-14 11:54:34 -0600 received badge  Famous Question (source)
2017-03-13 14:01:22 -0600 received badge  Notable Question (source)
2017-03-13 14:01:22 -0600 received badge  Popular Question (source)
2016-10-26 08:09:35 -0600 received badge  Commentator
2016-10-26 08:09:35 -0600 commented answer Mirantis Fuel 9 update to 9.1 result in an error

The update finished without errors, but neither the UI nor the CLI shows that the current version is 9.1:

[root@fuel ~]# fuel fuel-version
api: '1'
auth_required: true
feature_groups:
- experimental
- advanced
openstack_version: mitaka-9.0
release: '9.0'

Is there anything wrong?

2016-10-07 16:44:25 -0600 answered a question Change in keypair with which instance was created

If the current keypair still works, you could create a script that runs ssh commands against each instance, appends the new key to ~/.ssh/authorized_keys and then removes the old key.

Something like:

ssh user@server 'echo "NEW KEY" >> ~/.ssh/authorized_keys'
ssh user@server "sed -i.bak '/OLD KEY/d' ~/.ssh/authorized_keys"

You could build a list of all your instance IPs, or fetch them through the OpenStack API, though that is more advanced.
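
For example, a rough sketch using the openstack CLI (the Networks column format and the "user" login are assumptions; adjust them to your environment):

# loop over every instance IP, append the new key, then drop the old one
for ip in $(openstack server list -f value -c Networks | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}'); do
    ssh user@$ip 'echo "NEW KEY" >> ~/.ssh/authorized_keys'
    ssh user@$ip "sed -i.bak '/OLD KEY/d' ~/.ssh/authorized_keys"
done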

2016-10-07 16:30:13 -0600 commented answer Pass parameters to docker container in Murano

If you are interested in testing the Murano packages, let me know.

2016-10-07 16:27:57 -0600 received badge  Scholar (source)
2016-10-07 16:27:42 -0600 answered a question [Murano] How to get the instance public IP address on container deployment

Since I'm already modifying the docker-container package, I was able to fetch $.host.instance.floatingIpAddress and replace my placeholder text. The solution is:

Properties:
  ....
  extraopts:
    Contract: $.string()

Methods:
  initialize:
    Body:
      ....
      - $._address: $.host.instance.floatingIpAddress
      - $._extraOpts: str($.extraopts).replace("%FLOATINGIP%", str($._address))

Using this, I can pass --hostname %FLOATINGIP% to the docker container dynamically.

The only quirk is that this package only works when deploying on Docker Standalone Hosts. If you deploy on a Kubernetes Pod, the deployment fails since it does not find the "instance" object. I'm still looking for a solution.

2016-10-03 13:55:44 -0600 asked a question [Murano] How to get the instance public IP address on container deployment

I'm trying to deploy a container (GitLab in this case) that expects a docker parameter with the instance's IP address (--hostname [IP]).

I've implemented some changes to docker-container, docker-interface-library and docker-standalone-host to support additional run parameters like --restart, etc. I'm trying to pass --hostname %HOST% and replace %HOST% with the public IP address, but I saw that $.ApplicationEndpoints is not created until the application is deployed, and I need it at the beginning of the process.

Is there any way to fetch the public IP from the docker host? Thanks

2016-10-03 13:34:17 -0600 answered a question Pass parameters to docker container in Murano

To link containers, you can check the Influx and InfluxDB applications in the Murano apps repository (https://github.com/openstack/k8s-dock...). They implement a linked architecture where the two containers fetch information from each other.

Regarding other parameters, I've implemented a new version of the docker-container, docker-interface and standalone-host packages to support volume mapping in the format [HostDirectory]:[ContainerDirectory]. Ex. /srv/etc:/etc,/srv/log:/var/log.

I'm also implementing a way to pass additional parameters like "--restart always". It works, but I've been trying to send the application's IP address to the container (to use with --hostname [IP]), and that is still unsuccessful.
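
For reference, these options map to plain docker run flags on the Docker host, roughly like this (a generic docker example with placeholder image name and IP, not the Murano package itself):

docker run -d --restart always -v /srv/etc:/etc -v /srv/log:/var/log --hostname 10.0.0.10 my-image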

2016-08-25 09:23:57 -0600 received badge  Editor (source)
2016-08-24 12:58:17 -0600 answered a question Kilo: How to get the scheduling of VMs spread across multiple ESXi hypervisors in the case with vCenter as compute plugin

What are your cluster's DRS settings? Check whether the automation level is set to "Fully Automated".

Right click your cluster -> Edit Settings -> "vSphere DRS" -> Automation Level.

I think this will automate the distribution.

2016-08-24 12:51:52 -0600 answered a question openstack nova vmware Instance failed to spawn

Have you tried deploying the instance from a VMDK image? VMware ESXi doesn't support QCOW images. You can convert between the formats and upload the result back to Glance.

Check this guide: http://docs.openstack.org/image-guide...
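
For example (assuming a qcow2 source image; adjust the file names to your case):

qemu-img convert -f qcow2 -O vmdk myimage.qcow2 myimage.vmdk
glance image-create --name myimage --container-format bare --disk-format vmdk --file myimage.vmdk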

2016-08-24 12:48:14 -0600 commented answer Mirantis upload_cirros fails at 100% in Fuel

After replacing the switch with one that has 1Gbps ports (the old one had 100Mbps ports) the problem was solved. No more timeouts.

2016-08-19 09:07:24 -0600 commented answer Mirantis upload_cirros fails at 100% in Fuel

I think 300 seconds is not enough to upload the files to my datastore; maybe it's my network. Any idea which file contains the timeout parameter? I'm not too familiar with puppet, nailgun, astute, etc. Thanks

2016-08-18 14:37:53 -0600 commented answer Mirantis upload_cirros fails at 100% in Fuel

I tried with more nodes and hit the same problem. It seems related to deploying the images into the VMware datastores. No idea what's wrong. The only peculiarity is that the Fuel node and the controller nodes are all VMs on ESXi.

2016-08-09 15:41:20 -0600 commented answer Mirantis upload_cirros fails at 100% in Fuel

It's not complaining (other than having no HA). It fails on image upload to the datastore. I see the files getting uploaded, then they get deleted. I will try with 3 nodes to check whether the same problem happens.

2016-08-09 15:39:14 -0600 received badge  Enthusiast
2016-08-08 13:51:14 -0600 commented answer Mirantis upload_cirros fails at 100% in Fuel

By the way, I'm deploying Mitaka over Ubuntu 14.04. The MOS 9.0 default.

2016-08-08 13:49:00 -0600 answered a question Mirantis upload_cirros fails at 100% in Fuel

--Edit--

My problem was the network being too slow to transfer the files. Once I replaced the switch with one that has 1Gbps ports, the problem was solved.

--Edit--

Same problem here, trying to deploy Mirantis OpenStack 9.0 with just one node over VMware vCenter. The error appears at around 88%.

2016-08-02 20:04:53 NOTICE   Finished catalog run in 301.56 seconds
2016-08-02 20:04:52 ERR  (/Stage[main]/Main/Exec[upload_cirros_shell]/returns) change from notrun to 0 failed: Command exceeded timeout
2016-08-02 20:04:52 ERR  /usr/bin/puppet:8:in `<main>'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/util/command_line.rb:92:in `execute'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/util/command_line.rb:146:in `run'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/application.rb:381:in `run'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/util.rb:496:in `exit_on_fail'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/application.rb:381:in `block in run'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/application.rb:507:in `plugin_hook'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/application.rb:381:in `block (2 levels) in run'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/application/apply.rb:159:in `run_command'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/application/apply.rb:198:in `main'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet.rb:246:in `override'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/context.rb:64:in `override'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/application/apply.rb:236:in `block in main'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/application/apply.rb:302:in `apply_catalog'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/configurer.rb:133:in `run'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet.rb:246:in `override'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/context.rb:64:in `override'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/configurer.rb:134:in `block in run'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/configurer.rb:227:in `run_internal'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/configurer.rb:119:in `apply_catalog'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/util.rb:160:in `benchmark'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/1.9.1/benchmark.rb:295:in `realtime'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/util.rb:161:in `block in benchmark'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/configurer.rb:120:in `block in apply_catalog'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/resource/catalog.rb:168:in `apply'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet/transaction/report.rb:112:in `as_logging_destination'
2016-08-02 20:04:52 ERR  /usr/lib/ruby/vendor_ruby/puppet ...
(more)
2016-04-08 02:34:21 -0600 received badge  Notable Question (source)
2016-04-08 02:34:21 -0600 received badge  Famous Question (source)
2015-11-30 07:49:01 -0600 received badge  Popular Question (source)
2015-11-25 05:53:16 -0600 commented answer Error deploying MOS 7.0 with Zabbix 2.0 plugin

You can mirror the repositories onto your Fuel node (which has internet access) and then deploy the packages locally to all the nodes without internet access. Take a look at fuel-createmirror --help

2015-11-17 06:28:14 -0600 answered a question Error deploying MOS 7.0 with Zabbix 2.0 plugin

There is a problem deploying Zabbix 2.0.0 when the repositories have been mirrored locally.

The problem has been fixed in version 2.5.0, which can be built from the git source:

yum install createrepo rpm rpm-build dpkg-devel
easy_install pip
pip install fuel-plugin-builder
git clone https://github.com/openstack/fuel-plugin-external-zabbix.git 
fpb --check ./fuel-plugin-external-zabbix
fpb --build ./fuel-plugin-external-zabbix
fuel plugins --install fuel-plugin-external-zabbix/zabbix_monitoring-2.5-2.5.0-1.noarch.rpm
2015-11-16 13:25:45 -0600 asked a question Error deploying MOS 7.0 with Zabbix 2.0 plugin

I recently deployed a MOS 7.0 Fuel instance and created a new environment, but the Zabbix 2.0.0 plugin fails with an error that the php5-mysql package could not be found.

2015-11-16 18:53:15 +0000 /Stage[main]/Plugin_zabbix::Frontend/Package[php5-mysql] (info): Starting to evaluate the resource
2015-11-16 18:53:15 +0000 Puppet (debug): Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' php5-mysql'
2015-11-16 18:53:15 +0000 Puppet (debug): Executing '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install php5-mysql'
2015-11-16 18:53:15 +0000 Puppet (err): Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install php5-mysql' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
Package php5-mysql is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

E: Package 'php5-mysql' has no installation candidate

/usr/lib/ruby/vendor_ruby/puppet/util/execution.rb:188:in `execute'
/usr/lib/ruby/vendor_ruby/puppet/provider/command.rb:23:in `execute'
/usr/lib/ruby/vendor_ruby/puppet/provider.rb:237:in `block in has_command'
/usr/lib/ruby/vendor_ruby/puppet/provider.rb:444:in `block in create_class_and_instance_method'
/usr/lib/ruby/vendor_ruby/puppet/provider/package/apt.rb:68:in `install'
/usr/lib/ruby/vendor_ruby/puppet/type/package.rb:67:in `block (3 levels) in <module:Puppet>'
/usr/lib/ruby/vendor_ruby/puppet/property.rb:197:in `call_valuemethod'
/usr/lib/ruby/vendor_ruby/puppet/property.rb:498:in `set'
/usr/lib/ruby/vendor_ruby/puppet/property.rb:581:in `sync'
/usr/lib/ruby/vendor_ruby/puppet/transaction/resource_harness.rb:191:in `sync'
/usr/lib/ruby/vendor_ruby/puppet/transaction/resource_harness.rb:128:in `sync_if_needed'
/usr/lib/ruby/vendor_ruby/puppet/transaction/resource_harness.rb:81:in `perform_changes'
/usr/lib/ruby/vendor_ruby/puppet/transaction/resource_harness.rb:20:in `evaluate'
/usr/lib/ruby/vendor_ruby/puppet/transaction.rb:174:in `apply'
/usr/lib/ruby/vendor_ruby/puppet/transaction.rb:187:in `eval_resource'
/usr/lib/ruby/vendor_ruby/puppet/transaction.rb:117:in `call'
/usr/lib/ruby/vendor_ruby/puppet/transaction.rb:117:in `block (2 levels) in evaluate'
/usr/lib/ruby/vendor_ruby/puppet/util.rb:327:in `block in thinmark'
/usr/lib/ruby/1.9.1/benchmark.rb:295:in `realtime'
/usr/lib/ruby/vendor_ruby/puppet/util.rb:326:in `thinmark'
/usr/lib/ruby/vendor_ruby/puppet/transaction.rb:117:in `block in evaluate'
/usr/lib/ruby/vendor_ruby/puppet/graph/relationship_graph.rb:118:in `traverse'
/usr/lib/ruby/vendor_ruby/puppet/transaction.rb:108:in `evaluate'
/usr/lib/ruby/vendor_ruby/puppet/resource/catalog.rb:164:in `block in apply'
/usr/lib/ruby/vendor_ruby/puppet/util/log.rb:149:in `with_destination'
/usr/lib/ruby/vendor_ruby/puppet/transaction/report.rb:108:in `as_logging_destination'
/usr/lib/ruby/vendor_ruby/puppet/resource/catalog.rb:163:in `apply'
/usr/lib/ruby/vendor_ruby/puppet/configurer.rb:125:in `block in apply_catalog'
/usr/lib/ruby/vendor_ruby/puppet/util.rb:161:in `block in benchmark'
/usr/lib/ruby/1.9.1/benchmark.rb:295:in `realtime'
/usr/lib/ruby/vendor_ruby/puppet/util ...
(more)
2015-10-19 12:35:38 -0600 received badge  Famous Question (source)
2015-10-09 12:37:31 -0600 commented answer Plugins not installed on Mirantis Openstack 7.0 after update

I checked some repositories and a couple of them are already updated, but I believe Mirantis still needs to package and publish them on the site. https://github.com/stackforge/fuel-pl... https://github.com/stackforge/fuel-pl...

2015-10-06 13:29:58 -0600 received badge  Notable Question (source)
2015-10-06 05:06:34 -0600 received badge  Teacher (source)
2015-10-02 16:17:44 -0600 answered a question Plugins not installed on Mirantis Openstack 7.0 after update

Answering my own question, inside each plugin the file "metadata.yaml" defines a parameter with a version constraint:

Ex. fuel_version: ['6.1']

Right now all plugins only declare MOS 6.1. They must be updated to be compatible with 7.0.
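
As a rough sketch, the constraint can be widened and the plugin rebuilt with fuel-plugin-builder (the plugin path below is a placeholder, and whether an old plugin actually works on 7.0 after this is a separate question):

# hypothetical example: declare 7.0 support, then rebuild and reinstall the plugin
sed -i "s/fuel_version: \['6.1'\]/fuel_version: ['6.1', '7.0']/" ./my-plugin/metadata.yaml
fpb --build ./my-plugin
fuel plugins --install ./my-plugin/*.noarch.rpm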

2015-10-02 16:14:09 -0600 answered a question One big virtual CPU

No, this is not possible. The vCPU allocation is limited to a single host.

You can have multiple VMs whose total vCPU count exceeds the host's CPU count, but you cannot have a single VM's vCPUs spanning multiple hosts (physical servers).

2015-10-02 13:48:45 -0600 commented question Plugins not installed on Mirantis Openstack 7.0 after update

I opened a plugin package and found that there is a tag for the MOS version, which is currently set to 6.1. I believe the plugins must be updated before they can be deployed in a 7.0 environment. For now I will revert my environment to 6.1.

2015-10-02 13:44:59 -0600 received badge  Popular Question (source)
2015-10-01 16:40:17 -0600 asked a question Plugins not installed on Mirantis Openstack 7.0 after update

I had a MOS 6.1 Fuel installation with a couple of plugins (fwaas, lbaas, zabbix) installed. The environments I deployed on this version had the plugins available and their settings shown when deploying.

After updating the Fuel master node to 7.0, I deleted the environment I had in order to redeploy with the latest version, but the plugins are not installed, nor are their settings shown in the "Settings" pane.

All plugins are shown in the "Plugins" tab at the top and also in the "fuel plugins" CLI output.