Why can't I install Keystone with the openstack/keystone Puppet module?

Hi,

I am trying to set up a controller node using the OpenStack (Kilo) Puppet modules.
I am testing on a CentOS 7 VM. I am running PE 3.7* and have installed the openstack/keystone Puppet module on my Puppet master.
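
In case it matters, the keystone module version on the master can be confirmed with a generic check like this (under PE the puppet binary may live under /opt/puppet/bin):

puppet module list | grep -i keystone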

Here is my Puppet code:

class wrapcontroller(
    ... big long list of parameters ...
) {

    file { $ldaps_cert_filename:
        ensure => 'present',
        content => $ldaps_cert_content,
    }   

    exec { '/bin/yum -y install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm':
        unless => '/bin/rpm -q epel-release',
    } 

    exec { '/bin/yum -y install http://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm':
        unless => '/bin/rpm -q rdo-release',
    } 

    # need sysfsutils for live migration
    package { 'sysfsutils':
        ensure        => 'present',
    }  

    class {'::memcached':
        user            => 'memcached',
        max_connections => '8192',
        max_memory      => '4096',
        tcp_port        => '11211',
        udp_port        => '11211',
        listen_ip       => '0.0.0.0',
        manage_firewall => true,
        processorcount  => '32',
    } 


    package { 'mariadb':
        ensure => present,
    } 
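    # Run db_sync only when the keystone database is reachable but still has no tables.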
    exec { 'keystone_database_init':
        command => '/usr/bin/keystone-manage db_sync',
        onlyif => [ 
            "/usr/bin/mysql -u$keystone_db_user -p$keystone_db_pass -h$db_host -P$db_port $keystone_db_name -e 'show tables'",
            "/usr/bin/test -z \"`/usr/bin/mysql -u$keystone_db_user -p$keystone_db_pass -h$db_host -P$db_port $keystone_db_name -e 'show tables'`\"" 
        ],
        require => Package['mariadb'],
    } 

    class { '::keystone':
        debug               => $debug,
        verbose             => $debug,
        admin_token         => $keystone_auth_token,
        database_connection => "mysql://${keystone_db_user}:${keystone_db_pass}@${db_host}:${db_port}/${keystone_db_name}",
        token_driver        => 'keystone.token.persistence.backends.memcache_pool.Token',
        token_provider      => 'keystone.token.providers.uuid.Provider',
        memcache_servers    => split($memcached_servers, ','),
        rabbit_hosts        => split($rabbit_hosts, ','),
        rabbit_password     => $rabbit_password,
        rabbit_userid       => $rabbit_userid,
        admin_endpoint      => "https://${controller_vip_name}:35357/",
        public_endpoint     => "https://${controller_vip_name}:5000/",
        enable_ssl          => true,
    }

   class { 'keystone::ldap':
       url                         => $ldapurl,
       user                        => $keystone_ldap_user,
       password                    => $keystone_ldap_pass,
       suffix                      => $ldapsuffix,
       query_scope                 => 'one',
       user_tree_dn                => $user_tree_dn,
       user_objectclass            => 'person',
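       # The user_enabled_* settings follow Active Directory's userAccountControl
       # semantics: bit 2 flags a disabled account, 512 is a normal enabled account.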
       user_enabled_attribute      => 'userAccountControl',
       user_enabled_mask           => 2,
       user_enabled_default        => 512,
       page_size                   => 500,
       use_tls                     => 'True',
       user_id_attribute           => 'cn',
       user_name_attribute         => 'cn',
       tls_cacertfile              => $ldaps_cert_filename,
       tls_req_cert                => 'never',
       user_allow_create           => 'False',
       user_allow_update           => 'False',
       user_allow_delete           => 'False',
       group_allow_create          => 'False',
       group_allow_update          => 'False',
       group_allow_delete          => 'False',
   } 

   notify { 'line 196 doing class keystone role admin':} ->
   class { '::keystone::roles::admin':
       admin        => $keystone_admin_user,
       email        => $keystone_admin_email,
       password     => $keystone_admin_password,
   } ->
   notify { 'line 202 did class keystone role admin':} 

   class { '::keystone::endpoint':
       public_url => "http://${controller_vip_name}:5000/",
       admin_url  => "http://${controller_vip_name}:35357/",
   } 

   keystone_user { $nova_admin_user:
       ensure   => present,
       email    => $keystone_admin_email,
       tenant   => $admin_tenant_name,
       enabled  => true,
       password => $nova_admin_password,
   } 
}

Here is the output from puppet agent -t:

# puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Error: NetworkManager is not running.
Info: Caching catalog for svl-ost-mgmt-centos-001.cisco.com
Warning: The tenant parameter is deprecated and will be removed in the future. Please use keystone_user_role to assign a user to a project.
Warning: The ignore_default_tenant parameter is deprecated and will be removed in the future.
Warning: The tenant parameter is deprecated and will be removed in the future. Please use keystone_user_role to assign a user to a project.
Info: Applying configuration version '1449179610'
Notice: line 196 doing class keystone role admin
Notice: /Stage[main]/Wrapcontroller/Notify[line 196 doing class keystone role admin]/message: defined 'message' as 'line 196 doing class keystone role admin'

At this point puppet agent -t appears to hang for 10 minutes, after which the output continues with:

Error: Could not prefetch keystone_tenant provider 'openstack': undefined method `each' for nil:NilClass

While Puppet is running on my controller node, I see this in the output of ps -ef --forest:

root      5689   820  0 14:43 ?        00:00:00  \_ sshd: root@pts/0
root      6645  5689  0 14:44 pts/0    00:00:00  |   \_ -bash
root     28710  6645  1 16:33 pts/0    00:00:09  |       \_ /opt/puppet/bin/ruby /usr/local/bin/puppet agent -t
root     28913 28710 18 16:44 ?        00:00:00  |           \_ /usr/bin/python2 /usr/bin/openstack domain list --quiet --format csv

The /usr/bin/openstack domain list ... command looks like it is hanging.
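
In case it helps with diagnosis, this is roughly how I would try to reproduce that command by hand, outside of Puppet. The token and URL environment variables are only my assumption about how the keystone provider authenticates (the admin token against the admin endpoint); the values are placeholders:

# Placeholder credentials: my guess at what the puppet keystone provider uses.
export OS_TOKEN='<admin_token from /etc/keystone/keystone.conf>'
export OS_URL='https://<controller_vip_name>:35357/v3'
export OS_IDENTITY_API_VERSION=3

# Time-limit the call so a hang is obvious, and turn on client-side debug output.
timeout 120 /usr/bin/openstack --debug domain list --format csv
echo "openstack exit status: $?"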

My question is: what am I doing wrong, and why is /usr/bin/openstack domain list --quiet --format csv hanging?
