OpenStack Operations Guide (2014)

Part II. Operations

Chapter 18. Upgrades

With the exception of Object Storage, upgrading from one version of OpenStack to another can take a great deal of effort. Until the situation improves, this chapter provides some guidance on the operational aspects that you should consider for performing an upgrade based on detailed steps for a basic architecture.

Pre-Upgrade Testing Environment

Probably the most important step of all is the pre-upgrade testing. Especially if you are upgrading immediately after release of a new version, undiscovered bugs might hinder your progress. Some deployers prefer to wait until the first point release is announced. However, if you have a significant deployment, you might follow the development and testing of the release, thereby ensuring that bugs for your use cases are fixed.

Each OpenStack cloud is different, and as a result, even with what may seem a near-identical architecture to this guide, you must still test upgrades between versions in your environment. For this, you need an approximate clone of your environment.

However, the clone does not need to be the same size or use identical hardware as the production environment; few of us have that luxury. It is important to consider the hardware and scale of the cloud you are upgrading, but here are some tips for avoiding the expense of duplicating it exactly:

Use your own cloud

The simplest place to start testing the next version of OpenStack is by setting up a new environment inside your own cloud. This may seem odd—especially the double virtualization used in running compute nodes—but it’s a sure way to very quickly test your configuration.

Use a public cloud

Because your own cloud is unlikely to have sufficient spare capacity for a test at the scale of the entire cloud, consider using a public cloud to test the scalability limits of your cloud controller configuration. Most public clouds bill by the hour, which means it can be inexpensive to perform even a test with many nodes.

Make another storage endpoint on the same system

If you use an external storage plug-in or shared file system with your cloud, it is often possible to test whether it works with the new version by creating a second share or endpoint. This enables you to test the system before entrusting your production storage to the new version.

Watch the network

Even at smaller scale, watching network traffic during testing can show whether something is going wrong in intercomponent communication: an unexpected flood of packets between services is often the first sign of trouble.

To actually set up the test environment, there are several methods. Some prefer to do a full manual install using the OpenStack Installation Guides and then see what the final configuration files look like and which packages were installed. Others prefer to create a clone of their automated configuration infrastructure with changed package repository URLs and then alter the configuration until it starts working. Either approach is valid, and which you use depends on experience.

An upgrade pre-testing system is excellent for getting the configuration to work; however, it is important to note that the historical use of the system and differences in user interaction can affect the success of upgrades, too. We have seen database migrations encounter a bug (later fixed!) caused by slight table differences between fresh Grizzly installs and systems that had migrated from Folsom to Grizzly.

Artificial scale testing can go only so far. Once your cloud is upgraded, you’ll also need to pay careful attention to the performance aspects of your cloud.

Preparing for a Rollback

Like all major system upgrades, your upgrade could fail for one or more difficult-to-determine reasons. You should prepare for this situation by leaving the ability to roll back your environment to the previous release, including databases, configuration files, and packages. We provide an example process for rolling back your environment in “Rolling Back a Failed Upgrade”.

Upgrades

The upgrade process generally follows these steps:

1.    Perform some “cleaning” of the environment prior to starting the upgrade process to ensure a consistent state. For example, instances not fully purged from the system after deletion may cause indeterminate behavior.

2.    Read the release notes and documentation.

3.    Find incompatibilities between your versions.

4.    Develop an upgrade procedure and assess it thoroughly using a test environment similar to your production environment.

5.    Run the upgrade procedure on the production environment.

You can perform an upgrade with operational instances, but this strategy can be dangerous. You might consider using live migration to temporarily relocate instances to other compute nodes while performing upgrades. However, you must ensure database consistency throughout the process; otherwise your environment may become unstable. Also, don’t forget to provide sufficient notice to your users, including giving them plenty of time to perform their own backups.
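If you do choose to relocate instances first, the nova client of this era can be used as sketched below; the instance UUID and host name are placeholders, and whether you need block migration depends on your shared-storage setup:

```shell
# Move an instance off a compute node before upgrading that node.
# <instance-uuid> and <target-host> are placeholders.
nova live-migration <instance-uuid> <target-host>

# Without shared instance storage, request block migration instead:
nova live-migration --block-migrate <instance-uuid> <target-host>
```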

The following order for service upgrades seems the most successful:

1.    Upgrade the OpenStack Identity Service (keystone).

2.    Upgrade the OpenStack Image Service (glance).

3.    Upgrade OpenStack Compute (nova), including networking components.

4.    Upgrade OpenStack Block Storage (cinder).

5.    Upgrade the OpenStack dashboard.

The general upgrade process includes the following steps:

1.    Create a backup of configuration files and databases.

2.    Update the configuration files according to the release notes.

3.    Upgrade the packages using your distribution’s package manager.

4.    Stop services, update database schemas, and restart services.

5.    Verify proper operation of your environment.
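A minimal smoke test for the last step might look like the following. This is a sketch: it assumes an admin credentials file has been sourced, and the exact client commands depend on the versions you have installed.

```shell
# Quick checks that each upgraded service responds.
keystone user-list          # Identity answers authenticated requests
glance image-list           # Image Service reaches its database
nova list --all-tenants     # Compute API and database agree
cinder list                 # Block Storage API responds
nova-manage service list    # every nova service should report ":-)"
```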

How to Perform an Upgrade from Grizzly to Havana—Ubuntu

For this section, we assume that you are starting with the architecture provided in the OpenStack Installation Guide and upgrading to the same architecture for Havana. All nodes should run Ubuntu 12.04 LTS. This section primarily addresses upgrading core OpenStack services, such as the Identity Service (keystone); Image Service (glance); Compute (nova), including networking; Block Storage (cinder); and the dashboard.

Impact on Users

The upgrade process will interrupt management of your environment, including the dashboard. If you properly prepare for this upgrade, tenant instances will continue to operate normally.

Upgrade Considerations

Always review the release notes before performing an upgrade to learn about newly available features that you may want to enable and deprecated features that you should disable.

Perform a Backup

Save the configuration files on all nodes, as shown here:

# for i in keystone glance nova cinder openstack-dashboard
> do mkdir $i-grizzly
> done
# for i in keystone glance nova cinder openstack-dashboard
> do cp -r /etc/$i/* $i-grizzly/
> done

NOTE

You can modify this example script on each node to handle different services.
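For example, a per-node variant might back up only the services actually present on the node. This is an illustrative sketch, not part of the official procedure; the backup_configs helper and its overridable configuration root are assumptions:

```shell
# Back up the configuration of whichever services exist on this node,
# so the same script works on controller, compute, and storage nodes.
backup_configs() {
    root=${1:-/etc}    # configuration root; overridable for testing
    for i in keystone glance nova cinder openstack-dashboard
    do
        if [ -d "$root/$i" ]; then
            mkdir -p "$i-grizzly"
            cp -r "$root/$i/." "$i-grizzly/"
        fi
    done
}
```

Run it from a scratch directory on each node; services that are not installed are simply skipped.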

Back up all databases on the controller:

# mysqldump -u root -p --opt --add-drop-database \
  --all-databases > grizzly-db-backup.sql

Manage Repositories

On all nodes, remove the repository for Grizzly packages and add the repository for Havana packages:

# apt-add-repository -r cloud-archive:grizzly

# apt-add-repository cloud-archive:havana

WARNING

Make sure any automatic updates are disabled.
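On Ubuntu, the usual culprit is unattended-upgrades. A sketch of one way to disable it for the upgrade window follows; the helper name and the 99-upgrade-window file are illustrative, and the directory argument exists only so the snippet can be exercised outside /etc:

```shell
# Turn off apt's periodic package-list updates and unattended
# upgrades by writing an override into apt's configuration directory.
disable_auto_updates() {
    confdir=${1:-/etc/apt/apt.conf.d}
    printf '%s\n%s\n' \
        'APT::Periodic::Update-Package-Lists "0";' \
        'APT::Periodic::Unattended-Upgrade "0";' \
        > "$confdir/99-upgrade-window"
}
```

Remove the override file and re-enable updates once you have verified the upgrade.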

Update Configuration Files

Update the glance configuration on the controller node for compatibility with Havana.

If not currently present and configured as follows, add or modify the following keys in /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf:

[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller
admin_tenant_name = service
admin_user = glance
admin_password = GLANCE_PASS

[paste_deploy]
flavor = keystone

If currently present, remove the following key from the [filter:authtoken] section in /etc/glance/glance-api-paste.ini and /etc/glance/glance-registry-paste.ini:

[filter:authtoken]
flavor = keystone

Update the nova configuration on all nodes for compatibility with Havana.

Add the new [database] section and associated key to /etc/nova/nova.conf:

[database]
connection = mysql://nova:NOVA_DBPASS@controller/nova

Remove defunct configuration from the [DEFAULT] section in /etc/nova/nova.conf:

[DEFAULT]
sql_connection = mysql://nova:NOVA_DBPASS@controller/nova

If not already present and configured as follows, add or modify the following keys in /etc/nova/nova.conf:

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS

On all compute nodes, increase the DHCP lease time (measured in seconds) in /etc/nova/nova.conf to enable currently active instances to continue leasing their IP addresses during the upgrade process:

[DEFAULT]
dhcp_lease_time = 86400

WARNING

Setting this value too high may cause more dynamic environments to run out of available IP addresses. Use an appropriate value for your environment.

You must restart dnsmasq and the networking component of Compute to enable the new DHCP lease time:

# pkill -9 dnsmasq

# service nova-network restart

Update the Cinder configuration on the controller and storage nodes for compatibility with Havana.

Add the new [database] section and associated key to /etc/cinder/cinder.conf:

[database]
connection = mysql://cinder:CINDER_DBPASS@controller/cinder

Remove defunct configuration from the [DEFAULT] section in /etc/cinder/cinder.conf:

[DEFAULT]
sql_connection = mysql://cinder:CINDER_DBPASS@controller/cinder

If not currently present and configured as follows, add or modify the following key in /etc/cinder/cinder.conf:

[keystone_authtoken]
auth_uri = http://controller:5000

Update the dashboard configuration on the controller node for compatibility with Havana.

The dashboard installation procedure and configuration file changed substantially between Grizzly and Havana. In particular, if you are running Django 1.5 or later, you must ensure that /etc/openstack-dashboard/local_settings contains a correctly configured ALLOWED_HOSTS key listing the hostnames recognized by the dashboard.

If users will access your dashboard using http://dashboard.example.com, you would set:

ALLOWED_HOSTS=['dashboard.example.com']

If users will access your dashboard on the local system, you would set:

ALLOWED_HOSTS=['localhost']

If users will access your dashboard using an IP address in addition to a hostname, you would set:

ALLOWED_HOSTS=['dashboard.example.com', '192.168.122.200']

Upgrade Packages on the Controller Node

Upgrade packages on the controller node to Havana, as shown below:

# apt-get update

# apt-get dist-upgrade

NOTE

Depending on your specific configuration, performing a dist-upgrade may restart services supplemental to your OpenStack environment. For example, if you use Open-iSCSI for Block Storage volumes and the upgrade includes a new open-iscsi package, the package manager will restart the Open-iSCSI services, which may disconnect your users' volumes.

The package manager will ask you about updating various configuration files. We recommend denying these changes. The package manager will append .dpkg-dist to the end of newer versions of existing configuration files. You should consider adopting conventions associated with the newer configuration files and merging them with your existing configuration files after completing the upgrade process.
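A sketch of one way to review those files (the review_dpkg_dist helper is illustrative) is to walk /etc and diff each .dpkg-dist file against the configuration you kept:

```shell
# Show a unified diff between each kept configuration file and the
# .dpkg-dist version the package manager wrote next to it.
review_dpkg_dist() {
    root=${1:-/etc}    # search root; overridable for testing
    find "$root" -name '*.dpkg-dist' | while read -r new
    do
        echo "=== ${new%.dpkg-dist}"
        diff -u "${new%.dpkg-dist}" "$new" || :
    done
}
```

Run it after the upgrade and fold any wanted changes into your kept configuration files.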

Stop Services, Update Database Schemas, and Restart Services on the Controller Node

Stop each service, run the database synchronization command if necessary to update the associated database schema, and restart each service to apply the new configuration. Some services require additional commands:

OpenStack Identity

# service keystone stop

# keystone-manage token_flush

# keystone-manage db_sync

# service keystone start

OpenStack Image Service

# service glance-api stop

# service glance-registry stop

# glance-manage db_sync

# service glance-api start

# service glance-registry start

OpenStack Compute

# service nova-api stop

# service nova-scheduler stop

# service nova-conductor stop

# service nova-cert stop

# service nova-consoleauth stop

# service nova-novncproxy stop

# nova-manage db sync

# service nova-api start

# service nova-scheduler start

# service nova-conductor start

# service nova-cert start

# service nova-consoleauth start

# service nova-novncproxy start

OpenStack Block Storage

# service cinder-api stop

# service cinder-scheduler stop

# cinder-manage db sync

# service cinder-api start

# service cinder-scheduler start

The controller node update is complete. Now you can upgrade the compute nodes.

Upgrade Packages and Restart Services on the Compute Nodes

Upgrade packages on the compute nodes to Havana:

# apt-get update

# apt-get dist-upgrade

NOTE

Make sure you have removed the repository for Grizzly packages and added the repository for Havana packages.

WARNING

Due to a packaging issue, this command may fail with the following error:

Errors were encountered while processing:
 /var/cache/apt/archives/qemu-utils_1.5.0+dfsg-3ubuntu5~cloud0_amd64.deb
 /var/cache/apt/archives/qemu-system-common_1.5.0+dfsg-3ubuntu5~cloud0_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

You can fix this issue by using the following command:

# apt-get -f install

The packaging system will ask about updating the /etc/nova/api-paste.ini file. As with the controller upgrade, we recommend denying these changes and reviewing the .dpkg-dist file after completing the upgrade process.

To restart compute services:

# service nova-compute restart

# service nova-network restart

# service nova-api-metadata restart

Upgrade Packages and Restart Services on the Block Storage Nodes

Upgrade packages on the storage nodes to Havana:

# apt-get update

# apt-get dist-upgrade

NOTE

Make sure you have removed the repository for Grizzly packages and added the repository for Havana packages.

The packaging system will ask about updating the /etc/cinder/api-paste.ini file. As with the controller upgrade, we recommend denying these changes and reviewing the .dpkg-dist file after completing the upgrade process.

To restart Block Storage services:

# service cinder-volume restart

How to Perform an Upgrade from Grizzly to Havana—Red Hat Enterprise Linux and Derivatives

For this section, we assume that you are starting with the architecture provided in the OpenStack Installation Guide and upgrading to the same architecture for Havana. All nodes should run Red Hat Enterprise Linux 6.4 or compatible derivatives. Newer minor releases should also work. This section primarily addresses upgrading core OpenStack services, such as the Identity Service (keystone); Image Service (glance); Compute (nova), including networking; Block Storage (cinder); and the dashboard.

Impact on Users

The upgrade process will interrupt management of your environment, including the dashboard. If you properly prepare for this upgrade, tenant instances will continue to operate normally.

Upgrade Considerations

Always review the release notes before performing an upgrade to learn about newly available features that you may want to enable and deprecated features that you should disable.

Perform a Backup

First, save the configuration files on all nodes:

# for i in keystone glance nova cinder openstack-dashboard
> do mkdir $i-grizzly
> done
# for i in keystone glance nova cinder openstack-dashboard
> do cp -r /etc/$i/* $i-grizzly/
> done

NOTE

You can modify this example script on each node to handle different services.

Next, back up all databases on the controller:

# mysqldump -u root -p --opt --add-drop-database \
  --all-databases > grizzly-db-backup.sql

Manage Repositories

On all nodes, remove the repository for Grizzly packages and add the repository for Havana packages:

# yum erase rdo-release-grizzly

# yum install \
  http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-7.noarch.rpm

WARNING

Make sure any automatic updates are disabled.

NOTE

Consider checking for newer versions of the Havana repository.

Update Configuration Files

Update the glance configuration on the controller node for compatibility with Havana.

If not currently present and configured as follows, add or modify the following keys in /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf:

# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \
  auth_uri http://controller:5000
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \
  auth_host controller
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \
  admin_tenant_name service
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \
  admin_user glance
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \
  admin_password GLANCE_PASS
# openstack-config --set /etc/glance/glance-api.conf paste_deploy \
  flavor keystone
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \
  auth_uri http://controller:5000
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \
  auth_host controller
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \
  admin_tenant_name service
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \
  admin_user glance
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \
  admin_password GLANCE_PASS
# openstack-config --set /etc/glance/glance-registry.conf paste_deploy \
  flavor keystone

If currently present, remove the following key from the [filter:authtoken] section in /etc/glance/glance-api-paste.ini and /etc/glance/glance-registry-paste.ini:

[filter:authtoken]
flavor = keystone

Update the nova configuration on all nodes for compatibility with Havana.

Add the new [database] section and associated key to /etc/nova/nova.conf:

# openstack-config --set /etc/nova/nova.conf database \
  connection mysql://nova:NOVA_DBPASS@controller/nova

Remove defunct database configuration from /etc/nova/nova.conf:

# openstack-config --del /etc/nova/nova.conf DEFAULT sql_connection

If not already present and configured as follows, add or modify the following keys in /etc/nova/nova.conf:

# openstack-config --set /etc/nova/nova.conf keystone_authtoken \
  auth_uri http://controller:5000/v2.0
# openstack-config --set /etc/nova/nova.conf keystone_authtoken \
  auth_host controller
# openstack-config --set /etc/nova/nova.conf keystone_authtoken \
  admin_tenant_name service
# openstack-config --set /etc/nova/nova.conf keystone_authtoken \
  admin_user nova
# openstack-config --set /etc/nova/nova.conf keystone_authtoken \
  admin_password NOVA_PASS

On all compute nodes, increase the DHCP lease time (measured in seconds) in /etc/nova/nova.conf to enable currently active instances to continue leasing their IP addresses during the upgrade process, as shown here:

# openstack-config --set /etc/nova/nova.conf DEFAULT \
  dhcp_lease_time 86400

WARNING

Setting this value too high may cause more dynamic environments to run out of available IP addresses. Use an appropriate value for your environment.

You must restart dnsmasq and the nova networking service to enable the new DHCP lease time:

# pkill -9 dnsmasq

# service openstack-nova-network restart

Update the cinder configuration on the controller and storage nodes for compatibility with Havana.

Add the new [database] section and associated key to /etc/cinder/cinder.conf:

# openstack-config --set /etc/cinder/cinder.conf database \
  connection mysql://cinder:CINDER_DBPASS@controller/cinder

Remove defunct database configuration from /etc/cinder/cinder.conf:

# openstack-config --del /etc/cinder/cinder.conf DEFAULT sql_connection

If not currently present and configured as follows, add or modify the following key in /etc/cinder/cinder.conf:

# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \
  auth_uri http://controller:5000

Update the dashboard configuration on the controller node for compatibility with Havana.

The dashboard installation procedure and configuration file changed substantially between Grizzly and Havana. In particular, if you are running Django 1.5 or later, you must ensure that /etc/openstack-dashboard/local_settings contains a correctly configured ALLOWED_HOSTS key listing the hostnames recognized by the dashboard.

If users will access your dashboard using http://dashboard.example.com, you would set:

ALLOWED_HOSTS=['dashboard.example.com']

If users will access your dashboard on the local system, you would set:

ALLOWED_HOSTS=['localhost']

If users will access your dashboard using an IP address in addition to a hostname, you would set:

ALLOWED_HOSTS=['dashboard.example.com', '192.168.122.200']

Upgrade Packages on the Controller Node

Upgrade packages on the controller node to Havana:

# yum upgrade

NOTE

Some services may terminate with an error during the package upgrade process. If this could cause a problem in your environment, consider stopping all services before upgrading them to Havana.

Install the OpenStack SELinux package on the controller node:

# yum install openstack-selinux

NOTE

The package manager will append .rpmnew to the end of newer versions of existing configuration files. You should consider adopting conventions associated with the newer configuration files and merging them with your existing configuration files after completing the upgrade process.
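As on Ubuntu, you can review the leftover files by diffing each .rpmnew file against the configuration you kept; the review_rpmnew helper below is an illustrative sketch:

```shell
# Show a unified diff between each kept configuration file and the
# .rpmnew version the package manager wrote next to it.
review_rpmnew() {
    root=${1:-/etc}    # search root; overridable for testing
    find "$root" -name '*.rpmnew' | while read -r new
    do
        echo "=== ${new%.rpmnew}"
        diff -u "${new%.rpmnew}" "$new" || :
    done
}
```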

Stop Services, Update Database Schemas, and Restart Services on the Controller Node

Stop each service, run the database synchronization command if necessary to update the associated database schema, and restart each service to apply the new configuration. Some services require additional commands:

OpenStack Identity

# service openstack-keystone stop

# keystone-manage token_flush

# keystone-manage db_sync

# service openstack-keystone start

OpenStack Image Service

# service openstack-glance-api stop

# service openstack-glance-registry stop

# glance-manage db_sync

# service openstack-glance-api start

# service openstack-glance-registry start

OpenStack Compute

# service openstack-nova-api stop

# service openstack-nova-scheduler stop

# service openstack-nova-conductor stop

# service openstack-nova-cert stop

# service openstack-nova-consoleauth stop

# service openstack-nova-novncproxy stop

# nova-manage db sync

# service openstack-nova-api start

# service openstack-nova-scheduler start

# service openstack-nova-conductor start

# service openstack-nova-cert start

# service openstack-nova-consoleauth start

# service openstack-nova-novncproxy start

OpenStack Block Storage

# service openstack-cinder-api stop

# service openstack-cinder-scheduler stop

# cinder-manage db sync

# service openstack-cinder-api start

# service openstack-cinder-scheduler start

The controller node update is complete. Now you can upgrade the compute nodes.

Upgrade Packages and Restart Services on the Compute Nodes

Upgrade packages on the compute nodes to Havana:

# yum upgrade

NOTE

Make sure you have removed the repository for Grizzly packages and added the repository for Havana packages.

Install the OpenStack SELinux package on the compute nodes:

# yum install openstack-selinux

Restart compute services:

# service openstack-nova-compute restart

# service openstack-nova-network restart

# service openstack-nova-metadata-api restart

Upgrade Packages and Restart Services on the Block Storage Nodes

Upgrade packages on the storage nodes to Havana:

# yum upgrade

NOTE

Make sure you have removed the repository for Grizzly packages and added the repository for Havana packages.

Install the OpenStack SELinux package on the storage nodes:

# yum install openstack-selinux

Restart Block Storage services:

# service openstack-cinder-volume restart

Cleaning Up and Final Configuration File Updates

On all distributions, you need to perform some final tasks to complete the upgrade process.

Decrease DHCP timeouts by modifying /etc/nova/nova.conf on the compute nodes back to the original value for your environment.

Update all of the .ini files to match passwords and pipelines as required for Havana in your environment.

After a migration, your users will see different results from nova image-list and glance image-list unless you match up policies for access to private images. To do so, edit /etc/glance/policy.json and /etc/nova/policy.json to contain "context_is_admin": "role:admin", which limits access to private images for projects.
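After editing, a quick check that both files carry the rule can be sketched as follows; the check_admin_rule helper is illustrative:

```shell
# Verify that each given policy file contains the admin-context rule.
check_admin_rule() {
    for f in "$@"
    do
        if grep -q '"context_is_admin": *"role:admin"' "$f"; then
            echo "$f: ok"
        else
            echo "$f: rule missing"
            return 1
        fi
    done
}
```

Invoke it as check_admin_rule /etc/glance/policy.json /etc/nova/policy.json.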

Thoroughly test the environment, and then let your users know that their cloud is running normally again.

Rolling Back a Failed Upgrade

While we do not wish this fate upon anyone, upgrades involve complex operations and can fail. This section provides guidance for rolling back to a previous release of OpenStack. Although only tested on Ubuntu, other distributions follow a similar procedure.

In this section, we consider only the most immediate case: you have taken down production management services in preparation for an upgrade, completed part of the upgrade process, discovered one or more problems not encountered during testing, and need to roll back your environment to the original “known good” state. We specifically assume that you did not make any state changes after attempting the upgrade process: no new instances, networks, storage volumes, etc.

Within this scope, you need to accomplish three main steps to successfully roll back your environment:

§  Roll back configuration files

§  Roll back databases

§  Roll back packages

The upgrade instructions provided in earlier sections ensure that you have proper backups of your databases and configuration files. You should read through this section carefully and verify that you have the requisite backups to restore. Rolling back upgrades is a tricky process because distributions tend to put much more effort into testing upgrades than downgrades. Broken downgrades often take significantly more effort to troubleshoot and, hopefully, resolve than broken upgrades. Only you can weigh the risks of trying to push a failed upgrade forward versus rolling it back. Generally, we consider rolling back the very last option.

The following steps described for Ubuntu have worked on at least one production environment, but they may not work for all environments.

Perform the rollback from Havana to Grizzly

1.    Stop all OpenStack services.

2.    Copy the contents of the <service>-grizzly configuration backup directories that you created during the upgrade process back to /etc/<service>.

3.    Restore databases from the backup file grizzly-db-backup.sql that you created with mysqldump during the upgrade process:

# mysql -u root -p < grizzly-db-backup.sql

If you created this backup using the --add-drop-database flag as instructed, you can proceed to the next step. If you omitted this flag, MySQL will revert all of the tables that existed in Grizzly, but not drop any tables created during the database migration for Havana. In this case, you need to manually determine which tables should not exist and drop them to prevent issues with your next upgrade attempt.
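Finding those leftover tables can be sketched as follows; the extra_tables helper is illustrative and assumes the backup file was produced by mysqldump as shown earlier:

```shell
# Print tables present in the live database but absent from the
# mysqldump backup; these were created by the Havana migration and
# are candidates for a manual DROP TABLE.
extra_tables() {
    live=$1      # file listing live table names, one per line
    backup=$2    # mysqldump backup file
    grep -o 'CREATE TABLE `[^`]*`' "$backup" \
        | sed 's/CREATE TABLE `//;s/`$//' \
        | sort -u > "$backup.tables"
    sort "$live" | comm -23 - "$backup.tables"
}
```

Feed it a file produced by mysql -N -e 'SHOW TABLES' <database> together with the backup file, and review every name it prints before dropping anything.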

4.    Downgrade OpenStack packages.

WARNING

We consider downgrading packages by far the most complicated step; it is highly dependent on the distribution as well as overall administration of the system.

a.    Determine the OpenStack packages installed on your system. This is done using dpkg --get-selections, filtering for OpenStack packages, filtering again to omit packages explicitly marked in the deinstall state, and saving the final output to a file. For example, the following command covers a controller node with keystone, glance, nova, neutron, and cinder:

# dpkg --get-selections | grep -e keystone -e glance -e nova -e neutron \
  -e cinder | grep -v deinstall | tee openstack-selections
cinder-api                                      install
cinder-common                                   install
cinder-scheduler                                install
cinder-volume                                   install
glance                                          install
glance-api                                      install
glance-common                                   install
glance-registry                                 install
neutron-common                                  install
neutron-dhcp-agent                              install
neutron-l3-agent                                install
neutron-lbaas-agent                             install
neutron-metadata-agent                          install
neutron-plugin-openvswitch                      install
neutron-plugin-openvswitch-agent                install
neutron-server                                  install
nova-api                                        install
nova-cert                                       install
nova-common                                     install
nova-conductor                                  install
nova-consoleauth                                install
nova-novncproxy                                 install
nova-objectstore                                install
nova-scheduler                                  install
python-cinder                                   install
python-cinderclient                             install
python-glance                                   install
python-glanceclient                             install
python-keystone                                 install
python-keystoneclient                           install
python-neutron                                  install
python-neutronclient                            install
python-nova                                     install
python-novaclient                               install

NOTE

Depending on the type of server, the contents and order of your package list may vary from this example.

b.    You can determine the package versions available for reversion by using apt-cache policy. If you removed the Grizzly repositories, you must first reinstall them and run apt-get update:

# apt-cache policy nova-common
nova-common:
  Installed: 1:2013.2-0ubuntu1~cloud0
  Candidate: 1:2013.2-0ubuntu1~cloud0
  Version table:
 *** 1:2013.2-0ubuntu1~cloud0 0
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu/
            precise-updates/havana/main amd64 Packages
        100 /var/lib/dpkg/status
     1:2013.1.4-0ubuntu1~cloud0 0
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu/
            precise-updates/grizzly/main amd64 Packages
     2012.1.3+stable-20130423-e52e6912-0ubuntu1.2 0
        500 http://us.archive.ubuntu.com/ubuntu/
            precise-updates/main amd64 Packages
        500 http://security.ubuntu.com/ubuntu/
            precise-security/main amd64 Packages
     2012.1-0ubuntu2 0
        500 http://us.archive.ubuntu.com/ubuntu/
            precise/main amd64 Packages

This tells us the currently installed version of the package, newest candidate version, and all versions along with the repository that contains each version. Look for the appropriate Grizzly version—1:2013.1.4-0ubuntu1~cloud0 in this case. The process of manually picking through this list of packages is rather tedious and prone to errors. You should consider using the following script to help with this process:

# for i in `cut -f 1 openstack-selections | sed 's/neutron/quantum/;'`;
  do echo -n $i ;apt-cache policy $i | grep -B 1 grizzly |
  grep -v Packages | awk '{print "="$1}';done | tr '\n' ' ' |
  tee openstack-grizzly-versions

cinder-api=1:2013.1.4-0ubuntu1~cloud0
cinder-common=1:2013.1.4-0ubuntu1~cloud0
cinder-scheduler=1:2013.1.4-0ubuntu1~cloud0
cinder-volume=1:2013.1.4-0ubuntu1~cloud0
glance=1:2013.1.4-0ubuntu1~cloud0
glance-api=1:2013.1.4-0ubuntu1~cloud0
glance-common=1:2013.1.4-0ubuntu1~cloud0
glance-registry=1:2013.1.4-0ubuntu1~cloud0
quantum-common=1:2013.1.4-0ubuntu1~cloud0
quantum-dhcp-agent=1:2013.1.4-0ubuntu1~cloud0
quantum-l3-agent=1:2013.1.4-0ubuntu1~cloud0
quantum-lbaas-agent=1:2013.1.4-0ubuntu1~cloud0
quantum-metadata-agent=1:2013.1.4-0ubuntu1~cloud0
quantum-plugin-openvswitch=1:2013.1.4-0ubuntu1~cloud0
quantum-plugin-openvswitch-agent=1:2013.1.4-0ubuntu1~cloud0
quantum-server=1:2013.1.4-0ubuntu1~cloud0
nova-api=1:2013.1.4-0ubuntu1~cloud0
nova-cert=1:2013.1.4-0ubuntu1~cloud0
nova-common=1:2013.1.4-0ubuntu1~cloud0
nova-conductor=1:2013.1.4-0ubuntu1~cloud0
nova-consoleauth=1:2013.1.4-0ubuntu1~cloud0
nova-novncproxy=1:2013.1.4-0ubuntu1~cloud0
nova-objectstore=1:2013.1.4-0ubuntu1~cloud0
nova-scheduler=1:2013.1.4-0ubuntu1~cloud0
python-cinder=1:2013.1.4-0ubuntu1~cloud0
python-cinderclient=1:1.0.3-0ubuntu1~cloud0
python-glance=1:2013.1.4-0ubuntu1~cloud0
python-glanceclient=1:0.9.0-0ubuntu1.2~cloud0
python-quantum=1:2013.1.4-0ubuntu1~cloud0
python-quantumclient=1:2.2.0-0ubuntu1~cloud0
python-nova=1:2013.1.4-0ubuntu1~cloud0
python-novaclient=1:2.13.0-0ubuntu1~cloud0

NOTE

If you decide to continue this step manually, don’t forget to change neutron to quantum where applicable.

c.    Use apt-get install to install specific versions of each package by specifying <package-name>=<version>. The script in the previous step conveniently created a list of package=version pairs for you:

# apt-get install `cat openstack-grizzly-versions`

This completes the rollback procedure. You should remove the Havana repository and run apt-get update to prevent accidental upgrades until you solve whatever issue caused you to roll back your environment.