
[openstack-announce] [release][nova] nova 13.0.0 release (mitaka)


We are pleased to announce the release of:

nova 13.0.0: Cloud computing fabric controller

This release is part of the mitaka release series.

For more details, please see below.


The Nova 13.0.0 release includes a lot of new features and bugfixes.
It is extremely hard to mention all the changes we introduced during
this release, but we urge you to read at least the upgrade section,
which describes the required modifications you need to make when
upgrading your cloud from 12.0.0 (Liberty) to 13.0.0 (Mitaka).

That said, a few major changes are worth noting here. This is not an
exhaustive list of things to notice, rather just important things you
need to know:

  • Latest API microversion supported for Mitaka is v2.25

  • Nova now requires a second database (called 'API DB').

  • A new nova-manage script allows you to perform all online DB
    migrations once you upgrade your cloud

  • EC2 API support is fully removed.

New Features

  • Enables NUMA topology reporting on PowerPC architecture from the
    libvirt driver in Nova but with a caveat as mentioned below. NUMA
    cell affinity and dedicated cpu pinning code assumes that the host
    operating system is exposed to threads. PowerPC based hosts use core
    based scheduling for processes. Due to this, the cores on the
    PowerPC architecture are treated as threads. Since cores are always
    less than or equal to the threads on a system, this leads to non-
    optimal resource usage while pinning. This feature is supported from
    libvirt version 1.2.19 for PowerPC.

  • A new REST API to cancel an ongoing live migration has been added
    in microversion 2.24. Initially this operation will only work with
    the libvirt virt driver.
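To sketch what this looks like on the wire, cancelling an in-progress live migration is a DELETE against the server's migrations sub-resource with the 2.24 microversion header. The helper below is hypothetical and only builds the request shape:

```python
# Hypothetical helper: build the microversion-2.24 request to cancel an
# ongoing live migration. Sending it (e.g. with an HTTP client) is left out.
def cancel_live_migration_request(server_id, migration_id):
    return {
        "method": "DELETE",
        "url": "/v2.1/servers/%s/migrations/%s" % (server_id, migration_id),
        # Opt in to the new behaviour with the microversion header.
        "headers": {"X-OpenStack-Nova-API-Version": "2.24"},
    }

req = cancel_live_migration_request("abc123", "42")
print(req["method"], req["url"])  # → DELETE /v2.1/servers/abc123/migrations/42
```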

  • It is possible to call attach and detach volume API operations
    for instances which are in the shelved and shelved_offloaded
    states. For an instance in the shelved_offloaded state, Nova will
    set the device_name field to None; the right value for that field
    will be set once the instance is unshelved, as it will be managed
    by a specific compute manager.

  • It is possible to block live migrate instances with additional
    cinder volumes attached. This requires libvirt version >= 1.2.17
    and does not work when live_migration_tunnelled is set to True.

  • project_id and user_id are now also returned in the response data
    of the os-server-groups APIs. In order to use this new feature,
    users must include the microversion 2.13 header in the API request.

  • Add support for enabling UEFI boot with libvirt.

  • A new host_status attribute for servers/detail and
    servers/{server_id}. In order to use this new feature, users must
    include the microversion 2.16 header in the API request. A new
    policy "os_compute_api:servers:show:host_status" has been added to
    enable the feature. By default, this is only exposed to cloud
    administrators.
  • A new server action trigger_crash_dump has been added to the REST
    API in microversion 2.17.
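As an illustration, the new action is a POST to the server's action endpoint with a null-valued payload; the helper below is a hypothetical sketch that only builds the request shape (it does not send it):

```python
# Hypothetical helper: build the microversion-2.17 trigger_crash_dump
# request. Token handling and transport are deliberately left out.
def trigger_crash_dump_request(server_id):
    return {
        "method": "POST",
        "url": "/v2.1/servers/%s/action" % server_id,
        # The microversion header opts in to the new action.
        "headers": {"X-OpenStack-Nova-API-Version": "2.17",
                    "Content-Type": "application/json"},
        "json": {"trigger_crash_dump": None},
    }
```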

  • When RBD is used for ephemeral disks and image storage, make
    snapshot use Ceph directly, and update Glance with the new location.
    In case of failure, it will gracefully fallback to the "generic"
    snapshot method. This requires changing the typical permissions for
    the Nova Ceph user (if using authx) to allow writing to the pool
    where vm images are stored, and it also requires configuring Glance
    to provide a v2 endpoint with direct_url support enabled (there are
    security implications to doing this). See the documentation on
    configuring OpenStack with RBD for more information.

  • A new option "live_migration_inbound_addr" has been added in the
    configuration file, with None as its default value. If this option
    is present in pre_migration_data, the IP address/hostname provided
    will be used instead of the migration target compute node's
    hostname as the URI for live migration; if it is None, the
    mechanism remains as it was before.
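As an illustration, a minimal nova.conf fragment using this option; the [libvirt] group placement and the address below are assumptions of this sketch, so verify them against your release's configuration reference:

```ini
[libvirt]
# Address of this compute node on a dedicated migration network; used
# as the live-migration URI target instead of the node's hostname.
live_migration_inbound_addr = 192.0.2.10
```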

  • Added support for CPU thread policies, which can be used to
    control how the libvirt virt driver places guests with respect to
    CPU SMT "threads". These are provided as instance and image metadata
    options, 'hw:cpu_thread_policy' and 'hw_cpu_thread_policy'
    respectively, and provide an additional level of control over CPU
    pinning policy, when compared to the existing CPU policy feature.
    These changes were introduced in commits '83cd67c' and 'aaaba4a'.
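To illustrate, the thread policy is normally paired with dedicated CPU pinning. The helper below is hypothetical; the valid policy values 'prefer', 'isolate' and 'require' come from the feature's documentation:

```python
# Hypothetical helper composing flavor extra specs for SMT-aware pinning.
VALID_THREAD_POLICIES = {"prefer", "isolate", "require"}

def pinning_extra_specs(thread_policy="prefer"):
    if thread_policy not in VALID_THREAD_POLICIES:
        raise ValueError("unknown hw:cpu_thread_policy: %s" % thread_policy)
    return {
        "hw:cpu_policy": "dedicated",           # pin guest vCPUs to host pCPUs
        "hw:cpu_thread_policy": thread_policy,  # how to treat SMT siblings
    }
```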

  • Add support for enabling discard support for block devices with
    libvirt. This will be enabled for Cinder volume attachments that
    specify support for the feature in their connection properties. This
    requires support to be present in the version of libvirt (v1.0.6+)
    and qemu (v1.6.0+) used along with the configured virtual drivers
    for the instance. The virtio-blk driver does not support this
    feature.

  • A new "auto" value for the configuration option
    "upgrade_levels.compute" is accepted, that allows automatic
    determination of the compute service version to use for RPC
    communication. By default, we still use the newest version if not
    set in the config, a specific version if asked, and only do this
    automatic behavior if 'auto' is configured. When 'auto' is used,
    sending a SIGHUP to the service will cause the value to be re-
    calculated. Thus, after an upgrade is complete, sending SIGHUP to
    all services will cause them to start sending messages compliant
    with the newer RPC version.

  • Libvirt driver in Nova now supports Cinder DISCO volume driver.

  • A disk space scheduling filter is now available, which prefers
    compute nodes with the most available disk space. By default, free
    disk space is given equal importance to available RAM. To increase
    the priority of free disk space in scheduling, increase the
    disk_weight_multiplier option.
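Conceptually, the weigher behaves like the simplified scoring below; this is an illustrative reduction for explanation, not nova's actual weigher code:

```python
# Simplified sketch: rank hosts by free disk, scaled by the multiplier.
def disk_weight(free_disk_mb, max_free_mb, disk_weight_multiplier=1.0):
    # Normalize against the best host so the weight stays comparable
    # with other weighers (RAM, etc.), then scale by the multiplier.
    return disk_weight_multiplier * (free_disk_mb / float(max_free_mb))

hosts = {"a": 500000, "b": 200000}
best = max(hosts, key=lambda h: disk_weight(hosts[h], max(hosts.values())))
print(best)  # → a
```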

  • A new REST API to force live migration to complete has been added
    in microversion 2.22.
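The force-complete operation is a POST to the migration's action endpoint with a null-valued action key; this hypothetical helper only builds the request shape:

```python
# Hypothetical sketch of the microversion-2.22 force-complete action
# for an in-progress live migration.
def force_complete_request(server_id, migration_id):
    return {
        "method": "POST",
        "url": "/v2.1/servers/%s/migrations/%s/action"
               % (server_id, migration_id),
        "headers": {"X-OpenStack-Nova-API-Version": "2.22",
                    "Content-Type": "application/json"},
        "json": {"force_complete": None},
    }
```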

  • The os-instance-actions methods now read actions from deleted
    instances. This means that 'GET /v2.1/{tenant-id}/servers/{server-id
    }/os-instance-actions' and 'GET /v2.1/{tenant-id}/servers/{server-id
    }/os-instance-actions/{req-id}' will return instance-action items
    even if the instance corresponding to '{server-id}' has been

  • When booting an instance, its sanitized 'hostname' attribute is
    now used to populate the 'dns_name' attribute of the Neutron ports
    the instance is attached to. This functionality enables the Neutron
    internal DNS service to know the ports by the instance's hostname.
    As a consequence, commands like 'hostname -f' will work as expected
    when executed in the instance. When a port's network has a
    non-blank 'dns_domain' attribute, the port's 'dns_name' combined
    with the network's 'dns_domain' will be published by Neutron in an
    external DNS as a service like Designate. As a consequence, the
    instance's hostname is published in the external DNS as a service.
    This functionality is added to Nova when the 'DNS Integration'
    extension is enabled in Neutron. The publication of 'dns_name' and
    'dns_domain' combinations to an external DNS as a service
    additionally requires the configuration of the appropriate driver
    in Neutron. When the 'Port Binding' extension is also enabled in
    Neutron, the publication of a 'dns_name' and 'dns_domain'
    combination to the external DNS as a service will require one
    additional update operation when Nova allocates the port during the
    instance boot. This may have a noticeable impact on the performance
    of the boot process.

  • The libvirt driver now has a live_migration_tunnelled
    configuration option which should be used where the
    VIR_MIGRATE_TUNNELLED flag would previously have been set or unset
    in the live_migration_flag and block_migration_flag configuration
    options.
  • For the libvirt driver, by default hardware properties will be
    retrieved from the Glance image and if such haven't been provided,
    it will use a libosinfo database to get those values. If users want
    to force a specific guest OS ID for the image, they can now use a
    new glance image property "os_distro" (e.g. "--property
    os_distro=fedora21"). In order to use the libosinfo database, you
    need to separately install the related native package provided for
    your operating system distribution.

  • Add support for allowing Neutron to specify the bridge name for
    the OVS, Linux Bridge, and vhost-user VIF types.

  • Added a nova-manage db online_data_migrations command for
    forcing online data migrations, which will run all registered
    migrations for the release, instead of there being a separate
    command for each logical data migration. Operators need to make sure
    all data is migrated before upgrading to the next release, and the
    new command provides a unified interface for doing it.

  • Provides API 2.18, which makes the use of project_ids in API URLs
    optional.

  • Libvirt with the Virtuozzo virtualisation type now supports
    snapshot operations.

  • Remove "onSharedStorage" parameter from server's evacuate action
    in microversion 2.14. Nova will automatically detect if the instance
    is on shared storage. Also adminPass is removed from the response
    body which makes the response body empty. The user can get the
    password with the server's os-server-password action.

  • Added two new list/show APIs for server migrations. The list API
    will return the in-progress live migration information of a server.
    The show API will return a specified in-progress live migration of
    a server. This has been added in microversion 2.23.

  • A new service.status versioned notification has been introduced.
    When the status of the Service object is changed, nova will send a
    new service.update notification with a versioned payload, according
    to bp versioned-notification-api.
  • Two new policies soft-affinity and soft-anti-affinity have been
    implemented for the server-group feature of Nova. This means that
    POST /v2.1/{tenant_id}/os-server-groups API resource now accepts
    'soft-affinity' and 'soft-anti-affinity' as value of the 'policies'
    key of the request body.
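The request body for creating such a group can be sketched as below; the helper is hypothetical, and the 2.15 microversion header mentioned in the comment is the one that enables the soft policies:

```python
# Hypothetical sketch of the os-server-groups create body. The soft
# policies require the X-OpenStack-Nova-API-Version: 2.15 header.
ALLOWED_POLICIES = {"affinity", "anti-affinity",
                    "soft-affinity", "soft-anti-affinity"}

def server_group_body(name, policy):
    if policy not in ALLOWED_POLICIES:
        raise ValueError("unknown policy: %s" % policy)
    return {"server_group": {"name": name, "policies": [policy]}}
```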

  • In Nova Compute API microversion 2.19, you can specify a
    "description" attribute when creating, rebuilding, or updating a
    server instance. This description can be retrieved by getting
    server details, or list details for servers. Refer to the Nova
    Compute API documentation for more information. Note that the
    description attribute existed in prior Nova versions, but was set to
    the server name by Nova, and was not visible to the user. So,
    servers you created with microversions prior to 2.19 will return a
    description equal to the server name in server details with
    microversion 2.19.
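Setting the description is a normal server update carrying the new field; the helper below is a hypothetical sketch of the request shape only:

```python
# Hypothetical sketch of the microversion-2.19 server update that sets
# a description; body shape follows the compute API "server" wrapper.
def set_description_request(server_id, description):
    return {
        "method": "PUT",
        "url": "/v2.1/servers/%s" % server_id,
        "headers": {"X-OpenStack-Nova-API-Version": "2.19",
                    "Content-Type": "application/json"},
        "json": {"server": {"description": description}},
    }
```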

  • As part of refactoring the notification interface of Nova a new
    config option 'notification_format' has been added to specify
    which notification format shall be used by nova. The possible values
    are 'unversioned' (e.g. legacy), 'versioned', and 'both'. The
    default value is 'both'.

  • For the VMware driver, the flavor extra specs for quotas has been
    extended to support:

    • quota:cpu_limit - The cpu of a virtual machine will not exceed
      this limit, even if there are available resources. This is
      typically used to ensure a consistent performance of virtual
      machines independent of available resources. Units are MHz.

    • quota:cpu_reservation - guaranteed minimum reservation (MHz)

    • quota:cpu_shares_level - the allocation level. This can be
      'custom', 'high', 'normal' or 'low'.

    • quota:cpu_shares_share - in the event that 'custom' is used,
      this is the number of shares.

    • quota:memory_limit - The memory utilization of a virtual machine
      will not exceed this limit, even if there are available resources.
      This is typically used to ensure a consistent performance of
      virtual machines independent of available resources. Units are MB.

    • quota:memory_reservation - guaranteed minimum reservation (MB)

    • quota:memory_shares_level - the allocation level. This can be
      'custom', 'high', 'normal' or 'low'.

    • quota:memory_shares_share - in the event that 'custom' is used,
      this is the number of shares.

    • quota:disk_io_limit - The I/O utilization of a virtual machine
      will not exceed this limit. The unit is number of I/O per second.

    • quota:disk_io_reservation - Reservation control is used to
      provide guaranteed allocation in terms of IOPS.

    • quota:disk_io_shares_level - the allocation level. This can be
      'custom', 'high', 'normal' or 'low'.

    • quota:disk_io_shares_share - in the event that 'custom' is used,
      this is the number of shares.

    • quota:vif_limit - The bandwidth limit for the virtual network
      adapter. The utilization of the virtual network adapter will not
      exceed this limit, even if there are available resources. Units
      in Mbits/sec.

    • quota:vif_reservation - Amount of network bandwidth that is
      guaranteed to the virtual network adapter. If utilization is less
      than reservation, the resource can be used by other virtual
      network adapters. Reservation is not allowed to exceed the value
      of limit if limit is set. Units in Mbits/sec.

    • quota:vif_shares_level - the allocation level. This can be
      'custom', 'high', 'normal' or 'low'.

    • quota:vif_shares_share - in the event that 'custom' is used,
      this is the number of shares.
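As an illustration of combining these keys, the helper below composes a flavor extra-specs dictionary; the numeric values are examples for the sketch, not recommendations:

```python
# Illustrative VMware flavor extra specs combining the quota keys above.
def vmware_quota_specs():
    specs = {
        "quota:cpu_limit": "2400",         # MHz cap
        "quota:cpu_reservation": "1200",   # MHz guaranteed minimum
        "quota:cpu_shares_level": "custom",
        "quota:cpu_shares_share": "2000",  # only read when level=custom
        "quota:memory_limit": "4096",      # MB cap
        "quota:disk_io_limit": "500",      # IOPS cap
    }
    # The shares level must be one of the documented values.
    assert specs["quota:cpu_shares_level"] in {"custom", "high",
                                               "normal", "low"}
    return specs
```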

Upgrade Notes

  • All noVNC proxy configuration options have been added to the 'vnc'
    group. They should no longer be included in the 'DEFAULT' group.

  • All VNC XVP configuration options have been added to the 'vnc'
    group. They should no longer be included in the 'DEFAULT' group.

  • Upon first startup of the scheduler service in Mitaka, all defined
    aggregates will have UUIDs generated and saved back to the database.
    If you have a significant number of aggregates, this may delay
    scheduler start as that work is completed, but it should be minor
    for most deployments.

  • During an upgrade to Mitaka, operators must create and initialize
    a database for the API service. Configure this in
    [api_database]/connection, and then run "nova-manage api_db sync".
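Putting the two steps together, a sketch of the new nova.conf section; the host and database name in the connection string are illustrative placeholders:

```ini
[api_database]
# New, separate database for the API service; schema is managed with
# "nova-manage api_db sync".
connection = mysql+pymysql://nova:secret@controller/nova_api?charset=utf8
```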

  • Microversion 2.25 cannot be used for live migration during an
    upgrade; nova-api will raise a bad request error if there are
    still old compute nodes in the cluster.

  • The option "scheduler_driver" is now changed to use an entrypoint
    instead of a full class path. Set one of the entrypoints under the
    namespace 'nova.scheduler.driver' in 'setup.cfg'. Its default
    value is 'filter_scheduler'. The full class path style is still
    supported in the current release, but it is not recommended
    because class paths can be changed, and this support will be
    dropped in the next major release.

  • The option "scheduler_host_manager" is now changed to use an
    entrypoint instead of a full class path. Set one of the entrypoints
    under the namespace 'nova.scheduler.host_manager' in 'setup.cfg'.
    Its default value is 'host_manager'. The full class path style is
    still supported in the current release, but it is not recommended
    because class paths can be changed, and this support will be
    dropped in the next major release.

  • The local conductor mode is now deprecated and may be removed as
    early as the 14.0.0 release. If you are using local conductor mode,
    plan on deploying remote conductor by the time you upgrade to the
    14.0.0 release.

  • The Extensible Resource Tracker is deprecated and will be removed
    in the 14.0.0 release. If you use this functionality and have custom
    resources that are managed by the Extensible Resource Tracker,
    please contact the Nova development team by posting to the
    openstack-dev mailing list. There is no future planned support for
    the tracking of custom resources.

  • For Liberty compute nodes, the disk_allocation_ratio works as
    before: you must set it on the scheduler if you want to change it.
    For Mitaka compute nodes, the disk_allocation_ratio set on the
    compute nodes will be used only if the configuration is not set on
    the scheduler. This is to allow, for backwards compatibility, the
    ability to still override the disk allocation ratio by setting the
    configuration on the scheduler node. In Newton, we plan to remove
    the ability to set the disk allocation ratio on the scheduler, at
    which point the compute nodes will always define the disk allocation
    ratio, and pass that up to the scheduler. None of this changes the
    default disk allocation ratio of 1.0. This matches the behaviour of
    the RAM and CPU allocation ratios.

  • (Only if you do continuous deployment)
    1337890ace918fa2555046c01c8624be014ce2d8 drops support for an
    instance major version, which means that you must have deployed at
    least commit 713d8cb0777afb9fe4f665b9a40cac894b04aacb before
    deploying this one.

  • nova now requires ebtables 2.0.10 or later

  • nova recommends libvirt 1.2.11 or later

  • The filters' internal interface has changed, now using the
    RequestSpec NovaObject instead of the old filter_properties
    dictionary. In case you run out-of-tree filters, you need to
    modify the host_passes() method to accept a new RequestSpec object
    and modify the filter internals to use that new object. You can
    look at other in-tree filters for the logic, or ask for help in
    the #openstack-nova IRC channel.
  • The "force_config_drive" configuration option provided an "always"
    value which was deprecated in the previous release. That "always"
    value is now no longer accepted and deployments using that value
    have to change it to "True" before upgrading.

  • Support for Windows / Hyper-V Server 2008 R2 has been deprecated
    in Liberty (12.0.0) and it is no longer supported in Mitaka
    (13.0.0). If you have compute nodes running that version, please
    consider moving the running instances to other compute nodes before
    upgrading those to Mitaka.

  • The libvirt driver will now correct unsafe and invalid values for
    the live_migration_flag and block_migration_flag configuration
    options. The live_migration_flag must not contain
    VIR_MIGRATE_SHARED_INC but block_migration_flag must contain it.
    Both options must contain VIR_MIGRATE_PEER2PEER, except when using
    the 'xen' virt type, where this flag is not supported. Both flags
    must contain the VIR_MIGRATE_UNDEFINE_SOURCE flag and must not
    contain the VIR_MIGRATE_PERSIST_DEST flag.
  • The libvirt driver has changed the default value of the
    'live_migration_uri' flag, which is now dependent on the
    'virt_type'. The old default 'qemu+tcp://%s/system' is now
    adjusted for each of the configured hypervisors: for Xen this will
    be 'xenmigr://%s/system', and for kvm/qemu this will be
    'qemu+tcp://%s/system'.
  • The minimum required libvirt is now version 0.10.2. The minimum
    libvirt for the N release has been set to 1.2.1.

  • In order to make project_id optional in URLs, we must constrain
    the set of allowed values for project_id in our URLs. This
    defaults to a regex of "[0-9a-f-]+", which will match hex uuids
    (with / without dashes) and integers. This covers all known
    project_id formats in the wild. If your site uses other values for
    project_id, you can set a site-specific validation with the
    "project_id_regex" config variable.

  • The old neutron communication options that were slated for removal
    in Mitaka are no longer available. This means that going forward
    communication to neutron will need to be configured using auth
    plugins.

  • All code and tests for Nova's EC2 and ObjectStore API support,
    which was deprecated in Kilo, have been completely removed in
    Mitaka. This has been replaced by the new ec2-api project.

  • The commit with change-id
    Idd4bbbe8eea68b9e538fa1567efd304e9115a02a requires that the nova_api
    database is setup and Nova is configured to use it. Instructions on
    doing that are provided below.

    Nova now requires that two databases are available and configured.
    The existing nova database needs no changes, but a new nova_api
    database needs to be set up. It is configured and managed very
    similarly to the nova database. A new connection string
    configuration option is available in the [api_database] group. An
    example:

    connection = mysql+pymysql://user:secret@

    And new nova-manage commands have been added to manage db
    migrations for this database. "nova-manage api_db sync" and
    "nova-manage api_db version" are available and function like the
    parallel "nova-manage db ..." versions.

  • A new "use_neutron" option is introduced which replaces the obtuse
    "network_api_class" option. This defaults to 'False' to match
    existing defaults; however, if "network_api_class" is set to the
    known Neutron value, Neutron networking will still be used as
    before.

  • The FilterScheduler is now including disabled hosts. Make sure you
    include the ComputeFilter in the "scheduler_default_filters" config
    option to avoid placing instances on disabled hosts.
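For example, a nova.conf fragment that keeps ComputeFilter enabled; the filter list shown is an illustrative Mitaka-era default order, so check your deployment's configuration reference before copying it:

```ini
[DEFAULT]
# ComputeFilter must stay in the list so disabled hosts are excluded.
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
```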

  • Upgrade the rootwrap configuration for the compute service, so
    that patches requiring new rootwrap configuration can be tested with

  • For backward-compatible support, the setting
    "CONF.vmware.integration_bridge" needs to be set when using the
    Neutron NSX|MH plugin. The default value has been set to "None".

  • XenServer hypervisor type has been changed from "xen" to
    "XenServer". It could impact your aggregate metadata or your flavor
    extra specs if you provide only the former.

  • The glance xenserver plugin has been bumped to version 1.3 which
    includes new interfaces for referencing glance servers by url. All
    dom0 will need to be upgraded with this plugin before upgrading the
    nova code.

Deprecation Notes

  • It is now deprecated to use [glance] api_servers without a
    protocol scheme (http / https). This is required to support URLs
    throughout the system. Update any api_servers list with fully
    qualified https / http URLs.

  • The conductor.manager configuration option is now deprecated and
    will be removed.

  • Deprecate the "compute_stats_class" config option. This allowed
    loading an alternate implementation for collecting statistics for
    the local compute host. Deployments that felt the need to use this
    facility are encouraged to propose additions upstream so we can
    create a stable and supported interface here.

  • Deprecate the "db_driver" config option. Previously this let you
    replace our SQLAlchemy database layer with your own. This approach
    is deprecated. Deployments that felt the need to use the facility
    are encouraged to work with upstream Nova to address db driver
    concerns in the main SQLAlchemy code paths.

  • The host, port, and protocol options in the [glance] configuration
    section are deprecated, and will be removed in the N release. The
    api_servers value should be used instead.

  • Deprecate the use of nova.hooks. This facility used to let
    arbitrary out of tree code be executed around certain internal
    actions, but is unsuitable for having a well maintained API. Anyone
    using this facility should bring forward their use cases in the
    Newton cycle as nova-specs.

  • Nova used to support the concept that "service managers" were
    replaceable components. There are many config options where you can
    replace a manager by specifying a new class. This concept is
    deprecated in Mitaka as are the following config options.

    • [cells] manager

    • metadata_manager

    • compute_manager

    • console_manager

    • consoleauth_manager

    • cert_manager

    • scheduler_manager

    Many of these will be removed in Newton. Users of these options are
    encouraged to work with Nova upstream on any features missing in the
    default implementations that are needed.

  • Deprecate the "security_group_api" configuration option. The
    current values are "nova" and "neutron". In the future, the
    correct security_group_api option will be chosen based on the
    value of "use_neutron", which provides a more coherent user
    experience.

  • Deprecate the "vendordata_driver" config option. This allowed
    creating a different class loader for defining vendordata metadata.
    The default driver loads from a json file that can be arbitrarily
    specified, so is still quite flexible. Deployments that felt the
    need to use this facility are encouraged to propose additions
    upstream so we can create a stable and supported interface here.

  • The configuration option "api_version" in the "ironic" group was
    marked as deprecated and will be removed in the future. The only
    possible value for that configuration was "1" (because Ironic only
    has 1 API version) and the Ironic team came to an agreement that
    setting the API version via configuration option should not be
    supported anymore. As the Ironic driver in Nova requests the Ironic
    v1.8 API, that means that Nova 13.0.0 ("Mitaka") requires Ironic
    4.0.0 ("Liberty") or newer if you want to use the Ironic driver.

  • The libvirt live_migration_flag and block_migration_flag config
    options are deprecated. These options gave too fine-grained
    control over the flags used and, in some cases, misconfigurations
    could have dangerous side effects. Please note the availability of
    a new live_migration_tunnelled configuration option.

  • The "network_device_mtu" option in Nova is deprecated for removal
    since network MTU should be specified when creating the network with
    nova-network. With Neutron networks, the MTU value comes from the
    "segment_mtu" configuration option in Neutron.

  • The old top-level resource /os-migrations is deprecated and won't
    be extended anymore. A migration_type field has been added to
    /os-migrations, as well as a ref link to
    /servers/{uuid}/migrations/{id} when the migration is an
    in-progress live migration. This has been added in microversion
    2.23.

  • Deprecate "volumeapiclass" and "networkapiclass" config
    options. We only have one sensible backend for either of these.
    These options will be removed and turned into constants in Newton.

  • Option "memcached_servers" is deprecated in Mitaka. Operators
    should use the oslo.cache configuration instead. Specifically, the
    "enabled" option under the [cache] section should be set to True,
    and the url(s) for the memcached servers should be in
    [cache]/memcache_servers.
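A sketch of the replacement configuration; the backend line and the server addresses are assumptions of this example, so check the oslo.cache documentation for your release:

```ini
[cache]
# Replaces the deprecated DEFAULT/memcached_servers option.
enabled = True
backend = oslo_cache.memcache_pool
memcache_servers = 192.0.2.20:11211,192.0.2.21:11211
```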

  • The Zookeeper Service Group driver has been removed.

    The driver has no known users and is not actively maintained. A
    warning log message about the driver's state was added for the Kilo
    release. Also, evzookeeper library that the driver depends on is
    unmaintained and incompatible with recent eventlet releases.

    A future release of Nova will use the Tooz library to track service
    liveliness, and Tooz supports Zookeeper.


Security Issues

Bug Fixes

  • In a race condition, if the base image is deleted by
    ImageCacheManager while imagebackend is copying the image to the
    instance path, the instance goes into an error state. In this
    case, when libvirt has changed the base file ownership to
    libvirt-qemu while imagebackend is copying the image, we get a
    permission denied error on updating the file access time using
    os.utime. This issue is fixed by updating the base file access
    time with root user privileges using the 'touch' command.

  • When plugging virtual interfaces of type vhost-user the MTU value
    will not be applied to the interface by nova. vhost-user ports exist
    only in userspace and are not backed by kernel netdevs, for this
    reason it is not possible to set the mtu on a vhost-user interface
    using standard tools such as ifconfig or ip link.

Other Notes

  • Conductor RPC API no longer supports v2.x.

  • The service subcommand of nova-manage is deprecated. Use the nova
    service-* commands from python-novaclient instead or the os-services
    REST resource. The service subcommand will be removed in the 14.0
    release.

  • The Neutron network MTU value is now used when plugging virtual
    interfaces in nova-compute. If the value is 0, which is the
    default value for the "segment_mtu" configuration option in
    Neutron before Mitaka, then the (deprecated) "network_device_mtu"
    configuration option in Nova is used, which defaults to not
    setting an MTU value.

  • The sample policy file shipped with Nova contained many policies
    set to ""(allow all) which was not the proper default for many of
    those checks. It was also a source of confusion as some people
    thought "" meant to use the default rule. These empty policies have
    been updated to be explicit in all cases. Many of them were changed
    to match the default rule of "admin_or_owner" which is a more
    restrictive policy check but does not change the restrictiveness of
    the API calls overall because there are similar checks in the
    database already. This does not affect any existing deployment, just
    the sample file included for use by new deployments.

  • Nova's EC2 API support, which was deprecated in Kilo, is removed
    from Mitaka. This has been replaced by the new ec2-api project.

Changes in nova

7105f88 Imported Translations from Zanata
5de98cb Imported Translations from Zanata
a9d5542 Fix detach SR-IOV when using LibvirtConfigGuestHostdevPCI
5b6ee70 Imported Translations from Zanata
29042e0 Imported Translations from Zanata
3e9819d Update cells blacklist regex for test_server_basic_ops
c71c4e0 Stop providing force_hosts to the scheduler for move ops

Diffstat (except docs and test files)

devstack/tempest-dsvm-cells-rc | 2 +-
nova/conductor/ | 8 +
nova/conductor/tasks/ | 4 +
MESSAGES/nova.po | 22 +-
nova/locale/fr/LC_MESSAGES/nova.po | 34 +-
MESSAGES/nova.po | 1315 ++++++++++++--
nova/locale/ko_KR/LC_MESSAGES/nova-log-warning.po | 1914 ++++++++++++++++++++
nova/objects/ | 12 +
.../unit/conductor/tasks/test | 3 +
nova/virt/libvirt/ | 13 +-
13 files changed, 3194 insertions(+), 194 deletions(-)

OpenStack-announce mailing list
asked Apr 7, 2016 in openstack-announce by no-reply_at_openstac