
MySQL Cluster 7.4.9 has been released

Dear MySQL Users,

MySQL Cluster is the distributed, shared-nothing variant of MySQL.
This storage engine provides:

  - In-memory persistent storage - Real-time performance
  - Transparent Auto-Sharding - Read & write scalability
  - Active-Active/Multi-Master geographic replication
  - 99.999% High Availability with no single point of failure
    and on-line maintenance
  - NoSQL and SQL APIs (including C++, Java, http, Memcached
    and JavaScript/Node.js)

MySQL Cluster 7.4 makes significant advances in performance,
operational efficiency (such as enhanced reporting and faster restarts
and upgrades), and conflict detection and resolution for active-active
replication between MySQL Clusters.

MySQL Cluster 7.4.9 has been released and can be downloaded from

  http://www.mysql.com/downloads/cluster/

where you will also find Quick Start guides to help you get your
first MySQL Cluster database up and running.

The release notes are available from

  http://dev.mysql.com/doc/relnotes/mysql-cluster/7.4/en/index.html

MySQL Cluster enables users to meet the database challenges of next
generation web, cloud, and communications services with uncompromising
scalability, uptime and agility.

More details can be found at

  http://www.mysql.com/products/cluster/

Enjoy!


Changes in MySQL Cluster NDB 7.4.9 (5.6.28-ndb-7.4.9) (2016-01-18)

   MySQL Cluster NDB 7.4.9 is a new release of MySQL Cluster
   7.4, based on MySQL Server 5.6 and including features in
   version 7.4 of the NDB storage engine, as well as fixing
   recently discovered bugs in previous MySQL Cluster releases.

   Obtaining MySQL Cluster NDB 7.4.  MySQL Cluster NDB 7.4
   source code and binaries can be obtained from
   http://dev.mysql.com/downloads/cluster/.

   For an overview of changes made in MySQL Cluster NDB 7.4, see
   MySQL Cluster Development in MySQL Cluster NDB 7.4
   (http://dev.mysql.com/doc/refman/5.6/en/mysql-cluster-development-5-6-ndb-7-4.html).

   This release also incorporates all bugfixes and changes made
   in previous MySQL Cluster releases, as well as all bugfixes
   and feature changes which were added in mainline MySQL 5.6
   through MySQL 5.6.28 (see Changes in MySQL 5.6.28
   (2015-12-07)
   (http://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-28.html)).

   Functionality Added or Changed

     * Important Change: Previously, the NDB scheduler always
        balanced speed against throughput in a predetermined,
        hard-coded manner; this balance can now be set using the
        SchedulerResponsiveness data node configuration
        parameter. This parameter accepts an integer in the
        range 0-10 inclusive, with 5 as the default. Higher
        values provide better response times relative to
        throughput; lower values provide increased throughput,
        but impose longer response times. (Bug #78531, Bug
        #21889312)
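
        For example, to favor response time over throughput on
        all data nodes, the parameter could be set in the
        cluster global configuration file, as in this minimal
        config.ini sketch (the section shown and any other
        values are illustrative only):

          [ndbd default]
          # 0-10; higher favors response time, lower favors
          # throughput; the default is 5
          SchedulerResponsiveness=9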

     * Added the tc_time_track_stats table to the ndbinfo
       information database. This table provides time-tracking
       information relating to transactions, key operations, and
       scan operations performed by NDB. (Bug #78533, Bug
       #21889652)
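
        For example, the statistics for a single data node can
        be inspected from any SQL node with a query like this
        minimal sketch (the node_id column is as listed for this
        table in the ndbinfo documentation):

          mysql> SELECT * FROM ndbinfo.tc_time_track_stats
              ->     WHERE node_id = 2;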

     * Cluster Replication: Normally, RESET SLAVE causes all
       entries to be deleted from the mysql.ndb_apply_status
       table. This release adds the ndb_clear_apply_status
       system variable, which makes it possible to override this
       behavior. This variable is ON by default; setting it to
       OFF keeps RESET SLAVE from purging the ndb_apply_status
       table. (Bug #12630403)
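
        For example, to keep the contents of
        mysql.ndb_apply_status across RESET SLAVE (a minimal
        sketch):

          mysql> SET GLOBAL ndb_clear_apply_status = OFF;
          mysql> RESET SLAVE;
          -- rows in mysql.ndb_apply_status are retained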

   Bugs Fixed

     * Important Change: Users can now set the number of times
        most NDB programs retry a failed connection, and the
        delay between retries, using the --connect-retries and
        --connect-retry-delay command-line options introduced
        for these programs in this release. For ndb_mgm,
        --connect-retries supersedes the existing
        --try-reconnect option. (Bug #57576, Bug #11764714)
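
        For example (a minimal sketch; the option values shown
        are illustrative only):

          shell> ndb_mgm --connect-retries=5 --connect-retry-delay=3 -e "SHOW"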

     * When executing a schema operation such as CREATE TABLE on
       a MySQL Cluster with multiple SQL nodes, it was possible
       for the SQL node on which the operation was performed to
       time out while waiting for an acknowledgement from the
       others. This could occur when different SQL nodes had
       different settings for --ndb-log-updated-only,
       --ndb-log-update-as-write, or other mysqld options
        affecting binary logging by NDB.
        This happened because, in order to distribute schema
        changes between them, all SQL nodes subscribe to changes
        in the ndb_schema system table, and all SQL nodes are
        made aware of each other's subscriptions by
        subscribing to TE_SUBSCRIBE and
       TE_UNSUBSCRIBE events. The names of events to subscribe
       to are constructed from the table names, adding REPL$ or
       REPLF$ as a prefix. REPLF$ is used when full binary
       logging is specified for the table. The issue described
       previously arose because different values for the options
       mentioned could lead to different events being subscribed
        to by different SQL nodes, meaning that not all SQL
        nodes were necessarily aware of each other, so that the
       code that handled waiting for schema distribution to
       complete did not work as designed.
       To fix this issue, MySQL Cluster now treats the
       ndb_schema table as a special case and enforces full
       binary logging at all times for this table, independent
       of any settings for mysqld binary logging options. (Bug
       #22174287, Bug #79188)

     * Attempting to create an NDB table having greater than the
       maximum supported combined width for all BIT columns
       (4096) caused data node failure when these columns were
       defined with COLUMN_FORMAT DYNAMIC. (Bug #21889267)

     * Creating a table with the maximum supported number of
       columns (512) all using COLUMN_FORMAT DYNAMIC led to data
       node failures. (Bug #21863798)

     * In certain cases, a cluster failure (error 4009) was
       reported as Unknown error code. (Bug #21837074)

     * For a timeout in GET_TABINFOREQ while executing a CREATE
       INDEX statement, mysqld returned Error 4243 (Index not
       found) instead of the expected Error 4008 (Receive from
       NDB failed).
       The fix for this bug also fixes similar timeout issues
        for a number of other signals that are sent to the
        DBDICT kernel block as part of DDL operations, including
       ALTER_TAB_REQ, CREATE_INDX_REQ, DROP_FK_REQ,
       DROP_INDX_REQ, INDEX_STAT_REQ, DROP_FILE_REQ,
       CREATE_FILEGROUP_REQ, DROP_FILEGROUP_REQ, CREATE_EVENT,
       WAIT_GCP_REQ, DROP_TAB_REQ, and LIST_TABLES_REQ, as well
       as several internal functions used in handling NDB schema
       operations. (Bug #21277472)
       References: See also Bug #20617891, Bug #20368354, Bug
       #19821115.

     * When using ndb_mgm STOP -f to force a node shutdown, even
        when this triggered a complete shutdown of the cluster,
        it was possible to lose data if a sufficient number of
        nodes were shut down, triggering a cluster shutdown, and
        the timing was such that SUMA handovers had been made to
        nodes already in the process of shutting down. (Bug
        #17772138)

     * The internal NdbEventBuffer::set_total_buckets() method
       calculated the number of remaining buckets incorrectly.
       This caused any incomplete epoch to be prematurely
       completed when the SUB_START_CONF signal arrived out of
       order. Any events belonging to this epoch arriving later
       were then ignored, and so effectively lost, which
       resulted in schema changes not being distributed
       correctly among SQL nodes. (Bug #79635, Bug #22363510)

     * Compilation of MySQL Cluster failed on SUSE Linux
       Enterprise Server 12. (Bug #79429, Bug #22292329)

     * Schema events were appended to the binary log out of
       order relative to non-schema events. This was caused by
       the fact that the binlog injector did not properly handle
       the case where schema events and non-schema events were
       from different epochs.
       This fix modifies the handling of events from the two
       schema and non-schema event streams such that events are
       now always handled one epoch at a time, starting with
       events from the oldest available epoch, without regard to
       the event stream in which they occur. (Bug #79077, Bug
       #22135584, Bug #20456664)

     * When executed on an NDB table, ALTER TABLE ... DROP INDEX
       made changes to an internal array referencing the indexes
       before the index was actually dropped, and did not revert
       these changes in the event that the drop was not
       completed. One effect of this was that, after attempting
       to drop an index on which there was a foreign key
       dependency, the expected error referred to the wrong
       index, and subsequent attempts using SQL to modify
       indexes of this table failed. (Bug #78980, Bug #22104597)
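
        A hypothetical illustration of the failing case (the
        table and index names are invented for this example):

          mysql> CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=NDB;
          mysql> CREATE TABLE child (
              ->   id INT PRIMARY KEY,
              ->   pid INT,
              ->   INDEX pid_idx (pid),
              ->   FOREIGN KEY (pid) REFERENCES parent (id)
              -> ) ENGINE=NDB;
          mysql> ALTER TABLE child DROP INDEX pid_idx;
          -- The drop is rejected because of the foreign key
          -- dependency; before this fix, the resulting error
          -- named the wrong index, and subsequent SQL attempts
          -- to modify this table's indexes failed.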

     * NDB failed during a node restart when the status of the
        current local checkpoint was set but was not active,
        even though the status can have other states under such
        conditions. (Bug #78780, Bug #21973758)

     * ndbmtd checked whether any signals needed to be sent only
        after a full cycle of run_job_buffers, which processes
        all job buffer inputs. This check is now made as part of
        run_job_buffers itself, which avoids executing for
        extended periods without sending signals to other nodes
        or flushing them to other threads. (Bug #78530, Bug
        #21889088)

     * In MySQL Cluster 7.4, scan execution was optimized by
        handling multiple rows at a time. This had two effects:
        (1) it gave scans higher priority than key lookup
        operations, and (2) it changed the behavior of the
        scheduler. The second effect has been reverted,
        restoring the former scheduling behavior; the first is
        retained, since it provides a significant performance
        benefit, but a means of changing this priority in
        exceptional circumstances has been provided. (Bug
        #78526, Bug #21886644)

     * Disk Data: A unique index on a column of an NDB table is
       implemented with an associated internal ordered index,
       used for scanning. While dropping an index, this ordered
       index was dropped first, followed by the drop of the
       unique index itself. This meant that, when the drop was
       rejected due to (for example) a constraint violation, the
       statement was rejected but the associated ordered index
       remained deleted, so that any subsequent operation using
        a scan on this table failed. This problem is fixed by
        removing the unique index first, before removing the
        ordered index; removal of the related ordered index is
        no longer performed when removal of the unique index
        fails. (Bug #78306, Bug #21777589)

     * Cluster Replication: While the binary log injector thread
       was handling failure events, it was possible for all NDB
       tables to be left indefinitely in read-only mode. This
       was due to a race condition between the binlog injector
       thread and the utility thread handling events on the
       ndb_schema table, and to the fact that, when handling
       failure events, the binlog injector thread places all NDB
       tables in read-only mode until all such events are
       handled and the thread restarts itself.
        When the binlog injector thread receives a group of one or
       more failure events, it drops all other existing event
       operations and expects no more events from the utility
       thread until it has handled all of the failure events and
       then restarted itself. However, it was possible for the
       utility thread to continue attempting binary log setup
       while the injector thread was handling failures and thus
       attempting to create the schema distribution tables as
       well as event subscriptions on these tables. If the
       creation of these tables and event subscriptions occurred
       during this time, the binlog injector thread's
       expectation that there were no further event operations
       was never met; thus, the injector thread never restarted,
        and NDB tables remained in read-only mode as described
        previously.
       To fix this problem, the Ndb object that handles schema
       events is now definitely dropped once the ndb_schema
       table drop event is handled, so that the utility thread
       cannot create any new events until after the injector
       thread has restarted, at which time, a new Ndb object for
       handling schema events is created. (Bug #17674771, Bug
       #19537961, Bug #22204186, Bug #22361695)

     * Cluster API: The binlog injector did not work correctly
       with TE_INCONSISTENT event type handling by
       Ndb::nextEvent(). (Bug #22135541)
       References: See also Bug #20646496.

     * Cluster API: Ndb::pollEvents() and pollEvents2() were
       slow to receive events, being dependent on other client
       threads or blocks to perform polling of transporters on
       their behalf. This fix allows a client thread to perform
       its own transporter polling when it has to wait in either
       of these methods.
       Introduction of transporter polling also revealed a
       problem with missing mutex protection in the
       ndbcluster_binlog handler, which has been added as part
       of this fix. (Bug #20957068, Bug #22224571, Bug #79311)

     * Cluster API: Garbage collection is performed on several
       objects in the implementation of NdbEventOperation, based
       on which GCIs have been consumed by clients, including
       those that have been dropped by
       Ndb::dropEventOperation(). In this implementation, the
       assumption was made that the global checkpoint index
       (GCI) is always monotonically increasing, although this
       is not the case during an initial restart, when the GCI
       is reset. This could lead to event objects in the NDB API
       being released prematurely or not at all, in the latter
       case causing a resource leak.
       To prevent this from happening, the NDB event object's
       implementation now tracks, internally, both the GCI and
       the generation of the GCI; the generation is incremented
       whenever the node process is restarted, and this value is
       now used to provide a monotonically increasing sequence.
       (Bug #73781, Bug #21809959)

On behalf of the MySQL Release Team,
Lars Tangvald
