Dear MySQL Users,

MySQL Cluster is the distributed, shared-nothing variant of MySQL. This storage engine provides:

- In-Memory storage
- Real-time performance (with optional checkpointing to disk)
- Transparent Auto-Sharding
- Read & write scalability
- Active-Active/Multi-Master geographic replication
- 99.999% High Availability with no single point of failure and on-line maintenance
- NoSQL and SQL APIs (including C++, Java, http, Memcached and JavaScript/Node.js)

MySQL Cluster 7.5.9 has been released and can be downloaded from

  http://www.mysql.com/downloads/cluster/

where you will also find Quick Start guides to help you get your first MySQL Cluster database up and running.

MySQL Cluster 7.5 is also available from our repository for Linux platforms; go here for details:

  http://dev.mysql.com/downloads/repo/

The release notes are available from

  http://dev.mysql.com/doc/relnotes/mysql-cluster/7.5/en/index.html

MySQL Cluster enables users to meet the database challenges of next-generation web, cloud, and communications services with uncompromising scalability, uptime, and agility. More details can be found at

  http://www.mysql.com/products/cluster/

Enjoy!

Changes in MySQL NDB Cluster 7.5.9 (5.7.21-ndb-7.5.9)
(2018-01-17, General Availability)

MySQL NDB Cluster 7.5.9 is a new release of MySQL NDB Cluster 7.5, based on MySQL Server 5.7 and including features in version 7.5 of the NDB (http://dev.mysql.com/doc/refman/5.7/en/mysql-cluster.html) storage engine, as well as fixing recently discovered bugs in previous NDB Cluster releases.

Obtaining MySQL NDB Cluster 7.5

MySQL NDB Cluster 7.5 source code and binaries can be obtained from http://dev.mysql.com/downloads/cluster/.

For an overview of changes made in MySQL NDB Cluster 7.5, see What is New in NDB Cluster 7.5 (http://dev.mysql.com/doc/refman/5.7/en/mysql-cluster-what-is-new-7-5.html).
This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 5.7 through MySQL 5.7.21 (see Changes in MySQL 5.7.21 (Not yet released, General Availability) (http://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-21.html)).

Bugs Fixed

* NDB Replication: On an SQL node not being used for a replication channel, with sql_log_bin=0 (http://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_sql_log_bin), it was possible, after creating and populating an NDB table, for a table map event to be written to the binary log for the created table with no corresponding row events. This led to problems when this log was later used by a slave cluster replicating from the mysqld where this table was created. This is fixed by adding support for maintaining a cumulative any_value bitmap for global checkpoint event operations that represents bits set consistently for all rows of a specific table in a given epoch, and by adding a check to determine whether all operations (rows) for a specific table are marked as NOLOGGING, to prevent the addition of this table to the Table_map held by the binlog injector. As part of this fix, the NDB API adds a new getNextEventOpInEpoch3() (http://dev.mysql.com/doc/ndbapi/en/ndb-ndb-getnexteventopinepoch3.html) method which provides information about any AnyValue received, by making it possible to retrieve the cumulative any_value bitmap. (Bug #26333981)

* A query against the INFORMATION_SCHEMA.FILES (http://dev.mysql.com/doc/refman/5.7/en/files-table.html) table returned no results when it included an ORDER BY clause. (Bug #26877788)

* During a restart, DBLQH loads redo log part metadata for each redo log part it manages, from one or more redo log files. Since each file has a limited capacity for metadata, the number of files which must be consulted depends on the size of the redo log part.
These files are opened, read, and closed sequentially, but the closing of one file occurs concurrently with the opening of the next. In cases where closing of the file was slow, it was possible for more than 4 files per redo log part to be open concurrently; since these files were opened using the OM_WRITE_BUFFER option, more than 4 chunks of write buffer were allocated per part in such cases. The write buffer pool is not unlimited; if all redo log parts were in a similar state, the pool was exhausted, causing the data node to shut down. This issue is resolved by avoiding the use of OM_WRITE_BUFFER during metadata reload, so that any transient opening of more than 4 redo log files per log file part no longer leads to failure of the data node. (Bug #25965370)

* In certain circumstances where multiple Ndb objects were being used in parallel from an API node, the block number extracted from a block reference in DBLQH was the same as that of a SUMA block even though the request was coming from an API node. Due to this ambiguity, DBLQH mistook the request from the API node for a request from a SUMA block and failed. This is fixed by checking node IDs before checking block numbers. (Bug #88441, Bug #27130570)

* When the duplicate weedout algorithm was used for evaluating a semi-join, the result had missing rows. (Bug #88117, Bug #26984919)

  References: See also: Bug #87992, Bug #26926666.

* A table used in a loose scan could be used as a child in a pushed join query, leading to possibly incorrect results. (Bug #87992, Bug #26926666)

* When representing a materialized semi-join in the query plan, the MySQL Optimizer inserted extra QEP_TAB and JOIN_TAB objects to represent access to the materialized subquery result. The join pushdown analyzer did not properly set up its internal data structures for these, leaving them uninitialized instead.
This meant that later usage of any item objects referencing the materialized semi-join accessed an uninitialized tableno column when accessing a 64-bit tableno bitmask, possibly referring to a point beyond its end, leading to an unplanned shutdown of the SQL node. (Bug #87971, Bug #26919289)

* When a data node was configured for locking threads to CPUs, it failed during startup with the error Failed to lock tid. This was a side effect of a fix for a previous issue, which disabled CPU locking based on the version of the available glibc. The specific glibc issue being guarded against is encountered only in response to an internal NDB API call (Ndb_UnlockCPU()) not used by data nodes (and which can be accessed only through internal API calls). The current fix enables CPU locking for data nodes and disables it only for the relevant API calls when an affected glibc version is used. (Bug #87683, Bug #26758939)

  References: This issue is a regression of: Bug #86892, Bug #26378589.

* The NDBFS block's OM_SYNC flag is intended to make sure that all FSWRITEREQ signals used for a given file are synchronized, but was ignored by platforms that do not support O_SYNC, meaning that this feature did not behave properly on those platforms. Now the synchronization flag is used on those platforms that do not support O_SYNC. (Bug #76975, Bug #21049554)