Operating System: macOS 10.12 Sierra or later
Filename: mysql-8.0.23-macos10.15-x86_64.dmg
Sometimes the latest version of a piece of software can cause issues when installed on older devices or devices running an older version of the operating system.
Software makers usually fix these issues, but it can take them some time. In the meantime, you can download and install an older version such as MySQL 8.0.23.
For those interested in downloading the most recent release of MySQL for Mac or reading our review, simply click here.
All old versions distributed on our website are completely virus-free and available for download at no cost.
We would love to hear from you
If you have any questions or ideas that you want to share with us, head over to our Contact page and let us know. We value your feedback!
What's new in this version:
Added or Changed:
InnoDB: Performance was improved for the following operations:
- Dropping a large tablespace on a MySQL instance with a large buffer pool (>32GB).
- Dropping a tablespace with a significant number of pages referenced from the adaptive hash index.
- Truncating temporary tablespaces.
- The pages of dropped or truncated tablespaces and associated AHI entries are now removed from the buffer pool passively as pages are encountered during normal operations. Previously, dropping or truncating tablespaces initiated a full list scan to remove pages from the buffer pool immediately, which negatively impacted performance (Bug #98869)
- InnoDB: The new AUTOEXTEND_SIZE option defines the amount by which InnoDB extends the size of a tablespace when it becomes full, making it possible to extend tablespace size in larger increments. Allocating space in larger increments helps to avoid fragmentation and facilitates ingestion of large amounts of data. The AUTOEXTEND_SIZE option is supported with the CREATE TABLE, ALTER TABLE, CREATE TABLESPACE, and ALTER TABLESPACE statements. For more information, see Tablespace AUTOEXTEND_SIZE Configuration.
- An AUTOEXTEND_SIZE column was added to the INFORMATION_SCHEMA.INNODB_TABLESPACES table.
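As an illustrative sketch (the table, tablespace, and value names here are examples, not from the release notes), the option can be set and then inspected through the new column:
CREATE TABLE t1 (c1 INT) AUTOEXTEND_SIZE = 64M;
ALTER TABLESPACE ts1 AUTOEXTEND_SIZE = 64M;
SELECT NAME, AUTOEXTEND_SIZE
  FROM INFORMATION_SCHEMA.INNODB_TABLESPACES
 WHERE NAME = 'test/t1';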
- InnoDB: InnoDB now supports encryption of doublewrite file pages belonging to encrypted tablespaces. The pages are encrypted using the encryption key of the associated tablespace. For more information, see InnoDB Data-at-Rest Encryption.
- InnoDB: InnoDB atomics code was revised to use C++ std::atomic.
- When invoked with the --all-databases option, mysqldump now dumps the mysql database first, so that when the dump file is reloaded, any accounts named in the DEFINER clause of other objects will already have been created
- Some overhead for disabled Performance Schema and LOCK_ORDER tool instrumentation was identified and eliminated
- For BLOB and TEXT columns that have a default value expression, the INFORMATION_SCHEMA.COLUMNS table and SHOW COLUMNS statement now display the expression
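For example (a sketch; the table and default expression are illustrative), a default value expression is given in parentheses and now appears in the output:
CREATE TABLE t2 (b BLOB DEFAULT (UUID_TO_BIN(UUID())));
SHOW COLUMNS FROM t2;  -- the Default column displays the expression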
- CRC calculations for binlog checksums are faster on ARM platforms. Thanks to Krunal Bauskar for the contribution
- MySQL Server’s asynchronous connection failover mechanism now supports Group Replication topologies, by automatically monitoring changes to group membership and distinguishing between primary and secondary servers. When you add a group member to the source list and define it as part of a managed group, the asynchronous connection failover mechanism updates the source list to keep it in line with membership changes, adding and removing group members automatically as they join or leave. The new asynchronous_connection_failover_add_managed() and asynchronous_connection_failover_delete_managed() UDFs are used to add and remove managed sources.
- The connection is failed over to another group member if the currently connected source goes offline, leaves the group, or is no longer in the majority, and also if the currently connected source does not have the highest weighted priority in the group. For a managed group, a source's weight is assigned depending on whether it is a primary or a secondary server. So assuming that you set up the managed group to give a higher weight to a primary and a lower weight to a secondary, when the primary changes, the higher weight is assigned to the new primary, so the replica changes over the connection to it. This behavior also applies to single (non-managed) servers, so the connection is now failed over if another source server is available that has a higher weighted priority.
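A minimal sketch of adding a managed group as a failover source (the channel name, host, and group UUID are placeholders):
SELECT asynchronous_connection_failover_add_managed(
    'ch1', 'GroupReplication',
    'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa',  -- managed name: the group's UUID (placeholder)
    'gr-member-1', 3306,
    '',                                      -- network namespace (empty for none)
    80, 60);                                 -- primary weight, secondary weight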
- Replication channels can now be set to assign a GTID to replicated transactions that do not already have one, using the ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS option of the CHANGE REPLICATION SOURCE TO statement. This feature enables replication from a source that does not use GTID-based replication, to a replica that does. For a multi-source replica, you can have a mix of channels that use ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS, and channels that do not. The GTID can include the replica’s own server UUID or a server UUID that you assign to identify transactions from different sources.
- Note that a replica set up with ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS on any channel cannot be promoted to replace the replication source server in the event that a failover is required, and a backup taken from the replica cannot be used to restore the replication source server. The same restriction applies to replacing or restoring other replicas that use ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS on any channel. The GTID set (gtid_executed) from a replica set up with ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS is nonstandard and should not be transferred to another server, or compared with another server's gtid_executed set.
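A sketch of enabling the option on a stopped channel (the channel name is illustrative):
STOP REPLICA FOR CHANNEL 'ch1';
CHANGE REPLICATION SOURCE TO
    ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS = LOCAL  -- or OFF, or a UUID you assign
    FOR CHANNEL 'ch1';
START REPLICA FOR CHANNEL 'ch1';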
- For a multithreaded replica (where slave_parallel_workers is greater than 0), setting slave_preserve_commit_order=1 ensures that transactions are executed and committed on the replica in the same order as they appear in the replica's relay log. Each executing worker thread waits until all previous transactions are committed before committing. If a worker thread fails to execute a transaction because a possible deadlock was detected, or because the transaction's execution time exceeded a relevant wait timeout, it automatically retries the number of times specified by slave_transaction_retries before stopping with an error. Transactions with a non-temporary error are not retried.
- The replication applier on a multithreaded replica has always handled data access deadlocks that were identified by the storage engines involved. However, some other types of lock were not detected by the replication applier, such as locks involving access control lists (ACLs) or metadata locking (for example, FLUSH TABLES WITH READ LOCK statements). This could lead to three-actor deadlocks with the commit order locking, which could not be resolved by the replication applier, and caused replication to hang indefinitely. From MySQL 8.0.23, deadlock handling on multithreaded replicas that preserve the commit order has been enhanced to mitigate these types of deadlocks. The deadlocks are not specifically resolved by the replication applier, but the applier is aware of them and initiates automatic retries for the transaction, rather than hanging. If the retries are exhausted, replication stops in a controlled manner so that the deadlock can be resolved manually.
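A hedged configuration sketch for such a replica (the values are examples; enabling slave_preserve_commit_order requires the LOGICAL_CLOCK parallelization type):
STOP REPLICA SQL_THREAD;
SET GLOBAL slave_parallel_type = 'LOGICAL_CLOCK';
SET GLOBAL slave_parallel_workers = 4;        -- multithreaded applier
SET GLOBAL slave_preserve_commit_order = ON;  -- commit in relay-log order
START REPLICA SQL_THREAD;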
- The new temptable_max_mmap variable defines the maximum amount of memory the TempTable storage engine is permitted to allocate from memory-mapped temporary files before it starts storing data to InnoDB internal temporary tables on disk. A setting of 0 disables allocation of memory from memory-mapped temporary files. For more information, see Internal Temporary Table Use in MySQL.
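For instance (values illustrative):
SET GLOBAL temptable_max_mmap = 1073741824;  -- allow up to 1GB from memory-mapped files
SET GLOBAL temptable_max_mmap = 0;           -- disable mmap-backed allocation entirely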
Fixed:
- InnoDB: A CREATE TABLE operation that specified the COMPRESSION option was permitted with a warning on a system that does not support hole punching. The operation now fails with an error instead
- InnoDB: A MySQL DB system restart following an upgrade that was initiated while a data load operation was in progress raised an assertion failure
- InnoDB: An error message regarding the number of truncate operations on the same undo tablespace between checkpoints incorrectly indicated a limit of 64. The limit was raised from 64 to 50,000 in MySQL 8.0.22
- InnoDB: rw_lock_t and buf_block_t source code structures were reduced in size
- InnoDB: An InnoDB transaction became inconsistent after creating a table using a storage engine other than InnoDB from a query expression that operated on InnoDB tables
- InnoDB: In some circumstances, such as when an existing gap lock inherits a lock from a deleted record, the number of locks that appear in the INFORMATION_SCHEMA.INNODB_TRX table could diverge from the actual number of record locks. Thanks to Fungo Wang from Alibaba for the patch
- InnoDB: An off-by-one error in Fil_system sharding code was corrected, and the maximum number of shards (MAX_SHARDS) was changed to 69
- InnoDB: The TempTable storage engine memory allocator allocated extra blocks of memory unnecessarily
- InnoDB: A SELECT COUNT(*) operation on a table containing uncommitted data performed poorly due to unnecessary I/O. Thanks to Brian Yue for the contribution
- InnoDB: A race condition when shutting down the log writer raised an assertion failure
- InnoDB: Page cleaner threads were not utilized optimally in sync-flush mode, which could cause page flush operations to slow down or stall in some cases. Sync-flush mode occurs when InnoDB is close to running out of free space in the redo log, causing the page cleaner coordinator to initiate aggressive page flushing
- InnoDB: A high frequency of updates while undo log truncation was enabled caused purge to lag. The lag was due to the innodb_purge_rseg_truncate_frequency setting being changed temporarily from 128 to 1 when an undo tablespace was selected for truncation. The code that modified the setting has been removed
- InnoDB: Automated truncation of undo tablespaces caused a performance regression. To address this issue, undo tablespace files are now initialized at 16MB and extended by a minimum of 16MB. To handle aggressive growth, the file extension size is doubled if the previous file extension happened less than 0.1 seconds earlier. Doubling of the extension size can occur multiple times to a maximum of 256MB. If the previous file extension occurred more than 0.1 seconds earlier, the extension size is reduced by half, which can also occur multiple times, to a minimum of 16MB. Previously, the initial size of an undo tablespace depended on the InnoDB page size, and undo tablespaces were extended four extents at a time.
- If the AUTOEXTEND_SIZE option is defined for an undo tablespace, the undo tablespace is extended by the greater of the AUTOEXTEND_SIZE setting and the extension size determined by the logic described above.
- When an undo tablespace is truncated, it is normally recreated at 16MB in size, but if the current file extension size is larger than 16MB, and the previous file extension happened within the last second, the new undo tablespace is created at a quarter of the size defined by the innodb_max_undo_log_size variable.
- Stale undo tablespace pages are no longer removed at the next checkpoint. Instead, the pages are removed in the background by the InnoDB master thread (Bug #32020900, Bug #101194)
- InnoDB: A posix_fallocate() failure while preallocating space for a temporary tablespace raised an error and caused an initialization failure. A warning is now issued instead, and InnoDB falls back to the non-posix_fallocate() method for preallocating space
- InnoDB: An invalid pointer caused a shutdown failure on a MySQL Server compiled with the DISABLE_PSI_MEMORY source configuration option enabled
- InnoDB: A long SX lock held by an internal function that calculates new statistics for a given index caused a failure
- InnoDB: The INFORMATION_SCHEMA.INNODB_TABLESPACES table reported a FILE_SIZE of 0 for some tables and schemas. When the associated tablespace was not in the memory cache, the tablespace name was used to determine the tablespace file name, which was not always a reliable method. The tablespace ID is now used instead. Using the tablespace name remains as a fallback method
- InnoDB: After dropping a FULLTEXT index and renaming the table to move it to a new schema, the FULLTEXT auxiliary tables were not renamed accordingly and remained in the old schema directory
- InnoDB: After upgrading to MySQL 8.0, a failure occurred when attempting to perform a DML operation on a table that was previously defined with a full-text search index
- InnoDB: Importing a tablespace with a page-compressed table did not report a schema mismatch error for source and destination tables defined with a different COMPRESSION setting. The COMPRESSION setting of the exported table is now saved to the .cfg metadata file during the FLUSH TABLES ... FOR EXPORT operation, and that information is checked on import to ensure that both tables are defined with the same COMPRESSION setting
- InnoDB: Dummy keys used to check if the MySQL Keyring plugin is functioning were left behind in an inactive state, and the number of inactive dummy keys increased over time. The actual master key is now used instead, if present. If no master key is available, a dummy master key is generated
- InnoDB: Querying the INFORMATION_SCHEMA.FILES table after moving the InnoDB system tablespace outside of the data directory raised a warning indicating that the innodb_system filename is unknown
- InnoDB: In a replication scenario involving a replica with binary logging or log_slave_updates disabled, the server failed to start due to an excessive number of gaps in the mysql.gtid_executed table. The gaps occurred for workloads that included both InnoDB and non-InnoDB transactions. GTIDs for InnoDB transactions are flushed to the mysql.gtid_executed table by the GTID persister thread, which runs periodically, while GTIDs for non-InnoDB transactions are written to the mysql.gtid_executed table directly by replica server threads. The GTID persister thread fell behind as it cycled through merging entries and compressing the mysql.gtid_executed table. As a result, the size of the GTID flush list for InnoDB transactions grew over time along with the number of gaps in the mysql.gtid_executed table, eventually causing a server failure and subsequent startup failures. To address this issue, the GTID persister thread now writes GTIDs for both InnoDB and non-InnoDB transactions, and foreground commits are forced to wait if the GTID persister thread falls behind. Also, the gtid_executed_compression_period default setting was changed from 1000 to 0 to disable explicit compression of the mysql.gtid_executed table by default. Thanks to Venkatesh Prasad for the contribution
- InnoDB: Persisting GTID values for XA transactions affected XA transaction performance. Two GTID values are generated for XA transactions, one for the prepare stage and another for the commit stage. The first GTID value is written to the undo log and later overwritten by the second GTID value. Writing of the second GTID value could only occur after flushing the first GTID value to the gtid_executed table. Space is now reserved in the undo log for both XA transaction GTID values
- InnoDB: InnoDB source files were updated to address warnings produced when building Doxygen source code documentation
- InnoDB: The full-text search synchronization thread attempted to read a previously-freed word from the index cache
- InnoDB: A 20µs sleep in the buf_wait_for_read() function introduced with parallel read functionality in MySQL 8.0.17 took 1ms on Windows, causing an unexpected timeout when running certain tests. Also, AIO threads were found to have uneven numbers of pending operating system I/O requests
- InnoDB: Cleanup in certain replicated XA transactions failed to reattach the transaction object (trx_t), which raised an assertion failure
- InnoDB: The tablespace encryption type setting was not properly updated due to a failure during the resumption of an ALTER TABLESPACE ENCRYPTION operation following a server failure
- InnoDB: An interrupted tablespace encryption operation did not update the encrypt_type table option information in the data dictionary when the operation resumed processing after the server was restarted
- InnoDB: Internal counter variables associated with thread sleep delay and threads entering and leaving InnoDB were revised to use C++ std::atomic. Built-in atomic operations were removed. Thanks to Yibo Cai from ARM for the contribution
- InnoDB: A relaxed memory order was implemented for dictionary memory variable fetch-add (dict_temp_file_num.fetch_add) and store (dict_temp_file_num.store) operations.
- InnoDB: A background thread that resumed a tablespace encryption operation after the server started failed to take a metadata lock on the tablespace, which permitted concurrent DDL operations and led to a race condition with the startup thread. The startup thread now waits until the tablespace metadata lock is taken
- InnoDB: Calls to numa_all_nodes_ptr were replaced by the numa_get_mems_allowed() function. Thanks to Daniel Black for the contribution
- Partitioning: ALTER TABLE t1 EXCHANGE PARTITION ... WITH TABLE t2 led to an assert when t1 was not a partitioned table
- Replication: The network_namespace parameter for the asynchronous_connection_failover_add_source() and asynchronous_connection_failover_delete_source() UDFs is no longer used from MySQL 8.0.23. These UDFs add and remove replication source servers from the source list for a replication channel for the asynchronous connection failover mechanism. The network namespace for a replication channel is managed using the CHANGE REPLICATION SOURCE TO statement, and has special requirements for Group Replication source servers, so it should no longer be specified in the UDFs
- Replication: When the system variable transaction_write_set_extraction=XXHASH64 is set, which is the default in MySQL 8.0 and a requirement for Group Replication, the collection of writes for a transaction previously had no upper size limit. Now, for standard source to replica replication, the numeric limit on write sets specified by binlog_transaction_dependency_history_size is applied, after which the write set information is discarded but the transaction continues to execute. Because the write set information is then unavailable for the dependency calculation, the transaction is marked as non-concurrent, and is processed sequentially on the replica. For Group Replication, the process of extracting the writes from a transaction is required for conflict detection and certification on all group members, so the write set information cannot be discarded if the transaction is to complete. The byte limit set by group_replication_transaction_size_limit is applied instead of the numeric limit, and if the limit is exceeded, the transaction fails to execute
- Replication: When mysqlbinlog's --print-table-metadata option was used, mysqlbinlog used a different method for assessing numeric fields from the method used by the server when writing to the binary log, resulting in incorrect metadata output relating to these fields. mysqlbinlog now uses the same method as the server
- Replication: When using network namespaces in a replication channel and the initial connection from the replica to the master was interrupted, subsequent connection attempts failed to use the correct namespace information
- Replication: If the Group Replication applier channel (group_replication_applier) was holding a lock on a table, for example because of a backup in progress, and the member was expelled from the group and tried to rejoin automatically, the auto-rejoin attempt was unsuccessful and did not retry. Now, Group Replication checks during startup and rejoin attempts whether the group_replication_applier channel is already running. If that is the case at startup, an error message is returned. If that is the case during an auto-rejoin attempt, that attempt fails, but further attempts are made as specified by the group_replication_autorejoin_tries system variable
- Replication: If a group member was expelled and made an auto-rejoin attempt at a point when some tables on the instance were locked (for example while a backup was running), the attempt failed and no further attempts were made. This scenario is now handled correctly
- Replication: As the number of replicas replicating from a semisynchronous source server increased, locking contention could result in a performance degradation. The locking mechanisms used by the plugins have been changed to use shared locks where possible, avoid unnecessary lock acquisitions, and limit callbacks. The new behaviors can be implemented by enabling the following system variables:
- replication_sender_observe_commit_only=1 limits callbacks.
- replication_optimize_for_static_plugin_config=1 adds shared locks and avoids unnecessary lock acquisitions. This system variable must be disabled if you want to uninstall the plugin.
- Both system variables can be enabled before or after installing the semisynchronous replication plugin, and can be enabled while replication is running. Semisynchronous replication source servers can also get performance benefits from enabling these system variables, because they use the same locking mechanisms as the replicas
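A sketch of enabling both on a running server:
SET GLOBAL replication_optimize_for_static_plugin_config = ON;
SET GLOBAL replication_sender_observe_commit_only = ON;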
- Replication: On a multi-threaded replica where the commit order is preserved, worker threads must wait for all transactions that occur earlier in the relay log to commit before committing their own transactions. If a deadlock occurs because a thread waiting to commit a transaction later in the commit order has locked rows needed by a transaction earlier in the commit order, a deadlock detection algorithm signals the waiting thread to roll back its transaction. Previously, if transaction retries were not available, the worker thread that rolled back its transaction would exit immediately without signalling other worker threads in the commit order, which could stall replication. A worker thread in this situation now waits for its turn to call the rollback function, which means it signals the other threads correctly (Bug #87796)
- Replication: GTIDs are only available on a server instance up to the number of non-negative values for a signed 64-bit integer (2 to the power of 63 minus 1). If you set the value of gtid_purged to a number that approaches this limit, subsequent commits can cause the server to run out of GTIDs and take the action specified by binlog_error_action. From MySQL 8.0.23, a warning message is issued when the server instance is approaching the limit
- Microsoft Windows: On Windows, running the MySQL server as a service caused shared-memory connections to fail
- JSON: JSON_ARRAYAGG() did not always perform proper error handling (Bug #32012559, Bug #32181438)
- JSON: When updating a JSON value using JSON_SET(), JSON_REPLACE(), or JSON_REMOVE(), the target column can sometimes be updated in place. Previously, this happened only when the target table of the update operation was a base table; when the target table was an updatable view, the update was always performed by writing the full JSON value.
- Now in such cases, an in-place update (that is, a partial update) is also performed when the target table is an updatable view
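A minimal sketch, assuming a base table t with a JSON column j and an updatable view v over it:
CREATE TABLE t (id INT PRIMARY KEY, j JSON);
CREATE VIEW v AS SELECT id, j FROM t;
UPDATE v SET j = JSON_SET(j, '$.name', 'abc') WHERE id = 1;  -- now also eligible for a partial, in-place update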
- JSON: Work done in MySQL 8.0.22 to cause prepared statements to be prepared only once introduced a regression in the handling of dynamic parameters to JSON functions. All JSON arguments were classified as data type MYSQL_TYPE_JSON, which overlooked the fact that JSON functions take two kinds of JSON parameters—JSON values and JSON documents—and this distinction cannot be made with the data type only. For Bug #31667405, this problem was solved for comparison operators and the IN() operator by making it possible to tag a JSON argument as being a scalar value, while letting arguments to other JSON functions be treated as JSON documents.
- The present fix restores for a number of JSON functions their treatment of certain arguments as JSON values, as listed here:
- The first argument to MEMBER OF()
- The third, fifth, seventh, and subsequent odd-numbered arguments to the functions JSON_INSERT(), JSON_REPLACE(), JSON_SET(), JSON_ARRAY_APPEND(), and JSON_ARRAY_INSERT()
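For example (a sketch), a dynamic parameter passed as the first argument to MEMBER OF() is again treated as a scalar JSON value rather than parsed as a JSON document:
PREPARE s FROM 'SELECT ? MEMBER OF(''[1, 2, "abc"]'')';
SET @v = 'abc';
EXECUTE s USING @v;  -- matches the string "abc" as a scalar value, returning 1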
- JSON: When mysqld was run with --debug, attempting to execute a query that made use of a multi-valued index raised an error
- Use of the thread_pool plugin could result in Address Sanitizer warnings
- When a condition pushed down to a materialized derived table was only partially pushed down, the optimizer could, in some cases where a query transformation had added new conditions to the WHERE condition, call the internal fix_fields() function for the condition remaining in the outer query block. A successful return from this call was misinterpreted as an error, leading to the silent failure of the original statement
- Multiple calls to a stored procedure containing an ALTER TABLE statement that included an ORDER BY clause could cause a server exit
- Prepared statements involving stored programs could cause heap-use-after-free memory problems
- Queries on INFORMATION_SCHEMA tables that involved materialized derived tables could fail
- A potential buffer overflow was fixed. Thanks to Sifang Zhao for pointing out the issue, and for suggesting a fix (although it was not used)
- Conversion of FLOAT values to values of type INT could generate Undefined Behavior Sanitizer warnings
- In multiple-row queries, the LOAD_FILE() function evaluated to the same value for every row
- Generic Linux tar file distributions had too-restrictive file permissions after unpacking, requiring a manual chmod to correct
- For debug builds, prepared SET statements containing subqueries in stored procedures could raise an assertion
- For prepared statements, illegal mix of collations errors could occur for legal collation mixes
- The functions REGEXP_LIKE(), REGEXP_INSTR(), and REGEXP_REPLACE() raise errors for malformed regular expression patterns, but could also return NULL for such cases, causing subsequent debug asserts. Now we ensure that these functions do not return NULL except in certain specified cases.
- The function REGEXP_SUBSTR() can always return NULL, so no such check is needed, and for this function we make sure that one is not performed
- Testing an aggregate function for IS NULL or IS NOT NULL in a HAVING condition using WITH ROLLUP led to wrong results
- When a new aggregate function was added to the current query block because an inner query block had an aggregate function requiring evaluation in the current one, the server did not add rollup wrappers to it as needed
- For debug builds, certain CREATE TABLE statements with CHECK constraints could raise an assertion
- Incorrect BLOB field values were passed from InnoDB during a secondary engine load operation
- The LOCK_ORDER tool did not correctly represent InnoDB share exclusive locks
- The server did not properly handle an error raised when trying to use an aggregation function with an invalid column type as part of a hash join
- The length of the WORD column of the INFORMATION_SCHEMA.KEYWORDS table could change depending on table contents
- The Performance Schema host_cache table was empty and did not expose the contents of the host cache if the Performance Schema was disabled. The table now shows cache contents regardless of whether the Performance Schema is enabled
- A HANDLER READ statement sometimes hit an assert when a previous statement did not restore the original value of THD::mark_used_columns after use
- Importing a compressed table could cause an unexpected server exit if the table contained values that were very large when uncompressed
- Removed a memory leak that could occur when a subquery using a hash join and LIMIT was executed repeatedly
- A compilation failure on Ubuntu was corrected
- Memory used for storing partial-revokes information could grow excessively for sessions that executed a large number of statements
- The server did not handle all cases of the WHERE_CONDITION optimization correctly
- FLUSH TABLES WITH READ LOCK could block other sessions from executing SHOW TABLE STATUS
- In some cases, MIN() and MAX() incorrectly returned NULL when used as window functions with temporal or JSON values as arguments
- GRANT ... GRANT OPTION ... TO and GRANT ... TO ... WITH GRANT OPTION sometimes were not correctly written to the server logs
- For debug builds, CREATE TABLE using a partition list of more than 256 entries raised an assertion
- It was possible for queries in the file named by the init_file system variable to cause server startup failure
- When performing a hash join, the optimizer could register a false match between a negative integer value and a very large unsigned integer value
- SHOW VARIABLES could report an incorrect value for the partial_revokes system variable
- In the Performance Schema user_defined_functions table, the value of the UDF_LIBRARY column is supposed to be NULL for UDFs registered via the service API. The value was incorrectly set to the empty string
- The server automatic upgrade procedure failed to upgrade older help tables that used the latin1 character set
- Duplicate warnings could occur when executing an SQL statement that read the grant tables in serializable or repeatable-read transaction isolation level
- In certain queries with DISTINCT aggregates (which in general are solved by sorting before aggregation), the server used a temporary table instead of streaming due to the mistaken assumption that the logic for handling the temporary table performed deduplication. Now the server checks for the implied unique index instead, which is more robust and allows for the removal of unnecessary logic
- Certain combinations of lower_case_table_names values and schema names in Event Scheduler event definitions could cause the server to stall
- Calling one stored function from within another could produce a conflict in field resolution, resulting in a server exit
- User-defined functions defined without a udf_init() method could cause an unexpected server exit
- Setting the secure_file_priv system variable to NULL should disable its action, but instead caused the server to create a directory named NULL
- mysqlpump could exit unexpectedly due to improper simultaneous accesses to shared structures
- Uninstalling a component and deregistering user-defined functions (UDFs) installed by the component was not properly synchronized with whether the UDFs were currently in use
- Cleanup following execution of a prepared statement that performed a multi-table UPDATE or DELETE was not always done correctly, which meant that, following the first execution of such a prepared statement, the server reported a nonzero number of rows updated, even though no rows were actually changed
- For the engines which support primary key extension, when the total key length exceeded MAX_KEY_LENGTH or the number of key parts exceeded MAX_REF_PARTS, key parts of primary keys which did not fit within these limits were not added to the secondary key, but key parts of primary keys were unconditionally marked as part of secondary keys.
- This led to a situation in which the secondary key was treated as a covering index, which meant sometimes the wrong access method was chosen.
- This is fixed by modifying the way in which key parts of primary keys are added to secondary keys, so that those which do not fit within the limits mentioned previously are cleared
- When MySQL is configured with -DWITH_ICU=system, CMake now checks that the ICU library version is sufficiently recent
- When invoked with the --binary-as-hex option, mysql displayed NULL values as empty binary strings (0x).
- Selecting an undefined variable returned the empty binary string (0x) rather than NULL
- Enabling DISABLE_PSI_xxx Performance Schema-related CMake options caused build failures
- Some queries returned different results depending on the value of internal_tmp_mem_storage_engine.
- The root cause of this issue was that, when buffering rows for window functions, if the size of the in-memory temporary table holding these buffered rows exceeds the limit specified, a new temporary table is created on disk. The frame buffer partition offset is set at the beginning of a new partition to the total number of rows read so far, and is updated specifically for use when the temporary table is moved to disk (this being used to calculate the hints required to process window functions). The problem arose because the frame buffer partition offset was not updated in the specific case where a new partition started while the temporary table was being created on disk, which caused the wrong rows to be read.
- This issue is fixed by making sure to update the frame buffer partition offset correctly whenever a new partition starts while a temporary table is moved to disk
- While buffering rows for window functions, if the size of the in-memory temporary table holding these buffered rows exceeds the limit specified by temptable_max_ram, a new temporary table is created on disk. After the creation of the temporary table, hints used to process window functions need to be reset, since the temporary table is now moved to disk, making the existing hints unusable. When the creation of the temporary table on disk occurred when the first row in the frame buffer was being processed, the hints had not been initialized and trying to reset these uninitialized hints resulted in an unplanned server exit.
- This issue is fixed by adding a check to verify whether frame buffer hints have been initialized, prior to resetting them
- The Performance Schema could produce incorrect results for joins on a CHANNEL_NAME column when the index for CHANNEL_NAME was disabled with USE INDEX ()
- When removing unused window definitions, a subquery that was part of an ORDER BY was not removed
- In certain cases, the server did not handle multiply-nested subqueries correctly
- The recognized syntax for a VALUES statement includes an ORDER BY clause, but this clause was not resolved, so the execution engine could encounter invalid data
- The server attempted to access a non-existent temporary directory at startup, causing a failure. Checks were added to ensure that temporary directories exist, and that files are successfully created in the tmpdir directory
- While removing redundant sorting, a window's ordering was removed due to the fact that rows were expected to come in order because of the ordering of another window. When the other window was subsequently removed because it was unused, this resulted in unordered rows, which was not expected during evaluation.
- Now in such cases, removal of redundant sorts is not performed until after any unused windows have been removed. In addition, resolution of any rollups has been moved to the preparation phase
- Semisynchronous replication errors were incorrectly written to the error log with a subsystem tag of Server. They are now written with a tag of Repl, the same as for other replication errors
- A user could grant itself as a role to itself
- The server did not always correctly handle cases in which multiple WHERE conditions, one of which was always FALSE, referred to the same subquery
- With a lower_case_table_names=2 setting, InnoDB background threads sometimes acquired table metadata locks using the wrong character case for the schema name part of a lock key, resulting in unprotected metadata and race conditions. The correct character case is now applied. Changes were also implemented to prevent metadata locks from being released before corresponding data dictionary objects, and to improve assertion code that checks lock protection when acquiring data dictionary objects
- If a CR_UNKNOWN_ERROR was to be sent to a client, an exception occurred
- Conversion of DOUBLE values to values of type BIT, ENUM, or SET could generate Undefined Behavior Sanitizer warnings
- Certain accounts could cause server startup failure if the skip_name_resolve system variable was enabled
- Client programs could unexpectedly exit if communication packets contained bad data
- A buffer overflow in the client library was fixed
- When creating a multi-valued or other functional index, a performance drop was seen when executing a query against the table on which the index was defined, even though the index itself was not actually used. This occurred because the hidden virtual column that backs such indexes was evaluated unnecessarily for each row in the query
- CMake checks for libcurl dependencies were improved
- mysql_config_editor incorrectly treated # in password values as a comment character
- In some cases, the optimizer attempted to compute the hash value for an empty string. Now a fixed value is always used instead
- The INSERT() and RPAD() functions did not correctly set the character set of the result
- Some corner cases for val1 BETWEEN val2 AND val3 were fixed, such as -1 BETWEEN 9223372036854775808 AND 1 returning true
- For the Performance Schema memory_summary_global_by_event_name table, the low watermark columns could have negative values, and the high watermark columns had ever-increasing values even when the server memory usage did not increase
- Several issues converting strings to numbers were fixed
- Certain GROUP BY queries that performed correctly did not return the expected result when WITH ROLLUP was added. This was due to the fact that decimal information was not always correctly piped through rollup group items, causing functions returning decimal values such as TRUNCATE() to receive data of the wrong type
- When creating fields for materializing temporary tables (that is, when needing to sort a join), the optimizer checks whether the item needs to be copied or is only a constant. This was not done correctly in one specific case: when performing an outer join against a view or derived table containing a constant, the item was not properly materialized into the table, which could yield spurious occurrences of NULL in the result
- When REGEXP_REPLACE() was used in an SQL statement, the internal function Regexp_engine::Replace() did not reset the error code value after handling a record, which could affect processing of the next record and lead to issues. Our thanks to Hope Lee for the contribution
- For a query having the following form, the column list sometimes assumed an inconsistent state after temporary tables were created, causing out-of-bounds indexing later:
SELECT * FROM (
SELECT PI()
FROM t1 AS table1, t1 AS table2
ORDER BY PI(), table1.a
) AS d1;
- When aggregating data that was already sorted (known as performing streaming aggregation, due to no temporary tables being used), it was not possible to determine when a group ended until processing the first row in the next group, by which time the group expressions to be output were often already overwritten.
- This is fixed by replacing the complex logic previously used with the much simpler method of saving a representative row for the group when encountering it the first time, so that its columns can easily be retrieved for the output row when needed
- Subqueries making use of full-text matching might not perform properly when subquery_to_derived was enabled, and could lead to an assert in debug builds
- When an ALTER TABLE ... CONVERT TO CHARACTER SET statement is executed, the character set of every CHAR, VARCHAR, and TEXT column in the table is updated to the new CHARACTER SET value. This change was also applied to the hidden CHAR column used by an ARRAY column for a multi-valued index; since the character set of the hidden column must be one of my_charset_utf8mb4_0900_bin or binary, this led to an assert in debug builds of the server.
- This issue is resolved by no longer setting the character set of the hidden column to that of the table when executing the ALTER TABLE statement referenced previously; this is similar to what is done for BLOB columns in similar circumstances
- In some cases, the server's internal string-conversion routines had problems handling floating-point values which used length specifiers and triggered use of scientific notation