The discussion here describes restrictions that apply to the use of MySQL features such as stored programs, subqueries, and views. The first group of restrictions applies to the features described in Chapter 23, Stored Programs and Views.
Some of the restrictions noted here apply to all stored routines; that is, both to stored procedures and stored functions. There are also some restrictions specific to stored functions but not to stored procedures.
The restrictions for stored functions also apply to triggers. There are also some restrictions specific to triggers.
The restrictions for stored procedures also apply to the DO clause of Event Scheduler event definitions. There are also some restrictions specific to events.
Stored routines cannot contain arbitrary SQL statements. The following statements are not permitted:
The locking statements LOCK TABLES and UNLOCK TABLES.
LOAD DATA and LOAD XML.
SQL prepared statements (PREPARE, EXECUTE, DEALLOCATE PREPARE) can be used in stored procedures, but not stored functions or triggers. Thus, stored functions and triggers cannot use dynamic SQL (where you construct statements as strings and then execute them).
Generally, statements not permitted in SQL prepared statements are also not permitted in stored programs. For a list of statements supported as prepared statements, see Section 13.5, “Prepared SQL Statement Syntax”. Exceptions are SIGNAL, RESIGNAL, and GET DIAGNOSTICS, which are not permissible as prepared statements but are permitted in stored programs.
Because local variables are in scope only during stored program execution, references to them are not permitted in prepared statements created within a stored program. Prepared statement scope is the current session, not the stored program, so the statement could be executed after the program ends, at which point the variables would no longer be in scope. For example, SELECT ... INTO local_var cannot be used as a prepared statement. This restriction also applies to stored procedure and function parameters. See Section 13.5.1, “PREPARE Syntax”.
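As an illustration of the preceding points, the following minimal sketch (the procedure and table names are hypothetical) uses dynamic SQL in a stored procedure. Note that it holds the statement text in a user-defined variable rather than a local variable, because local variables cannot be referenced from a prepared statement:

DELIMITER //
CREATE PROCEDURE count_rows(IN tbl VARCHAR(64))
BEGIN
  -- Build the statement text in a user-defined variable
  SET @s = CONCAT('SELECT COUNT(*) FROM ', tbl);
  PREPARE stmt FROM @s;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
END//
DELIMITER ;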
Within all stored programs (stored procedures and functions, triggers, and events), the parser treats BEGIN [WORK] as the beginning of a BEGIN ... END block. To begin a transaction in this context, use START TRANSACTION instead.
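For example, a minimal sketch (the procedure, table, and column names are hypothetical) of a stored procedure that needs an explicit transaction:

DELIMITER //
CREATE PROCEDURE transfer(IN amt DECIMAL(10,2), IN src INT, IN dst INT)
BEGIN
  -- BEGIN here would start a nested block, not a transaction
  START TRANSACTION;
  UPDATE accounts SET balance = balance - amt WHERE id = src;
  UPDATE accounts SET balance = balance + amt WHERE id = dst;
  COMMIT;
END//
DELIMITER ;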
The following additional statements or operations are not permitted within stored functions. They are permitted within stored procedures, except stored procedures that are invoked from within a stored function or trigger. For example, if you use FLUSH in a stored procedure, that stored procedure cannot be called from a stored function or trigger.
Statements that perform explicit or implicit commit or rollback. Support for these statements is not required by the SQL standard, which states that each DBMS vendor may decide whether to permit them.
Statements that return a result set. This includes SELECT statements that do not have an INTO var_list clause and other statements such as SHOW, EXPLAIN, and CHECK TABLE. A function can process a result set either with SELECT ... INTO var_list or by using a cursor and FETCH statements, as illustrated in the example following this list. See Section 13.2.10.1, “SELECT ... INTO Syntax”, and Section 13.6.6, “Cursors”.
FLUSH statements.
Stored functions cannot be used recursively.
A stored function or trigger cannot modify a table that is already being used (for reading or writing) by the statement that invoked the function or trigger.
If you refer to a temporary table multiple times in a stored function under different aliases, a Can't reopen table: 'tbl_name' error occurs, even if the references occur in different statements within the function.
HANDLER ... READ statements that invoke stored functions can cause replication errors and are disallowed.
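To illustrate the result-set restriction described above, the following sketch (the function, table, and column names are hypothetical) shows a stored function that consumes a result set internally with SELECT ... INTO rather than returning it; a cursor with FETCH would serve equally well:

DELIMITER //
CREATE FUNCTION count_active() RETURNS INT READS SQL DATA
BEGIN
  DECLARE n INT;
  -- SELECT ... INTO stores the result in a variable instead of
  -- returning a result set, which a function is not allowed to do
  SELECT COUNT(*) INTO n FROM accounts WHERE active = 1;
  RETURN n;
END//
DELIMITER ;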
For triggers, the following additional restrictions apply:
Triggers are not activated by foreign key actions.
When using row-based replication, triggers on the slave are not activated by statements that originate on the master; when using statement-based replication, they are. For more information, see Section 17.4.1.36, “Replication and Triggers”.
The RETURN statement is not permitted in triggers, which cannot return a value. To exit a trigger immediately, use the LEAVE statement, as illustrated in the example following this list.
Triggers are not permitted on tables in the mysql database. Nor are they permitted on INFORMATION_SCHEMA or performance_schema tables. Those tables are actually views and triggers are not permitted on views.
The trigger cache does not detect when metadata of the underlying objects has changed. If a trigger uses a table and the table has changed since the trigger was loaded into the cache, the trigger operates using the outdated metadata.
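As a sketch of the RETURN restriction noted above (the table and column names are hypothetical), a trigger can exit early by leaving a labeled block:

DELIMITER //
CREATE TRIGGER bi_orders BEFORE INSERT ON orders
FOR EACH ROW
trig: BEGIN
  IF NEW.qty IS NULL THEN
    -- RETURN is not permitted in a trigger; LEAVE exits the block instead
    LEAVE trig;
  END IF;
  SET NEW.total = NEW.qty * NEW.price;
END//
DELIMITER ;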
The same identifier might be used for a routine parameter, a local variable, and a table column. Also, the same local variable name can be used in nested blocks. For example:
CREATE PROCEDURE p (i INT)
BEGIN
  DECLARE i INT DEFAULT 0;
  SELECT i FROM t;
  BEGIN
    DECLARE i INT DEFAULT 1;
    SELECT i FROM t;
  END;
END;
In such cases, the identifier is ambiguous and the following precedence rules apply:
A local variable takes precedence over a routine parameter or table column.
A routine parameter takes precedence over a table column.
A local variable in an inner block takes precedence over a local variable in an outer block.
The behavior that variables take precedence over table columns is nonstandard.
Use of stored routines can cause replication problems. This issue is discussed further in Section 23.7, “Binary Logging of Stored Programs”.
The --replicate-wild-do-table=db_name.tbl_name option applies to tables, views, and triggers. It does not apply to stored procedures and functions, or events. To filter statements operating on the latter objects, use one or more of the --replicate-*-db options.
There are no stored routine debugging facilities.
The MySQL stored routine syntax is based on the SQL:2003 standard. The following items from that standard are not currently supported:
UNDO handlers
FOR loops
To prevent problems of interaction between sessions, when a client issues a statement, the server uses a snapshot of routines and triggers available for execution of the statement. That is, the server calculates a list of procedures, functions, and triggers that may be used during execution of the statement, loads them, and then proceeds to execute the statement. While the statement executes, it does not see changes to routines performed by other sessions.
For maximum concurrency, stored functions should minimize their side-effects; in particular, updating a table within a stored function can reduce concurrent operations on that table. A stored function acquires table locks before executing, to avoid inconsistency in the binary log due to mismatch of the order in which statements execute and when they appear in the log. When statement-based binary logging is used, statements that invoke a function are recorded rather than the statements executed within the function. Consequently, stored functions that update the same underlying tables do not execute in parallel. In contrast, stored procedures do not acquire table-level locks. All statements executed within stored procedures are written to the binary log, even for statement-based binary logging. See Section 23.7, “Binary Logging of Stored Programs”.
The following limitations are specific to the Event Scheduler:
Event names are handled in case-insensitive fashion. For example, you cannot have two events in the same database with the names anEvent and AnEvent.
An event may not be created, altered, or dropped by a stored routine, trigger, or another event. An event also may not create, alter, or drop stored routines or triggers. (Bug #16409, Bug #18896)
DDL statements on events are prohibited while a LOCK TABLES statement is in effect.
Event timings using the intervals YEAR, QUARTER, MONTH, and YEAR_MONTH are resolved in months; those using any other interval are resolved in seconds. There is no way to cause events scheduled to occur at the same second to execute in a given order. In addition (due to rounding, the nature of threaded applications, and the fact that a nonzero length of time is required to create events and to signal their execution), events may be delayed by as much as 1 or 2 seconds. However, the time shown in the INFORMATION_SCHEMA.EVENTS table's LAST_EXECUTED column or the mysql.event table's last_executed column is always accurate to within one second of the actual event execution time. (See also Bug #16522.)
Each execution of the statements contained in the body of an event takes place in a new connection; thus, these statements have no effect in a given user session on the server's statement counts such as Com_select and Com_insert that are displayed by SHOW STATUS. However, such counts are updated in the global scope. (Bug #16422)
Events do not support times later than the end of the Unix Epoch; this is approximately the beginning of the year 2038. Such dates are specifically not permitted by the Event Scheduler. (Bug #16396)
References to stored functions, user-defined functions, and tables in the ON SCHEDULE clauses of CREATE EVENT and ALTER EVENT statements are not permitted. (See Bug #22830 for more information.)
SIGNAL, RESIGNAL, and GET DIAGNOSTICS are not permissible as prepared statements. For example, this statement is invalid:
PREPARE stmt1 FROM 'SIGNAL SQLSTATE "02000"';
SQLSTATE values in class '04' are not treated specially. They are handled the same as other exceptions.
In standard SQL, the first condition relates to the SQLSTATE value returned for the previous SQL statement. In MySQL, this is not guaranteed, so to get the main error, you cannot do this:
GET DIAGNOSTICS CONDITION 1 @errno = MYSQL_ERRNO;
Instead, do this:
GET DIAGNOSTICS @cno = NUMBER;
GET DIAGNOSTICS CONDITION @cno @errno = MYSQL_ERRNO;
Server-side cursors are implemented in the C API using the mysql_stmt_attr_set() function. The same implementation is used for cursors in stored routines. A server-side cursor enables a result set to be generated on the server side, but not transferred to the client except for those rows that the client requests. For example, if a client executes a query but is only interested in the first row, the remaining rows are not transferred.
In MySQL, a server-side cursor is materialized into an internal temporary table. Initially, this is a MEMORY table, but it is converted to a MyISAM table when its size exceeds the smaller of the max_heap_table_size and tmp_table_size system variable values.
The same restrictions apply to internal temporary tables created
to hold the result set for a cursor as for other uses of internal
temporary tables. See Section 8.4.4, “Internal Temporary Table Use in MySQL”.
One limitation of the implementation is that for a large result
set, retrieving its rows through a cursor might be slow.
Cursors are read only; you cannot use a cursor to update rows.
UPDATE WHERE CURRENT OF and DELETE WHERE CURRENT OF are not implemented, because updatable cursors are not supported.
Cursors are nonholdable (not held open after a commit).
Cursors are asensitive.
Cursors are nonscrollable.
Cursors are not named. The statement handler acts as the cursor ID.
You can have open only a single cursor per prepared statement. If you need several cursors, you must prepare several statements.
You cannot use a cursor for a statement that generates a result set if the statement is not supported in prepared mode. This includes statements such as CHECK TABLE, HANDLER READ, and SHOW BINLOG EVENTS.
In general, you cannot modify a table and select from the same table in a subquery. For example, this limitation applies to statements of the following forms:
DELETE FROM t WHERE ... (SELECT ... FROM t ...);
UPDATE t ... WHERE col = (SELECT ... FROM t ...);
{INSERT|REPLACE} INTO t (SELECT ... FROM t ...);
Exception: The preceding prohibition does not apply if for the modified table you are using a derived table and that derived table is materialized rather than merged into the outer query. (See Section 8.2.2.3, “Optimizing Derived Tables, View References, and Common Table Expressions”.) Example:
UPDATE t ... WHERE col = (SELECT * FROM (SELECT ... FROM t...) AS dt ...);
Here the result from the derived table is materialized as a temporary table, so the relevant rows in t have already been selected by the time the update to t takes place.
In general, you may be able to influence the optimizer to materialize a derived table by adding a NO_MERGE optimizer hint. See Section 8.9.2, “Optimizer Hints”.
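For example, a rough sketch (the table and column names are hypothetical; the NO_MERGE hint is available in MySQL 8.0.22 and later) that places the hint in the query block referencing the derived table:

UPDATE t
  SET c2 = 0
  WHERE c1 = (SELECT /*+ NO_MERGE(dt) */ MAX(dt.c1)
              FROM (SELECT c1 FROM t WHERE c3 > 10) AS dt);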
Row comparison operations are only partially supported:
For expr [NOT] IN subquery, expr can be an n-tuple (specified using row constructor syntax) and the subquery can return rows of n-tuples. The permitted syntax is therefore more specifically expressed as:
row_constructor [NOT] IN table_subquery
For expr op {ALL|ANY|SOME} subquery, expr must be a scalar value and the subquery must be a column subquery; it cannot return multiple-column rows.
In other words, for a subquery that returns rows of n-tuples, this is supported:
(expr_1, ..., expr_n) [NOT] IN table_subquery
But this is not supported:
(expr_1, ..., expr_n) op {ALL|ANY|SOME} subquery
The reason for supporting row comparisons for IN but not for the others is that IN is implemented by rewriting it as a sequence of = comparisons and AND operations. This approach cannot be used for ALL, ANY, or SOME.
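For example, with hypothetical tables t1 and t2, the first statement below uses the supported row-constructor form of IN; the second, which compares a row constructor with ANY, is rejected:

-- Supported: a 2-tuple row constructor compared with [NOT] IN
SELECT * FROM t1
  WHERE (t1.a, t1.b) IN (SELECT t2.a, t2.b FROM t2);

-- Not supported: a row constructor with an {ALL|ANY|SOME} comparison
SELECT * FROM t1
  WHERE (t1.a, t1.b) = ANY (SELECT t2.a, t2.b FROM t2);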
Subqueries in the FROM clause cannot be correlated subqueries. They are materialized in whole (evaluated to produce a result set) during query execution, so they cannot be evaluated per row of the outer query. The optimizer delays materialization until the result is needed, which may permit materialization to be avoided. See Section 8.2.2.3, “Optimizing Derived Tables, View References, and Common Table Expressions”.
MySQL does not support LIMIT in subqueries for certain subquery operators:
mysql> SELECT * FROM t1
    -> WHERE s1 IN (SELECT s2 FROM t2 ORDER BY s1 LIMIT 1);
ERROR 1235 (42000): This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'
MySQL permits a subquery to refer to a stored function that has data-modifying side effects such as inserting rows into a table. For example, if f() inserts rows, the following query can modify data:
SELECT ... WHERE x IN (SELECT f() ...);
This behavior is an extension to the SQL standard. In MySQL, it can produce nondeterministic results because f() might be executed a different number of times for different executions of a given query depending on how the optimizer chooses to handle it.
For statement-based or mixed-format replication, one implication of this indeterminism is that such a query can produce different results on the master and its slaves.
View processing is not optimized:
It is not possible to create an index on a view.
Indexes can be used for views processed using the merge algorithm. However, a view that is processed with the temptable algorithm is unable to take advantage of indexes on its underlying tables (although indexes can be used during generation of the temporary tables).
There is a general principle that you cannot modify a table and select from the same table in a subquery. See Section C.4, “Restrictions on Subqueries”.
The same principle also applies if you select from a view that selects from the table in a subquery, when the view is evaluated using the merge algorithm. Example:
CREATE VIEW v1 AS
SELECT * FROM t2 WHERE EXISTS (SELECT 1 FROM t1 WHERE t1.a = t2.a);
UPDATE t1, v1 SET t1.a = 1 WHERE t1.b = v1.b;
If the view is evaluated using a temporary table, you can select from the table in the view subquery and still modify that table in the outer query. In this case the view is stored in a temporary table and thus you are not really selecting from the table in a subquery and modifying it “at the same time.” (This is another reason you might wish to force MySQL to use the temptable algorithm by specifying ALGORITHM = TEMPTABLE in the view definition.)
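For instance, a sketch based on the preceding example: defining the view with the TEMPTABLE algorithm permits the outer statement to modify t1 even though the view subquery selects from t1.

CREATE ALGORITHM = TEMPTABLE VIEW v1 AS
SELECT * FROM t2 WHERE EXISTS (SELECT 1 FROM t1 WHERE t1.a = t2.a);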
You can use DROP TABLE or ALTER TABLE to drop or alter a table that is used in a view definition. No warning results from the DROP or ALTER operation, even though this invalidates the view. Instead, an error occurs later, when the view is used. CHECK TABLE can be used to check for views that have been invalidated by DROP or ALTER operations.
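A minimal sketch (hypothetical table and view names) of how the problem surfaces and how CHECK TABLE reveals it:

CREATE TABLE t (i INT);
CREATE VIEW v AS SELECT i FROM t;
DROP TABLE t;      -- succeeds with no warning, but v is now invalid
CHECK TABLE v;     -- reports that the view references an invalid table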
With regard to view updatability, the overall goal is that if any view is theoretically updatable, it should be updatable in practice. Many theoretically updatable views can be updated now, but limitations still exist. For details, see Section 23.5.3, “Updatable and Insertable Views”.
There exists a shortcoming with the current implementation of views. If a user is granted the basic privileges necessary to create a view (the CREATE VIEW and SELECT privileges), that user is unable to call SHOW CREATE VIEW on that object unless the user is also granted the SHOW VIEW privilege.
That shortcoming can lead to problems backing up a database with mysqldump, which may fail due to insufficient privileges. This problem is described in Bug #22062.
The workaround to the problem is for the administrator to manually grant the SHOW VIEW privilege to users who are granted CREATE VIEW, since MySQL does not grant it implicitly when views are created.
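For example, a sketch of such a grant (the database and account names are hypothetical):

GRANT SHOW VIEW ON mydb.* TO 'view_creator'@'localhost';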
Views do not have indexes, so index hints do not apply. Use of index hints when selecting from a view is not permitted.
SHOW CREATE VIEW displays view definitions using an AS alias_name clause for each column. If a column is created from an expression, the default alias is the expression text, which can be quite long. Aliases for column names in CREATE VIEW statements are checked against the maximum column length of 64 characters (not the maximum alias length of 256 characters). As a result, views created from the output of SHOW CREATE VIEW fail if any column alias exceeds 64 characters. This can cause problems in the following circumstances for views with too-long aliases:
View definitions fail to replicate to newer slaves that enforce the column-length restriction.
Dump files created with mysqldump cannot be loaded into servers that enforce the column-length restriction.
A workaround for either problem is to modify each problematic view definition to use aliases that provide shorter column names. Then the view replicates properly, and can be dumped and reloaded without causing an error. To modify the definition, drop and create the view again with DROP VIEW and CREATE VIEW, or replace the definition with CREATE OR REPLACE VIEW.
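For example, a sketch (hypothetical view, table, and column names) that replaces a long expression alias with a short column name:

CREATE OR REPLACE VIEW v AS
  SELECT CONCAT(first_name, ' ', last_name) AS full_name FROM people;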
For problems that occur when reloading view definitions in dump files, another workaround is to edit the dump file to modify its CREATE VIEW statements. However, this does not change the original view definitions, which may cause problems for subsequent dump operations.
XA transaction support is limited to the InnoDB storage engine.
For “external XA,” a MySQL server acts as a Resource
Manager and client programs act as Transaction Managers. For
“Internal XA”, storage engines within a MySQL server
act as RMs, and the server itself acts as a TM. Internal XA
support is limited by the capabilities of individual storage
engines. Internal XA is required for handling XA transactions that
involve more than one storage engine. The implementation of
internal XA requires that a storage engine support two-phase
commit at the table handler level, and currently this is true only
for InnoDB.
For XA START, the JOIN and RESUME clauses are not supported.
For XA END, the SUSPEND [FOR MIGRATE] clause is not supported.
The requirement that the bqual part of the xid value be different for each XA transaction within a global transaction is a limitation of the current MySQL XA implementation. It is not part of the XA specification.
An XA transaction is written to the binary log in two parts. When XA PREPARE is issued, the first part of the transaction up to XA PREPARE is written using an initial GTID. An XA_prepare_log_event is used to identify such transactions in the binary log. When XA COMMIT or XA ROLLBACK is issued, a second part of the transaction containing only the XA COMMIT or XA ROLLBACK statement is written using a second GTID. Note that the initial part of the transaction, identified by XA_prepare_log_event, is not necessarily followed by its XA COMMIT or XA ROLLBACK, which can cause interleaved binary logging of any two XA transactions. The two parts of the XA transaction can even appear in different binary log files. This means that an XA transaction in PREPARED state is now persistent until an explicit XA COMMIT or XA ROLLBACK statement is issued, ensuring that XA transactions are compatible with replication.
On a replication slave, immediately after the XA transaction is prepared, it is detached from the slave applier thread, and can be committed or rolled back by any thread on the slave. This means that the same XA transaction can appear in the events_transactions_current table with different states on different threads. The events_transactions_current table displays the current status of the most recent monitored transaction event on the thread, and does not update this status when the thread is idle. So the XA transaction can still be displayed in the PREPARED state for the original applier thread, after it has been processed by another thread. To positively identify XA transactions that are still in the PREPARED state and need to be recovered, use the XA RECOVER statement rather than the Performance Schema transaction tables.
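For example (a minimal sketch), the following statements list XA transactions in PREPARED state; the CONVERT XID clause prints the xid data in hexadecimal, which helps when it contains nonprintable characters:

XA RECOVER;
XA RECOVER CONVERT XID;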
The following restrictions exist for using XA transactions:
XA transactions are not fully resilient to an unexpected halt with respect to the binary log. If there is an unexpected halt while the server is in the middle of executing an XA PREPARE, XA COMMIT, XA ROLLBACK, or XA COMMIT ... ONE PHASE statement, the server might not be able to recover to a correct state, leaving the server and the binary log in an inconsistent state. In this situation, the binary log might either contain extra XA transactions that are not applied, or miss XA transactions that are applied. Also, if GTIDs are enabled, after recovery @@GLOBAL.GTID_EXECUTED might not correctly describe the transactions that have been applied. Note that if an unexpected halt occurs before XA PREPARE, between XA PREPARE and XA COMMIT (or XA ROLLBACK), or after XA COMMIT (or XA ROLLBACK), the server and binary log are correctly recovered and taken to a consistent state.
The use of replication filters or binary log filters in combination with XA transactions is not supported. Filtering of tables could cause an XA transaction to be empty on a replication slave, and empty XA transactions are not supported. Also, with the settings master_info_repository=TABLE and relay_log_info_repository=TABLE on a replication slave, which became the defaults in MySQL 8.0, the internal state of the data engine transaction is changed following a filtered XA transaction, and can become inconsistent with the replication transaction context state. The error ER_XA_REPLICATION_FILTERS is logged whenever an XA transaction is impacted by a replication filter, whether or not the transaction was empty as a result. The replication slave is able to continue running in this situation, but as the error message warns, it might be in an undefined state.
FLUSH TABLES WITH READ LOCK is not compatible with XA transactions.
XA transactions are considered unsafe for statement-based replication. If two XA transactions committed in parallel on the master are being prepared on the slave in the inverse order, locking dependencies can occur that cannot be safely resolved, and it is possible for replication to fail with deadlock on the slave. This situation can occur for a single-threaded or multithreaded replication slave. When binlog_format=STATEMENT is set, a warning is issued for DML statements inside XA transactions. When binlog_format=MIXED or binlog_format=ROW is set, DML statements inside XA transactions are logged using row-based replication, and the potential issue is not present.
Prior to MySQL 5.7.7, XA transactions were not compatible with replication at all. This was because an XA transaction that was in PREPARED state would be rolled back on clean server shutdown or client disconnect. Similarly, an XA transaction that was in PREPARED state would still exist in PREPARED state if the server was shut down abnormally and then started again, but the contents of the transaction could not be written to the binary log. In both of these situations the XA transaction could not be replicated correctly.
Identifiers are stored in mysql database tables (user, db, and so forth) using utf8, but identifiers can contain only characters in the Basic Multilingual Plane (BMP). Supplementary characters are not permitted in identifiers.
The ucs2, utf16, utf16le, and utf32 character sets have the following restrictions:
They cannot be used as a client character set, which means that they do not work for SET NAMES or SET CHARACTER SET. (See Section 10.4, “Connection Character Sets and Collations”.)
It is currently not possible to use LOAD DATA INFILE to load data files that use these character sets.
FULLTEXT indexes cannot be created on a column that uses any of these character sets. However, you can perform IN BOOLEAN MODE searches on the column without an index.
The REGEXP and RLIKE operators work in byte-wise fashion, so they are not multibyte safe and may produce unexpected results with multibyte character sets. In addition, these operators compare characters by their byte values, and accented characters may not compare as equal even if a given collation treats them as equal.
The Performance Schema avoids using mutexes to collect or produce data, so there are no guarantees of consistency and results can sometimes be incorrect. Event values in performance_schema tables are nondeterministic and nonrepeatable.
If you save event information in another table, you should not assume that the original events will still be available later. For example, if you select events from a performance_schema table into a temporary table, intending to join that table with the original table later, there might be no matches.
mysqldump and BACKUP DATABASE ignore tables in the performance_schema database.
Tables in the performance_schema database cannot be locked with LOCK TABLES, except the setup_xxx tables.
Tables in the performance_schema database cannot be indexed.
Tables in the performance_schema database are not replicated.
The types of timers might vary per platform. The performance_timers table shows which event timers are available. If the values in this table for a given timer name are NULL, that timer is not supported on your platform.
Instruments that apply to storage engines might not be implemented for all storage engines. Instrumentation of each third-party engine is the responsibility of the engine maintainer.
The first part of this section describes general restrictions on the applicability of the pluggable authentication framework described at Section 6.3.10, “Pluggable Authentication”. The second part describes how third-party connector developers can determine the extent to which a connector can take advantage of pluggable authentication capabilities and what steps to take to become more compliant.
The term “native authentication” used here refers to authentication against passwords stored in the mysql.user table. This is the same authentication method provided by older MySQL servers, before pluggable authentication was implemented. “Windows native authentication” refers to authentication using the credentials of a user who has already logged in to Windows, as implemented by the Windows Native Authentication plugin (“Windows plugin” for short).
Connector/C, Connector/C++: Clients that use these connectors can connect to the server only through accounts that use native authentication.
Exception: A connector supports pluggable authentication if it was built to link to libmysqlclient dynamically (rather than statically) and it loads the current version of libmysqlclient if that version is installed, or if the connector is recompiled from source to link against the current libmysqlclient.
For information about writing connectors to handle information from the server about the default server-side authentication plugin, see Authentication Plugin Connector-Writing Considerations.
Connector/Net: Clients that use Connector/Net can connect to the server through accounts that use native authentication or Windows native authentication.
Connector/PHP: Clients that use this connector can connect to the server only through accounts that use native authentication, when compiled using the MySQL native driver for PHP (mysqlnd).
Windows native authentication: Connecting through an account that uses the Windows plugin requires Windows Domain setup. Without it, NTLM authentication is used and then only local connections are possible; that is, the client and server must run on the same computer.
Proxy users: Proxy user support is available to the extent that clients can connect through accounts authenticated with plugins that implement proxy user capability (that is, plugins that can return a user name different from that of the connecting user). For example, the PAM and Windows plugins support proxy users. The mysql_native_password and sha256_password authentication plugins do not support proxy users by default, but can be configured to do so; see Server Support for Proxy User Mapping.
Replication: Replication slaves can employ not only master accounts using native authentication, but can also connect through master accounts that use nonnative authentication if the required client-side plugin is available. If the plugin is built into libmysqlclient, it is available by default. Otherwise, the plugin must be installed on the slave side in the directory named by the slave plugin_dir system variable.
FEDERATED tables: A FEDERATED table can access the remote table only through accounts on the remote server that use native authentication.
Third-party connector developers can use the following guidelines to determine readiness of a connector to take advantage of pluggable authentication capabilities and what steps to take to become more compliant:
An existing connector to which no changes have been made uses native authentication and clients that use the connector can connect to the server only through accounts that use native authentication. However, you should test the connector against a recent version of the server to verify that such connections still work without problem.
Exception: A connector might work with pluggable authentication without any changes if it links to libmysqlclient dynamically (rather than statically) and it loads the current version of libmysqlclient if that version is installed.
To take advantage of pluggable authentication capabilities, a connector that is libmysqlclient-based should be relinked against the current version of libmysqlclient. This enables the connector to support connections through accounts that require client-side plugins now built into libmysqlclient (such as the cleartext plugin needed for PAM authentication and the Windows plugin needed for Windows native authentication). Linking with a current libmysqlclient also enables the connector to access client-side plugins installed in the default MySQL plugin directory (typically the directory named by the default value of the local server's plugin_dir system variable).
If a connector links to libmysqlclient dynamically, it must be ensured that the newer version of libmysqlclient is installed on the client host and that the connector loads it at runtime.
Another way for a connector to support a given authentication method is to implement it directly in the client/server protocol. Connector/Net uses this approach to provide support for Windows native authentication.
If a connector should be able to load client-side plugins from a directory different from the default plugin directory, it must implement some means for client users to specify the directory. Possibilities for this include a command-line option or environment variable from which the connector can obtain the directory name. Standard MySQL client programs such as mysql and mysqladmin implement a --plugin-dir option. See also Section 27.7.13, “C API Client Plugin Functions”.
Proxy user support by a connector depends, as described earlier in this section, on whether the authentication methods that it supports permit proxy users.
This section lists current limits in MySQL 8.0.
The maximum number of tables that can be referenced in a single join is 61. This includes a join handled by merging derived tables and views in the FROM clause into the outer query block (see Section 8.2.2.3, “Optimizing Derived Tables, View References, and Common Table Expressions”). It also applies to the number of tables that can be referenced in the definition of a view.
MySQL has no limit on the number of databases. The underlying file system may have a limit on the number of directories.
MySQL has no limit on the number of tables. The underlying file system may have a limit on the number of files that represent tables. Individual storage engines may impose engine-specific constraints. InnoDB permits up to 4 billion tables.
The effective maximum table size for MySQL databases is usually determined by operating system constraints on file sizes, not by MySQL internal limits. For up-to-date information about operating system file size limits, refer to the documentation specific to your operating system.
Windows users, please note that FAT and VFAT (FAT32) are not considered suitable for production use with MySQL. Use NTFS instead.
If you encounter a full-table error, there are several reasons why it might have occurred:
The disk might be full.
You are using InnoDB tables and have run out of room in an InnoDB tablespace file. The maximum tablespace size is also the maximum size for a table. For tablespace size limits, see Section 15.8.1.7, “Limits on InnoDB Tables”.
Generally, partitioning of tables into multiple tablespace files is recommended for tables larger than 1TB in size.
You have hit an operating system file size limit. For example, you are using MyISAM tables on an operating system that supports files only up to 2GB in size and you have hit this limit for the data file or index file.
You are using a MyISAM table and the space required for the table exceeds what is permitted by the internal pointer size. MyISAM permits data and index files to grow up to 256TB by default, but this limit can be changed up to the maximum permissible size of 65,536TB (256^7 − 1 bytes).
If you need a MyISAM table that is larger than the default limit and your operating system supports large files, the CREATE TABLE statement supports AVG_ROW_LENGTH and MAX_ROWS options. See Section 13.1.18, “CREATE TABLE Syntax”. The server uses these options to determine how large a table to permit.
If the pointer size is too small for an existing table, you can change the options with ALTER TABLE to increase a table's maximum permissible size. See Section 13.1.8, “ALTER TABLE Syntax”.
ALTER TABLE tbl_name MAX_ROWS=1000000000 AVG_ROW_LENGTH=nnn;
You have to specify AVG_ROW_LENGTH only for tables with BLOB or TEXT columns; in this case, MySQL cannot optimize the space required based only on the number of rows.
To change the default size limit for MyISAM tables, set the myisam_data_pointer_size system variable, which sets the number of bytes used for internal row pointers. The value is used to set the pointer size for new tables if you do not specify the MAX_ROWS option. The value of myisam_data_pointer_size can be from 2 to 7. A value of 4 permits tables up to 4GB; a value of 6 permits tables up to 256TB.
You can check the maximum data and index sizes by using this statement:
SHOW TABLE STATUS FROM db_name LIKE 'tbl_name';
You also can use myisamchk -dv /path/to/table-index-file. See Section 13.7.6, “SHOW Syntax”, or Section 4.6.4, “myisamchk — MyISAM Table-Maintenance Utility”.
Other ways to work around file-size limits for MyISAM tables are as follows:
If your large table is read only, you can use myisampack to compress it. myisampack usually compresses a table by at least 50%, so you can have, in effect, much bigger tables. myisampack also can merge multiple tables into a single table. See Section 4.6.6, “myisampack — Generate Compressed, Read-Only MyISAM Tables”.
MySQL includes a MERGE library that enables you to handle a collection of MyISAM tables that have identical structure as a single MERGE table. See Section 16.7, “The MERGE Storage Engine”.
You are using the MEMORY (HEAP) storage engine; in this case you need to increase the value of the max_heap_table_size system variable. See Section 5.1.7, “Server System Variables”.
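For example, a sketch (the table name is hypothetical; the new limit applies only to tables created or rebuilt after the change):

SET SESSION max_heap_table_size = 512 * 1024 * 1024;
-- Rebuild the MEMORY table so the new maximum takes effect
ALTER TABLE lookup_cache ENGINE = MEMORY;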
MySQL has a hard limit of 4096 columns per table, but the effective maximum may be less for a given table. The exact column limit depends on several factors:
The maximum row size for a table constrains the number (and possibly size) of columns because the total length of all columns cannot exceed this size. See Row Size Limits.
The storage requirements of individual columns constrain the number of columns that fit within a given maximum row size. Storage requirements for some data types depend on factors such as storage engine, storage format, and character set. See Section 11.8, “Data Type Storage Requirements”.
Storage engines may impose additional restrictions that limit table column count. For example, InnoDB has a limit of 1017 columns per table. See Section 15.8.1.7, “Limits on InnoDB Tables”. For information about other storage engines, see Chapter 16, Alternative Storage Engines.
The maximum row size for a given table is determined by several factors:
The internal representation of a MySQL table has a maximum
row size limit of 65,535 bytes, even if the storage engine
is capable of supporting larger rows.
BLOB and TEXT columns only contribute 9 to 12 bytes toward the row size limit because their contents are stored separately from the rest of the row.
The maximum row size for an InnoDB table, which applies to data stored locally within a database page, is slightly less than half a page for 4KB, 8KB, 16KB, and 32KB innodb_page_size settings. For example, the maximum row size is slightly less than 8KB for the default 16KB InnoDB page size. For 64KB pages, the maximum row size is slightly less than 16KB. See Section 15.8.1.7, “Limits on InnoDB Tables”.
If a row containing variable-length columns exceeds the InnoDB maximum row size, InnoDB selects variable-length columns for external off-page storage until the row fits within the InnoDB row size limit. The amount of data stored locally for variable-length columns that are stored off-page differs by row format. For more information, see Section 15.10, “InnoDB Row Storage and Row Formats”.
Different storage formats use different amounts of page header and trailer data, which affects the amount of storage available for rows.
For information about InnoDB row formats, see Section 15.10, “InnoDB Row Storage and Row Formats”, and Section 15.8.1.2, “The Physical Row Structure of an InnoDB Table”. For information about MyISAM storage formats, see Section 16.2.3, “MyISAM Table Storage Formats”.
The MySQL maximum row size limit of 65,535 bytes is demonstrated in the following InnoDB and MyISAM examples. The limit is enforced regardless of storage engine, even though the storage engine may be capable of supporting larger rows.
mysql> CREATE TABLE t (a VARCHAR(10000), b VARCHAR(10000),
c VARCHAR(10000), d VARCHAR(10000), e VARCHAR(10000),
f VARCHAR(10000), g VARCHAR(6000)) ENGINE=InnoDB CHARACTER SET latin1;
ERROR 1118 (42000): Row size too large. The maximum row size for the used table type, not counting BLOBs, is 65535. This includes storage overhead, check the manual. You have to change some columns to TEXT or BLOBs
mysql> CREATE TABLE t (a VARCHAR(10000), b VARCHAR(10000),
c VARCHAR(10000), d VARCHAR(10000), e VARCHAR(10000),
f VARCHAR(10000), g VARCHAR(6000)) ENGINE=MyISAM CHARACTER SET latin1;
ERROR 1118 (42000): Row size too large. The maximum row size for the used table type, not counting BLOBs, is 65535. This includes storage overhead, check the manual. You have to change some columns to TEXT or BLOBs
In the following MyISAM example, changing a column to TEXT avoids the 65,535-byte row size limit and permits the operation to succeed because BLOB and TEXT columns only contribute 9 to 12 bytes toward the row size.
mysql> CREATE TABLE t (a VARCHAR(10000), b VARCHAR(10000),
c VARCHAR(10000), d VARCHAR(10000), e VARCHAR(10000),
f VARCHAR(10000), g TEXT(6000)) ENGINE=MyISAM CHARACTER SET latin1;
Query OK, 0 rows affected (0.02 sec)
The operation succeeds for an InnoDB table because changing a column to TEXT avoids the MySQL 65,535-byte row size limit, and InnoDB off-page storage of variable-length columns avoids the InnoDB row size limit.
mysql> CREATE TABLE t (a VARCHAR(10000), b VARCHAR(10000),
c VARCHAR(10000), d VARCHAR(10000), e VARCHAR(10000),
f VARCHAR(10000), g TEXT(6000)) ENGINE=InnoDB CHARACTER SET latin1;
Query OK, 0 rows affected (0.02 sec)
Storage for variable-length columns includes length bytes, which are counted toward the row size. For example, a VARCHAR(255) CHARACTER SET utf8mb3 column takes two bytes to store the length of the value, so each value can take up to 767 bytes.
The statement to create table t1 succeeds because the columns require 32,765 + 2 bytes and 32,766 + 2 bytes, which falls within the maximum row size of 65,535 bytes:
mysql> CREATE TABLE t1
(c1 VARCHAR(32765) NOT NULL, c2 VARCHAR(32766) NOT NULL)
ENGINE = InnoDB CHARACTER SET latin1;
Query OK, 0 rows affected (0.02 sec)
The statement to create table t2 fails because, although the column length is within the maximum length of 65,535 bytes, two additional bytes are required to record the length, which causes the row size to exceed 65,535 bytes:
mysql> CREATE TABLE t2
(c1 VARCHAR(65535) NOT NULL)
ENGINE = InnoDB CHARACTER SET latin1;
ERROR 1118 (42000): Row size too large. The maximum row size for the used table type, not counting BLOBs, is 65535. This includes storage overhead, check the manual. You have to change some columns to TEXT or BLOBs
Reducing the column length to 65,533 or less permits the statement to succeed.
mysql> CREATE TABLE t2
(c1 VARCHAR(65533) NOT NULL)
ENGINE = InnoDB CHARACTER SET latin1;
Query OK, 0 rows affected (0.01 sec)
For MyISAM tables, NULL columns require additional space in the row to record whether their values are NULL. Each NULL column takes one bit extra, rounded up to the nearest byte.
The statement to create table t3 fails because MyISAM requires space for NULL columns in addition to the space required for variable-length column length bytes, causing the row size to exceed 65,535 bytes:
mysql> CREATE TABLE t3
(c1 VARCHAR(32765) NULL, c2 VARCHAR(32766) NULL)
ENGINE = MyISAM CHARACTER SET latin1;
ERROR 1118 (42000): Row size too large. The maximum row size for the used table type, not counting BLOBs, is 65535. This includes storage overhead, check the manual. You have to change some columns to TEXT or BLOBs
For information about InnoDB NULL column storage, see Section 15.8.1.2, “The Physical Row Structure of an InnoDB Table”.
InnoDB restricts row size (for data stored locally within the database page) to slightly less than half a database page for 4KB, 8KB, 16KB, and 32KB innodb_page_size settings, and to slightly less than 16KB for 64KB pages.
The statement to create table t4 fails because the defined columns exceed the row size limit for a 16KB InnoDB page.
mysql> CREATE TABLE t4 (
c1 CHAR(255),c2 CHAR(255),c3 CHAR(255),
c4 CHAR(255),c5 CHAR(255),c6 CHAR(255),
c7 CHAR(255),c8 CHAR(255),c9 CHAR(255),
c10 CHAR(255),c11 CHAR(255),c12 CHAR(255),
c13 CHAR(255),c14 CHAR(255),c15 CHAR(255),
c16 CHAR(255),c17 CHAR(255),c18 CHAR(255),
c19 CHAR(255),c20 CHAR(255),c21 CHAR(255),
c22 CHAR(255),c23 CHAR(255),c24 CHAR(255),
c25 CHAR(255),c26 CHAR(255),c27 CHAR(255),
c28 CHAR(255),c29 CHAR(255),c30 CHAR(255),
c31 CHAR(255),c32 CHAR(255),c33 CHAR(255)
) ENGINE=InnoDB ROW_FORMAT=DYNAMIC DEFAULT CHARSET latin1;
ERROR 1118 (42000): Row size too large (> 8126). Changing some columns to TEXT or BLOB may help. In current row format, BLOB prefix of 0 bytes is stored inline.
The following limitations apply to use of MySQL on the Windows platform:
Process memory
On Windows 32-bit platforms, it is not possible by default to use more than 2GB of RAM within a single process, including MySQL. This is because the physical address limit on Windows 32-bit is 4GB and the default setting within Windows is to split the virtual address space between kernel (2GB) and user/applications (2GB).
Some versions of Windows have a boot time setting that enables larger applications by reducing the portion of the address space reserved for the kernel. Alternatively, to use more than 2GB, use a 64-bit version of Windows.
File system aliases
When using MyISAM tables, you cannot use aliases within Windows to link to the data files on another volume and then link back to the main MySQL datadir location.
This facility is often used to move the data and index files to a RAID or other fast solution.
Limited number of ports
Windows systems have about 4,000 ports available for client connections, and after a connection on a port closes, it takes two to four minutes before the port can be reused. In situations where clients connect to and disconnect from the server at a high rate, it is possible for all available ports to be used up before closed ports become available again. If this happens, the MySQL server appears to be unresponsive even though it is running. Ports may be used by other applications running on the machine as well, in which case the number of ports available to MySQL is lower.
For more information about this problem, see http://support.microsoft.com/default.aspx?scid=kb;en-us;196271.
DATA DIRECTORY and INDEX DIRECTORY
The DATA DIRECTORY option for CREATE TABLE is supported on Windows only for InnoDB tables, as described in Section 15.7.5, “Creating File-Per-Table Tablespaces Outside the Data Directory”. For MyISAM and other storage engines, the DATA DIRECTORY and INDEX DIRECTORY options for CREATE TABLE are ignored on Windows and any other platforms with a nonfunctional realpath() call.
You cannot drop a database that is in use by another session.
Case-insensitive names
File names are not case-sensitive on Windows, so MySQL database and table names are also not case-sensitive on Windows. The only restriction is that database and table names must be specified using the same case throughout a given statement. See Section 9.2.2, “Identifier Case Sensitivity”.
Directory and file names
On Windows, MySQL Server supports only directory and file names that are compatible with the current ANSI code pages. For example, the following Japanese directory name will not work in the Western locale (code page 1252):
datadir="C:/私たちのプロジェクトのデータ"
The same limitation applies to directory and file names referred to in SQL statements, such as the data file path name in LOAD DATA INFILE.
The \ path name separator character
Path name components in Windows are separated by the \ character, which is also the escape character in MySQL. If you are using LOAD DATA INFILE or SELECT ... INTO OUTFILE, use Unix-style file names with / characters:
mysql> LOAD DATA INFILE 'C:/tmp/skr.txt' INTO TABLE skr;
mysql> SELECT * INTO OUTFILE 'C:/tmp/skr.txt' FROM skr;
Alternatively, you must double the \ character:
mysql> LOAD DATA INFILE 'C:\\tmp\\skr.txt' INTO TABLE skr;
mysql> SELECT * INTO OUTFILE 'C:\\tmp\\skr.txt' FROM skr;
Problems with pipes
Pipes do not work reliably from the Windows command-line prompt. If the pipe includes the character ^Z / CHAR(24), Windows thinks that it has encountered end-of-file and aborts the program.
This is mainly a problem when you try to apply a binary log as follows:
C:\> mysqlbinlog binary_log_file | mysql --user=root
If you have a problem applying the log and suspect that it is because of a ^Z / CHAR(24) character, you can use the following workaround:
C:\> mysqlbinlog binary_log_file --result-file=/tmp/bin.sql
C:\> mysql --user=root --execute "source /tmp/bin.sql"
The latter command also can be used to reliably read in any SQL file that may contain binary data.