Friday, 27 March 2015
 
 

Newsfeeds
Planet MySQL
Planet MySQL - http://www.planetmysql.org/

  • Translated books in Chinese
    I have just received copies of my Effective MySQL series that are available in Chinese. The picture speaks more than words.

  • SQL, ANSI Standards, PostgreSQL and MySQL
    I have recently been working with the Donors Choose Open Data Set, which happens to be in PostgreSQL. It is easy enough to install and load the data in PostgreSQL; however, as I live and breathe MySQL, let's load the data into MySQL. And here is where our discussion starts, first with some history.

    SQL History
    SQL (Structured Query Language) is a well-known common language for communicating with relational databases (RDBMS). It is not the only language, I might add, having both used QUEL many years ago and just mentioned it at a Looker Look and Tell event in New York. SQL has also been around since the 1970s, making it, along with C, one of the oldest programming languages in general use today. SQL became an ANSI standard in 1986, and an ISO standard in 1987. The purpose of a standard is to provide commonality when communicating or exchanging information; in our case, a programming language communicating with an RDBMS. There have been several iterations of the standard as functionality and syntax improve. These are commonly referred to as SQL-86, SQL-89, SQL-92, SQL:1999, SQL:2003, SQL:2006, SQL:2008 and SQL:2011. And so, with SQL being a standard, what we can do in PostgreSQL should translate to what we can do in MySQL.

    SQL Communication
    Both products provide a command-line interface (CLI) client tool for SQL communication: mysql for MySQL and psql for PostgreSQL. No surprises there. Both use the semicolon (;) as the default SQL statement terminator, and both CLI tools use \q as a means to quit and exit the tool. Certainly not a standard, but great for syntax compatibility.

    DDL Specification
    Our journey begins with defining tables.

    DROP TABLE
    Both products' SQL syntax supports DROP TABLE. In fact, both support the DROP TABLE [IF EXISTS] syntax.

    DROP TABLE donorschoose_projects;
    DROP TABLE IF EXISTS donorschoose_projects;

    CREATE TABLE
    Both support CREATE TABLE. Both support defining columns in the typical <column_name> <datatype> format, and both support the NOT NULL attribute.
    Talking about specific datatypes for columns is a topic on its own, so I discuss this later. The PostgreSQL syntax includes a table option, WITHOUT OIDS, which is not valid in MySQL. It is also obsolescent syntax in PostgreSQL 9.3. From the PostgreSQL manual: "This optional clause specifies whether rows of the new table should have OIDs (object identifiers) assigned to them. The default is to have OIDs. Specifying WITHOUT OIDS allows the user to suppress generation of OIDs for rows of a table. This may be worthwhile for large tables … Specifying WITHOUT OIDS also reduces the space required to store the table on disk by 4 bytes per row of the table, thereby improving performance." As this example is just for testing, dropping the WITHOUT OIDS syntax creates a mutually compatible syntax.

    Comments
    Both MySQL and PostgreSQL support -- as an inline comment in an SQL statement. No need to strip those out.

    ALTER TABLE
    Both support the ALTER TABLE ADD CONSTRAINT syntax, which in our example is used to define the PRIMARY KEY. However, while the syntax remains the same, the choice of datatype affects the outcome. The following works in both products when the datatype is CHARACTER(32). More about CHARACTER() later.

    ALTER TABLE donorschoose_projects ADD CONSTRAINT pk_donorschoose_projects PRIMARY KEY(_projectid);

    In our example dataset, the primary key is defined with a TEXT datatype, and in MySQL this fails.

    ERROR 1170 (42000): BLOB/TEXT column '_projectid' used in key specification without a key length

    Further analysis shows that the primary-key data in this dataset is indeed a 32-byte hexadecimal value, so the column is changed to CHARACTER(32) to be compatible for this data-loading need. This, however, is an important key difference in any migration process with other data sets.

    Side Note
    Both products support defining the PRIMARY KEY in the CREATE TABLE syntax in two different ways.
    CREATE TABLE demo_pk1 (id character(32) NOT NULL PRIMARY KEY);
    CREATE TABLE demo_pk2 (id character(32) NOT NULL, PRIMARY KEY(id));

    CREATE INDEX
    Both use CREATE INDEX syntax; however, with our sample dataset, this is the first observed difference in syntax with the provided sample SQL statements.

    PostgreSQL
    CREATE INDEX projects_schoolid ON projects USING btree (_schoolid);

    MySQL
    The USING <type> qualifier must appear after the index name and before ON <table>.
    CREATE INDEX projects_schoolid USING btree ON projects (_schoolid);

    In both products USING btree is optional syntax, so removing it (for minimum-compatibility purposes) provides consistency.

    Data Types
    The following data types are defined in the PostgreSQL example data set. Each is discussed to identify a best fit in MySQL. For reference: the PostgreSQL 9.3 documentation for data types and the MySQL 5.6 documentation on data types.

    character
    This data type is for a fixed-width character field and requires a length attribute. MySQL supports the CHARACTER(n) syntax for compatibility; however, CHAR(n) is generally the preferred syntax. Indeed, PostgreSQL also supports CHAR(n). The following, showing both variants, is valid in both products.

    CREATE TABLE demo_character(c1 CHARACTER(1), c2 CHAR(1));

    varchar/character varying
    While this dataset does not use these datatypes, they are critical in the general discussion of character (aka string) types. This refers to a variable-length string. While character varying is not valid MySQL syntax, varchar is compatible with both products.

    CREATE TABLE demo_varchar(vc1 VARCHAR(10));

    text
    In PostgreSQL, text is used for values of undefined length. The maximum length of a field is 1GB as stated in the FAQ. In MySQL, however, TEXT only stores 2^16 characters (64K). The use of LONGTEXT is needed to support the full length capacity of PostgreSQL. This stores 2^32 characters (~4GB).
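    To make that text mapping concrete, a minimal sketch (demo_text and its column are illustrative names, not part of the dataset):

    ```sql
    -- PostgreSQL original: text holds up to ~1GB per value
    -- CREATE TABLE demo_text (t1 text);

    -- MySQL translation preserving that capacity: TEXT tops out
    -- at 2^16 bytes, so LONGTEXT (2^32) is the safe equivalent
    CREATE TABLE demo_text (t1 LONGTEXT);
    ```

    Whether LONGTEXT is actually needed, or a bounded VARCHAR would do, depends on the real data in each column.
    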
    Of all the complexity of this example dataset, the general use of text will be the most difficult to modify to a more applicable VARCHAR or TEXT datatype when optimizing in MySQL.

    integer
    PostgreSQL uses the integer datatype for a signed 4-byte integer value. MySQL supports the same syntax, however it generally prefers the shorter INT syntax. Overall, both products support both.

    mysql> CREATE TABLE demo_integer(i1 INTEGER, i2 INT);
    Query OK, 0 rows affected (0.11 sec)
    mysql> INSERT INTO demo_integer VALUES (1,-1);
    Query OK, 1 row affected (0.05 sec)
    mysql> SELECT * FROM demo_integer;
    +------+------+
    | i1   | i2   |
    +------+------+
    |    1 |   -1 |
    +------+------+
    1 row in set (0.00 sec)

    demo=# CREATE TABLE demo_integer(i1 INTEGER, i2 INT);
    CREATE TABLE
    demo=# INSERT INTO demo_integer VALUES (1,-1);
    INSERT 0 1
    demo=# SELECT * FROM demo_integer;
     i1 | i2
    ----+----
      1 | -1
    (1 row)

    And just to note the boundary of this data type:

    mysql> TRUNCATE TABLE demo_integer;
    Query OK, 0 rows affected (0.04 sec)
    mysql> INSERT INTO demo_integer VALUES (2147483647, -2147483648);
    Query OK, 1 row affected (0.04 sec)
    mysql> SELECT * FROM demo_integer;
    +------------+-------------+
    | i1         | i2          |
    +------------+-------------+
    | 2147483647 | -2147483648 |
    +------------+-------------+
    1 row in set (0.00 sec)

    demo=# TRUNCATE TABLE demo_integer;
    TRUNCATE TABLE
    demo=# INSERT INTO demo_integer VALUES (2147483647, -2147483648);
    INSERT 0 1
    demo=# SELECT * FROM demo_integer;
         i1     |      i2
    ------------+-------------
     2147483647 | -2147483648
    (1 row)

    The difference is in out-of-bounds value management, and here the MySQL defaults suck. You can read my views at DP#4 The importance of using sql_mode.
    demo=# TRUNCATE TABLE demo_integer;
    TRUNCATE TABLE
    demo=# INSERT INTO demo_integer VALUES (2147483647 + 1, -2147483648 - 1);
    ERROR:  integer out of range
    demo=# SELECT * FROM demo_integer;
     i1 | i2
    ----+----
    (0 rows)

    mysql> TRUNCATE TABLE demo_integer;
    Query OK, 0 rows affected (0.04 sec)
    mysql> INSERT INTO demo_integer VALUES (2147483647 + 1, -2147483648 - 1);
    Query OK, 1 row affected, 2 warnings (0.07 sec)
    mysql> SELECT * from demo_integer;
    +------------+-------------+
    | i1         | i2          |
    +------------+-------------+
    | 2147483647 | -2147483648 |
    +------------+-------------+
    1 row in set (0.00 sec)

    While not in this dataset, both support the bigint data type. While the PostgreSQL docs indicate bigint is 8 bytes, testing with PostgreSQL 9.3 failed, presumably because the literal arithmetic (2147483647 + 1) is evaluated as 4-byte integer math before the result ever reaches the bigint column. Something to investigate more later.

    demo=# CREATE TABLE demo_bigint(i1 BIGINT);
    CREATE TABLE
    demo=# INSERT INTO demo_bigint VALUES (2147483647 + 1), (-2147483648 - 1);
    ERROR:  integer out of range

    mysql> CREATE TABLE demo_bigint(i1 BIGINT);
    Query OK, 0 rows affected (0.12 sec)
    mysql> INSERT INTO demo_bigint VALUES (2147483647 + 1), (-2147483648 - 1);
    Query OK, 2 rows affected (0.04 sec)
    Records: 2  Duplicates: 0  Warnings: 0
    mysql> SELECT * from demo_bigint;
    +-------------+
    | i1          |
    +-------------+
    |  2147483648 |
    | -2147483649 |
    +-------------+
    2 rows in set (0.01 sec)

    And for reference, both products support smallint, a 2-byte integer. Each product has additional integer data types.

    numeric
    For a fixed-precision number, PostgreSQL uses numeric but supports decimal. It would not be surprising to learn that MySQL uses DECIMAL and, for compatibility, supports NUMERIC. This leads to a side-bar discussion on knowing your data types for your product. In a recent interview for a MySQL engineer, a candidate (with SQL Server experience) provided a code example defining the NUMERIC datatype. I knew it was technically valid MySQL syntax, but had never actually seen it in use.
    When I asked the candidate what syntax is commonly used for a fixed-precision datatype, they were unable to answer.

    real/double precision
    This dataset does not include these data types; however, for reference, PostgreSQL uses real for 4 bytes and double precision for 8 bytes. MySQL uses float for 4 bytes and double for 8 bytes. MySQL supports both PostgreSQL syntax options, while PostgreSQL supports float but not double.

    demo=# CREATE TABLE demo_floatingpoint(f1 FLOAT, f2 REAL, d1 DOUBLE, d2 DOUBLE PRECISION);
    ERROR:  type "double" does not exist
    LINE 1: ...TE TABLE demo_floatingpoint(f1 FLOAT, f2 REAL, d1 DOUBLE, d2...
    demo=# CREATE TABLE demo_floatingpoint(f1 FLOAT, f2 REAL, d2 DOUBLE PRECISION);
    CREATE TABLE

    mysql> CREATE TABLE demo_floatingpoint(f1 FLOAT, f2 REAL, d1 DOUBLE, d2 DOUBLE PRECISION);
    Query OK, 0 rows affected (0.07 sec)

    date
    Both PostgreSQL and MySQL use the date data type.

    timestamp
    Both PostgreSQL and MySQL use the timestamp data type to store date/time values; however, there is a difference in both precision and implementation. In PostgreSQL, timestamp supports a date before the epoch, while in MySQL it does not; MySQL uses the DATETIME datatype for that. PostgreSQL timestamp and MySQL DATETIME both support microsecond precision, though MySQL only started to provide this in MySQL 5.6. A key difference in column definition is the PostgreSQL timestamp without time zone syntax, used in our example dataset. Analysis of data loading will determine the impact here.

    boolean
    SQL:1999 calls for a Boolean datatype, and both PostgreSQL and MySQL support defining a column as BOOLEAN. MySQL, however, implicitly converts this to a SIGNED TINYINT, and any future DDL viewing shows this conversion. When referencing a boolean, in PostgreSQL WHERE column_name = TRUE or WHERE column_name = 't' retrieves a true value; in MySQL, WHERE column_name = TRUE or WHERE column_name = 1.
    When you SELECT a boolean, in PostgreSQL the answer is 't'; in MySQL, the answer is 1.

    demo=# CREATE TABLE demo_boolean (b1 boolean);
    CREATE TABLE
    demo=# INSERT INTO demo_boolean VALUES (TRUE),(FALSE);
    INSERT 0 2
    demo=# SELECT * FROM demo_boolean;
     b1
    ----
     t
     f
    (2 rows)

    mysql> CREATE TABLE demo_boolean (b1 boolean);
    Query OK, 0 rows affected (0.11 sec)
    mysql> INSERT INTO demo_boolean VALUES (TRUE),(FALSE);
    Query OK, 2 rows affected (0.03 sec)
    Records: 2  Duplicates: 0  Warnings: 0
    mysql> SELECT * FROM demo_boolean;
    +------+
    | b1   |
    +------+
    |    1 |
    |    0 |
    +------+
    2 rows in set (0.00 sec)

    Other Data Types
    Only the data types in this example have been reviewed.

    Other syntax
    In our sample SQL script, there is psql-specific syntax to show a debugging line with \qecho .... For compatibility these are removed. The loading of data with \COPY <table_name> FROM PSTDIN WITH CSV HEADER is PostgreSQL-specific, so loading the data is a future topic. Finally, the VACUUM ANALYZE <table_name> command is also PostgreSQL-specific and is removed. This is effectively a means of optimizing and analyzing the table. Both PostgreSQL and MySQL have an ANALYZE command; however, the syntax is different, with the TABLE keyword required in MySQL.

    PostgreSQL
    ANALYZE donorschoose_projects;
    ANALYZE TABLE donorschoose_projects;
    ERROR:  syntax error at or near "table"

    MySQL
    ANALYZE donorschoose_projects;
    ERROR 1064 (42000): You have an error in your SQL syntax;...
    ANALYZE TABLE donorschoose_projects;

    MySQL has an OPTIMIZE TABLE syntax; however, while technically valid, it is not compatible with the default storage engine, InnoDB.
    mysql> OPTIMIZE TABLE donorschoose_projects;
    +----------------------------+----------+----------+-------------------------------------------------------------------+
    | Table                      | Op       | Msg_type | Msg_text                                                          |
    +----------------------------+----------+----------+-------------------------------------------------------------------+
    | test.donorschoose_projects | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
    | test.donorschoose_projects | optimize | status   | OK                                                                |
    +----------------------------+----------+----------+-------------------------------------------------------------------+
    2 rows in set (0.32 sec)
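    Pulling these differences together, a hedged before-and-after sketch of the kind of translation this post describes (the column list is illustrative, not the full donorschoose_projects schema):

    ```sql
    -- PostgreSQL-style original (illustrative subset)
    -- CREATE TABLE donorschoose_projects (
    --   _projectid text NOT NULL,
    --   short_description text,
    --   date_posted timestamp without time zone
    -- ) WITHOUT OIDS;

    -- MySQL-compatible rewrite: drop WITHOUT OIDS, give the key
    -- column a fixed width so it can be a PRIMARY KEY, map text
    -- to LONGTEXT and timestamp without time zone to DATETIME
    CREATE TABLE donorschoose_projects (
      _projectid CHAR(32) NOT NULL,
      short_description LONGTEXT,
      date_posted DATETIME
    );
    ALTER TABLE donorschoose_projects
      ADD CONSTRAINT pk_donorschoose_projects PRIMARY KEY(_projectid);
    ```

    Each mapping here follows the reasoning in the sections above; real migrations should verify the choices against the actual data.
    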

  • How to Manage the World’s Top Open Source Databases: ClusterControl 1.2.9 Features Webinar Replay
    Thanks to everyone who attended and participated in this week's webinar on 'New Features - ClusterControl 1.2.9 Release'. If you missed the sessions or would like to watch the webinar again and browse through the slides, they are now available online. Our speaker this time was Johan Andersson, CTO, Severalnines.

    Watch the replay: Introducing the new ClusterControl 1.2.9 - with live demo, from Severalnines AB
    Read the slides: Introducing the new ClusterControl 1.2.9 - with live demo, from Severalnines AB

    With over 7,000 users to date, ClusterControl is the leading, platform-independent automation and management solution for MySQL, MongoDB and, with its latest main feature, now Postgres databases.

    Highlights in ClusterControl 1.2.9 include:
    - Support for PostgreSQL Servers
    - Advanced HAProxy Configurations and Built-in Stats
    - Hybrid Replication with Galera Clusters
    - Galera Replication Traffic Encryption
    - Encrypted Communication between ClusterControl and MySQL-based systems
    - Query Deadlock Detection in MySQL-based systems
    - Bootstrap Galera Cluster
    - Restore of Backups
    - New UI theme
    - RPC interface to ClusterControl
    - Chef Recipe and Puppet Manifest for ClusterControl
    - Zabbix Plugin for ClusterControl

    RELATED BLOGS
    - ClusterControl 1.2.9 Release Blog & Notes
    - ClusterControl 1.2.9 Press Release
    - How to manage and monitor your existing Postgres servers

    ABOUT CLUSTERCONTROL
    Setting up, maintaining and operating a database cluster can be tricky. ClusterControl gives you the power to deploy, manage, monitor and scale entire clusters efficiently and reliably. ClusterControl supports a variety of MySQL-based clusters (Galera, NDB, 5.6 Replication), MariaDB, as well as MongoDB/TokuMX-based clusters - and now Postgres.

    Blog category: Product Updates
    Tags: AWS, Chef, clustercontrol, galera cluster, haproxy, high availability, load balancing, management, MariaDB, MySQL, mysql cluster, mysql replication, percona xtradb cluster, PostgreSQL, Puppet, replication, webinar

  • InnoDB Full-Text Indexing: the N-gram Parser
    InnoDB's default full-text index parser is well suited to Latin scripts, because Latin text uses spaces as word delimiters. Languages such as Chinese, Japanese and Korean, however, have no such delimiter: a word can be made up of multiple characters, so tokenization has to be handled differently. In MySQL 5.7.6 a new full-text index plugin handles them: the n-gram parser.

    What is an N-gram?
    In full-text indexing, an n-gram is a contiguous sequence of n characters in a piece of text. For example, tokenizing '信息系统' ("information system") with n-grams produces:
    N=1: '信', '息', '系', '统'
    N=2: '信息', '息系', '系统'
    N=3: '信息系', '息系统'
    N=4: '信息系统'

    How do you use the N-gram parser in InnoDB?
    The n-gram parser is loaded into MySQL by default and can be used directly; we only need to specify WITH PARSER ngram when creating the full-text index in DDL. For example, the following SQL statements run on MySQL 5.7.6 and later.

    mysql> CREATE TABLE articles (
        FTS_DOC_ID BIGINT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
        title VARCHAR(100),
        FULLTEXT INDEX ngram_idx(title) WITH PARSER ngram
      ) Engine=InnoDB CHARACTER SET utf8mb4;
    Query OK, 0 rows affected (0.06 sec)
    mysql> # ALTER TABLE articles ADD FULLTEXT INDEX ngram_idx(title) WITH PARSER ngram;
    mysql> # CREATE FULLTEXT INDEX ngram_idx ON articles(title) WITH PARSER ngram;

    We have introduced a new global variable named ngram_token_size. It determines the n in n-gram, that is, the token size. Its default value is 2 (bigram), and its valid range is 1 to 10. A natural question is how to set ngram_token_size in practice. We recommend 2, but you can choose any valid value with this simple rule: set it to the size of the smallest word you want to be searchable. If you want to search for single characters, set it to 1. The smaller ngram_token_size is, the smaller the full-text index. In general, searching for a word whose length equals ngram_token_size is faster; searching for longer words or phrases is slower.

    N-gram tokenization
    The n-gram parser differs from the default full-text parser in the following ways:
    Token size check: because of ngram_token_size, innodb_ft_min_token_size and innodb_ft_max_token_size do not apply to n-gram.
    Stopword handling: normally, for a new token we look in the stopword table for an exact match; if one exists, the token is not added to the full-text index. With n-gram, however, we check whether the token contains any word in the stopword table. The reason is that CJK text contains many meaningless characters, words and punctuation marks. For example, if we add '的' to the stopword table, then for the sentence '信息的系统' the default tokenization result is '信息', '系统': the tokens '息的' and '的系' are filtered out.

    We can query INFORMATION_SCHEMA.INNODB_FT_INDEX_CACHE and INFORMATION_SCHEMA.INNODB_FT_INDEX_TABLE to see which tokens are in the full-text index, which is a very useful debugging tool. If a document containing a given word does not appear in the search results as expected, the word may not be in the full-text index for some reason - for example, it contains a stopword, or it is shorter than ngram_token_size. We can query these two tables to confirm. Here is a simple example:

    mysql> INSERT INTO articles (title) VALUES ('信息系统');
    Query OK, 1 row affected (0.01 sec)
    mysql> SET GLOBAL innodb_ft_aux_table="test/articles";
    Query OK, 0 rows affected (0.00 sec)
    mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_FT_INDEX_CACHE;
    +--------+--------------+-------------+-----------+--------+----------+
    | WORD   | FIRST_DOC_ID | LAST_DOC_ID | DOC_COUNT | DOC_ID | POSITION |
    +--------+--------------+-------------+-----------+--------+----------+
    | 信息   |            1 |           1 |         1 |      1 |        0 |
    | 息系   |            1 |           1 |         1 |      1 |        3 |
    | 系统   |            1 |           1 |         1 |      1 |        6 |
    +--------+--------------+-------------+-----------+--------+----------+
    3 rows in set (0.00 sec)

    N-gram query processing
    Text Searches
    In natural language mode (NATURAL LANGUAGE MODE), the search text is converted into a union of its n-gram tokens. For example, ('信息系统') is converted to ('信息 息系 系统'). An example follows:

    mysql> INSERT INTO articles (title) VALUES ('信息系统'), ('信息 系统'), ('信息的系统'), ('信息'), ('系统'), ('息系');
    Query OK, 6 rows affected (0.01 sec)
    Records: 6  Duplicates: 0  Warnings: 0
    mysql> SELECT * FROM articles WHERE MATCH (title) AGAINST ('信息系统' IN NATURAL LANGUAGE MODE);
    +------------+-----------------+
    | FTS_DOC_ID | title           |
    +------------+-----------------+
    |          1 | 信息系统        |
    |          6 | 息系            |
    |          2 | 信息 系统       |
    |          3 | 信息的系统      |
    |          4 | 信息            |
    |          5 | 系统            |
    +------------+-----------------+
    6 rows in set (0.01 sec)

    In boolean mode (BOOLEAN MODE), the search text is converted into a phrase search of its n-gram tokens. For example, ('信息系统') is converted to ('"信息 息系 系统"').

    mysql> SELECT * FROM articles WHERE MATCH(title) AGAINST('信息系统' IN BOOLEAN MODE);
    +------------+--------------+
    | FTS_DOC_ID | title        |
    +------------+--------------+
    |          1 | 信息系统     |
    +------------+--------------+
    1 row in set (0.00 sec)

    Wildcard Searches
    If the prefix is shorter than ngram_token_size, the query returns all documents whose full-text index contains an n-gram token beginning with that prefix.

    mysql> SELECT * FROM articles WHERE MATCH (title) AGAINST ('信*' IN BOOLEAN MODE);
    +------------+-----------------+
    | FTS_DOC_ID | title           |
    +------------+-----------------+
    |          1 | 信息系统        |
    |          2 | 信息 系统       |
    |          3 | 信息的系统      |
    |          4 | 信息            |
    +------------+-----------------+
    4 rows in set (0.00 sec)

    If the prefix length is greater than or equal to ngram_token_size, the query is converted into a phrase search and the wildcard is ignored. For example, ('信息*') is converted to ('"信息"'), and ('信息系*') is converted to ('"信息 息系"').

    mysql> SELECT * FROM articles WHERE MATCH (title) AGAINST ('信息*' IN BOOLEAN MODE);
    +------------+-----------------+
    | FTS_DOC_ID | title           |
    +------------+-----------------+
    |          1 | 信息系统        |
    |          2 | 信息 系统       |
    |          3 | 信息的系统      |
    |          4 | 信息            |
    +------------+-----------------+
    4 rows in set (0.00 sec)
    mysql> SELECT * FROM articles WHERE MATCH (title) AGAINST ('信息系*' IN BOOLEAN MODE);
    +------------+--------------+
    | FTS_DOC_ID | title        |
    +------------+--------------+
    |          1 | 信息系统     |
    +------------+--------------+
    1 row in set (0.00 sec)

    Phrase Searches
    Phrase searches are converted into phrase searches of the corresponding n-gram tokens. For example, ('信息系统') is converted to ('"信息 息系 系统"').

    mysql> SELECT * FROM articles WHERE MATCH (title) AGAINST('"信息系统"' IN BOOLEAN MODE);
    +------------+--------------+
    | FTS_DOC_ID | title        |
    +------------+--------------+
    |          1 | 信息系统     |
    +------------+--------------+
    1 row in set (0.00 sec)
    mysql> SELECT * FROM articles WHERE MATCH (title) AGAINST ('"信息 系统"' IN BOOLEAN MODE);
    +------------+---------------+
    | FTS_DOC_ID | title         |
    +------------+---------------+
    |          2 | 信息 系统     |
    +------------+---------------+
    1 row in set (0.01 sec)

    To learn more about InnoDB full-text indexing, see the InnoDB full-text index section of the user manual, as well as Jimmy's excellent article on Dr. Dobb's. For more on n-gram, see the n-gram parser section of the user manual. We are very pleased to enhance CJK support in MySQL 5.7 full-text search - it is an important part of our work, and we hope this feature is helpful to you. If you have any questions, please comment on this blog, open a support request, or file a bug report. And finally, thank you for using MySQL!
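    As a hedged sketch of the configuration step discussed above: ngram_token_size is read-only at runtime, so it is set at server startup, and an existing n-gram index must be rebuilt before a new token size takes effect. The option value below is illustrative.

    ```sql
    -- my.cnf (illustrative): make single characters searchable
    -- [mysqld]
    -- ngram_token_size=1

    -- After a restart, rebuild the n-gram index so the titles
    -- are re-tokenized with the new token size
    ALTER TABLE articles DROP INDEX ngram_idx;
    ALTER TABLE articles ADD FULLTEXT INDEX ngram_idx(title) WITH PARSER ngram;
    ```
    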

  • A few interesting findings on MariaDB and MySQL scalability, multi-table OLTP RO
    It’s been almost a year since I benchmarked MariaDB and MySQL on our good old 4 CPU / 32 cores / 64 threads Sandy Bridge server. A few interesting things seem to have happened since then:
    - MySQL 5.6.23 peak throughput dropped by ~8% compared to 5.6.14. This regression appears to have been introduced in MySQL 5.6.21.
    - 10.0.18 (git snapshot) peak throughput increased by ~20% compared to 10.0.9 and reached parity with 5.6.23 (not with 5.6.20, though).
    - 10.1.4 (git snapshot) and 5.7.5 are the champions (though 10.1.4 was usually 1-5% faster). Both have similar peaks @ 64 threads. They seem to be pushing this system's limits, since both have capacity to scale more.
    - 5.7.6 has a serious (~50%) scalability regression compared to 5.7.5. There is heavy LOCK_plugin mutex contention, which seems to be caused by the query rewrite plugin framework. I couldn't find a way to disable it. I hope this will be fixed in coming releases.
    Raw data is available at a Google doc.