Wednesday, 30 July 2014
 
 

Samui SketchUp Community
Newsfeeds
Planet MySQL - http://www.planetmysql.org/

  • Using innodb_large_prefix to avoid ERROR 1071
    If you've ever tried to add an index that includes a long varchar column to an InnoDB table in MySQL, you may have seen this error:

    ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes

    The character limit depends on the character set you use. For example, if you use latin1 then the largest column you can index is varchar(767), but if you use utf8 then the limit is varchar(255). There is also a separate 3072-byte limit per index. The 767-byte limit is per column, so you can include multiple columns (each 767 bytes or smaller) up to 3072 total bytes per index, but no single column longer than 767 bytes. (MyISAM is a little different: it has a 1000-byte index length limit, but no separate column length limit within that.)

    One workaround for these limits is to index only a prefix of the longer columns, but what if you want to index more than 767 bytes of a column in InnoDB? In that case you should consider using innodb_large_prefix, which was introduced in MySQL 5.5.14 and allows you to include columns up to 3072 bytes long in InnoDB indexes. It does not affect the index limit, which is still 3072 bytes, as quoted in the manual: "The InnoDB internal maximum key length is 3500 bytes, but MySQL itself restricts this to 3072 bytes. This limit applies to the length of the combined index key in a multi-column index."

    Read on for details and examples about innodb_large_prefix. Here are a few prerequisites for using it:
    - At the database level you have to use innodb_file_format=BARRACUDA
    - At the table level you have to use ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED

    The default file format is still Antelope for backwards compatibility, and the default row format is COMPACT. You can set both innodb_file_format and innodb_large_prefix dynamically, but you should also set them in my.cnf so they survive a restart. Here's an example.
    If I try to create this table with innodb_large_prefix disabled I get an error:

```
mysql> create table if not exists utf8_test (
    -> day date not null,
    -> product_id int not null,
    -> dimension1 varchar(500) character set utf8 collate utf8_bin not null,
    -> dimension2 varchar(500) character set utf8 collate utf8_bin not null,
    -> unique index unique_index (day, product_id, dimension1, dimension2)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes
```

    If I enable innodb_large_prefix I can create the table successfully:

```
mysql> set global innodb_file_format = BARRACUDA;
Query OK, 0 rows affected (0.00 sec)

mysql> set global innodb_large_prefix = ON;
Query OK, 0 rows affected (0.00 sec)

mysql> create table if not exists utf8_test (
    -> day date not null,
    -> product_id int not null,
    -> dimension1 varchar(500) character set utf8 collate utf8_bin not null,
    -> dimension2 varchar(500) character set utf8 collate utf8_bin not null,
    -> unique index unique_index (day, product_id, dimension1, dimension2)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=DYNAMIC;
Query OK, 0 rows affected (0.02 sec)
```

    The examples are similar for latin1, but I can use columns three times as long since it's a single-byte character set.
```
mysql> create table if not exists latin1_test (
    -> day date not null,
    -> product_id int not null,
    -> dimension1 varchar(1500) not null,
    -> dimension2 varchar(1500) not null,
    -> unique index unique_index (day, product_id, dimension1, dimension2)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes

mysql> set global innodb_file_format = BARRACUDA;
Query OK, 0 rows affected (0.00 sec)

mysql> set global innodb_large_prefix = ON;
Query OK, 0 rows affected (0.00 sec)

mysql> create table if not exists latin1_test (
    -> day date not null,
    -> product_id int not null,
    -> dimension1 varchar(1500) not null,
    -> dimension2 varchar(1500) not null,
    -> unique index unique_index (day, product_id, dimension1, dimension2)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1 ROW_FORMAT=DYNAMIC;
Query OK, 0 rows affected (0.02 sec)
```

    And here's what happens if I try to create an index longer than 3072 bytes:

```
mysql> create table if not exists long_index_test (
    -> day date not null,
    -> product_id int not null,
    -> dimension1 varchar(1500) not null,
    -> dimension2 varchar(1500) not null,
    -> dimension3 varchar(1500) not null,
    -> unique index unique_index (day, product_id, dimension1, dimension2, dimension3)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1 ROW_FORMAT=DYNAMIC;
ERROR 1071 (42000): Specified key was too long; max key length is 3072 bytes
```
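The limits above are byte counts, not character counts, so the per-charset varchar maximums follow from the maximum bytes per character. A quick sketch of that arithmetic (note that "utf8" here is MySQL's 3-byte utf8/utf8mb3, not utf8mb4):

```python
# InnoDB limits each indexed column to 767 bytes (without
# innodb_large_prefix) and each whole index key to 3072 bytes.
MAX_COLUMN_BYTES = 767   # per-column limit (Antelope/COMPACT)
MAX_KEY_BYTES = 3072     # per-index limit

BYTES_PER_CHAR = {"latin1": 1, "utf8": 3}  # MySQL utf8 = utf8mb3

def max_indexable_chars(charset, column_limit=MAX_COLUMN_BYTES):
    """Longest varchar(N) that fits under the per-column byte limit."""
    return column_limit // BYTES_PER_CHAR[charset]

print(max_indexable_chars("latin1"))  # 767
print(max_indexable_chars("utf8"))    # 255

# The failing utf8 example: each varchar(500) utf8 column can need
# up to 500 * 3 = 1500 bytes, well over the 767-byte column limit.
print(500 * BYTES_PER_CHAR["utf8"] > MAX_COLUMN_BYTES)  # True

# The failing long_index_test: date(3) + int(4) + 3 * 1500 bytes
# exceeds the 3072-byte whole-key limit even with large prefixes on.
print(3 + 4 + 3 * 1500 > MAX_KEY_BYTES)  # True
```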

  • Early Bird Pricing for MySQL Central @ Oracle Open World Extended to August 1st
    Register before August 1st for early bird pricing! Millions of organizations around the world trust MySQL to power their business-critical web, cloud, and embedded applications. Want to learn best practices for developing next-generation applications with MySQL? Join us at MySQL Central @ OpenWorld. Highlights:
    - Learn new skills
    - Share and network with the global MySQL community
    - Hear about new MySQL features directly from Oracle
    - Get insight on product roadmaps
    - Have fun

  • Prevent MySQL downtime: Set max_user_connections
    One of the common causes of downtime with MySQL is running out of connections. Have you ever seen this error? "ERROR 1040 (00000): Too many connections." If you've been working with MySQL long enough you surely have. This is quite a nasty error, as it might cause complete downtime: transient errors with successful transactions mixed with failing ones, or only some processes ceasing to run properly, causing various kinds of effects if not monitored properly.

    There are a number of causes for running out of connections, the most common being a web/app server creating an unexpectedly large number of connections due to a misconfiguration, or some script or application leaking connections or creating too many connections in error.

    The solution some people employ is simply to increase max_connections to some very high number so MySQL "never" runs out of connections. This, however, can cause resource utilization problems: if a large number of connections become truly active, they may use a lot of memory and cause the MySQL server to swap or be killed by the OOM killer, or cause very poor performance due to high contention.

    There is a better solution: use different user accounts for different scripts and applications and implement resource limits for them. Specifically, set max_user_connections:

    mysql> GRANT USAGE ON *.* TO 'batchjob1'@'localhost'
        ->     WITH MAX_USER_CONNECTIONS 10;

    This approach (available since MySQL 5.0) has multiple benefits:
    - Security: different user accounts with only the required permissions make your system safer from development errors and more secure from intruders.
    - Preventing running out of connections: if there is a bug or misconfiguration, the application or script will of course still run out of connections, but it will be the only part of the system affected, and all other applications will be able to use the database normally.
    - Overload protection: limiting the number of connections limits how many queries can run concurrently. Too much concurrency is often a cause of downtime, and limiting it can reduce the impact of unexpectedly heavy queries run concurrently by the application.

    In addition to configuring max_user_connections for given accounts, you can set it globally in my.cnf as "max_user_connections=20". In my opinion this is too coarse, though: you will most likely need a different number for different applications and scripts. Where the global setting is most helpful is in multi-tenant environments with many equivalent users sharing the system.

    The post Prevent MySQL downtime: Set max_user_connections appeared first on MySQL Performance Blog.
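To illustrate why a per-account cap contains the damage, here is a small application-side sketch (the class and account names are hypothetical, not part of the post): each account gets its own semaphore, so a leaking job exhausts only its own slots, analogous to what MAX_USER_CONNECTIONS does server-side:

```python
import threading

class PerAccountLimiter:
    """Caps concurrent 'connections' per account, mimicking the
    effect of MySQL's WITH MAX_USER_CONNECTIONS at the app layer."""

    def __init__(self, limits):
        # limits: {account_name: max concurrent connections}
        self._sems = {acct: threading.BoundedSemaphore(n)
                      for acct, n in limits.items()}

    def acquire(self, account):
        # Non-blocking: a leaking job fails fast instead of
        # starving every other account's connection slots.
        return self._sems[account].acquire(blocking=False)

    def release(self, account):
        self._sems[account].release()

limiter = PerAccountLimiter({"batchjob1": 2, "webapp": 3})

# batchjob1 "leaks" and hits its cap of 2...
assert limiter.acquire("batchjob1")
assert limiter.acquire("batchjob1")
assert not limiter.acquire("batchjob1")  # third attempt is refused

# ...but webapp is unaffected, just like with MAX_USER_CONNECTIONS.
assert limiter.acquire("webapp")
```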

  • MySQL Cluster latest developments – webinar replay + Q&A
    I recently hosted a webinar which explained what MySQL Cluster is, what it can deliver, and what the latest developments are. The "Discover the latest MySQL Cluster Developments" webinar is now available to view here. At the end of this article you'll find a full transcript of the Q&A from the live session.

    Details: View this webinar to learn how MySQL Cluster 7.3, the latest GA release, enables developer agility by making it far simpler and faster to build your products and web-based applications with MySQL Cluster. You'll also learn how MySQL Cluster and its linear scalability, 99.999% uptime, real-time responsiveness, and ability to perform over 1 billion writes per minute can help your products and applications meet the needs of the most demanding markets. MySQL Cluster combines these capabilities and the affordability of open source, making it well suited for use as an embedded database.

    In this replay you'll learn about the following MySQL Cluster capabilities, including the latest innovations in the 7.3 GA release:
    - Auto-sharding (partitioning) across commodity hardware for extreme read and write scalability
    - Cross-data-center geographic synchronous and asynchronous replication
    - Online scaling and schema upgrades, now with improved connection thread scalability
    - Real-time optimizations for ultra-low, predictable latency
    - Foreign key support for tight referential integrity
    - SQL and NoSQL interfaces, now with support for Node.js
    - Support for MySQL 5.6, allowing use of the latest InnoDB and NDB engines within one database
    - Integrated HA for 99.999% availability
    - An auto-installer that installs, configures, provisions and tunes a production-grade cluster in minutes

    In addition, you will get a sneak preview of some of the new features planned for MySQL Cluster 7.4. Come and learn how MySQL Cluster can help you differentiate your products and extend their reach into new markets, as well as deliver highly demanding web-based applications, either on premises or in the cloud.

    Q&A Transcript

    Q: When using the Memcached API, can I use my existing Memcached connector?
    A: Yes. The Memcached API actually uses the regular memcached protocol but has a custom plugin that accesses the MySQL Cluster data nodes rather than using its local in-memory store.

    Q: If I'm replicating between 2 Clusters in 2 data centres and the WAN fails for a minute, what happens?
    A: Because the replication between MySQL Cluster instances is asynchronous, the application isn't impacted (for example, there will be no extra errors or latency). The changes will be stored in the binary log of the Cluster to which they were sent and then replicated to the other site once the WAN returns.

    Q: Can I scale back down as well as up?
    A: It's an online operation to reduce the number of MySQL Servers (or other application nodes), but that isn't currently possible for the data nodes. In reality, it's very rare that applications need to reduce the amount of data they store.

    Q: Are there any MySQL connectors that don't work with MySQL Cluster?
    A: No, any connector that works with MySQL will work just as well with MySQL Cluster.

    Q: Do you have more details on the benchmark results?
    A: Yes, take a look at the MySQL Cluster Benchmarks page.

    Q: I've been hearing about MySQL Fabric. Does that also allow queries and joins to span multiple shards?
    A: Currently, the only options for cross-shard queries are to use MySQL Cluster or to implement them at the application layer.

    Q: Is the data partitioned over different cluster nodes, or do all cluster nodes hold the full data set?
    A: Each node group stores a subset of the rows from each table. The 2 data nodes within a node group store the exact same set of rows.

    Q: Where can I find a definition of those different kinds of foreign key constraints?
    A: The Wikipedia definition for foreign keys is a good place to start.

    Q: What is the difference between ndbcluster and MySQL Cluster?
    A: None; they're one and the same. When you hear any of "Cluster", "MySQL Cluster", "NDB" and "NDB Cluster", the meaning is the same.

    Q: Do I need to have a web server installed for the auto-installer to work?
    A: No, the MySQL Cluster auto-installer comes with a small web server built in.

    Q: Are there any dependencies to meet before installing MySQL Cluster on RHEL Linux?
    A: It should work out of the box. My preferred way of working is to use the generic Linux tarball for MySQL Cluster (get it from the MySQL Cluster download page), extract it, and then run the auto-installer or configure it manually.

    Q: Is there any guide available for migrating MySQL nodes to MySQL Cluster?
    A: Probably the closest we have is a white paper on how to get the best out of any PoC for MySQL Cluster (as it highlights what needs to be done differently in order to get the best results): the MySQL Cluster Evaluation Guide. Note that MySQL Cluster uses a different version of the mysqld binary, so you'll need to stop your existing MySQL Server and start up the new one. To migrate a specific table to MySQL Cluster after that is done, use "ALTER TABLE my-tab ENGINE=NDB;".

    Q: Does Drupal support MySQL Cluster?
    A: I've heard of people doing it, but I suspect that minor tweaks to the Drupal code may have been needed.

    Q: How do the NoSQL APIs map to the SQL database schemas?
    A: It varies slightly by API. In general, you provide some annotations or metadata to specify how tables or columns should map to keys/objects/properties. With Memcached you have the option of being schema-less and having all data stored in one big, generic table.

    Q: Where can I learn more about MySQL Fabric?
    A: The MySQL Fabric page is a good starting point; for an end-to-end example, take a look at this tutorial on adding HA and then sharding using MySQL Fabric.

    Q: What is the difference between MySQL Fabric and MySQL Cluster?
    A: MySQL Fabric provides server farm management on top of 'regular' MySQL Servers storing data with the InnoDB storage engine; it delivers HA and sharding. MySQL Cluster works below the MySQL Server, storing data in the NDB storage engine (on the data nodes). MySQL Cluster can deliver higher levels of high availability, better application transparency, and cross-shard queries, joins and transactions, but it does mean using a different storage engine, which of course comes with its own limitations (see the MySQL Cluster Evaluation Guide for details).

    Q: So, if I have any full table scans, should I forget about MySQL Cluster?
    A: Not necessarily. If every one of your frequently run operations is a full table scan, then MySQL Cluster might not be ideal. However, if most operations are simpler but you have some full table scans, that could be fine. The optimisations going into MySQL Cluster 7.4 should particularly benefit table scans.
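The partitioning answer above (each node group holds a subset of the rows, mirrored identically across its data nodes) can be illustrated with a toy hash-sharding sketch. The hashing scheme and names here are stand-ins for illustration, not NDB's actual partitioning function:

```python
def node_group_for(primary_key, num_node_groups):
    """Toy stand-in for NDB partitioning: hash the PK to a node group."""
    return hash(primary_key) % num_node_groups

def place_row(primary_key, num_node_groups=2, replicas_per_group=2):
    """Return the data nodes holding this row: every replica in the
    chosen node group stores the exact same set of rows."""
    group = node_group_for(primary_key, num_node_groups)
    return [f"group{group}-node{r}" for r in range(replicas_per_group)]

# Each row lands in exactly one node group, on all of its replicas.
print(place_row(10))
```

The key property this sketches is that a read or write for a given key only ever involves one node group, while the identical replicas within that group provide the redundancy behind the 99.999% availability claim.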

  • Orchestrator 1.0.4 released
    Outbrain's orchestrator version 1.0.4 is released. Quick links: Orchestrator Manual, FAQ, Downloads

    What's new?

    Co-masters: orchestrator now does a much better visualization of master-master replication. The work on making the visualization more appealing also resulted in a better distinction between the visual tree topology and the replication topology. This in turn also fixes the ruleset for moving slaves in a co-master topology, and lays the ground for future work on co-masters (i.e. Galera; unscheduled).

    resolve: There had been a few reports of orchestrator not being able to connect to some topology instances. It seems the problem is with name resolving. To prove or disprove this, the resolve command or resolve API call now allows checking for instance connectivity. Orchestrator will first test whether the CNAME resolves at all, then try to dial the TCP address (host + port) to see whether it can make a connection. A sample API call would be:

    http://127.0.0.1:3000/api/resolve/myhost.mydomain/3306

    A command line invocation would be:

    orchestrator -c resolve -i myhost.mydomain:3306

    chef cookbook: Silvia Botros of SendGrid has published a generic cookbook for deploying orchestrator via chef: https://github.com/sendgrid-ops/chef-orchestrator Thanks, Silvia!

    Topology Anonymizer Cheatsheet: If you want to share your topology graph but do not wish to expose your host names, open your cluster page and execute the following in your browser's JavaScript console (jQuery is already included by orchestrator):

    var _ = function() {
      var counter = 0;
      var port = 3306;
      jQuery("h3.popover-title").each(function() {
        jQuery(this).html("instance-" + (counter++) + ":" + port);
      });
    }();

    This results in a modified topology such as the one presented above ("instance-11", "instance-12" and so on).

    Other enhancements since 1.0:
    - Proper error log entry when the backend database is not found; also terminates execution.
    - Added a --stack command line option. Combined with --debug, this prints the stack trace upon error.
    - When a known instance is not found (gone from the radar), orchestrator now shows the time since it was last seen instead of the irrelevant last known slave lag.
    - Various UI enhancements and fixes.

    Orchestrator is released as open source under the Apache 2.0 license and is available at: https://github.com/outbrain/orchestrator
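The two-step resolve check described above (first resolve the name, then dial host:port) can be sketched in a few lines. This is an illustrative analogue, not orchestrator's actual implementation (orchestrator is written in Go):

```python
import socket

def check_instance(host, port, timeout=2.0):
    """Mimic orchestrator's resolve check: DNS lookup first, then a TCP dial."""
    try:
        ip = socket.gethostbyname(host)   # does the name resolve at all?
    except socket.gaierror:
        return "unresolvable"
    try:
        # TCP handshake to host:port, like dialing myhost.mydomain:3306.
        with socket.create_connection((ip, port), timeout=timeout):
            return "reachable"
    except OSError:
        return "resolves-but-unreachable"  # name is fine, port is not
```

Calling check_instance("myhost.mydomain", 3306) distinguishes a DNS failure from a connection failure, which is exactly the diagnostic the resolve command and API call provide.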