Thursday, 29 Sep 2016



สมุยสเก็ตอัพ คอมมิวนิตี้
Planet MySQL
Planet MySQL - http://planet.mysql.com

  • New UUID functions in MySQL 8.0.0
    MySQL 8.0.0 introduces three new miscellaneous UUID functions, IS_UUID(), UUID_TO_BIN() and BIN_TO_UUID(), joining the existing UUID() (added in 5.0) and UUID_SHORT() (added in 5.1) functions. See the 8.0.0 Release Notes. Thanks to the great work and hosting by Marcus Popp, anybody can test out the SQL syntax of MySQL 8.0.0 using db4free without installing anything. If you want a minimal install, Giuseppe Maxia provides minimal Docker images of 5.0+ versions, including 8.0.0, so a running Docker container with MySQL 8.0 is just one command away.
    The following script shows the usage and checks of these new functions. Historically, the way to encode a UUID into a BINARY(16) datatype was to use UNHEX(REPLACE()) syntax. There was, however, no easy way to decode a BINARY(16) back into the original value. BIN_TO_UUID(), as shown in the output below, solves this problem.

    mysql> SELECT IS_UUID(1);
    +------------+
    | IS_UUID(1) |
    +------------+
    |          0 |
    +------------+
    1 row in set (0.01 sec)

    mysql> SET @uuid='aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee';
    Query OK, 0 rows affected (0.00 sec)

    mysql> SELECT IS_UUID(@uuid) AS is_uuid;
    +---------+
    | is_uuid |
    +---------+
    |       1 |
    +---------+
    1 row in set (0.01 sec)

    mysql> SELECT IS_UUID(REPLACE(@uuid,'-','')) AS is_uuid;
    +---------+
    | is_uuid |
    +---------+
    |       1 |
    +---------+
    1 row in set (0.00 sec)

    mysql> SELECT @uuid_bin := UUID_TO_BIN(@uuid) AS uuid_bin, LENGTH(@uuid_bin) AS len;
    +------------------+------+
    | uuid_bin         | len  |
    +------------------+------+
    | ???????????????? |   16 |
    +------------------+------+
    1 row in set (0.00 sec)

    mysql> SELECT @old_uuid_bin := UNHEX(REPLACE(@uuid,'-','')) AS old_uuid_bin, LENGTH(@old_uuid_bin) AS len;
    +------------------+------+
    | old_uuid_bin     | len  |
    +------------------+------+
    | ???????????????? |   16 |
    +------------------+------+
    1 row in set (0.00 sec)

    mysql> SELECT @uuid_bin = @old_uuid_bin;
    +---------------------------+
    | @uuid_bin = @old_uuid_bin |
    +---------------------------+
    |                         1 |
    +---------------------------+
    1 row in set (0.00 sec)

    mysql> SELECT BIN_TO_UUID(@uuid_bin) AS uuid, HEX(@old_uuid_bin) AS uuid_old;
    +--------------------------------------+----------------------------------+
    | uuid                                 | uuid_old                         |
    +--------------------------------------+----------------------------------+
    | aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee | AAAAAAAABBBBCCCCDDDDEEEEEEEEEEEE |
    +--------------------------------------+----------------------------------+
    1 row in set (0.01 sec)

    (The uuid_bin columns contain raw, unprintable binary bytes, shown here as ?.)
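    For readers who want to see what these conversions do outside a MySQL shell, here is a minimal Python sketch of the same round trip, using only the standard library. This mirrors the behaviour shown above (strip dashes, un-hex into 16 raw bytes, and back); it is not MySQL code, and the function names here are just illustrative.

    ```python
    import uuid

    UUID_STR = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'

    def uuid_to_bin(u):
        """Roughly what UUID_TO_BIN(u) / UNHEX(REPLACE(u,'-','')) produce:
        the 16 raw bytes of the UUID."""
        return bytes.fromhex(u.replace('-', ''))

    def bin_to_uuid(b):
        """Roughly what BIN_TO_UUID(b) produces: the dashed text form."""
        return str(uuid.UUID(bytes=b))

    binary = uuid_to_bin(UUID_STR)
    print(len(binary))          # 16, matching LENGTH(@uuid_bin) above
    print(bin_to_uuid(binary))  # aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
    ```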

  • Optimizing VividCortex’s Samples View: Part 2
    A few weeks ago, I wrote a blog post explaining how sketch sampling methods can be employed to achieve fast 30-day data visibility for monitoring users. The problem we faced was that with that standard of retention, we've frequently seen systems that involve nearly a million query samples in a 30-day window, meaning that special solutions are needed in order to avoid overloading users' browsers. The solution we've found lies in a hash ordering that's proven to be both surprisingly simple and efficient. In this Part 2 post, I'll look at why it works so well.
    Ordering with a Hash
    VividCortex's query samples are stored in MySQL tables. The simplified schema definition looks like this:

    query   BIGINT  PRIMARY
    host    INT     PRIMARY
    ts      INT     PRIMARY
    sample  BLOB

    Our primary key for those tables is (query, host, ts), as you may have guessed from the PRIMARY keyword. When we load the query details page for a user, we know the query, host, and time range, so our queries use the primary key. As mentioned above, in Part 1 of this blog post we wrote that the following condition solved our subsampling problem:

    ORDER BY MD5(CONCAT(qs.ts, qs.host)) LIMIT <our limit>

    One optimization that we've made since then is replacing the MD5 operation with CRC32. It's less expensive and produces the same end result. So, how does this work? MD5 and CRC32 are hash functions that we can use for a randomly distributed ordering. They are also deterministic: given the same inputs, we'll get the same output, and therefore the same ordering. Let's dig into the details and see an example. First, think about how samples are stored as rows for the following view. I've split the time range into two chunks so we can think about how things look when we "zoom in." Imagine there are two ranges to represent different zoom levels. The first includes both chunks. The second includes only the first chunk.
    In each chunk, I've listed four rows which represent samples, so there are eight samples total. Each sample also has a hash. Let's pick some samples! In each example, I'm limiting the count to three samples.
    Picking 3 samples from timestamps 1 to 4 gives us the following samples: (host 1, ts 4), (host 1, ts 1), (host 2, ts 4).
    Now, "zooming in" to timestamps 1 to 2 and picking 3 samples gives us: (host 1, ts 1), (host 1, ts 2), (host 2, ts 2).
    Notice how we're still choosing the same samples in the first range that we saw before, (host 1, ts 1) and (host 1, ts 2), and we're including a new sample within our smaller time range to add more "detail": (host 2, ts 2). We still haven't seen (host 2, ts 1) yet. How would we see that? It's easy: zoom in even more so that we only see timestamp 1, and we'll be able to capture that as well.
    I hope this helps elucidate one method for using samples to manage massive data sets and query volumes. The best way to understand such a solution, though, is to see it actually put to use in a monitoring environment. To do so, you can request a free trial of VividCortex and experience what high-volume, highly granular database monitoring is like, firsthand.
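    The zoom-in stability described above can be sketched in a few lines of Python. This is a toy model (not VividCortex's code) using CRC32 as the ordering hash, with made-up (host, ts) rows: because the hash order of the rows never changes, any pick from the wide range that falls inside the narrow range is picked again when we zoom in.

    ```python
    import zlib

    def pick_samples(rows, limit=3):
        # Deterministically pick `limit` rows, mimicking
        # ORDER BY CRC32(CONCAT(qs.ts, qs.host)) LIMIT 3.
        return sorted(rows, key=lambda r: zlib.crc32(f"{r[1]}{r[0]}".encode()))[:limit]

    # Eight samples: hosts 1-2, timestamps 1-4, as in the example above.
    rows = [(host, ts) for host in (1, 2) for ts in (1, 2, 3, 4)]

    wide = pick_samples([r for r in rows if 1 <= r[1] <= 4])    # zoomed out
    narrow = pick_samples([r for r in rows if 1 <= r[1] <= 2])  # zoomed in

    # Any wide-range pick inside the narrow range is also a narrow-range
    # pick: having one of the 3 smallest hashes overall implies having one
    # of the 3 smallest hashes within the narrow range.
    for r in wide:
        if r[1] <= 2:
            assert r in narrow
    ```

    Note the actual (host, ts) tuples chosen here depend on CRC32's output for these toy keys, so they won't match the specific picks listed in the post; only the stability property is being demonstrated.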

  • 7 Fresh Bugs in MySQL 8.0
    This blog post will look at seven bugs in MySQL 8.0. Friday afternoon is always ideal for a quick look at the early quality of MySQL 8.0, and last Friday I did just that. If you haven't heard the news yet, the MySQL 8.0 DMR is available for download on mysql.com! Tools at the ready: pquery2, updated 8.0-compatible scripts in Percona-qa, and some advanced regexes to wade through the many cores generated by the test run. For those of you who know and use pquery-run.sh, this should mean a lot:

    [09:41:50] [21492] ====== TRIAL #39308 ======

    In other words, almost 40K trials and 21.5K core dumps (or other significant issues) detected! This run had been churning away on a server for a number of days. On to the bug-logging fun! After reducing test cases and filtering duplicates, we have the following seven bugs logged upstream:

    Bug #83120 virtual void Field::store_timestamp(const timeval*): Assertion `false' failed.
    Bug #83118 handle_fatal_signal (sig=11) in replace_user_table
    Bug #83117 Assertion `MDL_checker::is_read_locked(m_thd, *object)' failed.
    Bug #83115 Assertion `maybe_null' failed. handle_fatal_signal in tem_func_concat::val_str
    Bug #83114 Assertion `strlen(db_name) <= (64*3) && strlen(table_name) <= (64*3)' failed.
    Bug #83113 SIGKILL myself on DROP TABLE
    Bug #83112 handle_fatal_signal (sig=11) in sp_get_flags_for_command

    My first impressions? MySQL 8.0 DMR is a reasonably solid release for a first iteration. It seems our friends upstream are on an excellent path to making MySQL 8.0 another rock-solid release. Chapeau!

  • Three wishes for a new year
    (Almost) another new year by the Jewish calendar. What do I wish for in the coming year?

    World peace
    Good health to all
    Relaxed GTID constraints

    I'm still not using GTID, and still see operational issues with working with GTID. As a recent example, our new schema migration solution, gh-ost, allows us to test migrations in production, on replicas. The GTID catch? gh-ost has to write something to the binary log. Thus, it "corrupts" the replica with a bogus GTID entry that will never be met on another server, making said replica unsafe to promote. We can work around this, but...
    I understand the idea and the need for the Executed GTID Set. It will certainly come in handy with the multi-writer InnoDB Cluster. However, for most use cases GTID poses a burden. The reason is that our topologies are imperfect, we as humans are imperfect, and operations are most certainly imperfect. We may wish to operate on a replica: test something, by intention or by mistake. We may wish to use a subchain as the seed for a new cluster split. We may wish to be able to write to downstream replicas. We may use a 3rd-party tool that issues a FLUSH TABLES WITH READ LOCK without disabling sql_log_bin. Things just happen.
    For that, I would like to suggest GTID control levels, such as:
    Strict: same as Oracle's existing implementation. Executed sets, purged sets, whatnot.
    Last executed: a mode where the only thing that counts is the last executed GTID value. If I repoint a replica, all it needs to check is: "hey, this is my last executed GTID entry; give me the coordinates of yours. And no, I don't care about comparing executed and purged sets; I will trust you and keep running from that point on."
    Declarative: GTIDs are generated and are visible in each and every binary log entry, but are completely ignored.
    I realize Oracle MySQL GTID has been out for over 3 years now, but I'm sorry, I still have reservations and see use cases where I fear it will not serve me right.
    How about my previous years' wishes? World peace and good health never came true, however:
    My 2015 wish for "decent, operations friendly built in online table refactoring" was unmet, however gh-ost is a thing now and exceeds my expectations. No, really. Please come see Tom & myself present gh-ost and how it changed our migration paradigm.
    My 2012 wish for "decent, long waited for, implementation of Window Functions (aka Analytic Functions) for MySQL" was met by MariaDB's window functions. Not strictly Window Functions, but Oracle MySQL 8.0 will support CTEs (hierarchical/recursive), worth a mention.
    See you in Amsterdam!

  • View Galera Cluster Best Practices Webinar Replay Part One!
    Codership’s Galera Cluster Best Practices Webinar for MySQL DBAs and DevOps Part One is now available to watch here