Wednesday, 23 Apr 2014
 
 

Samui SketchUp Community
Newsfeeds
Planet MySQL - http://www.planetmysql.org/

  • It is event season for SkySQL and MariaDB: Where will we see you next?
    The conference season has begun for us here at SkySQL. It seems like almost all of the Linux and MySQL shows are in the next four months. This is the time of year when we get to collect those airline points, have our beds made for us, eat out a few times a day and put on a few pounds! I have been practicing for this all year. Check out our Events Section to see all of our upcoming events. read more

  • Ubuntu 14.04 – some MySQL ecosystem notes
    Following my previous post on the launch, I just rolled Ubuntu 14.04 LTS on an Amazon EC2 t1.micro instance (not something you would expect to run a database server on, for sure: 1 vCPU, 0.613GiB RAM). If you do an apt-cache search mysql you get 435 results with the default configuration (trusty: main & universe).

    If you do apt-get install mysql-server, you get MySQL 5.5. You enter the password of choice, and before you know it, MySQL is installed (a SELECT VERSION() will return 5.5.35-1ubuntu1).

    Next you decide to install MariaDB. I run apt-get install mariadb-server. It pulls in libjemalloc (for TokuDB), and I expect future releases to ship this engine by default. You enter the password, and you get a new message (as pictured). I verify that the test database I created exists. It does. A SELECT VERSION() returns 5.5.36-MariaDB-1. The innodb_version returns 5.5.36-MariaDB-33.0.

    I'm curious about MySQL 5.6 now, so I run apt-get install mysql-server-5.6. Not so straightforward:

        start: Job failed to start
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: error processing package mysql-server-5.6 (--configure):
         subprocess installed post-installation script returned error exit status 1
        Setting up mysql-common-5.6 (5.6.16-1~exp1) ...
        Processing triggers for libc-bin (2.19-0ubuntu6) ...
        Errors were encountered while processing:
         mysql-server-5.6
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Looks like MySQL 5.6 is more memory hungry. I edited /etc/mysql/my.cnf to ensure that innodb_buffer_pool_size = 64M was set (there was nothing in the default config; increasing it to 128M worked too), re-ran apt-get install mysql-server-5.6, and it started. My test database was still around ;-)

    I wanted to make sure that MySQL 5.6 isn't more memory hungry just on that instance, so I created yet another clean t1.micro instance and did an apt-get install mysql-server-5.6. Same error. Reported lp#1311387.

    Nothing to report in particular about Percona: 5.5.34 Percona XtraDB Cluster (GPL), Release 31.1 (Ubuntu), wsrep_25.9.rXXXX. One thing is for sure: if you're playing around with the ecosystem, installs and upgrades aren't exactly straightforward.

    Related posts: MariaDB 10.0.5 storage engines – check the Linux packages; Using MariaDB on CentOS 6; Testing Fedora 19.
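    A minimal sketch of the workaround described above, assuming Ubuntu 14.04 with GNU sed and a /etc/mysql/my.cnf whose [mysqld] section has no buffer pool setting (the post edited the file by hand):

        # Add the buffer pool setting under [mysqld]; 64M (or 128M) was enough here.
        sudo sed -i '/^\[mysqld\]/a innodb_buffer_pool_size = 64M' /etc/mysql/my.cnf

        # Re-run the install so the failed post-installation step can start mysqld.
        sudo apt-get install mysql-server-5.6

        # Confirm the server is up and reports a 5.6.x version.
        mysql -uroot -p -e 'SELECT VERSION()'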

  • Concurrent, read-only & cached: MongoDB, TokuMX, MySQL
    This has results for a read-only workload where all data is cached. The test query fetches all columns in one document/row by PK. For InnoDB all data is in the buffer pool. For TokuMX and MongoDB all data is in the OS filesystem cache and accessed via mmap'd files. The test server has 40 CPU cores with HT enabled, and the test clients share the host with mysqld/mongod to reduce variance from network latency. This was similar to a previous test, except the database is in cache and the test host has more CPU cores. The summary of my results:

    - MongoDB 2.6 has a performance regression from using more CPU per query. The regression might be limited to simple queries that do single-row lookups on the _id index. I spent a bit of time rediscovering how to get hierarchical CPU profile data from gperftools to explain this. JIRAs 13663 and 13685 are open for this.
    - MySQL gets much more QPS at high concurrency than MongoDB and TokuMX.
    - MySQL gets more QPS using the HANDLER interface than SELECT. I expect the InnoDB memcached API to be even faster than HANDLER, but did not test it.
    - MySQL uses more CPU per query in 5.7.4 than in 5.6.12, but this didn't have an impact on QPS.

    Setup

    The test was repeated for 1, 2, 4, 8, 16, 32 and 40 concurrent clients. It uses my forked versions of the MongoDB and C clients for sysbench. There are 8 collections/tables in one database. Each table has 400M rows, but queries are limited to the first 1M. I don't know yet whether using a database per collection would improve the MongoDB results. Each query fetches all columns in one document/row by PK. I have yet to push my changes to the MongoDB sysbench client to make it fetch all columns. I tested these binaries:

    - fb56.handler - MySQL 5.6.12 with the Facebook patch and 8k pages. Uses HANDLER to fetch data.
    - fb56.sql - MySQL 5.6.12 with the Facebook patch and 8k pages. Uses SELECT to fetch data.
    - orig57.handler - MySQL 5.7.4 without the Facebook patch and 8k pages. Uses HANDLER to fetch data.
    - orig57.sql - MySQL 5.7.4 without the Facebook patch and 8k pages. Uses SELECT to fetch data.
    - tokumx - TokuMX 1.4.1 using quicklz and 32kb pages. There should be no decompression during the test, as all data used by the test (1M documents) is much smaller than 50% of RAM.
    - mongo24 - MongoDB 2.4.9
    - mongo26 - MongoDB 2.6.0

    Results

    At last I included a graph. I have been reluctant to include graphs in previous posts comparing MongoDB, TokuMX and MySQL because I want to avoid benchmarketing and drive-by analysis. These tests have been time consuming to run and document, and I don't want to make it too easy to misinterpret the results. Results for MySQL 5.7.4 are not in the graph to make it easier to read. The top two bars (blue & red) are for MySQL, and you can see that QPS increases with more concurrency. QPS for MongoDB and TokuMX saturates at a lower level of concurrency. Numbers used for the graph:

        point queries per second
            1      2      4      8     16     32     40  clients
        17864  32397  60294 106374 184566 298276 350665  fb56.handler
        11730  22884  39646  73485 131533 215487 249402  fb56.sql
        18161  33262  59413 107505 185894 306084 371045  orig57.handler
        11775  21838  40528  75322 135331 227450 266917  orig57.sql
        14298  25219  45743  83214 142489 168498 161840  tokumx
        17203  30158  52476  94705 161922 174453 170177  mongo24
        10705  19502  34318  61977 109684 152667 151555  mongo26

    Analysis

    I used vmstat to measure the average CPU utilization (user + system) during the test. The numbers below are (CPU_utilization / QPS) * 1,000,000. There are some interesting details:

    - The values are larger for MySQL 5.7 than for 5.6 at low concurrency. Note that in both cases the performance schema was disabled at compile time.
    - The values are much larger for MongoDB 2.6 than for 2.4, and hopefully this can be fixed via JIRAs 13663 and 13685.

        (CPU_utilization / QPS) * 1,000,000
          1      2      4      8     16     32     40  clients
        218    197    197    208    216    251    268  fb56.handler
        323    310    287    298    304    352    372  fb56.sql
        357    279    240    216    215    248    250  orig57.handler
        407    380    313    288    302    342    359  orig57.sql
        272    269    251    254    266    302    296  tokumx
        232    215    219    225    234    257    252  mongo24
        373    333    340    342    355    425    422  mongo26

    I also used vmstat to measure the context switch rate, and the table below lists the number of context switches per query (vmstat.cs / QPS). Note that the rate decreases with concurrency for MySQL but not for MongoDB and TokuMX. I don't know enough about Linux internals to interpret this.

        context switches per query
            1      2      4      8     16     32     40  clients
         4.44   4.14   4.01   3.79   3.47   3.05   2.19  fb56.handler
         4.61   4.32   4.03   3.84   3.59   3.23   2.65  fb56.sql
         4.53   4.27   4.07   3.88   3.52   3.08   2.20  orig57.handler
         4.81   4.48   4.19   3.96   3.63   3.07   2.19  orig57.sql
         4.59   4.30   4.08   3.87   3.77   4.32   4.32  tokumx
         4.54   4.23   4.03   3.84   3.79   4.29   4.30  mongo24
         4.80   4.43   4.21   3.99   3.93   4.58   4.63  mongo26
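    As a side note, the HANDLER interface used by the fb56.handler and orig57.handler binaries above bypasses most of the SQL layer and reads rows by index directly. A minimal sketch of the two access paths, assuming a hypothetical table test.sbtest1 with an integer PRIMARY KEY column named id (the post does not show its actual sysbench table names):

        # SELECT path: each query is parsed and optimized.
        mysql -uroot -p -e 'SELECT * FROM test.sbtest1 WHERE id = 42'

        # HANDLER path: open the table once, read by PK directly, then close.
        mysql -uroot -p -e '
          HANDLER test.sbtest1 OPEN AS h;
          HANDLER h READ `PRIMARY` = (42);
          HANDLER h CLOSE;'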

  • Using mysqldump and the MySQL binary log – a quick guide on how to backup and restore MySQL databases
    Be sure to check out my other posts on mysqldump:
    - Scripting Backups of MySQL with Perl via mysqldump
    - Splitting a MySQL Dump File Into Smaller Files Via Perl
    - Creating and restoring database backups with mysqldump and MySQL Enterprise Backup – Part 1 of 2
    - Creating and restoring database backups with mysqldump and MySQL Enterprise Backup – Part 2 of 2

    I have already written several posts on using mysqldump. This post will be a quick guide to using mysqldump to backup and restore your databases, and we will look at using the MySQL binary log (binlog) along with mysqldump for point-in-time recovery. If your database is extremely large, and for databases in a production environment, you might want to consider using MySQL Enterprise Backup (mysqlbackup), which is part of the MySQL Enterprise Edition.

    For those of you who aren't familiar with mysqldump: the mysqldump client is a utility that performs logical backups, producing a set of SQL statements that can be run to reproduce the original schema objects, table data, or both. It dumps one or more MySQL databases for backup or transfer to another SQL server. The mysqldump command can also generate output in CSV, other delimited text, or XML format.

    The best feature of mysqldump is that it is easy to use. The main problem with using mysqldump occurs when you need to restore a database. When you execute mysqldump, the database backup (output) is an SQL file that contains all of the necessary SQL statements to restore the database, but restoring requires that you execute these SQL statements to essentially rebuild the database. Since you are recreating your database, the tables and all of your data from this file, the restoration procedure can take a long time to execute if you have a very large database.

    NOTE: If you are using GTIDs (global transaction identifiers) in your database, you will need to include the --set-gtid-purged=OFF option, otherwise you will receive this error:

        Warning: A partial dump from a server that has GTIDs will by default include the
        GTIDs of all transactions, even those that changed suppressed parts of the database.
        If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete
        dump, pass --all-databases --triggers --routines --events.

    For these examples, I will not include the --set-gtid-purged=OFF option.

    Dumping and making a copy of a single database

    To dump/backup a single database:

        mysqldump -uroot -p database_name > db_dump.sql

    To load the dump file back into mysql, you will need to create the new database first. If you use the --databases option before the database name, mysqldump will also dump the CREATE DATABASE and USE statements that you need prior to inserting the data from the dump. You can either use mysqladmin to create the new database, or create it from a MySQL prompt:

        # mysqladmin create new_database_name
        mysql> CREATE DATABASE new_database_name;

    Next, you can simply import the dump file into mysql:

        # mysql new_database_name < db_dump.sql

    You can also use the dump file to move the database to another server. If you did not use the --databases option, then you will need to create the database first.

    Dumping events, routines, triggers

    Here are the options for mysqldump to also dump Event Scheduler events, stored procedures, or functions:
    - --routines - dumps stored procedures and functions
    - --events - dumps Event Scheduler events
    - --triggers - dumps triggers

    When you use mysqldump, triggers are dumped by default. A full round trip combining these options is sketched below.
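    Putting the commands above together, a minimal sketch of a dump-and-restore round trip (database, user, and file names are placeholders):

        # Dump one database, including stored routines and scheduler events
        # (triggers are included by default). --databases makes the dump carry
        # its own CREATE DATABASE and USE statements.
        mysqldump -uroot -p --databases database_name --routines --events > db_dump.sql

        # Replay the dump; no need to create the database first, thanks to --databases.
        mysql -uroot -p < db_dump.sql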
    If you want to disable any of these functions, use the "skip" versions: --skip-events, --skip-routines, or --skip-triggers (see https://dev.mysql.com/doc/refman/5.6/en/mysqldump.html#option_mysqldump_events).

    Only dump table definitions

    If you want to just dump the CREATE TABLE statements without any data, use the --no-data option:

        # mysqldump --no-data database_name > db_definitions_dump.sql

    You might want to add the --routines and --events options to include stored routines and event definitions:

        # mysqldump --no-data --routines --events database_name > db_definitions_dump.sql

    Only dump the data

    If you want to just dump the data without any table definitions, you can use the --no-create-info option:

        # mysqldump --no-create-info database_name > db_data_dump.sql

    Using mysqldump to test a new version of mysql

    Let's say that you need to upgrade to a new version of mysql. In order to test whether your database objects are handled properly by the new version, you can simply dump the data definitions and import them into the new version of MySQL (preferably on a separate computer).

    On the computer with the old version of MySQL:

        mysqldump --all-databases --no-data --routines --events > db_definitions.sql

    Then, on the upgraded server, you can just import this dump file:

        mysql -uroot -p < db_definitions.sql

    This will help you spot any potential incompatibilities with the new version of MySQL. If you don't receive any errors, you can then dump the data and load it into the new server. Be sure to run some test queries on the new upgraded server as well.

    Point-in-time recovery using mysqldump and the binary logs

    The MySQL binary logs (binlogs) contain all of the SQL statements or "events" that could change the database (I say "could" because a DELETE statement that does not delete any rows would still be entered into the binary log, unless you are using row-based logging). For more information about the binary log, see http://dev.mysql.com/doc/refman/5.6/en/binary-log.html.

    Since the binlog contains all of the events that happen to the database, you can use the binlog to apply these same changes to a different database. If you started your MySQL instance with the binlogs enabled, and you have never flushed the binlogs, then the binlogs contain all of the SQL statements for all of the data that is in your database. The binlog itself is like a backup of your database.

    If you want to use the binary logs in addition to mysqldump to restore your database, you need to have the binary logs (binlogs) enabled. There are many options for the binlogs (see http://dev.mysql.com/doc/refman/5.6/en/replication-options-binary-log.html), but the only two that you really need for this example are:

        --log-bin[=base_name]
        --log-bin-index[=file_name]

    One other option is --binlog-format. You can set this value to STATEMENT (the default), ROW, or MIXED. For more information about these options, see http://dev.mysql.com/doc/refman/5.6/en/replication-options-binary-log.html#sysvar_binlog_format. These variables need to go into your my.cnf or my.ini file under the [mysqld] section, and changing them requires a restart of mysqld; a sketch of the configuration follows this list.

    Once you have the binary log enabled, you will need to do a few things differently when you use mysqldump. You will need to:
    - flush the tables and place a READ lock on the tables
    - check to see which binlog is being used
    - check the position of the binlog
    - dump your data with mysqldump
    - release the lock

    By placing a read lock on the tables, you are stopping anyone from modifying the data in the database.
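    A minimal sketch of enabling the binary log as described above; the base name and format values are illustrative, and the service name assumes a stock Linux-package install:

        # In my.cnf (or my.ini), under the [mysqld] section:
        #   [mysqld]
        #   log-bin        = mysql-bin
        #   log-bin-index  = mysql-bin.index
        #   binlog-format  = STATEMENT

        # Restart mysqld so the settings take effect, then confirm logging is on.
        sudo service mysql restart
        mysql -uroot -p -e "SHOW VARIABLES LIKE 'log_bin'"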
    The binlog file name and position will allow you to use the binary logs to replay any statements that happened after the mysqldump. Open two terminal windows: one with a MySQL prompt, and another with a root prompt.

    In the MySQL prompt, issue the READ lock and SHOW MASTER STATUS:

        mysql> FLUSH TABLES WITH READ LOCK;
        Query OK, 0 rows affected (0.47 sec)

        mysql> SHOW MASTER STATUS\G
        *************************** 1. row ***************************
                     File: mysql-bin.000008
                 Position: 191
             Binlog_Do_DB:
         Binlog_Ignore_DB:
        Executed_Gtid_Set:
        1 row in set (0.00 sec)

    Now you are ready to dump the database with whatever options you need:

        # mysqldump --all-databases > db_000008_191_dump.sql

    Once the dump has finished, you can release the lock:

        mysql> UNLOCK TABLES;
        Query OK, 0 rows affected (0.00 sec)

    It is extremely important that you write down the binlog file and position from the SHOW MASTER STATUS statement and somehow associate them with the dump file. One way to do this is to insert the binlog file name and position into the dump file name. In my example above, I did this by adding both to the dump file name db_000008_191_dump.sql.

    When you have to restore the database, you will need to load the dump file first, and then apply the binlog(s) to the database. Let's assume that we need to restore the entire database. First, we will import the dump file:

        # mysql -uroot -p < db_000008_191_dump.sql

    Next, you will need to load the information from the binlog(s). To load information from the binlogs, you need to use the mysqlbinlog utility. You can check your MySQL data directory to see how many binlogs you have used since the one that was listed in the SHOW MASTER STATUS statement:

        $ ls -l mysql-bin*
        -rw-rw----  1 mysql  wheel    67110 Apr  4 16:22 mysql-bin.000001
        -rw-rw----  1 mysql  wheel  1230893 Apr  4 16:24 mysql-bin.000002
        -rw-rw----  1 mysql  wheel 13383907 Apr  4 17:03 mysql-bin.000003
        -rw-rw----  1 mysql  wheel 13383907 Apr  4 19:03 mysql-bin.000004
        -rw-rw----  1 mysql  wheel 13383907 Apr  4 19:07 mysql-bin.000005
        -rw-rw----  1 mysql  wheel 13383907 Apr 18 16:48 mysql-bin.000006
        -rw-rw----  1 mysql  wheel 13383907 Apr 21 13:37 mysql-bin.000007
        -rw-rw----  1 mysql  wheel 13383907 Apr 21 13:37 mysql-bin.000008
        -rw-rw----  1 mysql  wheel   154847 Apr 21 13:37 mysql-bin.000009
        -rw-rw----  1 mysql  wheel      171 Apr 21 13:37 mysql-bin.index

    You can also just look at the mysql-bin.index file (located in your MySQL data directory), which contains a list of all of the binary log files in use:

        # cat mysql-bin.index
        ./mysql-bin.000001
        ./mysql-bin.000002
        ./mysql-bin.000003
        ./mysql-bin.000004
        ./mysql-bin.000005
        ./mysql-bin.000006
        ./mysql-bin.000007
        ./mysql-bin.000008
        ./mysql-bin.000009

    In this example, we will need to apply the changes from the binlog file mysql-bin.000008 after position 191, and then all of the mysql-bin.000009 binlog. You will need to add the correct data directory PATH to your mysqlbinlog statement:

        mysqlbinlog --start-position=191 $DATA_DIR_PATH/mysql-bin.000008 | mysql -u root -p

    After you have applied all of mysql-bin.000008 after position 191, you can apply the entire mysql-bin.000009 binlog file:

        mysqlbinlog $DATA_DIR_PATH/mysql-bin.000009 | mysql -u root -p

    Note: During the restore process, you do not want anyone inserting any data into the database. Your database should now be back to the state when the database crashed or was deleted.

    It isn't mandatory, but it is also a good idea to copy the binlogs to a separate location as part of your backup routine.
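    The whole recovery sequence above, collected into one minimal sketch (the file names and position come from the example; the DATA_DIR_PATH value is an assumption for wherever your binlogs live):

        DATA_DIR_PATH=/var/lib/mysql   # adjust to your data directory

        # 1. Load the full dump taken at binlog file mysql-bin.000008, position 191.
        mysql -uroot -p < db_000008_191_dump.sql

        # 2. Replay the remainder of the binlog that was current at dump time.
        mysqlbinlog --start-position=191 $DATA_DIR_PATH/mysql-bin.000008 | mysql -uroot -p

        # 3. Replay every later binlog in full, in order.
        mysqlbinlog $DATA_DIR_PATH/mysql-bin.000009 | mysql -uroot -p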
    You can use mysqlbinlog to do this as well; see http://dev.mysql.com/doc/refman/5.6/en/mysqlbinlog-backup.html and the sketch below. For more information on using the binary logs for point-in-time recovery, see https://dev.mysql.com/doc/refman/5.6/en/point-in-time-recovery.html. There are a lot of other options for using binlogs. The best option for backing up and restoring your MySQL database is to use MySQL Enterprise Backup (mysqlbackup), which is part of the MySQL Enterprise Edition subscription, which includes 24×7 MySQL Support and the other Enterprise features.

    Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world's most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.
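    A minimal sketch of copying binlogs with mysqlbinlog, per the manual page linked above (host and credentials are placeholders):

        # Connect to the server as a remote client and copy its binary logs in
        # their raw binary form, from the given file through the last log.
        mysqlbinlog --read-from-remote-server --host=db-server --user=root --password \
            --raw --to-last-log mysql-bin.000008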