Newsfeeds
Planet MySQL
Planet MySQL - http://www.planetmysql.org/

  • Trawling the binlog with FlexCDC and new FlexCDC plugins for MySQL
    Swanhart-Tools includes FlexCDC, a change data capture tool for MySQL. FlexCDC follows a server’s binary log and usually writes “changelogs” that track the changes to tables in the database. I say usually because the latest version of Swanhart-Tools (only in GitHub for now) supports FlexCDC plugins, which allow you to send the updates to a remote data source, or to any other place of your liking. You can find out more about FlexCDC basics in a previous blog post.
    Please note that FlexCDC still needs to have source and destination instances defined in the configuration, even when using plugins. This is because the FlexCDC state (how far into which binary log FlexCDC has progressed, and which tables are being captured) is stored in the “dest”. Normally when using a plugin, the source and destination instances will be the same. FlexCDC will create a ‘flexviews’ database with a number of state tables in the destination instance. This also means you still have to use the create_mvlog.php/add_table.php scripts or Flexviews’ create_mvlog(…) procedure to mark which tables to capture! See the previous blog post about FlexCDC.
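    For illustration, marking a table for capture from the MySQL prompt might look like the call below. This is a hedged sketch: the post only names the create_mvlog(…) procedure, and the argument order (schema first, then table) is my assumption; the schema and table names are the test.t3 table used later in this post.

        -- run against the instance that holds the 'flexviews' database
        CALL flexviews.create_mvlog('test', 't3');
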
    When you create the mvlog, there will still be a changelog table created in the dest, just like when not using a plugin. This is because the INFORMATION_SCHEMA is used to get column datatypes and additional information (such as whether an int is signed or unsigned), and this lookup is done against the table in the dest. The reason this is needed is that mysqlbinlog, the utility used to scrape the binlog, produces strange output for large signed integers (it provides both the signed and unsigned version), so FlexCDC must figure out the right one to choose from the actual DDL of the changelog table. FlexCDC can’t look at the DDL of the source table, though, because the consumer may be behind and the current structure may not match the structure of the rows in the log.
    The new plugin system allows you to do a lot of nifty things like:
      - Replicate to external databases
      - Publish changes to a message queue (this is like Facebook’s Wormhole)
      - Keep a remote cache server in sync
      - and more…
    The latest version of Swanhart-Tools includes an example plugin (in flexviews/consumer/include/example_plugin.php) that simply prints the events that come through it, not logging them into the changelog table at all. There is an example of the output at the end of the post. The example plugin looks like this:

        <?php
        class FlexCDC_Plugin {
            static function begin_trx($uow_id, $gsn) {
                echo "START TRANSACTION: trx_id: $uow_id, Prev GSN: $gsn";
            }
            static function commit_trx($uow_id, $gsn) {
                echo "COMMIT: trx_id: $uow_id, Last GSN: $gsn";
            }
            static function rollback_trx($uow_id) {
                echo "ROLLBACK: trx_id: $uow_id";
            }
            static function insert($row, $db, $table, $trx_id, $gsn) {
                echo "TRX_ID: $trx_id, Schema:$db, Table: $table, DML: INSERT, AT: $gsn";
                print_r($row);
            }
            static function delete($row, $db, $table, $trx_id, $gsn) {
                echo "TRX_ID: $trx_id, Schema:$db, Table: $table, DML: DELETE, AT: $gsn";
                print_r($row);
            }
            static function update_before($row, $db, $table, $trx_id, $gsn) {
                echo "TRX_ID: $trx_id, Schema:$db, Table: $table, DML: UPDATE (OLD), AT: $gsn";
                print_r($row);
            }
            static function update_after($row, $db, $table, $trx_id, $gsn) {
                echo "TRX_ID: $trx_id, Schema:$db, Table: $table, DML: UPDATE (NEW), AT: $gsn";
                print_r($row);
            }
        }

    Important note: You must define all seven of these functions in your plugin, even if you do not plan to have actions for each of the callbacks – just leave the function body empty to do no action (the call is simply a no-op in that case). Note that the plugin functions must be declared STATIC. This is due to the way that FlexCDC calls the functions.
    Transaction state callbacks
    There are three callback functions which notify the plugin of changes in transaction state. Before I go into what they do, I want to note the $trx_id and $gsn parameters which are present in every callback. Each transaction is assigned a monotonically increasing transaction identifier. This sequence uniquely identifies each transaction that FlexCDC processes. In addition, each row change is assigned a unique sequence number which FlexCDC calls the Generic Sequence Number (or GSN).
    As you can see, the begin_trx(…) callback (called when a transaction starts) is passed both the new transaction number and also the highest GSN used in the previous transaction. This is called the GSN high water mark (GSNHWM). At transaction commit, the commit_trx(…) callback is called and the transaction id and the last GSN assigned in the transaction are passed into the callback. This same value will appear as the GSNHWM in the next begin_trx(…) callback. Finally, at rollback any sequence numbers assigned in that transaction will be re-used, so no GSN is passed to the rollback callback, but a transaction id is, which lets you determine exactly which transaction is rolling back.
    Row change callbacks
    Each of the four row change callback functions captures a particular change to the data set. Each of the functions takes five parameters. The first ($row) is an array which contains the row being acted upon. The second ($db) is the schema which contains the row. The third ($table) is the table that contains the row. Each callback also receives the transaction identifier, and of course, each row change is assigned a unique GSN.
    For example: an update will fire both update_before(…) and update_after(…) callbacks with the row images before and after the change, respectively. There is an example of this at the end of the post.
    Configuring FlexCDC to use a plugin
    FlexCDC uses a configuration file called consumer.ini by default. To the [flexcdc] section add: plugin=plugin_file.php
    The plugin must be in the FlexCDC include/ directory.
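    A hedged sketch of what that consumer.ini change looks like (only the plugin line comes from the post; the comment placeholder stands in for whatever source/dest connection settings your existing consumer.ini already has):

        [flexcdc]
        ; ... your existing FlexCDC settings (source/dest connection options) stay as-is ...
        plugin=example_plugin.php
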
    You will find example_plugin.php in this directory, to serve as an example.
    How it works
    Flexviews uses mysqlbinlog to decode the binary log from the source server. It uses the --base64-output=DECODE-ROWS (together with --verbose) options to decode RBR into a format which can be parsed by an external utility. FlexCDC collects information about each transaction and the row changes that happen in the database (which means it requires ROW-based binary logging to be used). When a plugin is defined, the normal actions used by FlexCDC are overridden with the callback functions.
    Here is the output from the example plugin, for an update that affected 3 rows (update test.t3 set c1 = c1 - 1):

        START TRANSACTION: trx_id: 44, Prev GSN: 107
        TRX_ID: 44, Schema:test, Table: t3, DML: UPDATE (OLD), AT: 108
        Array ( [c1] => -3 [c2] => 1 )
        TRX_ID: 44, Schema:test, Table: t3, DML: UPDATE (NEW), AT: 109
        Array ( [c1] => -4 [c2] => 1 )
        TRX_ID: 44, Schema:test, Table: t3, DML: UPDATE (OLD), AT: 110
        Array ( [c1] => -5 [c2] => 2 )
        TRX_ID: 44, Schema:test, Table: t3, DML: UPDATE (NEW), AT: 111
        Array ( [c1] => -6 [c2] => 2 )
        TRX_ID: 44, Schema:test, Table: t3, DML: UPDATE (OLD), AT: 112
        Array ( [c1] => -5 [c2] => 2 )
        TRX_ID: 44, Schema:test, Table: t3, DML: UPDATE (NEW), AT: 113
        Array ( [c1] => -6 [c2] => 2 )
        COMMIT: trx_id: 44, Last GSN: 113

    One thing you should notice is that FlexCDC provides column names for the data coming from the binary log. This is because the log table exists in the dest instance and FlexCDC can get the list of column names from there. When you use other CDC tools, like the C binlog API, you don’t get column names.
    The post Trawling the binlog with FlexCDC and new FlexCDC plugins for MySQL appeared first on MySQL Performance Blog.
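    Building on the callback interface documented above, here is a minimal plugin sketch that appends each event as a JSON line to a log file instead of printing it. This is not from the post: the file path and the write_event() helper are my own invention, and a real plugin for a message queue or remote cache would replace the helper body with the relevant client calls.

        <?php
        // Hypothetical plugin sketch: logs FlexCDC events as JSON lines to an arbitrary example path.
        class FlexCDC_Plugin {
            private static function write_event($type, $data) {
                // Append one JSON document per line.
                file_put_contents('/tmp/flexcdc_events.log',
                    json_encode(array('type' => $type) + $data) . "\n",
                    FILE_APPEND);
            }
            static function begin_trx($uow_id, $gsn) {
                self::write_event('begin', array('trx_id' => $uow_id, 'prev_gsn' => $gsn));
            }
            static function commit_trx($uow_id, $gsn) {
                self::write_event('commit', array('trx_id' => $uow_id, 'last_gsn' => $gsn));
            }
            static function rollback_trx($uow_id) {
                self::write_event('rollback', array('trx_id' => $uow_id));
            }
            static function insert($row, $db, $table, $trx_id, $gsn) {
                self::write_event('insert', array('trx_id' => $trx_id, 'gsn' => $gsn,
                    'schema' => $db, 'table' => $table, 'row' => $row));
            }
            static function delete($row, $db, $table, $trx_id, $gsn) {
                self::write_event('delete', array('trx_id' => $trx_id, 'gsn' => $gsn,
                    'schema' => $db, 'table' => $table, 'row' => $row));
            }
            static function update_before($row, $db, $table, $trx_id, $gsn) {
                self::write_event('update_old', array('trx_id' => $trx_id, 'gsn' => $gsn,
                    'schema' => $db, 'table' => $table, 'row' => $row));
            }
            static function update_after($row, $db, $table, $trx_id, $gsn) {
                self::write_event('update_new', array('trx_id' => $trx_id, 'gsn' => $gsn,
                    'schema' => $db, 'table' => $table, 'row' => $row));
            }
        }

    As with the example plugin, all seven callbacks are defined (and static) even where the action is trivial; the file would be dropped into the FlexCDC include/ directory and referenced from the [flexcdc] section as plugin=<your_file>.php.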

  • The InnoDB mutex
    InnoDB provides a custom mutex and rw-lock implementation. I wondered how the InnoDB mutex performance compares to a pthread mutex on modern hardware and finally had time to test it. My test client is innotsim and includes code copied from InnoDB in MySQL 5.6. There is a lot of work-in-progress for InnoDB mutexes in 5.7 and I have yet to catch up to that work.
    Summary
    Generally, the performance is more constant for pthread mutex and more a function of the number of threads for the InnoDB mutex. The interesting result is the point (number of threads in the test) at which the lines cross and the InnoDB mutex changes from being faster to slower than pthread mutexes. There are a few performance metrics that matter. I only report on two of them -- the time per loop iteration, which includes the overhead of (in)efficient mutex code, and the CPU utilization during the test. I don't report on fairness here. Another interesting question for which I provide data but not analysis is whether the performance improvement from using a busy-wait loop in a mutex to wait for unlock is worth the additional CPU & power consumption.
    My conclusions:
      - For low thread counts the InnoDB mutex had a better response time than the pthread variants.
      - The InnoDB mutex does much worse with many threads per mutex. This is probably due to code in InnoDB that wakes all threads waiting for a mutex lock when the mutex is unlocked.
      - The crossing point (number of threads) at which pthread mutexes outperform the InnoDB mutex increased as the lock hold time increased.
      - Performance with the default pthread mutex was usually better than with the adaptive pthread mutex.
      - Assuming the vmstat CPU counters (us, sy, id) are correct, there was rarely any idle time for the tests with zero and 1000 nsec lock hold times. This means that all CPUs were saturated even for the test with a 1000 nsec lock hold time and 1024 threads competing for one mutex.
      - There was significant idle time for the 4000 nsec lock hold test with the pthread mutexes but not with the InnoDB mutex. In that case InnoDB is using much more CPU than it should.
      - I didn't include stats for it, but innotsim has a custom mutex that does a busy-wait and then calls pthread_mutex_trylock. There is a variant that uses either a global counter or a per-mutex counter to limit the max number of threads that can spin concurrently. While the option to limit the max number of spinning threads helped performance at high concurrency, it otherwise hurt performance. The likely cause is the overhead (memory transfers) from maintaining the extra counters.
    InnoDB mutex
    The custom InnoDB mutex adds value in terms of behavior (monitoring, debugging, correctness) but that might come at a cost in performance. I suspect that the value-added behavior could be implemented on something lighter weight using a pthread mutex. On x86-64 + Linux the InnoDB mutex uses a lock word and atomic instructions. When the lock word is already set, a busy-wait loop is done for up to ~50 usecs. The busy-wait time is configurable, but doesn't adapt per mutex. When that limit is reached a slot is reserved in the sync array, and this requires a short-term lock/unlock of the sync array pthread mutex. Then the lock word is checked again and if still set the thread sleeps on an InnoDB os_event_t (pthread_cond_t + pthread_mutex_t) after storing a pointer to the os_event_t in the sync array. When the InnoDB mutex is unlocked, a broadcast is done on the pthread_cond_t in the os_event_t.
    This broadcast can wake all threads trying to lock the InnoDB mutex, but only one will get to lock it. I assume this code arrived in the mid-90s when SMP systems were less common, so the performance overhead from it wasn't a big deal. It also provides useful behavior. A few more comments on this:
      - Waking all threads waiting to lock the mutex when it is unlocked doesn't scale as the number of waiters increases. If all waiting threads are scheduled to run from the broadcast then this can have an O(N*N) CPU overhead for N threads. The overhead from this is visible in the graphs below.
      - A busy-wait loop is done that uses the PAUSE instruction on x86. This gives a waiting thread a chance to get the lock without sleeping, assuming the lock hold time is not too long. The wait time is configurable for InnoDB but it is not adaptive. There is a pthread mutex option on Linux that provides an adaptive, but not configurable, busy-wait loop. The busy-wait loop means that more user CPU time can be consumed. InnoDB does not have a limit on the max number of threads that can spin concurrently. I assume that the pthread adaptive mutex also lacks that. It might not be a good thing to have 1000 threads spinning in the busy-wait loop. For some of the configurations tested below, limiting the max number of threads that spin was useful. I also don't know if the adaptive pthread mutex on Linux uses the PAUSE instruction and I haven't looked at current source code for glibc.
      - Threads wait on the InnoDB sync array. A background thread occasionally scans this sync array to find threads that have been waiting too long (many minutes) and will kill the server when one is found, because that should not happen. Long waits for row locks could happen, but those don't use the sync array. Were pthread mutex used directly, then each thread could use a loop of pthread_mutex_timedlock calls, assuming that function is generally available. Otherwise, each InnoDB mutex could contain a pthread condition variable and then do timed sleeps on that. But using a condition variable means we again confront the problem of broadcast and waking all waiters on unlock.
      - Monitoring commands can display the threads that wait for each mutex. This has been very useful for debugging rare problems.
      - The mutex includes fields for the filename and line# of the caller that last locked it. Back in the day this was extremely useful to debug InnoDB.
      - Performance counters have been added to the InnoDB mutex to count the number of lock requests, the number of requests that require a busy-wait loop and the number that require use of the sync array. However, with the introduction of the performance schema to upstream MySQL these counters might get removed.
    Test setup
    The test servers have 2 sockets, 20/40 CPUs/vCPUs (HT is enabled) and ran Linux 3.10. The test client is innotsim, compiled with gcc 4.4.6, and the test server used glibc 2.12. We have a talented Linux kernel team, and I have been working with them to repeat tests on several versions of our 3.10 branch as performance bugs are fixed.
    The innotsim client includes the mutex code extracted from InnoDB with a few changes to remove performance counters. innotsim is configured to use N threads and M mutexes. Each thread does J iterations of the benchmark work loop, where each iteration locks a mutex, increments a per-mutex counter, does Y iterations of a work loop to simulate holding the lock and then unlocks the mutex. Tests used several mutex implementations: InnoDB, pthread adaptive and pthread default.
    Tests were repeated for the cross product of:
      - 1, 2, 4, 8, 16 mutexes. Threads were evenly distributed across the mutexes. Threads were not pinned to cores. With 16 mutexes there was much less contention.
      - 1, 2, 4, 8, 16, 20, 24, 36, 40, 48, 64, 128, 256, 512, 1024 threads
      - Lock hold times of 0, 1000 and 4000 nsecs
    Graphs for 0 nsec lock hold
    The graphs below display performance (nanoseconds per loop iteration) for the tests that use a 0 nanosecond lock hold duration and 1, 4 and 16 mutexes. In practice the lock hold duration is greater than 0 because a per-mutex counter is also incremented while the lock is held, and that adds latency from memory system transfers. Note that:
      - InnoDB degrades significantly at high concurrency, which is most obvious when all (1024) threads share one mutex.
      - InnoDB does better than pthread adaptive & pthread default at low concurrency.
      - pthread default does better than pthread adaptive at all thread counts.
    Graphs for 1000 nsec lock hold
    The graphs below display performance (nanoseconds per loop iteration) for the tests that use a ~1000 nanosecond lock hold duration and 1, 4 and 16 mutexes. Note that:
      - InnoDB degrades significantly at high concurrency, which is most obvious when all (1024) threads share one mutex.
      - The point at which pthread does better than InnoDB has shifted to the right (a higher thread count is required).
      - pthread default and pthread adaptive are similar.
    Graphs for 4000 nsec lock hold
    The graphs below display performance (nanoseconds per loop iteration) for the tests that use a ~4000 nanosecond lock hold duration and 1, 4 and 16 mutexes. Results are similar to the 1000 nsec lock hold case.
    CPU overhead
    The question for which I am still working on an answer is whether the extra performance from the busy-wait loop in InnoDB and pthread adaptive is worth the CPU overhead and extra context switches. The innotsim client has a mutex variation that uses the busy-wait loop but limits the max number of threads that can spin. That is an attempt to get the benefit from busy-wait with less overhead. Perhaps I will return to that. For now I will share sample vmstat output during the tests. I collected it at 1-second intervals and share data from one of the samples for each of the test configurations.
    This is vmstat output from the 0 nsec lock hold tests at 1024 threads.
    The columns are:
      cs - context switch rate
      us - %user CPU time
      sy - %system CPU time
      id - %idle CPU time

    InnoDB mutex
    #mutexes        cs - us - sy - id
           1    444925 - 27 - 71 -  2
           2    446279 - 25 - 75 -  0
           4    408583 - 36 - 64 -  0
           8    223385 - 82 - 18 -  0
          16     75598 - 97 -  3 -  0

    pthread adaptive mutex
    #mutexes        cs - us - sy - id
           1     73605 -  8 - 92 -  0
           2    203305 - 24 - 76 -  0
           4    614610 - 49 - 51 -  0
           8    659100 - 72 - 28 -  0
          16    343300 - 87 - 13 -  0

    pthread default mutex
    #mutexes        cs - us - sy - id
           1     48735 -  3 - 97 -  0
           2    141863 -  5 - 95 -  0
           4    337363 - 10 - 90 -  0
           8    853007 - 18 - 82 -  0
          16    966979 - 45 - 55 -  0

    And this is vmstat output from the 1000 nsec lock hold tests at 1024 threads.

    InnoDB mutex
    #mutexes        cs - us - sy - id
           1    452304 - 24 - 75 -  1
           2    441478 - 21 - 79 -  0
           4    446445 - 23 - 77 -  0
           8    434083 - 32 - 68 -  0
          16    411418 - 43 - 57 -  0

    pthread adaptive mutex
    #mutexes        cs - us - sy - id
           1    102700 -  5 - 95 -  0
           2    249238 - 16 - 84 -  0
           4    718922 - 30 - 70 -  0
           8   1172786 - 55 - 45 -  0
          16   1017120 - 76 - 24 -  0

    pthread default mutex
    #mutexes        cs - us - sy - id
           1     98288 -  2 - 98 -  0
           2    204332 -  4 - 96 -  0
           4    580350 -  9 - 91 -  0
           8   1050830 - 17 - 83 -  0
          16   2711492 - 28 - 72 -  0

    And vmstat output from the 4000 nsec lock hold tests at 1024 threads. Note that there is significant idle time for the pthread mutexes but not the InnoDB mutex.

    InnoDB mutex
    #mutexes        cs - us - sy - id
           1    455946 - 28 - 70 -  1
           2    449311 - 24 - 76 -  0
           4    443422 - 26 - 74 -  0
           8    444105 - 39 - 61 -  0
          16    461858 - 62 - 38 -  0

    pthread adaptive mutex
    #mutexes        cs - us - sy - id
           1    318129 -  2 -  1 - 96
           2    645815 -  5 -  3 - 92
           4   1222591 -  9 -  6 - 84
           8   2300555 - 17 - 10 - 72
          16   1806223 - 39 - 11 - 50

    pthread default mutex
    #mutexes        cs - us - sy - id
           1    312771 -  3 -  2 - 96
           2    639662 -  4 -  3 - 93
           4   1224032 -  9 -  6 - 85
           8   2298452 - 19 - 11 - 70
          16   1892771 - 39 -  9 - 52

  • Query Sphinx with Node.js
    You probably already know that Sphinx supports the MySQL binary network protocol. But if you haven’t heard, Sphinx can be accessed with the regular ol’ MySQL API. Pretty much any language you use to query MySQL can be used to query Sphinx. So, in this post, to demonstrate how using ‘SphinxQL’ (our subset of SQL) makes [...]

  • Replicating from MySQL to Amazon Redshift
    Continuent is delighted to announce an exciting Continuent Tungsten feature addition for MySQL users: real-time replication from MySQL into Amazon Redshift. In this webinar we'll survey Continuent Tungsten capabilities for data warehouse loading, then zero in on practical details of setting up replication from MySQL into Redshift. We'll cover: Introduction to real-time movement from [...]

  • mysqld_multi: How to run multiple instances of MySQL
    The need to have multiple instances of MySQL (the well-known mysqld process) running in the same server concurrently in a transparent way, instead of having them executed in separate containers/virtual machines, is not very common. Yet from time to time the Percona Support team receives a request from a customer to assist in the configuration of such an environment. MySQL provides a tool to facilitate the execution of multiple instances called mysqld_multi:
    “mysqld_multi is designed to manage several mysqld processes that listen for connections on different Unix socket files and TCP/IP ports. It can start or stop servers, or report their current status.”
    For tests and development purposes, MySQL Sandbox might be more practical and I personally prefer to use it for my own tests. Both tools work around launching and managing multiple mysqld processes, but Sandbox has, as the name suggests, a “sandbox” approach, making it easy to both create and dispose of a new instance (including all data inside it). It is more usual to see mysqld_multi being used in production servers: it’s provided with the server package and uses the same single configuration file that people are used to looking for when setting up MySQL. So, how does it work? How do we configure and manage the instances? And, as importantly, how do we back up all the instances we create?
    Understanding the concept of groups in my.cnf
    You may have noticed already that MySQL’s main configuration file (or “option file”), my.cnf, is arranged under what are called group structures: sections defining configuration options specific to a given program or purpose. Usually, the program itself gives name to the group, which appears enclosed by brackets. Here’s a basic my.cnf showing three such groups:

        [client]
        port     = 3306
        socket   = /var/run/mysqld/mysqld.sock
        user     = john
        password = p455w0rd

        [mysqld]
        user     = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket   = /var/run/mysqld/mysqld.sock
        port     = 3306
        datadir  = /var/lib/mysql

        [xtrabackup]
        target_dir = /backups/mysql/

    The options defined in the group [client] above are used by the mysql command-line tool. As such, if you don’t specify any other option when executing mysql it will attempt to connect to the local MySQL server through the socket in /var/run/mysqld/mysqld.sock and using the credentials stated in that group. Similarly, mysqld will look for the options defined under its section at startup, and the same happens with Percona XtraBackup when you run a backup with that tool. However, the operating parameters defined by the above groups may also be stated as command-line options during the execution of the program, in which case they replace the ones defined in my.cnf.
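    For example (a hedged illustration, not from the original post; “appuser” is a made-up account name), running the client like this overrides the user and password stored under [client] for that one invocation, while still picking up the socket from the group:

        # -p makes the client prompt for a password instead of using the one in my.cnf
        $ mysql --user=appuser -p
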
    Getting started with multiple instances
    To have multiple instances of MySQL running we must replace the [mysqld] group in the my.cnf configuration file with as many [mysqldN] groups as we want instances running, with “N” being a positive integer, also called the option group number. This number is used by mysqld_multi to identify each instance, so it must be unique across the server. Apart from the distinct group name, the same options that are valid for [mysqld] apply to [mysqldN] groups, the difference being that while stating them is optional for [mysqld] (it’s possible to start MySQL with an empty my.cnf, as default values are used if not explicitly provided), some of them (like socket, port, pid-file and datadir) are mandatory when defining multiple instances – so they don’t step on each other’s feet. Here’s a simple modified my.cnf showing the original [mysqld] group plus two other instances:

        [mysqld]
        user     = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket   = /var/run/mysqld/mysqld.sock
        port     = 3306
        datadir  = /var/lib/mysql

        [mysqld1]
        user     = mysql
        pid-file = /var/run/mysqld/mysqld1.pid
        socket   = /var/run/mysqld/mysqld1.sock
        port     = 3307
        datadir  = /data/mysql/mysql1

        [mysqld7]
        user     = mysql
        pid-file = /var/run/mysqld/mysqld7.pid
        socket   = /var/run/mysqld/mysqld7.sock
        port     = 3308
        datadir  = /data/mysql/mysql7

    Besides using different pid files, ports and sockets for the new instances I’ve also defined a different datadir for each – it’s very important that the instances do not share the same datadir. Chances are you’re importing the data from a backup, but if that’s not the case you can simply use mysql_install_db to create each additional datadir (but make sure the parent directory exists and that the mysql user has write access to it):

        mysql_install_db --user=mysql --datadir=/data/mysql/mysql7

    Note that if /data/mysql/mysql7 doesn’t exist and you start this instance anyway, then mysqld_multi will call mysql_install_db itself to have the datadir created and the system tables installed inside it. As an alternative to restoring a backup or having a new datadir created, you can make a physical copy of the existing one from the main instance – just make sure to stop it first with a clean shutdown, so any pending changes are flushed to disk.
    Now, you may have noted I wrote above that you need to replace your original MySQL instance group ([mysqld]) with one with an option group number ([mysqldN]). That’s not entirely true, as they can co-exist in harmony. However, the usual start/stop script used to manage MySQL won’t work with the additional instances, nor does mysqld_multi really manage [mysqld]. The simple solution here is to have the group [mysqld] renamed with a suffix integer, say [mysqld0] (you don’t need to make any changes to its current options), and let mysqld_multi manage all instances.
    Two commands you might find useful when configuring multiple instances are:

        $ mysqld_multi --example

    …which provides an example of a my.cnf file configured with multiple instances and showing the use of different options, and:

        $ my_print_defaults --defaults-file=/etc/my.cnf mysqld7

    …which shows how a given group (“mysqld7” in the example above) was defined within my.cnf.
    Managing multiple instances
    mysqld_multi allows you to start, stop, reload (which is effectively a restart) and report the current status of a given instance, all instances or a subset of them. The most important observation here is that the “stop” action is managed through mysqladmin – and internally that happens on an individual basis, with one “mysqladmin … stop” call per instance, even if you have mysqld_multi stop all of them. For this to work properly you need to set up a MySQL account with the SHUTDOWN privilege, defined with the same user name and password in all instances. Yes, it will work out of the box if you run mysqld_multi as root in a freshly installed server where the root user can access MySQL passwordless in all instances.
    But as the manual suggests, it’s better to have a specific account created for this purpose:

        mysql> GRANT SHUTDOWN ON *.* TO 'multi_admin'@'localhost' IDENTIFIED BY 'multipass';
        mysql> FLUSH PRIVILEGES;

    If you plan on replicating the datadir of the main server across your other instances you can have that account created before you make copies of it; otherwise you just need to connect to each instance and create a similar account (remember, the privileged account is only needed by mysqld_multi to stop the instances, not to start them). There’s a special group that can be used in my.cnf to define options for mysqld_multi, which should be used to store these credentials. You might also indicate in there the path for the mysqladmin and mysqld (or mysqld_safe) binaries to use, though you might have a specific mysqld binary defined for each instance inside its respective group. Here’s one example:

        [mysqld_multi]
        mysqld     = /usr/bin/mysqld_safe
        mysqladmin = /usr/bin/mysqladmin
        user       = multi_admin
        password   = multipass

    You can use mysqld_multi to start, stop, restart or report the status of a particular instance, all instances or a subset of them. Here are a few examples that speak for themselves:

        $ mysqld_multi report
        Reporting MySQL (Percona Server) servers
        MySQL (Percona Server) from group: mysqld0 is not running
        MySQL (Percona Server) from group: mysqld1 is not running
        MySQL (Percona Server) from group: mysqld7 is not running
        $ mysqld_multi start
        $ mysqld_multi report
        Reporting MySQL (Percona Server) servers
        MySQL (Percona Server) from group: mysqld0 is running
        MySQL (Percona Server) from group: mysqld1 is running
        MySQL (Percona Server) from group: mysqld7 is running
        $ mysqld_multi stop 7,0
        $ mysqld_multi report 7
        Reporting MySQL (Percona Server) servers
        MySQL (Percona Server) from group: mysqld7 is not running
        $ mysqld_multi report
        Reporting MySQL (Percona Server) servers
        MySQL (Percona Server) from group: mysqld0 is not running
        MySQL (Percona Server) from group: mysqld1 is running
        MySQL (Percona Server) from group: mysqld7 is not running

    Managing the MySQL daemon
    What is missing here is an init script to automate the start/stop of all instances upon server initialization/shutdown; now that we use mysqld_multi to control the instances, the usual /etc/init.d/mysql won’t work anymore. But a similar startup script (though much simpler and less robust) relying on mysqld_multi is provided alongside MySQL/Percona Server, which can be found in /usr/share/<mysql|percona-server>/mysqld_multi.server. You can simply copy it over as /etc/init.d/mysql, effectively replacing the original script while maintaining its name. Please note: you may need to edit it first and modify the first two lines defining “basedir” and “bindir”, as this script was not designed to figure out working values for these variables on its own, which the original single-instance /etc/init.d/mysql does. Considering you probably have mysqld_multi installed in /usr/bin, setting these variables as follows is enough:

        basedir=/usr
        bindir=/usr/bin
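    A hedged sketch of putting that script in place (not from the original post; the percona-server path variant and the service-registration command depend on your distribution and are assumptions on my part):

        $ cp /usr/share/percona-server/mysqld_multi.server /etc/init.d/mysql
        $ chmod +x /etc/init.d/mysql
        $ chkconfig mysql on    # RHEL/CentOS; on Debian/Ubuntu use: update-rc.d mysql defaults
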
    Configuring an instance with a different version of MySQL
    If you’re planning to have multiple instances of MySQL running concurrently, chances are you want to use a mix of different versions for each of them, such as during a development cycle to test an application’s compatibility. This is a common use for mysqld_multi, and simple enough to achieve. To showcase its use I downloaded the latest version of MySQL 5.6 available and extracted the TAR file into /opt:

        $ tar -zxvf mysql-5.6.20-linux-glibc2.5-x86_64.tar.gz -C /opt

    Then I made a cold copy of the datadir from one of the existing instances to /data/mysql/mysql5620:

        $ mysqld_multi stop 1
        $ cp -r /data/mysql/mysql1 /data/mysql/mysql5620
        $ chown mysql:mysql -R /data/mysql/mysql5620

    and added a new group to my.cnf as follows:

        [mysqld5620]
        user     = mysql
        pid-file = /var/run/mysqld/mysqld5620.pid
        socket   = /var/run/mysqld/mysqld5620.sock
        port     = 3309
        datadir  = /data/mysql/mysql5620
        basedir  = /opt/mysql-5.6.20-linux-glibc2.5-x86_64
        mysqld   = /opt/mysql-5.6.20-linux-glibc2.5-x86_64/bin/mysqld_safe

    Note the use of basedir, pointing to the path where the binaries for MySQL 5.6.20 were extracted, as well as a specific mysqld to be used with this instance. If you have made a copy of the datadir from an instance running a previous version of MySQL/Percona Server you will need to follow the same approach used when upgrading and run mysql_upgrade.
    * I did try to use the latest experimental release of MySQL 5.7 (mysql-5.7.4-m14-linux-glibc2.5-x86_64.tar.gz) but it crashed with:

        *** glibc detected *** bin/mysqld: double free or corruption (!prev): 0x0000000003627650 ***

    Using the conventional tools to start and stop an instance
    Even though mysqld_multi makes things easier to control in general, let’s not forget it is a wrapper; you can still rely (though not always, as shown below) on the conventional tools directly to start and stop an instance: mysqld* and mysqladmin. Just make sure to use the parameter --defaults-group-suffix to identify which instance you want to start:

        mysqld --defaults-group-suffix=5620

    and --socket to indicate the one you want to stop:

        $ mysqladmin -S /var/run/mysqld/mysqld5620.sock shutdown

    * However, mysqld won’t work to start an instance if you have redefined the option ‘mysqld’ in the configuration group, as I did for [mysqld5620] above, stating:

        [ERROR] mysqld: unknown variable 'mysqld=/opt/mysql-5.6.20-linux-glibc2.5-x86_64/bin/mysqld_safe'

    I’ve tested using “ledir” to indicate the path to the directory containing the binaries for MySQL 5.6.20 instead of “mysqld” but it also failed with a similar error. If nothing else, that shows you need to stick with mysqld_multi when starting instances in a mixed-version environment.
    Backups
    The backup of multiple instances must be done on an individual basis, like you would if each instance was located in a different server. You just need to provide the appropriate parameters to identify the instance you’re targeting. For example, we can simply use socket with mysqldump when running it locally:

        $ mysqldump --socket=/var/run/mysqld/mysqld7.sock --all-databases > mysqld7.sql

    In Percona XtraBackup there’s an option named --defaults-group that should be used in environments running multiple instances to indicate which one you want to back up:

        $ innobackupex --defaults-file=/etc/my.cnf --defaults-group=mysqld7 --socket=/var/run/mysqld/mysqld7.sock /root/Backup/

    Yes, you also need to provide a path to the socket (when running the command locally), even though that information is already available in “--defaults-group=mysqld7”; as it turns out, only the Percona XtraBackup tool (which is called by innobackupex during the backup process) makes use of the information available in the group option. You may need to provide credentials as well (“--user” & “--password”), and don’t forget you’ll need to prepare the backup afterwards.
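    A hedged example of that prepare step (not from the original post; replace <TIMESTAMP> with the directory innobackupex actually created under /root/Backup/ for this run):

        $ innobackupex --apply-log /root/Backup/<TIMESTAMP>/
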
    The option “defaults-group” is not available in all versions of Percona XtraBackup, so make sure to use the latest one.
    Summary
    Running multiple instances of MySQL concurrently in the same server, transparently and without a containerization or virtualization layer, is possible with both mysqld_multi and MySQL Sandbox. We have been using the latter at Percona Support to quickly spin up new disposable instances (though you might as easily keep them running indefinitely). In this post though I’ve looked at mysqld_multi, which is provided with the MySQL server package and remains the official solution for providing an environment with multiple instances.
    The key aspect when configuring multiple instances in my.cnf is the notion of option groups, as you replace a single [mysqld] section with as many [mysqldN] sections as you want instances running. It’s important though to pay attention to certain details when defining the options for each one of these groups, especially when mixing instances from different MySQL/Percona Server versions. Unlike MySQL Sandbox, where each instance relies on its own configuration file, you should be careful each time you edit the shared my.cnf file, as a syntax error when configuring a single group option will prevent all instances from starting upon the server’s (re)initialization.
    I hope to have covered the major points about mysqld_multi here, but feel free to leave us a note below if you have something else to add or any comment to contribute.
    The post mysqld_multi: How to run multiple instances of MySQL appeared first on MySQL Performance Blog.