Friday, 22 May 2015
 
 

Samui SketchUp Community (สมุยสเก็ตอัพ คอมมิวนิตี้)
Newsfeeds
Planet MySQL (http://www.planetmysql.org/)

  • Optimizing Out-of-order Parallel Replication with MariaDB 10.0
    Fri, 2015-05-22 07:19 — geoff_montee_g

Out-of-order parallel replication is a great feature in MariaDB 10.0 that improves replication performance by committing independent transactions in parallel on a slave. If slave_parallel_threads is greater than 0, then the SQL thread will instruct multiple worker threads to concurrently apply transactions with different domain IDs. If an application is setting the domain ID, and if parallel replication is enabled in MariaDB, then out-of-order parallel replication should mostly work automatically. However, depending on an application's transaction size and the slave's lag behind the master, slave_parallel_max_queued may have to be adjusted. In this blog post, I'll show an example where this is the case.

Configure the master and slave

For our master, let's configure the following settings:

    [mysqld]
    max_allowed_packet=1073741824
    log_bin
    binlog_format=ROW
    sync_binlog=1
    server_id=1

For our slave, let's configure the following:

    [mysqld]
    server_id=2
    slave_parallel_threads=2
    slave_domain_parallel_threads=1
    slave_parallel_max_queued=1KB

In our test, we plan to use two different domain IDs, so slave_parallel_threads is set to 2. Also, notice how small slave_parallel_max_queued is here: it is only set to 1 KB. With such a small value, it will be easier to see the behavior I want to demonstrate.

Set up replication on master

Now, let's set up the master for replication:

    MariaDB [(none)]> CREATE USER 'repl'@'192.168.1.46' IDENTIFIED BY 'password';
    Query OK, 0 rows affected (0.00 sec)

    MariaDB [(none)]> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.1.46';
    Query OK, 0 rows affected (0.00 sec)

    MariaDB [(none)]> RESET MASTER;
    Query OK, 0 rows affected (0.22 sec)

    MariaDB [(none)]> SHOW MASTER STATUS\G
    *************************** 1. row ***************************
                File: master-bin.000001
            Position: 313
        Binlog_Do_DB:
    Binlog_Ignore_DB:
    1 row in set (0.00 sec)

    MariaDB [(none)]> SELECT BINLOG_GTID_POS('master-bin.000001', 313);
    +-------------------------------------------+
    | BINLOG_GTID_POS('master-bin.000001', 313) |
    +-------------------------------------------+
    |                                           |
    +-------------------------------------------+
    1 row in set (0.00 sec)

If you've set up GTID replication with MariaDB 10.0 before, you've probably used BINLOG_GTID_POS to convert a binary log position to its corresponding GTID position. On newly installed systems like my example above, this GTID position might be blank.

Set up replication on slave

Now, let's set up replication on the slave:

    MariaDB [(none)]> SET GLOBAL gtid_slave_pos ='';
    Query OK, 0 rows affected (0.09 sec)

    MariaDB [(none)]> CHANGE MASTER TO master_host='192.168.1.45', master_user='repl', master_password='password', master_use_gtid=slave_pos;
    Query OK, 0 rows affected (0.04 sec)

    MariaDB [(none)]> START SLAVE;
    Query OK, 0 rows affected (0.01 sec)

    MariaDB [(none)]> SHOW SLAVE STATUS\G
    *************************** 1. row ***************************
                    Slave_IO_State: Waiting for master to send event
                       Master_Host: 192.168.1.45
                       Master_User: repl
                       Master_Port: 3306
                     Connect_Retry: 60
                   Master_Log_File: master-bin.000001
               Read_Master_Log_Pos: 313
                    Relay_Log_File: slave-relay-bin.000002
                     Relay_Log_Pos: 601
             Relay_Master_Log_File: master-bin.000001
                  Slave_IO_Running: Yes
                 Slave_SQL_Running: Yes
                   Replicate_Do_DB:
               Replicate_Ignore_DB:
                Replicate_Do_Table:
            Replicate_Ignore_Table:
           Replicate_Wild_Do_Table:
       Replicate_Wild_Ignore_Table:
                        Last_Errno: 0
                        Last_Error:
                      Skip_Counter: 0
               Exec_Master_Log_Pos: 313
                   Relay_Log_Space: 898
                   Until_Condition: None
                    Until_Log_File:
                     Until_Log_Pos: 0
                Master_SSL_Allowed: No
                Master_SSL_CA_File:
                Master_SSL_CA_Path:
                   Master_SSL_Cert:
                 Master_SSL_Cipher:
                    Master_SSL_Key:
             Seconds_Behind_Master: 0
     Master_SSL_Verify_Server_Cert: No
                     Last_IO_Errno: 0
                     Last_IO_Error:
                    Last_SQL_Errno: 0
                    Last_SQL_Error:
       Replicate_Ignore_Server_Ids:
                  Master_Server_Id: 1
                    Master_SSL_Crl:
                Master_SSL_Crlpath:
                        Using_Gtid: Slave_Pos
                       Gtid_IO_Pos:
    1 row in set (0.00 sec)
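At this point it can be worth confirming that the parallel-replication settings from my.cnf actually took effect on the slave. A minimal sketch of such a check (host and credentials are placeholders for the slave in this walkthrough):

    #!/usr/bin/env bash
    # Sanity check on the slave: confirm the parallel replication settings are live.
    # The mysql client prompts for the password because --password has no value.
    mysql --host=192.168.1.46 --user=root --password <<'EOF'
    SHOW GLOBAL VARIABLES LIKE 'slave_parallel%';
    SHOW GLOBAL VARIABLES LIKE 'slave_domain_parallel_threads';
    EOF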
Create some test tables on master

Let's set up some test tables on the master. These will automatically be replicated to the slave. We want to test parallel replication with two domains, so we will set up two separate, but identical, tables in two different databases:

    MariaDB [(none)]> CREATE DATABASE db1;
    Query OK, 1 row affected (0.00 sec)

    MariaDB [(none)]> CREATE TABLE db1.test_table (
        ->   id INT AUTO_INCREMENT PRIMARY KEY,
        ->   file BLOB
        -> );
    Query OK, 0 rows affected (0.12 sec)

    MariaDB [(none)]> CREATE DATABASE db2;
    Query OK, 1 row affected (0.01 sec)

    MariaDB [(none)]> CREATE TABLE db2.test_table (
        ->   id INT AUTO_INCREMENT PRIMARY KEY,
        ->   file BLOB
        -> );
    Query OK, 0 rows affected (0.06 sec)

Stop SQL thread on slave

For the test, we want the slave to fall behind the master, and we want its relay log to grow. To make this happen, let's stop the SQL thread on the slave:

    MariaDB [(none)]> STOP SLAVE SQL_THREAD;
    Query OK, 0 rows affected (0.02 sec)

Insert some data on master

Now, in a Linux shell on the master, let's create a random 1 MB file:

    [gmontee@master ~]$ dd if=/dev/urandom of=/tmp/file.out bs=1MB count=1
    1+0 records in
    1+0 records out
    1000000 bytes (1.0 MB) copied, 0.144972 s, 6.9 MB/s
    [gmontee@master ~]$ chmod 0644 /tmp/file.out

Now, let's create a script to insert the contents of the file into both of our tables in db1 and db2 with different values of gtid_domain_id:

    tee /tmp/domain_test.sql <<EOF
    SET SESSION gtid_domain_id=1;
    BEGIN;
    INSERT INTO db1.test_table (file) VALUES (LOAD_FILE('/tmp/file.out'));
    COMMIT;
    SET SESSION gtid_domain_id=2;
    BEGIN;
    INSERT INTO db2.test_table (file) VALUES (LOAD_FILE('/tmp/file.out'));
    COMMIT;
    EOF

After that, let's run the script a bunch of times. We can do this with a bash loop:

    [gmontee@master ~]$ { for ((i=0;i<1000;i++)); do cat /tmp/domain_test.sql; done; } | mysql --max_allowed_packet=1073741824 --user=root
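Before restarting the SQL thread, you can confirm that the slave's relay log really has grown while it was stopped. A quick sketch of one way to watch it, using the Relay_Log_Space field (in bytes) reported by SHOW SLAVE STATUS:

    #!/usr/bin/env bash
    # Watch the slave's relay log grow while the SQL thread is stopped.
    for i in {1..10}; do
      mysql --user=root --execute='SHOW SLAVE STATUS\G' | grep 'Relay_Log_Space'
      sleep 5
    done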
Restart SQL thread on slave

Now the relay log on the slave should have grown quite a bit. Let's restart the SQL thread and watch the transactions get applied. To do this, let's open up two shells on the slave.

On the first shell on the slave, connect to MariaDB and restart the SQL thread:

    MariaDB [(none)]> START SLAVE SQL_THREAD;
    Query OK, 0 rows affected (0.00 sec)

On the second shell, let's look at SHOW PROCESSLIST output in a loop:

    [gmontee@slave ~]$ for i in {1..1000}; do mysql --user=root --execute="SHOW PROCESSLIST;"; sleep 1s; done;

Take a look at the State column for the slave's SQL thread:

    +----+-------------+-----------+------+---------+--------+-----------------------------------------------+------------------+----------+
    | Id | User        | Host      | db   | Command | Time   | State                                         | Info             | Progress |
    +----+-------------+-----------+------+---------+--------+-----------------------------------------------+------------------+----------+
    |  3 | system user |           | NULL | Connect | 139    | closing tables                                | NULL             | 0.000    |
    |  4 | system user |           | NULL | Connect | 139    | Waiting for work from SQL thread              | NULL             | 0.000    |
    |  6 | system user |           | NULL | Connect | 264274 | Waiting for master to send event              | NULL             | 0.000    |
    | 10 | root        | localhost | NULL | Sleep   | 43     |                                               | NULL             | 0.000    |
    | 21 | system user |           | NULL | Connect | 45     | Waiting for room in worker thread event queue | NULL             | 0.000    |
    | 54 | root        | localhost | NULL | Query   | 0      | init                                          | SHOW PROCESSLIST | 0.000    |
    +----+-------------+-----------+------+---------+--------+-----------------------------------------------+------------------+----------+

With such a low slave_parallel_max_queued value, it will probably say "Waiting for room in worker thread event queue" most of the time. The SQL thread doesn't have enough memory allocated to read ahead further in the relay log, which can prevent it from providing enough work for all of the worker threads. The worker threads will therefore show a State value of "Waiting for work from SQL thread" more often.

Conclusion

If you expect to benefit from parallel slave threads, but you find that the State column in SHOW PROCESSLIST often shows "Waiting for room in worker thread event queue" for your SQL thread, you should try increasing slave_parallel_max_queued to see if that helps. The default slave_parallel_max_queued value of 131072 bytes (128 KB) will probably be acceptable for most workloads. However, if you have large transactions or if your slave often falls behind the master, and you hope to use out-of-order parallel replication, you may have to adjust this setting. Of course, most users probably want to avoid large transactions and slave lag for other reasons as well.

Has anyone run into this problem before? Were you able to figure out a solution on your own?

Tags: DBA, Developer, High Availability, Load balancing, Replication

About the Author

Geoff Montee is a Support Engineer with MariaDB. He has previous experience as a Database Administrator/Software Engineer with the U.S. Government, and as a System Administrator and Software Developer at Florida State University.
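As a footnote to the conclusion above, here is a minimal sketch of the suggested tuning step: raising slave_parallel_max_queued on the slave at runtime. The 16 MB figure is only illustrative, and stopping/starting the SQL thread so the new value is picked up is an assumption worth keeping; persist the value in my.cnf as well if it helps:

    #!/usr/bin/env bash
    # Raise slave_parallel_max_queued on a slave whose SQL thread keeps
    # reporting "Waiting for room in worker thread event queue".
    mysql --user=root --password <<'EOF'
    STOP SLAVE SQL_THREAD;
    SET GLOBAL slave_parallel_max_queued = 16777216;  -- 16 MB, illustrative only
    START SLAVE SQL_THREAD;
    SHOW GLOBAL VARIABLES LIKE 'slave_parallel_max_queued';
    EOF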

  • Decrypt .mylogin.cnf
General-purpose MySQL applications should read MySQL option files like /etc/my.cnf, ~/.my.cnf, ... and ~/.mylogin.cnf. But ~/.mylogin.cnf is encrypted. That's a problem for our ocelotgui GUI application, and I suppose other writers of Linux applications could face the same problem, so I'll share the code we'll use to solve it.

First some words of defence. I think that encryption (or more correctly obfuscation) is okay as an option: a customer asked for it, and it prevents the most casual snoopers -- rather like a low fence: anyone can get over it, but making it a bit troublesome will make most passersby pass by. I favoured the idea, though other MySQL employees were against it on the old "false sense of security" argument. After all, by design, the data must be accessible without requiring credentials. So just XORing the file contents with a fixed key would have done the job. Alas, the current implementation does more: the configuration editor not only XORs, it encrypts with AES 128-bit ECB. The Oxford-dictionary word for this is supererogation. This makes reading harder.

I've seen only one bug report / feature request touching on the problem, but I've also seen that others have looked into it and provided some solutions. Kolbe Kegel showed how to display the passwords, Serge Frezefond used a different method to display the whole file. Great. However, their solutions require downloading MySQL source code and rebuilding a section. No good for us, because ocelotgui contains no MySQL code and doesn't statically link to it. We need code that accesses a dynamic library at runtime, and unless I missed something big, the necessary stuff isn't exported from the mysql client library.

Which brings us to ... ta-daa ... readmylogin.c. This program will read a .mylogin.cnf file and display the contents. Most of it is a BSD licence, so skip to the end to see the twenty lines of code. Requirements are gcc, and libcrypto.so (the openSSL library, which I believe is easily downloadable on most Linux distros). Instructions for building and running are in the comments. Cutters-and-pasters should beware that less-than-sign or greater-than-sign may be represented with HTML entities.
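Before the listing, it can help to eyeball the file layout the comments describe: four unused bytes, twenty bytes of key material, then repeated [4-byte little-endian length][cipher chunk]. A quick sketch, assuming xxd is installed and a ~/.mylogin.cnf exists:

    #!/usr/bin/env bash
    # Inspect the raw .mylogin.cnf layout: bytes 0-3 unused, bytes 4-23 are
    # the XOR key material, then length-prefixed AES-ECB cipher chunks.
    xxd -g 1 ~/.mylogin.cnf | head -4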
    /*
      readmylogin.c -- Decrypt and display a MySQL .mylogin.cnf file.

      Uses the openSSL libcrypto.so library. Does not use a MySQL library.

      Copyright (c) 2015 by Ocelot Computer Services Inc. All rights reserved.

      Redistribution and use in source and binary forms, with or without
      modification, are permitted provided that the following conditions are met:
      * Redistributions of source code must retain the above copyright notice,
        this list of conditions and the following disclaimer.
      * Redistributions in binary form must reproduce the above copyright notice,
        this list of conditions and the following disclaimer in the documentation
        and/or other materials provided with the distribution.
      * Neither the name of the nor the names of its contributors may be used to
        endorse or promote products derived from this software without specific
        prior written permission.

      THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
      AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
      IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
      ARE DISCLAIMED. IN NO EVENT SHALL BE LIABLE FOR ANY DIRECT, INDIRECT,
      INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
      NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
      DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
      THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
      (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
      THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

      To compile and link and run with Linux and gcc:
      1. Install openSSL
      2. If installation puts libcrypto.so in an unusual directory, say
         export LD_LIBRARY_PATH=/unusual-directory
      3. gcc -o readmylogin readmylogin.c -lcrypto
      To run, it's compulsory to specify where the file is, for example:
      ./readmylogin .mylogin.cnf

      MySQL may change file formats without notice, but the following is true
      for files produced by mysql_config_editor with MySQL 5.6:
      * First four bytes are unused, probably reserved for version number
      * Next twenty bytes are the basis of the key, to be XORed in a loop
        until a sixteen-byte key is produced.
      * The rest of the file is, repeated as necessary:
        four bytes = length of following cipher chunk, little-endian
        n bytes = cipher chunk
      * Encryption is AES 128-bit ecb.
      * Chunk lengths are always a multiple of 16 bytes (128 bits).
        Therefore there may be padding. We assume that any trailing byte
        containing a value less than '\n' is a padding byte.

      To make the code easy to understand, all error handling code is reduced
      to "return -1;" and buffers are fixed-size. To make the code easy to
      build, the line #include "/usr/include/openssl/aes.h" is commented out,
      but can be uncommented if aes.h is available.

      This is version 1, May 21 2015. More up-to-date versions of this program
      may be available within the ocelotgui project
      https://github.com/ocelot-inc/ocelotgui
    */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>   /* for read() and lseek(); added so this compiles cleanly */

    //#include "/usr/include/openssl/aes.h"
    #ifndef HEADER_AES_H
    #define AES_BLOCK_SIZE 16
    typedef struct aes_key_st { unsigned char x[244]; } AES_KEY;
    #endif

    unsigned char cipher_chunk[4096], output_buffer[65536];
    int fd, cipher_chunk_length, output_length= 0, i;
    char key_in_file[20];
    char key_after_xor[AES_BLOCK_SIZE]= {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
    AES_KEY key_for_aes;

    int main(int argc, char *argv[])
    {
      if (argc < 2) return -1;  /* was "argc < 1", which could never be true */
      if ((fd= open(argv[1], O_RDONLY)) == -1) return -1;
      if (lseek(fd, 4, SEEK_SET) == -1) return -1;
      if (read(fd, key_in_file, 20) != 20) return -1;
      for (i= 0; i < 20; ++i) *(key_after_xor + (i%16))^= *(key_in_file + i);
      AES_set_decrypt_key(key_after_xor, 128, &key_for_aes);
      while (read(fd, &cipher_chunk_length, 4) == 4)
      {
        if (cipher_chunk_length > sizeof(cipher_chunk)) return -1;
        if (read(fd, cipher_chunk, cipher_chunk_length) != cipher_chunk_length) return -1;
        for (i= 0; i < cipher_chunk_length; i+= AES_BLOCK_SIZE)
        {
          AES_decrypt(cipher_chunk+i, output_buffer+output_length, &key_for_aes);
          output_length+= AES_BLOCK_SIZE;
          while (*(output_buffer+(output_length-1)) < '\n') --output_length;
        }
      }
      *(output_buffer + output_length)= '\0';
      printf("%s.\n", output_buffer);
    }
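To try the program end to end, one plausible sequence looks like this (mysql_config_editor ships with MySQL 5.6; the login-path name here is arbitrary):

    #!/usr/bin/env bash
    # Create an encrypted login path, build readmylogin, and decrypt the file.
    mysql_config_editor set --login-path=test --host=localhost \
      --user=someuser --password        # prompts for the password interactively
    gcc -o readmylogin readmylogin.c -lcrypto
    ./readmylogin ~/.mylogin.cnf        # prints the decrypted option-file text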

  • Creating and Restoring Database Backups With mysqldump and MySQL Enterprise Backup – Part 2 of 2
In part one of this post, I gave you a couple of examples of how to back up your MySQL databases using mysqldump. In part two, I will show you how to use MySQL Enterprise Backup (MEB) to create a full and partial backup.

MySQL Enterprise Backup provides enterprise-grade backup and recovery for MySQL. It delivers hot, online, non-blocking backups on multiple platforms including Linux, Windows, Mac and Solaris. To learn more, you may download a whitepaper on MEB. MySQL Enterprise Backup delivers:

- NEW! Continuous monitoring – Monitor the progress and disk space usage
- "Hot" Online Backups – Backups take place entirely online, without interrupting MySQL transactions
- High Performance – Save time with faster backup and recovery
- Incremental Backup – Backup only data that has changed since the last backup
- Partial Backup – Target particular tables or tablespaces
- Compression – Cut costs by reducing storage requirements up to 90%
- Backup to Tape – Stream backup to tape or other media management solutions
- Fast Recovery – Get servers back online and create replicated servers
- Point-in-Time Recovery (PITR) – Recover to a specific transaction
- Partial restore – Recover targeted tables or tablespaces
- Restore to a separate location – Rapidly create clones for fast replication setup
- Reduce Failures – Use a proven high quality solution from the developers of MySQL
- Multi-platform – Backup and Restore on Linux, Windows, Mac & Solaris

(from: http://www.mysql.com/products/enterprise/backup.html)

While mysqldump is free to use, MEB is part of MySQL's Enterprise Edition (EE) – so you need a license to use it. But if you are using MySQL in a production environment, you might want to look at EE, as:

    MySQL Enterprise Edition includes the most comprehensive set of advanced features, management tools and technical support to achieve the highest levels of MySQL scalability, security, reliability, and uptime. It reduces the risk, cost, and complexity in developing, deploying, and managing business-critical MySQL applications. (from: http://www.mysql.com/products/enterprise/)

Before using MEB and backing up your database for the first time, you will need some information. Each item below is listed as: information to gather – where to find it – how it is used.

- Path to MySQL configuration file – Default system locations, hardcoded application default locations, or from the --defaults-file option in the mysqld startup script. – This is the preferred way to convey database configuration information to the mysqlbackup command, using the --defaults-file option. When connection and data layout information is available from the configuration file, you can skip most of the other choices listed below.

- MySQL port – MySQL configuration file or mysqld startup script. – Used to connect to the database instance during backup operations. Specified via the --port option of mysqlbackup. --port is not needed if available from the MySQL configuration file. Not needed when doing an offline (cold) backup, which works directly on the files using OS-level file permissions.

- Path to MySQL data directory – MySQL configuration file or mysqld startup script. – Used to retrieve files from the database instance during backup operations, and to copy files back to the database instance during restore operations. Automatically retrieved from the database connection for hot and warm backups. Taken from the MySQL configuration file for cold backups.
- ID and password of privileged MySQL user – You record this during installation of your own databases, or get it from the DBA when backing up databases you do not own. Not needed when doing an offline (cold) backup, which works directly on the files using OS-level file permissions; for cold backups, you log in as an administrative user. – Specified via the --password option of mysqlbackup. Prompted from the terminal if the --password option is present without the password argument.

- Path under which to store backup data – You choose this. See Section 3.1.3, "Designate a Location for Backup Data" for details. – By default, this directory must be empty for mysqlbackup to write data into it, to avoid overwriting old backups or mixing up data from different backups. Use the --with-timestamp option to automatically create a subdirectory with a unique name, when storing multiple sets of backup data under the same main directory.

- Owner and permission information for backed-up files (for Linux, Unix, and OS X systems) – In the MySQL data directory. – If you do the backup using a different OS user ID or a different umask setting than applies to the original files, you might need to run commands such as chown and chmod on the backup data. See Section A.1, "Limitations of mysqlbackup Command" for details.

- Size of InnoDB redo log files – Calculated from the values of the innodb_log_file_size and innodb_log_files_in_group configuration variables. Use the technique explained for the --incremental-with-redo-log-only option. – Only needed if you perform incremental backups using the --incremental-with-redo-log-only option rather than the --incremental option. The size of the InnoDB redo log and the rate of generation for redo data dictate how often you must perform incremental backups.

- Rate at which redo data is generated – Calculated from the values of the InnoDB logical sequence number at different points in time. Use the technique explained for the --incremental-with-redo-log-only option. – Only needed if you perform incremental backups using the --incremental-with-redo-log-only option rather than the --incremental option.

(from: http://dev.mysql.com/doc/mysql-enterprise-backup/3.8/en/backup-prep-gather.html)

For most backup operations, the mysqlbackup command connects to the MySQL server through the --user and --password options. If you aren't going to use the root user, then you will need to create a separate user. Follow these instructions for setting the proper permissions (a hedged sketch of such a user appears just below).

All backup-related operations either create new files or reference existing files underneath a specified directory that holds backup data. Choose this directory in advance, on a file system with sufficient storage. (It could even be remotely mounted from a different server.) You specify the path to this directory with the --backup-dir option for many invocations of the mysqlbackup command.

Once you establish a regular backup schedule with automated jobs, it is preferable to keep each backup within a timestamped subdirectory underneath the main backup directory. To make the mysqlbackup command create these subdirectories automatically, specify the --with-timestamp option each time you run mysqlbackup.
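The linked instructions are authoritative; as a rough sketch only, a dedicated backup user for MEB 3.8 hot backups tends to need grants along these lines (check the MEB manual before relying on this set):

    #!/usr/bin/env bash
    # Create a dedicated backup user -- an illustrative grant set only;
    # consult the MEB documentation for the authoritative privilege list.
    mysql --user=root --password <<'EOF'
    CREATE USER 'mysqlbackup'@'localhost' IDENTIFIED BY 'new-password';
    GRANT RELOAD ON *.* TO 'mysqlbackup'@'localhost';
    GRANT REPLICATION CLIENT ON *.* TO 'mysqlbackup'@'localhost';
    GRANT SUPER ON *.* TO 'mysqlbackup'@'localhost';
    GRANT CREATE, INSERT, DROP, UPDATE ON mysql.backup_progress TO 'mysqlbackup'@'localhost';
    GRANT CREATE, INSERT, SELECT, DROP, UPDATE ON mysql.backup_history TO 'mysqlbackup'@'localhost';
    EOF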
For one-time backup operations, for example when cloning a database to set up a replication slave, you might specify a new directory each time, or specify the --force option of mysqlbackup to overwrite older backup files. (from: http://dev.mysql.com/doc/mysql-enterprise-backup/3.8/en/backup-prep-storage.html)

If you haven't downloaded and installed mysqlbackup, you may download it from edelivery.oracle.com (registration is required). Install the MySQL Enterprise Backup product on each database server whose contents you intend to back up. You perform all backup and restore operations locally, by running the mysqlbackup command on the same server as the MySQL instance. Information on installation may be found here.

Now that we have gathered all of the required information and installed mysqlbackup, let's run a simple and easy backup of the entire database. I installed MEB in my /usr/local directory, so I am including the full path of mysqlbackup. I am using the backup-and-apply-log option, which combines the --backup and the --apply-log options into one. The --backup option performs the initial phase of a backup. The second phase is performed later by running mysqlbackup again with the --apply-log option, which brings the InnoDB tables in the backup up to date, including any changes made to the data while the backup was running.

    $ /usr/local/meb/bin/mysqlbackup --user=root --password --backup-dir=/Users/tonydarnell/hotbackups backup-and-apply-log
    MySQL Enterprise Backup version 3.8.2 [2013/06/18]
    Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.

    mysqlbackup: INFO: Starting with following command line ...
    /usr/local/meb/bin/mysqlbackup --user=root --password --backup-dir=/Users/tonydarnell/hotbackups backup-and-apply-log

    Enter password:
    mysqlbackup: INFO: MySQL server version is '5.6.9-rc-log'.
    mysqlbackup: INFO: Got some server configuration information from running server.

    IMPORTANT: Please check that mysqlbackup run completes successfully.
    At the end of a successful 'backup-and-apply-log' run mysqlbackup
    prints "mysqlbackup completed OK!".

    --------------------------------------------------------------------
    Server Repository Options:
    --------------------------------------------------------------------
    datadir = /usr/local/mysql/data/
    innodb_data_home_dir = /usr/local/mysql/data
    innodb_data_file_path = ibdata1:40M:autoextend
    innodb_log_group_home_dir = /usr/local/mysql/data
    innodb_log_files_in_group = 2
    innodb_log_file_size = 5242880
    innodb_page_size = 16384
    innodb_checksum_algorithm = innodb
    innodb_undo_directory = /usr/local/mysql/data/
    innodb_undo_tablespaces = 0
    innodb_undo_logs = 128

    --------------------------------------------------------------------
    Backup Config Options:
    --------------------------------------------------------------------
    datadir = /Users/tonydarnell/hotbackups/datadir
    innodb_data_home_dir = /Users/tonydarnell/hotbackups/datadir
    innodb_data_file_path = ibdata1:40M:autoextend
    innodb_log_group_home_dir = /Users/tonydarnell/hotbackups/datadir
    innodb_log_files_in_group = 2
    innodb_log_file_size = 5242880
    innodb_page_size = 16384
    innodb_checksum_algorithm = innodb
    innodb_undo_directory = /Users/tonydarnell/hotbackups/datadir
    innodb_undo_tablespaces = 0
    innodb_undo_logs = 128

    mysqlbackup: INFO: Unique generated backup id for this is 13742482113579320
    mysqlbackup: INFO: Creating 14 buffers each of size 16777216.
    130719 11:36:53 mysqlbackup: INFO: Full Backup operation starts with following threads
                    1 read-threads    6 process-threads    1 write-threads
    130719 11:36:53 mysqlbackup: INFO: System tablespace file format is Antelope.
    130719 11:36:53 mysqlbackup: INFO: Starting to copy all innodb files...
    130719 11:36:53 mysqlbackup: INFO: Copying /usr/local/mysql/data/ibdata1 (Antelope file format).
    130719 11:36:53 mysqlbackup: INFO: Found checkpoint at lsn 135380756.
    130719 11:36:53 mysqlbackup: INFO: Starting log scan from lsn 135380480.
    130719 11:36:53 mysqlbackup: INFO: Copying log...
    130719 11:36:54 mysqlbackup: INFO: Log copied, lsn 135380756.

(I have truncated some of the database and table output to save space.)

    .....
    130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/mysql/innodb_index_stats.ibd (Antelope file format).
    130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/mysql/innodb_table_stats.ibd (Antelope file format).
    130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/mysql/slave_master_info.ibd (Antelope file format).
    130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/mysql/slave_relay_log_info.ibd (Antelope file format).
    130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/mysql/slave_worker_info.ibd (Antelope file format).
    .....
    130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/testcert/t1.ibd (Antelope file format).
    130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/testcert/t3.ibd (Antelope file format).
    .....
    130719 11:36:57 mysqlbackup: INFO: Copying /usr/local/mysql/data/watchdb/watches.ibd (Antelope file format).
    .....
    130719 11:36:57 mysqlbackup: INFO: Completing the copy of innodb files.
    130719 11:36:58 mysqlbackup: INFO: Preparing to lock tables: Connected to mysqld server.
    130719 11:36:58 mysqlbackup: INFO: Starting to lock all the tables...
    130719 11:36:58 mysqlbackup: INFO: All tables are locked and flushed to disk
    130719 11:36:58 mysqlbackup: INFO: Opening backup source directory '/usr/local/mysql/data/'
    130719 11:36:58 mysqlbackup: INFO: Starting to backup all non-innodb files in subdirectories of '/usr/local/mysql/data/'
    .....
    130719 11:36:58 mysqlbackup: INFO: Copying the database directory 'comicbookdb'
    .....
    130719 11:36:59 mysqlbackup: INFO: Copying the database directory 'mysql'
    130719 11:36:59 mysqlbackup: INFO: Copying the database directory 'performance_schema'
    .....
    130719 11:36:59 mysqlbackup: INFO: Copying the database directory 'test'
    .....
    130719 11:36:59 mysqlbackup: INFO: Copying the database directory 'watchdb'
    130719 11:36:59 mysqlbackup: INFO: Completing the copy of all non-innodb files.
    130719 11:37:00 mysqlbackup: INFO: A copied database page was modified at 135380756.
              (This is the highest lsn found on page)
              Scanned log up to lsn 135384397.
              Was able to parse the log up to lsn 135384397.
              Maximum page number for a log record 375
    130719 11:37:00 mysqlbackup: INFO: All tables unlocked
    130719 11:37:00 mysqlbackup: INFO: All MySQL tables were locked for 1.589 seconds.
    130719 11:37:00 mysqlbackup: INFO: Full Backup operation completed successfully.
    130719 11:37:00 mysqlbackup: INFO: Backup created in directory '/Users/tonydarnell/hotbackups'
    130719 11:37:00 mysqlbackup: INFO: MySQL binlog position: filename mysql-bin.000013, position 85573

    -------------------------------------------------------------
       Parameters Summary
    -------------------------------------------------------------
       Start LSN                  : 135380480
       End LSN                    : 135384397
    -------------------------------------------------------------

    mysqlbackup: INFO: Creating 14 buffers each of size 65536.
    130719 11:37:00 mysqlbackup: INFO: Apply-log operation starts with following threads
                    1 read-threads    1 process-threads
    130719 11:37:00 mysqlbackup: INFO: ibbackup_logfile's creation parameters:
              start lsn 135380480, end lsn 135384397,
              start checkpoint 135380756.
    mysqlbackup: INFO: InnoDB: Starting an apply batch of log records to the database...
    InnoDB: Progress in percent: 0 1 .... 99
    Setting log file size to 5242880
    Setting log file size to 5242880
    130719 11:37:00 mysqlbackup: INFO: We were able to parse ibbackup_logfile up to lsn 135384397.
    mysqlbackup: INFO: Last MySQL binlog file position 0 85573, file name mysql-bin.000013
    130719 11:37:00 mysqlbackup: INFO: The first data file is '/Users/tonydarnell/hotbackups/datadir/ibdata1'
              and the new created log files are at '/Users/tonydarnell/hotbackups/datadir'
    130719 11:37:01 mysqlbackup: INFO: Apply-log operation completed successfully.
    130719 11:37:01 mysqlbackup: INFO: Full backup prepared for recovery successfully.

    mysqlbackup completed OK!

Now, I can take a look at the backup files that were created:

    root@macserver01: $ pwd
    /Users/tonydarnell/hotbackups
    root@macserver01: $ ls -l
    total 8
    -rw-r--r--   1 root  staff      351 Jul 19 11:36 backup-my.cnf
    drwx------  21 root  staff      714 Jul 19 11:37 datadir
    drwx------   6 root  staff      204 Jul 19 11:37 meta

    $ ls -l datadir
    total 102416
    drwx------  5 root  staff       170 Jul 19 11:36 comicbookdb
    -rw-r-----  1 root  staff   5242880 Jul 19 11:37 ib_logfile0
    -rw-r-----  1 root  staff   5242880 Jul 19 11:37 ib_logfile1
    -rw-r--r--  1 root  staff      4608 Jul 19 11:37 ibbackup_logfile
    -rw-r--r--  1 root  staff  41943040 Jul 19 11:37 ibdata1
    drwx------ 88 root  staff      2992 Jul 19 11:36 mysql
    drwx------ 55 root  staff      1870 Jul 19 11:36 performance_schema
    drwx------  3 root  staff       102 Jul 19 11:36 test
    drwx------ 30 root  staff      1020 Jul 19 11:36 testcert
    drwx------ 19 root  staff       646 Jul 19 11:36 watchdb

    root@macserver01: $ ls -l meta
    total 216
    -rw-r--r--  1 root  staff  90786 Jul 19 11:37 backup_content.xml
    -rw-r--r--  1 root  staff   5746 Jul 19 11:36 backup_create.xml
    -rw-r--r--  1 root  staff    265 Jul 19 11:37 backup_gtid_executed.sql
    -rw-r--r--  1 root  staff    321 Jul 19 11:37 backup_variables.txt

As you can see, the backup was created in /Users/tonydarnell/hotbackups. If I wanted to have a unique folder for this backup, I can use the --with-timestamp option.

    The --with-timestamp option places the backup in a subdirectory created under the directory you specified above. The name of the backup subdirectory is formed from the date and the clock time of the backup run. (from: http://dev.mysql.com/doc/mysql-enterprise-backup/3.8/en/mysqlbackup.html)

I will run the same backup command again, but with the --with-timestamp option. (I am not going to duplicate the entire output; I will only show you the output where it creates the sub-directory under /Users/tonydarnell/hotbackups.)

    $ /usr/local/meb/bin/mysqlbackup --user=root --password --backup-dir=/Users/tonydarnell/hotbackups backup-and-apply-log --with-timestamp
    ......
    130719 11:49:54 mysqlbackup: INFO: The first data file is '/Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/ibdata1'
              and the new created log files are at '/Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir'
    130719 11:49:54 mysqlbackup: INFO: Apply-log operation completed successfully.
    130719 11:49:54 mysqlbackup: INFO: Full backup prepared for recovery successfully.

    mysqlbackup completed OK!

So, I ran the backup again to get a unique directory. Instead of the backup files/directories being placed in /Users/tonydarnell/hotbackups, it created a sub-directory with a timestamp for the directory name:

    $ pwd
    /Users/tonydarnell/hotbackups
    root@macserver01: $ ls -l
    total 0
    drwx------  5 root  staff  170 Jul 19 11:49 2013-07-19_11-49-48

    $ ls -l 2013-07-19_11-49-48
    total 8
    -rw-r--r--   1 root  staff  371 Jul 19 11:49 backup-my.cnf
    drwx------  21 root  staff  714 Jul 19 11:49 datadir
    drwx------   6 root  staff  204 Jul 19 11:49 meta

Note: If you don't use the backup-and-apply-log option you will need to read this:

    Immediately after the backup job completes, the backup files might not be in a consistent state, because data could be inserted, updated, or deleted while the backup is running. These initial backup files are known as the raw backup. You must update the backup files so that they reflect the state of the database corresponding to a specific InnoDB log sequence number. (The same kind of operation as crash recovery.) When this step is complete, these final files are known as the prepared backup.

    During the backup, mysqlbackup copies the accumulated InnoDB log to a file called ibbackup_logfile. This log file is used to "roll forward" the backed-up data files, so that every page in the data files corresponds to the same log sequence number of the InnoDB log. This phase also creates new ib_logfiles that correspond to the data files.

    The mysqlbackup option for turning a raw backup into a prepared backup is --apply-log. You can run this step on the same database server where you did the backup, or transfer the raw backup files to a different system first, to limit the CPU and storage overhead on the database server.

    Note: Since the --apply-log operation does not modify any of the original files in the backup, nothing is lost if the operation fails for some reason (for example, insufficient disk space). After fixing the problem, you can safely retry --apply-log by specifying the --force option, which allows the data and log files created by the failed --apply-log operation to be overwritten.

    For simple backups (without compression or incremental backup), you can combine the initial backup and the --apply-log step using the option backup-and-apply-log. (from: http://dev.mysql.com/doc/mysql-enterprise-backup/3.8/en/backup-apply-log.html)

One file that was not copied was the my.cnf file. You will want to have a separate script to copy this at regular intervals. If you put the mysqlbackup command in a cron or Windows Task Manager job, you can add a way to copy the my.cnf file as well (see the sketch just below).

Now that we have a completed backup, we are going to copy the backup files and the my.cnf file over to a different server to restore the databases. We will be using a server that was set up as a slave server to the server where the backup occurred. If you need to restore the backup to the same server, you will need to refer to this section of the mysqlbackup manual.
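As a sketch of that cron idea, under the assumption that the credentials live in the [mysqlbackup] group of a root-only option file (paths, schedule, and file names here are all placeholders):

    #!/usr/bin/env bash
    # /usr/local/bin/nightly-meb-backup.sh -- illustrative wrapper only.
    # Assumes /etc/my-backup.cnf holds user/password in its [mysqlbackup]
    # group so no password appears on the command line or in cron.
    set -e
    /usr/local/meb/bin/mysqlbackup --defaults-file=/etc/my-backup.cnf \
      --backup-dir=/backups --with-timestamp backup-and-apply-log
    # mysqlbackup does not copy my.cnf, so keep a dated copy beside the backups.
    cp /etc/my.cnf "/backups/my.cnf.$(date +%F)"

A matching /etc/cron.d entry (cron entries must be a single line) might be:

    0 2 * * * root /usr/local/bin/nightly-meb-backup.sh >> /var/log/mysqlbackup.log 2>&1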
I copied the backup files as well as the my.cnf file to the new server:

    # pwd
    /Users/tonydarnell/hotbackups
    # ls -l
    total 16
    drwxrwxrwx  5 tonydarnell  staff  170 Jul 19 15:38 2013-07-19_11-49-48

On the new server (where I will restore the data), I shut down the mysqld process (mysqladmin -uroot -p shutdown), copied the my.cnf file to the proper directory, and now I can restore the database to the new server, using the copy-back option.

    The copy-back option requires the database server to be already shut down, then copies the data files, logs, and other backed-up files from the backup directory back to their original locations, and performs any required postprocessing on them. (from: http://dev.mysql.com/doc/mysql-enterprise-backup/3.8/en/restore.restore.html)

    # /usr/local/meb/bin/mysqlbackup --defaults-file=/etc/my.cnf --backup-dir=/Users/tonydarnell/hotbackups/2013-07-19_11-49-48 copy-back
    MySQL Enterprise Backup version 3.8.2 [2013/06/18]
    Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.

    mysqlbackup: INFO: Starting with following command line ...
    /usr/local/meb/bin/mysqlbackup --defaults-file=/etc/my.cnf
           --backup-dir=/Users/tonydarnell/hotbackups/2013-07-19_11-49-48 copy-back

    IMPORTANT: Please check that mysqlbackup run completes successfully.
    At the end of a successful 'copy-back' run mysqlbackup
    prints "mysqlbackup completed OK!".

    --------------------------------------------------------------------
    Server Repository Options:
    --------------------------------------------------------------------
    datadir = /usr/local/mysql/data
    innodb_data_home_dir = /usr/local/mysql/data
    innodb_data_file_path = ibdata1:40M:autoextend
    innodb_log_group_home_dir = /usr/local/mysql/data
    innodb_log_files_in_group = 2
    innodb_log_file_size = 5M
    innodb_page_size = Null
    innodb_checksum_algorithm = innodb

    --------------------------------------------------------------------
    Backup Config Options:
    --------------------------------------------------------------------
    datadir = /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir
    innodb_data_home_dir = /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir
    innodb_data_file_path = ibdata1:40M:autoextend
    innodb_log_group_home_dir = /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir
    innodb_log_files_in_group = 2
    innodb_log_file_size = 5242880
    innodb_page_size = 16384
    innodb_checksum_algorithm = innodb
    innodb_undo_directory = /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir
    innodb_undo_tablespaces = 0
    innodb_undo_logs = 128

    mysqlbackup: INFO: Creating 14 buffers each of size 16777216.
    130719 15:54:41 mysqlbackup: INFO: Copy-back operation starts with following threads
                    1 read-threads    1 write-threads
    130719 15:54:41 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/ibdata1.
    .....
    130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/comicbookdb/comics.ibd.
    .....
    130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/mysql/innodb_index_stats.ibd.
    130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/mysql/innodb_table_stats.ibd.
    130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/mysql/slave_master_info.ibd.
    130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/mysql/slave_relay_log_info.ibd.
    130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/mysql/slave_worker_info.ibd.
    .....
    130719 15:54:43 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/watchdb/watches.ibd.
    .....
    130719 15:54:43 mysqlbackup: INFO: Copying the database directory 'comicbookdb'
    .....
    130719 15:54:43 mysqlbackup: INFO: Copying the database directory 'mysql'
    130719 15:54:43 mysqlbackup: INFO: Copying the database directory 'performance_schema'
    .....
    130719 15:54:43 mysqlbackup: INFO: Copying the database directory 'test'
    .....
    130719 15:54:43 mysqlbackup: INFO: Copying the database directory 'watchdb'
    130719 15:54:43 mysqlbackup: INFO: Completing the copy of all non-innodb files.
    130719 15:54:43 mysqlbackup: INFO: Copying the log file 'ib_logfile0'
    130719 15:54:43 mysqlbackup: INFO: Copying the log file 'ib_logfile1'
    130719 15:54:44 mysqlbackup: INFO: Copy-back operation completed successfully.
    130719 15:54:44 mysqlbackup: INFO: Finished copying backup files to '/usr/local/mysql/data'

    mysqlbackup completed OK!

I can now restart MySQL. I have a very small database (less than 50 megabytes), and it took less than a minute to restore it. If I had to rebuild my database using mysqldump, it would take a lot longer. If you have a very large database, the difference between using mysqlbackup and mysqldump could be measured in hours. For example, a 32-gig database with 33 tables takes about eight minutes to restore with mysqlbackup. Restoring the same database with a mysqldump file takes over two hours.

An easy way to check whether the databases match (assuming that I haven't added any new records in any of the original databases, which I haven't) is to use one of the MySQL Utilities: mysqldbcompare. I wrote about how to do this in an earlier blog about using it to test two replicated databases, but it will work here as well; see Using MySQL Utilities Workbench Script mysqldbcompare To Compare Two Databases In Replication. The mysqldbcompare utility "compares the objects and data from two databases to find differences. It identifies objects having different definitions in the two databases and presents them in a diff-style format of choice. Differences in the data are shown using a similar diff-style format. Changed or missing rows are shown in a standard format of GRID, CSV, TAB, or VERTICAL." (from: mysqldbcompare — Compare Two Databases and Identify Differences) Some of the syntax may have changed for mysqldbcompare since I wrote that blog, so you will need to reference the help notes for mysqldbcompare. You would need to run this for each of your databases.

    $ mysqldbcompare --server1=scripts:scripts999@192.168.1.2 --server2=scripts:scripts999@192.168.1.123 --run-all-tests --difftype=context comicbookdb:comicbookdb
    # server1 on 192.168.1.2: ... connected.
    # server2 on 192.168.1.123: ... connected.
    # Checking databases comicbookdb on server1 and comicbookdb on server2
    #
    #                             Defn    Row     Data
    # Type      Object Name       Diff    Count   Check
    # ---------------------------------------------------------------------------
    # TABLE     comics            pass    pass    pass
    #
    # Databases are consistent.
    #
    # ...done

You can try to run this for the mysql database, but you may get a few errors regarding the mysql.backup_history and mysql.backup_progress tables:

    $ mysqldbcompare --server1=scripts:scripts999@192.168.1.2 --server2=scripts:scripts999@192.168.1.123 --run-all-tests --difftype=context mysql:mysql
    # server1 on 192.168.1.2: ... connected.
    # server2 on 192.168.1.123: ... connected.
    # Checking databases mysql on server1 and mysql on server2
    #
    #                                        Defn    Row     Data
    # Type      Object Name                  Diff    Count   Check
    # ---------------------------------------------------------------------------
    # TABLE     backup_history               pass    FAIL    SKIP
    #
    # Row counts are not the same among mysql.backup_history and mysql.backup_history.
    # No primary key found.
    #
    # TABLE     backup_progress              pass    FAIL    SKIP
    #
    # Row counts are not the same among mysql.backup_progress and mysql.backup_progress.
    # No primary key found.
    #
    # TABLE     columns_priv                 pass    pass    pass
    # TABLE     db                           pass    pass    pass
    # TABLE     event                        pass    pass    pass
    # TABLE     func                         pass    pass    pass
    # TABLE     general_log                  pass    pass    SKIP
    #
    # No primary key found.
    #
    # TABLE     help_category                pass    pass    pass
    # TABLE     help_keyword                 pass    pass    pass
    # TABLE     help_relation                pass    pass    pass
    # TABLE     help_topic                   pass    pass    pass
    # TABLE     innodb_index_stats           pass    pass    pass
    # TABLE     innodb_table_stats           pass    pass    pass
    # TABLE     inventory                    pass    pass    pass
    # TABLE     ndb_binlog_index             pass    pass    pass
    # TABLE     plugin                       pass    pass    pass
    # TABLE     proc                         pass    pass    pass
    # TABLE     procs_priv                   pass    pass    pass
    # TABLE     proxies_priv                 pass    pass    pass
    # TABLE     servers                      pass    pass    pass
    # TABLE     slave_master_info            pass    pass    pass
    # TABLE     slave_relay_log_info         pass    pass    pass
    # TABLE     slave_worker_info            pass    pass    pass
    # TABLE     slow_log                     pass    pass    SKIP
    #
    # No primary key found.
    #
    # TABLE     tables_priv                  pass    pass    pass
    # TABLE     time_zone                    pass    pass    pass
    # TABLE     time_zone_leap_second        pass    pass    pass
    # TABLE     time_zone_name               pass    pass    pass
    # TABLE     time_zone_transition         pass    pass    pass
    # TABLE     time_zone_transition_type    pass    pass    pass
    # TABLE     user                         pass    pass    pass
    #
    # Database consistency check failed.
    #
    # ...done

For example, when you compare the mysql.backup_history tables, the original database will have two entries, as I ran mysqlbackup twice. But the second backup entry doesn't get entered until after the backup has occurred, so it isn't reflected in the backup files.

Original Server:

    mysql> select count(*) from mysql.backup_history;
    +----------+
    | count(*) |
    +----------+
    |        2 |
    +----------+
    1 row in set (0.00 sec)

Restored Server:

    mysql> select count(*) from mysql.backup_history;
    +----------+
    | count(*) |
    +----------+
    |        1 |
    +----------+
    1 row in set (0.00 sec)

For the mysql.backup_progress tables, the original database has ten rows, while the restored database has seven.

There are many options for using mysqlbackup, including (but not limited to) incremental backup, partial backup, compression, backup to tape, point-in-time recovery (PITR), partial restore, etc. If you are running MySQL in a production environment, then you should look at MySQL Enterprise Edition, which includes MySQL Enterprise Backup. Of course, you should always have a backup and recovery plan in place. Finally, if and when possible, practice restoring your backup on a regular basis, to make sure that if your server crashes, you can restore your database quickly.

Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world's most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.

  • Bash Arrays & Oracle
Last week, I wrote about how to use bash arrays and the MySQL database to create unit and integration test scripts. While the MySQL example was nice for some users, there were others who wanted me to show how to write bash shell scripts for Oracle unit and integration testing. That's what this blog post does. If you don't know much about bash shell, you should start with the prior post to learn about bash arrays, if-statements, and for-loops.

In this blog post I only cover how to implement a bash shell script that runs SQL scripts in silent mode and then queries the database in silent mode and writes the output to an external file. To run the bash shell script, you'll need the SQL files listed below under "Setup SQL Files". There are several differences from the MySQL version. For example, Oracle doesn't support a DROP IF EXISTS syntax and requires you to write anonymous blocks in its PL/SQL language; and you must explicitly issue a QUIT; statement even when running in silent mode, unlike MySQL, which implicitly issues an exit.

Setup SQL Files

The actor.sql file:

    -- Drop actor table and actor_s sequence.
    BEGIN
      FOR i IN (SELECT object_name
                ,      object_type
                FROM   user_objects
                WHERE  object_name IN ('ACTOR','ACTOR_S')) LOOP
        IF i.object_type = 'TABLE' THEN
          EXECUTE IMMEDIATE 'DROP TABLE ' || i.object_name || ' CASCADE CONSTRAINTS';
        ELSIF i.object_type = 'SEQUENCE' THEN
          EXECUTE IMMEDIATE 'DROP SEQUENCE ' || i.object_name;
        END IF;
      END LOOP;
    END;
    /

    -- Create an actor table.
    CREATE TABLE actor
    ( actor_id    NUMBER       CONSTRAINT actor_pk PRIMARY KEY
    , actor_name  VARCHAR(30)  NOT NULL );

    -- Create an actor_s sequence.
    CREATE SEQUENCE actor_s;

    -- Insert three rows.
    INSERT INTO actor VALUES (actor_s.NEXTVAL,'Chris Hemsworth');
    INSERT INTO actor VALUES (actor_s.NEXTVAL,'Chris Pine');
    INSERT INTO actor VALUES (actor_s.NEXTVAL,'Chris Pratt');

    -- Quit session.
    QUIT;

The film.sql file:

    -- Drop film table and film_s sequence.
    BEGIN
      FOR i IN (SELECT object_name
                ,      object_type
                FROM   user_objects
                WHERE  object_name IN ('FILM','FILM_S')) LOOP
        IF i.object_type = 'TABLE' THEN
          EXECUTE IMMEDIATE 'DROP TABLE ' || i.object_name || ' CASCADE CONSTRAINTS';
        ELSIF i.object_type = 'SEQUENCE' THEN
          EXECUTE IMMEDIATE 'DROP SEQUENCE ' || i.object_name;
        END IF;
      END LOOP;
    END;
    /

    -- Create a film table.
    CREATE TABLE film
    ( film_id    NUMBER       CONSTRAINT film_pk PRIMARY KEY
    , film_name  VARCHAR(30)  NOT NULL );

    -- Create a film_s sequence.
    CREATE SEQUENCE film_s;

    -- Insert five rows.
    INSERT INTO film VALUES (film_s.NEXTVAL,'Thor');
    INSERT INTO film VALUES (film_s.NEXTVAL,'Thor: The Dark World');
    INSERT INTO film VALUES (film_s.NEXTVAL,'Star Trek');
    INSERT INTO film VALUES (film_s.NEXTVAL,'Star Trek into Darkness');
    INSERT INTO film VALUES (film_s.NEXTVAL,'Guardians of the Galaxy');

    -- Quit session.
    QUIT;

The movie.sql file:

    -- Drop movie table and movie_s sequence.
    BEGIN
      FOR i IN (SELECT object_name
                ,      object_type
                FROM   user_objects
                WHERE  object_name IN ('MOVIE','MOVIE_S')) LOOP
        IF i.object_type = 'TABLE' THEN
          EXECUTE IMMEDIATE 'DROP TABLE ' || i.object_name || ' CASCADE CONSTRAINTS';
        ELSIF i.object_type = 'SEQUENCE' THEN
          EXECUTE IMMEDIATE 'DROP SEQUENCE ' || i.object_name;
        END IF;
      END LOOP;
    END;
    /

    -- Create a movie table.
    CREATE TABLE movie
    ( movie_id  NUMBER  CONSTRAINT movie_pk PRIMARY KEY
    , actor_id  NUMBER  CONSTRAINT movie_nn1 NOT NULL
    , film_id   NUMBER  CONSTRAINT movie_nn2 NOT NULL
    , CONSTRAINT actor_fk FOREIGN KEY (actor_id) REFERENCES actor (actor_id)
    , CONSTRAINT film_fk  FOREIGN KEY (film_id)  REFERENCES film (film_id));

    -- Create a movie_s sequence.
    CREATE SEQUENCE movie_s;

    -- Insert translation rows.
    INSERT INTO movie VALUES
    ( movie_s.NEXTVAL
    ,(SELECT actor_id FROM actor WHERE actor_name = 'Chris Hemsworth')
    ,(SELECT film_id FROM film WHERE film_name = 'Thor'));

    INSERT INTO movie VALUES
    ( movie_s.NEXTVAL
    ,(SELECT actor_id FROM actor WHERE actor_name = 'Chris Hemsworth')
    ,(SELECT film_id FROM film WHERE film_name = 'Thor: The Dark World'));

    INSERT INTO movie VALUES
    ( movie_s.NEXTVAL
    ,(SELECT actor_id FROM actor WHERE actor_name = 'Chris Pine')
    ,(SELECT film_id FROM film WHERE film_name = 'Star Trek'));

    INSERT INTO movie VALUES
    ( movie_s.NEXTVAL
    ,(SELECT actor_id FROM actor WHERE actor_name = 'Chris Pine')
    ,(SELECT film_id FROM film WHERE film_name = 'Star Trek into Darkness'));

    INSERT INTO movie VALUES
    ( movie_s.NEXTVAL
    ,(SELECT actor_id FROM actor WHERE actor_name = 'Chris Pratt')
    ,(SELECT film_id FROM film WHERE film_name = 'Guardians of the Galaxy'));

    -- Quit session.
    QUIT;

The tables.sql file lets you verify the creation of the actor, film, and movie tables:

    -- Set Oracle column width.
    COL table_name FORMAT A30 HEADING "Table Name"

    -- Query the tables.
    SELECT table_name FROM user_tables;

    -- Exit SQL*Plus.
    QUIT;

The results.sql file lets you see join results from the actor, film, and movie tables:

    -- Format query.
    COL film_actors FORMAT A40 HEADING "Actors in Films"

    -- Diagnostic query.
    SELECT a.actor_name || ', ' || f.film_name AS film_actors
    FROM actor a INNER JOIN movie m ON a.actor_id = m.actor_id
                 INNER JOIN film f ON m.film_id = f.film_id;

    -- Quit the session.
    QUIT;

If you don't have a sample test schema to use to test this script, you can create a sample schema with the following create_user.sql file. The file depends on the existence of a users and temp tablespace; the source code appears below under "Create User SQL Files".

Create User SQL Files

You can use the dbms_metadata.get_ddl function to discover the existence of the tablespaces. The following SQL syntax returns the SQL DDL statement that created a users or temp tablespace:

    SET LONG 200000
    SELECT dbms_metadata.get_ddl('TABLESPACE','USERS') FROM dual;

You create the sample database user with the following SQL statements:

    -- Drop the sample user.
    DROP USER sample CASCADE;

    -- Create the sample user.
    CREATE USER sample IDENTIFIED BY sample
    DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp
    QUOTA 50M ON users;

    -- Grant privileges to sample user.
    GRANT CREATE CLUSTER, CREATE INDEXTYPE, CREATE OPERATOR
    ,     CREATE PROCEDURE, CREATE SEQUENCE, CREATE SESSION
    ,     CREATE TABLE, CREATE TRIGGER, CREATE TYPE
    ,     CREATE VIEW TO sample;

The following list_oracle.sh shell script expects to receive the username, password, and fully qualified path, in that specific order.
The script names are entered manually in the array because this should be a unit test script. This is an insecure version of the list_oracle.sh script because you provide the password on the command line; it's better to provide the password as you run the script (see the variant after this listing).

    #!/usr/bin/bash

    # Assign user, password, and directory.
    username="${1}"
    password="${2}"
    directory="${3}"

    echo "User name:" ${username}
    echo "Password: " ${password}
    echo "Directory:" ${directory}

    # Define an array.
    declare -a cmd

    # Assign elements to an array.
    cmd[0]="actor.sql"
    cmd[1]="film.sql"
    cmd[2]="movie.sql"

    # Call the array elements.
    for i in ${cmd[*]}; do
      sqlplus -s ${username}/${password} @${directory}/${i} > /dev/null
    done

    # Connect and pipe the query result minus errors and warnings to the while loop.
    sqlplus -s ${username}/${password} @${directory}/tables.sql 2>/dev/null |

    # Read through the piped result until it's empty.
    while IFS='\n' read actor_name; do
      echo $actor_name
    done

    # Connect and pipe the query result minus errors and warnings to the while loop.
    sqlplus -s ${username}/${password} @${directory}/results.sql 2>/dev/null |

    # Read through the piped result until it's empty.
    while IFS='\n' read actor_name; do
      echo $actor_name
    done

You can run the shell script with the following syntax:

    ./list_oracle.sh sample sample /home/student/Code/bash/oracle > output.txt

You can then display the results from the output.txt file with the following command:

    cat output.txt

It will display the following output:

    User name: sample
    Password:  sample
    Directory: /home/student/Code/bash/oracle

    Table Name
    ------------------------------
    MOVIE
    FILM
    ACTOR

    Actors in Films
    ----------------------------------------
    Chris Hemsworth, Thor
    Chris Hemsworth, Thor: The Dark World
    Chris Pine, Star Trek
    Chris Pine, Star Trek into Darkness
    Chris Pratt, Guardians of the Galaxy

As always, I hope this helps those looking for a solution.
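Since the post flags the command-line password as insecure, here is one hedged way to adapt it: prompt for the password with read -s instead of taking it as an argument. The script name and layout otherwise follow the version above:

    #!/usr/bin/bash
    # Safer variant of list_oracle.sh: prompt for the password rather than
    # passing it on the command line, where it would be visible in `ps`
    # output and shell history.
    username="${1}"
    directory="${2}"

    # -s suppresses echo so the password is not displayed while typing.
    read -s -p "Password for ${username}: " password
    echo

    # Run the setup scripts silently, then the two diagnostic queries.
    for script in actor.sql film.sql movie.sql; do
      sqlplus -s ${username}/${password} @${directory}/${script} > /dev/null
    done
    sqlplus -s ${username}/${password} @${directory}/tables.sql 2>/dev/null
    sqlplus -s ${username}/${password} @${directory}/results.sql 2>/dev/null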

  • Updates To Our Fault Detection Algorithm
Unexpected downtime is one of your worst nightmares, but most attempts to find problems before they happen are threshold-based. Thresholds create noise, and alerts create false positives so often that you may miss actual problems.

When we began building VividCortex, we introduced Adaptive Fault Detection, a feature to detect problems through a combination of statistical anomaly detection and queueing theory. It's our patent-pending technique to detect system stalls in the database and disk. These are early indicators of serious problems, so it's really helpful to find them. (Note: "fault" is kind of an ambiguous term for some people. In the context we're using here, it means a stall/pause/freeze/lockup.)

The initial version of fault detection enabled us to find hidden problems nobody suspected, but as our customer base diversified, we found more situations that could fool it. We've released a new version that improves upon it. Let's see how.

How It Works

The old fault detection algorithm was based on statistics, exponentially weighted moving averages, and queueing theory. The new implementation ties together concepts from queueing theory, time series analysis and forecasting, and statistical machine learning. The addition of machine learning is what enables it to be even more adaptive (i.e. even less hard-coded).

Take a look at the following screenshot of some key metrics on a system during a fault. Notice how much chaos there is in the system overall; for example, the burst of network throughput just before and after the fault. Despite this, we would not have detected a fault if work were still getting done. We're able to reliably detect single-second problems in systems that a human would struggle to make any sense of.

Adaptive fault detection is not based on simple thresholds on metrics such as threads_running. Rather, its algorithm adapts dynamically to work for time series ranging from fairly stable (such as MySQL Concurrency shown above) to highly variable (such as MySQL Queries in the example above). Note how different those metrics are. What does "typical" even mean in such a system?

At the same time, we clearly identify and highlight both the causes and the effects in the system. For example, a screenshot of a different part of the user interface for the same time period highlights how badly a variety of queries were impacted. The fault stalled them. If we drill down into the details page for one of those queries, we can see that the average latency around the time of the fault is significantly higher, implying that it's taking more time to get the same amount of work done. That's an example of a very short stall, but long stalls are important too.

Detecting Longer Faults

Some customers had long-building, slow-burn stalls in their systems. The new fault detection algorithm is better able to detect such multi-second faults. The chart below shows a multi-second fault. The algorithm can also detect even longer faults. Sometimes these are subtle unless you "zoom out" to see how things have slowly been getting stuck over time. Trick question: what's stalling our server here? Okay, it's xtrabackup. Not really a trick question :-)

You might think this kind of thing is easy to detect. "Just throw an alarm when threads_running is more than 50," you say. If you try that, though, you'll see why we invented Adaptive Fault Detection. It's not easy to balance sensitivity and specificity.

Other Improvements

In addition to the improvements you'll see, we've made a lot of changes to the code as well.
Because the code is better organized and diagnostic tools are readily available, we can easily add support for different kinds of faults, and because it is testable, we can make sure we are truly measuring system work, the monitoring metric that matters most. We occasionally find new and interesting kinds of stalls that we want to capture, and we are now in a position to detect such tricky scenarios more generically.

In summary, the improved fault detection algorithm finds entirely new classes of previously undetectable problems for our customers: bona fide "perfect storms" of complex configuration and query interactions. If you would like to learn more about Adaptive Fault Detection, read our support docs, and if you are interested in monitoring the work your system does, sign up for a free trial.