Sunday, 05 Jul 2015
 
 

Planet MySQL - http://www.planetmysql.org/

  • Testing MySQL 5.7.7-rc with memory and disc full conditions
    With MySQL 5.7 there will be many improvements and changes. The most valuable ones are:
      1. The "General Tablespace" feature: from now on you can place tables in separate, shared tablespaces. Previously the only options were the system tablespace and InnoDB file-per-table. Check for further reading: General Tablespaces; Create statement: CREATE TABLESPACE.
      2. Resizing the InnoDB buffer pool online: prior to MySQL 5.7, changing the innodb_buffer_pool_size variable required a restart; from now on the buffer pool can be resized dynamically, without a restart. Read the documentation: innodb-buffer-pool-online-resize.
    (Normal usage of both features is sketched at the end of this item.) To break these new things, we need to create conditions where memory or disk is full. First, let's try to resize the InnoDB buffer pool without having sufficient memory. Note: MySQL installation: debug enabled. Started MySQL as:
      (gdb) run --defaults-file=/opt/mysql/my.cnf --basedir=/opt/mysql --datadir=/opt/mysql/datadir --plugin-dir=/opt/mysql/lib/plugin --user=mysql --log-error=/opt/mysql/datadir/centos7-1_vm.err --pid-file=/opt/mysql/datadir/mysqld-new.pid --socket=/opt/mysql/datadir/mysqld-new.sock --port=3306
    Then try to resize:
      mysql> SET @@GLOBAL.innodb_buffer_pool_size=9556544565345346;
      Query OK, 0 rows affected, 2 warnings (0.00 sec)
    After a few attempts it gets stuck with a SIGABRT signal:
      Program received signal SIGABRT, Aborted.
      [Switching to Thread 0x7fffb77f6700 (LWP 17145)]
      0x00007ffff61675d7 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
      56        return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
      (gdb) bt
      #0  0x00007ffff61675d7 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
      #1  0x00007ffff6168cc8 in __GI_abort () at abort.c:90
      #2  0x0000000001adbe0f in ut_dbg_assertion_failed (expr=0x0, file=0x21956d8 "/root/mysql-5.7.7-rc/storage/innobase/ut/ut0ut.cc", line=906) at /root/mysql-5.7.7-rc/storage/innobase/ut/ut0dbg.cc:67
      #3  0x0000000001ae123e in ib::fatal::~fatal (this=0x7fffb77f49e0, __in_chrg=<optimized out>) at /root/mysql-5.7.7-rc/storage/innobase/ut/ut0ut.cc:906
      #4  0x00000000018f18e8 in ut_allocator<unsigned char>::allocate (this=0x7fffb77f4ba0, n_elements=14238670808, hint=0x0, file=0x21cbb10 "/root/mysql-5.7.7-rc/storage/innobase/ha/hash0hash.cc", set_to_zero=false, throw_on_error=false) at /root/mysql-5.7.7-rc/storage/innobase/include/ut0new.h:328
      #5  0x0000000001bed775 in hash0_create (n=1756528640) at /root/mysql-5.7.7-rc/storage/innobase/ha/hash0hash.cc:284
      #6  0x000000000194769d in lock_sys_resize (n_cells=1756528640) at /root/mysql-5.7.7-rc/storage/innobase/lock/lock0lock.cc:478
      #7  0x0000000001b3dce1 in buf_pool_resize () at /root/mysql-5.7.7-rc/storage/innobase/buf/buf0buf.cc:2669
      #8  0x0000000001b3e0f5 in buf_resize_thread (arg=0x0) at /root/mysql-5.7.7-rc/storage/innobase/buf/buf0buf.cc:2749
      #9  0x00007ffff7bc6df5 in start_thread (arg=0x7fffb77f6700) at pthread_create.c:308
      #10 0x00007ffff62281ad in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
    See the related verified bug report: #77564.
    How about creating a "General Tablespace" when there is no free disk space? Again started with:
      [root@centos7_vm ~]# gdb /opt/mysql/bin/mysqld
      (gdb) run --defaults-file=/opt/mysql/my.cnf --basedir=/opt/mysql --datadir=/opt/mysql/datadir --plugin-dir=/opt/mysql/lib/plugin --user=mysql --log-error=/opt/mysql/datadir/centos7_vm.err --pid-file=/opt/mysql/datadir/mysqld-new.pid --socket=/opt/mysql/datadir/mysqld-new.sock --port=3306
    Try to run CREATE TABLESPACE:
      mysql> CREATE TABLESPACE `ts2` ADD DATAFILE 'ts2.ibd' Engine=InnoDB;
    In the error log there is a mention of the full disk:
      2015-06-30T05:34:41.379790Z 2 [ERROR] InnoDB: Encountered a problem with file ./ts2.ibd
      2015-06-30T05:34:41.379865Z 2 [ERROR] InnoDB: Disk is full. Try to clean the disk to free space.
      2015-06-30T05:34:41.379910Z 2 [ERROR] InnoDB: Operating system error number 28 in a file operation.
      2015-06-30T05:34:41.379937Z 2 [ERROR] InnoDB: Error number 28 means 'No space left on device'.
      2015-06-30T05:34:41.379952Z 2 [Note] InnoDB: Some operating system error numbers are described at http://dev.mysql.com/doc/refman/5.7/en/operating-system-error-codes.html
      2015-06-30T05:34:41.393247Z 2 [ERROR] InnoDB: Cannot create file './ts2.ibd'
    From GDB we will see a SIGSEGV signal:
      Program received signal SIGSEGV, Segmentation fault.
      [Switching to Thread 0x7fffddf7e700 (LWP 2787)]
      __strnlen_sse2 () at ../sysdeps/x86_64/strnlen.S:34
      34        pcmpeqb (%rdi), %xmm2
      (gdb) bt
      #0  __strnlen_sse2 () at ../sysdeps/x86_64/strnlen.S:34
      #1  0x0000000001dc18f2 in process_str_arg (cs=0x2ba3ea0 <my_charset_utf8_general_ci>, to=0x7fffddf7bfdb "\002", end=0x7fffddf7c1cf "", width=192, par=0x1 <Address 0x1 out of bounds>, print_type=0) at /root/mysql-5.7.7-rc/strings/my_vsnprintf.c:194
      #2  0x0000000001dc32e6 in my_vsnprintf_ex (cs=0x2ba3ea0 <my_charset_utf8_general_ci>, to=0x7fffddf7bfdb "\002", n=512, fmt=0x2d32a12 "s' is full", ap=0x7fffddf7c1d0) at /root/mysql-5.7.7-rc/strings/my_vsnprintf.c:607
      #3  0x000000000189d0e9 in my_error (nr=135, MyFlags=0) at /root/mysql-5.7.7-rc/mysys/my_error.c:206
      #4  0x0000000001622428 in mysql_alter_tablespace (thd=0x7fffcc000e20, ts_info=0x7fffcc022080) at /root/mysql-5.7.7-rc/sql/sql_tablespace.cc:191
      #5  0x000000000158e104 in mysql_execute_command (thd=0x7fffcc000e20) at /root/mysql-5.7.7-rc/sql/sql_parse.cc:4432
      #6  0x000000000158ff62 in mysql_parse (thd=0x7fffcc000e20, parser_state=0x7fffddf7d5a0) at /root/mysql-5.7.7-rc/sql/sql_parse.cc:5159
      #7  0x0000000001585cf1 in dispatch_command (command=COM_QUERY, thd=0x7fffcc000e20, packet=0x7fffcc006f71 "CREATE TABLESPACE `ts2` ADD DATAFILE 'ts2.ibd' Engine=InnoDB", packet_length=60) at /root/mysql-5.7.7-rc/sql/sql_parse.cc:1249
      #8  0x0000000001584b21 in do_command (thd=0x7fffcc000e20) at /root/mysql-5.7.7-rc/sql/sql_parse.cc:835
      #9  0x00000000016adcff in handle_connection (arg=0x3c92060) at /root/mysql-5.7.7-rc/sql/conn_handler/connection_handler_per_thread.cc:298
      #10 0x0000000001cd9905 in pfs_spawn_thread (arg=0x3b4dcd0) at /root/mysql-5.7.7-rc/storage/perfschema/pfs.cc:2147
      #11 0x00007ffff7bc6df5 in start_thread (arg=0x7fffddf7e700) at pthread_create.c:308
      #12 0x00007ffff62281ad in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
    Related bug report: #77556. If you have your own test cases, please share them. The post Testing MySQL 5.7.7-rc with memory and disc full conditions appeared first on Azerbaijan MySQL UG.
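    For contrast with the failure case above, here is a minimal sketch of how the online buffer pool resize is normally used on a 5.7 server that does have enough memory; the 2 GB value is only an example and is not from the original post:
      mysql> SET GLOBAL innodb_buffer_pool_size = 2147483648;
      mysql> SHOW STATUS LIKE 'Innodb_buffer_pool_resize_status';
    The resize runs in the background in chunks of innodb_buffer_pool_chunk_size, and the status variable reports its progress.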
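    Likewise, a minimal sketch of the general tablespace feature when the disk is not full; the tablespace and table names here are illustrative only:
      mysql> CREATE TABLESPACE `ts1` ADD DATAFILE 'ts1.ibd' ENGINE=InnoDB;
      mysql> CREATE TABLE t1 (id INT PRIMARY KEY) TABLESPACE `ts1`;
      mysql> DROP TABLESPACE `ts1`;  -- only succeeds once the tablespace contains no tables
    Note that a general tablespace can only be dropped after every table stored in it has been dropped or moved elsewhere.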

  • Log Buffer #430: A Carnival of the Vanities for DBAs
    This Log Buffer Edition cuts through the crowd and picks some of the outstanding blog posts from Oracle, SQL Server and MySQL.
    Oracle:
      • Continuous Delivery (CD) is a software engineering approach in which teams keep producing valuable software in short cycles and ensure that the software can be reliably released at any time.
      • Query existing HBase tables with SQL using Apache Phoenix.
      • Even though WebLogic with Active GridLink is Oracle's suggested approach to deploy Java applications that use Oracle Real Application Clusters (RAC), there might be scenarios in which you can't make that choice (e.g. certification issues, licensing, library dependency, etc.).
      • OSB & MTOM: When to use Include Binary Data by Reference or Value.
      • Ever used SoapUI to test services on multiple environments? Then you probably ran into the job of ever changing the endpoints to the hosts of the particular environment: development, test, acceptance, production (although I expect you wouldn't use SoapUI against a prod-env). This is not that hard if you have only one service endpoint in the project.
    SQL Server:
      • Using DAX to create SSRS reports: The Basics.
      • Getting to know your customers better – cohort analysis and RFM segmentation in R.
      • Using the T-SQL PERCENTILE Analytic Functions in SQL Server 2000, 2005 and 2008.
      • Schema-Based Access Control for SQL Server Databases.
      • How to Fix a Corrupt MSDB SQL Server Database.
    MySQL:
      • MySQL Enterprise Audit – parsing audit information from log files, inserting into a MySQL table.
      • Proposal to deprecate MySQL INTEGER display width and ZEROFILL.
      • Using Cgroups to Limit MySQL and MongoDB memory usage.
      • Slave election is a popular HA architecture; the first MySQL/MariaDB toolkit to manage switchover and failover in a correct way was introduced by Yoshinori Matsunobu in MHA.
      • Setting up environments, starting processes, and monitoring these processes on multiple machines can be time consuming and error prone.
    Learn more about Pythian's expertise in Oracle, SQL Server & MySQL, as well as the author Fahd Mirza.

  • MariaDB 10.0.20 Overview and Highlights
    MariaDB 10.0.20 was recently released, and is available for download here: https://downloads.mariadb.org/mariadb/10.0.20/ This is the eleventh GA release of MariaDB 10.0, and the 21st overall release of MariaDB 10.0. There were no major functionality changes, but there was one security fix, 6 crashing bugs fixed, some general upstream fixes, and quite a few bug fixes, so let me cover the highlights:
      • Security fix: the client command-line option --ssl-verify-server-cert (and the MYSQL_OPT_SSL_VERIFY_SERVER_CERT option of the client API), when used together with --ssl, will ensure that the established connection is SSL-encrypted and the MariaDB server has a valid certificate. This fixes CVE-2015-3152. (An example client invocation is sketched after this item.)
      • Crashing bug: mysql_upgrade crashes the server with REPAIR VIEW (MDEV-8115).
      • Crashing bug: server crashes in intern_plugin_lock on concurrently installing the semisync plugin and setting rpl_semi_sync_master_enabled (MDEV-363).
      • Crashing bug: server crash on updates with joins, still present on 10.0.18 (MDEV-8114).
      • Crashing bug: too large scale in DECIMAL dynamic column getter crashes mysqld (MDEV-7505).
      • Crashing bug: server crashes in get_server_from_table_to_cache on empty name (MDEV-8224).
      • Crashing bug: FreeBSD-specific bug that caused a segfault on FreeBSD 10.1 x86 (MDEV-7398).
      • XtraDB upgraded to 5.6.24-72.2
      • InnoDB updated to InnoDB-5.6.25
      • Performance Schema updated to 5.6.25
      • TokuDB upgraded to 7.5.7
    Given the security fix, you may want to consider upgrading if that particular CVE is of concern to you. Also, please review the crashing bugs to see if they might affect you, and upgrade if so. And if running TokuDB, XtraDB, InnoDB, or Performance Schema, you may also want to benefit from those fixes, as well as the new MariaDB fixes (139 in all). You can read more about the 10.0.20 release here: https://mariadb.com/kb/en/mariadb-10020-release-notes/ And if interested, you can review the full list of changes in 10.0.20 (changelogs) here: https://mariadb.com/kb/en/mariadb-10020-changelog/ Hope this helps.
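    For illustration only (not from the release notes), a client invocation that exercises the fixed behaviour might look like the following, where the host name and user are placeholders; with both options given, the connection should be refused unless it is SSL-encrypted and the server presents a valid certificate:
      mysql --ssl --ssl-verify-server-cert -h db.example.com -u app -p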

  • Selecting Representative Samples From A Stream Of Queries
    Sampling is hard. This is the title of a talk I gave at a meetup in Boston a few weeks back. But what's so hard about sampling anyway? To begin with, let's clarify what I mean by sampling. It's a bit ambiguous, because sampling could apply to a few different things one does with time series data. In this context, I'll be talking about capturing individual events from a large, diverse set of events (queries). Here's a picture of a simple stream of events over time. Notice that they are not all the same: some of them are higher or lower than others. This is a simple illustration of some variability in the stream.
    The way VividCortex generates query insight is by computing metrics about the events. We compute running sums of things like the latency, errors, and the count of queries we see. Once a second we send the sums to our APIs and reset all the counters. That's how we generate per-second metrics about the events. The metrics are a lot less data than capturing every individual event, so this is a lot more efficient. The problem is, if we now just discard the original events we lose a lot of information. In the context of a database server (currently we support PostgreSQL, MySQL, MongoDB, and Redis) it's really important to have the original query, with its properties such as the user, latency, origin host, and above all the SQL or other text of the query. We really want to keep some of the queries from the stream. That's the goal.
    The quandary is that this is a lot harder than it seems. One reason is that we have several requirements for the sampling. We want it to:
      • Be representative
      • Sample enough queries to be useful, but not so many that it overloads us
      • Achieve the desired overall sampling rate
      • Sample different kinds of events in a balanced way (e.g. frequent events should not starve the sampling of rare ones)
      • Be biased towards queries we know are important (e.g. they have errors)
      • Allow blacklisting and whitelisting
    To understand why these requirements are so important, consider what happens when a query has some kind of behavior that needs to be fixed. It often shows up as a specific type of pattern. Here's a problematic query from an early version of our time series backend: if we didn't select representative samples of that type of query, that pattern would be hidden. This is also made much more complicated by the fact that streams of queries to a database server are not simple. They're extremely rich and varied. This might be a better picture: imagine how that picture would look with a million different kinds of queries that all vary in frequency, importance, latency, and so on. In this context, the goals and requirements conflict in some important ways:
      • Rate limiting conflicts with biasing towards important queries
      • Rate limits overall conflict with sampling rates of each kind of query
      • Rare queries and frequent queries conflict
      • Efficiency and correctness are a constant tradeoff
    Our first sampling implementation was a series of equations that kept track of how long it had been since we had selected a sample for each category of queries. As time passed, we became more and more likely to choose each query we saw, until we chose one and reset the probabilities again. It sounds good, but it didn't work. Some queries were way oversampled, others never got sampled, and we got enormous numbers of samples from some customers and not many from others. The user experience is the most important thing to keep in mind here. When a user looks into a query's details, they should be able to look at a reasonable time range and get a reasonable number of samples to examine, EXPLAIN, and so on.
    Our naive approach wasn't the only one. From past experience I had seen a variety of ways to do this. The easiest way (and the one I built into pt-query-digest by default) is to select the "worst" sample you see. But that's not representative at all. You'll end up with nothing but the outliers, and a lot of queries have multi-modal distributions, so you'd never see a lot of the most representative samples. Selecting every Nth event is another simple approach, but it creates disparities between rare and frequent queries. The ideal approach is to sample a random event per time period per type of query (the basic idea is sketched below), but our implementation wasn't working as we wanted it to. The solution turned out to be an interesting combination of math, statistics, and approximate streaming algorithms (a sketch). I'll write about that in a followup blog post, since this one is getting kind of long already. If you're interested in this topic, Preetam and I presented on it at Monitorama, and the video is online. Till next time!
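    To make the "random event per time period per type of query" idea concrete, here is a minimal sketch in Perl (definitely not VividCortex's production code; the category and event handling are simplified assumptions for illustration). It keeps one uniformly chosen sample per query category per period using size-1 reservoir sampling, then emits and resets once per period:
      #!/usr/bin/perl
      use strict;
      use warnings;

      my %sample;   # category => the event currently chosen for this period
      my %seen;     # category => number of events seen so far this period

      # Call this for every event in the stream.
      sub observe {
          my ($category, $event) = @_;
          $seen{$category}++;
          # Keep the new event with probability 1/N, so after N events each
          # one has had an equal chance of being this period's sample.
          $sample{$category} = $event if rand() < 1 / $seen{$category};
      }

      # Call this once per time period (e.g. once a second): emit and reset.
      sub flush_period {
          for my $category (sort keys %sample) {
              print "$category => $sample{$category}\n";
          }
          %sample = ();
          %seen   = ();
      }

      # Tiny usage example with made-up events.
      observe('select_users', 'SELECT * FROM users WHERE id = 1');
      observe('select_users', 'SELECT * FROM users WHERE id = 2');
      observe('insert_logs',  'INSERT INTO logs VALUES (1)');
      flush_period();
    This naive per-category scheme still does not address the rate-limiting and bias requirements listed above, which is exactly where the streaming-algorithm approach mentioned in the post comes in.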

  • MySQL Enterprise Audit – parsing audit information from log files, inserting into MySQL table
    The MySQL Enterprise Audit plug-in is part of the MySQL Enterprise Edition (available through a paid license). Basically, Enterprise Audit tracks everything that is happening on your MySQL server, and can be used to protect/detect the misuse of information, and to meet popular compliance regulations including HIPAA, Sarbanes-Oxley, and the PCI Data Security Standard.
    MySQL Enterprise Audit uses the open MySQL Audit API to enable standard, policy-based monitoring and logging of connection and query activity executed on specific MySQL servers. Designed to meet the Oracle audit specification, MySQL Enterprise Audit provides an out-of-box, easy-to-use auditing and compliance solution for applications that are governed by both internal and external regulatory guidelines. When installed, the audit plugin enables MySQL Server to produce a log file containing an audit record of server activity. The log contents include when clients connect and disconnect, and what actions they perform while connected, such as which databases and tables they access. (From https://dev.mysql.com/doc/refman/5.6/en/audit-log-plugin.html)
    When you enable MySQL Enterprise Audit, log files are generated in your MySQL data directory. You can use tools like MySQL Workbench (Enterprise Edition) or Oracle Audit Vault to import the log data, to view the information and to generate reports.
    I was talking with a client, and he wanted to know if the audit data could be stored in a table. Currently (as of MySQL 5.6.25), the audit information is stored as XML in the audit log files. So, I decided to write a quick Perl script that would parse the XML log files and insert the information into a MySQL database.
    You will need to set the size of your audit log files in your my.cnf or my.ini configuration file via the audit_log_rotate_on_size variable (a sample my.cnf line is sketched at the end of this post). You might need to adjust the size of your log files based upon database activity and how well the script parses the log files. If your log files are very large, the Perl script might have issues processing them, and you might want to decrease the size of your log files and run the script more frequently.
    CAVEAT: Enterprise Audit does require a license from MySQL. If you are interested in an Enterprise subscription, contact me via the comment section below. If you are an Enterprise customer, you will need to configure Enterprise Audit first. See the Enterprise Audit online documentation page for more information, or contact MySQL Support.
    For the data fields, I used the audit log file format information found at The Audit Log File page on MySQL.com. My MySQL server doesn't have a lot of activity, so I tried to configure the size of the data fields as best as possible to accommodate the possible size of the data in each field. There may be instances where you will have to increase the size of these fields or change their data types. The largest field is the SQL_TEXT field, which will contain your SQL statements. Every table has a max row size of 65,535 bytes. So, the largest possible size of the SQL_TEXT field for this example is around 63,200 bytes (65,535 bytes minus the sum of the sizes of all of the other fields, and minus the 1-byte or 2-byte length prefix used for each varchar field). In this example, the SQL_TEXT field is set to 8,096 bytes, so you may need to increase or decrease this value.
    I used varchar data types for each field, excluding the primary key field named ID. I did not spend a lot of time on the database schema, so you might want to modify it a little.
    I am sure that some of the fields are integers, but I did not have enough data in my log files to positively determine all of the possible values for each field. I did read the online manual, and it stated that CONNECTION_ID, SERVER_ID, STATUS, STATUS_CODE and VERSION were unsigned integers – but I left them as varchar.
    NOTICE: This script requires the use of the new format for the audit log files, which is available in MySQL versions 5.6.20 or later.
    I created a database along with two tables: one to store the log file information, and a history table to keep track of which files had already been parsed and inserted into MySQL, as well as the number of log file entries. The CREATE DATABASE and CREATE TABLE syntax is as follows:
      CREATE DATABASE `audit_information` /*!40100 DEFAULT CHARACTER SET latin1 */;

      CREATE TABLE `audit_parsed` (
        `ID` int(11) NOT NULL AUTO_INCREMENT,
        `COMMAND_CLASS` varchar(64) DEFAULT NULL,
        `CONNECTIONID` varchar(32) DEFAULT NULL,
        `DB_NAME` varchar(64) DEFAULT NULL,
        `HOST_NAME` varchar(256) DEFAULT NULL,
        `IP_ADDRESS` varchar(16) DEFAULT NULL,
        `MYSQL_VERSION` varchar(64) DEFAULT NULL,
        `COMMAND_NAME` varchar(64) DEFAULT NULL,
        `OS_LOGIN` varchar(64) DEFAULT NULL,
        `OS_VERSION` varchar(256) DEFAULT NULL,
        `PRIV_USER` varchar(16) DEFAULT NULL,
        `PROXY_USER` varchar(16) DEFAULT NULL,
        `RECORD_ID` varchar(64) DEFAULT NULL,
        `SERVER_ID` varchar(32) DEFAULT NULL,
        `SQL_TEXT` varchar(8096) DEFAULT NULL,
        `STARTUP_OPTIONS` varchar(1024) DEFAULT NULL,
        `COMMAND_STATUS` varchar(64) DEFAULT NULL,
        `STATUS_CODE` varchar(11) DEFAULT NULL,
        `DATE_TIMESTAMP` varchar(24) DEFAULT NULL,
        `USER_NAME` varchar(128) DEFAULT NULL,
        `LOG_VERSION` varchar(11) DEFAULT NULL,
        PRIMARY KEY (`ID`)
      ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1;

      CREATE TABLE `audit_history` (
        `ID` int(11) NOT NULL AUTO_INCREMENT,
        `AUDIT_LOG_NAME` varchar(64) DEFAULT NULL,
        `PARSED_DATE_TIME` datetime DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
        `LOG_ENTRIES` int(11) DEFAULT NULL,
        PRIMARY KEY (`ID`)
      ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1;
    The Perl script finds the non-active log files (which end in .xml – example: audit.log.14357895017796690.xml), parses the data, creates an SQL file with INSERT statements, imports the data via the mysql command-line program, and then moves the log file(s) and SQL file(s) to a directory. The history table records which files have been processed, so you don't accidentally process the same file twice.
    In the beginning of the Perl script, there are several values you need to replace to match your system. The values are under the section titled "values needed".
    Here is the Perl script (named audit.pl). Note that a few lines were damaged when this post was syndicated; those spots are reconstructed below and marked as assumptions in the comments:
      #!/usr/bin/perl
      # audit.pl

      use DBI;
      use CGI;
      use XML::Simple;

      #----------------------------------------------------------
      # values needed
      $Database = "audit_information";
      $MYSQL_DATA_DIR = "/usr/local/mysql/data";
      $MySQL_Host_IP_Name = "192.168.1.2";
      $mysql_user = "root";
      $mysql_password = "password_needed";

      # directory to store old audit files after parsing
      $audit_directory = "$MYSQL_DATA_DIR/audit_files";

      # make an audit_files directory if one does not exist
      mkdir($audit_directory) unless(-d $audit_directory);
      #----------------------------------------------------------

      #----------------------------------------------------------
      # for each file do this
      # (glob below is an assumption: pick up the rotated audit logs ending in .xml)
      @files = <$MYSQL_DATA_DIR/audit.log.*.xml>;

      foreach $file_name_to_parse (@files) {

         #----------------------------------------------------------
         # check to see if file has already been parsed
         $dbh1 = ConnectToMySql($Database);
         $query1 = "select AUDIT_LOG_NAME from audit_history where AUDIT_LOG_NAME = '$file_name_to_parse'";
         $sth1 = $dbh1->prepare($query1);
         $sth1->execute();

         while (@data = $sth1->fetchrow_array()) {
            $audit_log_name = $data[0];
         }

         # if length of audit_log_name is less than 1, process file
         if (length($audit_log_name) < 1) {

            # optional print output - uncomment if desired
            # print "Parsing - $file_name_to_parse\n";

            # name of the SQL file that will hold the INSERT statements
            # (assumption: the .xml suffix is replaced with _parsed.sql, matching the sample output below)
            ($PARSED_FILE = $file_name_to_parse) =~ s/\.xml$/_parsed.sql/;

            open(LOGOUT, "> $PARSED_FILE") or die print "Couldn't open log_file: $!";

            $count = 0;

            # XML::Simple variable - SuppressEmpty => 1 ignores empty values
            $xml = XML::Simple->new(SuppressEmpty => 1);
            $data = $xml->XMLin("$file_name_to_parse");

            foreach $info (@{$data->{AUDIT_RECORD}}) {

               # replace tick marks ' with \' in the SQL TEXT
               $info->{"SQLTEXT"} =~ s/'/\\'/g;

               print LOGOUT "INSERT INTO audit_information.AUDIT_PARSED (COMMAND_CLASS, CONNECTIONID, DB_NAME, HOST_NAME, IP_ADDRESS, MYSQL_VERSION, COMMAND_NAME, OS_LOGIN, OS_VERSION, PRIV_USER, PROXY_USER, RECORD_ID, SERVER_ID, SQL_TEXT, STARTUP_OPTIONS, COMMAND_STATUS, STATUS_CODE, DATE_TIMESTAMP, USER_NAME, LOG_VERSION) values ('" . $info->{"COMMAND_CLASS"} . "', '" . $info->{"CONNECTION_ID"} . "', '" . $info->{"DB"} . "', '" . $info->{"HOST"} . "', '" . $info->{"IP"} . "', '" . $info->{"MYSQL_VERSION"} . "', '" . $info->{"NAME"} . "', '" . $info->{"OS_LOGIN"} . "', '" . $info->{"OS_VERSION"} . "', '" . $info->{"PRIV_USER"} . "', '" . $info->{"PROXY_USER"} . "', '" . $info->{"RECORD_ID"} . "', '" . $info->{"SERVER_ID"} . "', '" . $info->{"SQLTEXT"} . "', '" . $info->{"STARTUP_OPTIONS"} . "', '" . $info->{"STATUS"} . "', '" . $info->{"STATUS_CODE"} . "', '" . $info->{"TIMESTAMP"} . "', '" . $info->{"USER"} . "', '" . $info->{"VERSION"} . "');\n";

               $count++;

            # end foreach $info (@{$data->{AUDIT_RECORD}})
            }

            # close the parsed SQL file before loading it
            close(LOGOUT);

            # load parsed file into MySQL - hide warnings
            # (redirection is an assumption: feed the parsed SQL file to the mysql client)
            system("mysql -u$mysql_user -p$mysql_password < $PARSED_FILE > /dev/null 2>&1");

            $dbh2 = ConnectToMySql($Database);
            $query2 = "insert into audit_information.audit_history (AUDIT_LOG_NAME, LOG_ENTRIES) values ('$file_name_to_parse', '$count')";

            # optional print output - uncomment if desired
            # print "$query2\n";

            $sth2 = $dbh2->prepare($query2);
            $sth2->execute();

            # optional print output - uncomment if desired
            # print "Moving audit log ($file_name_to_parse) and log file ($PARSED_FILE) to $audit_directory.\n";

            # strip directories off $file_name_to_parse
            @file_name_to_move_array = split("\/",$file_name_to_parse);
            $directory_count = $#file_name_to_move_array;
            $file_name_to_move = $file_name_to_move_array[$directory_count];

            # optional print output - uncomment if desired
            # print "mv $file_name_to_move $file_name_to_parse\n";
            # print "mv $PARSED_FILE $audit_directory\n";

            # move audit log files and parsed log files to $audit_directory
            system("mv $file_name_to_parse $audit_directory");
            system("mv $PARSED_FILE $audit_directory");

         # end - if (length($audit_log_name) < 1)
         }

      # end - foreach $file_name_to_parse (@files)
      }

      sub ConnectToMySql {
         # (connection string is an assumption: DBI/mysql connection to the host in $MySQL_Host_IP_Name)
         my ($db) = @_;
         $connectionInfo = "dbi:mysql:$db;$MySQL_Host_IP_Name:3306";
         $l_dbh = DBI->connect($connectionInfo,$mysql_user,$mysql_password);
         return $l_dbh;
      }
    It should not matter where you execute audit.pl, as long as you have correctly entered the required values in the script. You might get errors if you try to run this script on a log file that has not been rotated, which is the current log file in your MySQL data directory. The current log file is named audit.log.
      # pwd
      /usr/local/mysql/data
      # ls -l audit.log
      -rw-rw----  1 mysql  _mysql  9955118 Jul  2 15:25 audit.log
    The script will only work on files ending in .xml. For testing, I used four small (and identical) audit log files:
      # pwd
      /usr/local/mysql/data
      # ls -l *xml
      -rw-rw----  1 mysql  wheel   15508 Jul  2 12:20 audit.log.14357895017796690.xml
      -rw-r-----  1 mysql  _mysql  15508 Jul  2 13:46 audit.log.14357895017796691.xml
      -rw-r-----  1 mysql  _mysql  15508 Jul  2 13:46 audit.log.14357895017796692.xml
      -rw-r-----  1 mysql  _mysql  15508 Jul  2 13:46 audit.log.14357895017796693.xml
    I have commented-out the print statements in the Perl script, but if you uncomment them, running the script gives you this output for each log file:
      # perl audit.pl
      Parsing - /usr/local/mysql/data/audit.log.14357895017796690.xml
      insert into audit_information.audit_history (AUDIT_LOG_NAME, LOG_ENTRIES) values ('/usr/local/mysql/data/audit.log.14357895017796690.xml', '34')
      Moving audit log (/usr/local/mysql/data/audit.log.14357895017796690.xml) and log file (/usr/local/mysql/data/audit.log.14357895017796690_parsed.sql) to /usr/local/mysql/data/audit_files.
      mv audit.log.14357895017796690.xml /usr/local/mysql/data/audit.log.14357895017796690.xml
      mv /usr/local/mysql/data/audit.log.14357895017796690_parsed.sql /usr/local/mysql/data/audit_files
      ....
    After running my test script, the following data is what is in the audit_history table:
      mysql> use audit_information
      Database changed
      mysql> select * from audit_history;
      +----+--------------------------------------------------------+---------------------+-------------+
      | ID | AUDIT_LOG_NAME                                         | PARSED_DATE_TIME    | LOG_ENTRIES |
      +----+--------------------------------------------------------+---------------------+-------------+
      |  1 | /usr/local/mysql/data/audit.log.14357895017796690.xml | 2015-07-02 15:25:07 |          34 |
      |  2 | /usr/local/mysql/data/audit.log.14357895017796691.xml | 2015-07-02 15:25:08 |          34 |
      |  3 | /usr/local/mysql/data/audit.log.14357895017796692.xml | 2015-07-02 15:25:08 |          34 |
      |  4 | /usr/local/mysql/data/audit.log.14357895017796693.xml | 2015-07-02 15:25:09 |          34 |
      +----+--------------------------------------------------------+---------------------+-------------+
      4 rows in set (0.00 sec)
    And here is an example of one line from the audit_parsed table:
      mysql> select * from audit_parsed limit 1 \G
      *************************** 1. row ***************************
                   ID: 1
        COMMAND_CLASS: select
         CONNECTIONID: 10093
              DB_NAME:
            HOST_NAME: localhost
           IP_ADDRESS: 127.0.0.1
        MYSQL_VERSION:
         COMMAND_NAME: Query
             OS_LOGIN:
           OS_VERSION:
            PRIV_USER:
           PROXY_USER:
            RECORD_ID: 1614933_2015-07-01T22:08:58
            SERVER_ID:
             SQL_TEXT: SELECT (UNIX_TIMESTAMP(now()) - CAST(variable_value AS SIGNED))*1000 as serverStartMillis FROM information_schema.global_status WHERE variable_name='uptime'
      STARTUP_OPTIONS:
       COMMAND_STATUS: 0
          STATUS_CODE: 0
       DATE_TIMESTAMP: 2015-07-01T22:08:58 UTC
            USER_NAME: root[root] @ localhost [127.0.0.1]
          LOG_VERSION:
      1 row in set (0.00 sec)
    After parsing the log files, you can then write your own queries for searching through your audit data. You can even include this script in cron, so it runs and parses your files automatically (an example query and a crontab entry are sketched after this post). But as always, test this script and use it with caution before putting it in a production environment. You could also modify the Perl script to filter out values you did not want or need to store. If you do use this script, or if you have any suggestions or other questions, please leave a comment below.
    Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world's most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn. Tony is the author of Twenty Forty-Four: The League of Patriots. Visit http://2044thebook.com for more information.
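    As mentioned above, the rotation size of the audit log files is controlled from your configuration file. A minimal sketch of the relevant my.cnf lines is shown below; the 1 GB value is only an example, and you should consult the Enterprise Audit documentation for the full installation steps:
      [mysqld]
      plugin-load = audit_log.so
      audit_log_rotate_on_size = 1073741824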
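    Finally, two illustrative snippets for working with the parsed data: a query against the audit_parsed table (the COMMAND_CLASS values in the WHERE clause are just examples) and a crontab entry that runs the script hourly (the script path is a placeholder):
      mysql> SELECT DATE_TIMESTAMP, USER_NAME, COMMAND_CLASS, SQL_TEXT
          -> FROM audit_information.audit_parsed
          -> WHERE COMMAND_CLASS IN ('drop_table', 'delete')
          -> ORDER BY DATE_TIMESTAMP DESC;

      # crontab entry: parse any newly rotated audit logs at the top of every hour
      0 * * * * /usr/bin/perl /usr/local/scripts/audit.pl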