Newsfeeds
Planet MySQL - http://www.planetmysql.org/

  • InnoDB checkpoint strikes back
    In my recent benchmarks for MongoDB, we can see that two engines, WiredTiger and TokuMX, suffer periodic drops in throughput which are clearly related to the checkpoint interval - and therefore I attribute them to checkpoint activity.

    The funny thing is that I thought we had solved checkpointing issues in InnoDB once and for all. There are a bunch of posts on this issue in InnoDB, dated some 4 years ago. We did a lot of research back then while working on a fix for Percona Server:
    https://www.percona.com/blog/2011/04/04/innodb-flushing-theory-and-solutions/
    https://www.percona.com/blog/2011/03/31/innodb-flushing-a-lot-of-memory-and-slow-disk/
    https://www.percona.com/blog/2011/01/03/mysql-5-5-8-in-search-of-stability/
    https://www.percona.com/blog/2010/12/20/mysql-5-5-8-and-percona-server-being-adaptive/

    But, like Baron said, "History Repeats"... and it seems it also repeats in technical issues.

    So, let's take a look at what checkpointing is, and why it is a problem for storage engines. As you may know, a transactional engine writes transaction logs (you may see the names "redo logs", "write-ahead logs", WAL, etc.) to be able to perform crash recovery after a database crash or server power outage. To keep the recovery time bounded (we all expect the database to start quickly), the engine has to limit how many changes accumulate in the logs. For this, a transactional engine performs a "checkpoint", which basically synchronizes changes in memory with the corresponding changes in the logs, so that old log records can be deleted. Often this results in writing changed database pages from memory to permanent storage.

    InnoDB takes the approach of limiting the total size of the log files (equal to innodb-log-file-size * innodb-log-files-in-group), which I call "size-limited checkpointing". Both TokuDB and WiredTiger limit changes by time period (by default 60 sec), which means the log files may grow without limit within a given time interval (I call this "time-limited checkpointing").

    Another difference is that InnoDB takes a "fuzzy checkpointing" approach (which was not really "fuzzy" until we fixed it): rather than waiting until the size limit is reached, it checkpoints all the time, and the closer it gets to the log size limit, the more intensively it flushes. This achieves a more or less smooth throughput, without significant drops.

    Unlike InnoDB, both TokuDB (of this I am sure) and WiredTiger (here I speculate, as I have not looked into WiredTiger internals) wait until the last moment and perform the checkpoint strictly at the prescribed interval. If the database happens to hold many changes in memory, this results in performance stalls. The effect is similar to hitting a wall at full speed: user queries get locked until the engine has written out all the in-memory changes it has to write.

    Interestingly enough, RocksDB, because of its different architecture (I may write about it in the future, but for now I will point to the RocksDB Wiki), does not have this problem with checkpoints (it does have its own background activity, such as level compactions and tombstone maintenance, but that is a different topic).

    I do not know how WiredTiger is going to approach this checkpoint issue, but we are looking to improve TokuDB to make it less painful for user queries - and eventually to move to a "fuzzy checkpointing" model.

    The post InnoDB checkpoint strikes back appeared first on MySQL Performance Blog.
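    A quick way to watch this size-limited checkpointing on a live InnoDB instance (a minimal sketch; these are standard server variables and standard SHOW ENGINE INNODB STATUS output, not commands from the post itself):

        mysql> SELECT @@innodb_log_file_size * @@innodb_log_files_in_group
                      AS total_redo_capacity_bytes;
        mysql> SHOW ENGINE INNODB STATUS\G

    In the LOG section of the status output, the difference between "Log sequence number" and "Last checkpoint at" is the current checkpoint age; the closer that gap gets to the total redo capacity, the more aggressively InnoDB must flush to avoid the stall described above.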

  • Deploy Asynchronous Replication Slave to MariaDB Galera Cluster 10.x using ClusterControl
    Combining Galera and asynchronous replication in the same MariaDB setup, aka Hybrid Replication, can be useful - e.g. as a live backup node in a remote datacenter or as a reporting/analytics server. We already blogged about this setup for Codership/Galera and Percona XtraDB Cluster users, but master failover did not work for MariaDB because of its different GTID approach. In this post, we will show you how to deploy an asynchronous replication slave to MariaDB Galera Cluster 10.x (with master failover!), using GTID with ClusterControl v1.2.10.

    Preparing the Master

    First and foremost, you must ensure that the master and slave nodes are running MariaDB Galera 10.0.2 or later. A MariaDB replication slave requires at least one master with GTID among the Galera nodes; however, we would recommend configuring all the MariaDB Galera nodes as masters. GTID, which is automatically enabled in MariaDB, will be used for master failover. The following must be true for the masters:
    - At least one master exists among the Galera nodes
    - All masters are configured with the same domain ID
    - log_slave_updates is enabled
    - All masters' MariaDB port is accessible by ClusterControl and the slaves
    - All masters run MariaDB version 10.0.2 or later

    To configure a Galera node as master, change the MariaDB configuration file for that node as follows:

        gtid_domain_id=<must be the same across all MariaDB servers participating in replication>
        server_id=<must be unique>
        binlog_format=ROW
        log_slave_updates=1
        log_bin=binlog
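    Once a node has been reconfigured and restarted, a quick sanity check confirms the master prerequisites are in place (a minimal sketch added to this digest; all of these are standard MariaDB variables):

        MariaDB> SHOW VARIABLES WHERE Variable_name IN
                 ('gtid_domain_id', 'server_id', 'binlog_format',
                  'log_slave_updates', 'log_bin');

    gtid_domain_id should be identical on every node, server_id unique per node, binlog_format ROW, and log_slave_updates and log_bin both enabled.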
    Preparing the Slave

    For the slave, you will need a separate host or VM, with or without MariaDB installed. If you do not have MariaDB installed and choose ClusterControl to install MariaDB on the slave, ClusterControl will perform the necessary actions to prepare the slave: configure the root password (based on monitored_mysql_root_password), create the slave user (based on repl_user, repl_password), configure MariaDB, start the server and finally start replication. In short, we must perform the following actions beforehand:
    - The slave node must be accessible using passwordless SSH from the ClusterControl server
    - The MariaDB port (default 3306) and netcat port 9999 on the slave must be open for connections
    - You must configure the following options in the ClusterControl configuration file for the respective cluster ID under /etc/cmon.cnf or /etc/cmon.d/cmon_<cluster ID>.cnf:
      repl_user=<the replication user>
      repl_password=<password for replication user>
      monitored_mysql_root_password=<the MySQL root password of all nodes, including the slave>
    - The slave configuration template file must be configured beforehand, and must have at least the following variables defined in the MariaDB configuration template: gtid_domain_id (the value must be the same across all nodes participating in the replication), server_id, basedir, datadir

    To prepare the MariaDB configuration file for the slave, go to ClusterControl > Manage > Configurations > Template Configuration files > edit my.cnf.slave and add the following lines:

        [mysqld]
        bind-address=0.0.0.0
        gtid_domain_id=1
        log_bin=binlog
        log_slave_updates=1
        expire_logs_days=7
        server_id=1001
        binlog_format=ROW
        slave_net_timeout=60
        basedir=/usr
        datadir=/var/lib/mysql

    Attaching a Slave via ClusterControl

    Let's now add a MariaDB slave using ClusterControl. Our example cluster is running MariaDB 10.1.2 with ClusterControl v1.2.10. Our deployment will look like this: one MariaDB Galera Cluster (mariadb1, mariadb2, mariadb3) plus one asynchronous replication slave (slave1).

    1. Configure the Galera nodes as masters. Go to ClusterControl > Manage > Configurations, click Edit/View on each configuration file and append the following lines under the mysqld directive:

        mariadb1:
        gtid_domain_id=1
        server_id=101
        binlog_format=ROW
        log_slave_updates=1
        log_bin=binlog
        expire_logs_days=7

        mariadb2:
        gtid_domain_id=1
        server_id=102
        binlog_format=ROW
        log_slave_updates=1
        log_bin=binlog
        expire_logs_days=7

        mariadb3:
        gtid_domain_id=1
        server_id=103
        binlog_format=ROW
        log_slave_updates=1
        log_bin=binlog
        expire_logs_days=7

    2. Perform a rolling restart from ClusterControl > Manage > Upgrades > Rolling Restart. Optionally, you can restart one node at a time under ClusterControl > Nodes > select the corresponding node > Shutdown > Execute, and then start it again.

    3. On the ClusterControl node, set up passwordless SSH to the slave node:

        $ ssh-copy-id -i ~/.ssh/id_rsa 10.0.0.128

    4. Then, ensure the following lines exist in the corresponding cmon.cnf or cmon_<cluster ID>.cnf:

        repl_user=slave
        repl_password=slavepassword123
        monitored_mysql_root_password=myr00tP4ssword

    Restart the CMON daemon to apply the changes:

        $ service cmon restart

    5. Go to ClusterControl > Manage > Configurations > Create New Template (or Edit/View an existing template), and then add the following lines:

        [mysqld]
        bind-address=0.0.0.0
        gtid_domain_id=1
        log_bin=binlog
        log_slave_updates=1
        expire_logs_days=7
        server_id=1001
        binlog_format=ROW
        slave_net_timeout=60
        basedir=/usr
        datadir=/var/lib/mysql

    6. Now we are ready to add the slave. Go to ClusterControl > Cluster Actions > Add Replication Slave. Choose a master and the configuration file, then click Proceed. A job will be triggered and you can monitor the progress at ClusterControl > Logs > Jobs. You will notice that ClusterControl uses non-GTID replication in the job details; you can simply ignore this, as we will set up MariaDB GTID replication manually later. Once the add job is completed, you should see the master and slave nodes in the grids. At this point, the slave is replicating from the designated master the old way (using binlog file/position).

    Replication using MariaDB GTID

    Ensure the slave has caught up with the master host; the lag value should be 0. Then, stop the slave on slave1:

        MariaDB> STOP SLAVE;

    Verify the slave status and ensure Slave_IO_Running and Slave_SQL_Running return No. Retrieve the latest values of Master_Log_File and Read_Master_Log_Pos:

        MariaDB> SHOW SLAVE STATUS\G
        *************************** 1. row ***************************
                       Slave_IO_State:
                          Master_Host: 10.0.0.131
                          Master_User: slave
                          Master_Port: 3306
                        Connect_Retry: 60
                      Master_Log_File: binlog.000022
                  Read_Master_Log_Pos: 55111710
                       Relay_Log_File: ip-10-0-0-128-relay-bin.000002
                        Relay_Log_Pos: 28699373
                Relay_Master_Log_File: binlog.000022
                     Slave_IO_Running: No
                    Slave_SQL_Running: No

    Then, on the master (mariadb1), run the following statement to retrieve the GTID:

        MariaDB> SELECT binlog_gtid_pos('binlog.000022',55111710);
        +-------------------------------------------+
        | binlog_gtid_pos('binlog.000022',55111710) |
        +-------------------------------------------+
        | 1-103-613991                              |
        +-------------------------------------------+

    The result of the function call is the current GTID, which corresponds to that binary log file and position on the master.
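    Before applying it, you can cross-check this value against the master's current GTID position (a minimal sketch; gtid_binlog_pos is a standard MariaDB 10 variable, and this check is an addition to the original procedure):

        MariaDB> SELECT @@gtid_binlog_pos;

    The GTID returned by binlog_gtid_pos() should be at or behind this value, since the master has kept writing after the slave stopped.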
    Set the GTID slave position on the slave node by running the following statements on slave1:

        MariaDB> STOP SLAVE;
        MariaDB> SET GLOBAL gtid_slave_pos = '1-103-613991';
        MariaDB> CHANGE MASTER TO master_use_gtid=slave_pos;
        MariaDB> START SLAVE;

    The slave will start catching up with the master using GTID, as you can verify with the SHOW SLAVE STATUS command:

        ...
                Seconds_Behind_Master: 384
        Master_SSL_Verify_Server_Cert: No
                        Last_IO_Errno: 0
                        Last_IO_Error:
                       Last_SQL_Errno: 0
                       Last_SQL_Error:
          Replicate_Ignore_Server_Ids:
                     Master_Server_Id: 101
                       Master_SSL_Crl:
                   Master_SSL_Crlpath:
                           Using_Gtid: Slave_Pos
                          Gtid_IO_Pos: 1-103-626963

    Master Failover and Recovery

    ClusterControl doesn't support MariaDB slave failover with GTID via the ClusterControl UI in the current version (v1.2.10); this will be supported in v1.2.11. So, if you are using 1.2.10 or earlier, failover has to be done manually whenever the designated master fails. Initially, when you added a replication slave via ClusterControl, it only added the slave user on the designated master (mariadb1). To ensure failover works, we have to explicitly add the slave user on mariadb2 and mariadb3. Run the following commands on mariadb2 or mariadb3 once; they will replicate to all Galera nodes:

        MariaDB> GRANT REPLICATION SLAVE ON *.* TO slave@'10.0.0.128' IDENTIFIED BY 'slavepassword123';
        MariaDB> FLUSH PRIVILEGES;

    If mariadb1 fails, to switch to another master you just need to run the following statements on slave1:

        MariaDB> STOP SLAVE;
        MariaDB> CHANGE MASTER TO MASTER_HOST='10.0.0.132';
        MariaDB> START SLAVE;

    The slave will resume from the last Gtid_IO_Pos. Check the slave status to verify that everything is working:

        MariaDB [(none)]> show slave status\G
        *************************** 1. row ***************************
                       Slave_IO_State: Waiting for master to send event
                          Master_Host: 10.0.0.132
                          Master_User: slave
                          Master_Port: 3306
                        Connect_Retry: 60
                      Master_Log_File: binlog.000020
                  Read_Master_Log_Pos: 140875476
                       Relay_Log_File: ip-10-0-0-128-relay-bin.000002
                        Relay_Log_Pos: 1897915
                Relay_Master_Log_File: binlog.000020
                     Slave_IO_Running: Yes
                    Slave_SQL_Running: Yes
                      Replicate_Do_DB:
                  Replicate_Ignore_DB:
                   Replicate_Do_Table:
               Replicate_Ignore_Table:
              Replicate_Wild_Do_Table:
          Replicate_Wild_Ignore_Table:
                           Last_Errno: 0
                           Last_Error:
                         Skip_Counter: 0
                  Exec_Master_Log_Pos: 130199239
                      Relay_Log_Space: 12574457
                      Until_Condition: None
                       Until_Log_File:
                        Until_Log_Pos: 0
                   Master_SSL_Allowed: No
                   Master_SSL_CA_File:
                   Master_SSL_CA_Path:
                      Master_SSL_Cert:
                    Master_SSL_Cipher:
                       Master_SSL_Key:
                Seconds_Behind_Master: 133
        Master_SSL_Verify_Server_Cert: No
                        Last_IO_Errno: 0
                        Last_IO_Error:
                       Last_SQL_Errno: 0
                       Last_SQL_Error:
          Replicate_Ignore_Server_Ids:
                     Master_Server_Id: 102
                       Master_SSL_Crl:
                   Master_SSL_Crlpath:
                           Using_Gtid: Slave_Pos
                          Gtid_IO_Pos: 1-103-675356

    That's it! Please give it a try and let us know how these instructions worked for you.

    Blog category: Devops. Tags: automation, galera, hybrid replication, MariaDB, mariadb galera cluster, MySQL, replication, slave

  • How to hire for Infrastructure Operations Engineers
    I have been working for a very large Australian website for over six years, and during this period I have been fortunate enough to hire many Infrastructure Operations Engineers who now work for that company. I want to detail the evolution of the hiring process and what I have driven it to over the last six years.

    How I was hired

    This is the interview process I went through at my current company:
    - A technical and cultural pre-screening from a recruiter
    - A short phone interview with a small set of ad hoc questions around technical skill set, e.g. "How does DNS work?"
    - An hour-long face-to-face interview with the hiring manager and another senior engineer, testing both technical capability and cultural fit
    - Another hour-long face-to-face interview with an HR representative, testing cultural fit
    - Lastly, a reference check over the phone to confirm technical capability and cultural fit

    This sounds pretty standard. Let's dive into some subjective positives and negatives of this process.

    What did work
    - The hiring manager, who was a technical expert in the field, was able to confirm my technical capability with a high degree of confidence

    What didn't work
    - How I might work with my peers wasn't considered in the hiring equation at all
    - The questions from the hiring manager were completely custom, specific to my resume and my responses during the interview. This requires the same manager to interview all the candidates in order to compare them, which creates a hiring bottleneck

    First evolution: 2009 - 2011

    The first evolution that I was a part of consisted of standardising our pre-screening questions, to enable a larger set of people to pre-screen candidates. Further, we also asked our candidates to self-rate their technical abilities in certain technologies, and then tested that rating during the technical interview: "On a scale of 1 to 5, how would you rate yourself in the administration of MySQL?"

    The interview process had three phases:
    - Recruiter pre-screen
    - Technical phone pre-screen: "On a scale of 1 to 5, how would you rate yourself in the administration of MySQL?" and, depending on the response: "What does the binary log do in MySQL?"
    - A two-stage interview consisting of:
      - A technical interview: "Why, from time to time, does replication fail?"
      - A cultural interview: "What is more important to you: training and development, or autonomy?"

    What did work
    - Standardising the technical pre-screening enabled us to scale our hiring process
    - Self-rating, and testing that self-rating during the technical interview, enabled us to filter out any technical embellishment in the candidate's CV. This really enabled us to assert the technical level of our candidates

    What didn't work
    - Our interview process did not have a practical component, which introduced a level of subjectiveness into assessing one's technical skill set
    - There was minimal focus on the cultural aspect of hiring
    - The hiring process was long, and the candidate had limited feedback on where they were in the process

    Second evolution: 2012 - 2014

    I drove the second evolution, and it was largely based around the changes that were made in the developer hiring process at my company.
    The developer process consisted of:
    - Recruiter pre-screen
    - A cultural pre-screen performed by someone at the company - generally over coffee
    - An offline coding test by the candidate - the famous 'robot test'
    - A review of that coding test by subject matter experts in the company
    - A three-stage interview consisting of:
      - Refactoring the code submitted
      - A technical interview
      - A cultural interview

    Further, the interview was largely run by the individuals in the team that the candidate would join. I was really impressed with the practical components and the cultural focus that this process enjoyed. Overlaying the 'DevOps' and 'configuration as code' movements in the Infrastructure Operations space, and the gaps in the company at the time, myself and other leaders in the company deduced that we wanted a role that consisted of:
    - Knowing configuration as code
    - Someone who would work directly with developers to coach them on Infrastructure Operations
    - Someone who is able to troubleshoot technical problems at a macro and micro level

    I didn't think we were getting much value out of the 'self-ranking', so I decided to drop it in favour of an alternate standard set of pre-screen questions. Thus I changed the hiring process to:
    - Recruiter pre-screen
    - A technical and cultural phone pre-screen by someone at the company
    - An offline configuration-as-code test (https://github.com/tnh/simple-sinatra-app)
    - A two-stage interview consisting of:
      - A technical interview
      - A cultural interview

    We also involved a lot more people in the pre-screening and final stages of the interview process, and ran the final stage as two one-hour interviews back-to-back. This enabled quicker feedback to the candidate and enabled us to 'batch process' the candidate.

    What did work
    - The configuration-as-code test helped filter for candidates who had a technical understanding of where the industry is heading
    - I was able to obtain great insight into the candidate from the way they solved the puzzle. Hint: I actually care more about how you submit it than how you complete it
    - The test itself provided lo-fi filtering, so we could discount candidates quicker
    - Involving the members of the team in the interview process enabled the candidate to interview the team as much as the team interviewed the candidate
    - Empowering the team to own the hiring process generated a 'family-like' feel in the team. Wait, I actually have to work with this person!

    What didn't work
    - Practical troubleshooting wasn't a part of the interview process - how would the candidate handle the stress of responding to a complete outage?
    - Having the team involved in the interview process introduced an element of 'group think', and in the extreme, the team could be 'socially hacked' in the candidate review

    Over that two-year period, I performed a few experiments on candidates by altering our final stage to a three-stage interview consisting of:
    - A break-fix interview
    - A technical interview
    - A cultural interview

    The break-fix was largely based around the RHCE exam: the candidate is given a Linux box running a webserver that is unable to boot. Fix the box, get it to boot and get the webserver running. While I think we better confirmed the technical skill set, there were some negative aspects, which is why I abandoned it:
    - In an age of disposable infrastructure, how often would you need to fix GRUB?
    - Are we testing the wrong thing? "The right way" of fixing a problem is configuration-as-code and re-deploying - are we sending the wrong message with this 'hack-and-slash' break-fix?

    Further, while 70% of the role of an Infrastructure Operations Engineer is consistent across the whole company, there are some areas where the candidate is fully embedded in a development team. To drive the team's empowerment in the interview process, we made some minor refinements: a three-stage interview consisting of:
    - A technical interview from Infrastructure Operations experts
    - A technical interview from developer leads
    - A cultural interview

    We were largely testing their ability to work with developers, and whether they would be able to coach others in the team, from different backgrounds, on the importance of Infrastructure Operations. This really enabled the candidate to get a first-hand view of the people they would be working with, if they were successful and still wanted the job.

    Third evolution: 2015 onwards

    My peer and I really wanted to turn the dial on practical troubleshooting in our interview process. The practical test focuses on one's ability to troubleshoot a broken website in a stressful situation, and on what tools they reach for in the troubleshooting process. I get really excited when I conduct this test, as I'm fascinated with how people solve this puzzle in new and unexpected ways. While it still has the same drawbacks as the break-fix experiment, the value in understanding the thought process far outweighs implementing a fix that isn't 'the right way'.

    As a nice byproduct, this test is now used as a training tool for our developers. I was like a kid in a candy store conducting this test with my developer peers, watching how they tackled the problem.

    Further to this, I have adjusted my pre-screening interview to a face-to-face "coffee catchup" instead of a technical and cultural Q&A on the phone. This pre-screen interview has a higher focus on culture than on technical skills, and is a direct response to candidates in late 2014 who were technically brilliant but culturally shocking.

    Thus the current recruitment process is:
    - Recruiter pre-screen, if the candidate is coming from an external recruiter
    - An internal HR recruiter pre-screen
    - An offline coding test (https://github.com/tnh/simple-sinatra-app) to assess technical skill set
    - A review of the coding test by subject matter experts in the company
    - A cultural and technical 'coffee catchup' assessing cultural fit. Sometimes the coding test and coffee catchup are swapped around - I'm still unsure which provides better filtering
    - A three-stage interview consisting of:
      - A troubleshooting test
      - A technical interview
      - A cultural interview

    What's working
    - I'm seeing interesting data from the new troubleshooting test. I like having the new data, though I'm not sure how much weight the test should carry in the equation. Thank you to the recent hires for being the guinea pigs!
    - A large part of the organisation participates in the interview (~10 people per candidate)

    What's not working
    - I'm not convinced that 'coffee catchups' are a 'good' - what social signals do we send by sitting down for a 'coffee catchup' to drill someone about their work history and professional desires and aspirations? Is there a better way to do this?
      I'm not sure.
    - I'm not sure if we have the balance right between pushing the autonomy and accountability of hiring the candidate to the team. Is the team hiring someone they like vs hiring someone they need?

    Where to now

    I'm proud that I have been able to refine the interview process over the last 6 years, and I have no doubt that it will continue to be refined. It's critical to reassess the interview process for candidates as the role, and the responsibilities of the role, change over time - don't stand still!

    This is what I look for in a candidate

    This is what I look at with a candidate (if the role has a web operations element):

    Troubleshooting
    - What is in your toolbox to troubleshoot? You may say that you look at logs, but do you actually look at logs and understand what they say?
    - Do you know what goes into a web request? Can you divide and conquer to narrow down the area of the problem?
    - How do you react in a highly stressful situation?

    Technical
    - Do you know when to use "the right technology"?
    - How do you turn the dial on your productivity?
    - Can you grow others? Can others grow you?

    Cultural
    - Do you complement the team? Do you add a new dimension to the team?
    - Will you work well with the team?
    - Will you stand up and fight for something if it's appropriate to do so? Do you know when it is appropriate?
    - Will you accept feedback from others? Will you give feedback?

    I have found myself increasingly interested in analysing the people conducting the interview, looking at:
    - Do you have self-awareness of how you and your peers respond to the questions you ask, and of how your social cues could prime subsequent responses from the candidate?
    - Are you looking for 'the right candidate', or a candidate who is similar to you?
    - Are you conscious of how this candidate might fill the gap that we are hiring for?

    What you should look for

    During the interview process, the candidate should be interviewing the company and assessing whether they would want to work there. Here is what I look for:
    - Will you be able to grow yourself in the organisation?
    - Does the organisation subscribe to similar beliefs to yours?
    - How much politics is in the company? Will that prohibit 'getting stuff done'?
    - Will you enjoy going to work?

  • on ORDER BY optimization | Domas Mituzas
    http://dom.as/2015/07/30/on-order-by-optimization/

    An insightful exploration by Domas (Facebook) of how some of the MySQL optimiser's decision logic is sometimes naive, in this case regarding ORDER BY optimisation. Quite often, "simple" logic can work better than complex logic, as chasing all the corner cases can just make things worse - but sometimes, logic can be too simple. "Everything must be made as simple as possible, but no simpler." - Albert Einstein / Roger Sessions
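    To make the trade-off concrete, a hypothetical illustration (the posts table and its indexes are invented for this note, not taken from Domas' post): with one index on user_id and another on created_at, the optimiser must choose between reading the user's rows via the user_id index and then filesorting them, or scanning the created_at index in order and filtering as it goes, hoping to hit 10 matches early. Naive cost logic can pick badly when the data distribution is skewed.

        EXPLAIN SELECT *
        FROM posts
        WHERE user_id = 42
        ORDER BY created_at DESC
        LIMIT 10;

    The EXPLAIN output (the chosen key, the rows estimate, and whether "Using filesort" appears) shows which of the two plans the optimiser settled on.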

  • Log Buffer #434: A Carnival of the Vanities for DBAs
    This Log Buffer Edition throws the spotlight on some of the salient blog posts from Oracle, SQL Server and MySQL.

    Oracle:
    - STANDARD date considerations in Oracle SQL and PL/SQL
    - My good friend, Oracle icon Karen Morton, passed away.
    - Multiple invisible indexes on the same column in #Oracle 12c
    - Little things worth knowing: Data Guard Broker Setup changes in 12c
    - Things that are there but you cannot use

    SQL Server:
    - Dynamic Grouping in SSRS Reports
    - SQL 2014 Clustered Columnstore index rebuild and maintenance considerations
    - SQL Server 2016 CTP2
    - Azure SQL Database Security Features
    - Visualize the timeline of your SQL jobs using Google graph and email

    MySQL:
    - Shinguz: Max_used_connections per user/account
    - Generally in MySQL we send queries massaged to a point where the optimizer doesn't have to think about anything.
    - Replication is the process that transfers data from an active master to a slave server, which reproduces the data stream to achieve, as best as possible, a faithful copy of the data on the master.
    - Unknown column 'smth' in 'field list' -> Oldie but goodie error
    - Why base64-output=DECODE-ROWS does not print row events in MySQL binary logs

    Learn more about Pythian's expertise in Oracle, SQL Server & MySQL.

    The post Log Buffer #434: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.