Saturday, 05 Sep 2015
 
 

สมุยสเก็ตอัพ คอมมิวนิตี้
Newsfeeds
Planet MySQL
Planet MySQL - http://www.planetmysql.org/

  • MySQL Utilities release-1.6.2 BETA
    The MySQL Utilities Team is pleased to announce the Beta release of MySQL Utilities. This release includes a number of improvements for usability, stability, and a few enhancements. A complete list of all improvements can be found in our release notes.

    Starting with MySQL Utilities 1.6.2, MySQL Fabric is no longer included as part of the MySQL Utilities release. They are now separate MySQL products with separate release cycles. MySQL Utilities source code is now available on GitHub at https://github.com/mysql/mysql-utilities

    How Can I Download MySQL Utilities?
    You can download MySQL Utilities 1.6.2 Beta from the following link using one of the pre-built installation repositories, including a source download. Click on the Development Releases tab: http://dev.mysql.com/downloads/tools/utilities/
    MySQL Utilities is also available on GitHub as a source download at: https://github.com/mysql/mysql-utilities

    Where is the Documentation?
    You can find online documentation for MySQL Utilities version 1.6 at: http://dev.mysql.com/doc/mysql-utilities/1.6/en/index.html
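    Since the announcement points at the GitHub source, the following is a minimal sketch of fetching and installing the utilities from that repository and confirming the install. The use of setup.py and the mysqluc version check are assumptions about the package layout, not steps from the announcement.

        # A minimal sketch, assuming a Unix-like host with git and Python 2.7
        # available (MySQL Utilities is a Python package); setup.py and the
        # mysqluc version check are assumptions, not announcement steps.
        import subprocess

        # Fetch the source tree mentioned in the announcement.
        subprocess.check_call(
            ["git", "clone", "https://github.com/mysql/mysql-utilities.git"])

        # Install the package from the checkout (may require root privileges).
        subprocess.check_call(["python", "setup.py", "install"],
                              cwd="mysql-utilities")

        # mysqluc is the MySQL Utilities console; printing its version
        # confirms the install worked.
        print(subprocess.check_output(["mysqluc", "--version"]))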

  • MySQL Utilities release-1.5.5 GA
    The MySQL Utilities Team is pleased to announce the latest general availability (GA) release of MySQL Utilities. This release includes a number of improvements for usability, stability, and a few enhancements. A complete list of all improvements can be found in our release notes.

    How Can I Download MySQL Utilities?
    You can download MySQL Utilities 1.5.5 from the following link using one of the pre-built installation repositories, including a source download: http://dev.mysql.com/downloads/tools/utilities/

    Where is the Documentation?
    You can find online documentation for MySQL Utilities version 1.5 at: http://dev.mysql.com/doc/mysql-utilities/1.5/en/index.html

  • Linkbench for MySQL 5.7.8 with an IO-bound database
    I wanted to try InnoDB transparent page compression that is new in the MySQL 5.7.8 RC. That didn't work out, so I limited my tests to old-style compression. I compared MyRocks with InnoDB from the Facebook patch for 5.6, upstream 5.6.26 and upstream 5.7.8. My performance summary is:
    - MyRocks loads data faster than InnoDB. This isn't a new result. Non-unique secondary index maintenance doesn't require a read before the write (unlike a B-Tree). This is also helped by less random IO on writes and better compression.
    - MyRocks compression is much better than compressed InnoDB. After 24 hours it used between 56% and 64% of the space compared to the compressed InnoDB configurations.
    - MyRocks QPS degrades over time. This will be fixed real soon.
    - Partitioning improves InnoDB load performance in MySQL 5.6 for compressed and non-compressed tables. This reduces stalls from the per-index mutex used by InnoDB when inserts cause or might cause a page split (pessimistic code path) because there is one mutex per partition. With MySQL 5.7 partitioning doesn't help in the non-compressed table case. There has been work in 5.7 to reduce contention on the per-index mutex and I think it helped. I suspect it is still needed with old-style compression because compressed page splits are more expensive as they include recompression.
    - The Facebook patch for MySQL 5.6 is faster than upstream 5.6 and competitive with upstream 5.7.8. Too bad that patches might not reach upstream.

    Configuration
    My test server has 144G of RAM, 40 HW threads with HT enabled and fast PCIe flash storage. I configured linkbench with loaders=10, requesters=20 and maxid1=1B. This uses 10 clients for the load, 20 clients for the query runs and about 1B rows in the node table after the load. The linkbench clients share the server with mysqld. The my.cnf settings are explained in a previous post. The load was done with the binlog disabled. After the load there were 12 1-hour runs of the query test and I report results for hours 2 and 12. Then mysqld was restarted with the binlog enabled and 12 more 1-hour runs of the query test were done and I report results for hours 14 and 24. Fsync for the binlog was disabled. Fsync for the InnoDB redo log was done by a background thread (innodb_flush_log_at_trx_commit=2). Note that the InnoDB page size was 8kb so I used 2X compression for the link and count tables. The node table is not compressed for InnoDB because it is unlikely to compress by 50%.

    I tested the following binaries:
    - myrocks - RocksDB storage engine for MySQL using the Facebook patch for MySQL 5.6
    - fb56 - InnoDB using the Facebook patch for MySQL 5.6
    - orig56 - upstream MySQL 5.6.26
    - orig57 - upstream MySQL 5.7.8

    The partitioning and compression options are described by the following. For partitioning I use 32 partitions and transactions/queries don't span partitions. All of the DDL is here.
    - p0 - no partitioning for RocksDB
    - p1 - partitioning for RocksDB
    - p0.c0 - no partitioning, no compression for InnoDB
    - p0.c1 - no partitioning, old-style compression for InnoDB
    - p1.c0 - partitioning, no compression for InnoDB
    - p1.c1 - partitioning, old-style compression for InnoDB

    Results
    This lists the database size in GB after the load and query tests at the 2nd, 12th, 14th and 24th hours. I don't have sufficient granularity in my measurement script for databases larger than 1T.
    I am not sure why compression with upstream 5.6 and 5.7 uses more space than with the Facebook patch.

    Update - I removed the results for myrocks, p1 because my measurements were wrong.

    load    2h      12h     14h     24h
    gb      gb      gb      gb      gb      config
    487     493     512     514     523     myrocks, p0
    11XX    11XX    12XX    12XX    13XX    fb56, p0.c0
    666     697     779     787     814     fb56, p0.c1
    11XX    12XX    12XX    13XX    13XX    fb56, p1.c0
    707     745     803     808     826     fb56, p1.c1
    12XX    12XX    13XX    14XX    14XX    orig56, p0.c0
    756     790     879     889     920     orig56, p0.c1
    13XX    13XX    14XX    14XX    14XX    orig56, p1.c0
    803     838     901     907     930     orig56, p1.c1
    12XX    13XX    14XX    14XX    15XX    orig57, p0.c0
    756     796     892     902     931     orig57, p0.c1
    13XX    13XX    14XX    14XX    15XX    orig57, p1.c0
    803     844     844     916     940     orig57, p1.c1

    This lists the insert rate during the load (load ips) and the average query rates for the 2nd, 12th, 14th and 24th hours. Note that the query rate is lousy for p0.c1 immediately after the load. The problem is that the b-tree pages are almost full after the load and then over time many of them get split. There are stalls from page splits with compression and over time the page split rate drops.

    load    2h      12h     14h     24h
    ips     qps     qps     qps     qps     config
    165210  31826   22347   21293   17888   myrocks, p0
    103145  30045   22376   21325   18387   myrocks, p1
    109355  21151   23733   23478   24865   fb56, p0.c0
    74210   8261    13928   14706   18656   fb56, p0.c1
    104900  26953   26029   25161   25479   fb56, p1.c0
    90162   19888   24431   22596   22811   fb56, p1.c1
    105356  16472   16873   16575   17073   orig56, p0.c0
    45966   7638    12492   13178   16516   orig56, p0.c1
    98104   18797   18273   17625   17702   orig56, p1.c0
    66738   17731   19854   19159   19418   orig56, p1.c1
    122454  31009   30260   29905   29751   orig57, p0.c0
    49101   9217    17552   18448   22092   orig57, p0.c1
    114400  28191   26797   25820   25832   orig57, p1.c0
    69746   22028   25204   23882   23983   orig57, p1.c1

    This is the same data as above, but grouped by configuration.

    load    2h      12h     14h     24h
    ips     qps     qps     qps     qps     config
    109355  21151   23733   23478   24865   fb56, p0.c0
    105356  16472   16873   16575   17073   orig56, p0.c0
    122454  31009   30260   29905   29751   orig57, p0.c0

    165210  31826   22347   21293   17888   myrocks, p0
    74210   8261    13928   14706   18656   fb56, p0.c1
    45966   7638    12492   13178   16516   orig56, p0.c1
    49101   9217    17552   18448   22092   orig57, p0.c1

    104900  26953   26029   25161   25479   fb56, p1.c0
    98104   18797   18273   17625   17702   orig56, p1.c0
    114400  28191   26797   25820   25832   orig57, p1.c0

    103145  30045   22376   21325   18387   myrocks, p1
    90162   19888   24431   22596   22811   fb56, p1.c1
    66738   17731   19854   19159   19418   orig56, p1.c1
    69746   22028   25204   23882   23983   orig57, p1.c1

    Graphs
    For people who prefer graphs I include one for the load rates and another for the QPS from the configurations that use partitioning.
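    For readers who want to check the headline claims against the tables above, here is a small Python sketch (not part of the original benchmark tooling) that recomputes the space and QPS comparisons from the 24-hour columns; the values are copied directly from the tables.

        # A quick back-of-the-envelope check, not part of the original
        # benchmark scripts: recompute the headline space and QPS claims
        # from the 24-hour columns of the tables above.

        # Database size after 24 hours, in GB (from the size table).
        size_24h_gb = {
            "myrocks, p0":   523,
            "fb56, p0.c1":   814,
            "orig56, p0.c1": 920,
            "orig57, p0.c1": 931,
        }

        myrocks_gb = size_24h_gb["myrocks, p0"]
        for config, gb in sorted(size_24h_gb.items()):
            if config != "myrocks, p0":
                # Prints 64% for fb56, 57% for orig56 and 56% for orig57,
                # matching the quoted 56%-64% range.
                print("MyRocks uses {:.0%} of the space of {}".format(
                    myrocks_gb / float(gb), config))

        # QPS degradation for MyRocks p0 (from the rates table): hour 2 vs 24.
        qps_h2, qps_h24 = 31826, 17888
        print("MyRocks p0 QPS drops {:.0%} between hour 2 and hour 24".format(
            1 - qps_h24 / float(qps_h2)))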

  • Facebook’s Simon Martin on semi-synchronous replication
    Facebook, with 1.49 billion monthly active users, is one of the world’s top MySQL users. Simon Martin, a production engineer on Facebook’s MySQL Infrastructure team, has been working with MySQL for most of his career, starting from 2 servers built out of spare parts and moving through to one of the largest deployments in the world. Simon will be sharing some of the challenges Facebook has tackled as it rolled out semi-synchronous replication across the company’s different services at Percona Live Amsterdam on Sept. 22. His talk is aptly titled “The highs and lows of semi-synchronous replication.” I sat down, virtually, with Simon the other day. Our conversation is below, but first, as a special reward to my readers, save €20 on your Percona Live registration by entering promo code “BlogInterview” at registration. Please feel free to share this offer!

    Tom: On a scale from 1-10, how important is MySQL to Facebook? And how does Facebook use MySQL?

    Simon: 10. We have a sophisticated in-memory caching layer that will serve most requests, but MySQL is the persistent store for our graph. This means all your profile data, all your friends, likes and comments, and the same for pages, events, places and the rest are stored permanently in MySQL. We rely on MySQL in this role for 3 key features. Firstly, as the final store it needs to not lose data, and InnoDB is well proven in this space. It needs to be highly available; MySQL and InnoDB are both very stable, and we use replication as well to provide redundancy. Finally, even with extensive caching, it needs to be performant, both in latency and throughput; MySQL is both, and we can use replication again to spread the read traffic to slaves in remote regions to help here too.

    Tom: What are some of the advantages of using Semi-Synchronous Replication at Facebook — and what are the challenges for deployments of that size when using it?

    Simon: That’s a big question, I could probably talk for 50 minutes on it! We started looking at Semi-Synchronous as a solution to reduce downtime when a MySQL master, or the host it’s on, crashes. Historically, if you are running a replicated environment and the master crashes, you are faced with a choice. You could promote another slave right away to reduce downtime, but it’s impossible to be sure that any of your slaves got all the transactions off the master. At Facebook we cannot lose people’s data, so we always chose to recover the master and re-connect the slaves before promoting if required. The downside is that recovering InnoDB on a busy host can be slow, and if the host is rebooted it will be even slower, giving us many minutes of downtime.

    Now that we run Semi-Synchronous replication, a master will not commit a transaction until at least one slave has acknowledged receipt of the binary logs for that transaction. With this running, when a master crashes we can be sure our most up-to-date slave has all the data, so once it’s applied by the SQL thread we can promote safely without waiting for crash recovery.

    There are many challenges in this though. Firstly there is performance: we now need a network round trip for each transaction, so we need the acknowledging slaves to be very close. Slaves in a different data hall, let alone a different region, will be too slow. We also need to pay attention to slave availability; previously, not having a slave connected to a master for a short time was not a problem, but now this will cause writes to stop and connections to pile up, so we need to be much more careful about how we manage our replication topology. A target of 99.999% uptime for a service now requires the same SLA on slaves being available and connected locally to acknowledge the commits.

    On top of this, running at “webscale” adds a layer of requirements of its own. Like the rest of our environment, everything needs to be automated; anything that requires a human is not going to scale. So our automation needs to respond to any failure and heal the system without intervention in any circumstance. An edge case that has even a tiny chance of occurring on a given day needs to be handled automatically, to keep our SLA and to stop our engineers constantly having to fix things.

    Tom: What are you looking forward to the most at this year’s conference (besides your own talk)?

    Simon: I always enjoy the keynotes; they don’t all seem to be announced yet, but it’s a great way to get a state-of-the-community update. I’ll certainly stop by “Binlog Servers at Booking.com,” it sounds like they might be doing the same kind of things we are for Semi-Synchronous replication, so it’ll be great to compare ideas. I’ll also be looking at the talks on MySQL 5.7 to get the scoop on what cool new stuff is coming down the pipeline!

    The post Facebook’s Simon Martin on semi-synchronous replication appeared first on MySQL Performance Blog.
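    The mechanism Simon describes (a master refusing to commit until a slave acknowledges receipt of the binary log) is the stock semi-synchronous replication feature of MySQL 5.6. The sketch below is a minimal illustration of enabling it with MySQL Connector/Python; it is not Facebook’s tooling, and the host names, credentials and timeout value are assumptions made for the example.

        # A minimal sketch (not Facebook's production tooling) of enabling
        # MySQL 5.6 semi-synchronous replication. Host names, credentials
        # and the timeout are illustrative assumptions; the .so plugin names
        # assume a Linux build.
        import mysql.connector

        MASTER = {"host": "master.example.com", "user": "admin", "password": "secret"}
        SLAVE  = {"host": "slave.example.com",  "user": "admin", "password": "secret"}

        def run(conn_args, statements):
            """Open a connection and execute each administrative statement."""
            conn = mysql.connector.connect(**conn_args)
            try:
                cur = conn.cursor()
                for stmt in statements:
                    cur.execute(stmt)
                cur.close()
            finally:
                conn.close()

        # Master: load the plugin, enable semi-sync, and only fall back to
        # asynchronous replication after waiting 10s for an acknowledgement.
        run(MASTER, [
            "INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so'",
            "SET GLOBAL rpl_semi_sync_master_enabled = 1",
            "SET GLOBAL rpl_semi_sync_master_timeout = 10000",
        ])

        # Slave: load the slave-side plugin, enable it, and restart the IO
        # thread so the setting takes effect.
        run(SLAVE, [
            "INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so'",
            "SET GLOBAL rpl_semi_sync_slave_enabled = 1",
            "STOP SLAVE IO_THREAD",
            "START SLAVE IO_THREAD",
        ])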

  • Log Buffer #439: A Carnival of the Vanities for DBAs
    This Log Buffer Edition covers some nifty blog posts from Oracle, SQL Server and MySQL.

    Oracle:
    - Real Application Testing report On Premise vs. Oracle Public Cloud
    - Foreign Keys and Library Cache Locks
    - IN/EXISTS bugs by Jonathan Lewis
    - Creating a trace file from EM12c is quite easy and doesn’t require a DBA offering up the world to allow a developer or support person to perform this action
    - Oracle Cloud: First Impressions

    SQL Server:
    - Hidden Tricks To SQL Server Table Cleanup
    - An Introduction to the OpenPOWER Foundation
    - Configuring Service Broker Architecture
    - Understand the Limitations of SQL Server Dynamic Data Masking
    - Creating Dashboards for Mobile Devices with Datazen – Part 3

    MySQL:
    - Second day with InnoDB transparent page compression
    - Amazon RDS Migration Tool
    - A new client utility called mysqlpump that performs logical backups, producing a set of SQL statements that can be run to reproduce the original schema objects and table data (a sample invocation follows this list)
    - How MySQL-Sandbox is tested, and tests MySQL in the process
    - Orchestrator 1.4.340: GTID, binlog servers, Smart Mode, fail-overs and lots of goodies

    Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.
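    For the mysqlpump entry above, this is a hedged example of what an invocation might look like; the credentials, host and output path are illustrative assumptions, and the exact options should be checked against the MySQL 5.7 documentation.

        # Assumed invocation (not from the Log Buffer post) of mysqlpump:
        # write the logical backup SQL to a file. Credentials, host and the
        # output path are illustrative.
        import subprocess

        with open("backup.sql", "wb") as out:
            subprocess.check_call(
                ["mysqlpump", "--user=root", "--password=secret", "--host=127.0.0.1"],
                stdout=out)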