Thursday, 05 May 2016
 
 


Samui SketchUp Community
Newsfeeds
Planet MySQL
Planet MySQL - http://www.planetmysql.org/

  • Planets9s - Sign up for our ClusterControl New Features Webinar
    Welcome to this week’s Planets9s, covering all the latest resources and technologies we create around automation and management of open source database infrastructures.
    Sign up for our ClusterControl New Features Webinar: Join us for this new webinar on Tuesday, May 24th, where we’ll be discussing and demonstrating the latest version of ClusterControl, the one-stop console for your entire database infrastructure. We’ll be introducing some cool new features for MySQL and MongoDB users in particular, as well as walk you through the work we’ve recently done for improved security. Sign up for the webinar.
    Download our new whitepaper, The MySQL Replication Blueprint: We’re excited to introduce the Severalnines Blueprint for MySQL Replication, a new whitepaper which discusses all aspects of a MySQL Replication topology, with the ins and outs of deployment, setting up replication, monitoring, upgrades, performing backups and managing high availability using proxies such as ProxySQL, MaxScale and HAProxy. Download the whitepaper.
    Become a MongoDB DBA, provisioning and deployment: If you are a MySQL DBA, you may ask yourself why you would install MongoDB. That is actually a very good question, as MongoDB and MySQL were locked in a flame war a couple of years ago. But there are many cases where you simply have to, and if you’re in that situation, this new blog series gives you an excellent starting point to get yourself prepared for MongoDB. Read the blog.
    That’s it for this week! Feel free to share these resources with your colleagues and follow us on our social media channels. Have a good end of the week,
    Jean-Jérôme Schmidt, Planets9s Editor, Severalnines AB
    Tags: MongoDB, MySQL, clustercontrol, database management, mysql replication

  • Announcing Galera Cluster 5.5.49 and 5.6.30 with Galera 3.16
    Codership is pleased to announce the release of Galera Cluster 5.5.49 and 5.6.30 with Galera Replication library 3.16, implementing wsrep API version 25. The library is now available as targeted packages and package repositories for a number of Linux distributions, including RHEL, Ubuntu, Debian, Fedora, CentOS, openSUSE and SLES. Obtaining packages from a package repository removes the need to download individual files and simplifies the deployment and upgrade of Galera nodes. This and future releases will be available from http://www.galeracluster.com. The source repositories and bug tracking are now on http://www.github.com/codership. This release incorporates all changes up to MySQL 5.5.49 and 5.6.30.
    New features and notable fixes in Galera replication since the last binary release by Codership (3.15):
      - a counter is now used to track the number of desync operations currently running
      - a new option, gcomm.thread_prio, allows specifying the priority of the gcomm thread
      - a new option, ist.recv_bind, can be used to listen for IST requests on a particular interface
    New features and notable changes in MySQL-wsrep since the last binary release by Codership (5.6.29):
      - DDL statements are no longer recorded in the general log on the slaves (MW-44)
      - a new status variable, wsrep_desync_count, shows the number of desync operations currently in progress; the node syncs back to the cluster once the counter returns to zero
    New features, notable changes and bug fixes in MySQL 5.6.30:
      - mysql client programs now support the --ssl-mode option, which can be used to force the use of encryption
      - replicating a DROP TABLE statement could fail under certain situations (Bug #77684, Bug #21435502, Bug #20797764, Bug #76493)
      - improper host name checking in X509 certificates could permit man-in-the-middle attacks (Bug #22295186, Bug #22738607)
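The two new provider options above are passed to the Galera library through wsrep_provider_options in the server configuration. A minimal my.cnf sketch, assuming illustrative values (the rr:2 scheduling priority and the bind address are made-up examples, not values from the release notes):

```ini
# my.cnf fragment (sketch): enabling the new Galera 3.16 provider options.
# gcomm.thread_prio takes a policy:priority pair for the gcomm thread;
# ist.recv_bind names the address that listens for IST requests.
# Both values below are illustrative assumptions.
[mysqld]
wsrep_provider_options="gcomm.thread_prio=rr:2;ist.recv_bind=192.168.0.10"
```

After a restart, the effective provider options can be inspected with SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options'.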

  • Docker for Mac beta and MySQL - First impressions
    Using Docker for development is a great way of ensuring that what you develop will be the same as what you deploy in production. This is true for almost everything. If you develop on Linux, the above statement holds. If you develop on a different operating system (OS X or Windows), there are several restrictions. I showed one of those issues in a recent article (MySQL and Docker on a Mac: networking oddity). When you want to export a port from a service running in the container, the exported port is not available on your Mac, but in the virtual machine that runs the Docker services. This happens with any application that listens on a port. The second limitation I found affects only MySQL, and it is related to using volumes. The proper way of achieving data persistence with containers is through volumes, i.e. telling the container to run the data directory in a virtual path that refers to some safe place in the host computer. That can't be done on a Mac, because the host computer is a virtual machine, and even though Docker can access a folder on your Mac, the server installation fails for lack of permissions. Both of the above restrictions are lifted if you use the beta release of Docker for Mac and Windows. It's a private beta: you need to apply and wait to be given an operational token, but once you are in, you notice the differences between the beta and the "old" Docker Toolbox:
      - The Docker app is a native app, which you install by copying its icon to the /Applications folder.
      - You don't need VirtualBox or VMware Fusion: it comes with its own lightweight VM based on xhyve.
      - There is no need to run docker-machine start xxx and eval $(docker-machine env xxx); the new app is fully integrated with the OS.
      - Ports exported from a container are available on your Mac.
      - You can keep both the Docker Toolbox and the new Docker app on the same host, provided that you don't run them both in the same terminal session.
    Back to our claim of lifted limitations: let's try a full installation on a Mac as we would do it on Linux.

    $ docker run --name mybox -e MYSQL_ROOT_PASSWORD=secret -d \
        -v ~/docker/mysql/single:/var/lib/mysql \
        -p 5000:3306 mysql/mysql-server
    72ca99918076ff0e5702514311cc706ffcc27f98917f211e98ed187dfda3b47b
    $ ls ~/docker/mysql/single/
    auto.cnf         client-key.pem  ibdata1     mysql.sock.lock     server-cert.pem
    ca-key.pem       ib_buffer_pool  ibtmp1      performance_schema  server-key.pem
    ca.pem           ib_logfile0     mysql       private_key.pem     sys
    client-cert.pem  ib_logfile1     mysql.sock  public_key.pem

    We create a MySQL server container with the internal port 3306 exposed as the external port 5000, and the data directory running in the host directory $HOME/docker/mysql/single. It seems that the data directory was created correctly. Now we use a MySQL client on the Mac to connect to the container, using port 5000 on the local network address. (Note: there is NO database server running on my Mac, only in the container.)

    $ sudo netstat -atn | grep LISTEN | grep 5000
    tcp4       0      0  *.5000                 *.*                    LISTEN
    $ ~/opt/mysql/5.7.12/bin/mysql -h 127.0.0.1 -u root -psecret -P 5000
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 2
    Server version: 5.7.12 MySQL Community Server (GPL)
    Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    mysql> create schema hello_docker;
    Query OK, 1 row affected (0.01 sec)
    mysql> use hello_docker
    Database changed
    mysql> create table i_am_here(id int);
    Query OK, 0 rows affected (0.03 sec)
    mysql> exit
    Bye
    $ ls ~/docker/mysql/single/hello_docker/
    db.opt  i_am_here.frm  i_am_here.ibd

    This is full integration!
Using a Mac client we connected to the container, where we created a table, which then appeared inside the data directory in the Mac host! It's still early to say if this beta is ready for more serious work, but the first impressions are really good!

  • How VividCortex Uses the New Generated Virtual Columns Feature in MySQL
    In an industry as fast-growing and interconnected as database technology’s, it’s exciting to track how innovations in one platform can spark beneficial ripple effects on other, surrounding systems. At VividCortex we frequently find ourselves faced with opportunities to improve our monitoring solutions based on how database technologies (such as MySQL, Redis, PostgreSQL, etc.) develop and integrate new upgrades. When those platforms that we monitor -- already powerful, tried and true -- equip themselves with new features, we work to discover how VividCortex can leverage those features and make our own technology even better. In the case of MySQL 5.7.8’s recent introduction of Generated Virtual Columns, we found the opportunity to use a new feature to make our queries simpler and more elegant, with results that are significantly faster and more efficient in how they use space. (Image via Wikipedia.) Before having access to MySQL’s Generated Virtual Columns, we were already using a table that had an ID column for metric IDs. As an inherent part of our use of that table, when we read from it we are interested in a huge number of those metric IDs. Unfortunately, because those IDs are generated from a hash, reading them wasn’t as simple as just selecting a particular range. Instead, we needed to generate a huge list of IDs and put them in IN (...) clauses when we query. We developed a way to decrease the number of IDs generated and lighten the load of the process; instead of specifying the raw IDs themselves -- often numbering in the thousands -- we found a satisfactory solution in selecting IDs that have a certain hash result (specifically, a modulo result). In other words, instead of using SELECT * FROM table WHERE metric IN (_, _, _, ... [hundreds or thousands more]) we can use SELECT * FROM table WHERE metric % 100 IN (1, 2, 3, 4). In this expression, we’re only interested in metric IDs that have a remainder of 1, 2, 3, or 4 after dividing by 100.
    This specification makes handling our queries much easier… but it also means we have an indexing problem. On one hand, our metric ID is part of our primary key, so specifying the ID directly in the IN clause would be very fast, as we can look up records directly in the primary key. However, with our modulo approach, we’d have to scan through and check each and every ID -- a slow, granular process that causes MySQL to look at far more rows than necessary. This is where Generated Virtual Columns come in. As of MySQL 5.7.8, users have had the ability to create secondary indexes on generated virtual columns -- for us, that means we can add a virtual generated column for our modulo result (metric % 100), which, significantly, uses no space directly. With this power at our fingertips, we updated one of our indexes to use the generated column -- something that was previously not possible. We also updated that index to include another column that we needed for our query, so it became a covering index (read about covering indexes in Baron Schwartz’s post about exploiting MySQL index optimizations). Of special interest to VividCortex: once we started exploring Generated Virtual Columns, we found it especially helpful to look at the differences in EXPLAIN plans on our Query Details page. Rather than manually experimenting with different queries and exhaustively checking latencies and other details, the information was all there, readily accessible in VividCortex. The final result is that our queries got a lot simpler and more efficient, mainly due to MySQL's new virtual generated column support. MySQL became more flexible and powerful, VividCortex was able to leverage that power, and, as a result, when customers use our product they’ll find a more streamlined solution, making minimal demands on the space and time of their resources. If you’d like to see VividCortex in action on your own systems, be sure to request a demo.
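The query rewrite described above can be sketched outside MySQL. A minimal Python sketch, using made-up hash-derived metric IDs (not VividCortex data), shows that the short modulo predicate selects exactly the same rows as an enormous enumerated IN list while staying a fixed size; in MySQL 5.7.8+ the indexable form would be a virtual generated column along the lines of `metric_mod INT AS (metric % 100) VIRTUAL`, which is only paraphrased here:

```python
# Sketch with made-up data: why "metric % 100 IN (1, 2, 3, 4)" can replace
# a huge enumerated IN (...) list when metric IDs come from a hash.
import hashlib

# Hash-derived metric IDs, standing in for the real hashed IDs.
metric_ids = sorted(
    int(hashlib.sha256(f"metric-{i}".encode()).hexdigest(), 16) % 10**9
    for i in range(10_000)
)

# Naive form: enumerate every wanted ID inside the query text.
wanted = {m for m in metric_ids if m % 100 in (1, 2, 3, 4)}
in_clause = f"WHERE metric IN ({', '.join(map(str, sorted(wanted)))})"

# Modulo form: one short predicate, indexable via a generated column.
mod_clause = "WHERE metric % 100 IN (1, 2, 3, 4)"

by_in = [m for m in metric_ids if m in wanted]          # simulate the IN lookup
by_mod = [m for m in metric_ids if m % 100 in (1, 2, 3, 4)]  # simulate the modulo filter

assert by_in == by_mod            # both forms select the same rows
assert len(mod_clause) < len(in_clause)  # but one stays tiny
print(f"{len(by_mod)} matching IDs; IN clause is {len(in_clause)} chars, "
      f"modulo clause is {len(mod_clause)} chars")
```

Roughly 4% of the IDs match the four residues, so the enumerated clause grows with the data while the modulo clause does not.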

  • MySQL High Availability: The Road Ahead for Percona and XtraDB Cluster
    This blog post discusses what is going on in the MySQL high availability market, and what Percona’s plans are for helping customers with high availability solutions. One thing I like to tell people is that you shouldn’t view Percona as a “software” company, but as a “solution” company. Our goal has always been to provide the best solution that meets each customer’s situation, rather than push our own software, regardless of whether it is the best fit or not. As a result, we have customers running all kinds of MySQL “flavors”: MySQL, MariaDB, Percona Server, Amazon RDS and Google Cloud SQL. We’re happy to help customers be successful with the technology of their choice, and advise them on alternatives when we see a better fit. One area where I have been increasingly uneasy is our advanced high availability support with Percona XtraDB Cluster and other Galera-based technologies. In 2011, when we started working on Percona XtraDB Cluster together with Codership, we needed to find a way to arrange investment into the development of Galera technology to bring it to market. So we made a deal, which, while providing needed development resources, also required us to price Percona XtraDB Cluster support as a very expensive add-on option. While this made sense at the time, it also meant few companies could afford XtraDB Cluster support from Percona, especially at large scale. As a few years passed, the Galera technology became the mainstream high-end high availability option. In addition to being available in Percona XtraDB Cluster, it has been included in MariaDB, as well as Galera Cluster for MySQL. Additionally, the alternative technology to solve the same problem – MySQL Group Replication – started to be developed by the MySQL Team at Oracle. 
    With all these changes, it was impossible for us to provide affordable support for Percona XtraDB Cluster due to our previous commercial agreement with Codership, which reflected a very different market situation than the one we now face. As a result, over a year ago we exited our support partnership agreement with Codership and moved the support and development function in-house. These changes have proven to be positive for our customers, allowing us to better focus on their priorities and provide better response times for issues, as these no longer require partner escalation. Today we’re taking the next natural step: we will no longer require customers to purchase Percona XtraDB Cluster support as a separate add-on. Percona will include support for XtraDB Cluster and other Galera-based replication technologies in our Enterprise and Premier support levels, as well as our Percona Care and Managed Services subscriptions. Furthermore, we are going to support Oracle’s MySQL Group Replication technology at no additional cost once it becomes generally available, so our customers have access to the best high availability technology for their deployment. As part of this change, you will also see us focusing on hardening XtraDB Cluster and Galera technology, making it better suited for demanding business workloads, as well as more secure and easier to use. All of our changes will be available as 100% open source solutions and will also be contributed back to the Galera development team to incorporate into their code base if they wish. I believe making the Galera code better is the most appropriate action for us at this point!