Newsfeeds
Planet MySQL
Planet MySQL - http://www.planetmysql.org/

  • Shinguz: Rename MySQL Partition
    Before I forget it and have to search for it again, here is a short note about how to rename a MySQL partition.

    My dream:

       ALTER TABLE history RENAME PARTITION p2015_kw10 INTO p2015_kw09;

    In reality:

       ALTER TABLE history REORGANIZE PARTITION p2015_kw10 INTO (
         PARTITION p2015_kw09 VALUES LESS THAN (UNIX_TIMESTAMP('2015-03-02 00:00:00'))
       );

    Caution: REORGANIZE PARTITION causes a full copy of the whole partition!

    Hint: I assume it would be very easy for MySQL or MariaDB to make this DDL command an in-place operation...

    MySQL Partitioning was introduced in MySQL 5.1.
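    A quick way to check the resulting partition layout is to query information_schema (a minimal sketch, not from the original note; the table name follows the example above):

       -- List the partitions of the history table after the REORGANIZE
       SELECT PARTITION_NAME, PARTITION_DESCRIPTION, TABLE_ROWS
         FROM information_schema.PARTITIONS
        WHERE TABLE_SCHEMA = DATABASE()
          AND TABLE_NAME = 'history';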

  • Changes in the MariaDB Foundation board and management
    Year 2014 was an important year for the MariaDB project with the release of MariaDB 10.0. Adoption continued to grow, both organically and through MariaDB being included in the Red Hat Enterprise Linux and SUSE Linux Enterprise Server distributions as the default database option. Ubuntu has offered MariaDB as an option since release 14.04. MariaDB also became available in many cloud services, e.g. DBaaS in the Rackspace Cloud and Cloud Foundry. Those are just a few highlights; there is of course a lot of other news from last year which has already been covered earlier. If you're interested in what the MariaDB Foundation worked on last year, Monty wrote a wrap-up on it, which you can find here. In this article I want to focus on a few recent changes related to the MariaDB Foundation. On Monday this week, the MariaDB Foundation had its first board meeting of the year. In addition to the general items on a board meeting agenda, there were the following topics: new memberships, approval of a new board member, and approval of a new CEO. A Norwegian company called Visma has become a member of the MariaDB Foundation. Visma is a 6,000-person software company focused on finance and accounting software and services. As part of Visma becoming a member, it was also suggested that Visma get a representative on the board. It would be hard to find a more suitable person for the job than Espen Håkonsen, CIO of Visma and Managing Director of Visma IT & Communications. The MariaDB Foundation has been looking for a new CEO since Simon Phipps left last fall. Late last year discussions started with Otto Kekäläinen, and he became interested. Otto is an entrepreneur in the open source business in Finland. He is the CEO of the open source consulting company Seravo. In addition, he holds several important open source roles in Finland and Europe: Otto is Finland's representative in the Free Software Foundation Europe and sits in the steering group of COSS, the Finnish Centre for Open Systems and Solutions, the umbrella open source association in Finland. Otto will also serve as the Secretary of the board. These changes in the board and management of the MariaDB Foundation were approved in the board meeting. I'm delighted to have Espen and Otto join the MariaDB Foundation; they bring a lot of new experience and ideas to the foundation. Welcome! Chairman of the board, Rasmus Johansson

  • mysqlfailover – Installation and Operations tutorial
    Configuration

    1) Download the utilities. The Python connector is a prerequisite and needs to be installed along with the MySQL Utilities RPM:

       wget http://dev.mysql.com/get/Downloads/Connector-Python/mysql-connector-python-2.0.2-1.el6.noarch.rpm
       wget http://dev.mysql.com/get/Downloads/MySQLGUITools/mysql-utilities-1.5.3-1.el6.noarch.rpm

    2) Install the RPMs. "mysqlfailover --help" will confirm the install of the utilities:

       rpm -ivh mysql-connector-python-2.0.2-1.el6.noarch.rpm
       rpm -ivh mysql-utilities-1.5.3-1.el6.noarch.rpm

    3) Set GTID mode to ON. GTID mode is a prerequisite for mysqlfailover; GTIDs track transaction IDs for replication instead of tracking the log file and position:

       log-bin=1
       gtid_mode=ON
       enforce_gtid_consistency=true
       log-slave-updates=1

    4) Set up replication using a CHANGE MASTER TO statement like the one below. Notice the difference in the statement: we do not give the binary log file and position.

       CHANGE MASTER TO MASTER_HOST = '104.xxx.xx.xx', MASTER_PORT = 3306, MASTER_USER = 'slaveuser', MASTER_PASSWORD = 'slavepass', MASTER_AUTO_POSITION = 1;

    5) Set up the report host. report-host has to be set in my.cnf on the slave; you can validate this configuration by issuing "SHOW SLAVE HOSTS" on the master server, e.g.

       report-host=100.xx.xx.x

    6) master-info-repository has to be set to TABLE on the slave. Implement the changes below in the slave configuration:

       master-info-repository=TABLE
       log-slave-updates=1

    Operations

    1) Checking health: the command below can be issued to check the status of the replication servers. It connects to all slaves and lists the health of each server. If you do not see one of the slave servers, go and fix it.

       mysqlfailover --master=root:sairam@localhost:3306 --discover-slaves-login=root:sairam health

    2) Testing failover: run the command below and kill MySQL on the master server. The status should automatically be updated and the failover should start. Once failover is successful, the old master can be reconfigured as a slave manually.

       mysqlfailover --master=root:sairam@localhost:3306 --discover-slaves-login=root:sairam --failover-mode=auto --force

    3) Running as a daemon: the command below can be used to run in the background. The process will run in the background and initiate the failover when needed.

       mysqlfailover --master=root:sairam@localhost:3306 --discover-slaves-login=root:sairam --failover-mode=auto --force --daemon start

    4) Post-failover operations: you can use the --exec-after option to execute post-failover actions. For example, if heartbeat is installed you can initiate failover of a virtual IP as a post-failover action.
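    Before running mysqlfailover it is worth verifying the prerequisites from the configuration steps above on each server. The statements below are a minimal sketch (not part of the original tutorial):

       -- On every server: confirm GTID-based replication is active
       SHOW VARIABLES LIKE 'gtid_mode';
       SHOW VARIABLES LIKE 'enforce_gtid_consistency';

       -- On each slave: confirm the master info repository is stored in a table
       SHOW VARIABLES LIKE 'master_info_repository';

       -- On the master: every slave with report-host set should appear here
       SHOW SLAVE HOSTS;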

  • The design of the SCAN algorithm within one LDM instance
    As part of optimising scan execution by about 50% in MySQL Cluster 7.4 we made an effort to also write down a description of the scan protocol locally within the LDM threads. This is documented in the source code of MySQL Cluster 7.4, and here is an excerpt from the source code.

    DBLQH controls the execution of scans on tables on behalf of DBTC and DBSPJ. Here follows a signal overview of how a scan is performed within one LDM instance. For a description of the global scan protocol, see DbtcMain.cpp in a comment before execSCAN_TABREQ.

    DBLQH only controls execution of a scan towards one partition of a table. DBTC/DBSPJ is responsible for execution of scans towards the entire table and for ensuring that the API sees a consistent view of the table.

    There are currently four types of scans implemented in one LDM instance:
    Full table scan using the hash index. This is implemented in DBACC.
    Full table scan row by row. This is implemented in DBTUP.
    Full table scan row by row in disk order. This is implemented in DBTUP.
    Index scan using one or several ranges. This is implemented in DBTUX.

    DBLQH controls execution of one partition scan. Depending on the scan type, DBACC/DBTUP/DBTUX is responsible for getting the row references to the tuples scanned. DBTUP is responsible for reading those rows, and finally DBACC is responsible for any locking of rows required as part of the scan.

    Each scan is controlled by an interpreted program created by the API and transported down to DBTUP. This program is sent as part of the SCAN_FRAGREQ signal and passed to DBTUP in the STORED_PROCREQ signal. This program is applied on each row reference passed to DBTUP by execution of the execTUPKEYREQ signal.

    In index scans one or more ranges are sent in the keyinfo part of the SCAN_FRAGREQ. This range information is sent to DBTUX one range at a time. With multiple ranges, DBLQH will actually treat each range as a separate scan towards the other blocks, so a scan will be started and closed towards DBACC/DBTUP/DBTUX for each range involved.

    As an optimisation, all signals locally within one LDM instance have been converted to direct signals.

    The following signals are used as part of the scan of one partition.

    ACC_SCANREQ:
      This signal initialises an operation record in DBACC/DBTUP/DBTUX for the scan of one range or a full partition. Always sent as a direct signal and returned immediately through the signal object on return.
    STORED_PROCREQ:
      This signal stores the interpreted program used to read each tuple as part of the scan. The same signal is also used to deallocate the interpreted program when the entire scan of all ranges has been completed. Always sent as a direct signal and returned immediately through the signal object on return.
    ACC_LOCKREQ:
      Certain scans require a lock on the row before the row is read; this signal acquires such a lock. Always sent as a direct signal. The return signal is not always sent immediately.
    ACCKEYCONF:
      Signal returned when the lock has been acquired. The signal is normally sent directly when the row is not locked, but for a locked row the signal can be sent even a second or more later. When sent, it is sent as a direct signal.
    ACCKEYREF:
      Signal returned when acquiring the lock failed, e.g. due to the record being deleted while waiting for it.
    ACC_ABORTCONF:
      Signal returned after aborting a scan using an asynchronous message, to ensure that all asynchronous messages are delivered since setting the scan state as aborted.
    NEXT_SCANREQ:
      This signal is used with different meanings:
      ZSCAN_NEXT: Get the next row reference to read, returned in the NEXT_SCANCONF signal.
      ZSCAN_NEXT_COMMIT: Get the next row reference to read AND unlock the specified row. Returned in the NEXT_SCANCONF signal.
      ZSCAN_COMMIT: Unlock the specified row. The return signal is simply returned when returning from the call to execNEXT_SCANREQ.
      ZSCAN_CLOSE: Close the scan in DBACC/DBTUP/DBTUX.
      When sent as ZSCAN_COMMIT and ZSCAN_CLOSE it is always sent as a direct signal. Otherwise it is sent as a direct or asynchronous signal depending on the value of the scan_direct_count variable in the DBLQH scan record. The scan_direct_count variable ensures that we keep the number of direct signals sent bounded.
    NEXT_SCANCONF:
      Return signal to NEXT_SCANREQ containing the row reference to read or an indication that the close has completed. Always sent as a direct signal.
    TUPKEYREQ:
      This signal does the actual read of the row and sends the read row data directly to the API using the TRANSID_AI signal. This signal is always sent as a direct signal.
    ACC_CHECK_SCAN:
      Continue scanning from the specified place. Used by DBACC/DBTUP/DBTUX as an internal signal as part of the scan. This signal can be sent both as an asynchronous signal and as a direct signal.
    SCAN_FRAGCONF:
      Return signal sent to DBTC/DBSPJ after completing a part of the scan; the signal carries a set of references to rows sent to the API. After sending this signal DBLQH will stop and wait for a SCAN_NEXTREQ signal asking DBLQH to continue the scan of the partition. The number of rows scanned before sending SCAN_FRAGCONF depends on both configuration parameters and information in the SCAN_FRAGREQ signal. This signal is also sent when the scan is fully completed. This signal is normally a distributed signal, so it is always sent as an asynchronous signal.
    SCAN_NEXTREQ:
      Request to continue scanning from DBTC/DBSPJ, as requested to them from the API. This signal is normally a distributed signal, so it is always sent as an asynchronous signal.

    Below follows an example signal diagram of a scan of one partition.
    DBLQH          ACC      TUP    ACC/TUP/TUX    API      DBTC
      | ACC_SCANREQ
      |------------------------------------------>|
      |<------------------------------------------|
      | STORED_PROCREQ
      |-------------------------->|
      |<--------------------------|
      | NEXT_SCANREQ (ZSCAN_NEXT)
      |------------------------------------------>|
      |                           prepareTUPKEYREQ
      |                           |<--------------|
      |                           |-------------->|
      | NEXT_SCANCONF
      |<------------------------------------------|
      | TUPKEYREQ
      |-------------------------->|  TRANSID_AI
      |                           |---------------------------------------->|
      |<--------------------------|
      | NEXT_SCANREQ (ZSCAN_NEXT_COMMIT)
      |------------------------------------------>|
      |                           prepareTUPKEYREQ
      |                           |<--------------|
      |                           |-------------->|
      | NEXT_SCANCONF
      |<------------------------------------------|
      | TUPKEYREQ
      |-------------------------->|  TRANSID_AI
      |                           |---------------------------------------->|
      |<--------------------------|

    Repeat the above for as many rows as required before returning to the API. The above TRANSID_AI isn't necessary; the interpreted program could perform selection and decide not to send a specific row since it doesn't match the condition checked by the interpreted program.

      | SCAN_FRAGCONF
      |------------------------------------------------------------------->|
      .... Some time for API and DBTC to process things.
      | SCAN_NEXTREQ
      |<-------------------------------------------------------------------|
      | NEXT_SCANREQ (ZSCAN_NEXT_COMMIT)
      |------------------------------------------>|
      |                           prepareTUPKEYREQ
      |                           |<--------------|
      |                           |-------------->|
      | NEXT_SCANCONF
      |<------------------------------------------|
      | TUPKEYREQ
      |-------------------------->|  TRANSID_AI
      |                           |--------------------------->|
      |<--------------------------|

    Repeat the above again until it is time for the next SCAN_FRAGCONF to be sent. When NEXT_SCANCONF indicates that there are no more tuples to fetch, one starts to close the scan.

      | NEXT_SCANREQ (ZSCAN_NEXT_COMMIT)
      |------------------------------------------>|
      | NEXT_SCANCONF (no more tuples)
      |<------------------------------------------|
      | NEXT_SCANREQ (ZSCAN_CLOSE)
      |------------------------------------------>|
      | NEXT_SCANCONF
      |<------------------------------------------|
      | STORED_PROCREQ (delete interpreted program)
      |-------------------------->|
      |<--------------------------|
      | SCAN_FRAGCONF (close flag set)
      |------------------------------------------------------------------->|

    Now the scan is completed.

    Now a number of variations on the above signal diagrams:

    Scan with locking: in this case we use the flag ZSCAN_NEXT all the time and never ZSCAN_NEXT_COMMIT; instead we handle things a bit differently when receiving SCAN_NEXTREQ, where we perform a signal diagram like this:

      | NEXT_SCANREQ (ZSCAN_COMMIT)
      |------------------------------------------>|
      |<------------------------------------------|

    This is repeated for each row sent to the API in the previous SCAN_FRAGCONF signal.
    If the application wants the row locked for a longer time, it has had the chance to perform a key lookup operation that took over the lock, such that even when we unlock the scan lock, the transaction still retains a lock on the row.

    After each row scanned we check whether we've reached a scan heartbeat timeout. In case we have, we send a SCAN_HBREP signal to DBTC/DBSPJ to inform them that we're still actively scanning even though no result rows have been sent. Remember here that a scan in DBLQH can potentially scan billions of rows while only returning very few to the API. Thus we can scan for an extended time without returning to the API. This is handled by the method check_send_scan_hb_rep.

    Already when returning from ACC_SCANREQ we can discover that the partition (== fragment) is empty and go immediately to the close-down code.

    For index scans we will send TUX_BOUND_INFO after ACC_SCANREQ and before sending STORED_PROCREQ to DBTUX. This will provide one range to DBTUX for scanning; if multiple ranges are to be scanned we start up a new scan as if a new SCAN_FRAGREQ had been received, but we don't need to send STORED_PROCREQ since the same interpreted program will be used. We will however send ACC_SCANREQ and TUX_BOUND_INFO also for this new range.

    There are various reasons for temporarily stopping a scan; this could be lack of operation records or holding too many row locks, and one could also end up in this situation after waiting for a row lock. To restart the scan again after any type of temporary stop, one sends the signal ACC_CHECK_SCAN, either as a direct or as an asynchronous signal, to DBACC/DBTUP/DBTUX. This signal is sent from many different places in DBLQH, DBACC, DBTUP and DBTUX. It is always sent as part of NEXT_SCANREQ processing.

    When executing ACC_CHECK_SCAN one can flag to DBACC/DBTUP/DBTUX that one should check for a 10 ms delay with the flag ZCHECK_LCP_STOP. In previous versions this was also related to local checkpoints; this is no longer the case. Now it's only related to situations where it is required to perform an extra wait so that resources become available again. DBTUP and DBTUX send the signal CHECK_LCP_STOP to DBLQH in a number of situations, among other things when a locked key has been encountered. When the ACCKEYCONF signal is then received, indicating that the lock was acquired, DBLQH will still wait for CHECK_LCP_STOP to return after the 10 ms delay. This is on the TODO list to fix, to ensure that we can proceed with these locked rows immediately after delivery. As it is now we can get up to a 10 ms delay each time we encounter a locked row.
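    To make the scan types described above a little more concrete, here is a minimal SQL sketch (not from the original post; the table, column and index names are assumptions, and a running MySQL Cluster setup is assumed) of queries on an NDB table that would be executed as a full table scan versus an ordered index range scan in the data nodes:

       -- Assumed example table stored in MySQL Cluster (NDB)
       CREATE TABLE history (
         id    BIGINT UNSIGNED NOT NULL,
         ts    INT UNSIGNED    NOT NULL,
         value INT             NOT NULL,
         PRIMARY KEY (id),
         KEY idx_ts (ts)
       ) ENGINE=NDBCLUSTER;

       -- No usable index on `value`: executed as a full table scan in the LDM threads
       SELECT COUNT(*) FROM history WHERE value = 42;

       -- Range on the ordered index `idx_ts`: executed as an index range scan (DBTUX)
       SELECT id, value FROM history
        WHERE ts BETWEEN UNIX_TIMESTAMP('2015-03-02 00:00:00')
                     AND UNIX_TIMESTAMP('2015-03-09 00:00:00');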

  • Handling hierarchy and traversing social networks in MySQL with OQGraph
    From time to time we detect query patterns that are not well suited to the B+tree structures provided by InnoDB. One such situation is when you need to traverse a hierarchy (tree) or graph structure with many nodes. Specialist databases such as Neo4j exist for this. However, there is a simple solution in the form of OQGraph, which is distributed with MariaDB and is documented here. The OQGRAPH engine is based on an original idea by Open Query founder Arjen Lentz and was developed in-house with Antony Curtis at Open Query.

    A good simple example of how OQGraph can be applied is the friend-of-a-friend traversal, where you want to determine the path between two people in a social network. I've used MariaDB, installing OQGraph as a plugin:

       INSTALL SONAME 'ha_oqgraph';
       create database test;
       use test;

    First I'll create a person table to hold a bunch of people from my Facebook account, a friendship table to represent the various connections that exist between those people, and lastly the OQGraph table, which is what we will query in order to find the relationships.

       CREATE TABLE `person` (
         `id` int(11) NOT NULL AUTO_INCREMENT,
         `name` char(12) DEFAULT NULL,
         PRIMARY KEY (`id`)
       ) ENGINE=InnoDB AUTO_INCREMENT=17 DEFAULT CHARSET=latin1

       CREATE TABLE `friendship` (
         `person` int(10) unsigned NOT NULL,
         `friend` int(10) unsigned NOT NULL,
         PRIMARY KEY (`person`,`friend`),
         KEY `idx_friend_person` (`friend`,`person`)
       ) ENGINE=InnoDB DEFAULT CHARSET=latin1

       CREATE TABLE `friendship_graph` (
         `latch` varchar(32) DEFAULT NULL,
         `origid` bigint(20) unsigned DEFAULT NULL,
         `destid` bigint(20) unsigned DEFAULT NULL,
         `weight` double DEFAULT NULL,
         `seq` bigint(20) unsigned DEFAULT NULL,
         `linkid` bigint(20) unsigned DEFAULT NULL,
         KEY `latch` (`latch`,`origid`,`destid`) USING HASH,
         KEY `latch_2` (`latch`,`destid`,`origid`) USING HASH
       ) ENGINE=OQGRAPH DEFAULT CHARSET=latin1 `data_table`=friendship `origid`=person `destid`=friend

    In the CREATE statement for the OQGraph table, we pass the origin table (data_table) and fields so it understands where to source the data. The person table simply contains names and IDs:

       +----+----------+
       | id | name     |
       +----+----------+
       |  1 | ewen     |
       |  2 | piotr    |
       |  3 | mixa     |
       |  4 | maciek   |
       |  5 | kenny    |
       |  6 | lefred   |
       |  7 | peter    |
       |  8 | celeste  |
       |  9 | stephen  |
       | 10 | carolina |
       | 11 | mark     |
       | 12 | harriet  |
       | 13 | dani     |
       | 14 | juanjo   |
       | 15 | ander    |
       | 16 | ana      |
       +----+----------+

    And the friendship table was populated with the paired ID values to represent each connection. For example, here we record that Ander knows Ana, and Ana knows Ander:

       insert into friendship values (16,15);
       insert into friendship values (15,16);

    Now we can find the shortest path between two nodes (people):

       MariaDB [test]> select p.name from friendship_graph fg join person p on (fg.linkid=p.id) where fg.latch = '1' and origid = 1 and destid = 2;
       +-------+
       | name  |
       +-------+
       | ewen  |
       | piotr |
       +-------+

    Since Piotr and I know each other directly, there is just a straight link between the two of us. What about something less direct?

       MariaDB [test]> select p.name from friendship_graph fg join person p on (fg.linkid=p.id) where fg.latch = '1' and origid = 8 and destid = 2;
       +---------+
       | name    |
       +---------+
       | celeste |
       | ewen    |
       | piotr   |
       +---------+

    In this case Celeste knows me, and since I know Piotr we have linked the two nodes (people) via a third node. Can this follow a longer chain; can we traverse more than one level of abstraction?
       MariaDB [test]> select p.name from friendship_graph fg join person p on (fg.linkid=p.id) where fg.latch = '1' and origid = 8 and destid = 16;
       +---------+
       | name    |
       +---------+
       | celeste |
       | ewen    |
       | juanjo  |
       | ander   |
       | ana     |
       +---------+

    Again, Celeste knows Ewen, who knows Juanjo, he in turn knows Ander, and Ander is friends with Ana.

    What is the "latch" field in the CREATE TABLE statement? The latch field is the mechanism for communicating with the OQGraph engine, and it allows us to set the algorithm to use.

    Summary of implemented latch commands:

    Latch          Alternative    Additional WHERE clause fields  Graph operation
    NULL           (unspecified)  (none)                          List original data
    '' (empty)     0              (none extra)                    List all vertices in linkid column
    '' (empty)     0              origid                          List all first-hop vertices from origid in linkid column
    dijkstras      1              origid, destid                  Find shortest path using Dijkstra's algorithm between origid and destid, with traversed vertex ids in linkid column
    dijkstras      1              origid                          Find all vertices reachable from origid, listed in linkid column, and report sum of weights of vertices on path to given vertex in weight
    dijkstras      1              destid                          Find all vertices reachable from destid, listed in linkid column, and report sum of weights of vertices on path to given vertex in weight
    breadth_first  2              origid                          List vertices reachable from origid in linkid column
    breadth_first  2              destid                          List vertices from which a path can be found to destid in linkid column
    breadth_first  2              origid, destid                  Visit all vertices possible between origid and destid, report in linkid column

    The MariaDB documentation on OQGraph can be found in the MariaDB knowledge base. To solve graph-type queries natively in MySQL it is necessary both to consider the schema model to use, such as nested sets or adjacency lists, and in some cases to write stored procedures to provide the necessary recursion. OQGraph presents a simple solution to a complex problem without introducing additional complications to the stack and without the need to develop sophisticated SQL. Facebook has created its own solution called TAO, which interacts with MySQL, but to my knowledge it is not currently open source.
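    As a small illustration of the breadth_first latch listed in the table above, the sketch below uses the same example tables to list everyone reachable from a given person (not from the original post; the exact result depends on the friendship rows, which the post only partially shows):

       -- List every person reachable from ewen (id 1) in the social graph
       SELECT p.name
         FROM friendship_graph fg
         JOIN person p ON (fg.linkid = p.id)
        WHERE fg.latch = '2'
          AND fg.origid = 1;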