Thursday, 30 October 2014
 
 


Newsfeeds
Planet MySQL - http://www.planetmysql.org/

  • GeoJSON Functions
    In recent years, GeoJSON has become a popular data format for exchanging GIS data, the primary factors being that it's easy to read, simple, and lightweight. In 5.7.5, we added support for parsing and generating GeoJSON documents via two new functions: ST_GeomFromGeoJson() and ST_AsGeoJson(). These functions make it easy to connect MySQL with other GeoJSON-enabled software and services, such as the Google Maps Javascript API.

    Since GeoJSON is a JSON format, we needed a library to parse and write JSON documents. After evaluating several candidates, we ended up with rapidjson due to its features, speed, and compatible license.

    The new functions support all of the geometry types specified in the GeoJSON specification, as well as collections. The parsing function also extracts the geometry information from feature objects, as shown below:

    mysql> SELECT ST_AsText(ST_GeomFromGeoJson(
        -> '{
        '>    "type": "Feature",
        '>    "properties": {"Location": "Oracle HQ"},
        '>    "geometry":
        '>    {
        '>      "type": "Point",
        '>      "coordinates": [-122.262289, 37.530518]
        '>    }
        '> }'
        '> ));
    +--------------------------------+
    | SELECT ST_AsText(ST_GeomFr ... |
    +--------------------------------+
    | POINT(-122.262289 37.530518)   |
    +--------------------------------+

    When creating GeoJSON documents from spatial data, you can specify a maximum number of decimal digits in the output. If GeoJSON output length is a concern for you, this is a simple and efficient way to reduce the output length when your geometry contains a lot of coordinates with many decimals.

    You can even add a URN in the OGC namespace to the GeoJSON output. The function uses the SRID from the geometry input, and outputs a short or long format OGC URN depending on the input parameters (note that the output below is formatted for readability):

    mysql> SELECT ST_AsGeoJson(ST_GeomFromText("POINT(-0.127676 51.507344)", 4326), 5, 4);
    +-------------------------------------------------------------------------+
    | ST_AsGeoJson(ST_GeomFromText("POINT(-0.127676 51.507344)", 4326), 5, 4) |
    +-------------------------------------------------------------------------+
    | {                                                                       |
    |   "type":"Point",                                                       |
    |   "coordinates": [-0.12768,51.50734],                                   |
    |   "crs":                                                                |
    |   {                                                                     |
    |     "type":"name",                                                      |
    |     "properties":                                                       |
    |     {                                                                   |
    |       "name":"urn:ogc:def:crs:EPSG::4326"                               |
    |     }                                                                   |
    |   }                                                                     |
    | }                                                                       |
    +-------------------------------------------------------------------------+

    URNs in the OGC namespace are also recognized by the parsing function, and are stored in the geometry output from the function.

    What about importing GeoJSON with 3D geometries? You might think that your lovely collection of GeoJSON documents with 3D geometries is useless with these functions and can't be imported, but this is exactly what the options argument in ST_GeomFromGeoJson is for. By default, the parsing function will refuse to parse documents with 3D geometries.
    We do, however, realize that this would reject a lot of existing documents out there, so setting the options argument to 2, 3, or 4 will allow you to import them:

    mysql> SELECT ST_AsText(ST_GeomFromGeoJson(
        -> '{
        '>    "type": "LineString",
        '>    "coordinates":
        '>    [
        '>      [30.1904297,69.7181067,1.11],
        '>      [27.8173828,70.2446036,1.12],
        '>      [21.9726563,69.6876184,1.12],
        '>      [18.1054688,68.7204406,1.11],
        '>      [15.9082031,67.5924750,1.12],
        '>      [14.2382813,66.2314575,1.11],
        '>      [13.1835938,65.1091482,1.10],
        '>      [11.9091797,63.6267446,1.10],
        '>      [8.65722660,61.2702328,1.11],
        '>      [7.07519530,58.0080978,1.11]
        '>    ]
        '> }', 3
        '> ));
    +------------------------------------------------------------------------------+
    | SELECT ST_AsText(ST_GeomFr ...                                               |
    +------------------------------------------------------------------------------+
    | LINESTRING(30.1904297 69.7181067,27.8173828 70.2446036,21.9726563 69.6876184,|
    | 18.1054688 68.7204406,15.9082031 67.592475,14.2382813 66.2314575,13.1835938 6|
    | 5.1091482,11.9091797 63.6267446,8.6572266 61.2702328,7.0751953 58.0080978)   |
    +------------------------------------------------------------------------------+

    Option values of 2, 3, and 4 all have the same effect for now, but in the future, when 3D geometries are supported, they will produce different results. We recommend using option 3, since this means that documents with 3D geometries will be rejected once 3D support is introduced in MySQL. This ensures that the change of behavior won't happen silently, and that you can take the appropriate action in your applications.

    We look forward to your feedback on the new Geohash and GeoJSON features! Please let us know if you have any comments or if you run into any bugs. Thank you for using MySQL!
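    To see the two functions working together, here is a minimal round-trip sketch (an illustration, not from the original post): parse a GeoJSON point, then serialize the resulting geometry straight back to GeoJSON. It assumes a 5.7.5 or later server, and the exact coordinate formatting of the output may vary between versions.

    mysql> -- Round trip: GeoJSON text -> geometry -> GeoJSON text.
    mysql> -- The result should be a Point document equivalent to the input.
    mysql> SELECT ST_AsGeoJson(ST_GeomFromGeoJson(
        -> '{"type": "Point", "coordinates": [102.0, 0.5]}'
        -> ));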

  • MariaDB 10.1.1: triggers for RBR
    Sometimes users ask for something that doesn't really make sense. At first glance, that is. But then you start asking questions and realize that the user was right, you were wrong, and it is, actually, a perfectly logical and valid use case. I had one of these moments when I heard about a request to make triggers work on the slave in row-based replication. Like, really? In RBR, all changes made by triggers are replicated from the master to slaves as row events. If triggers were also fired on the slave, they would apply their changes twice. And anyway, assuming one only has triggers on the slave (why?), in statement-based replication those triggers would run on the slave normally, wouldn't they?

    Well, yes, they would, but one cannot always use statement-based replication. If one could, RBR would never have been implemented. There are many cases that statement-based replication cannot handle correctly, and Galera requires RBR too. And as it turned out, that user indeed only had triggers on the slave: the master gets all the updates, and the slaves maintain summary tables that the triggers keep up to date.

    That's why MariaDB 10.1.1 can now optionally invoke triggers for row-based events. This is controlled by the slave_run_triggers_for_rbr system variable, which is defined as:

    MariaDB [test]> SELECT * FROM INFORMATION_SCHEMA.SYSTEM_VARIABLES WHERE
        -> VARIABLE_NAME='slave_run_triggers_for_rbr'\G
    *************************** 1. row ***************************
            VARIABLE_NAME: SLAVE_RUN_TRIGGERS_FOR_RBR
            SESSION_VALUE: NULL
             GLOBAL_VALUE: NO
      GLOBAL_VALUE_ORIGIN: COMPILE-TIME
            DEFAULT_VALUE: NO
           VARIABLE_SCOPE: GLOBAL
            VARIABLE_TYPE: ENUM
         VARIABLE_COMMENT: Modes for how triggers in row-base replication on slave side will be executed. Legal values are NO (default), YES and LOGGING. NO means that trigger for RBR will not be running on slave. YES and LOGGING means that triggers will be running on slave, if there was not triggers running on the master for the statement. LOGGING also means results of that the executed triggers work will be written to the binlog.
        NUMERIC_MIN_VALUE: NULL
        NUMERIC_MAX_VALUE: NULL
       NUMERIC_BLOCK_SIZE: NULL
          ENUM_VALUE_LIST: NO,YES,LOGGING
                READ_ONLY: NO
    COMMAND_LINE_ARGUMENT: REQUIRED
    1 row in set (0.01 sec)

    That is, it's a global variable that can be set to YES, NO, or LOGGING, with NO being the default. The value YES will, naturally, make triggers run for RBR events. The value LOGGING will make them run, and the changes made by the triggers will also be written to the binary log. The mode can be changed at runtime, from the command line, or in a config file. Either way, when slave_run_triggers_for_rbr is not NO, MariaDB slaves will invoke specific triggers for certain row events:

      • Update_row_event will invoke an UPDATE trigger.
      • Delete_row_event will invoke a DELETE trigger.
      • Write_row_event is a bit tricky. It is applied (yes, in MySQL too) as follows: the slave tries to insert a row. If the table has UNIQUE KEY constraints (or a PRIMARY KEY) and there is a conflicting row, it will be updated to carry all the values of the row that should have been inserted. But if the table also has FOREIGN KEY constraints, or there are other UNIQUE keys in the table, the old row will be deleted and the new row inserted. That is two operations instead of one, so it's slower, but it guarantees that there will be no references to the old row after it disappears.
    That is, Write_row_event can invoke an INSERT trigger, a DELETE trigger, or an UPDATE trigger, depending on whether a conflicting row exists, which UNIQUE constraint was violated, and whether the table is referenced in a foreign key constraint. If you think that's too complex, you're right; we'll simplify it in 10.1.2.

    In setups like this, particularly if one replicates further, with these slaves being masters to yet other slaves, there is a risk that triggers will be run multiple times for the same row. To detect this kind of mistake, MariaDB marks row events that have invoked triggers, and when these events are replicated, they won't cause further trigger invocations on slaves. This also means that the master must not have any triggers on the tables in question; otherwise, the slave-side triggers will not be invoked.
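    To make the summary-table use case concrete, here is a minimal sketch of what the slave side could look like. It assumes a hypothetical orders table replicated from the master; the order_totals table and the upd_totals trigger are illustrative names that exist only on the slave, not anything from the original post.

    -- On the slave (MariaDB 10.1.1 or later): let row events fire triggers.
    SET GLOBAL slave_run_triggers_for_rbr = YES;

    -- Slave-local summary table; the master never writes to it.
    CREATE TABLE order_totals (
      customer_id INT PRIMARY KEY,
      total DECIMAL(12,2) NOT NULL DEFAULT 0
    );

    -- Fired by replicated Write_row_events on the hypothetical orders table,
    -- keeping the summary up to date as rows arrive from the master.
    CREATE TRIGGER upd_totals AFTER INSERT ON orders
    FOR EACH ROW
      INSERT INTO order_totals (customer_id, total)
      VALUES (NEW.customer_id, NEW.amount)
      ON DUPLICATE KEY UPDATE total = total + NEW.amount;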

  • PECL/mysqlnd_ms needs updates for MySQL Group Replication
    ‘Synchronous’, multi-master, auto-everything – that's the new MySQL Group Replication (IPC14 talk/slides) in simple words. After torturing PHP developers for decades with MySQL Replication, there is now a new replication option which does not require read-write splitting: a system that knows neither slave lag nor reading stale data. In theory, MySQL Group Replication is just about the perfect approach for running a standard PHP application (WordPress, Drupal, …) on a small cluster (3-7 nodes) in LAN settings. In theory, MySQL Group Replication improves both availability and performance.

    (MySQL Group Replication talk given today at the International PHP Conference 2014, Munich.)

    Distribution transparency

    When designing replication systems, there are some desirable goals which contradict each other. In a perfect world, from a developer's perspective, a database cluster would behave exactly the same way as a single database. The user should never have to worry where and how data is stored in the cluster. Transactions executed on the cluster would provide the same properties as transactions run on a standalone database, and the cluster would never return stale data (synchronous).

    Synchronous replication is desired, but it requires coordination among cluster nodes. In LAN settings, coordination can be reasonably fast. MySQL Group Replication is ‘synchronous’ replication (see the slides for details). Deploy it on a LAN only. On the internet, in WAN settings, when trying to replicate from Europe to Asia, things will be slow. If WAN, then forget about either distribution transparency or performance; if WAN, go for asynchronous MySQL Replication.

    The extra work different clusters cause for the developer

    Synchronous and asynchronous clusters always cause some extra work for the developer. Either approach requires load balancing and failover logic. An asynchronous approach adds dealing with delays and stale reads. MySQL Replication is not only asynchronous but also has only one master (primary), which adds read-write splitting. PECL/mysqlnd_ms tries to help with all these tasks and take them over in a semi-transparent way.

    PECL/mysqlnd_ms support for synchronous clusters

    PECL/mysqlnd_ms is a plugin for mysqlnd. PDO_MySQL and mysqli use mysqlnd as their default library to talk to MySQL, and either API works with PECL/mysqlnd_ms, our load balancing and replication plugin. The plugin monitors many API calls and aims to make using any kind of MySQL cluster easier, no matter which: MySQL Replication, MySQL Cluster, MySQL Group Replication, or 3rd-party solutions. Example configurations are given in the PHP manual.

    MySQL Group Replication usage task 1: load balancing

    When moving an application from a single database server to a synchronous cluster, there are two additional tasks: load balancing and failover. With PECL/mysqlnd_ms, load balancing does not require any code changes. The plugin intercepts your connect calls and tests whether the host you connect to matches the name of a config entry. If so, the plugin loads the config, learns from it which nodes there are, and starts load balancing connections. Should you be too lazy to change the host names in your connects to match a PECL/mysqlnd_ms config entry, then just name the config entry after your current host names, e.g. have a config entry for '127.0.0.1':

    $link = new mysqli("myapp", ...);

    MySQL Group Replication usage task 2: failover

    The second task is to handle the failure of a cluster node and connect to the next available one.
    PECL/mysqlnd_ms does that for you if you want: it picks an alternative node from the config and connects you to it. There is a small feature gap here, though. MySQL Group Replication tries to be an auto-everything solution: it automatically detects failed nodes, and it fully automates adding new nodes to the cluster. That's cool, but it means that over time the set of nodes can change, and your config needs to be updated.

    The PECL/mysqlnd_ms feature gap

    There are two options. First, you could redeploy the config yourself. Second, after a failover, or periodically, we could make PECL/mysqlnd_ms fetch the list of nodes from the cluster and reconfigure itself (see also here). That is finally possible because MySQL Group Replication shows the list of nodes in a performance schema table. Once we have done that, and MySQL Group Replication has reached GA, the auto-everything cluster for MySQL becomes real. Everything on the server side is already automatic. PECL/mysqlnd_ms is already GA and already handles all the additional tasks – without code changes. Only a tiny addition is missing, and you could even get an auto-deployed PECL/mysqlnd_ms…

    Happy hacking!
    @Ulf_Wendel

    The post PECL/mysqlnd_ms needs updates for MySQL Group Replication appeared first on Ulf Wendel.
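    For illustration, the node lookup the plugin would have to perform could look like the query below. The table and column names (performance_schema.replication_group_members, MEMBER_HOST, MEMBER_PORT, MEMBER_STATE) are taken from later MySQL Group Replication releases; the 2014 preview builds may expose the member list under a different name, so treat this as a hedged sketch rather than the plugin's actual implementation.

    mysql> -- Discover the current cluster members and their state.
    mysql> SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE
        -> FROM performance_schema.replication_group_members;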

  • Presenting Sphinx Search at Percona Live London
    It has been a while since I posted on this blog. The more in-depth articles are all posted on the Spil Games Engineering blog, and I have been so overloaded with work in the past months that I hardly have any time left. One of the reasons for having too much on my plate was my own doing: I volunteered to be a conference committee member for Percona Live London, and we, as a committee, worked our way through all the proposals while you all were enjoying your holidays in the sun. Well, I must admit I did review a couple of them sitting in the sun, next to the pool, enjoying a cold drink. ;) There were so many great proposals that it was really hard to choose which ones would be left out.

    I also proposed a talk for Percona Live London this year, and my fellow committee members liked it enough for it to make the final schedule: Serve out any page with an HA Sphinx environment. In essence it is a MySQL case study where I will show how we at Spil Games use Sphinx Search in three different use cases: our main search, friend search, and serving out our content on any page. I'll also describe how we handle high availability, how we handle index creation, and show benchmark results between MySQL and Sphinx in various use cases.

    In case you are interested in Sphinx or the benchmark results: the session will be on the 4th of November, 3:10pm – 4:00pm, in Cromwell 1 & 2. Also, don't hesitate to ask me things when I'm wandering around in the corridors and rooms. Or maybe we'll meet at the MySQL Community Dinner? See you next week!

    Tagged: conference committee, percona live london, sphinx search

  • An Ending and a Beginning: VMware Has Acquired Continuent
    As of today, Continuent is part of VMware. We are absolutely over the moon about it. You can read more about the news on the VMware vCloud blog post by Ajay Patel, our new boss. There's also an official post on our Continuent company blog. In a nutshell, the Continuent team is joining the VMware Cloud Services Division. We will continue to improve, sell, and support our Tungsten products and work on innovative integration into VMware's product line.

    So why do I feel exhilarated about joining VMware? There are three reasons.

    1. Continuent is joining a world-class company that is the leader in virtualization and cloud infrastructure solutions. Even better, VMware understands the value of data to businesses. They share our vision of managing an integrated fabric of standard DBMS platforms, in public clouds as well as in local data centers. It is a great home to advance our work for many years to come.

    2. We can continue to support our existing users and make Tungsten even better. I know many of you have made big decisions to adopt Continuent technology that would affect your careers if they turned out badly. We now have more resources and a mandate to grow our product line. We will be able to uphold our commitments to you and your businesses.

    3. It's a great outcome for our team, which has worked for many years to make Continuent Tungsten technology successful. This includes our investors at Aura in Helsinki, who have been dogged in their support throughout our journey.

    Speaking of the Continuent team… I am so proud of what all of you have achieved. Today we are starting a new chapter in our work together. See you at VMware!