Monday, 08 Feb 2016
 
 

Samui Sketchup Community
Newsfeeds
Planet MySQL
Planet MySQL - http://www.planetmysql.org/

  • MySQL 5.7: Introduction for Operational DBAs
    Join us Tuesday, February 16, 2016 at 9:00am PST (UTC-8) for a webinar on the operational and monitoring features of MySQL 5.7 with Percona CEO Peter Zaitsev. MySQL 5.7 is a great release, providing valuable features for both daily operations and ongoing development. In this, part two of our ongoing 5.7 webinar series, we will look into the new MySQL 5.7 features and enhancements that improve operations, with a specific focus on monitoring. These include:
    - An improved optimizer, including updates to EXPLAIN and enhanced JSON support
    - Performance and scalability improvements for the InnoDB storage engine, including temp tables, spatial types and full-text search parsing
    - Security improvements, such as a password expiration policy
    - Performance and sys schema improvements, such as memory usage, metadata locks, monitoring capabilities and reduced footprint/overhead
    - Better online server management
    - Improved replication functions, including new statements for multi-source replication and better monitoring
    MySQL 5.7 promises to be faster, more robust, and more secure. We look forward to seeing you at the webinar!
    Peter Zaitsev co-founded Percona in 2006, assuming the role of CEO. Percona helps companies of all sizes maximize their success with MySQL. Percona was named to the Inc. 5000 in 2013. Peter was an early employee at MySQL AB, eventually leading the company’s High Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University, where he majored in Computer Science. As CEO of Percona, Peter enjoys mixing business leadership with hands-on technical expertise. Peter is co-author of High Performance MySQL, published by O’Reilly, one of the most popular books on MySQL performance. Peter blogs regularly on MySQLPerformanceBlog.com and speaks frequently at conferences. Peter lives in North Carolina with his wife and two children. In his spare time, Peter enjoys travel and spending time outdoors.

  • High availability read-write splitting with php-mysqlnd, MySQL Replication and HAProxy
    MySQL Replication is used in a variety of use cases: scaling out read workloads, providing high availability and geographic redundancy, and offloading backups and reporting/analytic jobs. However, it has one big drawback: the application needs to send writes to the master only. Updating more than one master in a replication setup can result in data inconsistency and cause replication to break. An SQL-aware load balancer like MaxScale could help here, as it can redirect SQL to the appropriate node. If you are using HAProxy and the application is written in PHP, read-write splitting is still achievable with the PHP MySQL native driver with the master-slave plugin (aka php-mysqlnd_ms). The driver can perform the read-write splitting, as well as load balancing and failover. There are advantages to having a load balancer as well, though. php-mysqlnd_ms also has experimental support for multi-master replication setups like Galera or MMM. In this blog post, we explore the use of php-mysqlnd_ms with a PHP application (Wordpress) on a standard MySQL Replication backend.

    Why add HAProxy between php-mysqlnd and MySQL?

    The suggestion here is to use HAProxy between the PHP driver and MySQL, as it provides a single point of access to the application, routes queries to the backend MySQL servers, and also takes care of the master/slave health checks. The reason behind this is to minimize the changes that need to be made to the php-mysqlnd configuration file residing on each application server. This is especially important during dynamic replication topology changes, e.g., when a new slave is added to or removed from the setup. If you have many application/web servers that connect to a single replication setup, or the database servers are hosted in a dynamic environment that is constantly changing (e.g. Docker, cloud instances), then this setup might be what you are looking for.
    Our architecture looks like this: the PHP application (Wordpress) is hosted on the web server, and php-mysqlnd_ms will redirect writes to the HAProxy node on port 3307 while reads are redirected to port 3308. A hot-standby HAProxy instance is coupled with Keepalived to provide a virtual IP address. The web server connects to the virtual IP address on the respective port as a single access point to our MySQL Replication setup. All nodes in this setup are running on Debian 8 (Jessie).

    Deploying MySQL Replication

    We use ClusterControl to deploy a three-node MySQL Replication setup with two slaves. Install ClusterControl and go to “Create Database Node”. In this dialog, we are going to create a MySQL Replication master node. Specify the required information and click “Deploy”. Once added, we will use the “Add Node” function to add one slave at a time. Repeat that step for the second slave. At the end of the deployment, you should see one master node and two slaves in ClusterControl’s summary bar.

    Deploying HAProxy and Configuring Health Checks

    ClusterControl supports deployment of HAProxy and Keepalived. For MySQL Replication, ClusterControl by default uses a script called mysqlchk, located at /usr/local/sbin/mysqlchk on every database node, to run a simple ‘select 1’ health check. That script doesn’t produce an accurate health check result and can’t distinguish whether the database node is a healthy master or slave. We have to come up with a better health check that suits HAProxy. For the purpose of this blog post, we have built a custom health check script. The script detects the MySQL replication role on the database node as per below:

    ```
    if master (SHOW SLAVE STATUS > 1 AND read_only = OFF)
        return 'MySQL master is running.'
    if slave (Slave_IO_Running = Yes AND Slave_SQL_Running = Yes
              AND (Seconds_Behind_Master = 0 OR Seconds_Behind_Master < SLAVE_LAG_LIMIT))
        return 'MySQL slave is running. (slave lag: 0)'
    else
        return 'MySQL is *down*'
    ```

    Note that the assigned mysql user must have at least the REPLICATION CLIENT and REPLICATION SLAVE privileges in order for the script to report correctly. Before the deployment begins, run the following command on the ClusterControl node to replace the health check template for MySQL Replication:

    ```
    $ wget https://raw.githubusercontent.com/ashraf-s9s/mysqlchk/master/mysqlchk.mysql -O /usr/share/cmon/templates/mysqlchk.mysql
    ```

    Now we are good to deploy the two HAProxy instances. Ensure the role for all nodes is set to Active and click on ‘Install HAProxy’ to start the installation. Repeat the step for the second HAProxy node.

    To make HAProxy work with MySQL Replication, two HAProxy listeners (3307 for writes, 3308 for reads) are required. We also have to use tcp-check to distinguish whether the backend node is a healthy master or slave. To achieve this, we need to make some modifications to the installed HAProxy configuration file, located at /etc/haproxy/haproxy.cfg on the load balancer nodes. Ensure you have the following configuration lines in haproxy.cfg on both load balancer nodes:

    ```
    listen haproxy_192.168.55.110_3307
        bind *:3307
        mode tcp
        timeout client  10800s
        timeout server  10800s
        balance leastconn
        option tcp-check
        tcp-check expect string MySQL\ master
        option allbackups
        default-server port 9200 inter 2s downinter 5s rise 3 fall 2 slowstart 60s maxconn 64 maxqueue 128 weight 100
        server 192.168.55.111 192.168.55.111:3306 check
        server 192.168.55.112 192.168.55.112:3306 check
        server 192.168.55.113 192.168.55.113:3306 check

    listen haproxy_192.168.55.110_3308
        bind *:3308
        mode tcp
        timeout client  10800s
        timeout server  10800s
        balance leastconn
        option tcp-check
        tcp-check expect string is\ running.
        option allbackups
        default-server port 9200 inter 2s downinter 5s rise 3 fall 2 slowstart 60s maxconn 64 maxqueue 128 weight 100
        server 192.168.55.111 192.168.55.111:3306 check
        server 192.168.55.112 192.168.55.112:3306 check
        server 192.168.55.113 192.168.55.113:3306 check
    ```

    The use of ‘option tcp-check’ and ‘tcp-check expect’ is vital here. This is how HAProxy is able to route the incoming query to the correct backend server. Writes are redirected to the node which reports ‘MySQL master is running.’ (therefore the expected string is “MySQL\ master”). Reads are redirected to the nodes whose return string contains “is\ running.”, to include the master (‘MySQL master is running.’) and all slaves (‘MySQL slave is running.’) in the read-only load balancing set.

    Then, deploy Keepalived with virtual IP address 192.168.55.100 via ClusterControl > Actions > Add Load Balancer > Install Keepalived. At this point, all nodes have been deployed correctly, as indicated by the green ticks in the summary bar. The database and load balancer tiers are now deployed. Let’s move on to the application tier.

    Deploying the Web Server and Configuring php-mysqlnd for Master Slave

    The following steps should be performed on the web/application server. Install the Apache web server and PHP 5 packages:

    ```
    $ apt-get install apache2 libapache2-mod-php5 php5-common php5-gd php5-json php5-curl php5-mysqlnd php-pear php5-dev
    ```

    Install the mysqlnd_ms module through PECL:

    ```
    $ pecl install mysqlnd_ms
    ```

    Create a configuration file for mysqlnd_ms:

    ```
    $ vi /etc/php5/mods-available/mysqlnd_ms.ini
    ```

    Add the following lines:

    ```
    ; configuration for php MySQL module
    ; priority=40
    extension=mysqlnd_ms.so
    mysqlnd_ms.enable=1
    mysqlnd_ms.config_file=/etc/php5/mods-available/mysqlnd_ms.json
    ```

    We are using priority=40 since it requires the php-mysqlnd module to be loaded first.
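    For illustration, the role-detection logic of the custom health check described earlier might look like the following Python sketch. This is only a hypothetical restatement of the decision rules; the real mysqlchk script queries MySQL directly and serves its result over port 9200, and the field names (`slave_configured`, `read_only`, and the lag limit) are my own stand-ins:

    ```python
    # Sketch of the health-check decision logic that tells HAProxy whether a
    # node is a healthy master, a healthy slave, or down. The return strings
    # mirror what the HAProxy tcp-check rules match on.

    SLAVE_LAG_LIMIT = 30  # hypothetical lag threshold, in seconds

    def check_role(status):
        """Classify a node from a dict of its replication status fields."""
        if not status["slave_configured"]:
            # No slave thread configured: a writable node is the master.
            if not status["read_only"]:
                return "MySQL master is running."
            return "MySQL is *down*"
        # A slave needs both replication threads running and acceptable lag.
        lag = status.get("Seconds_Behind_Master")
        if (status.get("Slave_IO_Running") == "Yes"
                and status.get("Slave_SQL_Running") == "Yes"
                and lag is not None and lag < SLAVE_LAG_LIMIT):
            return "MySQL slave is running. (slave lag: %d)" % lag
        return "MySQL is *down*"

    master = {"slave_configured": False, "read_only": False}
    healthy_slave = {"slave_configured": True, "read_only": True,
                     "Slave_IO_Running": "Yes", "Slave_SQL_Running": "Yes",
                     "Seconds_Behind_Master": 0}
    broken_slave = {"slave_configured": True, "read_only": True,
                    "Slave_IO_Running": "Yes", "Slave_SQL_Running": "No",
                    "Seconds_Behind_Master": 100}
    print(check_role(master))        # MySQL master is running.
    print(check_role(healthy_slave)) # MySQL slave is running. (slave lag: 0)
    print(check_role(broken_slave))  # MySQL is *down*
    ```

    The point of returning distinct strings rather than a bare up/down flag is exactly what the two tcp-check listeners exploit: the write listener matches only the master string, while the read listener matches any "is running." node.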
    Enable the module:

    ```
    $ php5enmod mysqlnd_ms
    ```

    Create the mysqlnd_ms.json configuration file:

    ```
    $ vi /etc/php5/mods-available/mysqlnd_ms.json
    ```

    Add the following lines:

    ```
    {
      "wordpress_ms": {
        "master": {
          "master_0": {
            "host": "192.168.55.100",
            "port": "3307",
            "user": "wpress",
            "password": "password",
            "db": "wpress"
          }
        },
        "slave": {
          "slave_0": {
            "host": "192.168.55.100",
            "port": "3308",
            "user": "wpress",
            "password": "password",
            "db": "wpress"
          }
        }
      }
    }
    ```

    We defined the application name as “wordpress_ms”; this value will be used to replace the ‘host’ value in MySQL function calls. The master section provides information about the MySQL master: we forward writes to the virtual IP address on the HAProxy node, on port 3307. HAProxy will automatically discover the correct master to send the writes to. The slave section provides information about the MySQL slaves: we forward reads to the virtual IP address on the HAProxy node, on port 3308. HAProxy will then automatically discover which nodes are available to send the reads to.

    The JSON configuration file must be accessible by the Apache user; in this case we are using the default www-data. Apply the correct ownership on the config file:

    ```
    $ chown www-data:www-data /etc/php5/mods-available/mysqlnd_ms.json
    ```

    Now we are ready to deploy Wordpress with MySQL Replication as the database backend.

    Installing Wordpress

    When installing Wordpress, specify the mysqlnd_ms application name (wordpress_ms) as the Database Host value. That’s it! You are now running Wordpress with read-write splitting on top of MySQL Replication.

    Testing

    Writes are done on the master server only. If the master fails, replication will stop. Failover must be done by promoting the most up-to-date slave before replication can resume. Applications doing updates must then reconnect to the newly promoted master and continue to operate. If the master (192.168.55.111) is down, we need to promote one of the slaves (192.168.55.112) to the new master.
    To achieve this, go to ClusterControl > Nodes > choose 192.168.55.112 > Promote Slave. Once selected, click ‘Execute’; you will be prompted to confirm. Once the selected slave has become the new master, the other slaves will automatically fail over to the new master and continue to get updates while the old master is down. When the old master comes up again, it will take the role of a slave and synchronize with the new master (which is now handling application updates). This is orchestrated by ClusterControl. The following screenshot shows that the old master (192.168.55.111) has become a slave in the replication chain. Once the old master is up-to-date with the new master, it will remain a slave.

    When a new master is promoted, HAProxy will automatically send the writes to the newly promoted node, as shown in the HAProxy stats page below. This setup requires no changes on the application and load balancer tiers if the replication topology changes, making it easier to manage and less error prone. Happy read-write splitting!

    Blog category: MySQL Cluster. Tags: MySQL, MySQL Replication, load balancing, MaxScale, HAProxy, php-mysqlnd_ms, Wordpress, ClusterControl
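    Stepping back from the deployment details, the per-statement split that the driver performs can be sketched in a few lines. This is a deliberately simplified illustration, not the mysqlnd_ms implementation: by default the plugin sends statements that begin with SELECT to a slave and everything else to the master (SQL hints can override this), and the endpoints below mirror the virtual IP and ports from the JSON config:

    ```python
    # Minimal sketch of read-write splitting: SELECT statements go to the
    # read endpoint (HAProxy port 3308, master plus all healthy slaves);
    # everything else goes to the write endpoint (port 3307, master only).

    READ_ENDPOINT = ("192.168.55.100", 3308)
    WRITE_ENDPOINT = ("192.168.55.100", 3307)

    def route(sql):
        """Return the (host, port) a statement should be sent to."""
        first_word = sql.lstrip().split(None, 1)[0].upper()
        return READ_ENDPOINT if first_word == "SELECT" else WRITE_ENDPOINT

    print(route("SELECT option_value FROM wp_options"))  # read endpoint
    print(route("UPDATE wp_options SET autoload = 'no'"))  # write endpoint
    ```

    Because both endpoints are virtual IP/port pairs on HAProxy, topology changes (a promoted master, an added slave) never require touching this routing decision, which is the core argument of the post.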

  • MariaDB in the Running for 2015 LinuxQuestions.org Members Choice Database of the Year
    MariaDB won the 2013 and 2014 LinuxQuestions.org Members Choice Award for Database of the Year, and is in the running again for the 2015 title. Up until 2012, MySQL held a near-monopoly on the title, having won every year since at least 2001 (except for 2002, won by PostgreSQL). Since 2013, however, it’s been all MariaDB, […] The post MariaDB in the Running for 2015 LinuxQuestions.org Members Choice Database of the Year appeared first on MariaDB.org.

  • Fun with Bugs #41 - Bugs Fixed in MySQL 5.7.11
    I've just noted that Oracle formally released new versions of MySQL on February 5, 2016, so while these days I am mostly thinking about the ways to do support properly, remembering my colleagues and trying to understand some of RocksDB internals, it's time to postpone all of that and write about bugs again. This time it's about some of the public bug reports from MySQL Community and Oracle engineers that were fixed by Oracle in MySQL 5.7.11. As usual, I'll try to mention who reported a bug and who verified it, as I think that names matter in the MySQL world. I'll concentrate mostly on InnoDB, replication and optimizer bug reports, trying to highlight regressions clearly.

    As usual, I prefer to start with InnoDB bugs:
    - Bug #75112 - "Optimize table removes the data directory in partitions". This bug was reported by my former colleague Nilnandan Joshi and verified by Umesh. The fix should appear in MySQL 5.6.30 also.
    - Bug #79775 - "Innodb doesn't update index stats when dropping virtual column in virtual index". It was reported by Shaohua Wang and probably was verified and fixed by him. I am always happy to see Oracle engineers reporting MySQL bugs in public! Check also his Bug #79773, "handler::my_eval_gcolumn_expr doesn't set correct null bit", fixed in 5.7.11.
    - Bug #79691 - "Block Memcached from modifying tables that contain indexed virtual columns". One more case of public bug reporting by an Oracle engineer. This time it was Thirunarayanan Balathandayuthapani.
    - Bug #79516 - "Server crashes on invalid value for variable innodb_saved_page_number_debug". When I hit a third public report by an Oracle engineer, this time even about a crash, I started to think that there is a new generation of engineers there that just do the right things from the very beginning. Thank you, Naga Satyanarayana Bodapati, for reporting this bug in public!
    - Bug #79030 - "REORGANIZE PARTITION MOVES PARTITION TO TABLE-LEVEL TABLESPACE BY DEFAULT". When I see all caps in the bug synopsis I suspect it's from an Oracle engineer (and from Oracle's internal bugs database, originally). This bug was reported by Darshan M n.
    - Bug #79201 - "InnoDB: Too small buffer pool for innodb_page_size=64k". Marko Mäkelä has never hesitated to report bugs in public. Check also his Bug #79150, "InnoDB: Remove runtime checks for 32-bit file offsets". Thank you for noting and fixing these!
    - Bug #79434 - "Incorrect NUMA-specific code in buf_chunk_init()". It was reported by my former colleague Alexey Kopytov and formally verified by Sinisa Milivojevic. Check also his Bug #79354, "Incomplete NUMA support on buffer pool resizing".
    - Bug #78960 - "Unclear note "InnoDB: not started" in the error log". I reported this bug back in October 2015, based on the log from some customer issue. The message that was produced during loading of the buffer pool looked scary and misleading to the customer, so I reported the bug immediately. It was verified by Umesh and is also fixed in 5.8.0.
    - Bug #78955 - "Import Dump with InnoDB Fulltext Index and Constraint fails". This regression bug was reported by Nico Niepraschk and verified by Umesh.
    - Bug #78831 - "InnoDB looks for buffer pool file name in '/' if innodb_data_home_dir is empty". It was reported by someone named NOT_FOUND NOT_FOUND. I'd like to find out one day who that is, as I see other useful bug reports under the same name, all fixed.
    - Bug #78714 - "InnoDB: Failing assertion: !(&dict_sys->mutex)->is_owned() in dict0stats.cc 3049". It was reported by my former colleague Roel Van de Paar and verified by Umesh.
    - Bug #78647 - "POTENTIAL RUNNING OUT OF SPACE IN REDO - INVALID MARGIN CALCULATIONS". This bug was reported by Pawel Olchawa and probably verified by Marko Mäkelä. The fix was mostly to increase the minimum value of innodb_log_file_size from 1MB to 4MB.
    - Bug #78262 - "Adding indexes on empty table is slow with large innodb_sort_buffer_size". It was reported by Jean-François Gagné (who gave a great presentation at FOSDEM 2016, by the way) and verified by Umesh.
    - Bug #74832 - "ut_delay missing compiler barrier, UT_RELAX_CPU isn't at all relaxing". This bug was reported a long time ago by Stewart Smith, who also provided a patch. It was formally processed by Umesh.

    The only public bug in partitioning that is listed in the release notes as fixed is Bug #79145, by Debarun Banerjee, who is probably working for Oracle.

    Now, let's move on to replication (and XA transactions) bugs:
    - Bug #79416 - "InnoDB: Failing assertion: total_trx >= trx_sys->n_prepared_trx". This (debug builds only) bug was reported by Roel Van de Paar and verified by Umesh. See also Bug #78695, "Assertion `static_cast<Sql_cmd_xa_commit*>(thd->lex->m_sql_cmd)-> get_xa_opt() =".
    - Bug #77740 - "silent failure to start if mysql.gtids_executed gets HA_ERR_LOCK_WAIT_TIMEOUT". It was reported and verified by Shane Bester. The fix is to NOT allow XA transactions to modify the mysql.gtid_executed table.
    - Bug #77417 - "Applying create temporary table SQL on a slave with replicate-rewrite-db fails". This bug was reported by JAMES HETHERINGTON (who seems to be a customer of Oracle) and verified by Umesh. Good to see Oracle customers following my old advice and reporting bugs in public!
    - Bug #77393 - "Failure to generate GTID leads to inconsistency". It was reported by Davi Arnaut, who provided a patch, and verified by Umesh.
    - Bug #77354 - "multi drop temp table statements in UDF cause replication fails using 5.6 GTID". This bug was reported by Fungo Wang (check his other replication-related bug reports, still "Verified") and verified by Umesh (no wonder, he seems to verify up to 70% of all bugs recently...).
    - Bug #77288 - "Variables in alter event not replicated properly". It was reported by Trey Raymond and verified by Umesh.
    - Bug #77249 - "Drop table may cause slaves to break". This bug was reported by Santosh Praneeth Banda (who is a very active bug reporter; I see 7 more replication bugs) and verified by Umesh. It is also fixed in MySQL 5.6.29.
    - Bug #71859 - "Replication broken after creation of scheduled events". It was reported by Shahid Hussain and verified by Sveta Smirnova a long time ago. The bug is also fixed in MySQL 5.6.29.

    Let me conclude with several optimizer-related bugs:
    - Bug #79590 - "ERROR 1093 (HY000): You can't specify target table '.' for update in FROM clause". It was reported by Zhang Jinzhu and verified by Miguel Solorzano.
    - Bug #79549 - "Regression: “unknown column” for outer computed values used inside a subquery". This bug was reported by Chris Adams and verified by Miguel Solorzano.
    - Bug #79398 - "view using hexadecimal or bit literal gives wrong results". It was reported, verified and probably fixed by Guilhem Bichot.
    - Bug #79333 - "Unexpected error 1093 on nested subquery for update". This regression bug was reported by Danielle McLean and verified by Miguel Solorzano.
    - Bug #79294 - "Query with double nested subquery much slower in 5.7". This regression bug was reported by Jonathan Corwin and verified by Umesh. Nice to see it fixed.
    - Bug #79194 - "Wrong result on outer join with uncorrelated subquery, derived_merge". This regression bug was reported by Elena Stepanova and verified by Miguel Solorzano.
    - Bug #79131 - "Get negative FOUND_ROWS() for UNION stmt". It was reported by Su Dylan and verified by Umesh. Based on the comments, it's also a regression bug in 5.7 compared to 5.6, even though it is not tagged like that.
    - Bug #78946 - "WRONG RESULT FOR SELECT NULL IN (<SUBQUERY>)". One more bug report from an Oracle developer, Dag Wanvik. It's good that he found, reported and probably fixed this regression, but in general I am negatively surprised by the number of recent optimizer regression bugs in MySQL 5.7 that are found only after they are introduced into public releases.

    There are many more bug fixes, changes in behavior and new features in MySQL 5.7.11. So, read the release notes carefully and prepare to upgrade.

  • Finding and deleting orphaned rows in MySQL
    If you store data with a parent-child relationship in MySQL tables, then you may want to identify and/or delete orphaned rows from the child table. An orphaned row is defined as a child row that references a non-existent parent row. If you enforce the parent-child relationship with a foreign key constraint on the child table then orphans are less likely to occur, but you can still end up with them; for example, if a user disables foreign_key_checks in a session and deletes a parent row. If you choose not to create foreign keys in your database in order to avoid the performance overhead, then you are at higher risk of having orphaned rows.

    Finding orphaned rows

    Finding orphaned rows is fairly simple using either an exclusion join or a NOT EXISTS query. Using the sakila database, here are two queries to count rows in the payment table that have an invalid rental_id:

    ```
    -- count orphaned payments using exclusion join
    select count(*)
    from sakila.payment as child
    left outer join sakila.rental as parent on parent.rental_id = child.rental_id
    where child.rental_id is not null
    and parent.rental_id is null;

    -- count orphaned payments using NOT EXISTS
    select count(*)
    from sakila.payment as child
    where child.rental_id is not null
    and not exists (
      select NULL
      from sakila.rental as parent
      where parent.rental_id = child.rental_id
    );
    ```

    What about NULLs?

    The treatment of child rows with a NULL parent_id will vary depending on the requirements of your application, but in general I do not count NULL values as orphans. Thus in my queries above I explicitly check that child.rental_id is not NULL.

    What not to do

    I've also seen people write queries like the following to find orphaned rows. In general this will be slower, especially if you are running an older version of MySQL, so I do not recommend this method:

    ```
    -- count orphaned payments using NOT IN
    select count(*)
    from sakila.payment as child
    where rental_id not in (select rental_id from sakila.rental);
    ```

    Next Steps

    What should I do with orphaned rows once I find them? That depends on my specific application. The correct response could be one of the following: insert the missing parent row(s); update the child row(s) to reference an existing parent row; or delete the orphaned row(s) from the child table. For the purposes of this post, I will assume that I want to delete the orphaned row(s).

    Deleting orphaned rows

    It's fairly simple to take the queries I used to find orphaned rows and rewrite them as delete statements:

    ```
    -- delete orphaned payments using exclusion join
    delete child.*
    from sakila.payment as child
    left outer join sakila.rental as parent on parent.rental_id = child.rental_id
    where child.rental_id is not null
    and parent.rental_id is null;

    -- delete orphaned payments using NOT EXISTS
    delete child.*
    from sakila.payment as child
    where child.rental_id is not null
    and not exists (
      select NULL
      from sakila.rental as parent
      where parent.rental_id = child.rental_id
    );
    ```

    Those queries are sufficient for deleting a small number of rows. If I need to delete a large number of orphaned rows then I will break the delete statement into multiple chunks. Here's an example using common_schema:

    ```
    call common_schema.run("split(
      sakila.payment:
        delete sakila.payment.*
        from sakila.payment
        left outer join sakila.rental on rental.rental_id = payment.rental_id
        where payment.rental_id is not null
        and rental.rental_id is null
    ) pass;");
    ```

    Dynamically generated orphan queries

    If you use foreign keys to maintain referential integrity then it's possible to dynamically generate queries like the ones from earlier in this post by querying information_schema.
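    As a cross-check, the generation idea can also be sketched outside of SQL. In the simplified Python sketch below, a hard-coded dict stands in for one row of foreign-key metadata that a query against information_schema.key_column_usage would return; the function name and dict keys are my own, not part of the original post:

    ```python
    # Build an orphan-count query from foreign-key metadata. Each constraint
    # is described by child/parent table names and their (possibly
    # multi-column) key column lists.

    def orphan_count_sql(fk):
        on = " and ".join("parent.%s = child.%s" % (p, c)
                          for p, c in zip(fk["parent_cols"], fk["child_cols"]))
        not_null = " and ".join("child.%s is not null" % c
                                for c in fk["child_cols"])
        is_null = " and ".join("parent.%s is null" % p
                               for p in fk["parent_cols"])
        return ("select count(*) from %s as child "
                "left outer join %s as parent on %s "
                "where %s and %s;"
                % (fk["child"], fk["parent"], on, not_null, is_null))

    fk = {  # stand-in for one information_schema.key_column_usage result row
        "child": "sakila.payment", "parent": "sakila.rental",
        "child_cols": ["rental_id"], "parent_cols": ["rental_id"],
    }
    print(orphan_count_sql(fk))
    ```

    Joining the per-column fragments with " and " is what makes the same template work for multi-column foreign keys, just as the group_concat calls do in the SQL version.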
    For example, here's a query on information_schema to generate queries to count orphaned rows based on all 22 foreign keys in the sakila schema:

    ```
    select concat('select count(*) as "', table_schema,'.',table_name,
      ' -> ', referenced_table_schema,'.',referenced_table_name,
      ' (', constraint_name, ') orphan count" from ',
      table_schema,'.',table_name,' as child left outer join ',
      referenced_table_schema,'.',referenced_table_name,' as parent on ',
      group_concat(concat('parent.',referenced_column_name,' = child.',column_name) separator ' and '),
      ' where ',
      group_concat(concat('child.',column_name,' is not null') separator ' and '),
      ' and ',
      group_concat(concat('parent.',referenced_column_name,' is null') separator ' and '),
      ';') as sql_stmt
    from information_schema.key_column_usage
    where referenced_table_name is not null
    and table_schema = 'sakila'
    group by table_schema,table_name,referenced_table_schema,referenced_table_name,constraint_name
    order by table_schema,table_name,referenced_table_schema,referenced_table_name,constraint_name;
    ```

    I wrote the query in such a way that it should work with multiple-column foreign keys, even though the sakila schema doesn't have any such foreign keys. Using dynamic queries makes it easier to script the process of counting orphans, and potentially even deleting them.

    Testing it out

    The sakila database doesn't have any orphaned payment rows by default (unless you count the 5 rows with a NULL rental_id), so I'm going to create some in order to test out my queries.
    First I delete 1,000 random rows from the rental table with foreign_key_checks disabled in order to orphan 1,000 payment rows:

    ```
    mysql> set foreign_key_checks = 0;
    Query OK, 0 rows affected (0.00 sec)

    mysql> delete from sakila.rental
        -> order by rand()
        -> limit 1000;
    Query OK, 1000 rows affected (0.05 sec)
    ```

    Now I count the orphaned payment rows using the dynamically generated orphan counter from my previous example:

    ```
    mysql> select count(*) as "sakila.payment -> sakila.rental (fk_payment_rental) orphan count"
        -> from sakila.payment as child
        -> left outer join sakila.rental as parent on parent.rental_id = child.rental_id
        -> where child.rental_id is not null
        -> and parent.rental_id is null;
    +------------------------------------------------------------------+
    | sakila.payment -> sakila.rental (fk_payment_rental) orphan count |
    +------------------------------------------------------------------+
    |                                                             1000 |
    +------------------------------------------------------------------+
    1 row in set (0.01 sec)
    ```

    And I delete the 1,000 orphaned rows using one of the delete statements from my previous examples:

    ```
    mysql> -- delete orphaned payments using exclusion join
    mysql> delete child.*
        -> from sakila.payment as child
        -> left outer join sakila.rental as parent on parent.rental_id = child.rental_id
        -> where child.rental_id is not null
        -> and parent.rental_id is null;
    Query OK, 1000 rows affected (0.09 sec)
    ```

    Re-running the orphan counter shows that there are now 0 orphaned payment rows:

    ```
    mysql> select count(*) as "sakila.payment -> sakila.rental (fk_payment_rental) orphan count"
        -> from sakila.payment as child
        -> left outer join sakila.rental as parent on parent.rental_id = child.rental_id
        -> where child.rental_id is not null
        -> and parent.rental_id is null;
    +------------------------------------------------------------------+
    | sakila.payment -> sakila.rental (fk_payment_rental) orphan count |
    +------------------------------------------------------------------+
    |                                                                0 |
    +------------------------------------------------------------------+
    1 row in set (0.02 sec)
    ```
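    The whole procedure, including a chunked delete in the spirit of the common_schema example, can be reproduced end-to-end on a toy dataset. The sketch below uses Python's sqlite3 module in place of MySQL (sqlite has no sakila, so the two tables are minimal stand-ins for rental and payment, and the batch size is deliberately tiny):

    ```python
    import sqlite3

    # Minimal parent/child schema with some orphaned child rows.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE rental (rental_id INTEGER PRIMARY KEY);
        CREATE TABLE payment (payment_id INTEGER PRIMARY KEY,
                              rental_id INTEGER);
    """)
    conn.executemany("INSERT INTO rental VALUES (?)",
                     [(i,) for i in range(1, 4)])
    # Payments 1-3 reference valid rentals; 4-9 reference missing ones.
    conn.executemany("INSERT INTO payment VALUES (?, ?)",
                     [(i, i) for i in range(1, 10)])

    BATCH_SIZE = 2  # small on purpose, so the loop runs several times

    while True:
        # Fetch one batch of orphaned child primary keys (exclusion join).
        rows = conn.execute("""
            SELECT child.payment_id
            FROM payment AS child
            LEFT OUTER JOIN rental AS parent
                   ON parent.rental_id = child.rental_id
            WHERE child.rental_id IS NOT NULL
              AND parent.rental_id IS NULL
            LIMIT ?""", (BATCH_SIZE,)).fetchall()
        if not rows:
            break
        conn.executemany("DELETE FROM payment WHERE payment_id = ?", rows)
        conn.commit()  # committing per chunk keeps transactions small

    remaining = conn.execute("SELECT COUNT(*) FROM payment").fetchone()[0]
    print(remaining)  # 3: only the valid payments survive
    ```

    Deleting by batches of primary keys rather than in one statement is the same trade-off common_schema's split() makes: each chunk holds locks briefly and commits quickly, at the cost of re-running the orphan lookup per batch.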