Planet MySQL - http://www.planetmysql.org/

  • New version of libAttachSQL, C connector for MySQL released!
    It has been just over 2 weeks since the last libAttachSQL version was released. I had a great vacation in the middle, which for once meant that I didn't do any work for the week I was away :)
    For those who don't know about it, libAttachSQL is a lightweight, non-blocking C connector for MySQL servers. It is Apache 2.0 licensed, so it plays well with both open source and commercially licensed applications. I have been developing it for 2 months now as part of my work for HP's Advanced Technology Group. It is hosted on GitHub and uses many freely available tools (such as Travis CI) to host and test various parts of the project.
    Once again I thank everyone for the feedback I have received. You all make it even more awesome to be working on this :)
    So, on to the new version 0.3.0 alpha release. This time round we have been focusing on zlib compression and SSL support. Both of these features have been added and neither impacts the non-blocking aspect of the library. The SSL part in particular was quite new to me; I've coded SSL into applications many times in the past, but I've never done it in a non-blocking way before. It posed some interesting challenges, but it was fun and appears to be working great now.
    The biggest changes in this release are:
      • Fixes to the test cases and improvements to the CI used
      • Documentation improvements
      • Many minor bug fixes
      • Protocol compression (zlib) support
      • SSL encryption (OpenSSL) support
      • 32-bit compiling works
    For more information see the Version History section of the docs.
    On to the next release, which should complete the biggest pre-release features. From there we can head towards our first GA release.
    If you have any questions, feedback, etc., please feel free to leave comments, email me or open a GitHub issue.

  • MySQL for Database Administrators Training Available in Eight Languages
    The MySQL for Database Administrators training course is one of the most popular courses at Oracle. This is not surprising since it provides a comprehensive range of administration skills to those using the world's most popular open source database.
    You can travel to an education center to take this 5-day instructor-led course, and as shown in the table below, events are currently scheduled for delivery in 8 languages (Arabic, Brazilian Portuguese, English, French, Hebrew, Italian, Spanish, Turkish). If you want to take this training from your own desk, you have two options:
      • Training-on-Demand: Start training within 24 hours of registration, following training at your own pace.
      • Live-Virtual Event: Follow a live event from your own desk, choosing from a selection of events on the schedule to suit different timezones.
    For those who want to travel to an education center, below is a selection of the events already on the schedule:

        Location                         Date               Delivery Language
        Brisbane, Australia              13 October 2014    English
        Brussels, Belgium                3 November 2014    English
        Sao Paulo, Brazil                20 October 2014    Brazilian Portuguese
        Cairo, Egypt                     28 September 2014  Arabic
        London, England                  22 September 2014  English
        Belfast, Ireland                 12 January 2015    English
        Dublin, Ireland                  5 January 2015     English
        Tel Aviv, Israel                 16 November 2014   Hebrew
        Milan, Italy                     22 September 2014  Italian
        Rome, Italy                      15 September 2014  Italian
        Nairobi, Kenya                   22 September 2014  English
        Petaling Jaya, Malaysia          29 September 2014  English
        San Pedro Garza Garcia, Mexico   29 September 2014  Spanish
        Pasig City, Philippines          1 September 2014   English
        Utrecht, Netherlands             3 November 2014    English
        Singapore                        27 October 2014    English
        Edinburgh, Scotland              29 September 2014  English
        Madrid, Spain                    6 October 2014     Spanish
        Tunis, Tunisia                   5 November 2014    French
        Istanbul, Turkey                 27 October 2014    Turkish

    The MySQL for Database Administrators course is the recommended training for those preparing to take the MySQL 5.6 Database Administrator OCP certification exam. To register for an event or for more information on the authentic MySQL curriculum, go to http://education.oracle.com/mysql.

  • Galera replication – how to recover a PXC cluster
    Galera replication for MySQL brings not only great new features to our ecosystem, but also introduces completely new maintenance techniques. Are you concerned about adding such new complexity to your MySQL environment? Perhaps that concern is unnecessary. I am going to present here some simple tips that will hopefully help new Galera users avoid headaches when they need to recover part of a cluster, or a whole cluster, in certain situations. I used Percona XtraDB Cluster (a project based on Percona Server and the Galera library + MySQL extensions from Codership) to prepare this post, but most if not all of the scenarios should also apply to any solution based on the MySQL+Galera tandem, whether that is binaries from Codership, MariaDB Galera Cluster, or your own builds.
    Unlike standard MySQL replication, a PXC cluster acts like one logical entity, which takes care of each node's status and consistency as well as the status of the cluster as a whole. This allows it to maintain much better data integrity than you may expect from traditional asynchronous replication, while allowing safe writes on multiple nodes at the same time. The price, though, is that there are more possible scenarios where the database service will be stopped with no node able to serve requests.
    Let's assume the simplest case: a cluster of nodes A, B and C, and a few possible scenarios where some or all of the nodes are out of service. What may happen, and what do we have to do to bring them (or the whole cluster) back up?

    Scenario 1
    Node A is gracefully stopped, likely for the purpose of maintenance, a configuration change, etc. In this case the other nodes receive a "good bye" message from that node, hence the cluster size is reduced and some properties like quorum calculation or auto increment are automatically changed. Once we start node A again, it will join the cluster based on its wsrep_cluster_address setting in my.cnf. This process is much different from normal replication: the joiner node won't serve any requests until it is again fully synchronized with the cluster, so connecting to its peers isn't enough, the state transfer must succeed first. If the writeset cache (gcache.size) on nodes B and/or C still has all the transactions that were executed during the time this node was down, joining will be possible via the (usually fast and light) IST. Otherwise a full SST will be needed, which in fact is a full binary data snapshot copy. Hence it may be important here to determine the best donor, as shown in this article. If IST is impossible due to missing transactions in the donor's gcache, the fallback decision is made by the donor and SST is started automatically instead.

    Scenario 2
    Nodes A and B are gracefully stopped. Similar to the previous case, the cluster size is reduced to 1, hence even the single remaining node C forms the primary component and serves client requests. To get the nodes back into the cluster, you just need to start them. However, node C will be switched to the "Donor/Desynced" state as it will have to provide the state transfer to at least the first joining node. It is still possible to read/write to it during that process, but it may be much slower, depending on how large a state transfer it needs to send. Also, some load balancers may consider the donor node as not operational and remove it from the pool. So it is best to avoid the situation when only one node is up.
    Note though, if you restart A and then B in that order, you may want to make sure B won't use A as the state transfer donor, as A may not have all the needed writesets in its gcache. So just specify node C as the donor this way ("nodeC" is the name you specify with the wsrep_node_name variable):

        service mysql start --wsrep_sst_donor=nodeC
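    While nodes are joining and one of them is acting as a donor, it can help to watch how each node sees itself and the cluster. The status variables below are standard Galera/PXC status counters; running them through the mysql client as shown is just a convenient sketch, and the values will of course differ per node:

        # How does this node currently see the cluster? (run on any node)
        # wsrep_local_state_comment: Synced, Donor/Desynced, Joining, Joined...
        # wsrep_cluster_size: number of nodes in the component this node belongs to
        mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"
        mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'"

    A donor serving an SST will typically report Donor/Desynced here until the transfer finishes, which matches the behaviour described in Scenario 2 above.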
    Scenario 3
    All three nodes are gracefully stopped and the cluster is effectively dissolved. In this case, the problem is how to initialize it again. Here it is important to know that during a clean shutdown, a PXC node writes its last executed position into the grastate.dat file. By comparing the seqno number inside, you can see which node is the most advanced one (most likely the last one stopped); a small helper sketch for comparing these positions across nodes follows after Scenario 4. The cluster must be bootstrapped using this node, otherwise nodes that had a more advanced position will have to perform a full SST to join the cluster initialized from the less advanced one (and some transactions will be lost). To bootstrap the first node, invoke the startup script in one of these ways:

        /etc/init.d/mysql bootstrap-pxc
        service mysql bootstrap-pxc
        service mysql start --wsrep_new_cluster
        service mysql start --wsrep-cluster-address="gcomm://"

    or, in packages using the systemd service manager (CentOS 7 at the moment):

        systemctl start mysql@bootstrap.service

    In older PXC versions, to bootstrap the cluster you had to edit my.cnf and replace the previous wsrep_cluster_address line with an empty value, like this: wsrep_cluster_address=gcomm://, and then start mysql normally. More details can be found here.

    Scenario 4
    Node A disappears from the cluster. By disappear I mean power outage, hardware failure, kernel panic, mysqld crash, kill -9 on the mysqld pid, OOM killer, etc. The two remaining nodes notice that the connection to node A is down and will be trying to re-connect to it. After some timeouts, both agree that node A is really down and remove it "officially" from the cluster. Quorum is preserved (2 out of 3 nodes are up), so no service disruption happens. After restarting, A will join automatically the same way as in Scenario 1.
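    As promised under Scenario 3, here is a minimal sketch for picking the bootstrap node after a clean full-cluster shutdown. It assumes the three hosts are reachable over SSH under the placeholder names nodeA, nodeB and nodeC and that the datadir is /var/lib/mysql; it simply prints each node's saved position so you can bootstrap the one with the highest seqno:

        # Compare the saved positions; bootstrap the node with the highest seqno.
        for host in nodeA nodeB nodeC; do
          echo "== $host =="
          ssh "$host" "grep -E 'uuid|seqno' /var/lib/mysql/grastate.dat"
        done

    If any node reports seqno: -1, it did not shut down cleanly and you are really in Scenario 6 territory, where mysqld_safe --wsrep-recover (shown later in the post) is the way to obtain the position.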
    Scenario 5
    Nodes A and B disappear. Node C is not able to form a quorum alone, so the cluster switches into non-primary mode, in which MySQL refuses to serve any SQL query. In this state the mysqld process on C is still running and you can connect to it, but any statement related to data fails with:

        mysql> select * from test.t1;
        ERROR 1047 (08S01): Unknown command

    Actually, reads will be possible for a moment, until C decides that it cannot reach A and B, but immediately afterwards no new writes will be allowed, thanks to the certification-based replication in Galera. This is what we are going to see in the remaining node's log:

        140814  0:42:13 [Note] WSREP: commit failed for reason: 3
        140814  0:42:13 [Note] WSREP: conflict state: 0
        140814  0:42:13 [Note] WSREP: cluster conflict due to certification failure for threads:
        140814  0:42:13 [Note] WSREP: Victim thread:
        THD: 7, mode: local, state: executing, conflict: cert failure, seqno: -1
        SQL: insert into t values (1)

    The single node C is then waiting for its peers to show up again, and in some cases, if that happens, such as when there was a network outage and those nodes were up all the time, the cluster will be formed again automatically. Also, if nodes B and C were just network-severed from the first node but can still reach each other, they will keep functioning, as they still form the quorum. If A and B crashed (due to data inconsistency, a bug, etc.) or are off due to a power outage, you need to take manual action to enable the primary component on node C before you can bring A and B back. This way we tell node C: "Hey, you can now form a new cluster alone, forget A and B!" The command to do this is:

        SET GLOBAL wsrep_provider_options='pc.bootstrap=true';

    However, you should double check in order to be very sure the other nodes are really down before doing that! Otherwise, you will most likely end up with two clusters having different data.
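    Before forcing pc.bootstrap as described above, it is worth confirming from node C itself that it really is alone in a non-primary component and that the other hosts are genuinely unreachable. A rough pre-flight check, with nodeA and nodeB as placeholder host names (ping is only a coarse reachability test, not proof that mysqld is down), might look like this:

        # On node C: confirm the component state before bootstrapping it manually.
        mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status'"   # expect: non-Primary
        mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'"     # expect: 1

        # Check that the other hosts are really gone before proceeding.
        for host in nodeA nodeB; do
          ping -c 2 "$host" || echo "$host unreachable"
        done

        # Only then:
        mysql -e "SET GLOBAL wsrep_provider_options='pc.bootstrap=true'"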
    Scenario 6
    All nodes went down without the proper shutdown procedure. Such a situation may happen in the case of a datacenter power failure, after hitting some MySQL or Galera bug that leads to a crash on all nodes, but also as a result of data consistency being compromised, where the cluster detects that each node has different data. In each of those cases, the grastate.dat file is not updated and does not contain a valid sequence number (seqno). It may look like this:

        cat /var/lib/mysql/grastate.dat
        # GALERA saved state
        version: 2.1
        uuid:    220dcdcb-1629-11e4-add3-aec059ad3734
        seqno:   -1
        cert_index:

    In this case, we are not sure if all nodes were consistent with each other, hence it is crucial to find the most advanced one in order to bootstrap the cluster using it. Before starting the mysql daemon on any node, you have to extract the last sequence number by checking its transactional state. You can do it this way:

        [root@percona3 ~]# mysqld_safe --wsrep-recover
        140821 15:57:15 mysqld_safe Logging to '/var/lib/mysql/percona3_error.log'.
        140821 15:57:15 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        140821 15:57:15 mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.6bUIqM' --pid-file='/var/lib/mysql/percona3-recover.pid'
        140821 15:57:17 mysqld_safe WSREP: Recovered position 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a:2
        140821 15:57:19 mysqld_safe mysqld from pid file /var/lib/mysql/percona3.pid ended

    So the last committed transaction sequence number on this node was 2. Now you just need to bootstrap from the latest node first and then start the others.
    However, the above procedure won't be needed in recent Galera versions (3.6+?), available since PXC 5.6.19. There is a new option, pc.recovery (enabled by default), which saves the cluster state into a file named gvwstate.dat on each member node. As the variable name says (pc stands for primary component), it saves only a cluster being in the PRIMARY state. An example of that file's content may look like this:

        cat /var/lib/mysql/gvwstate.dat
        my_uuid: 76de8ad9-2aac-11e4-8089-d27fd06893b9
        #vwbeg
        view_id: 3 6c821ecc-2aac-11e4-85a5-56fe513c651f 3
        bootstrap: 0
        member: 6c821ecc-2aac-11e4-85a5-56fe513c651f 0
        member: 6d80ec1b-2aac-11e4-8d1e-b2b2f6caf018 0
        member: 76de8ad9-2aac-11e4-8089-d27fd06893b9 0
        #vwend

    We can see a three-node cluster above with all members being up. Thanks to this new feature, in the case of a power outage in our datacenter, after power is back the nodes will read the last state on startup and will try to restore the primary component once all the members again start to see each other. This makes the PXC cluster recover automatically from being powered down, without any manual intervention! In the logs we will see:

        140823 15:28:55 [Note] WSREP: restore pc from disk successfully
        (...)
        140823 15:29:59 [Note] WSREP: declaring 6c821ecc at tcp://192.168.90.3:4567 stable
        140823 15:29:59 [Note] WSREP: declaring 6d80ec1b at tcp://192.168.90.4:4567 stable
        140823 15:29:59 [Warning] WSREP: no nodes coming from prim view, prim not possible
        140823 15:29:59 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 2, memb_num = 3
        140823 15:29:59 [Note] WSREP: Flow-control interval: [28, 28]
        140823 15:29:59 [Note] WSREP: Received NON-PRIMARY.
        140823 15:29:59 [Note] WSREP: New cluster view: global state: 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a:11, view# -1: non-Primary, number of nodes: 3, my index: 2, protocol version -1
        140823 15:29:59 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
        140823 15:29:59 [Note] WSREP: promote to primary component
        140823 15:29:59 [Note] WSREP: save pc into disk
        140823 15:29:59 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = yes, my_idx = 2, memb_num = 3
        140823 15:29:59 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
        140823 15:29:59 [Note] WSREP: clear restored view
        (...)
        140823 15:29:59 [Note] WSREP: Bootstrapped primary 00000000-0000-0000-0000-000000000000 found: 3.
        140823 15:29:59 [Note] WSREP: Quorum results: version = 3, component = PRIMARY, conf_id = -1, members = 3/3 (joined/total), act_id = 11, last_appl. = -1, protocols = 0/6/2 (gcs/repl/appl), group UUID = 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a
        140823 15:29:59 [Note] WSREP: Flow-control interval: [28, 28]
        140823 15:29:59 [Note] WSREP: Restored state OPEN -> JOINED (11)
        140823 15:29:59 [Note] WSREP: New cluster view: global state: 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a:11, view# 0: Primary, number of nodes: 3, my index: 2, protocol version 2
        140823 15:29:59 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
        140823 15:29:59 [Note] WSREP: REPL Protocols: 6 (3, 2)
        140823 15:29:59 [Note] WSREP: Service thread queue flushed.
        140823 15:29:59 [Note] WSREP: Assign initial position for certification: 11, protocol version: 3
        140823 15:29:59 [Note] WSREP: Service thread queue flushed.
        140823 15:29:59 [Note] WSREP: Member 1.0 (percona3) synced with group.
        140823 15:29:59 [Note] WSREP: Member 2.0 (percona1) synced with group.
        140823 15:29:59 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 11)
        140823 15:29:59 [Note] WSREP: Synchronized with group, ready for connections

    Scenario 7
    The cluster lost its primary state due to a split brain situation. For the purpose of this example, let's assume we have a cluster formed from an even number of nodes: six, where three of them are in one location, the other three are in a second location (datacenter), and network connectivity is broken between them. Of course the best practice is to avoid such a topology: if you can't have an odd number of real nodes, you can at least use an additional arbitrator (garbd) node or set a higher pc.weight on some nodes (a short garbd sketch follows below). But when a split brain happens anyway, and none of the separated groups can maintain the quorum, all nodes must stop serving requests and both parts of the cluster just keep trying to re-connect. If you want to restore the service even before the network link is restored, you can make one of the groups primary again using the same command as in Scenario 5:

        SET GLOBAL wsrep_provider_options='pc.bootstrap=true';

    After that, you are able to work on the manually restored part of the cluster, and the second half should be able to automatically re-join using incremental state transfer (IST) once the network link is restored. But beware: if you set the bootstrap option on both of the separated parts, you will end up with two living cluster instances, with data likely diverging away from each other. Restoring the network link in that case won't make them re-join until the nodes are restarted and try to re-connect to the members specified in the configuration file. Then, as the Galera replication model truly cares about data consistency, once the inconsistency is detected, nodes that cannot execute a row change statement due to different data will perform an emergency shutdown, and the only way to bring them back to the cluster will be via a full SST.
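    Scenario 7 mentions garbd, the Galera Arbitrator, as a way to avoid an even split in the first place. As a rough sketch, running it from a third location could look like the following; the cluster name, node addresses and port are placeholders, and it is worth checking the option names against garbd --help for the Galera version you actually run:

        # Galera Arbitrator: votes in quorum calculations but stores no data.
        # Run it in a third location so that neither datacenter is isolated
        # together with the arbitrator.
        garbd --group=my_pxc_cluster \
              --address="gcomm://192.168.90.2:4567,192.168.90.3:4567" \
              --daemon

    With seven voting members (six data nodes plus the arbitrator), losing one datacenter still leaves a majority on the other side, so the surviving half keeps its primary component without any manual pc.bootstrap step.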
    I hope I have covered most of the possible failure scenarios of Galera-based clusters, and made the recovery procedures a bit clearer.
    The post Galera replication – how to recover a PXC cluster appeared first on MySQL Performance Blog.

  • Moving On
    I have the difficult task of making this post interesting, helpful and personal at the same time. I think the main goal is to balance these aspects, and I really appreciate your comments and suggestions, which I will add here.
    For the busy readers who may be put off by the length of this post, here is a very short summary: I spent 4 wonderful years at SkySQL, first as the head of Field Services, then as CTO, and I believe it is now time for a change, so I am leaving SkySQL. I am leaving behind a great company and very good friends, but I am not disappearing completely, and I will continue supporting the work I started and the projects I created with the help of such great people.
    For many, leaving a company is not easy, and it is extremely difficult if you have contributed to its creation and development since the beginning. Even more difficult is to depart from ideas and projects that you have shaped and designed together with the people who have contributed to them, and I am sure they will continue to work on these projects with great success.

    The reasons for SkySQL
    In the past 4 years, I have been asked many times why we created SkySQL and how SkySQL is different from other providers, such as Percona and Oracle. Since the beginning, the first and most important objective for SkySQL was to provide the best products and services around MySQL. In order to achieve this objective, we created a network of partners and worked closely with them to support our customers in the best possible way. The value added by SkySQL to the offering was a strong team of consultants and architects who could suggest and implement MySQL solutions, and a stellar Technical Support team who could provide the best possible answers to a large variety of technical and consultative issues that customers might encounter.
    Having many options to choose from was certainly good, but it introduced another issue: not all the products could work well together. Customers demanded solutions from a single vendor that could go beyond the "typical" MySQL database + backup + monitor: they wanted a set of products that was tested and guaranteed to work together. This was the motivation for the first big effort at SkySQL in terms of products and tools, when we defined the SkySQL Reference Architecture.
    The Reference Architecture was the result of many hours spent in meetings and in solitary thinking in my home office during the Christmas holidays in 2010, when the business slowed down and I could dedicate more CPU cycles to the subject. We worked on the project for 4 months and we launched the Architecture as a concept at the MySQL/Percona Conference in 2011. We demonstrated the SkySQL Reference Architecture with a tool that users could access online, in order to automatically generate and activate a fully functional cluster of MySQL replicated servers with MONyog, MySQL Replication and cluster software with a resource agent in AWS. Severalnines had a similar approach for MySQL Cluster and later for MySQL Replication and Galera, but at that time only SkySQL had the full automation and a selection of different engines and uses, from the configuration to the MySQL prompt. Later, Percona introduced a web tool that could provide an optimised configuration file.
    The evolution of the Reference Architecture was the SkySQL Data Suite (SDS). The concept was similar, but the main difference was that for the first time we added SkySQL intellectual property to MySQL.
    The suite was packaged with an administration tool that was designed and built by SkySQL. The first target was the cloud, specifically AWS and OpenStack. The initial idea was to have SDS seamlessly deployed on a bare OS, on clouds or in hybrid environments. All the tools have been designed with programmable and user interfaces, in order to satisfy different customers' needs. An independent presentation of SDS is available here.
    In 2013, the company merged with Monty Program, and we suddenly found ourselves in a position where software development was a fundamental part of our offering. We moved the focus of the Data Suite to MariaDB and rebranded it as MariaDB Enterprise, but more importantly, we combined the value and the skills of our services team with the core team of the original development of MySQL. The merger resulted in a company with all the credentials needed to excel and innovate in the MySQL world. But the key question at this point was: is this enough to make MySQL even more successful? Is a better MariaDB (or indeed MySQL) the right answer to the data management needs of the 2010s and beyond?

    The evolution of MySQL and MariaDB
    The answer to the previous questions is, not surprisingly, a "no". Indeed, users need a better MySQL (or MariaDB). Traditionally, they demanded more performance, more availability and more scalability, and many players have contributed in their own way to the cause.
    Still, there is something missing. The competition from NoSQL solutions is, to say the least, intense. It is probably true that MySQL adoption is not declining (as some analysts say), but the adoption of NoSQL is way bigger in absolute terms. And more importantly, the majority of the new initiatives and startups that once were the lymph that flowed in the MySQL Community have now moved to NoSQL.
    From a purely technical (and generic) perspective, when MySQL and NoSQL are tested and measured in a fair way, MySQL can in many cases provide better performance and robustness. Scalability, on the other hand, is a big issue, as it has always been: it was an issue for bigger servers in the past, and it is an issue for distributed systems now. The search for better scalability is the primary reason why we created MaxScale.
    You may have read a lot about MaxScale, or you may want to read more here and here. In simple terms, MaxScale is a highly scalable, lightweight proxy system aimed at distributing and scaling parts of a database server that do not need to reside in its core. There is a similarity to this approach in the NoSQL world and certainly in many home-made solutions. The mongos / mongod binomial is a good example of what MaxScale can achieve with MySQL, but this is only half of the story. MaxScale is generic in nature; what makes it a relevant component of the IT infrastructure are its plugins. By loading different plugins you can make MaxScale a proxy for multiple client protocols, or a proxy for geographically replicated servers, or have it integrate different replication technologies, and so on.
    I believe that we need MaxScale for MySQL and MariaDB. Incidentally, Max is the name of Monty's son, so we have covered all his heirs (at least so far). In designing MaxScale, I wanted to provide a link between a technology that was good for the servers available in the 90s and today's infrastructures.

    A difficult choice?
    One might ask, if I feel so strongly about MaxScale and its fundamental role, why am I leaving it behind? The fact is, I am not.
    The project is in good hands, thanks to the great work and dedication of Mark Riddoch, Massimiliano Pinto and Vilho Raatikka. The concept, the ideas and the architecture are here to stay. MaxScale is shaped today as we, Mark Riddoch, Massimo Brignoli and I, wanted it, as we designed it during long hours of work and passionate discussions.
    When your kids grow up and are ready to walk alone in the world, you need to let them go. MaxScale can now walk with MySQL and MariaDB, and SkySQL will take good care of their path together. So now I may move on, and I have time to raise other kids.

    A look at the future
    As for me, I am technically embracing a wider range of technologies. I will not be focused only on MySQL, but rest assured that these 10 years will always remain in my heart. I will work on IT infrastructures and systems where databases play the central and most important role, but I will look at the customers' needs as a whole. I will carry on my duty for the MySQL User Group in London, which has now reached the reasonable size of 40-50 attendees per session, every other month. I will not move my MySQL blog, so any MySQL-related post will be available on izoratti.blogspot.com and it will be aggregated on PlanetMySQL. But I will have a collection of various topics in my personal blog, www.ivanzoratti.com. I will cover more databases, HPC, OpenStack and OSs. I will also have a section dedicated to an important aspect of my life, which is the study of Kung Fu in its inner and outer styles. I started learning Kung Fu almost 30 years ago, first for 12 years, then I abandoned it for another 12 years, until I realised the importance of this practice in my life. I have to thank some of my best friends for that; they really helped me a lot in good and bad times.
    So, even if I will not wear a T-shirt with a seal (or a sea lion, as it is more fashionable these days), you will probably see me around at conferences and exhibitions. Or perhaps you will not see me, but I will work, as I have done in these 10 years, behind the scenes to make MySQL the good and strong database that can help in creating the next Facebook or the next Twitter of this world.
    All the best to all of you.

  • How to Manage All Your MySQL or MariaDB Databases
    September 1, 2014, by Severalnines
    According to Forrester, a DBA in a large enterprise manages between 8 and 275 databases, with the industry average being 40 databases per DBA. Larger databases usually require extra effort around tuning, backup, recovery and upgrades. The cloud, as well as automation and management tools, can help increase the number of databases managed by one DBA.
    With that background, we were pretty excited to introduce support for the management of single-instance MySQL and MariaDB databases in ClusterControl 1.2.6. The majority, if not all, of the cluster users out there use single-instance or master-slave replicated setups alongside their mission-critical clusters. So today, it is possible to manage an entire MySQL/MariaDB environment from one interface.
    Ok, so what about Nagios or Zabbix? Unless Nagios is deploying your instances, scaling them, recovering them if they fail, taking backups, upgrading them to newer versions, or telling you about slow running queries and suboptimal configuration parameters, we'd argue that you probably ought to look into a management tool. In today's blog post, we'll show you how you set that up. Our setup consists of MySQL servers running different versions (5.1, 5.5 and 5.6) on different hosts.

    Installing ClusterControl
    Deploy ClusterControl on a dedicated host (192.168.197.100) using these simple steps:

        $ wget http://www.severalnines.com/downloads/cmon/install-cc.sh
        $ chmod u+x install-cc.sh
        $ sudo ./install-cc.sh

    This will automate the installation of ClusterControl, but you can read more about this in the ClusterControl Quick Start Guide. Once the installation is complete, log in to the ClusterControl UI at http://192.168.197.100/clustercontrol using your email address (the one you entered during the installation process) and the default password 'admin'. You should see something like below:

    Preparing the Database Hosts
    You do not need to install any agent on the database hosts, but ClusterControl needs to be able to SSH into them and connect to the database instance to gather statistics (a rough sketch of this preparation follows below).
    read more
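    The post is truncated here ("read more"), but for the host-preparation step it describes, passwordless SSH from the ClusterControl server plus a database account it can use to gather statistics, a rough sketch could look like the following. The host addresses, user name, password and privilege list are placeholders and examples only; the exact requirements ClusterControl expects are documented in the Severalnines Quick Start Guide rather than here.

        # On the ClusterControl host (192.168.197.100): set up key-based SSH
        # access to each managed database host (addresses are placeholders).
        ssh-keygen -t rsa            # accept defaults if no key exists yet
        for host in 192.168.197.101 192.168.197.102 192.168.197.103; do
          ssh-copy-id root@"$host"
        done

        # On each database host: create an account ClusterControl can use to
        # connect and read status and statistics (name and password are examples;
        # management operations may require broader privileges, per the docs).
        mysql -u root -p -e "
          CREATE USER 'cmon'@'192.168.197.100' IDENTIFIED BY 'cmonpass';
          GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'cmon'@'192.168.197.100';
          FLUSH PRIVILEGES;"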