Newsfeeds
Planet MySQL - http://www.planetmysql.org/

  • 5 Database Insights Easy to See with VividCortex SaaS Monitoring
    There are manifold ways to collect, visualize, and analyze data, but not all methods are equally useful. VividCortex, however, is singular as a database-centric SaaS monitoring platform, designed to provide insights into your system that are both actionable and unique. Within minutes of first booting up VividCortex, users frequently discover new aspects of their system and understand it in brand new ways, just by viewing the app's basic dashboards and metrics. But that's just the start. Beyond those initial revelations, there are many more powerful insights VividCortex can provide, if you know how and where to look. These views aren't entirely automatic, but they're simple to discover with a few tips. Here are 5 insights easy to see with VividCortex (hedged sketches of roughly equivalent server-side queries follow this item).
    1. Find which queries affect the most rows. Understanding which queries affect the highest number of rows in your system is a useful way to gauge the amount of change occurring in your dataset. "Affected rows" refers to any row changed by an UPDATE, INSERT, or DELETE, based on the OK packet or performance_schema data. To view queries organized by affected rows, head to the Profiler and rank "Queries" by "Affected Rows". The Profiler then gives immediate, legible insight into which queries are causing the widest range of change.
    2. Find the largest groups of similar queries. Seeing the largest group of similar queries gives you a window into application behavior, which in turn can inform sharding decisions and other growth strategies. No small thing. Alternatively, examining query verbs can very quickly show you the read-to-write ratio of a workload, which can be leveraged at further decision points. In the Profiler, rank queries by "Count" to see total query counts grouped by similarity and organized by quantity, or rank "Query Verbs" the same way to get counts by command type. In both cases, you see which queries execute most frequently in your system.
    3. Find memory allocation stalls. As explained by the kernel documentation, a memory allocation stall occurs when a process stalls to run memory compaction so that a sizable page is free for use. VividCortex shows the number of times this happens in a given timeframe, allowing for further investigation. In the Metrics dashboard, enter "os.mem.compact_stalls" as the metric text.
    4. Find IO wait. IO wait, the time the CPU spends waiting for IO to complete, can cause stalls for page requests that memory buffers are unable to fulfill and during background page flushing, all of which can have widespread impact on database performance and stability. In the Metrics dashboard, use "os.cpu.io_wait_us" as the metric text to see these stalls by duration over time, broken down by host.
    5. Find long-running transactions. Also in the Metrics dashboard, you can spot long-running transactions by viewing the rollback segment history list length, which is essentially the overhead of yet-to-be-purged MVCC. Seeing spikes of long-running transactions and explaining this overhead is a valuable ability, easily accomplished with VividCortex: just use the metric text "mysql.status.i_s_innodb_metrics.trx_rseg_history_len".
    Want to see more? These tips and insights just scratch the surface. VividCortex has much more visibility available to anybody interested in seeing SaaS, database-centric monitoring in action. If you'd like further tips on how to get the most out of VividCortex, or would like to see how much value it can give you and your systems, don't hesitate to get in touch.
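
A minimal server-side sketch of tip 1, assuming MySQL 5.6+ with performance_schema enabled. This queries the server directly; it is not VividCortex's own implementation, which derives affected rows from the OK packet and performance_schema:

```sql
-- Rank normalized query digests by total rows affected,
-- roughly what the Profiler's "Affected Rows" ranking surfaces.
SELECT DIGEST_TEXT,
       COUNT_STAR        AS executions,
       SUM_ROWS_AFFECTED AS rows_affected
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_ROWS_AFFECTED DESC
LIMIT 10;
```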
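
Similarly, the read-to-write ratio from tip 2 can be approximated with the per-verb statement counters that performance_schema maintains; a sketch:

```sql
-- Statement counts by verb: a quick view of the workload's read/write mix.
SELECT EVENT_NAME, COUNT_STAR
FROM performance_schema.events_statements_summary_global_by_event_name
WHERE EVENT_NAME IN ('statement/sql/select', 'statement/sql/insert',
                     'statement/sql/update', 'statement/sql/delete')
ORDER BY COUNT_STAR DESC;
```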
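
And for tip 5, the counter behind "mysql.status.i_s_innodb_metrics.trx_rseg_history_len" can be read straight from the server (assuming MySQL 5.6+; if the counter is disabled, SHOW ENGINE INNODB STATUS also reports it as "History list length"):

```sql
-- InnoDB history list length: undo log records not yet purged.
-- Sustained growth usually points at a long-running transaction.
SELECT NAME, COUNT
FROM information_schema.INNODB_METRICS
WHERE NAME = 'trx_rseg_history_len';
```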

  • MySQL TCPCOPY
    We use tcpcopy to replay real traffic against our core systems. Many problems can be found in advance if we amplify queries several times. Read this PDF: TCPCOPY

  • How to Install Nginx with PHP and MySQL (LEMP Stack) on CentOS 7.2
    Nginx (pronounced "engine x") is a free, open-source, high-performance HTTP server. Nginx is known for its stability, rich feature set, simple configuration, and low resource consumption. This tutorial shows how you can install Nginx on a CentOS 7.2 server with PHP support (through PHP-FPM) and MySQL (MariaDB) support.

  • Press Release: Severalnines kicks off online European football streaming
    Award-winning database management platform scores deal with continent's largest online video solutions provider.
    Stockholm, Sweden and anywhere else in the world - 28/06/2016 - Severalnines, Europe's leading database performance management provider, today announced its latest customer, StreamAMG (Advanced Media Group), a UK-based pioneer in the field of bespoke online video streaming and content management. StreamAMG is Europe's largest player in online video solutions, helping football teams such as Liverpool FC, Aston Villa and Sunderland AFC, as well as broadcasters such as the BBC, keep fans watching from across the world.
    Long hailed as the future of online content, analysts predict that 90% of all consumer internet traffic will be video by 2019. This poses a challenge to streaming providers, both in the amount of online video data to handle and in the variety of ways the content is consumed. Customers expect a seamless viewing experience across any device on any operating system. Downtime, lag or disturbances to streaming can have serious repercussions for customer loyalty. Streaming providers must offer a secure and reliable media platform to maintain the interest of fans and attract new viewers, casting database performance in a starring role.
    Founded in 2001, StreamAMG builds bespoke solutions for its customers to host and manage online video content. Its software delivers the high availability needed for on-demand streaming or live broadcasting on any device. Loss of customer trust and damage to brand reputation are likely consequences of database failures, especially for companies operating in the online sports, betting and gaming industries. Growing at 30% year on year required StreamAMG to have a scalable IT system to meet new customer demands and maintain its leadership position in the market.
    StreamAMG reviewed its database performance as part of an IT infrastructure renewal project to encompass new online channels, such as social media, and to embed marketing analytics that help its customers better understand and react to customer behaviour. It needed a solution to monitor and optimise its database management system, with detailed metrics to predict database failures. After reviewing options from Oracle and AWS, amongst others, StreamAMG chose Severalnines to help future-proof its databases. The previous environment, based on a master-slave replication topology, was replaced with a multi-master Galera Cluster, and Severalnines' ClusterControl platform was applied to automate operational tasks and provide visibility of uptime and performance through its monitoring capabilities.
    Thom Holliday, Marketing Manager at StreamAMG, said: "With ClusterControl in place, StreamAMG's flagship product is now backed with a fully automated database infrastructure which allows us to ensure excellent uptime. Severalnines increased our streaming speed by 76% and this has greatly improved the delivery of content to our customers. The implementation took only two months to complete and saved us 12% in costs. Expanding the current use of ClusterControl is definitely in the pipeline and we would love to work with Severalnines to develop new features."
    Vinay Joosery, Severalnines Founder and CEO, said: "Online video streaming is growing exponentially, and audiences expect quality, relevant content and viewing experiences tailor-made for each digital platform. I'm a big football fan myself and like to stay up to date with games whenever I can. Right now I'm following the European Championships, and online streaming is key so I can watch the matches wherever I am. New types of viewership place certain requirements on modern streaming platforms to create experiences that align with consumer expectations. StreamAMG is leading the way there, and helps its customers monetise online channels through a solidly architected video platform. We're happy to be part of this."
    About Severalnines: Severalnines provides automation and management software for database clusters. We help companies deploy their databases in any environment, and manage all operational aspects to achieve high-scale availability. Severalnines' products are used by developers and administrators of all skill levels to provide the full 'deploy, manage, monitor, scale' database cycle, freeing them from the complexity and learning curves typically associated with highly available database clusters. The company has enabled over 8,000 deployments to date via its popular online database configurator, and currently counts BT, Orange, Cisco, CNRS, Technicolour, AVG, Ping Identity and Paytrail as customers. Severalnines is a private company headquartered in Stockholm, Sweden, with offices in Singapore and Tokyo, Japan. To see who is using Severalnines today visit: http://www.severalnines.com/customers
    About StreamAMG: StreamAMG helps businesses manage their online video solutions, such as hosting video, integrating platforms, monetising content and delivering live events. Since 2001, it has enabled clients across Europe to communicate through webcasting by building online video solutions to meet their goals. For more information visit: https://www.streamamg.com
    Media Contact: Positive Marketing, Steven de Waal / Camilla Nilsson, severalnines@positivemarketing.com, 0203 637 0647/0645
    Tags: MySQL, Galera Cluster, StreamAMG, ClusterControl, video streaming, football

  • On Using HP Vertica. Interview with Eva Donaldson.
    "After you have built out your data lake, use it. Ask it questions. You will begin to see patterns where you want to dig deeper. The Hadoop ecosystem doesn't allow for that digging, and not at a speed that is customer facing. For that, you need some sort of analytical database." – Eva Donaldson.
    I have interviewed Eva Donaldson, software engineer and data architect at iContact. The main topic of the interview is her experience using HP Vertica. RVZ
    Q1. What is the business of iContact?
    Eva Donaldson: iContact is a provider of cloud-based email marketing, marketing automation and social media marketing products. We offer expert advice, design services, an award-winning Salesforce email integration and Google Analytics tracking features, specializing in small and medium-sized businesses and nonprofits in the U.S. and internationally.
    Q2. What kind of information are your customers asking for?
    Eva Donaldson: Marketing analytics, including but not limited to how customers reached them, interaction with individual messages, and targeting of marketing based on customer identifiers.
    Q3. What are the main technical challenges you typically face when performing email marketing for small and medium businesses?
    Eva Donaldson: Largely our technical challenges are based on the sheer size and scope of data processing. We need to process multiple data points on each customer interaction, on each customer individually and on landing page interaction.
    Q4. You attempted to build a product on Infobright. Why did you choose Infobright? What was your experience?
    Eva Donaldson: We started with Infobright because we were using it for log processing and review. It worked okay for that, since all the logs are always referenced by date, which would come in order. For anything but the most basic querying by date, Infobright failed. Tables could not be joined. Selection by any column not in order was impossible at the size of data we were processing. For really large datasets, some rows would simply not be inserted, without warning or explanation.
    Q5. After that, you deployed a solution using HPE Vertica. Why did you choose HPE Vertica? Why didn't you instead consider another open source solution?
    Eva Donaldson: Once we determined that Infobright was not the correct solution, we knew we needed an analytical-style database. I asked anyone and everyone who was working with true analytics at scale what database backend they were using and whether they were happy. Three products came to the forefront: Vertica, Teradata and Oracle. The people using Oracle who were happy were complete Oracle shops; since we do not use Oracle for anything, this was not the solution for us. We decided to review Vertica, Teradata and Netezza. Of the three, Vertica came out the clear winner for our needs. Vertica installs on commodity hardware, which meant we could deploy it immediately on servers we already had on hand. Scaling out is horizontal, since Vertica clusters natively, which meant it fit exactly with the way we already handled our scaling practices. After the POC with Vertica's free version, and seeing the speed and accuracy of queries, there was no doubt we had picked the right one for our needs. Continued use and expansion of the cluster has continued to prove that Vertica stands up to everything we throw at it. We have been able to easily put in a new node and migrate nodes to beefier boxes when we needed to. Performance on queries has been unequaled. We are able to return complex analytical queries in milliseconds.
    As to other open source tools, we did consider them. I looked at Greenplum and I don't remember what other columnar data stores. There are loads of them out there. But they are all limited in one way or another, and most of them are very similar in ability to Infobright. They just don't scale to what we needed. The other place people always think of is Hadoop. Hadoop and its related ecosystem are a great place to put stuff while you are wondering what questions you can ask. It is nice to have Hadoop (Hive, HBase, etc.) as a place to stick EVERYTHING without question. Then from there you can begin to do some very broad analysis to see what you have. But nothing coming out of a basic file system is going to get you the nitty-gritty analysis to answer the real questions in a timely manner. After you have built out your data lake, use it. Ask it questions. You will begin to see patterns where you want to dig deeper. The Hadoop ecosystem doesn't allow for that digging, and not at a speed that is customer facing. For that, you need some sort of analytical database.
    Q6. Can you give us some technical details on how you use HPE Vertica? What are the specific features of HPE Vertica you use, and for what?
    Eva Donaldson: We have Vertica installed on Ubuntu 12.04 in a three-node cluster. We load data via the bulk upload methods available from the JDBC driver (a hedged COPY sketch follows at the end of this item). Querying includes many of the advanced analytical functions available in the language as well as standard SQL statements. We use the Management Console to get insight into query performance, system health, etc. Management Console also provides a tool to suggest and build projections based on queries that have been run in the past. We run the database designer on a fairly regular basis to keep things tuned to how the database is actively being used. We do most of our loading via Pentaho DI, and quite a lot of querying from that as well. We also have connectors from Pentaho reports, and some PHP applications that reach that data as well.
    Q7. To query the database, did you have a requirement to use a standard SQL interface? Or does it not really matter which query language you use?
    Eva Donaldson: Yes, we required a standard SQL interface and the availability of a JDBC driver to integrate the database with our other tools and applications.
    Q8. Did you perform any benchmark to measure the query performance you obtain with HPE Vertica? If yes, can you tell us how you performed the benchmark (e.g. what workloads you used, what kind of queries you considered, etc.)?
    Eva Donaldson: To perform benchmarks we loaded our biggest fact table and its related dimensions. We took our most expensive queries and a handful of "like to have" queries that did not work at all in Infobright, and pushed them through Vertica. I no longer have the results of those tests, but obviously we were pleased, as we chose the product.
    Q9. What about updates? Do you have any measures for updates as well?
    Eva Donaldson: We do updates regularly with both UPDATE and MERGE statements. MERGE is a very powerful utility (a MERGE sketch follows at the end of this item). I do not have specific times, but again Vertica performs splendidly. Updates on millions of rows perform accurately and within seconds.
    Q10. What is your experience of using various Business Intelligence, Visualization and ETL tools in your environment with HPE Vertica?
    Eva Donaldson: The only BI tools we use are part of the Pentaho suite. We use Report Designer, Analyzer and Data Integration. Since Pentaho comes with Vertica connectors, it was very easy to begin working with it as the backend of our jobs and reports.
    Qx. Anything else you wish to add?
    Eva Donaldson: If you are looking for an easy-to-build-and-maintain, performant analytical database, nothing beats Vertica, hands down. If you are working with enough data that you are wondering how to process it all, having an analytical database that can actually process the data, aggregate it, and answer complicated questions is priceless. We have gained enormous insight into our information because we can ask it questions in so many different ways, and because we can get the data back in a performant manner.
    Eva Donaldson is a software engineer and data architect with 15+ years of experience building robust applications to both gather data and return it, helping solve business challenges in marketing and medical environments. Her experience includes both OLAP and OLTP style databases using SQL Server, Oracle, MySQL, Infobright and HP Vertica. In addition, she has architected and developed the data consumption middle and front-end tiers in PHP, C#, VB.Net and Java.
    Resources
    – What's in store for Big Data analytics in 2016, Steve Sarsfield, Hewlett Packard Enterprise. ODBMS.org, February 3, 2016.
    – Column store database formats like ORC and Parquet reach new levels of performance, Steve Sarsfield, HP Vertica. ODBMS.org, June 15, 2016.
    – Taking on some of big data's biggest challenges, Steve Sarsfield, HP Vertica. ODBMS.org, June 2016.
    – What's New in Vertica 7.2?: Apache Kafka Integration!, HPE, February 2, 2016.
    Related Posts
    – On the Internet of Things. Interview with Colin Mahony. ODBMS Industry Watch, March 14, 2016.
    – On Big Data Analytics. Interview with Shilpa Lawande. ODBMS Industry Watch, December 10, 2015.
    Follow us on Twitter: @odbmsorg
    ##
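
The bulk loading mentioned in Q6 typically goes through Vertica's COPY statement; a minimal sketch with a hypothetical table and file path (the FROM LOCAL form streams the file through the client connection, e.g. the JDBC driver):

```sql
-- Bulk-load a delimited file from the client machine into a table.
COPY fact_events
FROM LOCAL '/tmp/fact_events.csv'
DELIMITER ','
ABORT ON ERROR;
```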
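
And the MERGE upserts described in Q9, sketched in Vertica's dialect with hypothetical table and column names (dim_contact and staging_contact are illustrative, not iContact's schema):

```sql
-- Upsert staged rows into a target table in one statement:
-- matched rows are updated in place, unmatched rows are inserted.
MERGE INTO dim_contact t
USING staging_contact s
    ON t.contact_id = s.contact_id
WHEN MATCHED THEN
    UPDATE SET email = s.email, updated_at = s.updated_at
WHEN NOT MATCHED THEN
    INSERT (contact_id, email, updated_at)
    VALUES (s.contact_id, s.email, s.updated_at);
```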