
Monday, April 1, 2019

Application Performance Optimization and Load Balancing

Application Performance Optimization and Load Balancing using RAID and Caching Techniques

Akilesh Kailash, Sunil Iyer Kolar Suresh Kumar, Sabarish Venkatraman

ABSTRACT

As data processing and the demand for storage grow, the performance of an application program should always remain intact with respect to disk I/O. There have been considerable improvements related to disk seek times, latency and spindle speeds. However, these improvements have not met the challenges and addressed the need for better performance and load balancing. The challenge of any database administrator is to maximize application I/O performance and ensure high availability with zero downtime. This performance challenge can be met using I/O monitoring, load balancing, cache management and RAID (Redundant Array of Inexpensive Disks) technologies. The primary goal of this paper is to illustrate the details of successfully solving the I/O problems of a database application in a consistent fashion with the appropriate RAID configurations, caching mechanisms and load balancing algorithms.

Categories and Subject Descriptors

B.3.2 Design Styles: Mass storage (RAID).
D.4.2 Storage Management: Secondary storage, Storage hierarchies.
D.4.3 File Systems Management: File organization.
D.4.4 Communications Management: Input/Output.
D.4.5 Reliability: Backup procedures, Fault-tolerance.

General Terms

Algorithms, Performance, Design, Theory, Reliability.

Keywords

RAID: Redundant Array of Inexpensive Disks
I/O: Input/Output
DBA: Database Administrator
HA: High Availability
OLTP: Online Transaction Processing
IOPS: Input/Output Operations Per Second
HBA: Host Bus Adapter

1. INTRODUCTION

RAID technology addresses the need for higher storage capacity in I/O systems and provides data redundancy. This helps in efficient, parallel disk access and avoids data loss due to disk failures. Theoretically, RAID is primarily used to create a logical disk from two or more physical disk drives in order to provide high bandwidth. RAID is an integral part of the storage stack and fabric design and is integrated by various storage vendors like EMC, Hitachi and NetApp. RAID technologies have enumerated different methods of building storage stacks and sub-systems for different kinds of databases. Thus, the two main technical reasons for switching to RAID are scalability and high availability in the context of I/O and system performance. As the database sizes of today have grown manifold, from the gigabyte to the petabyte range, the ability to scale the I/O performance of such gigantic systems is all the more necessary for critical applications.

Load balancing is a critical factor in environments like operating systems, clusters, networking and applications. It plays a quintessential role in the performance and reliability of any environment, avoiding catastrophic failures. In a typical scenario, resource allocation and load balancing are done through hash methods, genetic algorithms and several scheduling algorithms in operating systems.

Many database applications demand high throughput and availability from storage subsystems. For instance, a stock market application running on the New York Stock Exchange will need high throughput and bandwidth with absolutely no downtime. This requires continuous operation, i.e., the need to satisfy each I/O request even in the case of disk failures. It is not acceptable to meet the aforementioned requirements at the cost of degraded performance, particularly in real-time applications such as video and audio: it is unacceptable if a video plays at a lower speed, or if data is lost during transmission and the stream ends abruptly. Since a database application may encounter extreme I/O activity or suffer a sudden spike of I/O for a brief period of time, the organization of the database structure onto the disk becomes imperative.
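
To make the striping idea concrete, here is a minimal sketch (ours, not from the paper) of how a simple RAID 0 layout presents several physical drives as one logical disk by translating logical block addresses; the stripe unit size and disk count are arbitrary example values.

```python
# Minimal sketch of RAID 0 striping address translation (illustrative values).
STRIPE_UNIT_BLOCKS = 128   # blocks per stripe unit on one disk (assumed)
NUM_DISKS = 4              # physical drives behind the logical disk (assumed)

def logical_to_physical(lba: int) -> tuple[int, int]:
    """Map a logical block address to (disk index, block address on that disk)."""
    stripe_unit = lba // STRIPE_UNIT_BLOCKS   # which stripe unit overall
    offset = lba % STRIPE_UNIT_BLOCKS         # offset inside the stripe unit
    disk = stripe_unit % NUM_DISKS            # round-robin across disks
    physical = (stripe_unit // NUM_DISKS) * STRIPE_UNIT_BLOCKS + offset
    return disk, physical

# Consecutive logical blocks rotate across all four drives, which is what gives
# the logical disk higher aggregate bandwidth than any single drive.
print([logical_to_physical(lba) for lba in (0, 128, 256, 384, 512)])
```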

2. PROBLEM DEFINITION

Mission-critical data centers have a compelling need for highly available applications and services, thereby ensuring zero downtime. Current clustering solutions, like MSCS or HP Service Guard, enable HA for essential applications. However, such solutions are specific and developed only for the OS/application for which they are designed.

The I/O performance and patterns of a database application have to be analyzed by understanding their relationship with the physical storage, which helps in determining how to deploy the application for any given workload.

I/O from an application needs to be categorized, based on which appropriate techniques can be used to improve its performance. There are many DBA tuning tools which are primarily used for indexing the database and monitoring drive activity. This approach is effective but requires a lot of time, and in reality it is quite impractical.

3. ABSTRACT SOLUTION

The possible solutions are:

Determining the RAID level and stripe size

RAID levels are determined by factors such as the type of I/O, disk cost, read/write I/O and so on. The data transfer rate and IOPS performance are strongly influenced by the segment size chosen and the stripe size used. For example, in a RAID 5 configuration with 4 data disks and 1 parity disk, let the segment size of each disk be 64KB. When an I/O of 64KB has to be addressed, it is written to the first drive. The next 64KB I/O is written to the next drive, and so on; finally the parity of the 4 I/Os is calculated and written to the last disk (a minimal sketch of this parity computation appears at the end of this section). In the case of RAID 1 (mirroring), there are 2 disk groups and 2 mirror groups; a 64KB I/O would be written to each of the disk drives and their mirrored drives.

Caching techniques

Splitting the cache: The cache acts as an interface between the host application and the RAID controllers. The cache can be divided into two parts, viz. front-end and back-end. Database applications can rely on the front-end cache.

Prefetching: OLTP applications may have I/O operations which are not sequential; the pre-fetch algorithm predicts the addresses which will be fetched in the future and loads them into memory. The amount of data to be pre-fetched depends on the application requirements, the available memory and the performance desired by the application.

Database organization on a storage system

Database objects such as tables, logs and views can be organized on the storage layout in a wide range of ways. Based on the structure of the database layout, an appropriate storage configuration is chosen.

Load balancing

I/O load balancing across cluster nodes is performed using regression analysis. If a port of an HBA or fabric node is heavily loaded, then the I/O is balanced across the ports which are not utilized to their full potential.
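
The parity calculation in the stripe-size example above can be sketched as follows. This is our illustration, not code from the paper: it assumes the 64KB segment size and the 4-data-disk plus 1-parity-disk layout of the example, and the XOR parity that RAID 5 uses.

```python
# Minimal sketch of the full-stripe RAID 5 write from the example above:
# four 64KB data segments plus one XOR parity segment (illustrative only).
SEGMENT_SIZE = 64 * 1024   # 64KB segment per the example
DATA_DISKS = 4             # 4 data disks + 1 parity disk

def xor_parity(segments):
    """XOR the data segments byte-wise to produce the parity segment."""
    parity = bytearray(SEGMENT_SIZE)
    for seg in segments:
        for i, b in enumerate(seg):
            parity[i] ^= b
    return bytes(parity)

def full_stripe_write(data: bytes):
    """Split one 256KB write into four data segments and one parity segment."""
    assert len(data) == SEGMENT_SIZE * DATA_DISKS
    segments = [data[i * SEGMENT_SIZE:(i + 1) * SEGMENT_SIZE]
                for i in range(DATA_DISKS)]
    return segments + [xor_parity(segments)]   # D0, D1, D2, D3, P

# If any single segment is lost to a disk failure, XOR-ing the remaining
# four segments of the stripe reconstructs it.
stripe = full_stripe_write(bytes(SEGMENT_SIZE * DATA_DISKS))
print(len(stripe), [len(s) for s in stripe])
```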

4. LITERATURE SURVEY

I/O performance and disk I/O contention play a vital role for critical applications. Our proposal and work on application performance monitoring, I/O tuning and load balancing is motivated by the Oracle disk I/O performance and array tuning best practices papers. The proposed solution and enhancements are along similar lines to these papers. We lay out the survey by explaining the technical feasibility and the pros and cons of the approaches discussed in the papers, and briefly explain the issue we are addressing based on the survey findings.

5. PERFORMANCE BOTTLENECKS

Application performance and write access are generally obtained by using storage arrays with different RAID configurations. For instance, mirroring data across multiple disks using RAID 1 in order to achieve redundancy is the most common way of obtaining high availability.

Disk failure vulnerabilities in enterprise storage

The main motivation for moving to striping technologies is the vulnerability to disk failures in enterprise storage arrays, which can result in catastrophic loss of data. This high availability of the application and its I/O is obtained at the cost of write performance.

Keeping write operations in sync

During a write operation, all the writes have to be applied simultaneously to all the disks in order to keep them in sync. This severely affects the performance of write-heavy workloads. In addition, maintaining the synchronization of data among all disks while achieving concurrency is a difficult task and can lead to system crashes.

In order to overcome the aforementioned problems, a number of different striping mechanisms have been proposed, each with its specific tradeoffs in cost, performance, scalability and robustness. The majority of RAID configurations differ in how the data is interleaved and in the pattern in which the redundant information is distributed across the disks.

Load balancing of I/O and resource utilization

Load balancing is commonly implemented in SQL Server clustering and is very common practice. There are many third-party tools that provide solutions for load balancing and resource utilization; however, the limitation of such tools is that the factors used to decide on load balancing are very system specific and depend heavily on the characteristics of each application.

As the database size grows over a short period, we generally observe that query speed takes a performance hit as the number of rows increases. This is mainly observed in applications where performance data is collected at frequent intervals while the data is simultaneously read from the DB for other purposes. The general, quick solutions to optimize query speed are to partition the views, add indexes and partition the tables. But even then, things are observed to be quite slow. The main problem with such solutions is that the database tables and views are located on different servers. Hence a server cluster is used, which adds reliability if performance issues are seen on one of the cluster nodes.

6. RAID LEVEL SELECTION CRITERIA

The choice of RAID level is based on several factors. When a mirrored configuration such as RAID 1 or RAID 1+0 is chosen, each write request is duplicated to disk by the RAID controller. This results in performance issues if the application does not rely heavily on data duplication and its availability. When higher-level, parity-based RAID configurations are used, things get more intricate.

Consider the case when RAID 5 or RAID 6 is used and the size of the write I/O is less than the stripe size. This is frequently observed in database applications, where data is mostly written in 4KB pages as opposed to a stripe size of around 128KB; as a result, the RAID controller has to perform several I/O operations for just a single request. The main drawback is that for a small write request, the RAID controller first has to fetch the existing data from the back-end disk into main memory. Then it has to insert the new data at the appropriate position, calculate the new parity stripe and perform another write operation back to the disk. Hence, one host I/O operation results in roughly 3 to 4 times the IOPS. This overhead grows further if parity has to be calculated for two sets, as in RAID 6.
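
As a rough, back-of-the-envelope illustration of this write penalty (ours, not taken from the referenced papers), the sketch below applies the usual rule-of-thumb penalty factors: 2 back-end writes per host write for mirroring, about 4 I/Os for a small RAID 5 write (read old data, read old parity, write new data, write new parity) and about 6 for RAID 6 with its two parity sets.

```python
# Hedged sketch: approximate back-end write IOPS for common RAID levels.
# Penalty factors are the usual rules of thumb, not measurements from this paper.
WRITE_PENALTY = {
    "RAID0": 1,   # no redundancy, one back-end write per host write
    "RAID1": 2,   # each write is mirrored
    "RAID10": 2,
    "RAID5": 4,   # read data, read parity, write data, write parity
    "RAID6": 6,   # two parity sets to read and rewrite
}

def backend_write_iops(host_iops: int, level: str) -> int:
    """Estimate back-end IOPS caused by small (sub-stripe) host writes."""
    return host_iops * WRITE_PENALTY[level]

# Example: 5,000 host write IOPS on RAID 5 => about 20,000 back-end IOPS,
# in line with the "roughly 3 to 4 times the IOPS" figure above.
print(backend_write_iops(5000, "RAID5"))
```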

The other factors in choosing a RAID configuration are the disk/drive cost and the I/O pattern. The redundancy cost is zero for RAID 0, as there is no redundancy, while it is highest for RAID 1 and its combinations such as RAID 10; the cost is high because of drive mirroring. The cost of RAID 5 is comparatively lower than RAID 1, but one disk's worth of capacity is dedicated to parity. A clear distinction is required to classify small I/O and large I/O: bursty, large I/O is seen when an I/O request is more than one third of the cache size, while all the small/short I/Os are served from the cache, thereby avoiding RAID access. All in all, RAID 5 and 6 are generally preferred for large and sequential I/O operations, while RAID 1 and RAID 10 are preferred for short I/O operations.

7. SCOPE FOR IMPROVEMENT

This paper builds on the aforementioned aspects and concentrates on monitoring the I/O pattern, analyzing the load on each I/O path and performing load balancing if required. In addition to the above criteria, and taking the I/O pattern into consideration, an appropriate RAID configuration along with a write-back cache method is used if necessary.

8. PROPOSED SOLUTION

Characterize the I/O pattern

The first step is to monitor the I/O and characterize it. This is done using tools such as Perfmon or IOMeter. We plan to use these tools to analyze the I/O pattern of a given application. This monitoring is required so that we can characterize the requests as read intensive or write intensive and observe how the load varies.

Perform load balancing upon an I/O threshold

The second step is to perform load balancing. This is done by analyzing the load and identifying the I/O threshold from a server HBA port through the fabric layer to the storage array. A threshold is a boundary which serves as a benchmark for comparison or guidance, and any deviation from or breach of the stated threshold may result in a change in the state of the overall system. Our proposed infrastructure identifies the threshold by analyzing the I/O graph and monitoring two parameters: linear regression and the slope of the curve. Using linear regression, the value of the slope is calculated. Based on these two parameters, if we observe that one of the HBA ports is heavily loaded, we balance it out by redistributing the excess load to different cluster nodes. Once the I/O is balanced, an appropriate RAID configuration is calculated.
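
A minimal sketch of this check (our illustration; the sampling window, utilization metric and threshold values are assumptions rather than the paper's parameters) fits a least-squares line to recent per-port utilization samples and flags a port whose current utilization and slope both exceed their thresholds.

```python
# Hedged sketch of the regression-based threshold check described above.
# The sampling window and threshold values are illustrative assumptions.
def slope(samples):
    """Least-squares slope of utilization samples taken at a fixed interval."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den if den else 0.0

def port_overloaded(samples, util_threshold=0.8, slope_threshold=0.0):
    """Flag an HBA port whose recent utilization is above threshold and climbing."""
    return samples[-1] > util_threshold and slope(samples) > slope_threshold

# A port flagged here would have its excess I/O redistributed to ports or
# cluster nodes that are not utilized to their full potential.
print(port_overloaded([0.55, 0.62, 0.71, 0.84, 0.90]))  # True: busy and trending up
```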

9. CONCLUSION AND FUTURE WORK

By studying the I/O access patterns of various workloads, we can cleanly map the database application to the physical storage, thereby achieving high performance and faster access and retrieval. This would help DBAs deploy management applications and make it easy to track application performance. This analysis can be implemented at the enterprise-level configuration as well, resulting in efficient usage of physical storage, making it cost effective and reducing the work for DBAs and lab administrators.

10. REFERENCES

The RAID Book, 6th Edition. RAID Advisory Board.
LaCie RAID Technology White Paper.
Peter M. Chen, Edward K. Lee. RAID: High-Performance, Reliable Secondary Storage. ACM Computing Surveys.
Array Tuning Best Practices. A Dell Technical White Paper. http://www.dell.com/downloads/global/products/pvaul/en/powervault-md3200i-performance-tuning-white-paper.pdf
Exploring Disk Size and Oracle Disk I/O Performance. http://www.openmpe.com/cslproceed/HPW02CD/paper/11026.pdf
