Microsoft IT: A Case Study on Business Continuity and Disaster Recovery – SQL Server. SQL Server Technical Article. Writer: Srinivas Venbakam Vengadam.
Accomplished by converting from log shipping to database mirroring. Waited for a very low traffic period before beginning the upgrade process. Converted all database mirroring sessions to synchronous database mirroring and waited for synchronization to occur.
All Web servers have the same configuration, and each hosts a Web site for the purpose of handling downtime messages. This Web site responds appropriately to Web service requests. Application downtime now starts. Removed all Web servers from the Web farm except one. This Web server continued to serve the "scheduled downtime" Web site.
Because this Web server was not immediately rebooted, it is temporarily called the StaleWebServer in these steps. Rebooted all the remaining Web servers to remove any cached or pooled connections.
Simultaneously changed the DNS connection alias and reversed the database mirroring roles. Changed the DNS connection alias to redirect the application to the tempSQL instance. For details about how we use DNS connection aliases, see "Data Center Infrastructure" earlier in this paper.
At the same time, we ran a SQL script to manually fail over database mirroring, reversing the database mirroring roles and making tempSQL the principal for all database mirroring sessions.
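Such a manual failover is a single command per mirrored database, issued on the current principal. A minimal sketch, assuming a hypothetical mirrored database named AppDB:

-- Run on the current principal (primarySQL) for each mirrored database.
-- Control returns once the roles are reversed and tempSQL is the principal.
USE master;
ALTER DATABASE AppDB SET PARTNER FAILOVER;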
We ran a script to remove the database mirroring sessions, because tempSQL could not mirror back to primarySQL: an instance running a newer version of SQL Server cannot mirror to an instance running an older version.
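Removing a mirroring session is likewise a one-line command per database; again assuming the hypothetical AppDB:

-- Run on either partner; ends the mirroring session for this database.
USE master;
ALTER DATABASE AppDB SET PARTNER OFF;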
Tested all systems with one of the rebooted Web servers. We now took one of the rebooted Web servers currently outside the Web farm (call it the TestWebServer) and used it for testing the application that now connects to the tempSQL database server.
This was the final test to ensure that all application functionality was present when connecting to the new tempSQL database. If the testing had failed, we could have reverted back to the original SQL Server instance by issuing a RESTORE command on each of the SQL Server databases on primarySQL to bring each database from a loading, nonrecovered state into a recovered state.
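This revert path works because a log shipping or mirroring target sits in a restoring state until it is explicitly recovered. A sketch of that RESTORE command, again for the hypothetical AppDB:

-- Run on primarySQL for each database left in the loading, nonrecovered state.
-- WITH RECOVERY rolls back uncommitted transactions and brings the database online.
RESTORE DATABASE AppDB WITH RECOVERY;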
After we made the decision to allow users back into the application and to connect to tempSQL, user updates to the databases would start.
After that point, new data in the tempSQL database would be lost if we decided to revert back to the original SQL Server instance by restoring from database backups.
Because we made the decision to proceed, we put the remaining rebooted Web servers back into the Web farm. We performed the following two actions simultaneously and as quickly as possible: Placed TestWebServer into the Web farm, making it an active Web server.
Removed StaleWebServer from the Web farm, rebooted it in order to remove any cached or pooled connections, and placed it back in the Web farm. All the Web servers were now active, in the Web farm, and ready to connect to tempSQL. Redirected traffic via the firewall back to the application IP addresses.
The Web servers now were connecting to tempSQL. At this point the system was back up, and users were now able to use the application. This first downtime period lasted approximately 10 minutes.
Redirected Application Users to the Permanent SQL Server Instance at the Primary Data Center
Built a new SQL Server cluster (primarySQL) at the primary data center.
Reconfigured the original primarySQL servers with the new versions of Windows Server and SQL Server, applying the appropriate drivers and critical updates. Other IT personnel continued to monitor and test tempSQL, currently the production instance. Re-created the primarySQL server's LUNs on the SAN and reformatted them using the new version of Windows Server. We reconfigured the LUNs because we changed the number of disks from the older Windows Server configuration.
If reconfiguring the LUNs had not been required, just a Quick Format using Windows Server to clean up the drives and maintain proper LUN disk partition alignment would have been sufficient.
Created the new Windows Server cluster as a three-node cluster using an integrated install, and then installed SQL Server Enterprise. We added the first SQL Server node using the SQL Server Setup program interactively, and added the other SQL Server nodes using Setup's command-line installation options.
We found this faster than using Setup interactively for all nodes. We then configured the SQL Server settings and tested a variety of failover situations to make sure everything was functioning correctly. Set up log shipping from tempSQL to primarySQL. We were able to use backup compression when setting up log shipping between these two SQL Server instances, making the log shipping setup process faster compared with the previous setup of log shipping from primarySQL to tempSQL. Initialized asynchronous database mirroring from tempSQL to primarySQL. Accomplished by using log shipping to initialize database mirroring.
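The database name, backup paths, endpoint addresses, and port below are hypothetical, but they illustrate the pattern: a compressed backup seeds the future mirror faster, the restore leaves the database in the restoring state, and the mirroring partnership is then established in asynchronous mode:

-- On tempSQL: compressed full and log backups shrink the files to be copied.
BACKUP DATABASE AppDB TO DISK = N'\\backupshare\AppDB.bak' WITH COMPRESSION, INIT;
BACKUP LOG AppDB TO DISK = N'\\backupshare\AppDB.trn' WITH COMPRESSION, INIT;

-- On primarySQL: restore WITH NORECOVERY so the database can become a mirror.
RESTORE DATABASE AppDB FROM DISK = N'\\backupshare\AppDB.bak' WITH NORECOVERY;
RESTORE LOG AppDB FROM DISK = N'\\backupshare\AppDB.trn' WITH NORECOVERY;

-- On primarySQL (the mirror), point at the principal first...
ALTER DATABASE AppDB SET PARTNER = N'TCP://tempSQL.corp.example.com:5022';
-- ...then on tempSQL (the principal), point at the mirror.
ALTER DATABASE AppDB SET PARTNER = N'TCP://primarySQL.corp.example.com:5022';
-- On tempSQL: run asynchronously (high performance) until ready to synchronize.
ALTER DATABASE AppDB SET PARTNER SAFETY OFF;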
Converted all mirroring sessions to synchronous database mirroring, and waited for all mirror databases to synchronize.
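Converting a session to synchronous operation and confirming synchronization can be sketched as follows, again with the hypothetical AppDB:

-- On the principal: switch the session to synchronous (full safety) mode.
ALTER DATABASE AppDB SET PARTNER SAFETY FULL;

-- Poll until every mirrored database reports SYNCHRONIZED.
SELECT DB_NAME(database_id) AS database_name, mirroring_state_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;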
At this point, we were ready to move the application to the primarySQL instance, but doing so required a second downtime period; users were effectively offline again. The second downtime period starts.
We changed the DNS connection alias at the data center DNS servers to redirect connections to the primarySQL server. At the same time, we ran a script to reverse the database mirroring roles, making primarySQL the principal and tempSQL the mirror for all database mirroring sessions.
We then repeated the processes for testing the application, as well as rebooting all Web servers, as outlined in the earlier steps. Users could now access the application, and the Web servers were connecting to the primarySQL instance. Downtime duration for this second phase was about six minutes. At this point, the major part of the upgrade process was finished and the application was now using the desired primarySQL server.
The following steps in Phase 3 did not need to occur immediately, and no user downtime was required.
Prepared a New SQL Server Instance at the Standby Data Center and Set Up Database Mirroring to It from the Primary Data Center
Prepared the standby data center SQL Server instance (standbySQL). We left mirroring from primarySQL to tempSQL active temporarily, in case any issues arose with the primarySQL cluster. We then replaced the standbySQL cluster with new servers, installing Windows Server and SQL Server; this was part of a planned equipment upgrade process.
Set up log shipping from primarySQL to standbySQL. We again were able to use backup compression to improve the speed of the log shipping setup process.
Established database mirroring to the standby data center. Removed database mirroring between the primarySQL and tempSQL instances. Removed log shipping and set up asynchronous database mirroring from primarySQL to standbySQL. At this point, both data centers were live with SQL Server and the upgrade process was complete. For several weeks after the upgrade, we left the databases in the older SQL Server compatibility mode. This allowed us to troubleshoot potential database issues without the additional concern of having changed to the new SQL Server compatibility level as a factor in troubleshooting.
After no issues were found, we changed the compatibility level of the databases to the new version.
Patches and Cumulative Updates
We apply Windows and SQL Server patches to the mirror instance first, before the principal, and always during off-peak hours. For SQL Server patches (hotfixes or cumulative updates), we use the following process: Start at the standby data center with the failover cluster hosting the mirror SQL Server instance.
Assume the cluster nodes are named Node 1, Node 2, and Node 3. Also assume that SQL Server is currently running on Node 1, and that Node 1 and Node 2 are the preferred nodes. Run the patch installation on a node other than Node 1, for example, Node 2. When the installation is finished, reboot Node 2. Though perhaps not necessary, this will remove any pending reboot requirements on the server.
Run the patch installation on the other unused node, Node 3, and when finished, reboot Node 3. Move the SQL Server cluster group from Node 1 to the other preferred node, Node 2. This normally takes seconds, and is the only downtime in this process. Run the patch installation on Node 1, and when finished, reboot the node. Verify that the SQL Server instance has the correct version number for the patch by running SELECT @@VERSION on the instance (a sketch appears below). Repeat these steps for the principal SQL Server instance (the failover cluster at the primary data center).
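The version check itself is a one-liner; SERVERPROPERTY can be used alongside it for the bare build number:

-- Confirm that the patched build is what you expect. SERVERPROPERTY returns
-- the build number and patch level; @@VERSION returns the full descriptive string.
SELECT SERVERPROPERTY('ProductVersion') AS build,
       SERVERPROPERTY('ProductLevel') AS patch_level;
SELECT @@VERSION AS version_string;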
For Windows Server updates (including patches, drivers, and other software updates), we use the following steps: Start at the standby data center, on the mirror instance failover cluster. Again, assume the cluster nodes are named Node 1, Node 2, and Node 3, and that the SQL Server instance is running on Node 1. Pause an inactive node, for example, Node 2. Install any updates and make any required changes.
Repeat these steps for Node 3.
Move the SQL Server cluster group from Node 1 to Node 2. Repeat these steps for Node 1. Repeat these steps on the principal instance failover cluster at the primary data center.
Database and Application Changes
We have a set of procedures in place to handle planned downtime resulting from database and application changes. To determine database schema and other coding changes to the databases, we use a third-party database comparison utility to compare the production version and the development version of the databases.
The comparison utility generates a Transact-SQL script that changes the production database schema to the target database schema. After the script is generated, we inspect it: we ensure that the deployment script changes are correct, and we drill down into the details of the changes. If there is any potential for a table lock (due to a table schema change, for example), we run the deployment script on a development system to determine the effects of the changes.
A common scenario of a schema change is adding a new column to a table. When re-creating the table, we ensure that the CREATE TABLE command includes default values for the new column.
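Whether the generated script re-creates the table or alters it in place, the key element is the default value. A minimal sketch with hypothetical object names (using an in-place ALTER for brevity), including the matching stored-procedure change described next:

-- Hypothetical example: add a Region column with a default so that existing
-- rows and existing INSERT statements continue to work.
ALTER TABLE dbo.Customer
    ADD Region nvarchar(50) NOT NULL
        CONSTRAINT DF_Customer_Region DEFAULT (N'Unknown');
GO
-- The matching stored procedure change gives the new parameter a default,
-- so existing callers that do not pass @Region keep working.
ALTER PROCEDURE dbo.InsertCustomer
    @Name nvarchar(100),
    @Region nvarchar(50) = N'Unknown'
AS
INSERT INTO dbo.Customer (Name, Region) VALUES (@Name, @Region);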
We also ensure that the schema change script modifies any affected stored procedures and views to reflect the new column. If a stored procedure is changed to reflect the new column in an input parameter, we also initialize the default value of the parameter. When a database schema change requires downtime, and we are ready to apply the script, we take the following steps: Choose an off-peak time.
Before applying changes, back up all database transaction logs as simultaneously as possible. Because some databases are interrelated, this makes the backup image of all of them as consistent as possible.
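One way to follow this practice (database names and paths below are hypothetical) is to issue the log backups back to back in a single batch, so they complete within moments of each other:

-- Back-to-back log backups keep the recovery points of interrelated
-- databases as close together as possible before the schema change.
BACKUP LOG SalesDB   TO DISK = N'\\backupshare\SalesDB_prechange.trn'   WITH INIT;
BACKUP LOG OrdersDB  TO DISK = N'\\backupshare\OrdersDB_prechange.trn'  WITH INIT;
BACKUP LOG BillingDB TO DISK = N'\\backupshare\BillingDB_prechange.trn' WITH INIT;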
If the estimated downtime is less than 60 seconds, we simply apply the changes without stopping applications from connecting to the SQL Server instance; some users may see an error message. If the estimated downtime is greater than 60 seconds, we redirect the applications to a friendly downtime message until the changes are complete. In one such change, we analyzed the generated deployment script in detail, and the results were very illuminating.
The generated script applied the changes in the following order: create a new nonclustered index on the table that is the target of the updates; create the schema-bound view; create the clustered index and nonclustered index on the new view; create statistics for several of the columns used in the procedure.
I altered the generated script, inserting a timing run of the stored procedure between each step. Additionally, I re-ordered the steps, moving the CREATE STATISTICS to the final step.
What I found was that applying the new nonclustered index had the major impact on improving performance, and the remaining steps had little or no additional impact. With the additional analysis of the suggested script, we were able to go from database changes that would have also required executable changes and extensive testing, to a simple index creation that required minimal testing.
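The rough shape of such a script, with hypothetical object names, is sketched below; an indexed view requires SCHEMABINDING, schema-qualified base tables, COUNT_BIG(*) for grouped views, and a unique clustered index before any nonclustered indexes or statistics can be added:

-- Step 1: the new nonclustered index on the base table -- the step that,
-- in the timings described above, delivered nearly all of the gain.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId);
GO
-- Step 2: the schema-bound view (Amount is assumed NOT NULL, since indexed
-- views do not allow SUM over a nullable column).
CREATE VIEW dbo.vOrderTotals WITH SCHEMABINDING AS
SELECT CustomerId, COUNT_BIG(*) AS OrderCount, SUM(Amount) AS TotalAmount
FROM dbo.Orders
GROUP BY CustomerId;
GO
-- Step 3: the unique clustered index materializes the view; nonclustered
-- indexes and statistics can then be layered on top of it.
CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals ON dbo.vOrderTotals (CustomerId);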
Conclusions
I learned several valuable lessons from this process. The additional knowledge of the nuances of schema-bound views was certainly worth the work.
CPU affinitization was enforced in the resume-matching scenario, to assess the impact on R jobs. Four SQL resource pools and four external resource pools were created, and CPU affinity was specified to ensure that the same set of CPUs would be used in each case. Each of the resource pools was assigned to a different workload group, to optimize hardware utilization.
The limitation is that soft-NUMA and CPU affinity cannot divide physical memory in the physical NUMA nodes; therefore, by definition all soft-NUMA nodes that are based on the same physical NUMA node must use memory in the same OS memory block.
In other words, there is no memory-to-processor affinity.
The following process was used to create this configuration: Reduce the amount of memory allocated by default to SQL Server. Create four new resource pools for running the R jobs in parallel.
Create four workload groups such that each workload group is associated with a resource pool. A minimal sketch of this configuration appears below.
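The pool names, scheduler/CPU ranges, and memory value below are hypothetical, and the external pools assume the SQL Server external script runtime that executes the R jobs:

-- 1. Reduce the memory SQL Server claims by default (value illustrative).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 32768;
RECONFIGURE;
GO
-- 2. One SQL pool and one external pool per CPU range; pool 1 of 4 shown,
--    the other three repeat the pattern with their own scheduler/CPU ranges.
CREATE RESOURCE POOL sqlpool1 WITH (AFFINITY SCHEDULER = (0 TO 5));
CREATE EXTERNAL RESOURCE POOL rpool1 WITH (AFFINITY CPU = (0 TO 5));
-- 3. A workload group ties each SQL pool to its matching external pool.
CREATE WORKLOAD GROUP rgroup1 USING sqlpool1, EXTERNAL rpool1;
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;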
We can manage authorization at the database level, and even automate the management and scale-out of these databases without requiring database administrators (DBAs) on staff. Azure removes overhead so that developers can spend more time delivering value. The Azure platform model removed infrastructure overhead and enabled SnelStart to automate deployments using C# management libraries.
The shift to services has freed up resources to focus on new services and features, instead of just updating existing code to keep up with new regulations or tax codes. SnelStart is also developing an API that acts as a broker between customer data and apps built by third-party software partners. This allows customers to join their business-administration tasks with the ecosystem of information that is emerging from digital transformation in the industry.
Examples include providing product-configuration capabilities, managing firewall rules, and managing long-running processes like backups. This SaaS model, coupled with database elasticity and Azure Resource Manager, provides SnelStart with scalability features that benefit every Azure deployment. The implementation is fully automated using C# management libraries.