Posts

Showing posts from March 12, 2023

High CPU usage in SQL Server

Some common causes of high CPU usage in SQL Server:

- Inefficient queries: Queries that are poorly written or inefficient can consume a large amount of CPU resources.
- Lack of indexes: If there are no indexes on the tables, SQL Server has to perform full table scans, which can be CPU-intensive.
- Large data volumes: When working with large data volumes, SQL Server may need to use more CPU resources to process the data.
- High concurrency: If a large number of users or connections execute queries concurrently, this can lead to high CPU usage.
- Resource-intensive operations: Certain operations, such as sorting, aggregating, or joining large data sets, can consume significant CPU resources.
- Server misconfiguration: Incorrectly configured settings, such as maximum degree of parallelism (MAXDOP), can cause excessive CPU usage.
- Hardware limitations: If the server hardware is not sufficient to handle the workload, this can cause high CPU usage.

To find the SPID (Server Proc…
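One way to see which statements are driving CPU is to query the plan-cache DMVs. A minimal sketch using sys.dm_exec_query_stats; the TOP (10) row count and the ordering by total worker time are arbitrary choices:

```sql
-- Top 10 cached queries by total CPU time (worker time is in microseconds)
SELECT TOP (10)
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

Note that this only covers queries still in the plan cache; clearing the cache resets these counters.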

Handling of schema changes in different SQL Server replication types

How each replication type handles schema changes:

- Snapshot: Schema changes made to the published database are automatically included in the next snapshot. Subscribers must be reinitialized with the new snapshot to receive the schema changes.
- Transactional: Schema changes are applied to subscribers using a series of automatically generated system stored procedures. The stored procedures are created and executed on the subscriber to synchronize schema changes from the publisher.
- Merge: Schema changes are handled using SQL Server scripts generated by the replication engine. The scripts are propagated to the subscriber databases, where they are executed to apply the schema changes.

It's important to note that schema changes can potentially cause replication to fail if they are not handled properly. To avoid issues, it's recommended to carefully plan and test any schema changes before applying them to the published database. Additionally, it's important to ensure that all subscribers are updated…
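For transactional publications, automatic DDL propagation is controlled by the publication's replicate_ddl property. A sketch of enabling it at the publisher; the publication name MyPub is a placeholder:

```sql
-- Enable automatic schema-change (DDL) replication for a publication.
-- Run at the publisher, in the published database; 'MyPub' is hypothetical.
EXEC sp_changepublication
    @publication = N'MyPub',
    @property    = N'replicate_ddl',
    @value       = 1;
```

Merge publications expose the same property through sp_changemergepublication. With replicate_ddl enabled, supported ALTER TABLE statements at the publisher flow to subscribers without reinitialization.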

Factors to consider when migrating SQL Server to 2019

When migrating from SQL Server 2016 to SQL Server 2019, we need to consider the following factors:

- Application compatibility: Check with the application vendor/developer for compatibility. Review and update application code/queries.
- New features: Evaluate new features and enhancements in SQL Server 2019. Determine if changes to the application/database design are necessary to take advantage of them.
- Performance: Evaluate current performance on SQL Server 2016. Determine if tuning/optimization is necessary.
- Backup and restore compatibility: Verify that backups taken on SQL Server 2016 are compatible with SQL Server 2019. Plan for restoring backups to the new instance.
- Database size: Plan for the migration of large databases. Ensure that the process does not impact database availability.
- Maintenance plans: Review and update maintenance plans as necessary to ensure compatibility with SQL Server 2019.
- Compatibility level of dependent databases: Ensure the compatibility level of dependent database…
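Compatibility levels are worth checking before and after the move. A small sketch; the database name Customer is a placeholder (SQL Server 2016 corresponds to level 130, SQL Server 2019 to 150):

```sql
-- Check the current compatibility level of every database on the instance
SELECT name, compatibility_level
FROM sys.databases;

-- After migration and testing, raise a database to the 2019 level;
-- 'Customer' is a hypothetical database name.
ALTER DATABASE [Customer] SET COMPATIBILITY_LEVEL = 150;
```

Restored 2016 databases keep their old compatibility level until it is raised explicitly, which lets you migrate first and adopt the new optimizer behavior separately.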

Key differences between SQL Server and MongoDB:

- Database model: SQL Server is relational; MongoDB is document-oriented.
- Query language: SQL Server uses SQL; MongoDB uses the MongoDB Query Language (MQL).
- Transactions: SQL Server is ACID-compliant; MongoDB is ACID-compliant with distributed transactions.
- Scalability: SQL Server supports vertical and horizontal scaling; MongoDB is designed primarily for horizontal scaling.
- Schema: SQL Server is schema-based; MongoDB is schema-free.
- Indexing: SQL Server offers B-tree and hash indexing; MongoDB offers multikey and geospatial indexing.
- Replication: SQL Server supports master-slave and peer-to-peer replication; MongoDB uses master-slave replication and replica sets.
- Backups: SQL Server provides full, differential, and log backups; MongoDB provides point-in-time and filesystem backups.
- Joins: Supported in SQL Server; limited in MongoDB (available via the $lookup aggregation stage).
- Data types: Fixed and variable in SQL Server; dynamic in MongoDB.
- Aggregation: Supported in SQL Server; MongoDB offers native aggregation and map-reduce.

Note that this is not an exhaustive list and there may be other differences or nuances to consider when choosing between SQL Server and MongoDB.
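To make the query-language difference concrete, here is the same lookup in both systems; the Customers table/collection and its columns are hypothetical, and the MQL equivalent is shown as a comment:

```sql
-- T-SQL: active customers in London
SELECT name, email
FROM Customers
WHERE city = 'London' AND is_active = 1;

-- MongoDB shell (MQL) equivalent:
-- db.customers.find(
--     { city: "London", is_active: true },
--     { name: 1, email: 1, _id: 0 }
-- );
```

The second argument to find() is a projection, playing roughly the role of the SELECT column list.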

Ways to reduce the size of the tempdb database in SQL Server without restarting the server

1. Clear cached objects: You can clear cached data and plans by running the following commands:

   DBCC FREEPROCCACHE;
   DBCC DROPCLEANBUFFERS;

   This removes cached plans and clean buffers from memory, which can help release space that tempdb is holding for cached objects.

2. Reduce the number of active sessions: The tempdb database stores temporary data for all active sessions. You can reduce the number of active sessions by killing any inactive or idle sessions.

3. Check for long-running transactions: Long-running transactions can cause the tempdb database to grow in size. You can check for long-running transactions by using the following query:

   SELECT
       at.transaction_id,
       at.name,
       at.transaction_begin_time,
       DATEDIFF(minute, at.transaction_begin_time, GETDATE()) AS 'Duration',
       at.isolation_level,
       ar.session_id,
       ar.database_id,
       ar.command,
       aa.start_time,
       DATEDIFF(minute, aa.start_time, GETDATE()) AS …
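Once internal usage has dropped, the tempdb files themselves can be shrunk online with DBCC SHRINKFILE. A sketch, assuming the default logical data file name tempdev and an arbitrary 1024 MB target:

```sql
USE tempdb;
GO
-- See how tempdb space is currently allocated (counts are in 8 KB pages)
SELECT SUM(user_object_reserved_page_count)     AS user_object_pages,
       SUM(internal_object_reserved_page_count) AS internal_object_pages,
       SUM(unallocated_extent_page_count)       AS free_pages
FROM sys.dm_db_file_space_usage;

-- Shrink the primary tempdb data file toward a 1024 MB target;
-- the file will only shrink as far as currently unused space allows.
DBCC SHRINKFILE (tempdev, 1024);
```

If the shrink makes little progress, pages are still in use; re-check after clearing caches or once long-running sessions finish.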

SQL Server 2019 Read Write Routing URL

SQL Server 2019 introduces a new feature for Availability Groups that addresses the challenges of read/write connection redirection. With this feature, client applications can connect to any replica of the Availability Group, and the connection will be automatically redirected to the primary replica based on the Availability Group configuration and the connection intent specified in the connection string.

In previous versions of SQL Server, an administrator had to configure the Availability Group listener (and the corresponding cluster resource) to direct SQL Server traffic to the primary replica, ensuring that clients reconnected to the active primary node in the event of a failover. However, not all cluster technologies that support SQL Server Availability Groups offered a listener-like capability, making this process more complicated.

The secondary-to-primary replica connection redirection feature in SQL Server 2019 allows cli…
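Redirection is configured per replica with the READ_WRITE_ROUTING_URL option. A sketch, in which the Availability Group name AG1, the server names, and the endpoint URLs are all placeholders:

```sql
-- Set the read/write routing URL on each replica of the Availability Group.
-- 'AG1', 'SQLNODE1'/'SQLNODE2', and the tcp:// endpoints are hypothetical.
ALTER AVAILABILITY GROUP [AG1]
MODIFY REPLICA ON N'SQLNODE1'
WITH (PRIMARY_ROLE (READ_WRITE_ROUTING_URL = N'tcp://SQLNODE1.contoso.com:1433'));

ALTER AVAILABILITY GROUP [AG1]
MODIFY REPLICA ON N'SQLNODE2'
WITH (PRIMARY_ROLE (READ_WRITE_ROUTING_URL = N'tcp://SQLNODE2.contoso.com:1433'));
```

Clients then specify ApplicationIntent=ReadWrite in the connection string; a connection landing on a secondary is redirected to whichever replica currently holds the primary role.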

AUTOMATED RESTORE TLOG SCRIPT

"In the event of database corruption or data loss, if multiple log or differential files need to be restored, this script is capable of generating the necessary restore script for all log files to be restored onto the database."

USE Master;
GO
SET NOCOUNT ON

-- 1 - Variable declaration
DECLARE @dbName sysname
DECLARE @backupPath NVARCHAR(500)
DECLARE @cmd NVARCHAR(500)
DECLARE @fileList TABLE (backupFile NVARCHAR(255))
DECLARE @lastFullBackup NVARCHAR(500)
DECLARE @lastDiffBackup NVARCHAR(500)
DECLARE @backupFile NVARCHAR(500)

-- 2 - Initialize variables
SET @dbName = 'Customer'
SET @backupPath = 'D:\SQLBackups\'

-- 3 - Get list of files
SET @cmd = 'DIR /b "' + @backupPath + '"'
INSERT INTO @fileList(backupFile)
EXEC master.sys.xp_cmdshell @cmd

-- 4 - Find latest full backup
SELECT @lastFullBackup = MAX(backupFile)
FROM @fileList
WHERE backupFile LIKE '%.BAK'
    AND ba…
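What the script ultimately emits is an ordered restore sequence. As a hedged illustration of that output (the file names below are placeholders, not files the script is known to produce):

```sql
-- 1. Restore the most recent full backup, leaving the database restoring
RESTORE DATABASE [Customer]
FROM DISK = N'D:\SQLBackups\Customer_Full.BAK'
WITH NORECOVERY;

-- 2. Restore each transaction log backup in sequence
RESTORE LOG [Customer]
FROM DISK = N'D:\SQLBackups\Customer_Log_01.TRN'
WITH NORECOVERY;

-- 3. Bring the database online after the last log is applied
RESTORE DATABASE [Customer] WITH RECOVERY;
```

Every step except the last uses WITH NORECOVERY so further log backups can still be applied; WITH RECOVERY is issued exactly once, at the end.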

Voting in SQL cluster

Voting is a critical component of a SQL cluster that helps to ensure high availability and failover readiness. In a SQL cluster, voting is used to determine which nodes are eligible to participate in quorum and which nodes should be considered to be in a failed state. By configuring voting in a SQL cluster, you can ensure that the cluster is highly available and ready for failover in the event of a node failure.

Here are the general steps for configuring voting in a SQL cluster:

Step 1: Identify the number of nodes in the SQL cluster. The first step in configuring voting is to identify the number of nodes in the cluster. For a two-node cluster, a disk or file share witness can be used as the third vote. For clusters with three or more nodes, the nodes themselves can be used as voting members.

Step 2: Determine the number of votes required to achieve quorum. Quorum requires a majority of the configured votes, i.e. floor(total votes / 2) + 1…
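When Availability Groups are involved, the vote assignment can be inspected from inside SQL Server without opening Failover Cluster Manager. A small sketch using the cluster-member DMV:

```sql
-- Inspect cluster members and their quorum votes from within SQL Server
SELECT member_name,
       member_type_desc,
       member_state_desc,
       number_of_quorum_votes
FROM sys.dm_hadr_cluster_members;
```

A witness appears here alongside the nodes, so the vote total used for the quorum calculation can be read straight from the number_of_quorum_votes column.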