
Saturday, September 20, 2025

SQL Server Task Manager using TSQL

SQL Server Task Manager & System Memory Query

In this post, we’ll break down a SQL query that retrieves the list of processes running on the server (the same data Windows Task Manager shows) and fetches important system memory statistics from SQL Server. This combined approach helps system administrators monitor running processes and memory usage on the server effectively. Let's dive right in!

1. Retrieving Task Information from the Task Manager

The first part of the query extracts the list of tasks running on your SQL Server. It uses the xp_cmdshell extended stored procedure to execute the Windows tasklist command, which lists the running processes in CSV format. This data is then inserted into a temporary table.


-- Drop the temporary table if it exists
DROP TABLE IF EXISTS #TM

-- Create the temporary table for storing task data
CREATE TABLE #TM (id int identity, tsk varchar(1000))

-- Insert tasklist output into the temporary table
INSERT INTO #TM (tsk) EXEC xp_cmdshell 'tasklist /v /FO csv'
    

Here’s a quick explanation of the steps in this code:

  • xp_cmdshell runs the tasklist command with the /v option for detailed information and /FO csv for CSV formatting.
  • The task list is inserted into the temporary table #TM, where each row contains the task details like process name, PID, memory usage, etc.

2. Cleaning and Formatting the Task Data

Once we have the raw task data, we need to clean it up. A Common Table Expression (CTE) is used here to remove unwanted characters like commas and double quotes from the CSV output, making it easier to extract individual task details later.


;with ct as (
    -- skip id = 1, which is the CSV header row emitted by tasklist
    select *, 
           replace(replace(replace(tsk,'","','|'),'"',''),',','') ntsk
    from #TM where id >= 2
)
    

What’s happening here:

  • The replace function is applied multiple times to remove unwanted characters like quotes and commas from the task data.
  • The cleaned data is stored in a new column ntsk, which will be processed further.

3. Parsing the Data and Extracting Specific Information

Next, we use SQL Server’s JSON_VALUE function to parse the cleaned task data and extract specific fields, such as the task name, PID, memory usage, and others. We also use CROSS APPLY to convert the cleaned data into a JSON array format that’s easier to work with.


Select A.id,
       ImageName = nullif(JSON_VALUE(JS,'$[0]'),''),
       PID = nullif(JSON_VALUE(JS,'$[1]'),''),
       SessionName = nullif(JSON_VALUE(JS,'$[2]'),''),
       SessionNo = nullif(JSON_VALUE(JS,'$[3]'),''),
       MemUsage = cast(replace(nullif(JSON_VALUE(JS,'$[4]'),''), ' k','') as int),
       Status = nullif(JSON_VALUE(JS,'$[5]'),''),
       UserName = nullif(JSON_VALUE(JS,'$[6]'),''),
       CPUTime = nullif(JSON_VALUE(JS,'$[7]'),''),
       WindowTitle = nullif(JSON_VALUE(JS,'$[8]'),'')
From ct A
CROSS APPLY (values ('["'+replace(string_escape(ntsk,'json') ,'|','","')+'"]') ) B(JS)
order by MemUsage desc
    

Here’s how the data is processed:

  • The string_escape function prepares the data for JSON formatting by escaping special characters, and the replace function replaces pipes with commas to create a valid JSON array.
  • JSON_VALUE is then used to extract specific fields from the JSON array. For example, $[0] extracts the image name (task name), $[1] gets the PID, and so on.
  • We use nullif to handle empty strings and replace them with NULL.
  • The result is ordered by memory usage in descending order to highlight the processes that are consuming the most resources.
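To see what the CROSS APPLY is doing, here is a standalone sketch with one illustrative cleaned row (the values are made up, not real tasklist output):

```sql
-- Illustrative only: one cleaned tasklist row after the replaces in the CTE
DECLARE @ntsk varchar(1000) =
    'sqlservr.exe|4816|Services|0|812340 K|Running|NT SERVICE\MSSQLSERVER|1:23:45|N/A';

-- Same wrapping as the CROSS APPLY: escape for JSON, turn pipes into ","
DECLARE @JS nvarchar(max) =
    '["' + replace(string_escape(@ntsk, 'json'), '|', '","') + '"]';

SELECT ImageName = JSON_VALUE(@JS, '$[0]'),
       PID       = JSON_VALUE(@JS, '$[1]'),
       MemUsage  = JSON_VALUE(@JS, '$[4]');
```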

4. Retrieving System Memory Information

Finally, we fetch system memory details using the sys.dm_os_sys_memory DMV (Dynamic Management View). This provides crucial information about the physical memory available on the server.


-- Retrieve system memory information
SELECT   @@servername as servername,
         total_physical_memory_kb / 1024 AS total_physical_memory_mb,
         available_physical_memory_kb / 1024 AS available_physical_memory_mb,
         system_memory_state_desc
FROM     sys.dm_os_sys_memory;
    

What this query does:

  • total_physical_memory_kb and available_physical_memory_kb give the total and available physical memory in kilobytes. We divide these by 1024 to convert them into megabytes (MB).
  • system_memory_state_desc describes the current state of the system’s memory (whether it’s under stress, healthy, etc.).
  • @@SERVERNAME returns the name of the SQL Server instance where this query is running.
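A useful companion query, if you also want to see how much of that memory the SQL Server process itself is using, is sys.dm_os_process_memory:

```sql
-- Memory used by the SQL Server process itself (complements sys.dm_os_sys_memory)
SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb,
       memory_utilization_percentage,
       process_physical_memory_low
FROM sys.dm_os_process_memory;
```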

5. Conclusion

This SQL query provides an effective way to monitor running tasks and system memory on your server. It allows you to:

  • Retrieve detailed task/process information from the Windows Task Manager.
  • Format and clean up raw task data into a readable structure.
  • Get important memory statistics to keep your server healthy and optimized.

By using this query, you can quickly identify high-memory processes and track overall system performance in a simple, easy-to-understand format.

Note: The xp_cmdshell extended stored procedure must be enabled for this query to work. Make sure your environment allows the execution of this command.
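If xp_cmdshell is disabled (the default), you can enable it with sp_configure, assuming you have the required permissions and your security policy allows it:

```sql
-- Enable xp_cmdshell (review your security policy first)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
```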

If you found this tutorial helpful, be sure to share it with your fellow SQL enthusiasts or subscribe for more beginner-friendly guides!

Sunday, September 29, 2024

How to Remove Duplicate Records in SQL Server


If you're working with databases, one common problem you might face is duplicate data. Duplicates can cause a lot of headaches, especially when it comes to data analysis or reporting. But don’t worry! In this post, we’ll walk through a simple way to remove duplicate records in SQL Server using a stored procedure that you can use in your own projects.

We’ll break down the SQL code step-by-step to make sure you understand how it works. By the end of this tutorial, you’ll be able to use this technique to keep your data clean and organized.

Step 1: What is a Duplicate Record?

A duplicate record is when you have two or more rows in your table with the exact same data in one or more columns. For example, if you have a list of customers and two customers have the same name, email, and phone number, those would be considered duplicates.
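Before deleting anything, it helps to see the duplicates first. A simple GROUP BY ... HAVING query does this; the table and column names below are hypothetical:

```sql
-- Hypothetical example: list emails that appear more than once
SELECT Email, COUNT(*) AS copies
FROM dbo.Customers
GROUP BY Email
HAVING COUNT(*) > 1;
```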

Step 2: The Concept of the RemoveDuplicate Procedure

In SQL Server, you can use a stored procedure to automate the process of removing duplicates. A stored procedure is like a saved set of SQL commands that can be executed later. The RemoveDuplicate procedure that we’ll look at helps to:

  • Find duplicate records in a table.
  • Remove the extra copies, leaving only one unique row.

Step 3: Understanding the Code

Let’s break down the SQL code that removes duplicates. Don’t worry if it seems complex at first. We’ll explain it in a way that’s easy to follow.

Step 3.1: The Procedure and Parameters

The procedure is created with the following parameters:

CREATE PROCEDURE RemoveDuplicate @SCDID sysname, @IsDebug bit = 0
  • @SCDID: This is an ID that identifies the specific rule you want to apply for removing duplicates. It helps the procedure know which table to work with.
  • @IsDebug: This is a flag that lets you print out the SQL commands for debugging. If you're just starting out, you can use this to see the queries that will be executed.

Step 3.2: Using ROW_NUMBER() to Identify Duplicates

The first part of the code uses a Common Table Expression (CTE) and the ROW_NUMBER() function to label rows:

WITH ct AS (
    SELECT *, rn = ROW_NUMBER() OVER(PARTITION BY ColumnSearch ORDER BY ColumnSearch)
    FROM SourceDatabase.SourceSchema.SourceObject
)

The ROW_NUMBER() function assigns a unique number to each row, starting from 1. The key part is PARTITION BY, which groups rows based on a specific column. For example, if you're looking for duplicates in the "Email" column, all rows with the same email will be grouped together.

After the row numbers are assigned, we can delete the duplicates.

Step 3.3: Removing Duplicates

Now that the rows are numbered, we can delete the duplicates by selecting rows with rn >= 2 (anything that’s not the first occurrence):

DELETE FROM ct WHERE rn >= 2;

This means that if there are multiple rows with the same data in the ColumnSearch, all except the first one will be deleted.
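Here is a minimal, self-contained sketch of the same pattern against a throwaway temp table (names and data are illustrative):

```sql
-- Demo table with duplicate emails
DROP TABLE IF EXISTS #Customers;
CREATE TABLE #Customers (Id int IDENTITY, Email varchar(100));
INSERT INTO #Customers (Email) VALUES ('a@x.com'), ('a@x.com'), ('b@x.com');

-- Number rows within each email group, then delete everything after the first occurrence
WITH ct AS (
    SELECT *, rn = ROW_NUMBER() OVER (PARTITION BY Email ORDER BY Email)
    FROM #Customers
)
DELETE FROM ct WHERE rn >= 2;

SELECT * FROM #Customers;  -- one row per email remains
```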

Step 3.4: Handling Column Exclusions

Sometimes, you don’t want to compare all columns for duplicates. In that case, we use the ColumnExclude parameter to exclude certain columns from the comparison. The code dynamically builds a list of column names, excluding any columns specified in ColumnExclude:

SELECT @rOut = STRING_AGG(c.name, ',')
    FROM SourceDatabase.sys.COLUMNS c
    WHERE c.name NOT IN (@ColumnExclude)

(This snippet is simplified: a variable inside NOT IN is compared as a single string, so the full procedure splices the excluded names into dynamic SQL instead.)

Step 3.5: Final Cleanup Using LAG()

In the final step, we use the LAG() function, which allows us to compare each row with the previous row in a group. Here @rval stands for the comma-separated column list built in the previous step, spliced into the dynamic SQL. If the current row has the same data as the previous one, it’s considered a duplicate and will be removed:

WITH ct AS (
    SELECT *, Ckp = CONCAT(@rval),
           PRv = LAG(CONCAT(@rval)) OVER(PARTITION BY ColumnSearch ORDER BY ColumnSearch)
    FROM SourceDatabase.SourceSchema.SourceObject
)
DELETE FROM ct WHERE ISNULL(ckp, '') = ISNULL(PRv, '');

Step 4: Executing the Procedure

Once you understand how the procedure works, you can execute it to remove duplicates. Simply call the procedure with the SCDID (rule ID) and @IsDebug parameter (set to 1 for debugging):

EXEC RemoveDuplicate @SCDID = 'yourRuleID', @IsDebug = 1;

If @IsDebug is set to 1, the procedure will print out the SQL commands it is going to execute, so you can check them before they run.

Step 5: Original Table Design for SCD2_Rule

Here’s the original design of the SCD2_Rule table used in the stored procedure. This table contains information about your source and target tables and the columns used for comparison:

Column          Type
SCDID           sysname
SourceDatabase  sysname
SourceSchema    sysname
SourceObject    sysname
TargetDatabase  sysname
TargetSchema    sysname
TargetObject    sysname
ColumnSearch    nvarchar(max)
ColumnExclude   nvarchar(max)

Step 6: Complete SQL Stored Procedure Code

Here’s the full SQL code for the RemoveDuplicate procedure, including everything we’ve discussed so far:


CREATE PROCEDURE RemoveDuplicate
    @SCDID sysname,
    @IsDebug bit = 0
AS
BEGIN
    DECLARE @Sql AS NVARCHAR(MAX), @rVal varchar(max)

    -- Fetch the SQL query template for the given SCDID
    SELECT @Sql = 'WITH ct AS (
        SELECT *, rn = ROW_NUMBER() OVER(PARTITION BY ' + ColumnSearch + ' ORDER BY ' + ColumnSearch + ')
        FROM ' + CONCAT(SourceDatabase, '.', SourceSchema, '.', SourceObject) + '
    ) DELETE FROM ct WHERE rn >= 2'
    FROM dbo.SCD2_Rule WHERE SCDID = @SCDID

    -- Debugging: Print the dynamic SQL if IsDebug is set to 1
    IF @IsDebug = 1
        PRINT @Sql

    -- Execute the dynamic SQL to remove duplicates
    EXEC sp_executesql @Sql

    -- Other duplicate handling: Exclude specified columns
    SELECT @Sql = 'SELECT @rOut = STRING_AGG(c.name, '','')
        FROM ' + SourceDatabase + '.sys.COLUMNS c
        JOIN ' + SourceDatabase + '.sys.tables t ON c.object_id = t.object_id
        WHERE t.name = ''' + SourceObject + ''' 
        AND c.name NOT IN (' + b.a + ') AND c.name <> ''DataValidTo'''
    FROM dbo.SCD2_Rule a
    OUTER APPLY (
        SELECT STRING_AGG('''' + b1.value + '''', ',') a
        FROM STRING_SPLIT(a.ColumnSearch + ',' + a.ColumnExclude, ',') b1
    ) b
    WHERE SCDID = @SCDID

    -- Debugging: Print the dynamic SQL for column exclusions
    IF @IsDebug = 1
        PRINT CONCAT('@Sql : ', @Sql)

    -- Execute the query for excluding columns
    EXEC sp_executesql @Sql, N'@rOut varchar(max) OUTPUT', @rOut=@rVal OUTPUT;

    -- Debugging: Print the result of excluded columns
    IF @IsDebug = 1
        PRINT CONCAT('@rVal : ', @rVal)

    -- Final cleanup: Remove records with identical values in key columns
    SELECT @Sql = 'WITH ct AS (
        SELECT *, Ckp = CONCAT(' + @rVal + '),
            PRv = LAG(CONCAT(' + @rVal + ')) OVER(PARTITION BY ' + REPLACE(ColumnSearch, ',DataSystemDate', '') + ' ORDER BY ' + ColumnSearch + ')
        FROM ' + CONCAT(SourceDatabase, '.', SourceSchema, '.', SourceObject) + '
    ) DELETE FROM ct WHERE ISNULL(ckp, '''') = ISNULL(PRv, '''')'
    FROM dbo.SCD2_Rule WHERE SCDID = @SCDID

    -- Debugging: Print the final SQL for cleanup
    IF @IsDebug = 1
        PRINT @Sql

    -- Execute the final cleanup query
    EXEC sp_executesql @Sql
END
        

Step 7: Conclusion

You’ve just learned how to remove duplicate records from your SQL Server tables using a stored procedure. The RemoveDuplicate procedure is powerful because it allows you to automate the process of cleaning up your data.

Key Takeaways:

  • ROW_NUMBER() is used to assign a unique number to each row based on certain columns.
  • LAG() compares rows to detect duplicates.
  • Dynamic SQL is used to make the procedure flexible for different tables and columns.
  • Debugging is made easy with the @IsDebug flag to see the SQL commands before they run.

Now you can keep your SQL Server tables clean by removing duplicate data with ease. If you have any questions or need further clarification, feel free to ask in the comments!

Bonus Tip:

If you're working with large datasets, be sure to test the procedure on a smaller subset of your data before running it on your production database. Always back up your data first!

If you found this tutorial helpful, be sure to share it with your fellow SQL enthusiasts or subscribe for more beginner-friendly guides!

Saturday, September 19, 2020

Track data changes using CDC in SQL Server Enterprise edition

SQL Server offers two features that monitor changes in a database: Change Data Capture and Change Tracking. These features help applications identify Data Manipulation Language (DML) changes such as insertions, updates, and deletions made to user tables in a database.

Change Data Capture Configuration Settings

  • maxtrans – The maximum number of transactions to process per scan cycle.
  • maxscans – The maximum number of scan cycles executed to extract all rows from the log.
  • continuous – A flag indicating whether the capture job should run continuously (1) or in one-time mode (0). For further details, refer to sys.sp_cdc_add_job (Transact-SQL).
  • pollinginterval – The interval, in seconds, between log scan cycles.
  • retention – How long, in minutes, change rows are retained in the change tables.
  • threshold – The maximum number of delete entries that can be removed in a single cleanup statement.
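These settings are applied through sys.sp_cdc_change_job. For example, to tune the capture job (the values below are illustrative, not recommendations):

```sql
-- Illustrative values: tune the CDC capture job
EXECUTE sys.sp_cdc_change_job
    @job_type = 'capture',
    @maxtrans = 500,
    @maxscans = 10,
    @continuous = 1,
    @pollinginterval = 5;
```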

Enabling Change Data Capture for a Database

USE MyDB  
GO  
EXEC sys.sp_cdc_enable_db  
GO

Disabling Change Data Capture for a Database

USE MyDB  
GO  
EXEC sys.sp_cdc_disable_db  
GO

Enabling Change Data Capture for a Table

  
EXEC sys.sp_cdc_enable_table  
@source_schema = N'dbo',  
@source_name = N'MyTable',  
@role_name = N'MyRole',  
@filegroup_name = N'MyDB_CT',  
@supports_net_changes = 1  
GO

Viewing CDC Job Configurations

EXEC sys.sp_cdc_help_jobs

Modifying CDC Job Settings

EXECUTE sys.sp_cdc_change_job  
@job_type = 'cleanup',  
@retention = 86400; -- 86400 minutes = 60 days

Getting Capture_Instance Name

EXEC sys.sp_cdc_help_change_data_capture

Example 1: Reading Changed Columns

USE AdventureWorks2014

DECLARE @from_lsn binary (10), @to_lsn binary (10)

SET @from_lsn = sys.fn_cdc_get_min_lsn('HumanResources_Shift')
SET @to_lsn = sys.fn_cdc_get_max_lsn()

SELECT * FROM cdc.fn_cdc_get_all_changes_HumanResources_Shift(@from_lsn, @to_lsn, 'all')
ORDER BY __$seqval

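Because the table was enabled with @supports_net_changes = 1, you can also ask for net changes, which returns one row per changed key reflecting only the final outcome over the LSN range:

```sql
-- Net changes (one row per changed key) for the same LSN range
USE AdventureWorks2014

DECLARE @from_lsn binary (10), @to_lsn binary (10)

SET @from_lsn = sys.fn_cdc_get_min_lsn('HumanResources_Shift')
SET @to_lsn = sys.fn_cdc_get_max_lsn()

SELECT * FROM cdc.fn_cdc_get_net_changes_HumanResources_Shift(@from_lsn, @to_lsn, 'all')
```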

Check for changes:

SELECT *  
FROM HumanResources.Shift

SELECT *  
FROM cdc.HumanResources_Shift_CT

Checking If CDC Is Already Enabled for a Database

SELECT [name], database_id, is_cdc_enabled  
FROM sys.databases

Checking If CDC Is Already Enabled for a Table

SELECT [name], is_tracked_by_cdc  
FROM sys.tables

CDC System Tables

  • cdc.captured_columns – Lists the columns captured.
  • cdc.change_tables – Displays all tables enabled for change capture.
  • cdc.ddl_history – Records all DDL changes since CDC was enabled.
  • cdc.index_columns – Shows the indexes associated with change tables.
  • cdc.lsn_time_mapping – Maps LSN numbers to transaction commit times.
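The cdc.lsn_time_mapping table pairs naturally with sys.fn_cdc_map_lsn_to_time, which translates an LSN into its transaction commit time:

```sql
-- Translate the current maximum LSN into a commit time
SELECT sys.fn_cdc_map_lsn_to_time(sys.fn_cdc_get_max_lsn()) AS max_lsn_commit_time;
```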

https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/track-data-changes-sql-server?view=sql-server-ver15

https://docs.microsoft.com/en-us/sql/relational-databases/system-functions/cdc-fn-cdc-get-all-changes-capture-instance-transact-sql?view=sql-server-ver15

How to work with Temporal Tables in SQL Server


SQL Server 2016 introduced support for temporal tables (also referred to as system-versioned temporal tables). This feature enables you to track historical data by capturing not only the current state of the data but also its state at any given point in time. Temporal tables, based on the ANSI SQL:2011 standard, are a significant addition to databases.

SELECT name, object_id, temporal_type, temporal_type_desc 
FROM sys.tables;

Possible temporal_type values:

temporal_type  temporal_type_desc
0              NON_TEMPORAL_TABLE
1              HISTORY_TABLE
2              SYSTEM_VERSIONED_TEMPORAL_TABLE

SELECT OBJECT_NAME(object_id) AS table_name, * 
FROM sys.periods;

Sample output:

table_name  name         period_type  period_type_desc    object_id  start_column_id  end_column_id
Employee    SYSTEM_TIME  1            SYSTEM_TIME_PERIOD  581577110  7                8

SELECT name AS Column_Name, generated_always_type, generated_always_type_desc 
FROM sys.columns;

List all temporal tables along with their history tables.

SELECT schema_name(t.schema_id) AS temporal_table_schema, 
       t.name AS temporal_table_name,
       schema_name(h.schema_id) AS history_table_schema, 
       h.name AS history_table_name, 
       CASE 
         WHEN t.history_retention_period = -1 
         THEN 'INFINITE' 
         ELSE CAST(t.history_retention_period AS VARCHAR) + ' ' + t.history_retention_period_unit_desc + 'S' 
       END AS retention_period 
FROM sys.tables t 
LEFT OUTER JOIN sys.tables h ON t.history_table_id = h.object_id 
WHERE t.temporal_type = 2 
ORDER BY temporal_table_schema, temporal_table_name

Creating a table with a history table.

CREATE TABLE dbo.Employee 
( 
  [EmployeeID] int NOT NULL PRIMARY KEY CLUSTERED, 
  [Name] nvarchar(100) NOT NULL, 
  [Position] varchar(100) NOT NULL, 
  [Department] varchar(100) NOT NULL, 
  [Address] nvarchar(1024) NOT NULL, 
  [AnnualSalary] decimal(10,2) NOT NULL, 
  [ValidFrom] datetime2 GENERATED ALWAYS AS ROW START, 
  [ValidTo] datetime2 GENERATED ALWAYS AS ROW END, 
  PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo) 
) 
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));

The script above will create two tables: Employee and EmployeeHistory.

Inserting and updating data in the tables above.

INSERT INTO Employee (EmployeeID, Name, Position, Department, Address, AnnualSalary) 
SELECT 1, 'himanshu', 'dba', 'IT', 'BC', 4000;
GO
INSERT INTO Employee (EmployeeID, Name, Position, Department, Address, AnnualSalary) 
SELECT 100, 'himanshu', 'dba', 'IT', 'BC', 4000;
GO
UPDATE Employee SET AnnualSalary = 5000 WHERE EmployeeID = 1;
GO
UPDATE Employee SET AnnualSalary = 4500 WHERE EmployeeID = 100;
GO
SELECT * FROM Employee;
SELECT * FROM EmployeeHistory;
GO
SELECT * FROM Employee 
FOR SYSTEM_TIME 
BETWEEN '2014-01-01 00:00:00.0000000' AND '2020-11-01 00:00:00.0000000';
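Temporal queries also support FOR SYSTEM_TIME AS OF, which returns the rows exactly as they existed at a single instant (the timestamp below is illustrative):

```sql
-- State of the table at one instant
SELECT * FROM Employee 
FOR SYSTEM_TIME AS OF '2020-10-01 00:00:00';
```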


Saturday, August 8, 2020

Availability group failover

Forced Manual Failover with Data Loss (2-step Process)

To force a failover with potential data loss, connect to the SQL Server instance where the secondary replica is hosted, and execute the following command:

ALTER AVAILABILITY GROUP [ag1] FORCE_FAILOVER_ALLOW_DATA_LOSS;

Once the original primary replica recovers, it will try to assume the primary role. To ensure the previous primary becomes secondary again, run the command below on that instance:

ALTER AVAILABILITY GROUP [ag1] SET (ROLE = SECONDARY);

Steps to Manually Failover Without Data Loss:

1. First, configure the target secondary replica to use SYNCHRONOUS_COMMIT mode:

ALTER AVAILABILITY GROUP [ag1]
     MODIFY REPLICA ON N''
     WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);

2. To verify that active transactions are committed to the primary and at least one synchronous secondary replica, run the following query:

SELECT ag.name, drs.database_id, drs.group_id, drs.replica_id,
   drs.synchronization_state_desc, ag.sequence_number
FROM sys.dm_hadr_database_replica_states drs, sys.availability_groups ag
WHERE drs.group_id = ag.group_id;

3. Update the REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT setting to 1:

ALTER AVAILABILITY GROUP [ag1]
     SET (REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT = 1);

4. Offline the primary replica in preparation for the role change:

ALTER AVAILABILITY GROUP [ag1] OFFLINE;

5. Promote the target secondary replica to the primary role:

ALTER AVAILABILITY GROUP [ag1] FORCE_FAILOVER_ALLOW_DATA_LOSS;

6. Change the role of the previous primary replica to SECONDARY. Run this on the SQL Server instance hosting the previous primary replica:

ALTER AVAILABILITY GROUP [ag1] SET (ROLE = SECONDARY);

For more information, see these resources:

  • Check the Writable Copy
  • Manual SQL Server Availability Group Failover

Saturday, August 24, 2019

How to stretch/partition a database in older SQL Server versions




Choosing the Right Database Architecture

These days, there are many types of storage systems, server hardware, and database technologies available on the market. Often, consultants and management teams believe that only high-end solutions like Oracle or NoSQL databases can meet their business needs — leading to expensive and sometimes unnecessary implementations.

However, the best solution truly depends on real-world exposure, experience, and actual business requirements. In this section, we’ll explore how to design a powerful and scalable database architecture that supports high performance and data availability. Database design is the heart of any business application — it must be done right.

Traditional Database Design

  • High-end database server with SSD/SAN/NAS storage.
  • Requires advanced configuration and high maintenance, often demanding more attention and longer processing times.
  • Backup and recovery tasks can be complex and time-consuming.

Alternative Database Design & Partitioning Concepts Using Diverse Infrastructure

All examples below are based on a primary database with multiple partitioned or distributed databases:

1. Cloud Hosting: Multiple Databases on a Single Server

Cloud: Single Server with Partitioned Databases
  • Partition databases to maximize hardware usage.
  • Design based on record volume, data importance, and key columns.
  • Consider storage capacity and application performance requirements.

2. Cloud Hosting: Multiple Servers with Hybrid Storage

Cloud: Multi-Server with Partitioned Databases
  • Utilize multiple cloud servers using a hybrid storage model.
  • Designate one primary and multiple child databases based on capacity and requirements.
  • Leverage dynamic processing and storage scalability.
  • Use linked servers with appropriate security to connect child databases to the primary.

3. On-Premises Hosting: Multiple Databases on a Single Server

  • Single on-premises server with partitioned databases across multiple disks.
  • One primary database and several child databases.
  • Map database files to specific storage tiers (high/mid/low).
  • Link tables via views or synonyms for unified access.
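The "views or synonyms" idea in option 3 can be sketched like this; the object names are hypothetical, and for a linked server you would use a four-part name instead:

```sql
-- Hypothetical: expose a child-database table inside the primary database
CREATE SYNONYM dbo.Orders_2019
FOR ChildDB.dbo.Orders_2019;
GO
-- Or a view that unions current and archived data for unified access
CREATE VIEW dbo.AllOrders AS
SELECT * FROM dbo.Orders
UNION ALL
SELECT * FROM dbo.Orders_2019;
```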

4. On-Premises Hosting: Multiple Stretch Databases Across Multiple Servers

On-Premises: Multi-Server Stretch Databases
  • Partition databases to leverage multiple on-premises servers efficiently.
  • Organize and access data through linked servers and data access libraries.
  • Distribute data according to access priority for optimal performance.

Pros

  • Small databases are easier to maintain than a single large database.
  • Simplifies backup, recovery, and restore operations.
  • Cost-effective — databases can be allocated to different storage tiers based on importance.
  • Speeds up UAT/TEST environment refresh cycles.
  • Improves server productivity and overall system performance.
  • Facilitates easier configuration of replication, log shipping, and mirroring.

Cons

  • Improper design and misconfiguration can be detrimental.
  • Requires careful monitoring and more administrative effort.

Conclusion

As discussed, the database is the core of your application. Start by identifying requirements such as expected data growth, key entities, and referential relationships. Evaluate all possible solutions with their pros and cons. Collaborate closely with your technical team before implementation. In many cases, SQL Server technology alone can satisfy your needs — provided you understand its capabilities and limitations.
