Run a Microsoft SQL Server Container on Mac with Azure Data Studio

Azure Data Studio is a cross-platform database development tool, originally branded as SQL Operations Studio, that provides a seamless database management experience whether users connect to on-premises or Azure-based data platforms.

Azure Data Studio is a cross-platform database tool for data professionals using the Microsoft family of on-premises and cloud data platforms on Windows, macOS, and Linux.


For many standard functions, Azure Data Studio offers a much richer experience than Microsoft's flagship database management tool, SQL Server Management Studio (SSMS).

Azure Data Studio has better IntelliSense than SQL Server Management Studio, with more extensive support for keywords and useful T-SQL snippets. For example, with a few key presses you can locate and open the snippet for a CREATE PROCEDURE statement.

Once you press Tab, the complete stored procedure template is inserted automatically, and you can change it as per your requirements.
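As a sketch, the CREATE PROCEDURE snippet expands to a template roughly like the following; the exact placeholder names vary between Azure Data Studio versions:

```sql
-- Approximate shape of the template the snippet inserts;
-- replace the placeholder name and parameters with your own.
CREATE PROCEDURE ProcedureName
    @param1 INT = 0,
    @param2 INT
AS
BEGIN
    SELECT @param1, @param2;
END
```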

Download the SQL Server Docker Image and Run a Container

First, pull the Docker image for SQL Server from the Docker registry; here I have selected the SQL Server Linux image.

Command: docker pull microsoft/mssql-server-linux

Check the image status by issuing the docker images command, which lists all images downloaded from the Docker registry.

Command: docker images

In Docker we can "install" an image by running it, which creates the actual Docker container: the executable that you are running. Let's run our Docker image with the docker run command:

Command: docker run -d --name homer -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourPassword>' -p 1433:1433 <imageName>

  • -e 'ACCEPT_EULA=Y': The -e option sets an environment variable that SQL Server depends on. In this case we have to accept the EULA to be able to use SQL Server.
  • -e 'SA_PASSWORD=Password@123': The SA_PASSWORD environment variable sets the password for the SA login.
  • -p 1433:1433: The -p option binds a port on the host machine (in my case, the Mac) to a port in the container. The port on the left side of the colon is the host port, and the port on the right side is the container port. Here I bind the default SQL Server port 1433 in the container to port 1433 on my Mac.
  • --name: Assigns a custom name to the Docker container.
  • -d: Runs the container detached from the terminal, which means you can close your terminal and your container is still running in the background. The final argument is the Docker image that we pulled previously.

After you have executed that Docker command, your Docker Container is up and running and ready to use.
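To verify, you can list running containers and check the startup log; the exact output will differ on your machine:

```shell
# The container named "homer" should be listed with 0.0.0.0:1433->1433/tcp
docker ps

# Check the SQL Server startup messages inside the container
docker logs homer
```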

Connect Azure Data Studio to the Container

We can connect Azure Data Studio directly to localhost, because in the last step we exposed port 1433 of the Docker container to our host machine.

Once the connection is established successfully, you can see the server details on the home pane. The container ID appears as the computer name in Azure Data Studio, which confirms that you are connected to the running SQL Server container and can now execute SQL queries.

Execute SQL Queries

After successfully connecting to the SQL Server container, right-click on a database and choose the New Query option. A query window will open, and you can write and run SQL queries.
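A quick sanity check after connecting is to run a trivial query, for example:

```sql
-- Confirms the connection and shows the SQL Server build running in the container
SELECT @@VERSION AS SqlServerVersion;

-- Lists the databases available on the instance
SELECT name FROM sys.databases;
```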


Running SQL Server on a Mac has always been a tedious task, and the tools available for the Mac were neither easy to use nor very responsive. But Docker makes a developer's life easier by running SQL Server on macOS with a much better experience.

It's now real: you can run SQL Server on the Mac, and with the help of Azure Data Studio you can even access SQL Server with a native macOS tool.

"DROP IF EXISTS" Syntax in SQL Server 2016

With SQL Server 2016, an important feature that developers had been missing for a long time was added: the "DROP IF EXISTS" syntax. Prior to SQL Server 2016, developers needed to check for a database object's existence before dropping or creating it.

Before SQL Server 2016, an IF EXISTS clause was used to check for the object's existence, and developers took the appropriate action if the object existed.

With the new syntax, if the database object doesn't exist, no error is raised; execution continues with the next statement in the batch.

IF EXISTS in Older Versions:

Previously you needed an IF EXISTS() condition to check whether the database object already existed; if it did, you dropped it and then created the new object, like:

IF EXISTS (SELECT name FROM master.sys.databases WHERE name = N'DatabaseName')
  Do your thing...

IF EXISTS (SELECT name FROM master.sys.databases WHERE name = N'Sql2016DB')
    DROP DATABASE Sql2016DB

New Way:

You can see that the new syntax is much easier and shorter than the old one. Developers no longer need to memorize the whole long pattern to check for and drop database objects.

Query Syntax: DROP <DbObject> [ IF EXISTS ] <ObjectName>

Here DbObject can be any database, stored procedure, function, table, or trigger.

Let's see real examples for a few important database objects:

1. Drop Database If Exists:

Now the query syntax to drop a database is very easy and short.

Syntax: DROP DATABASE IF EXISTS <DatabaseName>

Example: Let's say our database name is SqlDb2016; the query below drops the database using the new syntax.

DROP DATABASE IF EXISTS SqlDb2016


2. Drop Table If Exists:

Syntax: DROP TABLE IF EXISTS <TableName>

Example: Let "EmployeeMaster" be a table in the SqlDb2016 database; the query below drops the table if it exists.

Old Syntax :
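The original screenshot is missing; the old pattern for the EmployeeMaster table from the example above would be:

```sql
-- Check for existence first, then drop
IF EXISTS (SELECT * FROM sys.tables WHERE name = N'EmployeeMaster')
    DROP TABLE EmployeeMaster;
```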


New query Syntax in Sql Server 2016
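The screenshot is missing; the new SQL Server 2016 statement is simply:

```sql
DROP TABLE IF EXISTS EmployeeMaster;
```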


3. Drop Procedure If Exists:

Dropping a procedure comes up often when writing complex procedures, and developers need logic to check whether the procedure already exists in the database.

The new "DROP IF EXISTS" syntax makes developers' lives easier; it is shorter and more memorable.

Old Query Syntax:
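The original screenshot is missing; assuming a procedure named usp_GetEmployees (a placeholder name), the old pattern would be:

```sql
IF EXISTS (SELECT * FROM sys.objects WHERE type = 'P' AND name = N'usp_GetEmployees')
    DROP PROCEDURE usp_GetEmployees;
```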



New Query Syntax:
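The screenshot is missing; with the same placeholder procedure name, the new statement is:

```sql
DROP PROCEDURE IF EXISTS usp_GetEmployees;
```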


4. Drop Function If Exists:

User-defined functions are routines that accept parameters, perform an action and return the result of that action as a value. The return value can either be a single scalar value or a result set (table).

Old Query Syntax:
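The original screenshot is missing; assuming a function named fn_GetSalary (a placeholder name), the old pattern would be:

```sql
-- 'FN', 'IF', and 'TF' cover scalar, inline table-valued, and table-valued functions
IF EXISTS (SELECT * FROM sys.objects WHERE type IN ('FN', 'IF', 'TF') AND name = N'fn_GetSalary')
    DROP FUNCTION fn_GetSalary;
```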


New Query Syntax:
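The screenshot is missing; with the same placeholder function name, the new statement is:

```sql
DROP FUNCTION IF EXISTS fn_GetSalary;
```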


5. Drop View If Exists:

A view is a virtual table based on the result set of an SQL statement. A view contains rows and columns, just like a real table; its fields come from one or more real tables in the database.

Old Query Syntax:
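The original screenshot is missing; assuming a view named vw_EmployeeSales (a placeholder name), the old pattern would be:

```sql
IF EXISTS (SELECT * FROM sys.views WHERE name = N'vw_EmployeeSales')
    DROP VIEW vw_EmployeeSales;
```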


New Query Syntax:
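The screenshot is missing; with the same placeholder view name, the new statement is:

```sql
DROP VIEW IF EXISTS vw_EmployeeSales;
```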


6. Drop Trigger If Exists:

A trigger is a special kind of stored procedure that automatically executes when an event occurs in the database server. DML triggers execute when a user tries to modify data through a data manipulation language (DML) event. DML events are INSERT, UPDATE, or DELETE statements on a table or view. These triggers fire when any valid event is fired, regardless of whether or not any table rows are affected.

Old Query Syntax:
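The original screenshot is missing; assuming a trigger named trg_EmployeeAudit (a placeholder name), the old pattern would be:

```sql
IF EXISTS (SELECT * FROM sys.triggers WHERE name = N'trg_EmployeeAudit')
    DROP TRIGGER trg_EmployeeAudit;
```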


New Query Syntax:
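The screenshot is missing; with the same placeholder trigger name, the new statement is:

```sql
DROP TRIGGER IF EXISTS trg_EmployeeAudit;
```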


The DROP IF EXISTS statement is not limited to the database objects mentioned above; it can also be used with other objects such as:

  • Index
  • Constraint
  • Column
  • Schema
  • Synonym
  • Type
  • User
  • Role


Different Behavior of @@ROWCOUNT with SET, PRINT, and SET NOCOUNT ON

A few days back someone asked me why the SQL Server global variable @@ROWCOUNT produces different output for different sets of queries in a stored procedure.

Below is a simplified version of the query. Can you tell why @@ROWCOUNT will be 0?

  1. When PRINT statements are used before accessing the @@ROWCOUNT variable.

Think about what output the PRINT and SELECT statements at the end of the batch will produce.
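The original screenshot of the query is missing; a sketch matching the description (the table name Employee and the variable @ShowCount are assumptions) might be:

```sql
DECLARE @ShowCount INT;

SELECT * FROM Employee;            -- assume this returns 3 rows

PRINT 'Finished selecting rows';   -- PRINT resets @@ROWCOUNT to 0

SET @ShowCount = @@ROWCOUNT;       -- captures 0, not 3
SELECT @ShowCount AS ShowCount;
```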


Are you thinking 3, since there are 3 rows in the table? If you are thinking in that direction, it is completely wrong.

Answer: if you run the complete query, the value of the variable @ShowCount will be 0.

Why: because PRINT resets @@ROWCOUNT to 0, and that same value is assigned to the @ShowCount variable, so @ShowCount returns 0.

So it is recommended not to use PRINT statements before reading @@ROWCOUNT; if you do, you will get surprising output that does not match your expectations.

Best practice: do not run any other statement before reading @@ROWCOUNT, and use a local variable to hold its value. Do not use the global variable directly in your business logic.

Statements that make a simple assignment always set the @@ROWCOUNT value to 1. No rows are sent to the client. Examples of these statements are: SET @local_variable, RETURN, READTEXT, and SELECT statements without a query, such as SELECT GETDATE() or SELECT 'Generic Text'.

Correct way to use @@RowCount:

If you want to store the number of rows affected by a DML statement, you need to grab @@ROWCOUNT immediately after the DML statement. There can't be any code between the DML statement and the code that stores @@ROWCOUNT into a variable.

Here I am using the same query as at the top of the article, with small changes to make it behave correctly.
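The screenshot is missing; the corrected version of the earlier sketch (same assumed table and variable names) would be:

```sql
DECLARE @ShowCount INT;

SELECT * FROM Employee;            -- assume this returns 3 rows

SET @ShowCount = @@ROWCOUNT;       -- capture immediately: 3

PRINT 'Finished selecting rows';   -- safe now; @ShowCount already holds 3
SELECT @ShowCount AS ShowCount;    -- returns 3
```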

Now you can see that the output is 3, as expected, because I moved the PRINT statement to after the reading of the @@ROWCOUNT variable.

There are more interesting facts about the @@ROWCOUNT variable:

  1. An IF statement also resets @@ROWCOUNT, and you get back 0. Try the query below:
IF 1 = 1
    SELECT @@ROWCOUNT

2. A SET statement also resets @@ROWCOUNT, and you get back 1.


SELECT 1 AS output
UNION ALL
SELECT 2 AS output

DECLARE @Flag BIT
SET @Flag = 1

SELECT @@ROWCOUNT AS TotalRecords



Again a surprising output: given the SELECT statements, @@ROWCOUNT should return 2 because UNION ALL combines both SELECTs, but @@ROWCOUNT returns 1.

This happens because a SET statement runs just before @@ROWCOUNT is read; SET resets the counter to 1, so the engine produces a different output.

There are many more statements for which @@ROWCOUNT behaves differently than expected.



What is a UDL (Universal Data Link)?

One of the most common problems in any data-driven application is the database connection string. How many times in your development life have you heard "I can't connect to the database", only for the problem to end up being a minor detail in the connection string?

“How to verify sql server connection without SSMS?” or “How to find installed sql server instance names on a machine without SSMS or any tools?”

The Microsoft Data Access Components (MDAC) provide the Data Link Properties dialog box as the common user interface for specifying connection information to a data provider on Windows 2000 and later operating systems. You can use the Data Link Properties dialog box to save connection information in a universal data link (.udl) file.

To create a Universal Data Link (.udl) file

  • RUN → Notepad
  • Once the notepad is opened, File → Save As… (Do not type anything in the notepad)
  • Choose the path → Save as type: All Files (*.*) → File name: Rakesh.udl (File name can be anything but the extension must be .udl)


  • Double click on the udl file.


  • Select the provider for the database you want to test; here "Microsoft OLE DB Provider for SQL Server" is used.
  • Now go to the Connection tab and select the server to connect to. You may browse the list, but SQL Express instances do not always show up; you can also type the server name into the box. Select the appropriate authentication mode (Windows or SQL Server).
  • Now you will see all databases in the drop-down; select one.

  • Assuming all's well, click the Test Connection button.


  • Now open the file in Notepad; remember, the .udl file is simply a text file. You should see something like this:
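The screenshot is missing; a typical .udl file (the database and server names here are placeholders) looks something like:

```
[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=MyDatabase;Data Source=MYSERVER
```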



The file contains the connection string you just tested. Copy and paste it where needed in your application and you should be good to go.

SSRS: Multi-Level Drill-Down Reports

You can organize data in a variety of ways to show the relationship of the general to the detailed. You can put all the data in the report, but set it to be hidden until a user clicks to reveal details; this is a drill-down action.

You can display the data in a data region, such as a table or chart, which is nested inside another data region, such as a table or matrix.

In order to define a drill down action, a tablix must first be created with the appropriate level of aggregation all the way to the level of drill down detail desired. This detail includes both the row and columns desired for the lowest level viewing needed. The drill down functionality works either at the row level, the column level, or the group (column or row) level.

Let’s start step by step:

  1. Create a table "DimEmployee" with the following employee details.
EmployeeKey | ParentEmployeeKey | SalesTerritoryKey | EmployeeName
1           | NULL              | 101               | Rakesh
2           | NULL              | 102               | Amit
3           | 2                 | 103               | Sunil
4           | 2                 | 104               | Avay
5           | 3                 | 105               | Sachin
6           | 3                 | 106               | Gaurav
7           | 5                 | 107               | Rupesh

2. Create another table "EmpSalesAmount" with sales details for each employee.

EmployeeKey | Amount
1           | 5000
2           | 5225
3           | 4879
4           | 6589
5           | 412
6           | 5987

3. Create a query to read the details from the database, using a recursive CTE to compute each employee's level.


;WITH cte AS (
    SELECT EmployeeKey, EmployeeName, ParentEmployeeKey, 1 AS EmployeeLevel
    FROM dbo.DimEmployee
    WHERE ParentEmployeeKey IS NULL

    UNION ALL

    SELECT c.EmployeeKey, c.EmployeeName, c.ParentEmployeeKey, p.EmployeeLevel + 1
    FROM dbo.DimEmployee c
    JOIN cte p ON p.EmployeeKey = c.ParentEmployeeKey
)
SELECT cte.*, ISNULL(f.Amount, 0) AS SalesAmount
FROM cte
LEFT JOIN dbo.EmpSalesAmount f ON f.EmployeeKey = cte.EmployeeKey
ORDER BY EmployeeLevel, EmployeeName

4. Create a table on the report and drag and drop the EmployeeName field from the dataset:


5. Right-click on the row and select Row Group properties:


6. Configure the row group properties as per below:



7. Set the Left Indent property:


8. We are done with our drill-down report. To see the preview, click the Preview tab:


One Step to Understand the Relationship of Primary Keys and Non-Clustered Indexes

Keys and indexes are fundamental concepts of any RDBMS and play a crucial role in designing a good and efficient database. If you have to design a performance-oriented database, you have to implement keys and indexes properly.

Developers often do not specify which column should have the clustered index. By default, the primary key becomes the clustered index, but the reverse is not true: you can also create a clustered index on a non-primary-key column. The default is now so common that many people have forgotten that a primary key and a clustered index are two different things.

Another common misconception most people have is “Creating a primary key on a table makes reads fast”.

To clarify: a primary key is a constraint, and by itself it has nothing to do with fast reads. Behind the scenes, a primary key creates a clustered index, which is what makes reads fast. Optionally, we can create the primary key as non-clustered, with a separate clustered index contributing to faster reads.

One common and simple question asked by interviewers is "What is a primary key?", and most of us reply quickly with the following facts:

  • The primary key constraint uniquely identifies each record in a database table.
  • Primary keys must contain unique values.
  • A primary key column cannot contain NULL values.
  • Most tables should have a primary key, and each table can have only ONE primary key.

But a few more questions come to mind about primary keys and indexes:

  1. Can we create the primary key and the clustered index on two different columns?
  2. Is it mandatory to create the clustered index on the primary key?
  3. Can we create a non-clustered index on the primary key?

These are the questions that create confusion, and most candidates are not confident answering them.

First of all, remember the statement below and always keep it in mind.

"A Primary Key can be clustered or non-clustered, but it is a common best practice to create the Primary Key as the Clustered Index."

In short, we can create the primary key and the clustered index on two different columns; it is not mandatory to apply both to the same column.

It is also not mandatory to create the clustered index on the primary key; a clustered index can be created on a non-primary-key column.

And yes, a non-clustered index can definitely be created on a primary key.

There are following scenario about primary key and Clustered Index:

1. Primary Key defaults to Clustered Index:

This is the common scenario: developers create a primary key, and a clustered index is automatically applied to the primary key column. There is no need to explicitly create a clustered index on the primary key column.

This is the main reason developers are often unaware that the primary key and the clustered index are two different things.
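The screenshot of the output is missing; a sketch that reproduces this scenario (the table and column names are placeholders) might be:

```sql
CREATE TABLE dbo.Demo1 (
    Id   INT NOT NULL PRIMARY KEY,  -- defaults to a clustered index
    Name VARCHAR(50) NULL
);

-- Lists the indexes: a clustered, unique index enforcing the primary key on Id
EXEC sp_helpindex 'dbo.Demo1';
```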


You can see in the output above that a clustered index is created automatically whenever a primary key is applied, but always remember that the reverse is not true.

2. Primary Key defined as a Non-Clustered Index:

The primary key and the clustered index are not necessarily tied to each other, and the clustered index need not be created only on the primary key. In this scenario we explicitly define the primary key as non-clustered, and it is created as a non-clustered index.
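The screenshot is missing; a sketch of this scenario (placeholder names again) might be:

```sql
CREATE TABLE dbo.Demo2 (
    Id   INT NOT NULL PRIMARY KEY NONCLUSTERED,  -- explicitly non-clustered
    Name VARCHAR(50) NULL
);

-- The primary key index is non-clustered; the table itself is stored as a heap
EXEC sp_helpindex 'dbo.Demo2';
```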

You can clearly see that a non-clustered index is created on the primary key and that the table data is stored in a heap.


Note: A heap is a table without a clustered index. One or more non-clustered indexes can be created on tables stored as a heap. Data is stored in the heap without any specified order.

3. Primary Key defaults to Non-Clustered Index when another column is defined as the Clustered Index:

In this scenario we explicitly define a unique key as the clustered index, and the primary key column is automatically assigned a non-clustered index. In other words, if a table already has a clustered index when a primary key is created, a non-clustered index is created on the primary key by default.
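The screenshot is missing; a sketch of this scenario (placeholder names) might be:

```sql
CREATE TABLE dbo.Demo3 (
    Id   INT NOT NULL,
    Code INT NOT NULL,
    CONSTRAINT PK_Demo3 PRIMARY KEY (Id),        -- defaults to non-clustered here
    CONSTRAINT UQ_Demo3 UNIQUE CLUSTERED (Code)  -- clustered index on the unique key
);

EXEC sp_helpindex 'dbo.Demo3';
```

Because a clustered index is already specified for the UNIQUE constraint, the PRIMARY KEY constraint defaults to non-clustered.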


In the result above we can see that no clustered index is created on the primary key, while the clustered index is created on the unique key column.

4. Primary Key defaults to Clustered Index, other indexes default to Non-Clustered:

If a table has a primary key and a unique key and no index is applied explicitly, then the clustered index goes on the primary key column and a non-clustered index goes on the unique column.
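The screenshot is missing; a sketch of this scenario (placeholder names) might be:

```sql
CREATE TABLE dbo.Demo4 (
    Id   INT NOT NULL PRIMARY KEY,  -- clustered by default
    Code INT NOT NULL UNIQUE        -- unique constraints default to non-clustered
);

EXEC sp_helpindex 'dbo.Demo4';
```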


As per the scenarios above, the primary key is not strictly bound to the clustered index; developers should apply each as the situation requires.

To achieve optimal performance, it is usually best to make the primary key the clustered index.

How to choose appropriate data types to create indexes on SQL Server tables

As we know, indexes are very useful and increase overall search performance. But developers should also be aware that indexes can be a "sweet poison": creating indexes is not always the solution, and sometimes indexes decrease performance.

So it is a big challenge and a confusing task to decide which data type is appropriate for an indexed column. SQL Server provides various data types like int, varchar, nvarchar, and datetime, but which data type should you choose? What implications does the choice of a data type have for storage and performance in SQL Server?

Now I am going to demonstrate how the choice of column data type affects index performance.

Step 1: First, create a sample database table named "Sales" and insert approximately 200,000 records (prefer a large amount of data to observe the performance difference).

CREATE TABLE [dbo].[Sales](
    [BatchNo] [varchar](50) NOT NULL,
    [Price] [float] NULL,
    [DateTransaction] [datetime] NULL,
    [ProductId] [nvarchar](50) NULL
)

After adding the data, the table looks like this:
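The screenshot of the sample data is missing. As a sketch, one way to generate roughly 200,000 rows (the generated values are arbitrary) might be:

```sql
INSERT INTO dbo.Sales (BatchNo, Price, DateTransaction, ProductId)
SELECT 'B' + CAST(n AS varchar(10)),            -- batch number
       ABS(CHECKSUM(NEWID())) % 10000 / 10.0,   -- arbitrary price
       DATEADD(DAY, -(n % 365), GETDATE()),     -- spread dates over a year
       CAST(n AS nvarchar(50))                  -- product id stored as text
FROM (
    SELECT TOP (200000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_objects a CROSS JOIN sys.all_objects b
) AS nums;
```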


Step 2: Create a clustered index on the "ProductId" column by making it the primary key (the column must first be made NOT NULL), then execute the SQL command below and check the Messages tab.


Execute the SQL command below:

SET STATISTICS IO ON;
SELECT ProductId, BatchNo, DateTransaction FROM Sales WHERE ProductId = 55005;

Click the Messages tab in the query output window; it shows that the SQL database engine performs 1082 logical reads to find the desired result:

(200000 row(s) affected)
Table 'Sales'. Scan count 1, logical reads 1082, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(1 row(s) affected)


Step 3: Initially the data type of the "ProductId" column was nvarchar; now change it to int and save the table. Run the same query again and see the result.


Execute the SQL command below:

SET STATISTICS IO ON;
SELECT ProductId, BatchNo, DateTransaction FROM Sales WHERE ProductId = 55005;

This time you can see the difference compared to the previous output: the SQL Server database engine performs only 3 logical reads, and the following message appears:

(1 row(s) affected)
Table 'Sales'. Scan count 0, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

You can clearly see in the results above that the SQL database engine now takes only 3 logical reads, while for nvarchar it took 1082. This is a big difference; imagine this search on a very large table, with hundreds of thousands of rows and more complex columns, and you would really feel the difference in performance. I recorded these figures in Excel, and there is a big performance gap between the two data types.

The main reason is that comparing an nvarchar column to an integer literal forces an implicit type conversion, which prevents an efficient index seek; in general the SQL engine also compares numeric values faster than string values. So be careful when deciding which column to index, and prefer an integer column where possible. Always try to reduce the logical reads performed by the database engine by applying correct indexes and choosing the correct data type.