Run a Microsoft SQL Server Container on a Mac with Azure Data Studio


Azure Data Studio is a cross-platform database development tool, originally branded as SQL Operations Studio, that provides a seamless database management experience whether you are connecting to on-premises or Azure-based data platforms.

Azure Data Studio is a cross-platform database tool for data professionals using the Microsoft family of on-premises and cloud data platforms on Windows, macOS, and Linux.


For many standard functions, Azure Data Studio offers a much richer experience than Microsoft’s flagship database management tool, SQL Server Management Studio (SSMS).

Azure Data Studio has better IntelliSense than SQL Server Management Studio, with more extensive support for keywords and useful T-SQL snippets. For example, with a few key presses you can locate and open the snippet for a CREATE PROCEDURE statement.

Once you press Tab, the complete stored procedure template appears automatically, and you can modify it as required.

Download the SQL Server Docker Image and Run the Container

First, pull the SQL Server image from the Docker container registry; here I have selected the SQL Server Linux image.

Command: docker pull microsoft/mssql-server-linux

Check the image status by issuing the docker images command, which lists all images downloaded from the registry.

Command: docker images

In Docker we “install” an image by running it; that creates the actual Docker container, the running instance you interact with. Let’s run our Docker image with the docker run command:

Command: docker run -d --name homer -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourPassword>' -p 1433:1433 <imageName>

  • -e 'ACCEPT_EULA=Y': With the -e option you set an environment variable that SQL Server depends on. In this case we have to accept the EULA to be able to use SQL Server.
  • -e 'SA_PASSWORD=Password@123': With the SA_PASSWORD environment variable we set the password for the SA login.
  • -p 1433:1433: With the -p option we bind a port on our host machine (in my case the Mac) to a port in the container. The port on the left side of the colon is the host port, and the port on the right side is the container port. Here I bind the default SQL Server port 1433 in the container to port 1433 on my Mac.
  • --name: With the --name option we assign a custom name to our Docker container.
  • -d: The -d option runs the Docker container detached from the terminal, which means you can close your terminal and the container keeps running in the background. The final argument is the name of the Docker image we pulled previously.

After you have executed that Docker command, your Docker Container is up and running and ready to use.
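The options above can be captured in a small helper. This Python sketch is purely illustrative (the function name and defaults are my own); it only assembles the docker run command string discussed above, it does not talk to Docker:

```python
def build_docker_run(name, sa_password, image, host_port=1433, container_port=1433):
    """Assemble the `docker run` command described above.

    `name`, `sa_password`, and `image` are values you supply; this
    just concatenates the flags in the order the article uses them.
    """
    return (
        "docker run -d "
        f"--name {name} "
        "-e 'ACCEPT_EULA=Y' "
        f"-e 'SA_PASSWORD={sa_password}' "
        f"-p {host_port}:{container_port} "
        f"{image}"
    )

cmd = build_docker_run("homer", "Password@123", "microsoft/mssql-server-linux")
print(cmd)
```

Running it prints the exact command line you would paste into a terminal.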

Connect Azure Data Studio to the Container

We can connect Azure Data Studio directly to localhost, because in the last step we exposed port 1433 of the Docker container to our host machine.

Once the connection is established, you can see the server details on the home pane. The container ID appears as the computer name in Azure Data Studio, which confirms that you are connected to the running SQL Server container and can now execute queries.

Execute SQL Queries

After connecting to the SQL Server container, right-click on a database and choose the New Query option. A query window will open, and you can now write and run SQL queries.


Running SQL Server on a Mac has always been a tedious task, and the tools available for the Mac were neither easy to use nor very responsive. Docker has made developers’ lives easier by letting SQL Server run on macOS with a much better user experience.

It’s now real: you can run SQL Server on the Mac, and with the help of Azure Data Studio you can even access it from a native macOS application.

What Is a UDL (Universal Data Link)?

One of the most common problems in any data-driven application is the database connection string. How many times in your development life have you heard “I can’t connect to the database,” only for the problem to end up being a minor detail in the connection string?

“How do I verify a SQL Server connection without SSMS?” or “How do I find the installed SQL Server instance names on a machine without SSMS or any other tools?”

The Microsoft Data Access Components (MDAC) provide the Data Link Properties dialog box as the common user interface for specifying connection information to a data provider on Windows 2000 and later operating systems. You can use the Data Link Properties dialog box to save connection information in a universal data link (.udl) file.

To create a Universal Data Link (.udl) file

  • RUN → Notepad
  • Once the notepad is opened, File → Save As… (Do not type anything in the notepad)
  • Choose the path → Save as type: All Files (*.*) → File name: Rakesh.udl (the file name can be anything, but the extension must be .udl)


  • Double click on the udl file.


  • On the Provider tab, select the provider for whatever database you want to test; here “Microsoft OLE DB Provider for SQL Server” is used.
  • Now go to the Connection tab and select the server to connect to. You may browse through the list, but SQL Server Express instances do not always show up; you may also type the server name into the combo box. Select the appropriate authentication mode (Windows or SQL Server).
  • Now you will see all the databases in the drop-down; select one of them.

  • Assuming all’s well, select the database and click the Test Connection button.


  • Now open the file in notepad – remember, the .udl file is simply a text file. You should see something like this:



In the file is the connection string you just tested. Copy and paste it wherever it’s needed in your application and you should be good to go.
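A .udl file is plain text whose last line is the connection string itself. As a quick illustration of what that string contains (the key names below are standard OLE DB ones; the sample values are made up), you can split it into key/value pairs:

```python
def parse_connection_string(conn_str):
    """Split an OLE DB-style connection string into a dict.

    `Key=Value` pairs are separated by semicolons; keys are
    case-insensitive, so we normalize them to lower case.
    """
    pairs = {}
    for part in conn_str.split(";"):
        part = part.strip()
        if not part:
            continue
        key, _, value = part.partition("=")
        pairs[key.strip().lower()] = value.strip()
    return pairs

# A sample similar to what the dialog saves (values are invented).
sample = ("Provider=SQLOLEDB.1;Integrated Security=SSPI;"
          "Persist Security Info=False;Initial Catalog=EmpDb;"
          "Data Source=localhost")
parsed = parse_connection_string(sample)
print(parsed["initial catalog"])  # the database you selected in the dialog
print(parsed["data source"])      # the server name
```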

How to Connect SQL Server with Node.js in AWS (Amazon Web Services) - Part 1

This proof of concept was created to check how Node.js works with a SQL Server database.

There are many drivers available to support SQL Server in Node.js. I have created this POC with the most widely used driver, “mssql”. You can check the available drivers by executing the command below in your Node.js environment.

Command: npm search sqlserver

The above command lists all available SQL Server drivers for Node.js.

Follow the steps below to configure Node.js with SQL Server; you will need to repeat them on your machine to run this sample.

Step 1: Install the Node.js package for SQL Server using the command below:

npm install mssql

Step 2: Create an SA login in SQL Server. I have also created a sample database named “EmpDb” with a table named “Employees” containing some records.


Step 3: Add the code below to your Node.js application:

var sql = require('mssql');

var config = {
    user: 'sa',
    password: 'password@123',
    // server name when connecting directly to SQL Server, or an RDS endpoint
    server: 'localhost',
    database: 'EmpDb'
};

function GetEmp() {
    var connection = new sql.Connection(config);
    var request = new sql.Request(connection); // or: var request = connection.request();

    connection.connect(function (err) {
        if (err) {
            console.log(err);
            return;
        }
        request.query('select * from Employees', function (err, recordset) {
            if (err) {
                console.log(err);
            }
            else {
                console.log(recordset);
            }
            connection.close();
        });
    });
}

GetEmp();
Step 4: Now run the application and the output below will appear:


Note: I have used only a select statement in this sample; if you want to perform insert, update, or delete operations, you can use the same approach.

RS Utility (SSRS) and Backing Up SSRS Reports Using the RS Utility

RS Utility:

The RS.exe utility processes scripts that you provide in an input file. Use this utility to automate report server deployment and administration tasks.

Developers and report server administrators can perform operations on a report server through the use of the rs utility (RS.exe).

Reporting Services scripts can be used to run any of the Report Server Web service operations. Scripting can be used to copy security to multiple reports on a server, to add and delete items, to copy report server items from one server to another and more.


rs {-?}
{-i input_file=}
{-s serverURL}
{-u username}
{-p password}
{-e endpoint}
{-l time_out}
{-b batchmode}
{-v globalvars=}
{-t trace}


rs -i c:\BackupReports\BackupReports.rss -s http://localhost/reportserver


Back Up SSRS Reports from Deployed Servers to Local Hard Drives

To take a backup using the RS utility, we have to create two files:

  1. BackupSSRS.rss (a .rss Report Server script file)
  2. BackupSSRS.bat

Let’s start step by step:

Step 1: Create a new directory in the C: drive named “BackupReports”.

Step 2: Copy and paste the lines of code below into Notepad and save the file as “BackupSSRS.rss” in the folder created in Step 1. This script iterates through all the reports under the path we provide, gets each report definition from the server, and saves it as an .rdl file under the location we pass on the command line.

Public Sub Main()

Try
    rs.Credentials = System.Net.CredentialCache.DefaultCredentials
    Dim Items As CatalogItem()
    Dim Item As CatalogItem
    Dim ReportName As String
    Items = rs.ListChildren(ItemPath, False)
    Console.WriteLine("Reports Back Up Started.")
    For Each Item In Items
        ReportName = ItemPath + "/" + Item.Name
        Dim reportDefinition As Byte() = Nothing
        Dim rdlReport As New System.Xml.XmlDocument
        reportDefinition = rs.GetReportDefinition(ReportName)
        Dim Stream As New MemoryStream(reportDefinition)
        Dim curDate As Date = Date.Now()
        Dim strDate As String = curDate.ToString("dd-MM-yyyy")
        Dim BackupFolderNew As String = BackupFolder + "\" + strDate + "\" + ItemPath

        If (Not System.IO.Directory.Exists(BackupFolderNew)) Then
            System.IO.Directory.CreateDirectory(BackupFolderNew)
        End If

        rdlReport.Load(Stream)
        rdlReport.Save(BackupFolderNew + "\" + Item.Name + ".rdl")

        Console.WriteLine("Report " + Item.Name + ".rdl Backed Up Successfully")
    Next

    Console.WriteLine("Reports Back Up Completed.")
Catch e As Exception
    Console.WriteLine(e.Message)
End Try
End Sub

Step 3: Copy and paste the script below into a new Notepad file and save it as “BackupSSRS.bat” in the folder created in Step 1.

In this script, ItemPath is the location where the reports are stored on the report server, ReportServerURL is the URL of the report server where the reports are deployed, and BackupFolder is the location where the report backups are stored.

set ItemPath=/Report Project2

set ReportServerURL=http://localhost/Reportserver   

set BackupFolder=C:\BackupReports

rs -i “C:\BackupReports\BackupSSRS.rss” -s %ReportServerURL% -v ItemPath=”%ItemPath%” -v BackupFolder=”%BackupFolder%”


Ensure that you now have two files in “C:\BackupReports”, one with the .rss extension and one with the .bat extension.


Step 4: Now go to the newly created folder “C:\BackupReports”, right-click on the batch file (.bat extension), and run it as an administrator. You will see a result that looks like the following screenshot.


Step 5: After executing the batch file as described in Step 4, you will find a folder named with the current date in dd-MM-yyyy format, containing a backup of all the reports. Please see the screenshot below.

We have successfully taken a backup of SSRS reports using the RS utility. You can also execute the batch file from the Windows Task Scheduler by creating a schedule that runs daily at a particular time. This schedule will generate a day-by-day folder with report backups in your BackupFolder.
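The dated-folder naming the .rss script produces (BackupFolder\dd-MM-yyyy\ItemPath) can be sketched in a couple of lines. This Python snippet is only an illustration of the naming scheme (the sample date is made up, and the leading "/" of the item path is normalized away):

```python
from datetime import date

def backup_folder_for(backup_root, item_path, day):
    """Mirror the script's BackupFolderNew = BackupFolder\<dd-MM-yyyy>\<ItemPath>."""
    return "{}\\{}\\{}".format(backup_root, day.strftime("%d-%m-%Y"),
                               item_path.strip("/"))

folder = backup_folder_for(r"C:\BackupReports", "/Report Project2", date(2024, 1, 5))
print(folder)  # C:\BackupReports\05-01-2024\Report Project2
```

Running the batch file daily from Task Scheduler therefore yields one such folder per day.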

Download Files:  .Rss and .bat files

Lazy Loading vs. Eager Loading


The Entity Framework offers several different ways to load the entities that are related to your target entity. For example, when you query for Employee, there are different ways that the related Employee Addresses will be loaded into the Object State Manager. From a performance standpoint, the biggest question to consider when loading related entities will be whether to use Lazy Loading or Eager Loading.

When using Eager Loading, the related entities are loaded along with your target entity set. You use an Include statement in your query to indicate which related entities you want to bring in.

  • Lazy Loading – “don’t do the work until you absolutely have to.”

When using Lazy Loading, your initial query only brings in the target entity set. Whenever you access a navigation property, another query is issued against the store to load the related entity; related/child objects are not loaded automatically until a navigation property is accessed. Once an entity has been loaded, any further queries for it will load it directly from the Object State Manager. You can also turn off lazy loading for a particular property or for an entire context. To turn it off for a particular property, do not declare the property as virtual. To turn it off for all entities in the context, set its configuration property to false:

this.Configuration.LazyLoadingEnabled = false;

Rules for lazy loading:
  1. context.Configuration.ProxyCreationEnabled should be true.
  2. context.Configuration.LazyLoadingEnabled should be true.
  3. Navigation property should be defined as public, virtual. Context will NOT do lazy loading if the property is not defined as virtual.
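The effect of these settings can be sketched without Entity Framework at all. In this hypothetical Python mock (class and counter names are my own, not EF APIs), lazy loading issues one extra “query” per parent row, while eager loading fetches everything in a single pass:

```python
class FakeStore:
    """Stands in for the database; counts the queries issued."""
    def __init__(self):
        self.query_count = 0
        self.addresses = {1: ["addr-a"], 2: ["addr-b"], 3: ["addr-c"]}

    def load_employees(self):
        self.query_count += 1
        return [1, 2, 3]

    def load_addresses(self, emp_id):
        self.query_count += 1
        return self.addresses[emp_id]

# Lazy: one query for the parents, then one per navigation-property access.
lazy = FakeStore()
for emp in lazy.load_employees():
    _ = lazy.load_addresses(emp)           # fired on access, like a virtual property
print("lazy queries:", lazy.query_count)    # 1 + N

# Eager: a single joined query brings parents and children up front.
eager = FakeStore()
eager.query_count += 1                      # one SELECT ... INNER JOIN ...
employees_with_addresses = {e: eager.addresses[e] for e in [1, 2, 3]}
print("eager queries:", eager.query_count)  # always 1
```

With three employees this produces four queries under lazy loading and one under eager loading, which matches the SQL Profiler traces discussed below.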

Let’s start with the actual implementation. Create a console application named LazyEagerLoading.

Lazy Loading

Step 1: Create POCO classes for Employee and EmployeeAddress. Step 2: Create the EmployeeContext and EmployeeDbInitializer classes. Step 3: In the code shown here, I fetch employee details using the EmployeeContext class, iterate through each employee item, and also read the navigation property EmployeeAddresses. If you debug the code, it appears that the details for the parent and related child tables arrive all at once, but this is not correct. Something different actually happens at the back end: several SQL queries are executed to get the details, and I have used SQL Profiler to track this activity. You can see the code for lazy loading and its outcome below.


Now run the application. In the code above, note that in the first for-each loop only the data of the main entity, the Employee, is loaded, not that of the EmployeeAddresses. The data of the related entity is loaded when the nested for-each loop executes. In the SQL Profiler analysis of this execution you can see that four different queries were run against the database: the first fetches the employee details, and the other three fetch the details of the navigation property EmployeeAddresses. This means that database calls are made again and again to fetch the related entity’s data after the main entity’s data has been fetched, which can hurt the efficiency of the application.

Eager Loading: Step 4: To implement eager loading we have to make two changes to our existing code: first in the EmployeeContext class, and second in Program.cs. Add the statement below to the EmployeeContext constructor:

this.Configuration.LazyLoadingEnabled = false;

Setting the LazyLoadingEnabled property to false disables lazy loading and allows related entity data to be fetched using “Include” when the main entity is loaded into memory.

Step 5: Include the related entity with the main entity at the time of query execution:

var query = ContextObj.Employees.Include("EmployeeAddresses").ToList();


Step 6: Now execute the code again, see the output, and track the background activity with SQL Profiler. The output is still the same as with lazy loading, but there is a major difference in the SQL Profiler analysis report: this time only a single query was executed to get the parent and child entity details, whereas the lazy loading example issued separate queries to fetch them.


You will see the query at the bottom of the selection showing the join being applied at the back end by SQL Server. This is known as Eager Loading, which means loading the related entity data along with the data of the main entity. Now the question arises: when should you use which option?

Do you need to access many navigation properties from the fetched entities?

No: Both options will probably do. However, if the payload your query brings back is not too big, you may see performance benefits from Eager loading, as it requires fewer network round trips to materialize your objects.

Yes: If you need to access many navigation properties from the entities, you would do that with multiple Include statements in your query with Eager loading. The more entities you include, the bigger the payload your query will return. Once you include three or more entities in your query, consider switching to Lazy loading.

Do you know exactly what data will be needed at run time?

No: Lazy loading will be better for you; otherwise, you may end up querying for data that you will not need.

Yes: Eager loading is probably your best bet; it will help load entire sets faster. If your query requires fetching a very large amount of data and becomes too slow, then try Lazy loading instead.

Is your code executing far from your database? (increased network latency)

No: When network latency is not an issue, using Lazy loading may simplify your code. Remember that the topology of your application may change, so don’t take database proximity for granted.

Yes: When the network is a problem, only you can decide what fits your scenario best. Typically Eager loading will be better because it requires fewer round trips.

One Step to Understand the Relationship Between Primary Keys and Non-Clustered Indexes

Keys and indexes are fundamental concepts of any RDBMS and play a crucial role in designing a good and efficient database. If you want to design a performance-oriented database, you have to implement keys and indexes properly.

Developers often do not specify which column should have the clustered index. As we know, by default the Primary Key becomes the Clustered Index, but the reverse is not true: you can also create a clustered index on a non-primary-key column. This practice is now extremely common, and many people have forgotten that a Primary Key and a Clustered Index are two different things.

Another common misconception most people have is “Creating a primary key on a table makes reads fast”.

I want to clarify: a primary key is a constraint, and by itself it has nothing to do with fast reads. Behind the scenes, a “Primary Key” creates a clustered index, which is what makes reads fast. Optionally, we can instead create a non-clustered index for the primary key, which also contributes to faster reads.

One common and simple question asked by interviewers is “What is a primary key?”, and most of us reply quickly with the following facts:

  • The primary key constraint uniquely identifies each record in a database table.
  • Primary keys must contain unique values.
  • A primary key column cannot contain NULL values.
  • Most tables should have a primary key, and each table can have only ONE primary key.

But a few more questions come to mind about primary keys and indexes:

  1. Can we create the primary key and the clustered index on two different columns?
  2. Is it mandatory to create clustered index on primary key?
  3. Can we create non-clustered index on primary key?

These are the questions that create confusion, and most candidates are not confident enough to answer them.

First of all, remember the statement below and always keep it in mind.

“Primary Key can be clustered or Non-clustered but it is a common best practice to create a Primary Key as Clustered Index. “

In short, we can create the primary key and the clustered index on two different columns; it is not mandatory to apply both to the same column.

It is also not mandatory to create the clustered index on the primary key; a clustered index can be created on a non-primary-key column.

And yes, a non-clustered index can definitely be created on a primary key.

The following scenarios cover primary keys and clustered indexes:

1. Primary Key defaults to Clustered Index:

This is the common scenario: developers create a primary key, and a clustered index is automatically applied to the primary key column. There is no need to explicitly create a clustered index on the primary key column.

This is the main reason developers are often unaware that a primary key and a clustered index are two different things.


You can see in the output above that the clustered index is created automatically whenever a primary key is applied, but always remember that the reverse is not true.

2. Primary Key defined as a Non-Clustered Index:

It is not mandatory that the primary key and the clustered index are tied to each other, or that a clustered index can only be created on a primary key. In this scenario we explicitly define the Primary Key as non-clustered, and it is created as a non-clustered index.

You can clearly see that a non-clustered index is created on the primary key and the data is stored in a heap.


Note: Heaps (tables without clustered indexes). A heap is a table without a clustered index. One or more non-clustered indexes can be created on tables stored as heaps. Data is stored in the heap without a specified order.
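The practical difference between a clustered index and a heap can be sketched in a few lines of Python. This is an analogy, not SQL Server internals: clustered data is kept sorted by the key, so a lookup can binary-search it, while a heap has no order and must be scanned row by row.

```python
import bisect

# Rows keyed on an integer id. A clustered index means the table data itself
# is kept in key order; a heap has no guaranteed order.
rows = list(range(0, 200000, 2))   # clustered analogy: sorted by key
heap = rows[::-1]                  # heap analogy: same rows, arbitrary order

def clustered_lookup(data, key):
    """Binary search over key-ordered data (clustered-index analogy): O(log n)."""
    i = bisect.bisect_left(data, key)
    return i < len(data) and data[i] == key

def heap_lookup(data, key):
    """Row-by-row scan (heap analogy): O(n) when no index covers the key."""
    return any(row == key for row in data)

print(clustered_lookup(rows, 55000), heap_lookup(heap, 55000))
```

In SQL Server a non-clustered index on a heap plays the role of the sorted structure, pointing back into the unordered rows.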

3: Primary Key defaults to Non-Clustered Index with another column defined as a Clustered Index.

In this scenario we explicitly define a unique key as a clustered index, and the primary key column is automatically assigned a non-clustered index. In other words, if a table already has a clustered index and a primary key is then created on the table, a non-clustered index is created on the primary key by default.


In the result above we can see that no clustered index is created on the primary key, while the clustered index is created on the unique key column.

4: Primary Key defaults to Clustered Index, with the other index defaulting to Non-Clustered Index.

If a table has a primary key and a unique key, and no index is applied explicitly, then the clustered index goes with the primary key column and the non-clustered index goes with the unique key column.


From the scenarios above we can say that the primary key is not strictly bound to the clustered index. Developers have to apply each as the situation requires.

To achieve optimal performance, it is usually better to make the primary key the clustered index.

How to Choose Appropriate Data Types for Indexes on SQL Server Tables


As we know, indexes are very useful on tables and increase overall search performance. But developers should also be aware that indexes can be a “sweet poison”: creating indexes is not always the solution for increasing performance, and sometimes indexes decrease it.

So it is a big challenge, and a confusing task, to decide which data type is appropriate for an indexed column. SQL Server provides various data types such as int, varchar, nvarchar, and datetime, but which data type should you choose? What implications might the choice of a data type have for storage and performance in SQL Server?

Now I am going to demonstrate how to select a data type for creating indexes on columns.

Step 1: First, create a sample table named “Sales” and insert approximately 200,000 records (prefer a large amount of data so the performance difference is visible).

CREATE TABLE [dbo].[Sales](
    [BatchNo] [varchar](50) NOT NULL,
    [Price] [float] NULL,
    [DateTransaction] [datetime] NULL,
    [ProductId] [nvarchar](50) NULL
)

After adding the data, the table looks like this:


Step 2: Create a clustered index on the “ProductId” column by making it the primary key, execute the SQL command, and then check the Messages tab.


Execute the SQL command below:

set statistics IO on
select ProductId, BatchNo, DateTransaction from sales where ProductId = 55005

You can see the message below by clicking on the Messages tab in the query output window; it shows that the SQL Server database engine performed 1082 logical reads to find the desired result.

(200000 row(s) affected)
Table 'Sales'. Scan count 1, logical reads 1082, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(1 row(s) affected)

You can also see the image below.


Step 3: Initially the data type of the “ProductId” column was nvarchar; now change it from nvarchar to int and save the table. Run the same query again and see the result.


Execute the SQL command below:

set statistics IO on
select ProductId, BatchNo, DateTransaction from sales where ProductId = 55005

This time you can see the difference in the output compared to the previous run: the SQL Server database engine needed only 3 logical reads, and the message below appears:

(1 row(s) affected)
Table 'Sales'. Scan count 0, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

You can clearly see in the result above that the SQL database engine performs only 3 logical reads, while for nvarchar it took 1082. This is a big difference; imagine this search running on a very large table with hundreds of thousands of rows and complex column types, and you would really feel the performance gap. I have recorded these figures in Excel, and you can see the significant difference in performance between the two data types.

The main reason is that the SQL engine compares numeric values faster than string values, and here the nvarchar column compared against a numeric literal also forces an implicit conversion, which prevents an efficient index seek. So be careful whenever you decide where to create an index: prefer integer-type columns where possible. Always try to reduce the logical reads performed by the database engine by applying correct indexes and selecting correct data types.
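The storage side of this choice can be approximated with a back-of-the-envelope calculation. The 8 KB page size is a real SQL Server figure; the per-row overhead below is an assumed round number, not an exact SQL Server value. A narrower key means more keys fit per index page, so the fan-out is higher and the B-tree is shallower, which directly translates into fewer logical reads per seek:

```python
import math

PAGE_SIZE = 8192  # SQL Server data/index pages are 8 KB

def index_depth(key_bytes, row_count, overhead_bytes=7):
    """Rough B-tree depth estimate: fan-out = keys per page, depth = log_fanout(rows).

    `overhead_bytes` (row header, pointer) is an assumed figure for
    illustration, not an exact SQL Server number.
    """
    fan_out = PAGE_SIZE // (key_bytes + overhead_bytes)
    return max(1, math.ceil(math.log(row_count, fan_out)))

rows = 200_000
int_depth = index_depth(4, rows)         # int key: 4 bytes
nvarchar_depth = index_depth(100, rows)  # nvarchar(50): up to 100 bytes
print(int_depth, nvarchar_depth)
```

Under these assumptions the int-keyed index needs one less level than the nvarchar(50)-keyed one, and every extra level is an extra page read on every lookup.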