Moving REST To GraphQL With ASP.NET Core & Entity Framework Core.


GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.

Source: graphql.org

GraphQL queries are used to ask the GraphQL server for the data that the client needs. What is interesting about GraphQL is that clients can write custom-made queries based on their individual needs. This means GraphQL enables the client to ask for exactly what it wants and returns a response containing only what was asked for. This approach gives the client more power.

Benefits of GraphQL:

  • Good fit for complex systems and microservices: By integrating multiple systems behind its API, GraphQL unifies them and hides their complexity. The GraphQL server is then responsible for fetching the data from the existing systems and packaging it up in the GraphQL response format.
  • Fetch data in a single call and avoid multiple round trips: GraphQL is less chatty than REST. REST APIs often require multiple round trips between the client and several resources to fetch all the data needed to render a view.

GraphQL solves the roundtrip problem by allowing the client to create a single query which calls several related functions (or resolvers) on the server to construct a response with multiple resources – in a single request. This is a much more efficient method of data delivery requiring fewer resources than multiple roundtrips.

  • Avoid over/under-fetching problems: REST API responses are known for either containing too much data or not enough of it; it is very hard to design an API flexible enough to fulfil every client's precise data needs. GraphQL solves this efficiency problem by fetching exactly the required data in a single request.

Building a GraphQL Service in ASP.NET Core

  1. Installing GraphQL in .NET Core: Since GraphQL support is not built into ASP.NET Core, you need NuGet packages. The most commonly used are GraphQL (the GraphQL .NET library) and the GraphQL.Server.* packages for the ASP.NET Core middleware and the Playground UI.

2. Setting up graph types: A graph type is a class that derives from the ObjectGraphType<T> base class, which implements IObjectGraphType. In the constructor you declare the fields for this graph type.

using GraphQL.Types;
using WebApiWithGraphQl.Data.Entities;
using WebApiWithGraphQl.Repositories;

namespace WebApiWithGraphQl.GraphQl.Types
{
    public class EmployeeType : ObjectGraphType<Employee>
    {
        public EmployeeType(EmployeeRepository employeeRepository)
        {
            Field(x => x.EmployeeId);
            Field(x => x.Name);
            Field<EmployeeTypeEnumType>(
                "EmploymentType",
                resolve: context => context.Source.EmployeeType.ToString());
            // a nested field resolved through the repository (graph type name assumed)
            Field<AddressType>(
                "Address",
                resolve: context => employeeRepository.GetAddressById(context.Source.EmployeeId));
        }
    }
}

3. Define the resolver: Now that we have an Employee graph type, we need another class that knows how to get employees. I call it EmployeeQuery, and it also derives from ObjectGraphType.
In the constructor I declare one field; this time I explicitly say that this field must return a list of EmployeeType objects. As you can see, even a list is a special graph type.
Then I give the field a name, and in a lambda I specify where the data should come from; in other words, how the data should be resolved.

Resolvers are the functions responsible for supplying the data requested by the query; they are the integration point between our application's data source and the GraphQL infrastructure.

using GraphQL.Types;
using WebApiWithGraphQl.GraphQl.Types;
using WebApiWithGraphQl.Repositories;

namespace WebApiWithGraphQl.GraphQl.Query
{
    public class EmployeeQuery : ObjectGraphType
    {
        public EmployeeQuery(EmployeeRepository employeeRepository)
        {
            Field<ListGraphType<EmployeeType>>(
                "employees",
                resolve: context => employeeRepository.GetAllEmployees());
        }
    }
}

4. Set up the schema: A GraphQL schema is at the center of any GraphQL server implementation and describes the functionality available to the clients that connect to it.

GraphQL implements a human-readable schema syntax known as its Schema Definition Language, or “SDL”. The SDL is used to express the types available within a schema and how those types relate to each other. 

using GraphQL;
using GraphQL.Types;
using WebApiWithGraphQl.GraphQl.Query;

namespace WebApiWithGraphQl.GraphQl
{
    public class EmployeeSchema : Schema
    {
        public EmployeeSchema(IDependencyResolver resolver) : base(resolver)
        {
            Query = resolver.Resolve<EmployeeQuery>();
        }
    }
}

Configuring ASP.NET Core With GraphQL Middleware

To set up GraphQL in your project and start using the schema we created, go to the Startup class of your application.

  • Add the dependency resolver so the schema can get the query instance.
  • Call the AddGraphQL extension method to register all the types GraphQL .NET uses.
  • Call AddGraphTypes, which scans the assembly for all ObjectGraphTypes and registers them automatically in the container with the specified lifetime.
public void ConfigureServices(IServiceCollection services)
{
    services.AddDbContext<EmployeeContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("EmployeeDBConnectionString")));
    services.AddSwaggerGen(c =>
        c.SwaggerDoc("v1", new Swashbuckle.AspNetCore.Swagger.Info { Title = "My API", Version = "v1" }));
    services.AddScoped<IDependencyResolver>(s => new FuncDependencyResolver(s.GetRequiredService));
    services.AddScoped<EmployeeSchema>();
    services.AddGraphQL(o => { o.ExposeExceptions = false; })
        .AddGraphTypes(ServiceLifetime.Scoped);
}

Now inject the GraphQL middleware by calling the UseGraphQL extension method in the Configure method of Startup.cs.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, EmployeeContext context)
{
    app.UseGraphQL<EmployeeSchema>();
    app.UseGraphQLPlayground(new GraphQLPlaygroundOptions());
    app.UseSwagger();
    app.UseSwaggerUI(c =>
        c.SwaggerEndpoint("/swagger/v1/swagger.json", "API with graphQl V1"));
    app.UseMvc();
}


So far we have completed all the mandatory steps for GraphQL integration with an ASP.NET Core API. If you want to see the Playground UI as soon as you start the API, go to the properties of the project, select the Debug tab, activate 'Launch browser', and type ui/playground as the URL.


When the browser opens, take a look at the schema tab on the right side. The metadata of the schema has been read by the Playground.
Using this metadata, the query editor can provide IntelliSense.
Type the query shown in the picture. It gets all records, but only their names and descriptions. When executed, you can see that the result is JSON, with the payload contained in the data root node as an array holding only the fields asked for.
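Against the employee schema built above, such a query might look like this (field names assumed from the EmployeeQuery and employee graph type definitions):

```graphql
{
  employees {
    name
    employmentType
  }
}
```

The response mirrors the shape of the query: a data node containing an employees array where each element has only name and employmentType.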


Containerised Asp.Net Core WebApi With Docker On Mac.


.NET Core is the biggest change since the invention of the .NET platform. It is fully open-source, componentised, and supported on Windows, Linux, and macOS. In this post I am going to give it a test ride by creating a containerised C# application with the latest .NET Core.

Docker containers allow teams to build, test, replicate, and run software regardless of where the software is deployed. Docker containers assure teams that software will always behave the same no matter where it is; there is no inconsistency in behaviour, which allows for more accurate and reliable testing.

The main advantage of using Docker containers is that they mimic running an application in a virtual machine, minus the hassle and resource neediness of one. Available for Windows and Linux systems, Docker containers are lightweight and simple, taking up only a small amount of operating-system capacity. They start up in seconds and require little disk space or memory.

Docker installers are available for Mac and Windows and can be downloaded from the official Docker site.


  1. Install Visual studio for mac
  2. Install Docker for mac

Here we will build an ASP.NET Core Web API and host/run it in a container with the help of Docker.

1. Create a new project: Choose the ASP.NET Core Web API template in Visual Studio 2017.


Now provide the project name, solution name, and other details like where to save the project.


Our newly created project structure looks like this:


Here we have a very simple case where the service only returns some information about employees, as our main objective is to host this tiny application in a container.

2. Add Docker Support To The Application

Now add a Dockerfile to the project and write the instructions describing how the Docker image is built from the ASP.NET Core base image.


Below are the instructions issued to the daemon to create the Docker image.
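A minimal multi-stage Dockerfile for an ASP.NET Core 2.x Web API might look like this (the image tags and the project DLL name are assumptions for illustration):

```dockerfile
# build stage: restore and publish using the full SDK image
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o /app

# runtime stage: copy published output into the smaller runtime image
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY --from=build /app .
EXPOSE 80
ENTRYPOINT ["dotnet", "FirstApiWithDocker.dll"]
```

The two-stage layout keeps the SDK out of the final image, so the container ships only the runtime and the published output.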


3. Open Terminal on the Mac:

Search for "Terminal" on the Mac and open a new window.


Once we open a new window, a command prompt appears and we are all set to issue Docker commands to create the image.


4. Navigate to the application folder by issuing the change-directory command "cd" and make sure we are inside the application folder. We can verify this by issuing the "ls" command and checking that all the project items are listed, which means we are in the right place.


5. Create Docker Image :

The docker build command builds Docker images from a Dockerfile and a "context". A build's context is the set of files located in the specified PATH or URL. The build process can refer to any of the files in the context.

Command: docker build -t <image-name> .

Here our image name is "firstapiwithdocker", so the command is:

docker build -t firstapiwithdocker .

We can now see the daemon accept the command and start creating the Docker image from the Dockerfile instructions.


Finally, we can see the image has been successfully created and tagged with "latest". If we don't provide a tag, the daemon tags the image as "latest" by default.


6. List all Docker Images:

Now we have to verify whether the required image has been created, so we issue the command below to list all the images. We can see all the important information about each image: name, tag, image ID, created date, and size. In the image below, our newly created image is listed alongside the base images.

Command: docker images


7. Run the image in a container:

So far we have successfully created a Docker image for our Web API solution containing all the necessary files; now we have to create a container to run this image.

The command below will create the container.

Syntax: docker run -d -p <host-port>:<container-port> --name <container-name> <image-name>

Example: docker run -d -p 9000:80 --name FirstContainer firstapiwithdocker

Once we execute the command above, a new container is created and the daemon prints a long random identifier, which means the container was created successfully.


Now list all the containers; we can see all the important metadata about each one: container ID, image name, command, created date, status, ports, and container name.

Here our newly created container is running and mapping port 9000 on the host to port 80 in the container.


Let's hit the URL "http://localhost:9000/api/values" in a browser or Postman to verify that our application is running in the container.

Below is the result from the Web API, which is running in the container instead of on the local machine.


8. Push the Docker image to Docker Hub:

Docker Cloud uses Docker Hub as its native registry for storing both public and private repositories. Once you push your images to Docker Hub, they are available in Docker Cloud.

We need to create a Docker Hub account to push the image to public/private repositories.



The Docker image should be tagged with a fully qualified name (<hub-username>/<repository>) before issuing the push command, so the command below tags the image as "rakeshmahur/webapicore-sample".

Syntax: docker tag <image-name> <hub-username>/<repository>

Example: docker tag firstapiwithdocker rakeshmahur/webapicore-sample

Now log in to Docker Hub from the terminal window by issuing the "docker login" command and providing your Docker Hub account details (username/password).


Once the Docker Hub credentials have been validated, a success message appears and we are now able to push the image to Docker Hub.


Issue the docker push command to push the Docker image to Docker Hub.

docker push rakeshmahur/webapicore-sample

Once we execute the command above, we can see our local Docker image pushed to the Docker Hub repository and listed there, and anyone can pull this image and start working with it.


Docker Commands

Below are some important and commonly used commands; refer to the Docker documentation for more details and a more exhaustive list of flags.

  • docker build -t <image-name> <path>
    • Builds an image from a given Dockerfile. While still useful when handling individual images, ultimately docker-compose will build your project’s images.
  • docker exec -it <container> <command>
    • Runs a command in a running container. More than anything else, I’ve used exec to run a bash session (docker exec -it <container> /bin/bash).
  • docker image ls
    • Lists images on your machine.
  • docker image prune
    • Removes unused images from your machine. Especially when building new images, I’ve found myself constantly wanting a clean slate. Combining prune with other commands helps clear up the clutter.
  • docker inspect <container>
    • Outputs JSON formatted details about a given container. More than anything else I look for the IP address (docker inspect <container> | grep IPAddress).
  • docker pull <image>
    • Downloads a given image from a remote repository. For development purposes, docker-compose will abstract this away, but if you want to run an external tool or run a project on a new machine you’ll use pull.
  • docker ps
    • Without any flags, this lists all running containers on your machine. I’m constantly tossing on the ‘-a’ flag to see what containers I have across the board. While you are building a new image you inevitably have containers spawned from it exit prematurely due to some runtime error. You’ll need to do ‘docker ps -a’ to look up the container.
  • docker push <image>
    • Once you have an image ready to be distributed/deployed you’ll use push to release it to either Docker Hub or a private repository.
  • docker rm <container>
    • Removes a stopped container from your system. You need to run docker stop first if it is running.
  • docker rmi <image>
    • Removes an image. May need to add the ‘--force’ flag to force removal if it is in use (provided you know what you are doing).
  • docker run <image>
    • Runs a command in a new container. Learning the various flags for the run command will be extremely useful. The flags I’ve been using heavily are as follows:
      • --rm – Removes the container after you end the process
      • -it – Run the container interactively
      • --entrypoint – Override the default command the image specifies
      • -v – Maps a host volume into the container. For development, this allows us to use the image’s full environment and tools, but provide it our source code instead of production build files.
      • -p – Maps a custom port (ie. 8080:80)
      • --name – Gives the container a human readable name which eases troubleshooting
      • --no-cache – (actually a docker build flag) forces Docker to re-evaluate each build step instead of using the cache
  • docker version
    • Outputs both the client and server versions of Docker being run. This isn’t the same as ‘-v’.
  • docker volume ls
    • While there are variants on volumes, so far I mostly use the ‘ls’ command to list current volumes for troubleshooting. I’m sure there will be more to come with using volumes.



Securing Sensitive App Settings Using Azure Key Vault.

DownLoad Complete Project: WebApiWithAzureKeyVault

Why Azure Key Vault?

Almost every Azure app has some kind of cryptographic key, storage account key, sensitive setting, password, or connection string.

For example, consider a web app that requires a connection string to an Azure SQL Database. Storing this sensitive information in an App.config file could result in it being checked in to a source-control system and unintentionally exposed to many developers.

Compare this to using Azure Key Vault, where the App.config file only contains a reference to this sensitive data, and is controlled by the access policy of Azure Key Vault.

Below is an insecure approach that is commonly used in Azure-based solutions:
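For illustration, the insecure pattern typically looks like this in a Web.config (the keys and values here are made up):

```xml
<configuration>
  <appSettings>
    <!-- secrets sitting in source control in plain text -->
    <add key="StorageAccountKey" value="bXktc3VwZXItc2VjcmV0LWtleQ==" />
  </appSettings>
  <connectionStrings>
    <add name="DefaultConnection"
         connectionString="Server=myserver.database.windows.net;Database=mydb;User Id=admin;Password=P@ssw0rd!" />
  </connectionStrings>
</configuration>
```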


You can see that all the secret information is stored in the Web.config file in plain text; if someone gains access to the server, they can easily steal all of this sensitive information. Usually these configuration files are also checked in to repository systems like TFS or GitHub along with the other project files, so anyone with access to those repositories can also see the secrets.

By using Key Vault you can securely store data and avoid having these sensitive pieces of information stored in source code which may then be compromised.

The Microsoft Azure cloud platform provides a secure secrets management service, Azure Key Vault, to store sensitive information. It is a multi-tenant service for developers to store and use sensitive data for their application in Azure.

The Azure Key Vault service can store three types of items: secrets, keys, and certificates.

  • Secrets are any sequence of bytes under 10KB like connection strings, account keys, or the passwords for PFX (private key files). An authorized application can retrieve a secret for use in its operation.
  • Keys involve cryptographic material imported into Key Vault, or generated when a service requests the Key Vault to do so. An authorized cloud service can request the Key Vault perform one or more cryptographic operations with a key on its behalf.
  • An Azure Key Vault certificate is simply a managed X.509 certificate. What’s different is Azure Key Vault offers life-cycle management capabilities. Like Azure Keys, a service can request Azure Key Vault to create a certificate. When Azure Key Vault creates the certificate, it creates a related private key and password. The password is stored as an Azure Secret while the private key is stored as an Azure Key. Expired certificates can roll over with notifications before these operations happen.

Application flow with key vault


Steps Required:

  1. Create A Key Vault
  2. Create a Secret
  3. Register an App in Azure Active Directory
  4. Create an API Key for the App
  5. Give App-Specific Permissions to Access Key Vault
  6. Configure your Dot Net Application

1. Create a key vault

Log in to the Azure portal and add a new "Key vault" service. If 'Key vaults' is not already in your list, click 'More services' and use the filter to find it. Select 'Key vaults', fill in all the mandatory information, and press the Create button.



2. Create Secrets:

To do this, click on ‘Secrets’ under ‘Settings’ on the left, or under ‘Assets’ in the Overview panel. Once the ‘Secrets’ panel opens up, click the ‘Add’ button at the top so you can create a new one.

Activation and expiry dates can be used if you only want the secret to be accessed for a specific period of time. When you are finished, click ‘Create’.


The key vault's DNS name will be used as the Key Vault URL in the application, from which secret requests will be initiated.


3. Register An App In Azure Active Directory

Now you have data protected by Key Vault; next you need to give your application secure access to this data.

Go to the Azure portal again and search for "Azure Active Directory". Inside AD, select 'App registrations' under the 'Manage' panel on the left. This is where you will configure the access and permissions our application will have when accessing Key Vault programmatically.


In my case I have already created a Web API app named "DubaiProperties-Api", which runs under an Azure App Service, and I have registered that same application in Azure Active Directory so that it can read secure keys/secrets from Key Vault.


4. Create An API Key For The Registered App

From the ‘App registrations’ menu, you should see your newly created app listed.


Click on the registered application and copy the 'Application ID' that you should be able to see under 'Essentials'.


Select 'Keys' from the 'API Access' section on the right. Give the key a meaningful description that explains its purpose, then set an expiration. Click 'Save' and your API key 'Value' will be presented to you. Copy this key value now, because once you navigate away it will never be shown again.

You can always create a new one, if you forgot to copy it.



5. Give App-Specific Permissions To Access Key Vault

Return to key vault and select ‘Access Policies’ under the ‘Settings’ panel on the right. Click the ‘Add New’ button. Click the ‘Select Principal’ option to be presented with a new blade. Enter the Application ID of the app in Azure AD into this field, and select the app when it is presented to you. Click the ‘Select’ button at the bottom to confirm. You can now configure the permissions that you wish to grant the application.


Only assign the necessary permissions. As it is only Secrets that your app needs access to (and read-only access at that), I would suggest picking ‘Get’ and ‘List’ under the ‘Secret permissions’ option. This is all you need to do, so click ‘OK’ to complete this step.


The key vault is now ready to store secret keys and to be referenced by any application.

6. Configure Your Dot Net Application

Now that all the Key Vault and Active Directory administration tasks are complete, you need to set up the .NET application to consume secrets from Key Vault instead of defining them in App.config or elsewhere in the application.

Let's start with a Web API project that needs to use some secrets stored in Key Vault.

Initially there are a few things that need to be configured in your application: the application ID, the Key Vault URL, and the app registration key. All of this information was produced above during the application registration in AD and the key vault creation.

Below are the settings for Web.config:
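The settings amount to references only, not the secret values themselves; a sketch might look like this (the key names are illustrative, not a fixed convention, and the app-registration key itself still deserves protection, e.g. via App Service application settings):

```xml
<appSettings>
  <!-- reference data used to authenticate against Azure AD and locate the vault -->
  <add key="ClientId" value="[application-id-from-azure-ad]" />
  <add key="ClientSecret" value="[api-key-from-app-registration]" />
  <add key="KeyVaultUrl" value="https://[your-vault-name].vault.azure.net/" />
</appSettings>
```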


Now add the Azure Key Vault NuGet packages to the application (typically Microsoft.Azure.KeyVault and Microsoft.IdentityModel.Clients.ActiveDirectory).


Create a helper class that interacts with Azure Key Vault through the Azure SDK, fetches all the required secrets, and makes them available to the application.
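A minimal sketch of such a helper, assuming the Microsoft.Azure.KeyVault and ADAL (Microsoft.IdentityModel.Clients.ActiveDirectory) packages and ClientId/ClientSecret/KeyVaultUrl app settings (names are illustrative):

```csharp
using System.Configuration;
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public static class KeyVaultHelper
{
    // Called by the Key Vault client whenever it needs an Azure AD access token.
    private static async Task<string> GetAccessTokenAsync(
        string authority, string resource, string scope)
    {
        var clientCredential = new ClientCredential(
            ConfigurationManager.AppSettings["ClientId"],
            ConfigurationManager.AppSettings["ClientSecret"]);
        var context = new AuthenticationContext(authority);
        var result = await context.AcquireTokenAsync(resource, clientCredential);
        return result.AccessToken;
    }

    // Fetches the current value of a secret by name from the vault.
    public static async Task<string> GetSecretAsync(string secretName)
    {
        var client = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(GetAccessTokenAsync));
        var secret = await client.GetSecretAsync(
            ConfigurationManager.AppSettings["KeyVaultUrl"], secretName);
        return secret.Value;
    }
}
```

A controller can then call `await KeyVaultHelper.GetSecretAsync("DemoSecretKey")` to read the current value of the secret.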


Now use this helper class in our Web API controller.


Now publish the Web API project to an Azure App Service. The application should work after deployment.


Now open the Swagger URL for the deployed API and see all the API controllers with all their HTTP verbs.


Expand the Get method of the key vault controller and make a request to read the secret values from Azure Key Vault.


Below is the response of the Web API with the Key Vault values.


If you analyse the whole codebase, you will not find any secret values configured in the application configuration files or in the application settings section of the Azure App Service. The values come directly from Key Vault, which is a separate location.

If you add a new version of the same key in Key Vault, there is no need to make any changes in your application; the application always picks up the latest version of the key.

Let's create a new version of the same key with a different value. Go back to the secrets section under the key vault's settings pane, where you will find all the defined keys.


Click on the key for which you want to create a new version with a new value; here we choose "DemoSecretKey" to update. Once you click on it, you will see all the versions of the selected key. Currently a single version exists.

You never see the values of the secrets; they are hidden from everyone.


Click on "New Version", select "Manual" from the drop-down, enter the new secret value for the key, and save it.


Now you can see the new version added with the updated value, and the previous version is also maintained by Azure Key Vault.


Let's test our Web API; it should read the updated value from Key Vault. The important thing is that I have not made any changes to the deployed Web API.


That's it! This configuration should enable you to protect your sensitive information in Key Vault and then give a .NET application secure access to that data.


Azure Key Vault is an excellent service and a welcome addition to the overall Azure services family. It promotes the secure management of cryptographic keys without the associated overhead, which is an important step towards adopting and implementing better security within our applications.

Web API Documentation With Swagger

DownLoad Complete Project: WebApiDocumentationWithSwagger

“If it is not documented, it doesn’t exist. As long as information is retained in someone’s head, it is vulnerable to loss.”

That is absolutely valid when we talk about APIs, because any small-to-complex API needs to be documented, in order to make it easy to use. This might be an interesting challenge, because you have to find the bridge between the abstract world of computer programming and the way people think and work. Here is where Swagger shows its great utility.

Swagger is a specification for documenting REST APIs. It specifies the format (URL, method, and representation) used to describe REST web services. Swagger is meant to enable the service producer to update the service documentation in real time, so that clients and documentation systems move at the same pace as the server.

Microsoft also provides its own API documentation library that automatically generates help-page content for the Web APIs on your site. The help-page package is a good start, but it lacks things like discoverability and live interaction. This is where Swagger comes to the rescue.

Adding Swagger to your Web API does not replace ASP.NET Web API help pages. You can have both running side by side, if desired.

Adding Swagger To The API Project

To add Swagger to an ASP.NET Web API, we will install an open-source package called Swashbuckle via NuGet.


After the package is installed, navigate to App_Start in the Solution Explorer. You’ll notice a new file called SwaggerConfig.cs. This file is where Swagger is enabled and any configuration options should be set here.


Now you just need to set up Swagger by adding the code below:
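With Swashbuckle 5.x the registration in SwaggerConfig.cs boils down to something like this (the title text is arbitrary):

```csharp
using System.Web.Http;
using Swashbuckle.Application;

public class SwaggerConfig
{
    public static void Register()
    {
        // enable the Swagger metadata endpoint and the interactive UI
        GlobalConfiguration.Configuration
            .EnableSwagger(c => c.SingleApiVersion("v1", "My Web API"))
            .EnableSwaggerUi();
    }
}
```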


Start a new debugging session (F5) and navigate to http://localhost:[PORT_NUM]/swagger. You should see the Swagger UI help pages for your APIs.


Now you can see that all the API methods of the Web API come with pretty good documentation, and you can expand/hide each method definition.

Below is the API controller code, in which I created two methods that appear in the Swagger documentation.


If you expand a method definition by clicking on an individual method, you will find all the required API-level metadata, such as the request and response.



The good thing about Swagger is that you can invoke API methods from the Swagger UI without using any external REST client like DHC or Postman. There is a "Try it out!" button on each API method, so you can call methods and get responses from the server.


Another useful feature of Swagger is to create a json document with the entire documentation of the API endpoints and models.

In order to open the json document, where your documentation is included, access the link on the top of the dashboard.

Copy the highlighted URL from the Swagger UI and enter it in a new browser tab. You will get a pretty nice JSON document that contains all the metadata about all the API methods.


Below image shows json documentation.



You now have an API which is documented and offers a nice experience to developers. Keep in mind that documenting an API should start at the very beginning of the development process for it to be easy to maintain.




WebApi Exception: Multiple Action were found that match the request.

Usually a Web API controller contains Get(), Get(id), Post, Put, Patch, and Delete methods, but sometimes we need to create multiple GET or POST methods, or more custom methods, to support the HTTP verbs.

Let's say we have an existing Get() method and now we want to add one more custom method named GetAll() to support the HTTP GET verb. My API controller code looks like this:
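A controller along these lines reproduces the problem (the controller name and method bodies are dummies):

```csharp
using System.Collections.Generic;
using System.Web.Http;

public class ProductsController : ApiController
{
    // default GET: api/products
    public IEnumerable<string> Get()
    {
        return new[] { "value1", "value2" };
    }

    // custom action that also answers GET requests
    [HttpGet]
    public IEnumerable<string> GetAll()
    {
        return new[] { "value1", "value2", "value3" };
    }
}
```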


When you define the new method with the HTTP GET verb alongside the existing Get() method and run the Web API, the following error occurs: "Multiple actions were found that match the request."


Below is the WebApiConfig.cs for the code above, created by default when a new API project is generated.


So let's talk about why this error occurs when everything in the code looks fine. Look at the route defined in the config file: in Web API routing, only the controller name is mentioned in the route template, and no action (Get, Post, or any custom action name) is defined.

Here is the difference between MVC routing and Web API routing: in MVC routing, the action name is included in the URL by default, while in Web API routing, action names are not mandatory.

MVC Route: url: “{controller}/{action}/{id}”

WebApi Route: routeTemplate: “api/{controller}/{id}”

So whenever a request comes to the Web API, it is dispatched by HTTP verb, and if only the default Get or Post methods exist, a response is returned to the client.

But when we define custom methods alongside the default API methods, the same request throws an exception, because there are now multiple action methods supporting the same HTTP verb and the server cannot identify which method to execute.

This happens because we have not defined any specific action name in the Web API route.

So what is the solution to this problem? We need many custom action names alongside the default HTTP verbs in our Web API solution to meet day-to-day business needs. So the question comes to mind: are custom method names allowed in Web API or not?

The answer is "yes": of course we can add as many custom action names as we want, but some changes have to be made to the Web API routing to support them.

To support custom action method names, we have to add {action} after the controller name in the default route, as shown below:

routeTemplate: “api/{controller}/{action}/{id}”

Here is the complete WebApiConfig.cs after making the change:
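With the {action} segment added, WebApiConfig.cs reads roughly as follows:

```csharp
using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.MapHttpAttributeRoutes();

        // {action} lets the framework pick Get vs GetAll by name
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{action}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}
```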


Now let's test our methods with these changes.

1. When the request goes to the default methods:


2. When the request goes to the custom action method (GetAll):








WebApi Field Level Response Without Implementing Odata.

Download Complete Project: WebApiFieldLevelSelection

When you are writing a RESTful Web API, you often want to allow clients to feed the API a list of the fields they need, the reason being to return only useful data to the client. Say, for example, you have an entity called Product that has many properties, and the client needs only a few of them. If you return the entire object every time the client asks for a product, it unnecessarily wastes bandwidth and increases the response time. To avoid that, you can accept a list of the fields the client wants and return only those. How can you do that?

OData is the best way to achieve this: you can use the $select query option to fetch only specific fields in the response.

The problem comes when the Web API does not implement OData; how can we achieve this functionality then?

To achieve it, you can use some basic .NET constructs such as dynamic, ExpandoObject, or generic collections.

Let’s resolve the problem step by step:

  1. Create an empty Web API project with a controller named "ProductCategory" containing two Get methods: one parameterless, and one with a string parameter that accepts a comma-separated field list in the request.
  2. The Get() method returns all fields in the response, while the Get(string fields) method accepts the list of fields and returns only the desired fields.
  3. In the example below I use a hard-coded list with dummy values; you may replace it with an actual database.



 DynamicObject Method:

The DynamicObject method accepts the list of fields; I use .NET reflection to get the value of each requested field and add the name and value to a Dictionary<string, object>. This dictionary is then used in a LINQ projection to build the response.
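A sketch of the idea using reflection and a dictionary (the type and member names here are mine, not from the original project):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class FieldSelector
{
    // Projects each item onto a dictionary containing only the requested fields.
    public static IEnumerable<IDictionary<string, object>> SelectFields<T>(
        IEnumerable<T> source, string fields)
    {
        var names = fields.Split(',').Select(f => f.Trim()).ToList();

        // match requested names to properties, ignoring case
        var props = typeof(T).GetProperties()
            .Where(p => names.Contains(p.Name, StringComparer.OrdinalIgnoreCase))
            .ToList();

        return source.Select(item =>
        {
            IDictionary<string, object> result = new Dictionary<string, object>();
            foreach (var prop in props)
                result[prop.Name] = prop.GetValue(item);
            return result;
        });
    }
}
```

Serializing the resulting dictionaries to JSON yields objects whose properties are exactly the requested fields, which is what the Get(string fields) action returns.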





  1. When the user passes two fields (ProductId and ProductName) as a query string in the request, you can see that only those two fields appear in the JSON response.


  2. When the user passes three fields (ProductId, ProductName, Price) as a query string in the request, you can see that three fields now appear in the JSON response.


So you can see how to implement field-level selection in a Web API without an OData implementation.