Channel: General .NET – Brian Pedersen's Sitecore and .NET Blog

Sitecore Create/Read/Update/Delete/Copy/Move/Rename item – the beginner's guide


New Sitecorians often ask the most trivial questions, and I am happy to answer them. This question popped up lately: How do you perform basic CRUD operations on your Sitecore items? Well, it’s easy:

READ ITEM (GET ITEM):

// Get item from content database:
Item item = Sitecore.Context.ContentDatabase.GetItem("/sitecore/content/home");
// Get item from ID:
Item item = Sitecore.Context.ContentDatabase.GetItem(new Sitecore.Data.ID("{9464f2c9-8490-40e9-a95b-17f8a5128da6}"));
// Get item from named database
Item item = Sitecore.Configuration.Factory.GetDatabase("master").GetItem("/sitecore/content/home");

CREATE ITEM:

// You need to have a root item to create item under:
Item item = Sitecore.Context.ContentDatabase.GetItem("/sitecore/content/home");
// You also need a template to create the item from:
TemplateID template = new TemplateID(new ID("{434b38a2-b929-4a89-bbc8-a6b66281e014}"));
// Then you can create a new item:
Item newItem = item.Add("new item", template);
// If you wish to create an item based on a branch:
BranchId branch = new BranchId(new ID("{4f254169-7666-4c2e-8021-a05026d5a2e2}"));
Item newItem = item.Add("new item", branch);

UPDATE ITEM:

// You need to have an item to update:
// Remember to always update items in the MASTER database,
// NOT the WEB Database:
Item item = Factory.GetDatabase("master").GetItem("/sitecore/content/home");
// You then set the item in editing mode
item.Editing.BeginEdit();
try
{ 
  // Change the contents of the fields to update
  item.Fields["field"].Value = "new value";
  // End edit writes the updates to the database:
  item.Editing.EndEdit();
}
catch (Exception ex)
{
  // in case of an exception, you do not really
  // need to cancel editing, but it is good 
  // manners and it indicates that you know
  // what the code is doing
  item.Editing.CancelEdit();
}

DELETE ITEM:

// You need to have an item to delete:
// Remember to always delete items in the MASTER database,
// NOT the WEB database:
Item item = Factory.GetDatabase("master").GetItem("/sitecore/content/home");
// Remember that deleting an item also deletes its children.
// item.Recycle() moves the item to the recycle bin, whilst
// item.Delete() permanently deletes the item:
item.Recycle();

COPY ITEM:

// You need to have a destination item:
Item destinationItem = Factory.GetDatabase("master").GetItem("/sitecore/content/home");
// You also need an item to copy:
Item sourceItem = Factory.GetDatabase("master").GetItem("/sitecore/content/sourceitem");
// Then you can copy:
sourceItem.CopyTo(destinationItem, "new name");

MOVE ITEM:

// You need to have a destination item:
Item destinationItem = Factory.GetDatabase("master").GetItem("/sitecore/content/home");
// You also need an item to move:
Item sourceItem = Factory.GetDatabase("master").GetItem("/sitecore/content/sourceitem");
// Then you can move:
sourceItem.MoveTo(destinationItem);

RENAME ITEM:

// You need to have an item to rename:
Item item = Factory.GetDatabase("master").GetItem("/sitecore/content/home");
item.Editing.BeginEdit();
item.Name = "new name";
item.Editing.EndEdit();

C# Get expiry timestamp from JWT token


JWT tokens (or JSON Web Tokens) are an open standard that defines a way to transmit information between two parties in a secure manner. Identity Server 4 uses JWT as a security token.

These tokens have an expiry timestamp, and if you handle the tokens yourself, you need to read the token expiry and refresh the token if it has expired.

Microsoft has made a brilliant library, System.IdentityModel.Tokens.Jwt, to handle JWT tokens, but the package also has a lot of dependencies that were incompatible with my application, so I chose to use JWT.Net instead, as this package does not have any dependencies at all.

THE ANATOMY OF A JWT TOKEN:

Json Web Token Anatomy

A JWT token consists of a header, a payload and a signature. It is in the payload that you find the expiry timestamp, in the “exp” field. The timestamp is in the stupid UNIX timestamp format, but fear not: .NET knows how to convert the timestamp to a real DateTime.
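If you just want to peek at the payload, you can do it with nothing but the BCL. This is a minimal sketch of my own (class and method names are mine, not from any library) that base64url-decodes the middle segment and reads the “exp” claim. Note that it performs no signature validation:

```csharp
using System;
using System.Text;
using System.Text.Json;

public static class JwtPayloadReader
{
  // Minimal sketch: base64url-decode the payload segment of a JWT
  // and read the "exp" claim. No signature validation is performed!
  public static DateTimeOffset GetExpiry(string jwt)
  {
    // A JWT is "header.payload.signature" - take the middle part
    string payload = jwt.Split('.')[1];

    // Base64Url -> Base64: swap the URL-safe characters and restore padding
    string base64 = payload.Replace('-', '+').Replace('_', '/');
    base64 = base64.PadRight(base64.Length + (4 - base64.Length % 4) % 4, '=');

    string json = Encoding.UTF8.GetString(Convert.FromBase64String(base64));
    using JsonDocument doc = JsonDocument.Parse(json);
    long exp = doc.RootElement.GetProperty("exp").GetInt64();

    // "exp" is a UNIX timestamp (seconds since 1970-01-01 UTC)
    return DateTimeOffset.FromUnixTimeSeconds(exp);
  }
}
```

A library such as JWT.Net (used below) does the same decoding for you, plus validation, so use the sketch only for inspection.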

STEP 1: CREATE A PAYLOAD MODEL CLASS

JWT.Net is not as powerful as System.IdentityModel.Tokens.Jwt, so you need to create a model class of the payload section. The class, however, is very simple:

namespace MyCode
{
  public class JwtToken
  {
    public long exp { get; set; }
  }
}

STEP 2: USE JWT.Net TO GET THE EXPIRY FROM THE TOKEN PAYLOAD

The final step is to take the JWT token string, decode it to the JwtToken class, and then convert the UNIX timestamp to a local time:

using System;
using JWT;
using JWT.Algorithms;
using JWT.Serializers;

namespace MyCode
{
  public class JWTService
  {
    private IJsonSerializer _serializer = new JsonNetSerializer();
    private IDateTimeProvider _provider = new UtcDateTimeProvider();
    private IBase64UrlEncoder _urlEncoder = new JwtBase64UrlEncoder();
    private IJwtAlgorithm _algorithm = new HMACSHA256Algorithm();

    public DateTime GetExpiryTimestamp(string accessToken)
    {
      try
      {
        IJwtValidator _validator = new JwtValidator(_serializer, _provider);
        IJwtDecoder decoder = new JwtDecoder(_serializer, _validator, _urlEncoder, _algorithm);
        var token = decoder.DecodeToObject<JwtToken>(accessToken);
        DateTimeOffset dateTimeOffset = DateTimeOffset.FromUnixTimeSeconds(token.exp);
        return dateTimeOffset.LocalDateTime;
      }
      catch (TokenExpiredException)
      {
        return DateTime.MinValue;
      }
      catch (SignatureVerificationException)
      {
        return DateTime.MinValue;
      }
      catch (Exception ex)
      {
        // ... remember to handle the generic exception ...
        return DateTime.MinValue;
      }
    }
  }
}

That’s it. You are now a security expert. Happy coding.

FUNNY FINAL NOTE:

The term “JWT Token” is a redundant acronym syndrome, or RAS-syndrome. It is the use of the last word of the acronym in conjunction with the abbreviated form. It’s like saying “PIN number” or “PDF format”. In reality, when saying “JWT Token”, you are really saying “json web token token” :).


HttpClient retry mechanism with .NET Core, Polly and IHttpClientFactory


A lot of HttpClient errors are temporary and are caused by server overload, temporary network timeouts and generic glitches in the Matrix. These scenarios can be dealt with using a retry pattern. In .NET Core, the most common retry library is the Polly library:

Polly is a .NET resilience and transient-fault-handling library that allows developers to express policies such as Retry, Circuit Breaker, Timeout, Bulkhead Isolation, and Fallback in a fluent and thread-safe manner. From version 6.0.1, Polly targets .NET Standard 1.1 and 2.0+.
http://www.thepollyproject.org/

Polly makes it relatively easy to implement a retry pattern, as long as you use the HttpClient and IHttpClientFactory.

But enough talk, let's code.

STEP 1: THE NUGET PACKAGES

You need (at least) the following NuGet Packages:

  • Polly
  • Microsoft.Extensions.Http.Polly

STEP 2: CONFIGURE SERVICES IN STARTUP.CS

In the services configuration, you need to add a IHttpClientFactory and attach a PolicyHandler to the factory:

//ConfigureServices()  - Startup.cs
services.AddHttpClient("HttpClient").AddPolicyHandler(GetRetryPolicy());

private static IAsyncPolicy<HttpResponseMessage> GetRetryPolicy()
{
  return HttpPolicyExtensions
    // Handle HttpRequestExceptions, 408 and 5xx status codes
    .HandleTransientHttpError()
    // Handle 404 Not Found
    .OrResult(msg => msg.StatusCode == System.Net.HttpStatusCode.NotFound)
    // Handle 401 Unauthorized
    .OrResult(msg => msg.StatusCode == System.Net.HttpStatusCode.Unauthorized)
    // What to do if any of the above errors occur:
    // Retry 3 times, waiting 2, 4 and 8 seconds before each retry.
    .WaitAndRetryAsync(3, retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)));
}
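As a sanity check of the backoff formula: retryAttempt runs from 1 to 3, so Math.Pow(2, retryAttempt) yields waits of 2, 4 and 8 seconds. This small helper of my own (not part of Polly) just evaluates the same expression the policy uses:

```csharp
using System;
using System.Linq;

public static class BackoffCalculator
{
  // Evaluates the same expression the retry policy uses:
  // TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)) for attempts 1..retryCount
  public static TimeSpan[] GetDelays(int retryCount) =>
    Enumerable.Range(1, retryCount)
      .Select(retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)))
      .ToArray();
}
```

For three retries this produces delays of 2, 4 and 8 seconds, i.e. exponential backoff.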

STEP 3: USE THE IHttpClientFactory IN THE CALLING CLASS

The IHttpClientFactory can be injected using constructor injection. The cool part of the Polly implementation is that your HttpClient code does not contain any special retry-code, just the usual Get or Post calls:

namespace MyCode
{
  public class MyClass
  {
    private readonly IHttpClientFactory _clientFactory;

    public MyClass(IHttpClientFactory clientFactory)
    {
      _clientFactory = clientFactory;
    }

    public async Task<HttpStatusCode> PostMessage(string url, string postData)
    {
      var httpClient = _clientFactory.CreateClient("HttpClient");

      using (var content = new StringContent(postData, Encoding.UTF8, "application/json"))
      {
        var result = await httpClient.PostAsync(url, content);
        // The call was a success
        if (result.StatusCode == HttpStatusCode.Accepted)
        {
          return result.StatusCode;
        }
        // The call was not a success, do something
        else
        {
          // Do something
          return result.StatusCode;
        }
      }
    }
  }
}

The httpClient.PostAsync() will retry the post call automatically if any of the conditions described in the GetRetryPolicy() occurs. It will only return after the call is either successful or the retry count is met.


Remove duplicates from XML feed


Apparently XML isn’t dead yet, and today I received a Google Product Feed in the RSS 2.0 XML format. The feed was full of duplicates and my job is to remove them:

<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:g="http://base.google.com/ns/1.0">
    <channel>
        <item>
            <g:id>100</g:id>
            <title>Product 100</title>
            ...
            ...
        </item>
        <item>
            <g:id>100</g:id>
            <title>Product 100</title>
            ...
            ...
        </item>
        <item>
            <g:id>200</g:id>
            <title>Product 200</title>
            ...
            ...
        </item>
        <item>
            <g:id>300</g:id>
            <title>Product 300</title>
            ...
            ...
        </item>
    </channel>
</rss>

As you can see, “Product 100” appears twice.

THE SOLUTION:

A little LINQ can get you far:

using System.Xml;
using System.Xml.Linq;
using System.Linq;

var document = XDocument.Parse(theXMLString);

XNamespace g = "http://base.google.com/ns/1.0";
document.Descendants().Where(node => node.Name == "item")
    .GroupBy(node => node.Element(g + "id").Value)
    .SelectMany(node => node.Skip(1))
    .Remove();

HOW IT WORKS:

  • document.Descendants().Where(node => node.Name == "item"): Get all elements called "item".
  • GroupBy(node => node.Element(g+"id").Value): Group them by the "g:id" element.
  • SelectMany(node => node.Skip(1)): Select all of them apart from the first one in each group.
  • Remove(): Delete all that were selected.
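Wrapped in a small helper, the whole trick can be run end-to-end with nothing but System.Xml.Linq (the FeedDeduper name is mine, not from the original snippet):

```csharp
using System.Linq;
using System.Xml.Linq;

public static class FeedDeduper
{
  // Removes every <item> whose g:id has already been seen,
  // keeping the first occurrence of each id.
  public static XDocument RemoveDuplicates(XDocument document)
  {
    XNamespace g = "http://base.google.com/ns/1.0";
    document.Descendants().Where(node => node.Name == "item")
      .GroupBy(node => node.Element(g + "id").Value)
      .SelectMany(group => group.Skip(1))
      .Remove();
    return document;
  }
}
```

Running it against the sample feed above leaves one “Product 100”, one “Product 200” and one “Product 300”.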


C# Azure TelemetryClient will leak memory if not implemented as a singleton


I noticed that my classic .NET web application would leak memory after I implemented metrics for some background tasks.

Memory usage of web application

Further investigation showed that my MetricAggregationManager would not release its memory.

Object was not garbage collected

Since one of the major changes was the implementation of a TelemetryClient, and since the memory not being released was from the Microsoft.ApplicationInsights.Metrics namespace, I concluded that the problem lies within the creation of the TelemetryClient:

using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Metrics;

namespace MyCode
{
  public class BaseProcessor
  {
    private readonly TelemetryClient _telemetryClient;
    
    private BaseProcessor()
    {
      string instrumentationKey = "somekey";
      var telemetryConfiguration = new TelemetryConfiguration { InstrumentationKey = instrumentationKey };
      // This is a no-go. I should not create a new instance for every BaseProcessor
      _telemetryClient = new TelemetryClient(telemetryConfiguration);
    }
  }
}

The code above will create a new TelemetryClient for each instance of my base class. The TelemetryClient collects metrics and stores them in memory until either a set time has passed or a set number of metrics is reached, and then dumps the metrics to Application Insights.

So when the BaseProcessor is disposed, the TelemetryClient is not, leaving memory hanging, and thus a memory leak is in effect.

HOW TO SOLVE IT?

The solution is simple. All you need to do is to create a singleton pattern for your TelemetryClient. Having only one instance will allow the client to collect and send metrics in peace. Your code will be much faster (it takes a millisecond or so to create a TelemetryClient) and you will not have any memory leaks.

USE DEPENDENCY INJECTION:

In .NET Core you can add the TelemetryClient to the service collection:

private static void ConfigureServices(IServiceCollection services)
{
  // Add Application Insights
  var telemetryConfiguration = TelemetryConfiguration.CreateDefault();
  telemetryConfiguration.InstrumentationKey = "somekey";
  var telemetryClient = new TelemetryClient(telemetryConfiguration);
  services.AddSingleton(telemetryClient);
}

And then reference it using constructor injection:

using System;
using System.Runtime.Serialization;
using Microsoft.ApplicationInsights;
using Microsoft.AspNetCore.Mvc;

namespace MyCode
{
  [ApiController]
  [Route("/api/[controller]")]
  [Produces("application/json")]
  public class MyController : ControllerBase
  {
    private readonly TelemetryClient _telemetryClient;

    public MyController(TelemetryClient telemetryClient)
    {
      _telemetryClient = telemetryClient;
    }
  }
}

USE A STATIC VARIABLE:

If you do not have access to a DI framework, you could also just create a static variable:

using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;
using System.Collections.Generic;

namespace MyCode
{
  public static class TelemetryFactory
  {
    private static TelemetryClient _telemetryClient;

    public static TelemetryClient GetTelemetryClient()
    {
      if (_telemetryClient == null)
      {
        string instrumentationKey = "somekey";
        var telemetryConfiguration = new TelemetryConfiguration { InstrumentationKey = instrumentationKey };
        _telemetryClient = new TelemetryClient(telemetryConfiguration);
      }

      return _telemetryClient;
    }
  }
}

And then reference the static variable instead:

using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Metrics;

namespace MyCode
{
  public class BaseProcessor
  {
    private readonly TelemetryClient _telemetryClient;
    
    private BaseProcessor()
    {
      _telemetryClient = TelemetryFactory.GetTelemetryClient();
    }
  }
}
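One caveat with the static-variable factory above: the null check is not thread-safe, so two threads could race and create two clients. A sketch of a thread-safe variant using Lazy&lt;T&gt; (MyClient is a stand-in for TelemetryClient so the example stays self-contained):

```csharp
using System;

// Stand-in for TelemetryClient, so the sketch is self-contained.
public class MyClient { }

public static class LazyClientFactory
{
  // Lazy<T> runs the factory delegate exactly once,
  // even when multiple threads read Value concurrently.
  private static readonly Lazy<MyClient> _client =
    new Lazy<MyClient>(() => new MyClient());

  public static MyClient GetClient() => _client.Value;
}
```

Every caller gets the same instance, which is exactly the singleton behavior the TelemetryClient needs.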


Manipulating XML Google Merchant Data using C# and LINQ


A Google Merchant Data feed (also known as a Google Product Feed) can be fairly easily manipulated at import time using a little C# and LINQ.

The feed is basically an RSS 2.0 XML feed with some added properties in the namespace xmlns:g="http://base.google.com/ns/1.0".

These feeds often come from older systems and the data is entered by busy merchants, so the data can be relatively dirty, and a cleanup is required before you add the products to your product database.

The feed could look like this:

<?xml version="1.0" encoding="utf-8" ?>
<rss version="2.0" xmlns:g="http://base.google.com/ns/1.0">
    <channel>
        <title>Google product feed</title>
        <link href="https://pentia.dk" rel="alternate" type="text/html"/>
        <description>Google product feed</description>
        <item>
            <g:id><![CDATA[1123432]]></g:id>
            <title><![CDATA[Some product]]></title>
            <link><![CDATA[https://pentia.dk]]></link>
            <g:description><![CDATA[description]]></g:description>
            <g:gtin><![CDATA[5712750043243446]]></g:gtin>
            <g:mpn><![CDATA[34432-00]]></g:mpn>
            <g:image_link><![CDATA[https://pentia.dk/someimage.jpg]]></g:image_link>
            <g:product_type><![CDATA[Home > Dresses > Maxi Dresses]]></g:product_type>
            <g:condition><![CDATA[new]]></g:condition>
            <g:availability><![CDATA[in stock]]></g:availability>
            <g:price><![CDATA[15.00 USD]]></g:price>
            <g:sale_price><![CDATA[10.00 USD]]></g:sale_price>
        </item>
        ...
        ...
    </channel>
</rss>

See the full specification in the Google Merchant Center help.

Sometimes the feed contains content that you do not need, and a little XML manipulation is required.

But first things first:

STEP 1: GET THE XML FEED AND CONVERT IT INTO AN XML DOCUMENT

using System;
using System.Net;
using System.Net.Http;
using System.Xml;
using System.Xml.Linq;
using System.Linq;
using System.Dynamic;

private static HttpClient _httpClient = new HttpClient();

public static async Task<string> GetFeed(string url)
{
  using (var result = await _httpClient.GetAsync($"{url}"))
  {
    string content = await result.Content.ReadAsStringAsync();
    return content;
  }
}

public static void Run()
{
  // Get the RSS 2.0 XML data
  string feedData = GetFeed("https://url/thefeed.xml").Result;

  // Convert the data into an XDocument
  var document = XDocument.Parse(feedData);
  // Specify the Google namespace
  XNamespace g = "http://base.google.com/ns/1.0";
  // Get a list of all "item" nodes
  var items = document.Descendants().Where(node => node.Name == "item");
    
  // Now we are ready to manipulate
  // ...
  // ...
}

NOW TO THE MANIPULATIONS:

EXAMPLE 1: Remove duplicates – all products with the same ID are removed:

items.GroupBy(node => node.Element(g+"id").Value)
  .SelectMany(node => node.Skip(1))
  .Remove();

EXAMPLE 2: Remove all products out of stock:

items = document.Descendants()
  .Where(node => node.Name == "item" 
         && node.Descendants()
         .Any(desc => desc.Name == g + "availability" 
              && desc.Value == "out of stock"));
items.Remove();

EXAMPLE 3: Remove adverts not on sale (all adverts where the g:sale_price node is empty)

items = document.Descendants()
  .Where(node => node.Name == "item" 
         && node.Descendants()
         .Any(desc => desc.Name == g + "sale_price" 
         && desc.Value.Trim() == string.Empty));
items.Remove();

EXAMPLE 4: ADD TRACKING PARAMETERS TO URLS (adding query string parameters to the URL)

var items = document.Descendants().Where(node => node.Name == "item");
foreach (var item in items)
{
  string url = item.Element("link").Value;
  if (url.Contains("?"))
    item.Element("link").ReplaceNodes(new XCData(url + "&" + "utm_source=s&utm_medium=m&utm_campaign=c"));
  else  
    item.Element("link").ReplaceNodes(new XCData(url + "?" + "utm_source=s&utm_medium=m&utm_campaign=c"));
}

EXAMPLE 5: CHANGE THE TITLE (for example, if the feed contains used products, you might want to add the word "used" to the title)

var items = document.Descendants().Where(node => node.Name == "item");
foreach (var item in items)
{
  var title = "USED " + item.Element("title").Value;
  item.Element("title").ReplaceNodes(title);
}

…AND THE EXOTIC EXAMPLE: COMBINE ALL PRODUCTS IF THEY BELONG TO A PRODUCT_TYPE THAT CONTAINS 2 OR FEWER PRODUCTS

foreach(var group in items.GroupBy(node => node.Element(g+"product_type").Value))
{
  if (group.Count() <= 2)
  {
    foreach (var advert in group)
    {
      advert.Element(g+"product_type").ReplaceNodes(new XCData("Other"));
    }
  }
}

Finally you can grab the manipulated document and do what you need to do:

// Grab converted content
string convertedFeedData = document.ToString();

I hope this gives some examples of how to do much with little code.


Sending JSON with .NET Core QueueClient.SendMessageAsync


In .NET Core, Microsoft.Azure.Storage.Queue has been replaced with Azure.Storage.Queues, and the CloudQueueMessage that you added using queue.AddMessageAsync() has been replaced with the simpler queue.SendMessageAsync(string) method.

But this introduces a strange situation, when adding serialized JSON objects. If you just add the serialized object to the queue:

using Azure.Storage.Queues;
using Newtonsoft.Json;
using System;

public async Task SendObject(object someObject)
{
  await queueClient.SendMessageAsync(JsonConvert.SerializeObject(someObject));
}

The queue cannot be opened from Visual Studio. You will get an error that the string is not Base 64 encoded.

System.Private.CoreLib: The input is not a valid Base-64 string as it contains a non-base 64 character, more than two padding characters, or an illegal character among the padding characters.

So you need to Base 64 encode the serialized object before adding it to the queue:

using Azure.Storage.Queues;
using Newtonsoft.Json;
using System;

public async Task SendObject(object someObject)
{
  await queueClient.SendMessageAsync(Base64Encode(JsonConvert.SerializeObject(someObject)));
}

private static string Base64Encode(string plainText)
{
  var plainTextBytes = System.Text.Encoding.UTF8.GetBytes(plainText);
  return System.Convert.ToBase64String(plainTextBytes);
}

When reading the serialized JSON string, you do not need to Base64 decode it yourself; it will be directly readable.
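For completeness, the decode counterpart is simply the reverse operation. A small sketch (the QueueMessageEncoding class name is my own) that pairs the encode helper above with its decode, so you can round-trip a message:

```csharp
using System;
using System.Text;

public static class QueueMessageEncoding
{
  // Encode a string as Base64 (same approach as the helper above)
  public static string Base64Encode(string plainText) =>
    Convert.ToBase64String(Encoding.UTF8.GetBytes(plainText));

  // Decode a Base64 string back to the original text
  public static string Base64Decode(string base64Text) =>
    Encoding.UTF8.GetString(Convert.FromBase64String(base64Text));
}
```

Encoding and then decoding returns the original serialized JSON unchanged.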


Filtering Application Insights telemetry using a ITelemetryProcessor


Application Insights is a wonderful tool. Especially when you have a microservice or multi-application environment and you need one place for all the logs and metrics. But it’s not free, and the costs can run wild if you are not careful.

Although the message logging usually is the most expensive cost, remote dependencies and requests can take up a lot of the costs too.

You can suppress telemetry data by using an ITelemetryProcessor. The ITelemetryProcessor processes the telemetry information before it is sent to Application Insights and can be useful in many situations, including as a filter.

Take a look at this graph; the red part shows my dependencies, and you can see the drop in tracking after the filter was applied:

Application Insights Estimated Costs

This is an example of a dependency telemetry filter that excludes all successful dependencies from tracking, but allows all those that fail:

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

namespace MyCode
{
  public class DependencyTelemetryFilter : ITelemetryProcessor
  {
    private readonly ITelemetryProcessor _nextProcessor;

    public DependencyTelemetryFilter(ITelemetryProcessor nextProcessor)
    {
      _nextProcessor = nextProcessor;
    }
    
    public void Process(ITelemetry telemetry)
    {
      if (telemetry is DependencyTelemetry dependencyTelemetry)
      {
        if (dependencyTelemetry.Success == true)
        {
          return;
        }
      }

      _nextProcessor.Process(telemetry);
    }
  }
}

To add the filter, simply call the AddApplicationInsightsTelemetryProcessor method in your startup code:

private void ConfigureApplicationInsights(IServiceCollection services)
{
  services.AddApplicationInsightsTelemetryProcessor<DependencyTelemetryFilter>();
}



Sitecore LinkField TargetItem is NULL – what’s wrong?


An ancient topic that pops up once or twice every year: the TargetItem of Sitecore.Data.Links.LinkField returns NULL and you are CERTAIN that the item is published. What to do?

CASE 1: THERE IS SECURITY SET ON THE TARGETITEM

If extranet\Anonymous does not have read access to the target item, the item will exist in the WEB database but will not be available to the calling user, and the return value is NULL.

The solution is simple, use the SecurityDisabler before reading the LinkField:

using (new SecurityDisabler())
{
  LinkField linkField = myItem.Fields["myfield"];
  if (linkField != null && linkField.TargetItem != null)
  {
    // do stuff
  }
}

CASE 2: YOU FORGOT TO PUBLISH THE TEMPLATE

The item is published, but the item template is not. Go to the WEB database and find the item. If there are no fields on the item, the template is most likely missing. Remember to publish the template.

CASE 3: THE LINKFIELD IS POINTING TO AN EXTERNAL URL

The LinkField has a LinkType. If the LinkType is "internal", the TargetItem is valid. If the LinkType is "external", you have added an external URL, and you need to read the "Url" property instead.

Click here to get a method that gives you the correct link regardless of the linktype.

CASE 4: THE LINKFIELD IS POINTING TO A MEDIA LIBRARY ITEM

The TargetItem is not NULL, but it is pointing to a media library item, which has no URL. Instead, you need to use the MediaManager to get the URL of the media item.

Click here to see how to use the MediaManager.

CASE 5: THE ITEM IS ACTUALLY NOT PUBLISHED

Ahh, the most embarrassing situation. You published the item with the LinkField, but not the item that the LinkField is pointing to.

Don’t worry. This happens to all of us. More than once.


Programmatically create and delete Azure Cognitive Search Indexes from C# code


Azure Cognitive Search is the search engine of choice when using Microsoft Azure. It comes with the same search features as search engines like Elastic Search and SOLR Search (you can even use the SOLR search query language).

An example index

One cool feature is the ability to create and delete an index based on a C# model class. This is very useful, as it enables you to store the index definition in code alongside your application, and you can create a command-line interface to do index modifications easily.

The code is slightly longer than usual, but hold on, it's not that complicated at all.

STEP 1: THE NUGET PACKAGES AND REFERENCES

You need the following references:

STEP 2: CREATE A MANAGEMENT CONTEXT

This management context class will help create the index. It consists of an interface and an implementation.

namespace MyCode
{
  public interface IIndexManagementContext
  {
    /// <summary>
    /// Create or update the index
    /// </summary>
    /// <typeparam name="T">The type of the index definition for the index to create or update</typeparam>
    void CreateOrUpdateIndex<T>() where T : IIndexDefinition, new();

    /// <summary>
    /// Delete the index given by a given index definition. 
    /// </summary>
    /// <typeparam name="T">The type of the index definition for the index to delete</typeparam>
    void DeleteIndex<T>() where T : IIndexDefinition, new();
  }
}
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;

namespace MyCode
{
  public class AzureIndexManagementContext : IIndexManagementContext
  {
    // Since the index name is stored in the index definition class, but should not
    // become an index field, the index name has been marked as "JSON ignore",
    // and the field indexer should therefore ignore the index name when
    // creating the index fields.
    private class IgnoreJsonIgnoreMarkedPropertiesContractResolver : DefaultContractResolver
    {
      protected override IList<JsonProperty> CreateProperties(Type type, MemberSerialization memberSerialization)
      {
        IList<JsonProperty> properties = base.CreateProperties(type, memberSerialization);
        properties = properties.Where(p => !p.Ignored).ToList();
        return properties;
      }
    }

    private readonly ISearchServiceClient _searchServiceClient;

    public AzureIndexManagementContext(string searchServiceName, string adminApiKey)
    {
      _searchServiceClient = new SearchServiceClient(searchServiceName, new SearchCredentials(adminApiKey));
    }

    public void CreateOrUpdateIndex<T>() where T : IIndexDefinition, new()
    {
      string name = new T().IndexName;
      var definition = new Index
      {
        Name = name,
        Fields = FieldBuilder.BuildForType<T>(new IgnoreJsonIgnoreMarkedPropertiesContractResolver())
      };

      try
      {
        _searchServiceClient.Indexes.CreateOrUpdate(definition);
      }
      catch (Microsoft.Rest.Azure.CloudException e)
      {
        // TODO: Log the error and throw exception
      }
    }

    public void DeleteIndex<T>() where T : IIndexDefinition, new()
    {
      string name = new T().IndexName;
      try
      {
        _searchServiceClient.Indexes.Delete(name);
      }
      catch (Microsoft.Rest.Azure.CloudException e)
      {
        // TODO: Log the error and throw exception
      }
    }
  }
}

STEP 3: CREATE A MODEL CLASS THAT DEFINES THE INDEX

This class will define the actual index. It uses attributes like IsFilterable and IsSearchable to define the properties for the index. You create one model class per index, and this is just one example of such a model class.

This also consists of one interface and one implementation.

using Newtonsoft.Json;

namespace MyCode
{
  public interface IIndexDefinition
  {
    // The name of the index. 
    // Property is ignored when serialized to JSON
    [JsonIgnore]
    string IndexName { get; }
  }
}
using System;
using Microsoft.Azure.Search;
using System.ComponentModel.DataAnnotations;

namespace MyCode
{
  // This is just an example index. You must create your own class
  // to define your index.
  public class UserIndexDefinition : IIndexDefinition
  {
    public string IndexName => "MyUserIndex";

    // All indexes needs a key. 
    [Key]
    public string IndexKey { get; set; }

    [IsFilterable]
    public string UserID { get; set; }

    [IsFilterable, IsSearchable]
    public string Firstname { get; set; }

    [IsFilterable, IsSearchable]
    public string LastName { get; set; }

    [IsFilterable, IsSearchable]
    public string FullName { get; set; }

    [IsFilterable, IsSearchable]
    public string Email { get; set; }

    [IsSortable]
    public DateTime CreatedDate { get; set; }
  }
}

STEP 4: USE THE MANAGEMENT CONTEXT TO CREATE OR DELETE THE INDEX

First you need to create the context and the index model class. The "name" is the search service instance name, and "apikey1" is the "Primary Admin Key" as found in your Azure Search index:

Azure Search Keys

IIndexManagementContext indexManagementContext = new AzureIndexManagementContext("name", "apikey1");
IIndexDefinition userIndexDefinition = new UserIndexDefinition();

To create the index, use the following code:

var methodInfo = typeof(IIndexManagementContext).GetMethod("CreateOrUpdateIndex");
var genericMethod = methodInfo.MakeGenericMethod(userIndexDefinition.GetType());
genericMethod.Invoke(indexManagementContext, null);

To delete the index, use the following code:

var methodInfo = typeof(IIndexManagementContext).GetMethod("DeleteIndex");
var genericMethod = methodInfo.MakeGenericMethod(userIndexDefinition.GetType());
genericMethod.Invoke(indexManagementContext, null);


Azure Cognitive Search from .NET Core and C#


The Azure Cognitive Search engine is the search of choice in Microsoft Azure. The search engine can be used in a myriad of ways, and there are so many options that it can be difficult to find a starting point.

To help myself, I made this simple class that implements one of the simplest setups and is a great starting point for more advanced searches.

The class is an example of how to do a free text search in one index.

THE NUGET REFERENCES: 

The code references the following packages:

STEP 1: DEFINE A MODEL CLASS FOR THE INDEX

You must define a model class that matches the fields you wish to have returned from the index. This is my sample index called “advert”:

Sample Advert Index

And I have defined the fields relevant for my search result:

public class Advert
{
  public string Id { get; set; }
  public string Title { get; set; }
  public string Description { get; set; }
}

STEP 2: THE SAMPLE SEARCH CLASS:

This is just an example search class that implements the most basic functions. You need to specify your own URL to the search engine and the proper API key.

using Azure;
using Azure.Search.Documents;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace MyCode
{
  public class AzureSearch
  {
    public async Task<IEnumerable<string>> Search(string query)
    {
      SearchClient searchClient = CreateSearchClientForQueries("advert");
      SearchOptions options = new SearchOptions() { IncludeTotalCount = true };
      var results = await searchClient.SearchAsync<Advert>(query, options);

      List<string> documents = new List<string>();
      Console.WriteLine(results.Value.TotalCount);
      foreach (var s in results.Value.GetResults())
      {
        documents.Add(s.Document.Title);
      }
      return documents;
    }

    private static SearchClient CreateSearchClientForQueries(string indexName)
    {
      string searchServiceEndPoint = "https://mysearch.windows.net";
      string queryApiKey = "the api key";

      SearchClient searchClient = new SearchClient(new Uri(searchServiceEndPoint), indexName, new AzureKeyCredential(queryApiKey));
      return searchClient;
    }
  }
}

STEP 3: THE USAGE

Remember that the code above is just a sample of how to do a basic free-text search.

class Program
{
  static void Main(string[] args)
  {
    var result = new AzureSearch().Search("BrianCaos").Result;
    foreach (var r in result)
      Console.WriteLine(r);
  }
}

MORE ADVANCED SEARCHES:

To do more advanced searches, you usually modify the SearchOptions. For example, if you wish to apply a filter to the search, you can use the “Filter” property. This property takes a slightly different format, as “=” is written “eq”, “>” is “gt” and “<” is “lt”.

public async Task<IEnumerable<string>> Search(string query)
{
  SearchClient searchClient = CreateSearchClientForQueries("advert");
  SearchOptions options = new SearchOptions() 
  { 
    IncludeTotalCount = true, 
    Filter = "MyField eq true" 
  };
  var results = await searchClient.SearchAsync<Advert>(query, options);

  List<string> documents = new List<string>();
  Console.WriteLine(results.Value.TotalCount);
  foreach (var s in results.Value.GetResults())
  {
    documents.Add(s.Document.Title);
  }
  return documents;
}
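The operator mapping can be illustrated with a few filter strings. The field names (“Category”, “Price”) below are made up for illustration only:

```csharp
// OData-style filter strings for Azure Cognitive Search.
// The field names ("Category", "Price") are hypothetical.
string byCategory = "Category eq 'bikes'";        // Category = 'bikes'
string byPrice = "Price gt 100 and Price lt 500"; // 100 < Price < 500
string combined = byCategory + " and " + byPrice;

// The combined string goes straight into the options:
// new SearchOptions { Filter = combined };
System.Console.WriteLine(combined);
```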

PAGINATION:

To do paged searches, you use the SearchOptions again. Use the “Size” and “Skip” parameters to specify paging: “Size” determines the number of results, “Skip” determines how many results to skip before returning results. This example implements 1-based paging:

public async Task<IEnumerable<string>> Search(string query, int page, int pageSize)
{
  SearchClient searchClient = CreateSearchClientForQueries("advert");
  SearchOptions options = new SearchOptions() 
  { 
    IncludeTotalCount = true, 
    Size = pageSize,
    Skip = (page-1)*pageSize
  };
  var results = await searchClient.SearchAsync<Advert>(query, options);

  List<string> documents = new List<string>();
  Console.WriteLine(results.Value.TotalCount);
  foreach (var s in results.Value.GetResults())
  {
    documents.Add(s.Document.Title);
  }
  return documents;
}

MORE TO READ:

Write to file from multiple threads async with C# and .NET Core


There are several patterns for allowing multiple threads to write to the same file. The ReaderWriterLock class was invented for this purpose. Another classic is using semaphores and the lock statement to lock a shared resource.

This article explains how to use a ConcurrentQueue and an always-running Task to accomplish the same feat.

The theory behind this is:

  • Threads deliver what to write to the file to the ConcurrentQueue.
  • A task running in the background will read from the ConcurrentQueue and do the actual file writing.

This allows the shared resource to be accessed from one thread only (the task running in the background), while everyone else delivers their payload to a thread-safe queue.

But enough talk, let's code.

THE FILE WRITER CLASS

using System.Collections.Concurrent;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

namespace MyCode
{
  public class MultiThreadFileWriter
  {
    private static ConcurrentQueue<string> _textToWrite = new ConcurrentQueue<string>();
    private CancellationTokenSource _source = new CancellationTokenSource();
    private CancellationToken _token;

    public MultiThreadFileWriter()
    {
      _token = _source.Token;
      // This is the task that will run
      // in the background and do the actual file writing
      Task.Run(WriteToFile, _token);
    }

    /// The public method where a thread can ask for a line
    /// to be written.
    public void WriteLine(string line)
    {
      _textToWrite.Enqueue(line);
    }

    /// The actual file writer, running
    /// in the background.
    private async Task WriteToFile()
    {
      while (true)
      {
        if (_token.IsCancellationRequested)
        {
          return;
        }
        using (StreamWriter w = File.AppendText("c:\\myfile.txt"))
        {
          while (_textToWrite.TryDequeue(out string textLine))
          {
            await w.WriteLineAsync(textLine);
          }
          w.Flush();
          Thread.Sleep(100);
        }
      }
    }
  }
}

// Somewhere in the startup.cs or the Main.cs file
services.AddSingleton<MultiThreadFileWriter>();
// Now you can add the class using constructor injection
// and call the WriteLine() function from any thread without
// worrying about thread safety

Notice that my code introduces a Thread.Sleep(100) statement. This is not needed, but it can be a good idea to give your application a little breathing space, especially if there are periods where nothing is written. Remove the line if your code requires an instant file write pattern.
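If you want the pause itself to honor the cancellation token, Thread.Sleep(100) can be swapped for an awaited Task.Delay. This is a minimal, self-contained sketch of that idea, not part of the class above:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

var source = new CancellationTokenSource();
var token = source.Token;
source.CancelAfter(50); // cancel while the delay is still pending

string outcome;
try
{
  // A cancellable pause: unlike Thread.Sleep(100), this ends
  // as soon as the token is cancelled.
  await Task.Delay(100, token);
  outcome = "delay completed";
}
catch (TaskCanceledException)
{
  outcome = "delay cancelled";
}
Console.WriteLine(outcome);
```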

MORE TO READ:

C# .NET Core Solr Search – Read from a Solr index


.NET Core has excellent support for doing searches in the Solr search engine. The search language is not always logical, but the search itself is manageable. Here’s a quick tutorial on how to get started.

STEP 1: THE NUGET PACKAGES

You need the following NuGet packages:

STEP 2: IDENTIFY THE FIELDS YOU WISH TO RETURN IN THE QUERY

You don’t need to return all the fields from the Solr index, but you will need to make a model class that can map the Solr fields to object fields.

Solr Fields

From the list of fields, I map the ones that I would like to have returned, in a model class:

using SolrNet.Attributes;

namespace MyCode
{
  public class MySolrModel
  {
    [SolrField("_fullpath")]
    public string FullPath { get; set; }

    [SolrField("advertcategorytitle_s")]
    public string CategoryTitle { get; set; }

    [SolrField("advertcategorydeprecated_b")]
    public bool Deprecated { get; set; }
  }
}

STEP 3: INJECT SOLR INTO YOUR SERVICECOLLECTION

Your code needs to know the Solr URL and which model to return when the Solr instance is queried. This is an example of how to inject Solr; your setup might differ slightly:

using SolrNet;

private IServiceProvider InitializeServiceCollection()
{
  var services = new ServiceCollection()
    .AddLogging(configure => configure
      .AddConsole()
    )
    .AddSolrNet<MySolrModel>("https://[solrinstance]:8983/solr/[indexname]")
    .BuildServiceProvider();
  return services;
}

STEP 4: CREATE A SEARCH REPOSITORY TO DO SEARCHES:

Now onto the actual code. This is probably the simplest repository that can do a Solr search:

using SolrNet;
using SolrNet.Commands.Parameters;
using System.Linq;
using System.Threading.Tasks;
using System.Collections.Generic;

namespace MyCode
{
  public class MySolrRepository
  {
    private readonly ISolrReadOnlyOperations<MySolrModel> _solr;

    public MySolrRepository(ISolrReadOnlyOperations<MySolrModel> solr)
    {
      _solr = solr;
    }

    public async Task<IEnumerable<MySolrModel>> Search(string searchString)
    {
      var results = await _solr.QueryAsync(searchString);

      return results;
    }
  }
}

The Search method will do a generic search in the index that you specified when doing the dependency injection. It will not only search the fields that your model class returns, but any field marked as searchable in the index.

You can do more complex searches by modifying the QueryAsync method. This example will do field-based searches, and return only one row:

public async Task<MySolrModel> Search(string searchString)
{
  var solrResult = (await _solr.QueryAsync(new SolrMultipleCriteriaQuery(new ISolrQuery[]
    {
      new SolrQueryByField("_template", "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"),
      new SolrQueryByField("_language", "da"),
      new SolrQueryByField("_latestversion", "true"),
      new SolrQueryByField("advertcategorydeprecated_b", "false"),
      new SolrQueryByField("_title", searchString)
    }, SolrMultipleCriteriaQuery.Operator.AND), new QueryOptions { Rows = 1 }))
    .FirstOrDefault();

  return solrResult;
}

That’s it for this tutorial. Happy coding!

MORE TO READ:

C# get results from Task.WhenAll


The C# method Task.WhenAll can run a bunch of async methods in parallel and returns when every one of them has finished.

But how do you collect the return values?

Imagine that you have this pseudo-async-method:

private async Task<string> GetAsync(int number)
{
  return DoMagic();
}

And you wish to call that method 20 times, and then collect all the results in a list?

That is a 3 step rocket:

  1. Create a list of tasks to run
  2. Run the tasks in parallel using Task.WhenAll.
  3. Collect the results in a list

// Create a list of tasks to run
List<Task> tasks = new List<Task>();
for (int i = 0; i < 20; i++)
{
  tasks.Add(GetAsync(i));
}

// Run the tasks in parallel, and
// wait until all have been run
await Task.WhenAll(tasks);

// Get the values from the tasks
// and put them in a list
List<string> results = new List<string>();
foreach (var task in tasks)
{
  var result = ((Task<string>)task).Result;
  results.Add(result);
}
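Task.WhenAll also has an overload that returns the results directly: if you declare the list as List&lt;Task&lt;string&gt;&gt;, steps 2 and 3 collapse into a single await. A minimal, self-contained sketch (GetAsync below is a stand-in for your own async method):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Stand-in for the article's GetAsync(): returns the number as a string.
static Task<string> GetAsync(int number) => Task.FromResult(number.ToString());

var tasks = new List<Task<string>>();
for (int i = 0; i < 20; i++)
  tasks.Add(GetAsync(i));

// Task.WhenAll over Task<TResult> returns the results as an array,
// in the same order as the input tasks:
string[] results = await Task.WhenAll(tasks);
Console.WriteLine(results.Length);
```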

MORE TO READ:

 

C# Newtonsoft camelCasing the serialized JSON output


JSON loves to be camelCased, while the C# model class hates it. This comes down to coding style, which is – among developers – taken more seriously than politics and religion.

But fear not, with Newtonsoft (or is it newtonSoft – or NewtonSoft?) you have more than one weapon in the arsenal that will satisfy even the most religious coding style troll.

OPTION 1: THE CamelCasePropertyNamesContractResolver

The CamelCasePropertyNamesContractResolver is used alongside JsonSerializerSettings when serializing objects to JSON. It will – as the name implies – resolve any property name into nice camelCasing:

// An arbitrary class
public class MyModelClass
{
  public string FirstName { get; set; }
  public string LastName { get; set; }
  public int Age { get; set; }
}

// The actual serializing code:
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;

var myModel = new MyModelClass() { FirstName = "Arthur", LastName = "Dent", Age = 42 };
var serializedOutput = JsonConvert.SerializeObject(
  myModel, 
  new JsonSerializerSettings
  {
    ContractResolver = new CamelCasePropertyNamesContractResolver()
  }
);

The resulting JSON string will now be camelCased, even when the MyModelClass properties are not:

{
  "firstName": "Arthur",
  "lastName": "Dent",
  "age": 42
}

OPTION 2: USING THE JsonProperty ATTRIBUTE:

If you own the model class, you can control not only how the class is serialized, but also how it is deserialized, by using the JsonProperty attribute:

using Newtonsoft.Json;

public class MyModelClass
{
  [JsonProperty("firstName")]
  public string FirstName { get; set; }

  [JsonProperty("lastName")]
  public string LastName { get; set; }

  [JsonProperty("age")]
  public int Age { get; set; }
}

Both the JsonConvert.SerializeObject and the JsonConvert.DeserializeObject<T> methods will now use the JsonProperty name instead of the model class property name.

MORE TO READ:

 


Handling “415 Unsupported Media Type” in .NET Core API


The default content type for .NET Core APIs is application/json. So if the content type is left out, or another content type is used, you will get a “415 Unsupported Media Type”:

415 Unsupported Media Type from Postman

This is for example true if you develop an endpoint to capture Content Security Policy violation reports, because the violation report is sent with the application/csp-report content type.
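For reference, the body of a csp-report request looks roughly like this (all values are illustrative):

```json
{
  "csp-report": {
    "document-uri": "https://example.com/page",
    "violated-directive": "script-src 'self'",
    "blocked-uri": "https://evil.example.com/script.js",
    "original-policy": "script-src 'self'; report-uri /api/csp"
  }
}
```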

To allow another content-type, you need to specify which type(s) to receive. In ConfigureServices, add the content-type to use to the SupportedMediaTypes:

public void ConfigureServices(IServiceCollection services)
{
  ...
  ...
  // Add MVC API Endpoints
  services.AddControllers(options =>
  {
    var jsonInputFormatter = options.InputFormatters
        .OfType<Microsoft.AspNetCore.Mvc.Formatters.SystemTextJsonInputFormatter>()
        .Single();
    jsonInputFormatter.SupportedMediaTypes.Add("application/csp-report");
  }
  );
  ...
  ...
}

Now your endpoint will allow both application/json and application/csp-report content types.

BUT WHAT IF THERE IS NO CONTENT TYPE?

To allow an endpoint to be called without any content type, you must accept anything posted to the endpoint. The endpoint then reads the posted content using a StreamReader instead of receiving it as a strongly typed parameter.

The endpoint cannot be called using your Swagger documentation.

using Microsoft.AspNetCore.Mvc;
using System.IO;
using System.Text;
using System.Threading.Tasks;

namespace MyCode
{
  [ApiController]
  [Route("/api")]
  public class TestController : ControllerBase
  {
    [HttpPost("test")]
    public async Task<IActionResult> Test()
    {
      using (StreamReader reader = new StreamReader(Request.Body, Encoding.UTF8))
      {
        string message = await reader.ReadToEndAsync();
        // Do something with the received content. For 
        // test purposes, I will just output the content:
        return base.Ok(message);
      }
    }
  }
}

MORE TO READ:

Sitecore ComputedIndexField extends your SOLR index


The Sitecore SOLR index is your quick access to Sitecore content. And you can extend this access by adding computed index fields. This is a way of enriching your searches with content that is not part of your Sitecore templates, but is needed when doing quick searches.

THE SIMPLE SCENARIO: GET A FIELD FROM THE PARENT ITEM

This is a classic scenario, where the content in Sitecore is organized in a hierarchy, for example by Category/Product, and you need to search within a certain category:

Category/Product Hierarchy

In order to make a direct search for products within a certain category, you will need to extend the product template with the category ID, so you can do the search in one take. So let’s add the category ID to the product template’s SOLR search using a computed index field.

STEP 1: THE CONFIGURATION:

<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:role="http://www.sitecore.net/xmlconfig/role/" xmlns:env="http://www.sitecore.net/xmlconfig/env/">
  <sitecore>
    <contentSearch>
      <indexConfigurations>
        <defaultSolrIndexConfiguration>
          <fieldMap>
            <fieldNames hint="raw:AddFieldByFieldName">
              <field fieldName="CategoryId" returnType="guid" />
            </fieldNames>
          </fieldMap>
          <documentOptions>
            <fields hint="raw:AddComputedIndexField">          
              <field fieldName="CategoryId" returnType="string">MyCode.ComputedIndexFields.CategoryId, MyDll</field>
            </fields>
          </documentOptions>
        </defaultSolrIndexConfiguration>
      </indexConfigurations>
    </contentSearch>
  </sitecore>
</configuration>

The configuration is a 2 step process. The “fieldMap” maps field names (CategoryId in this case) to output types, in this case a GUID. The documentOptions maps the field name to a piece of code that can compute the field value. Please note that the documentOptions claims that the output type is a string, not a Guid. But don’t worry, as long as our code returns a Guid, everything will be fine.

STEP 2: THE CODE

using Sitecore.ContentSearch;
using Sitecore.ContentSearch.ComputedFields;
using Sitecore.Data.Items;

namespace MyCode.ComputedIndexFields
{
  public class CategoryId : IComputedIndexField
  {
    public object ComputeFieldValue(IIndexable indexable)
    {
      Item item = indexable as SitecoreIndexableItem;

      if (item == null)
        return null;

      if (item.TemplateName != "Product")
        return null;

      Item categoryItem = item.Parent;
      if (categoryItem.TemplateName != "Category")
        return null;

      return categoryItem.ID.ToGuid();
    }

    public string FieldName
    {
      get;
      set;
    }

    public string ReturnType
    {
      get;
      set;
    }
  }
}

The code is equally straightforward. If the code returns NULL, no value will be added.

The code first checks whether the item being indexed is a product. If not, the code is skipped. If the parent item is not a category, the code is also skipped. Only when the item is a product and the parent is a category is the category ID added to the index.

You will need to re-index your SOLR index. When the index is updated, you will find a “CategoryId” field on all of the “Product” templates in the SOLR index.

MORE TO READ:

C# Remove specific Querystring parameters from URL


These 2 extension methods will remove specific query string parameters from a URL in a safe manner.

METHOD #1: SPECIFY THE PARAMETERS THAT SHOULD GO (NEGATIVE LIST):

using System;
using System.Linq;
using System.Web;

namespace MyCode
{
  public static class UrlExtension
  {
    public static string RemoveQueryStringsFromUrl(this string url, string[] keys)
    {
      if (!url.Contains("?"))
        return url;

      string[] urlParts = url.ToLower().Split('?');
      try
      {
        var querystrings = HttpUtility.ParseQueryString(urlParts[1]);
        foreach (string key in keys)
          querystrings.Remove(key.ToLower());

        if (querystrings.Count > 0)
          return urlParts[0] 
            + "?" 
            + string.Join("&", querystrings.AllKeys.Select(c => c.ToString() + "=" + querystrings[c.ToString()]));
        else
          return urlParts[0];
      }
      catch (NullReferenceException)
      {
        return urlParts[0];
      }
    }
  }
}

Usage/Test cases:

string url = "https://briancaos.wordpress.com/page/?id=1&p=2";
string url2 = url.RemoveQueryStringsFromUrl(new string[] {"p"});
string url3 = url.RemoveQueryStringsFromUrl(new string[] {"p", "id"});

//Result: 
// https://briancaos.wordpress.com/page/?id=1
// https://briancaos.wordpress.com/page/

METHOD #2: SPECIFY THE PARAMETERS THAT MAY STAY (POSITIVE LIST):

using System;
using System.Linq;
using System.Web;

namespace MyCode
{
  public static class UrlExtension
  {
    public static string RemoveQueryStringsFromUrlWithPositiveList(this string url, string[] allowedKeys)
    {
      if (!url.Contains("?"))
        return url;

      string[] urlParts = url.ToLower().Split('?');
      try
      {
        var querystrings = HttpUtility.ParseQueryString(urlParts[1]);
        var keysToRemove = querystrings.AllKeys.Except(allowedKeys);

        foreach (string key in keysToRemove)
          querystrings.Remove(key);

        if (querystrings.Count > 0)
          return urlParts[0] 
            + "?" 
            + string.Join("&", querystrings.AllKeys.Select(c => c.ToString() + "=" + querystrings[c.ToString()]));
        else
          return urlParts[0];
      }
      catch (NullReferenceException)
      {
        return urlParts[0];
      }
    }
  }
}

Usage/Test cases:

string url = "https://briancaos.wordpress.com/page/?id=1&p=2";
string url2 = url.RemoveQueryStringsFromUrlWithPositiveList(new string[] {"p"});
string url3 = url.RemoveQueryStringsFromUrlWithPositiveList(new string[] {"p", "id"});

//Result: 
// https://briancaos.wordpress.com/page/?p=2
// https://briancaos.wordpress.com/page/?id=1&p=2

MORE TO READ:

C# Using Dapper as your SQL framework in .NET Core


Dapper is an easy-to-use object mapper for .NET and .NET Core, and it can be used in a variety of ways. I use Dapper instead of Entity Framework because it makes my code less complex.

BASICS OF DAPPER: THE OBJECT MAPPER

Basically, Dapper is an object mapper. This means that Dapper will map SQL rows to C# model classes 1-1. So if you wish to select data from an SQL table, you create a class containing the exact same fields as the SQL table:

Contact Form Table

So for that contact form table above, I can create a corresponding ContactForm model class:

using System;
using Dapper.Contrib.Extensions;

namespace MyCode
{
  [Table("dbo.ContactForms")]
  public class ContactForm
  {
    [Key]
    public int Id { get; set; }

    public DateTime Created { get; set; }
    public string Name { get; set; }
    public string Phone { get; set; }
    public string Email { get; set; }
    public int? ZipCode { get; set; }
    public string Comment { get; set; }
    public string IpAddress { get; set; }
    public string UserAgent { get; set; }
  }
}

By including Dapper.Contrib.Extensions, I can mark the table key and the table itself in my code. Nullable fields like the zip code are also nullable in my class.

SIMPLE SQL SELECT WITH DAPPER

Now with the mapping in place, selecting and returning a class is super easy:

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using Dapper;
using Dapper.Contrib.Extensions;

public class ContactFormRepository
{
  public IEnumerable<ContactForm> Get()
  {
    using var connection = new SqlConnection("some_sql_connection_string");
    return connection.Query<ContactForm>("select * from ContactForms").ToList();
  }
}

Dapper will map the ContactForms table to my ContactForm model class.

Selecting with parameters is equally easy, presented here in 2 different forms; one method returning only one row, another method returning all rows matching the parameter:

public ContactForm Get(int id)
{
    using var connection = new SqlConnection("some_sql_connection_string");
    return connection.QuerySingleOrDefault<ContactForm>("select * from ContactForms where id = @Id", 
      new { Id = id } 
    );
}

public IEnumerable<ContactForm> Get(string email)
{
    using var connection = new SqlConnection("some_sql_connection_string");
    return connection.Query<ContactForm>("select * from ContactForms where email = @Email", 
      new { Email = email } 
    ).ToList();
}

INSERT STATEMENT WITH DAPPER:

When inserting, you decide if you wish to use your model class (great for exact inserts) or a dynamic class. The latter is great when you have fields that are autogenerated by the SQL server, like auto-incrementing keys or dates that are set to GETDATE():

public void Insert(ContactForm contactForm)
{
    using var connection = new SqlConnection("some_sql_connection_string");
    connection.Insert(contactForm);
}

public void Insert(string name, string email)
{
    using var connection = new SqlConnection("some_sql_connection_string");
    connection.Execute("insert into ContactForms (name, email) values (@Name, @Email)", new { Name = name, Email = email });
}

USING STORED PROCEDURES:

This is also easy, just give the name of the stored procedure. In this example I will also use the DynamicParameters just to be fancy:

public IEnumerable<ContactForm> Get(string email)
{
    using var connection = new SqlConnection("some_sql_connection_string");
    var parameters = new DynamicParameters();
    parameters.Add("@Email", email);
    return connection.Query<ContactForm>("Stored_Procedure_Name", parameters, commandType: CommandType.StoredProcedure).ToList();
}

The same goes with insert using a stored procedure:

public void Insert(string name, string email)
{
    using var connection = new SqlConnection("some_sql_connection_string");
    var parameters = new DynamicParameters();
    parameters.Add("@Name", name);
    parameters.Add("@Email", email);
    connection.Execute("Stored_Procedure_Name", parameters, commandType: CommandType.StoredProcedure);
}

That’s basically it. Very easy to use.

MORE TO READ:

Azure.Storage.Queues QueueMessage Deserialize JSON with .NET Core


The documentation around the .NET QueueMessage is a little fuzzy, so the available properties might differ depending on the version of your NuGet libraries. This article uses Azure.Storage.Queues, Version=12.7.0.0.

If you, like me, have systems writing JSON messages to the queue, you also struggle with converting these queue messages back to an object when reading from the queue.

But with a little help from NewtonSoft, it does not have to be that difficult.

Imagine that you wish to get this simple message from the queue:

A simple JSON message added to the queue via Visual Studio

This message can be mapped to this class:

using Newtonsoft.Json;

namespace MyCode
{
  public class HelloWorld
  {
    [JsonProperty("title")]
    public string Title { get; set; }

    [JsonProperty("text")]
    public string Text { get; set; }
  }
}

CHALLENGE #1: IS THE CONTENT ENCODED?

Now, when you read the message from the queue, you might get a surprise, as the original message is nowhere to be seen:

The message in the MessageText property?

Yes, when adding messages from Visual Studio, the content is Base64-encoded. So first the message needs to be decoded, and then converted into an object.

THE EXTENSION METHOD:

This extension method will do the heavy lifting for you:

using Azure.Storage.Queues.Models;
using Newtonsoft.Json;
using System;
using System.Text;

namespace MyCode
{
  public static class QueueMessageExtensions
  {
    public static string AsString(this QueueMessage message)
    {
      byte[] data = Convert.FromBase64String(message.MessageText);
      return Encoding.UTF8.GetString(data);
    }

    public static T As<T>(this QueueMessage message) where T : class
    {
      byte[] data = Convert.FromBase64String(message.MessageText);
      string json = Encoding.UTF8.GetString(data);
      return Deserialize<T>(json, true);
    }

    private static T Deserialize<T>(string json, bool ignoreMissingMembersInObject) where T : class
    {
      T deserializedObject;
      MissingMemberHandling missingMemberHandling = MissingMemberHandling.Error;
      if (ignoreMissingMembersInObject)
        missingMemberHandling = MissingMemberHandling.Ignore;
      deserializedObject = JsonConvert.DeserializeObject<T>(json, new JsonSerializerSettings { MissingMemberHandling = missingMemberHandling, });
      return deserializedObject;
    }

  }
}

USAGE:

// This is an arbitrary class that returns a list of messages from 
// an Azure Queue. You have your own class here
IEnumerable<QueueMessage> messages = await _queueRepository.Get();

foreach (var message in messages)
{
  // Use the extension method to convert the message to the
  // HelloWorld type:
  var obj = message.As<HelloWorld>();
  // You can now access the properties:
  _logger.LogInformation($"{obj.Title}, {obj.Text}");
}

MORE TO READ:
