Archive for the ‘Internet’ Category

Hey Flickr, Where Did My Statistics Go? The CouchBase Connection. Part IV

We interrupt this series to take a side trip concerning application logging.  The series begins here.  NLog is an excellent open source logging project available from NuGet and other sources.  The sample code for this blog post can be found HERE.  Although NLog is a kitchen sink implementation (it can log to files, event logs, databases, SMTP, whatever) I will be using it as a simple way to log text information to files.  Once you have created a Visual Studio project, open Tools / NuGet Package Manager / Package Manager Console.  From here you can add NLog to your project with the command:

PM> Install-Package NLog

This will install NLog, modify your project and add a project reference for NLog.  Although NLog targets and rules can be managed programmatically, I normally use the configuration file:

NLog.Config

You can set this up using the Package Manager Console with the command:

PM> Install-Package NLog.Config

Configuration File Setup

The NLog config file is then modified to define "targets" and "rules".  The former defines where log entries are written and the latter defines which log levels are written to which targets.  A file based target section might look like:

<targets>
  <target name="debugfile"  xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Debug.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true"/>
  <target name="logfile"    xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Info.txt"  archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true"/>
  <target name="Errorsfile" xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Error.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true"/>
  <target name="Fatalfile"  xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Fatal.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true"/>
</targets>

where name is the symbolic name of the target and xsi:type defines this as a file target.  If you are controlling the layout of the log entry yourself, set layout to "${message}".  Given that xsi:type is File, we can use fileName to set the physical location of the log file.  The value of fileName can be changed programmatically at runtime; a hedged sketch of that follows.
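If you ever do need to redirect a target at runtime, something like the following should work (a sketch only; it assumes the target is named "logfile" as in the configuration shown here, and the new path is hypothetical):

using NLog;
using NLog.Targets;

// Look up the running FileTarget by its config name and point it at a new file.
var target = (FileTarget)LogManager.Configuration.FindTargetByName("logfile");
target.FileName = "C:/temp/DLR.Flickr/Info_Alternate.txt";   // hypothetical new path
LogManager.ReconfigExistingLoggers();                        // apply the change to existing loggers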

NLog defines five log levels:  Debug, Info, Warn, Error and Fatal.  These levels are defined in an enum and the names have no special significance except as you define them.  The Rules section of the config file defines which log levels are written to which targets.  A given level can be written to zero or many targets.  My Rules section typically looks like:

<rules>
  <logger name="*" minlevel="Debug" maxlevel="Debug" writeTo="debugfile" />
  <logger name="*" minlevel="Info"  maxlevel="Info"  writeTo="logfile" />
  <logger name="*" minlevel="Warn"  maxlevel="Warn"  writeTo="Warnfile" />
  <logger name="*" minlevel="Error" maxlevel="Error" writeTo="Errorfile" />
  <logger name="*" minlevel="Fatal" maxlevel="Fatal" writeTo="Fatalfile" />
</rules>

More complex rules like the following are possible:

<logger name="*" minlevel="Error" maxlevel="Error" writeTo="Errorfile" />

<logger name="*" minlevel="Error" maxlevel="Fatal" writeTo="Fatalfile" />

NLog initialization at runtime is very simple.  Typically you can use a single line like:

using NLog;

static Logger _LogEngine = LogManager.GetLogger("Log Name");

This need only be called once.

The simplest NLog log call (given the definition layout="${message}") would look like:

_LogEngine.Log(NLog.LogLevel.Info, "Info Message");

We can extend this quite simply.  I have a single-class extension of NLog on GitHub; you can find it here.  Specifically I have provided wrapper methods for each NLog.LogLevel and support for exception stack dumps.  Include this file in your project (after installing NLog and NLog.Config) and then you can write:

using DLR.Util;

namespace DLR.CCDB.ConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            string _CorrelationID = System.Guid.NewGuid().ToString();
            CCDB cbase = new CCDB { CorrelationID = _CorrelationID };
            cbase.Client = CouchbaseManager.Instance;

            NLS.Info(_CorrelationID, "Helllo, CouchBase");

            try
            {
                throw new ApplicationException("My Exception");
            }
            catch (Exception x)
            {
                NLS.Error(_CorrelationID, "Error", x.Message);
                //OR
                NLS.Error(_CorrelationID, "Error", x);
            }
        }
    }
}

_CorrelationID is supported here so that in multiuser situations (like Web API) we can identify which messages were written by which task.  In a console app this is not strictly necessary.  The call to NLS.Info results in an output log line like:

DLR|20140909-152031037|2f8f89ce-51de-4269-9ae0-9313ad2a0243|Helllo, CouchBase|

where:

  • DLR is the Log Engine name (more than one engine can write to a given log file);
  • 20140909-152031037 is the terse timestamp of the form YYYYMMDD-HHMMSSmmm;
  • 2f8f89ce-51de-4269-9ae0-9313ad2a0243 is the _CorrelationID; and
  • Helllo, CouchBase is our text message.

My call:

NLS.Error(_CorrelationID, "Error", x);

would result in a log line like:

DLR|20140909-152544801|46e656cd-4e17-4285-a5f3-e1484dad2995|Error|Error Data. Message: [My Exception]Stack Trace:  DLR.CCDB.ConsoleApp.Program.Main(String[] args)|

where Error is my message;

Error Data. Message: [My Exception] is the Message in ApplicationException; and

Stack Trace:  DLR.CCDB.ConsoleApp.Program.Main(String[] args)| is the stack dump.

NLS will handle nested exceptions and stack dumps but we are only showing a single un-nested exception in this example.
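For the curious, here is a rough sketch of what an NLS-style Error wrapper might look like.  This is illustrative only; the real implementation lives in the DLR.Util file on GitHub and its exact output format may differ.

using System;
using NLog;

public static class NLSSketch
{
    static readonly Logger _LogEngine = LogManager.GetLogger("DLR");

    public static void Error(string correlationID, string message, Exception x)
    {
        var line = new System.Text.StringBuilder();
        // Terse timestamp plus correlation id plus caller's message.
        line.AppendFormat("DLR|{0:yyyyMMdd-HHmmssfff}|{1}|{2}|", DateTime.Now, correlationID, message);
        // Walk the exception chain, appending each Message and stack trace.
        for (var e = x; e != null; e = e.InnerException)
        {
            line.AppendFormat("Error Data. Message: [{0}]Stack Trace: {1}|", e.Message, e.StackTrace);
        }
        _LogEngine.Log(LogLevel.Error, line.ToString());
    }
}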

OK! That’s it for this post.  We will, hopefully, return to CouchBase and the Flickr API in the next post.

Posted 2014/09/09 by Cloud2013 in GitHub, Microsoft, NLog, NuGet


Hey Flickr, Where Did My Statistics Go? The CouchBase Connection. Part III

This is the third post in this series on how to harvest statistical data from your (or a friend’s) Flickr Picture View data.  The series begins here.  Today we are looking at CouchBase as a noSQL database to store our Flickr data.  This post will get as far as getting the shell of a console application up and will defer example code samples to the next blog post.


CouchBase is a commercialized version of the public domain project Apache CouchDB.  CouchDB is open source and CouchBase is not; commercial development with CouchBase is NOT free.  Both support API libraries for .Net and Java.  The CouchDB wiki lists five active C# libraries for CouchDB, and CouchBase supports many API libraries including .Net and Java.  I have written about CouchDB and Ruby in a prior series of posts which can be found here.  Both systems support multi-server nodes and concurrency controls; neither of these features will be touched on in this post.  Our focus here will be on an introduction to the minimum necessary administration skills and API coding to help us with our goal of storing information about Users, Photos and View Counts through time.  Along the way we will also discuss JSON serialization / deserialization using Newtonsoft.Json and open source application logging with NLog.  I will defer the discussion of CouchBase views to a subsequent post.

Data Model Overview.

Ultimately we want to store information about each User.  For each user we will store information for one or more Photos, and for each Photo, information on one or more View Counts.  Users and Photos have their own primary key, supplied as an ID field from Flickr.  Our view counts will be collected each day and the primary key of the Counts is represented by the date the view count data was collected.  This could be modeled in a traditional RDBMS in third normal form, but this pattern is also most naturally represented as a nesting of lists of objects within container objects.  Rather than say we are storing nested objects it is more typical today to say that this data can be thought of as a structured Document.  The most natural way to store and access this data is by simple (or compounds of) primary keys.  When we get to the point where we are doing manipulation of the data for statistical analysis and summary, our most natural mode of access will be by a key composed of the User ID and Photo ID, iterating their view counts by Date ID (or Date ID range).  A very simple way to model this is with a Key/Value noSQL database based on document storage (aka a Document Store).  We could call this design an object oriented database model but that would be old-fashioned.  Here is the visual of the data model:

The full Document could be represented as a compound C# object:

class CObject
{
    public CUser User { get; set; }
    public List<CPhoto> Photo { get; set; }
}

public class CUser
{
    public string FullName { get; set; }
    public string Username { get; set; }
    public string UserId { get; set; }
    public string APIKey { get; set; }
    public string SharedSecret { get; set; }
    public string Token { get; set; }
    public string TokenSecret { get; set; }
}

public class CPhoto
{
    public string ID { get; set; }
    public string Title { get; set; }
    public string ThumbnailURL { get; set; }
    public List<CView> Views { get; set; }
}

public class CView
{
    public string Date { get; set; }
    public int Views { get; set; }
}

In this post we will set up a single server CouchBase instance and develop a single user application to manipulate documents in a CouchBase “bucket”.  We will not model the complete object in this post; to make things as simple as possible we will be working only with a simplified version of the CPhoto object (Document) while we get our feet wet on CouchBase CRUD operations and simple CouchBase server administration.
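To make the Document idea concrete, here is a quick sketch of what a simplified CPhoto looks like once serialized with Newtonsoft.Json (the values below are made up for illustration):

using System.Collections.Generic;
using Newtonsoft.Json;

var photo = new CPhoto
{
    ID = "15036147022",                            // hypothetical Flickr photo id
    Title = "Example Title",
    ThumbnailURL = "http://example.com/thumb.jpg",
    Views = new List<CView> { new CView { Date = "20140909", Views = 42 } }
};
string json = JsonConvert.SerializeObject(photo);
// json is now something like:
// {"ID":"15036147022","Title":"Example Title","ThumbnailURL":"http://example.com/thumb.jpg","Views":[{"Date":"20140909","Views":42}]}

It is a JSON string of this shape which will ultimately be stored as the document value in our CouchBase bucket.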


Getting The Stuff You Need.

Shopping List

Setting up a single node Windows CouchBase Server and doing basic administration is easy and fun.  Download and run the CouchBase installer from here.  Fred willing, all will go well and you will be placed at a local administration page in your default browser.  Bookmark this page and note the port number that has been assigned to the default instance of CouchBase.  On first use you will need to create an administrator username and password.  I left the defaults alone for the Cluster and Server Nodes.  Select the Data Buckets tab.  You will need to decrease the quota usage limits for the default bucket.  With the space you freed up, create a new bucket called “DLR.Flickr.Example1”.  Here is what mine looks like:

[screenshot]

And Here is the Bucket Settings Page:

[screenshot]

OK.  Now take some time and review the documentation for the .Net SDK here.  You can read through or code along with the examples given there. Done? Now let’s get to work.

Starting to Code CouchBase

Open Visual Studio and select Tools/NuGet Package Manager/Package Manager Console and enter the command:

Install-Package CouchbaseNetClient

Create a new Visual Studio Console application.  I called mine:

DLR.CCDB.ConsoleApp and set the default namespace to DLR.CCDB.  Add references to:

Couchbase

Enyim.Memcached

Newtonsoft.Json

[If you cannot resolve Newtonsoft.Json: right click on the root of the project and select Manage NuGet Packages.  Search on Newtonsoft.Json.  Select Install on Json.NET.  Now try adding the Newtonsoft reference again.]

Now is a good time to add the open source logging solution to your project.  Select: Manage NuGet Packages.  Search on NLog.  Install both NLog and NLog.Config.

Open your App.config project file.  You will need to make several changes.  Here is what mine looks like after the changes.

Red items were added manually by me (you) and the Blue entries were added by the NuGet Package Manager during the steps you followed above.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="couchbase" type="Couchbase.Configuration.CouchbaseClientSection, Couchbase" />
  </configSections>
  <couchbase>
    <servers bucket="DLR.Flickr.Example1" bucketPassword="">
      <add uri="http://127.0.0.1:8091/pools" />
    </servers>
  </couchbase>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-6.0.0.0" newVersion="6.0.0.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="Enyim.Caching" publicKeyToken="05e9c6b5a9ec94c2" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-1.3.7.0" newVersion="1.3.7.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="NLog" publicKeyToken="5120e14c03d0593c" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-3.1.0.0" newVersion="3.1.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>

We are most interested in this section:

<servers bucket="DLR.Flickr.Example1" bucketPassword="">
  <add uri="http://127.0.0.1:8091/pools" />
</servers>

 

bucket="DLR.Flickr.Example1"

This sets your default API calls to the bucket “DLR.Flickr.Example1” which you created above.  Although we will not develop the theme here, you can override the default bucket at runtime to deal with calls to multiple buckets in the same program; a rough sketch follows.
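Something like this should work if you ever need a second client against a different bucket (the configuration class and property names are from the 1.x .Net SDK as I understand it; double check them against the SDK documentation, and the second bucket name is hypothetical):

using System;
using Couchbase;
using Couchbase.Configuration;

// Build a configuration in code rather than taking the one from App.config.
var otherConfig = new CouchbaseClientConfiguration();
otherConfig.Urls.Add(new Uri("http://127.0.0.1:8091/pools"));
otherConfig.Bucket = "DLR.Flickr.Example2";   // hypothetical second bucket
otherConfig.BucketPassword = "";
var otherClient = new CouchbaseClient(otherConfig);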

uri="http://127.0.0.1:8091/pools"

This sets your local node.  The http://127.0.0.1 is a constant for development projects (localhost) and 8091 is the port assigned to CouchBase during installation (double check this value on your system by navigating to the CouchBase Console page you added to your favorites list above).

While we are here let’s make some changes (without explanation why at this point) in NLog.Config (which was created when you installed NLog above).  Replace the entire contents of the file with (mind the wrap):

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      throwExceptions="true"
      internalLogFile="C:/temp/NLog/WEBAPI/Internal.txt"
      internalLogLevel="Info">
  <targets>
    <target name="debugfile"  xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Debug.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true" />
    <target name="logfile"    xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Info.txt"  archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true" />
    <target name="Errorsfile" xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Error.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true" />
    <target name="Fatalfile"  xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Fatal.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true" />
  </targets>
  <rules>
    <logger name="*" minlevel="Debug" maxlevel="Debug" writeTo="debugfile" />
    <logger name="*" minlevel="Info"  maxlevel="Fatal" writeTo="logfile" />
    <logger name="*" minlevel="Error" maxlevel="Fatal" writeTo="Errorsfile" />
    <logger name="*" minlevel="Fatal" maxlevel="Fatal" writeTo="Fatalfile" />
  </rules>
</nlog>

We will get back to the details of this configuration file in the next post.

Write the minimum test code possible. 

Replace the contents of Program.cs with

using System;
using Couchbase;
using Enyim.Caching.Memcached;
using Newtonsoft.Json;
using NLog;

namespace DLR.CCDB.ConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            CouchbaseClient client = new CouchbaseClient();
        }
    }
}

Build and run.  You should have no errors, and client should not equal null after the call:

CouchbaseClient client=new CouchbaseClient();
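If you want to go one step further before stopping, a minimal round-trip sanity check against the default bucket might look like this (a sketch only; full CRUD examples come in the next post, and the key name is arbitrary):

using Enyim.Caching.Memcached;   // for StoreMode

// Store a string under a key in the default bucket, then read it back.
bool stored = client.Store(StoreMode.Set, "hello-key", "Hello, CouchBase");
var roundTrip = client.Get("hello-key") as string;   // should come back as "Hello, CouchBase"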

Success?  Take a break and we will continue next week.

 

On Public Access to the Internet Archive’s Grateful Dead Collection

Although I have covered this topic in a technical blog some time ago, the programmatic approach is not for everyone.  As one correspondent wrote:

Cloud,

I just want to listen to Dark Star on my IPOD, not get a computer science degree.

paul6

Well, if you just want to listen to Dark Star (or Fire On the Mountain for that matter) just go here or here.  Right click on any track listed, select Save As and Bear’s your uncle.  But if your needs go deeper and you don’t want to write code, here, dear reader, is a simple explanation involving nothing more technical than a browser (Chrome will do fine but Internet Explorer will work also) and a text editor (Notepad for example).

Know Your Rights

The first thing we do, let’s kill all the lawyers…

                                              Shakespeare, Henry VI, Part 2

Please read the statement from the Internet Archive on use of the Grateful Dead recordings stored on the Internet Archive here.  Understanding and interpretation of same is up to you, not me. I am not a lawyer, I don’t even play one on television, thank you.


Doing a Single Track

Let’s say we like an early Dark Star, say Grateful Dead Live at Carousel Ballroom on 1968-01-17 for example.  Cruising to the Internet Archive we find the concert page for the Soundboard version we like the best:

[screenshot]

Oops, no downloads on this concert.  Let’s take a closer look at the browser screen, at the URL text box at the top:

[screenshot]

the “https://archive.org/details” part is the same on all concerts but the other part is the unique key for this particular version of this concert:

gd1968-01-17.sbd.jeff.3927.shnf

I will call this the IAKEY.  We will use this key to delve into the Internet Archive a little more deeply.  Copy this text string from the URL text box into a Notepad text document.  Now we need a list of all the mp3 files associated with this recording of this concert.  The Internet Archive stores these in an XML file uniquely defined for each recording.  Copy the following template string into a second Notepad text document:

http://www.archive.org/download/{IAKEY}/{IAKEY}_files.xml

Now using text replace, replace the string “{IAKEY}” with the IAKEY we got previously.  When you are done the string will look like:

http://www.archive.org/download/gd1968-01-17.sbd.jeff.3927.shnf/gd1968-01-17.sbd.jeff.3927.shnf_files.xml

Open a new browser window and copy this new string into the URL text box and press enter.  Here is what you get back:

[screenshot]

Search for “Dark Star” on the screen and locate the mp3 file name.  I will call this the TrackMP3 (ignore any mp3 entries with the digits 64 in the mp3 file name).  In this case the mp3 for Dark Star is:

gd68-01-17d1t03_vbr.mp3

Open up your Notepad document and paste this string into it:

http://www.archive.org/download/{IAKEY}/{TrackMP3}

Now in this string replace {IAKEY} with gd1968-01-17.sbd.jeff.3927.shnf as we did before and replace {TrackMP3} with gd68-01-17d1t03_vbr.mp3

The final text string will now look like:

http://www.archive.org/download/gd1968-01-17.sbd.jeff.3927.shnf/gd68-01-17d1t03_vbr.mp3

Open up a new browser tab, paste this string into the URL box and press enter.  There you are: Dark Star will start playing; just right click, select “Save As” and Bear’s your uncle.


Doing a Group of Tracks

This gets tedious right away.  You can improve the process, skipping the part where the file starts playing first, by using a little HTML code.  I am not going to explain the whole process but it works like this:

Follow the steps above and get a couple (or all) of the TrackMP3 strings.  “Turn on Your Love Light” has a TrackMP3 of gd68-01-17d1t01_vbr.mp3, Cryptical Envelopment has a TrackMP3 of gd68-01-17d2t01_vbr.mp3, and Dark Star we already know.

Open a text editor and enter the lines:

<html><body>

<a href="http://www.archive.org/download/{IAKEY}/{TrackMP3}">{title}</a>

<a href="http://www.archive.org/download/{IAKEY}/{TrackMP3}">{title}</a>

<a href="http://www.archive.org/download/{IAKEY}/{TrackMP3}">{title}</a>

</body></html>

Now replace {IAKEY}, {TrackMP3} and {title} in each line with the strings you got above and save the text file with the extension .html (NOT as a .txt file).  Load the HTML file into the browser (drag and drop will work) and you will see in the browser:

Turn on Your Love Light

Cryptical Envelopment

Dark Star

Right click on each of these in turn and click Save As… I hope you are getting the picture now.


The Day We Fight Back



Posted 2014/02/11 by Cloud2013 in Internet


John McAfee Strikes: How To Uninstall McAfee Antivirus Software

News Item from the New York Times:

Last week, Intel, which acquired McAfee in 2011, announced it was killing off the McAfee brand altogether, keeping only the company’s red shield logo intact. McAfee will now be known as Intel Security.

Analysts say the move is an apparent effort to separate the brand from its antivirus roots and from its founder, John McAfee, who has gained notoriety for behavior that, at last count, included going on the lam after his neighbor in Belize was found dead, an arrest in Guatemala, a deportation to Miami and, finally, an expletive-laced video featuring Mr. McAfee trying to uninstall McAfee software while surrounded by scantily clad women, guns and “bath salts.”

The evil video can be found here:


 

Enjoy!

Posted 2014/01/17 by Cloud2013 in Internet, mcafee, Thought Control


Poor Man’s Delegation: Web API Version 2, CORS and System.IdentityModel.Tokens.Jwt Part 2

In Part 1 of this post I reviewed the goals of the project and how to create a simple JWT object.  In today’s post I will cover, first, how to decode (validate) a JWT object and assign the claims principal created during that process to a thread in a running Windows application.  Secondly, I will cover the steps for using Web API to consume the JWT object and use its claims.  I will also review the new attribute-based CORS filter (thanks again Brock Allen).

Creating a Claims Principal from a JWT Object

Recall from Part 1 that we created a JWT object using a System.IdentityModel.Tokens.SecurityTokenDescriptor which we created specifically for our application.  Having received a JWT object, the JWT is validated by creating a System.IdentityModel.Tokens.TokenValidationParameters object and then applying these parameters to the JWT object we wish to decode.  Note that the TokenValidationParameters mirror the values we supplied to the System.IdentityModel.Tokens.SecurityTokenDescriptor object to create the JWT.

var validationParameters = new System.IdentityModel.Tokens.TokenValidationParameters()
{
    AllowedAudience = Constants.AllowedAudience,
    SigningToken = Constants.BinarySecretSecurityToken,
    ValidIssuer = Constants.ValidIssuer
};

Given a JWT object called jwt we validate as:

var tokenHandler = new System.IdentityModel.Tokens.JwtSecurityTokenHandler();
tokenHandler.RequireExpirationTime = true;

var principal = tokenHandler.ValidateToken(jwt, validationParameters);

The call to ValidateToken will fail with an exception if the TokenValidationParameters values do not match those used to create the object.  Further, if the system time of the decoding system is outside of the Lifetime parameter of the JWT an exception will be thrown.  If all goes well the output of the call to ValidateToken is a ClaimsPrincipal object.  The ClaimsPrincipal object is a Framework 4.5 object which is the basis of the new look of Windows Framework security.  So far so good, but at this point you are NOT yet delegating to the new claims principal.  To do that you MUST assign the ClaimsPrincipal object to your thread and to HttpContext.Current.User:

Thread.CurrentPrincipal = principal;
System.Web.HttpContext.Current.User = Thread.CurrentPrincipal;
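As a quick illustrative check (nothing here beyond the BCL; the exact authentication type string depends on the token handler):

// After the assignment the thread's identity reports as authenticated,
// with a federated authentication type as described below.
var identity = (System.Security.Claims.ClaimsIdentity)System.Threading.Thread.CurrentPrincipal.Identity;
bool isAuth = identity.IsAuthenticated;            // true once the JWT has validated
string authType = identity.AuthenticationType;     // the federated type set by the handler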

At this point the magic has happened; your thread is now running as a delegate of the JWT object.  There are limits (for your benefit): the user is marked as Authenticated and the authentication type is marked as Federated.  As a federated user there are limits to your power; you cannot act as Local System under this delegated identity.  OK, that was easy.  Now let’s turn to a slightly more difficult problem.

Web API and Authorization Delegating Handlers

In our simple design the user sends the JWT object as the argument of a standard HTTP Authorization header, as a Bearer token.  Other methods (cookies, query parameters, form-encoded) are possible but the header method is clean and very common.  Let’s look now at how Web API can process this header using an authorization filter.  The basic idea is:

Derive a class from the Web API class DelegatingHandler, and

Override the SendAsync method.

The model override looks like this:

public class myJWTHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var auth = request.Headers.Authorization;
        if (auth == null || auth.Scheme != "Bearer")
        {
            var response = request.CreateResponse(HttpStatusCode.Unauthorized, string.Empty);
            var tsc = new TaskCompletionSource<HttpResponseMessage>();
            tsc.SetResult(response);
            return tsc.Task;
        }
        HttpStatusCode statusCode = HttpStatusCode.OK;

        // request.Headers.Authorization.Parameter is the JWT string in this example.
        // _ValidateJWT assigns the claims principal to Thread.CurrentPrincipal and
        // System.Web.HttpContext.Current.User on SUCCESS.
        if (!_ValidateJWT(request, request.Headers.Authorization.Parameter, out statusCode))
        {
            var response = request.CreateResponse(statusCode, string.Empty);
            var tsc = new TaskCompletionSource<HttpResponseMessage>();
            tsc.SetResult(response);
            return tsc.Task;
        }
        return base.SendAsync(request, cancellationToken);
    }
}

Now we need to assign our handler to a URL route.  We do this in the standard file WebApiConfig.cs, following this simple pattern within the Register method:

System.Net.Http.DelegatingHandler[] myRouteDelegates = new System.Net.Http.DelegatingHandler[] {
    new myJWTHandler()
};
var myRouteHandlers = System.Net.Http.HttpClientFactory.CreatePipeline(
    new System.Web.Http.Dispatcher.HttpControllerDispatcher(config), myRouteDelegates
);
config.Routes.MapHttpRoute(
    name: "myRoute",
    routeTemplate: "myApplication//{accountNumber}",   // your route goes here
    defaults: new
    {
        controller = "myController",
        action = "Get"
    },
    constraints: null,
    handler: myRouteHandlers
);

OK, we are hooked!  Any call to “myRoute” will pass FIRST through the delegating handler, which in turn enforces that a valid JWT object appears in the Authorization header and will assign (delegate) the thread for the call to the federated user specified within the JWT.

The Route Controller and Actions

Now we want to add the following functionality:

  • Allow only authorized users in certain Roles to access the route controller (a Role is a claim),
  • Use additional claims information associated with the federated user on our controller’s method thread, and
  • Enable CORS support.

The first two of these are pretty straightforward, but CORS support with authorized users requires some additional design and programming.  CORS support is, however, required by the design we are playing out here.  More on this below.

Our basic controller looks something like this:

public class myBookGroup1Controller : ApiController
{
    [HttpGet]
    [Authorize(Roles = "Insured")]
    [WebAPI.Handler.Enable_AIC_CORS()]  // CORS support derived from the Web API standard CORS attribute
    public HttpResponseMessage Get(string parameter) { }
}

The [HttpGet] attribute filters requests and allows only HTTP GET requests to call this action.  The Authorize attribute leverages the claims of the federated user.  In this case, [Authorize(Roles = "Insured")], we are asking for authenticated users who have a role claim whose value is "Insured".  In order for this to work correctly we use the registered Microsoft namespace “http://schemas.microsoft.com/ws/2008/06/identity/claims/role” as the key for role claims when we create the JWT initially (way back in Part I).
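For this to line up, the role claim built back in Part I must use that same URI as its key; ClaimTypes.Role is simply a constant for that string.  A one-line reminder (the role value here is illustrative):

var roleClaim = new System.Security.Claims.Claim(
    "http://schemas.microsoft.com/ws/2008/06/identity/claims/role",  // == ClaimTypes.Role
    "Insured");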

Once we have entered an action method we can access the federated user’s claims as:

public static IEnumerable<string> GetClaims(System.Security.Principal.IPrincipal iPrince, string uni)
{
    try
    {
        return from c in ((ClaimsPrincipal)iPrince).Claims where c.Type == uni select c.Value;
    }
    catch (Exception x)
    {
        throw new ApplicationException("LINQ access Errors Key " + uni, x);
    }
}

The body of the Get method here could look like:

public HttpResponseMessage Get(string parameter)
{
    var accountNumber = GetClaims(System.Web.HttpContext.Current.User, Constants.JWT);
    // do something with "accountNumber"

    List<string> data;
    bool results = DoWork(parameter, out data);
    if (results)
    {
        // good
        return Request.CreateResponse(System.Net.HttpStatusCode.OK, data);
    }
    else
    {
        // bad news
        return Request.CreateResponse(HttpStatusCode.BadRequest, "badness message");
    }
}

CORS Made Simple From Preflight Checklist to Take Off

The general idea:  Cross Origin Resource Sharing (CORS) is defined by the following scenario.  A device connects to a web site; subsequent AJAX calls from the device, running “as” that web application, to a different web application must perform an additional handshake prior to authorizing the request.  The CORS specification requires additional handlers to be written on the server, and the server must specify the policy requirements imposed on the AJAX call.  The user agent (the “browser”) handles the device side of the handshake (this phase is called, for obscure reasons, “preflight”).  Brock Allen has done the heavy lifting for us and Web API 2 contains attribute-based CORS support based on his design and code.  The policy requirements consist of:

  • Allowed Origins:  What domains may make the call.  Can be "*" for any if SupportsCredentials is false.
  • Allowed Methods:  "*" for all
  • AllowAnyHeader:  true/false.  Incoming headers are not limited to the "standard" headers
  • SupportsCredentials:  true/false.  Supports equals Requires Authorization
  • Exposed Headers:  approved, non-standard headers which may be sent from the server to the caller.

Wow.  There it is, the good and the bad.  An anonymous-access action method which can be accessed from ANY server can be enabled for CORS within Web API 2 with the attribute:

[EnableCors("*", "*", "*")]

public HttpResponseMessage Get(string parameter) { ... }

Things get more complex for authorization-required sites.  In this scenario:

Allowed Origins requires a list of allowed URLs in proper form: http://{mydomain.com}.  This can be a list of URLs.

If we want to expose a custom (non-standard) HTTP header from the server to the client we must list each header name here.

While this could be written out with constants, having this type of system parameter embedded in an attribute just seems wrong!  There is a way.

Web API allows us to extend the EnableCors attribute so that we can read the list of actual values from … somewhere (a config file or database or whatever).

The basic idea is:

[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, AllowMultiple = false)]
public class Enable_AIC_CORSAttribute : Attribute, ICorsPolicyProvider
{
    private CorsPolicy _Policy;

    private List<string> _AllowedOrigins()
    {
        // get the proper list from somewhere (config file, database, whatever)
        return new List<string>();
    }

    private List<string> _ExposedHeaders()
    {
        // get the proper list from somewhere and format as strings
        return new List<string>();
    }

    public Enable_AIC_CORSAttribute()
    {
        _Policy = new CorsPolicy
        {
            AllowAnyHeader = true,
            SupportsCredentials = true,
        };
        _Policy.Methods.Add("GET");
        foreach (var origin in _AllowedOrigins())
        {
            _Policy.Origins.Add(origin);
        }
        foreach (var header in _ExposedHeaders())
        {
            _Policy.ExposedHeaders.Add(header);
        }
    }

    public Task<CorsPolicy> GetCorsPolicyAsync(HttpRequestMessage request, System.Threading.CancellationToken cancel)
    {
        return Task.FromResult(_Policy);
    }
}

OK.  I think that’s it for now.  Lift Off.  Good Hunting.

Poor Man’s Delegation: Web API Version 2, CORS and System.IdentityModel.Tokens.Jwt Part 1

Deeply Disturbing Technical Background

Microsoft calls the assembly System.IdentityModel.Tokens.Jwt:  .Net 4.5 support for JSON Web Security Tokens.  The OAuth Working Group draft can be found here.  The Working Group helpfully suggests that JWT be pronounced as the English word “jot” but we just say “J W T” around our shop.  So what is it good for and why would I want to create and consume one?  Often called Poor Man’s Delegation, the JWT is a convenient way for heterogeneous services to communicate claim validity to each other, in order for services to be consumed across domain boundaries.  We first heard the term Poor Man’s Delegation in discussion with Brock Allen, whose blog we strongly recommend for anyone interested in modern internet security from a .Net perspective.  While we are plugging, you could do worse than to check out the man who knows more about .Net 4.5 security than anyone not under NDA:  Dominick Baier.  Vittorio Bertocci gives an overview here, with the mandatory confusing and scary diagrams.  His introduction to the preview of System.IdentityModel.Tokens.Jwt is given here (but note some of the names have changed since this 11/2012 blog was posted).  Please check out Vittorio’s blog and links to get a feel for the topic.  I will not be writing a tutorial here but will be looking at some cook book approaches (not based on Azure and not using an external STS) in this post.  You must obtain the System.IdentityModel.Tokens.Jwt assembly as a NuGet package here.  Some additional reference links can be found here and here.

Vittorio Bertocci’s view!

What We Would Like To Do

Here at Dog Patch Computing we have a very big commercial software system which we call The Monster (it rhymes with “spare part”) which controls our lives.  The security internals of The Monster are obscure and controlled by our corporate masters far far away in another part of the galaxy.  We develop primarily SPAs (single page applications) intended to be hosted on phones and other devices.  Most of our data resides on servers which we control and which are not part of The Monster.  We must, must, must authenticate our users in The Monster, but we need users to access their data via AJAX services running on servers which we control but which are not part of The Monster.  So our situation looks like this:

[diagram]

We could develop a “trusted relation” between these two systems and “flow” The Monster’s credentials to our local machines.  While technically feasible, the details of implementing this are quite complex and frankly we like lightweight solutions for simple problems.  The Monster handles authentication and holds critical information about each user, including the user’s roles and the identifiers used to associate the user with data to which she should have access.  Our datamart holds the data the user wants access to.  What we want is a simple, lightweight way for the client device to access the datamart using AJAX calls and have access only to the data they are authorized to see.  We don’t want the datamart to be an authentication server or to maintain a user database replicating information held by The Monster.  We want the AJAX calls to be secure.

When a user authenticates to The Monster we exploit a hook which allows us to generate a JWT object unique to that user, carrying the claims associated with that user which are relevant to their data on our servers.  We use System.IdentityModel.Tokens.Jwt to do this.  This object is encrypted.  The JWT object is passed to the device browser.  When the device needs data from our datamart servers, the JWT object is passed in an Authorization header attached to the AJAX call (a sketch of such a call is below).  Note that this is a cross-server (CORS) call.  I will cover CORS processing in Web API in part 2 of this post.
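On the wire the call is just a normal HTTP request carrying a Bearer token.  A sketch of what the device-side call amounts to (our real callers are browser SPAs making AJAX calls; HttpClient is used here only to show where the JWT rides, and the URL is a placeholder):

using System.Net.Http;
using System.Net.Http.Headers;

var http = new HttpClient();
// jwt is the signed token string handed back by The Monster at authentication time.
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", jwt);
var response = http.GetAsync("http://mydataserver.example/myApplication/12345678").Result;   // blocking for brevity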

The Sender must perform the following tasks:

1.  Authenticate the user

2. Associate the user with Roles and Claims

3. Create a signed (encrypted), properly formatted JWT object

4. Return the JWT to the calling device.

The receiver of the AJAX call must do the following tasks:

  1. Decrypt the JWT and authenticate the AJAX caller,
  2. Process the CORS request correctly,
  3. Create a federated principal,
  4. Assign this principal to the current thread, and
  5. Process the data request based on the Claims associated with the caller.

Most of these details are handled easily with Web API version 2.  Specifically,

  1. Authorize the caller (based on the JWT):  System.IdentityModel.Tokens.Jwt, Web API 2 route authorization handler
  2. Process the CORS request correctly:  customization of the CORS attribute (the CORS attribute was contributed by Brock Allen)
  3. Create a federated principal (Framework 4.5 BCL)
  4. Assign this principal to the current thread (Framework 4.5 BCL)
  5. Process the data request based on the Role claim and other user specific claims associated with the caller (Framework 4.5 BCL)

Ok, let’s get out the cook book and do some cookin’.

Recipe for Creating A Signed JWT

Ingredients:

  • A List of claims.  In our kitchen this includes
    • Roles
    • User Name
    • Other Claims like data access keys
      • For example, a claim might be Bank Account and the value of the claim is the Bank Account Number.
  • Issuer URI (you can make this up)
  • Allowed Audience URI (you can make this up)
  • Lifetime (this determines how long this JWT is valid)
  • Signing Credentials (more on this one later)

Issuer URI: this is the FROM URI from which you agree to accept the JWT.  This should take the form of (but could be any text string):

http://{THEMONSTERDOMAIN}

Allowed Audience URI: this is the TO URI by which you identify yourself as the correct recipient.  This should take the form of (but could be any text string):

http://{MYDATASERVERDOMAIN}/

Lifetime: this is the start and stop valid date time of the JWT you are issuing. This takes the form of:

new System.IdentityModel.Protocols.WSTrust.Lifetime(

     now.ToUniversalTime(),

      now.AddMinutes({local parameter length of the lifetime})

);

Working With Claims

Claims (not clams): these are specified in key/value pairs, where the key is a text string URI and the value is any string you want.  Some URIs are already in general use; see System.Security.Claims.ClaimTypes for the complete list used by Microsoft.  Since we are interacting with a Microsoft Windows system we will use “http://schemas.microsoft.com/ws/2008/06/identity/claims/role” as the key for all of our defined Roles.  For the user identifier we will use “http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier” (this seems to be what ADFS is using).  For arbitrary claims we are creating our own URI keys.  In our case our application specific claim is called AccountNumber and we created a URI key of:

http://{MYDATASERVERDOMAIN}/Account

We can define more than one claim per key.  That is, for example, we can create multiple Role claims for a given JWT.  More formally, claims form a list of key/value pairs in which keys may repeat; they are a

List<Claim> and NOT a Dictionary<string,string>

In C# we create a single claim as:

var myClaim1 = new Claim(ClaimTypes.Role, "Customer");

var myClaim2 = new Claim("http://{MYDATASERVERDOMAIN}/Account", "12345678");

and our list of claims as:

List<Claim> claimLst = new List<Claim>();

and

claimLst.Add(myClaim1);
claimLst.Add(myClaim2);

add claims to our list and then make a claims array as:

System.Security.Claims.Claim[] claims = claimLst.ToArray();

Ok so far?  Hold on to this idea and turn to the scary topic of:

Encryption and System.IdentityModel.Tokens.SigningCredentials

How paranoid are you?  How paranoid do you need to be?  The “SigningCredentials” for a JWT are the basis for encrypting the JWT.  The ability to decrypt the JWT requires knowledge of the “SigningCredentials” used by the caller.  The sender and receiver must share a cryptographic key (and other data) in order to exchange JWT objects securely.  In our case our JWTs are time limited and contain private (but not secret) information.  So our paranoia is limited to: the JWT must be difficult to crack during the existing Lifetime of the JWT and difficult to counterfeit.  No encryption method is perfect; the Chinese (not to mention NSA) can, given enough interest and time, crack and counterfeit any object.  Having said that, we adopted a safe’ish SHA-256 based algorithm.  We generated our shared key using a Framework Cryptography Class; a sketch follows.
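For what it is worth, generating such a shared key once is nearly a one-liner with the Framework cryptography classes.  A sketch (the key would then be distributed out of band, for example via protected configuration, to both The Monster and the datamart servers):

using System;
using System.Security.Cryptography;

byte[] Key = new byte[32];                        // 256 bits, suitable for HMAC-SHA256
using (var rng = new RNGCryptoServiceProvider())
{
    rng.GetBytes(Key);
}
string keyForConfig = Convert.ToBase64String(Key);   // persist / transport as Base64 text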

Given a key called Key we can create a "SigningCredentials" as:

new System.IdentityModel.Tokens.SigningCredentials(
    new System.IdentityModel.Tokens.InMemorySymmetricSecurityKey(Key),
    "http://www.w3.org/2001/04/xmldsig-more#hmac-sha256",
    "http://www.w3.org/2001/04/xmlenc#sha256"
)

Combine Ingredients and Cook up a JWT

Ok, now that we have gotten our ingredients together let’s finally create a JWT object:

Create a Security Token Descriptor:

static System.IdentityModel.Tokens.SecurityTokenDescriptor _MakeSecurityTokenDescriptor(
    System.IdentityModel.Tokens.InMemorySymmetricSecurityKey sSKey, List<Claim> claimLst)
{
    var now = DateTime.UtcNow;
    System.Security.Claims.Claim[] claims = claimLst.ToArray();
    return new System.IdentityModel.Tokens.SecurityTokenDescriptor
    {
        Subject = new System.Security.Claims.ClaimsIdentity(claims),
        TokenIssuerName = Constants.ValidIssuer,
        AppliesToAddress = Constants.AllowedAudience,
        Lifetime = new System.IdentityModel.Protocols.WSTrust.Lifetime(
            now.ToUniversalTime(),
            now.AddMinutes(AIC.MyBook2.Constants.JWT.LifeSpan)),
        SigningCredentials = new System.IdentityModel.Tokens.SigningCredentials(
            sSKey,
            "http://www.w3.org/2001/04/xmldsig-more#hmac-sha256",
            "http://www.w3.org/2001/04/xmlenc#sha256"),
    };
}

SigningCredentials, AppliesToAddress and TokenIssuerName MUST be shared between the sender and the receiver.  Lifetime determines how long the JWT object is valid for use.

Create the JWT Object (finally):

var tokenHandler = new System.IdentityModel.Tokens.JwtSecurityTokenHandler();
tokenHandler.RequireExpirationTime = true; //make that Lifetime mandatory
var myJWT=tokenHandler.WriteToken(tokenHandler.CreateToken(_MakeSecurityTokenDescriptor(sSKey, claimLst)));

Easy and fun (and Base64 encoded for safe internet transfer).

Part II will cover validating and using the JWT on the receiver.

 
