Archive for the ‘Microsoft’ Category

Hey Flickr, Where Did My Statistics Go? The CouchBase Connection. Part IV

We interrupt this series to take a side trip concerning application logging.  The series begins here. NLog is an excellent open source logging project available from NuGet and other sources.  The sample code for this blog post can be found HERE. Although this is a kitchen sink implementation (log to files, event logs, database, SMTP, whatever) I will be using it as a simple way to log text information to files.  Once you have created a Visual Studio project, open Tools / NuGet Package Manager / Package Manager Console.  From here you can add NLog to your project with the command:

PM> Install-Package NLog

This will install NLog, modify your project and add a project reference for NLog.  Although NLog targets and rules can be managed programmatically, I normally use the configuration file:

NLog.Config

You can set this up using the Package Manager Console with the command:

PM> Install-Package NLog.Config

Configuration File Setup

The NLog config file is then modified to define “targets” and “rules”.  The former defines where log entries are written and the latter defines which log levels are written to which targets.  A file-based target section might look like:

<targets>

<target name="debugfile" xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Debug.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true"/>

<target name="logfile" xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Info.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true"/>

<target name="Warnfile" xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Warn.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true"/>

<target name="Errorfile" xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Error.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true"/>

<target name="Fatalfile" xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Fatal.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true"/>

</targets>

where name is the symbolic name of the target and xsi:type defines this as a file target.  If you are controlling the layout of the log entry yourself, set layout to "${message}".  Given that xsi:type is File we can use fileName to set the physical location of the log file.  The value of fileName can be changed programmatically at runtime but I will not give examples here.

NLog defines five log levels:  Debug, Info, Warn, Error and Fatal.  These levels are defined in an enum and the names have no special significance except as you define them.  The Rules section of the config file defines which log levels are written to which targets. A given level can be written to zero to many targets.  My Rules section typically looks like:

<rules>

<logger name="*" minlevel="Debug" maxlevel="Debug" writeTo="debugfile" />

<logger name="*" minlevel="Info" maxlevel="Info" writeTo="logfile" />

<logger name="*" minlevel="Warn" maxlevel="Warn" writeTo="Warnfile" />

<logger name="*" minlevel="Error" maxlevel="Error" writeTo="Errorfile" />

<logger name="*" minlevel="Fatal" maxlevel="Fatal" writeTo="Fatalfile" />

</rules>

More complex rules like the following are possible:

<logger name="*" minlevel="Error" maxlevel="Error" writeTo="Errorfile" />

<logger name="*" minlevel="Error" maxlevel="Fatal" writeTo="Fatalfile" />

NLog initialization at runtime is very simple.  Typically you can use a single line like:

using NLog;

static Logger _LogEngine = LogManager.GetLogger("Log Name");

This need only be called once.

The simplest NLog log call (given the definition layout="${message}") would look like:

_LogEngine.Log(NLog.LogLevel.Info, "Info Message");

We can extend this quite simply.  I have a single class providing a simple extension of NLog on GitHub.  You can find it here.  Specifically I have provided wrapper methods for each NLog.LogLevel and support for exception stack dumps.  Include this file in your project (after installing NLog and NLog.Config) and then you can write:

using DLR.Util;

namespace DLR.CCDB.ConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            string _CorrelationID = System.Guid.NewGuid().ToString();
            CCDB cbase = new CCDB { CorrelationID = _CorrelationID };
            cbase.Client = CouchbaseManager.Instance;
            NLS.Info(_CorrelationID, "Hello, CouchBase");
            try
            {
                throw new ApplicationException("My Exception");
            }
            catch (Exception x)
            {
                NLS.Error(_CorrelationID, "Error", x.Message);
                //OR
                NLS.Error(_CorrelationID, "Error", x);
            }
        }
    }
}

_CorrelationID is supported here so that in multiuser situations (like Web API) we can identify which messages were written by which task.  In a console app this is not strictly necessary.  The call to NLS.Info results in an output log line like:

DLR|20140909-152031037|2f8f89ce-51de-4269-9ae0-9313ad2a0243|Hello, CouchBase|

where:

  • DLR is the Log Engine name (more than one engine can write to a given log file);
  • 20140909-152031037 is the terse timestamp of the form: YYYYMMDD-HHMMSSmmm;
  • 2f8f89ce-51de-4269-9ae0-9313ad2a0243 is the correlation ID; and
  • Hello, CouchBase is our text message.
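
For reference, a wrapper like NLS.Info can be implemented along these lines (a minimal sketch with assumed names; the real class, including the exception stack dump support, lives in the GitHub repository linked above):

using System;
using NLog;

namespace DLR.Util
{
    public static class NLS
    {
        //The logger name becomes the first pipe-delimited field of each line
        static readonly Logger _Engine = LogManager.GetLogger("DLR");

        public static void Info(string correlationID, string message)
        {
            //Terse timestamp of the form YYYYMMDD-HHMMSSmmm
            string stamp = DateTime.Now.ToString("yyyyMMdd-HHmmssfff");
            _Engine.Log(LogLevel.Info, string.Format("{0}|{1}|{2}|{3}|", _Engine.Name, stamp, correlationID, message));
        }
    }
}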

My call:

NLS.Error(_CorrelationID, "Error", x);

would result in a log line like:

DLR|20140909-152544801|46e656cd-4e17-4285-a5f3-e1484dad2995|Error|Error Data. Message: [My Exception]Stack Trace: DLR.CCDB.ConsoleApp.Program.Main(String[] args)|

where Error is my message;

Error Data. Message: [My Exception] is the Message in the ApplicationException; and

Stack Trace: DLR.CCDB.ConsoleApp.Program.Main(String[] args)| is the stack dump.

NLS will handle nested exceptions and stack dumps but we are only showing a single un-nested exception in this example.
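
The formatting of a nested exception can be as simple as walking the InnerException chain. A sketch of that loop (my assumption about the approach; consult the GitHub code for the real details):

//Append each exception message and stack trace, following InnerException
static string _DumpException(Exception x)
{
    var sb = new System.Text.StringBuilder();
    while (x != null)
    {
        sb.AppendFormat("Error Data. Message: [{0}]Stack Trace: {1}", x.Message, x.StackTrace);
        x = x.InnerException;
    }
    return sb.ToString();
}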

OK! That’s it for this post.  We will, hopefully, return to CouchBase and the Flickr API in the next post.


Poor Man’s Delegation: Web API Version 2, CORS and System.IdentityModel.Tokens.Jwt Part 1

Deeply Disturbing Technical Background

Microsoft calls the assembly System.IdentityModel.Tokens.Jwt:  .Net 4.5 support for JSON Web Security Tokens.  The OAuth Working Group draft can be found here.  The working group helpfully suggests that JWT be pronounced as the English word “jot” but we just say “J W T” around our shop. So what is it good for and why would I want to create and consume one?  Often called Poor Man’s Delegation, the JWT is a convenient way for heterogeneous services to communicate claim validity to each other in order for services to be consumed across domain boundaries.  We first heard the term Poor Man’s Delegation in discussion with Brock Allen, whose blog we strongly recommend for anyone interested in modern internet security from a .Net perspective. While we are plugging, you could do worse than to check out the man who knows more about .Net 4.5 security than anyone not under NDA:  Dominick Baier.  Vittorio Bertocci gives an overview here, with the mandatory confusing and scary diagrams. His introduction to the preview of System.IdentityModel.Tokens.Jwt is given here (but note some of the names have changed since this 11/2012 blog was posted).  Please check out Vittorio’s blog and links to get a feel for the topic.  I will not be writing a tutorial here but will be looking at some cook book approaches (not based on Azure and not using an external STS) in this post.  You must obtain the System.IdentityModel.Tokens.Jwt assembly as a NuGet package here.  Some additional reference links can be found here and here.

Vittorio Bertocci’s view!

What We Would Like To Do

Here at Dog Patch Computing we have a very big commercial software system which we call The Monster (it rhymes with “spare part”) which controls our lives.  The security internals of The Monster are obscure and controlled by our corporate masters far, far away in another part of the galaxy.  We develop primarily SPA (single page applications) intended to be hosted on phones and other devices.  Most of our data resides on servers which we control and which are not part of The Monster.  We must, must, must authenticate our users in The Monster but we need users to access their data via AJAX services running on servers which we control but which are not part of The Monster. So our situation looks like this:

[Diagram: client devices authenticate against The Monster and call our datamart servers via AJAX]

We could develop a “trusted relation” between these two systems and “flow” The Monster’s credentials to our local machines.  While technically feasible, the details of implementing this are quite complex and frankly we like lightweight solutions for simple problems. The Monster handles authentication and holds critical information about each user, including the user’s roles and the identifiers used to associate the user with the data to which she should have access.  Our datamart holds the data the user wants access to.  What we want is a simple, lightweight way for the client device to access the datamart using AJAX calls and have access only to the data they are authorized to see.  We don’t want the datamart to be an authentication server or to maintain a user database replicating information held by The Monster.  We want the AJAX calls to be secure.

When a user authenticates to The Monster we exploit a hook which allows us to generate a JWT object unique to that user, with the claims associated with that user which are relevant to their data on our servers.  We use System.IdentityModel.Tokens.Jwt to do this.  This object is signed (encrypted). The JWT object is passed to the device browser.  When the device needs data from our datamart servers the JWT object is passed in an authorization header attached to the AJAX call. Note that this is a cross server (CORS) call. I will cover CORS processing in Web API in part 2 of this post.

The sender must perform the following tasks:

1.  Authenticate the user

2. Associate the user with Roles and Claims

3. Create a signed (encrypted), properly formatted JWT object

4. Return the JWT to the calling device.

The receiver of the AJAX call must do the following tasks:

  1. Decrypt the JWT and authenticate the AJAX caller;
  2. Process the CORS request correctly;
  3. Create a federated principal;
  4. Assign this principal to the current thread; and
  5. Process the data request based on the Claims associated with the caller.

Most of these details are handled easily with Web API version 2.  Specifically,

  1. Authorize the caller (based on the JWT): System.IdentityModel.Tokens.Jwt, Web API 2 route authorization handler
  2. Process the CORS request correctly: customization of the CORS attribute (the CORS attribute was contributed by Brock Allen)
  3. Create a federated principal (Framework 4.5 BCL)
  4. Assign this principal to the current thread (Framework 4.5 BCL)
  5. Process the data request based on the Role Claim and other user specific Claims associated with the caller (Framework 4.5 BCL)

Ok, let’s get out the cook book and do some cookin’.

Recipe for Creating A Signed JWT

Ingredients:

  • A list of claims.  In our kitchen this includes:
    • Roles
    • User Name
    • Other claims like data access keys
      • For example, a claim might be Bank Account and the value of the claim is the bank account number.
  • Issuer URI (you can make this up)
  • Allowed Audience URI (you can make this up)
  • Lifetime (this determines how long this JWT is valid)
  • Signing Credentials (more on this one later)

Issuer URI: this is the FROM URI from which you agree to accept the JWT.  This should take the form of (but could be any text string):

http://{THEMONSTERDOMAIN}

Allowed Audience URI: this is the TO URI with which you identify yourself as the correct recipient.  This should take the form of (but could be any text string):

http://{MYDATASERVERDOMAIN}/

Lifetime: this is the start and stop valid date time of the JWT you are issuing. This takes the form of:

new System.IdentityModel.Protocols.WSTrust.Lifetime(
    now.ToUniversalTime(),
    now.AddMinutes({local parameter length of the lifetime})
);

Working With Claims

Claims (not clams): these are specified in key/value pairs, where the key is a text string URI and the value is any string you want.  Some URIs are already in general use. See System.Security.Claims.ClaimTypes for the complete list used by Microsoft.  Since we are interacting with a Microsoft Windows system we will use “http://schemas.microsoft.com/ws/2008/06/identity/claims/role” as the key for all of our defined Roles. For the user identifier we will use “http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier” (this seems to be what ADFS is using).  For arbitrary claims we are creating our own URI keys.  In our case our application specific Claim is called AccountNumber and we created a URI key of:

http://{MYDATASERVERDOMAIN}/Account

We can define more than one claim per key.  That is, for example, we can create multiple Role claims for a given JWT.  More formally, claims are held in a

List<Claim> and NOT a Dictionary<string,string>

so duplicate claim types are allowed.

In C# we create a single claim as:

var myClaim1 = new Claim(ClaimTypes.Role, "Customer");

var myClaim2 = new Claim("http://{MYDATASERVERDOMAIN}/Account", "12345678");

and our list of claims as:

List<Claim> claimList = new List<Claim>();

Calls like

claimList.Add(myClaim1);
claimList.Add(myClaim2);

add claims to our list, and then we make a claims array as:

System.Security.Claims.Claim[] claims = claimList.ToArray();

Ok so far?  Hold on to this idea and turn to the scary topic of:

Encryption and System.IdentityModel.Tokens.SigningCredentials

How paranoid are you? How paranoid do you need to be? The “SigningCredentials” for a JWT are the basis for signing (encrypting) the JWT.  The ability to validate (decrypt) the JWT requires knowledge of the “SigningCredentials” used by the caller.  The sender and receiver must share a cryptographic key (and other data) in order to exchange JWT objects securely.  In our case our JWTs are time limited and contain private (but not secret) information.  So our paranoia is limited to: the JWT must be difficult to crack during the existing Lifetime of the JWT and difficult to counterfeit.  No cryptographic method is perfect; the Chinese (not to mention the NSA) can, given enough interest and time, crack and counterfeit any object. Having said that, we adopted a safe’ish HMAC SHA-256 signing algorithm.  We generated our shared key using a Framework cryptography class.
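
For example, one way to generate a 256-bit shared key with the Framework (a sketch; any cryptographically random 32 bytes will do, and the resulting Key must be shared out-of-band between sender and receiver):

using System.Security.Cryptography;

byte[] Key = new byte[32];  //256 bits for HMAC SHA-256
using (var rng = new RNGCryptoServiceProvider())
{
    rng.GetBytes(Key);
}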

Given a key called Key we can create a “SigningCredential” as:

new System.IdentityModel.Tokens.SigningCredentials(
    new System.IdentityModel.Tokens.InMemorySymmetricSecurityKey(Key),
    "http://www.w3.org/2001/04/xmldsig-more#hmac-sha256",
    "http://www.w3.org/2001/04/xmlenc#sha256"
)

Combine Ingredients and Cook up a JWT

Ok, now that we have gotten our ingredients together let’s finally create a JWT object:

Create a Security Token Descriptor:

static System.IdentityModel.Tokens.SecurityTokenDescriptor _MakeSecurityTokenDescriptor(
    System.IdentityModel.Tokens.InMemorySymmetricSecurityKey sSKey, List<Claim> claimList)
{
    var now = DateTime.UtcNow;
    System.Security.Claims.Claim[] claims = claimList.ToArray();
    return new System.IdentityModel.Tokens.SecurityTokenDescriptor
    {
        Subject = new System.Security.Claims.ClaimsIdentity(claims),
        TokenIssuerName = Constants.ValidIssuer,
        AppliesToAddress = Constants.AllowedAudience,
        Lifetime = new System.IdentityModel.Protocols.WSTrust.Lifetime(
            now,
            now.AddMinutes(AIC.MyBook2.Constants.JWT.LifeSpan)),
        SigningCredentials = new System.IdentityModel.Tokens.SigningCredentials(
            sSKey,
            "http://www.w3.org/2001/04/xmldsig-more#hmac-sha256",
            "http://www.w3.org/2001/04/xmlenc#sha256"),
    };
}

SigningCredentials, AppliesToAddress and TokenIssuerName MUST be shared between the sender and the receiver.  Lifetime determines how long the JWT object is valid for use.

Create the JWT Object (finally):

var tokenHandler = new System.IdentityModel.Tokens.JwtSecurityTokenHandler();
tokenHandler.RequireExpirationTime = true; //make that Lifetime mandatory
var myJWT=tokenHandler.WriteToken(tokenHandler.CreateToken(_MakeSecurityTokenDescriptor(sSKey, claimLst)));

Easy and fun (and Base64 encoded for safe internet transfer).

Part II will cover validating and using the JWT on the receiver.

 

Visual Studio 2013: Your License will expire in 2147483647 days.

[Screenshot: Visual Studio 2013 dialog reporting “Your license will expire in 2147483647 days.”]

Wikipedia helpfully explains:

The number 2,147,483,647 (two billion one hundred forty-seven million four hundred eighty-three thousand six hundred forty-seven) is the eighth Mersenne prime, equal to 2³¹ − 1. It is one of only four known double Mersenne primes…The number 2,147,483,647 may have remained the largest known prime until 1867.

The number 2,147,483,647 is also the maximum value for a 32-bit signed integer in computing. It is therefore the maximum value for variables declared as int in many programming languages running on popular computers, and the maximum possible score, money etc. for many video games. The appearance of the number often reflects an error, overflow condition, or missing value.

The data type time_t, used on operating systems such as Unix, is a 32-bit signed integer counting the number of seconds since the start of the Unix epoch (midnight UTC of 1 January 1970).[9] The latest time that can be represented this way is 03:14:07 UTC on Tuesday, 19 January 2038 (corresponding to 2,147,483,647 seconds since the start of the epoch), so that systems using a 32-bit time_t type are susceptible to the Year 2038 problem.
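
In C# terms the dialog’s day count is easy to reproduce:

Console.WriteLine(int.MaxValue);        //2147483647
Console.WriteLine(Math.Pow(2, 31) - 1); //2147483647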


Sharepoint 2013 REST API: The C# Connection: Part 5 REST API More on Folders and Other Odds & Ends

Since our last post was so long I left a few odds and ends for this post.  Specifically I will touch on the following in this posting:

  • Testing for the presence of a Document within a Document Library;
  • Testing for the presence of a Folder within a Document Library;
  • How to create a Folder within a Document Library; and
  • How to create a custom HTTP Exception class derived from the base Exception class

Testing for the Presence of a Document within a Document Library

As with all things Sharepoint, the most important part of this task is composing the correct uri fragment.  We need three pieces of information to perform the test:

  • The Document Library Name
  • The Folder Path (if any) within the Library
  • The Document Name (root plus extension)

We then compose a uri fragment as:

web/GetFolderByServerRelativeUrl('/{Document Library}/{Folder Path}')/Files('{Document Name}')

So if we are testing for a document myDocument.PDF in the folder path AdminFolder/ClaimsFolder in a Document Library called Accounting Documents our uri fragment becomes:

web/GetFolderByServerRelativeUrl('/Accounting Documents/AdminFolder/ClaimsFolder')/Files('myDocument.PDF')

One then makes an HTTP Get call against the Sharepoint REST API.  An Http Status Code of OK (numeric value: 200) indicates that the file exists.

An Http Status Code of NotFound (numeric value: 404) indicates that the file is not found at that location.

This get call does NOT return the document itself to the caller.
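
Putting that together, an existence test might look like this (a sketch; client is an HttpClient prepared as discussed in Part 1 of this series):

string uri = "web/GetFolderByServerRelativeUrl('/Accounting Documents/AdminFolder/ClaimsFolder')/Files('myDocument.PDF')";
HttpResponseMessage resp = client.GetAsync(uri).Result;
bool documentExists = (resp.StatusCode == HttpStatusCode.OK);  //NotFound (404) means no such document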

Testing for the presence of a Folder within a Document Library

This test is even simpler.  We need two pieces of information:

  • The Document Library Name
  • The Folder Path (if any) within the Library

We compose the uri fragment as:

web/GetFolderByServerRelativeUrl('/{Document Library Name}/{Folder Path}')

One then makes an HTTP Get call against the Sharepoint REST API.  An Http Status Code of OK (numeric value: 200) indicates that the folder path exists.  An Http Status Code of NotFound (numeric value: 404) indicates that the folder path is not found within the library.

How to Create a Folder within a Document Library

In order to create a folder we need to:

  • Compose a proper uri fragment;
  • Compose a JSON formatted content body (System.Net.Http.HttpContent);
  • Get a REST API Digest Value (see Part 3 of this series on this) and include it in the header; and
  • Make an HTTP POST call to the Sharepoint REST API.

So here we go.  The uri fragment takes the simple fixed form of:

web/folders

The JSON HTTPContent format does the real work and takes the form of:

{ '__metadata': { 'type': 'SP.Folder' }, 'ServerRelativeUrl': '/{Document Library Name}/{Folder Path}' }

So if our Document Library is “Accounting Documents” and our folder name is “ClaimsFolder” our JSON looks like:

{ '__metadata': { 'type': 'SP.Folder' }, 'ServerRelativeUrl': '/Accounting Documents/ClaimsFolder' }

Having placed this value into a string object as:

string data = "{ '__metadata': { 'type': 'SP.Folder' }, 'ServerRelativeUrl': '/Accounting Documents/ClaimsFolder' }";

we create an HttpContent object as

System.Net.Http.HttpContent reqContent = new StringContent(data);

After adding the correct Digest Header and ContentType Header our post looks like:

var resp = client.PostAsync(uri, reqContent).Result;

string respString = resp.Content.ReadAsStringAsync().Result;

If the folder is created successfully we will get back an HTTP Status of Created (numeric value: 201).

To create a nested folder just expand the path within ServerRelativeUrl.
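
Putting the pieces together, the whole folder creation call might look like this (a sketch; client is an HttpClient prepared as in Part 1 and RetrieveDigest is the digest helper shown elsewhere in this series):

string data = "{ '__metadata': { 'type': 'SP.Folder' }, 'ServerRelativeUrl': '/Accounting Documents/ClaimsFolder' }";
System.Net.Http.HttpContent reqContent = new StringContent(data);
reqContent.Headers.ContentType = System.Net.Http.Headers.MediaTypeHeaderValue.Parse("application/json;odata=verbose");
SPObject.CDigest digest = RetrieveDigest(client);                    //see the Digest discussion in this series
client.DefaultRequestHeaders.Add("X-RequestDigest", digest.Value);
var resp = client.PostAsync("web/folders", reqContent).Result;
bool created = (resp.StatusCode == HttpStatusCode.Created);          //201 on success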

Now let’s turn to a topic that doesn’t really fit anywhere else, but I will just stuff it in here.

How to Create a custom HTTP Exception class derived from the base Exception Class

HTTP calls are parameterized calls, and when they fail there is some information we may wish to attach to the Exception object which we create on errors.  Specifically: the URL which was called, the returned HTTP Status Code and the response body. Here is a simple derived HTTP exception class which I use:

public class HTTPException : ApplicationException
{
    public string URL { get; set; }
    public string Response { get; set; }
    public HttpStatusCode Status { get; set; }

    public HTTPException(string message, HttpStatusCode status, string uRLString, string respString)
        : base(message)
    {
        URL = uRLString;
        Response = respString;
        Status = status;
    }

    public HTTPException(string message, HttpStatusCode status, string uRLString, Exception innerException)
        : base(message, innerException)
    {
        URL = uRLString;
        Response = string.Empty;
        Status = status;
    }
}

Assume a typical HTTP method call like:

public static HttpStatusCode RestGet(System.Net.Http.HttpClient client, string uri, List<HttpStatusCode> allowed, out string respString)
{
    respString = string.Empty;
    HttpResponseMessage resp = null;
    HttpStatusCode statusCode = HttpStatusCode.ServiceUnavailable;
    try
    {
        resp = client.GetAsync(uri).Result;
        statusCode = resp.StatusCode;
        respString = resp.Content.ReadAsStringAsync().Result;
        _DisplayDebugInfo(client, resp, uri, null, null, respString);
    }
    catch (Exception x)
    {
        throw new HTTP.Exceptions.HTTPException("RestGet", HttpStatusCode.ServiceUnavailable, client.BaseAddress.ToString() + "/" + uri, x);
    }

    if (statusCode != HttpStatusCode.OK)
    {
        throw new HTTP.Exceptions.HTTPException("RestGet", statusCode, client.BaseAddress.ToString() + "/" + uri, respString);
    }
    return statusCode;
}

We can pick this up in an outer try/catch block like:

try
{
    //make your HTTP call here
}
catch (HTTPException x0)
{
    Console.WriteLine(x0.URL);
    Console.WriteLine(x0.Message);
}
catch (Exception x1)
{
    Console.WriteLine(x1.Message);
}

OK, that’s it for the REST API and the client HTTP object.  Next up:  Preparing a Web API endpoint to be called FROM Sharepoint 2013 Workflow.

Sharepoint 2013 REST API: The C# Connection: Part 1 Using System.Net.Http.HttpClient

Sharepoint 2013 REST API: The C# Connection: Part 2 Query List or Item and Decoding The Meta-Data

Sharepoint 2013 REST API: The C# Connection: Part 3 Working With List Item Data

Sharepoint 2013 REST API: The C# Connection: Part 4 Document Libraries, Folders And Files


Sharepoint 2013 REST API: The C# Connection: Part 4 Document Libraries, Folders And Files

The Document Library is a list like all things within Sharepoint, but it is a list with a difference. Each list item within a document library is associated with one Document.  To access the document through the user agent (the browser) we can address the item “directly” using the simple URI:

https://{server name}/{document library name}/{folder name(s)}/{document name}.{extension}

Note: I will use the convention of using the brackets {} to indicate parameters which are supplied when you instantiate a line of code.  Do NOT include the brackets in your code.

This will conjure up the document and launch an appropriate viewer application (Adobe Acrobat, MS Word or what have you).  This type of addressing will not work when we are using the REST API for uploading or downloading a document. Although we ignored the issue of folders within lists in our discussions so far, we need to address the issue here.  Keep in mind however that folders can appear in normal (non-Document Library) lists also.  I will defer discussion of creating folders and deletes of folders and documents to a subsequent post.  In this post let’s work on downloading documents and then look at how to upload documents.  As with all things Sharepoint, it doesn’t work how you might think.

Document Library Item Details

If we use the REST API and retrieve a list of all items within a document library in the normal manner (see Part 2 for details) our collection of entry elements looks normal: each entry has a collection of link nodes and (within the content node) a collection of property nodes.  Out of the box the properties seem straightforward.

[Screenshot: Document Library item entry node]

Note that the Document associated with this item does NOT appear in the list of properties.  But we can get that with a second REST API call.  If we expand the d:FileSystemObjectType properties for the item pictured above (which is a normal item) we see that its value is 0.  If the Library has folders then the folder is also listed as an entry in the entry collection along with document items.  Here is how one such entry item for a folder looks:

[Screenshot: folder item entry node]

Looks the same, right?  If we expand the d:FileSystemObjectType its value is 1. The “documentation” for FileSystemObjectType is obscure and indicates that this property is “An enumeration value that indicates the type: file, folder, Web, or invalid”.  So apparently 1 indicates a folder item while our observed value of 0 indicates a file item. Or something.  Don’t be confused by the title on this item; the name I gave to this folder is Folder0.0. Moving on, having an item which references a file (Document), we can make a call using the link “FieldValuesAsText” to get a second item entry with the actual document reference:

[Screenshot: FieldValuesAsText entry]

Here we have the file name (d:fileLeafRef), the Document Library (d:fileDirRef) and the relative path for this item (d:FileRef).  It is the latter value which we get when we use the Publication Hyperlink field in a List Item.  Note that d:FileRef will include any folders that the document is nested within. In this case the PDF file called “Screen clipping taken 9152013.pdf” is in the root of the Document Library “DLR Document Library”.

Document Down Load

In order to download a file we need to retrieve the binary byte stream for the document using the REST API.  Here is the basic idea.

Build a basic uri fragment in this format:

web/GetFolderByServerRelativeUrl('/{path}')/Files/$value?$filter=Name eq '{document name}'

where:

{path} is the Document Library and path found in d:fileRef excluding the document and extension.

{document} is the value found in d:fileLeafRef (document Name plus extension)

The $value directs Sharepoint to provide the binary bytes for this document.

As an alternative we can form a uri fragment in this form:

web/ TO DO ADD THIS LINE

which is the value of the “EDIT” link from the links collection of the document library item, appending

/$value

Now call a REST API get against this uri fragment using an HttpClient prepared as discussed in Part 1, with the addition of a header as follows:

client.DefaultRequestHeaders.Add("binaryStringResponseBody", "true");

//The header tells Sharepoint to give us an uncontaminated binary stream

HttpResponseMessage resp = client.GetAsync(uri).Result;

Now we need to read the response body (which is binary format) into a binary MemoryStream:

MemoryStream ms = new MemoryStream();
resp.Content.CopyToAsync(ms).Wait();
ms.Position = 0;

This memory stream can be passed to a file write routine to persist the document to disk or in our case can be used to upload a document to a different Sharepoint site.
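
For example, to persist the stream to disk (a sketch; documentName would come from d:fileLeafRef and the target path is an assumption):

using (FileStream fs = File.Create(@"C:\temp\" + documentName))
{
    ms.CopyTo(fs);  //ms.Position was reset to 0 above
}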

Document Upload

To upload a document into a Document Library the procedure is the same but different.  The Tasks are:

  • Get the document into a MemoryStream object;
  • Prepare an HTTP Client object for an HTTP Post;
  • Get a Digest object from Sharepoint using a REST API Post
    • Code Method: RetrieveDigest & RESTPost
    • The LINQ code to extract the Digest value from the response is given in the ctor method of the class: CDigest
  • Prepare a uri fragment to use to POST the binary data to the Document Library;
  • Prepare an HTTP Client object for a second HTTP Post
    • Code Method: GetHTTPClient
    • Code Method: _PutBytes & RESTPostBinary
      • Set a header containing the Digest value
      • Prepare a request content object with the binary data from the memory stream
      • Set a header with the proper media type on the request content object
      • Set a header telling Sharepoint you are sending a binary object in the ‘body’
      • Attach the content body to the HttpClient object
      • Call the Post command

and “like magic” your document is created and attached to the Sharepoint list, and a file item is created in the Document Library as a side effect of the upload.  Wow.  Let’s fill in the details of this process.  I will assume you have created and populated the MemoryStream object with the contents of your binary file.  I will show a full code sample at the end of this post.

Get a Sharepoint Digest Object

In order to modify data within Sharepoint using the REST API you must request a Digest object from Sharepoint and attach this Digest item to your post command.  To get the digest value we must make an HTTP POST call to a special Sharepoint REST API Endpoint:

contextinfo

The actual Digest value is stored in the property field: FormDigestValue and its duration is given in the field: FormDigestTimeoutSeconds.  If your program is long running you may need to get a fresh copy of the Digest to use occasionally.  My code to retrieve a Digest looks something like this:

public static SPObject.CDigest RetrieveDigest(HttpClient client)
{
    string respString = RESTPost(client, "contextinfo");
    return new SPObject.CDigest(respString);
}

public static string RESTPost(System.Net.Http.HttpClient client, string uri)
{
    var resp = client.PostAsync(uri, new StringContent(string.Empty)).Result;
    string respString = resp.Content.ReadAsStringAsync().Result;

    if (resp.StatusCode != HttpStatusCode.OK)
    {
        throw new ApplicationException("Something Bad Happened");
    }
    return respString;
}

public class SPObject
{
    public class CDigest
    {
        public string Value { get; set; }
        public string TimeOutSeconds { get; set; }
        public System.DateTime GetTime { get; set; }

        public CDigest() { }

        public CDigest(string respString)
        {
            XDocument xResponse = XDocument.Parse(respString);
            IEnumerable<CDigest> digestList =
                from g in xResponse.Descendants(CSPNamespace.dataServicesNS + "GetContextWebInformation")
                select new CDigest
                {
                    Value = g.Elements(CSPNamespace.dataServicesNS + "FormDigestValue").First().Value,
                    TimeOutSeconds = g.Elements(CSPNamespace.dataServicesNS + "FormDigestTimeoutSeconds").First().Value,
                    GetTime = System.DateTime.Now
                };
            CDigest digest = digestList.First();
            Value = digest.Value;
            TimeOutSeconds = digest.TimeOutSeconds;
            GetTime = digest.GetTime;
        }
    }
}

Uploading The Binary Data with an HTTP POST

Now that we have a Digest object we need to compose our uri to post the data as a binary byte stream into the document library.  According to this Microsoft Documentation  (11/1/2013) the basic uri fragment is composed from this form:

web/GetFolderByServerRelativeUrl('{path}')/Files/add(url='{document}',overwrite={true | false})

Where:

{path} is the Document Library and any folders.

{document} is the document and its extension.

For example, for a document yourClaims.PDF within the Claims folder of the document library called My Library, where we allow file overwrites, the uri fragment becomes:

web/GetFolderByServerRelativeUrl('My Library/Claims')/Files/add(url='yourClaims.PDF',overwrite=true)

Now we need to add the Digest Value to a Header of the HTTPClient object:

client.DefaultRequestHeaders.Add("X-RequestDigest", {digest value as an unquoted string});

Ok so far, so good.  You can see my code example below in the method _PutBytes.  In order to POST data we need to prepare a request body.  In C# we do this with the special object HttpContent as:

HttpContent reqContent = new StreamContent(ms);

where ms is the open MemoryStream of the binary bytes of our document to post. We need to add two specialized headers to this object before we post:

reqContent.Headers.Add("binaryStringRequestBody", "true");
reqContent.Headers.ContentType = System.Net.Http.Headers.MediaTypeHeaderValue.Parse("application/json;odata=verbose");

Now we are ready to use the HttpClient PostAsync method to POST our document to the server.  See my method RESTPostBinary below for a coding example.

void _PutBytes(CENTRYTranslate data, MemoryStream ms, HttpClient clientWriter, string fullpath, string documentName)
{
    SPObject.CDigest digest = AIC.Http.Client.CHttpObject.RetrieveDigest(clientWriter);
    string URLCreateDocument = string.Format(
        "web/GetFolderByServerRelativeUrl('{0}')/Files/add(url='{1}',overwrite=true)",
        fullpath, documentName);
    clientWriter.DefaultRequestHeaders.Add("X-RequestDigest", digest.Value);
    //RESTPostBinary throws if the POST does not return OK
    string respString = CHttpObject.RESTPostBinary(clientWriter, URLCreateDocument, ms);
}

public static string RESTPostBinary(System.Net.Http.HttpClient client, string uri, MemoryStream ms)
{
    HttpContent reqContent = new StreamContent(ms);
    reqContent.Headers.Add("binaryStringRequestBody", "true");
    reqContent.Headers.ContentType = System.Net.Http.Headers.MediaTypeHeaderValue.Parse("application/json;odata=verbose");
    var resp = client.PostAsync(uri, reqContent).Result;
    if (resp.StatusCode != HttpStatusCode.OK)
    {
        throw new ApplicationException("SOMETHING BAD HAPPENED");
    }
    return resp.Content.ReadAsStringAsync().Result;
}

Easy and fun, no?

Sharepoint 2013 REST API: The C# Connection: Part 1 Using System.Net.Http.HttpClient

Sharepoint 2013 REST API: The C# Connection: Part 2 Query List or Item and Decoding The Meta-Data

Sharepoint 2013 REST API: The C# Connection: Part 3 Working With List Item Data

Sharepoint 2013 REST API: The C# Connection: Part 4 Document Libraries, Folders And Files
Sharepoint 2013 REST API: The C# Connection: Part 5 REST API More on Folders and Other Odds & Ends

Sharepoint 2013 REST API: The C# Connection: Part 3 Working With List Item Data

Now that the administrative details of Part 2 are over we can address some useful issues. Recall that if we query a List we can obtain the uri segment needed to access the items in the list.  This, for example, would return all items in the referenced list:

Web/list(guid'850fae4-0cce-8c30-c2a85001e215')/Items

Please see this post for an important update about data paging when using the …/Items call

If we don’t want all the items we can apply an OData $filter verb to select a subset of items matching some criteria. If we want a subset of items with a specific Author we could create the following fragment:

Web/list(guid'850fae4-0cce-8c30-c2a85001e215')/Items?$filter=Author eq 'Cloud2013'

If you are concerned about download sizes of your calls into the REST API you can limit the property fields returned using the OData $select verb.
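
For example (reusing the field names from above), a fragment like the following would return only the Title and Author properties for each item:

Web/list(guid'850fae4-0cce-8c30-c2a85001e215')/Items?$select=Title,Author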

The feed returned from a GET call with an uri fragment like those above has an identical structure to the feed discussed in Part 2 of this series. Depending on the list and the $filter there may be zero, one or many entry nodes.  Each entry node contains information on one item with its corresponding Links collection and properties collection.  In this case the Links collection refers to operations which can be made on this single item; for example the “ContentType” link (Web/Lists(guid'850f0ae4-0cce-47e9-8c30-c2a85001e216')/Items(3)/ContentType) would return a feed with an entry for the links and properties associated with the content type for this item.  The properties collection is, of course, the fields for an item and the values associated with that item.   Here is a Quick Watch screen snap of the data for an item:

[Screenshot: Quick Watch view of an item’s properties]

This looks pretty normal.  The field “Title” appears with its value “Points To Approved Item”.  Some of the field names have been Sharepointerized. Our field “AIC Description” has become “AIC_x0020_Description”.  What is going on here?  According to this post:

When you create a column on a list, both its DisplayName and StaticName are set to the same value. However, the StaticName contains converted values for some characters, most notably a space ‘ ‘ is converted to '_x0020_'. So if the DisplayName is ‘Product Description’, then the StaticName will be 'Product_x0020_Description'.

There’s another little bugaboo: The StaticName is limited to 32 characters including the translations for special characters.

In translation, the DisplayName is the name you see on the Sharepoint UI screen.  The value of the display name can change.  When the column is created a fixed StaticName is created equal to the initial DisplayName, with spaces converted to _x0020_.  When working with the REST API we are seeing the StaticName. Ok so far?  We have another problem: some fields are missing!  The REST API does not support all columns using the

Web/list(guid'850fae4-0cce-8c30-c2a85001e215')/Items

format above.  This interesting post has more details. 

So we will need to do some more digging; Sharepoint has to store the value somewhere.  Well, in the true spirit of Sharepoint we have to say: it depends.  In our case we use a Publishing Hyperlink field.  This is a field which allows a list item to contain a reference to some other object.  That object can be:

  • an external Web URL
  • a Sharepoint Page; or
  • A Document stored in a Sharepoint Document Library

BTW, in the first instance the whole URL is stored and in the latter two instances only the RELATIVE link is stored, since these are assumed to point to a SP page or document within the current site.  At our site we need to retrieve the true value of the Publishing Hyperlink.  And as you can see from the above, the field (which we call “AIC Publishing Hyperlink”) does not appear in any form in the original REST call.  If we jump to the Links collection for an item we can find a link called FieldValuesForEdit. In our case this is:

Web/Lists(guid'cc1e1896-8df8-4f32-8f1d-2afab37329ed')/Items(2)/FieldValuesForEdit

If we call that link we get back an entry which in Quick Watch looks like:

[Screenshot: Quick Watch view of the FieldValuesForEdit entry]

OK, there it is, hiding in Sharepointification form as “AIC_x005f_x0020_x005f_Publishing_x005f_x0020_x005f_Hyper” (where did the _x005f come from, you might ask; I have no idea, some things in SP I just accept).  The value is an HTML anchor tag and it is pretty simple to parse these to recover the href value, which in this case is (after decoding) “/DLR%20Document%20Library/1969.html”, where “/DLR%20Document%20Library/” points to a Sharepoint Document Library and 1969.html is the Document.  We are working on how Managed Metadata values are stored but our research is not yet complete on this.  In my next post I will discuss how to download and upload a document from a document library (which is how I fell into this problem of retrieving the value stored in a Publishing Hyperlink field).
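
Parsing that value can look like this (a sketch; cell is a CEntry built from the FieldValuesForEdit response as in Part 2, and we assume the anchor markup is well formed XML):

string anchor = cell.GetProperty("AIC_x005f_x0020_x005f_Publishing_x005f_x0020_x005f_Hyper");
XElement a = XElement.Parse(anchor);                                //e.g. <a href="/DLR%20Document%20Library/1969.html">1969</a>
string href = Uri.UnescapeDataString((string)a.Attribute("href"));  //"/DLR Document Library/1969.html"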

Ok, we are almost done for today.  Let’s look at a security issue.  What values you see for a field on an item depend on the identity of the user making the REST API call and the state of the item within Sharepoint when you look at it (it’s a Quantum thing).   Recall the publication settings for a list item in Sharepoint. This screen snap may help your memory:

[Screenshot: list versioning and content approval settings]

When we enable Content Approval and set “Who should see draft items” to “Only users who can approve items (and the author of the item)” then an item can exist in two states at once.  If we have an approved item then everyone sees the same thing:

[Screenshot: the approved item]

Now if an editor changes an item, say by changing the content of the “Description” field, and the item is waiting for approval, the item is in a Pending Approval status like this:

[Screenshot: the pending item]

If you are a general user you see the values in the Approved item.  If you are an approver or the author you see the values in the Pending item.  As in the UI, so in the REST API.  The values returned to you in your REST API call depend on the Sharepoint security group of the identity you use to access the item and the list settings for approval and visibility. So, as in all things, be careful what you ask for.  In some use cases you may want to see pending values and in others you may want to see only approved values.  There are use cases (we have one in workflow processing for a list) where we want to see both.  In this case we use two identities to access the same list item and so can compare the pending and approved items.

Sharepoint 2013 REST API: The C# Connection: Part 1 Using System.Net.Http.HttpClient

Sharepoint 2013 REST API: The C# Connection: Part 2 Query List or Item and Decoding The Meta-Data

Sharepoint 2013 REST API: The C# Connection: Part 3 Working With List Item Data

Sharepoint 2013 REST API: The C# Connection: Part 4 Document Libraries, Folders And Files
Sharepoint 2013 REST API: The C# Connection: Part 5 REST API More on Folders and Other Odds & Ends

Coming up: Document processing!

The Google Barge In Portland Maine.  Don’t believe the cover story.  It’s a 3D printer which can only produce other 3D printer barges!

Sharepoint 2013 REST API: The C# Connection: Part 2 Query List or Item and Decoding The Meta-Data

Let’s assume that we have an HttpClient object as described in Part 1.  That is, we have:

  • created the HttpClient object,
  • set the base URL to our Sharepoint site,
  • set the Authentication Header; and
  • set the Accept Header for “application/atom+xml”.

Now we want to query the REST API for the metadata of a Sharepoint List.  Sharepoint List Titles are unique by design, so we can use the List Title (rather than the GUID) to locate the item and return its metadata. The format of this call as a uri fragment is:

web/lists/GetByTitle('listname')

Now our code looks like:

string uri = "web/lists/GetByTitle('Master Document Library')";

//Note that spaces are allowed in most calls but there are other situations where spaces are escaped!

HttpResponseMessage resp = client.GetAsync(uri).Result;

string respString = resp.Content.ReadAsStringAsync().Result;

if (resp.StatusCode != HttpStatusCode.OK)
{
    throw new ApplicationException(
        string.Format("HTTP Error. Status: {0} Reason: {1}", resp.StatusCode, resp.ReasonPhrase));
}

This will put a string representation of the XML formatted metadata about the list into the HttpResponseMessage.  Please note that the call to extract the XML from the response body:

string respString = resp.Content.ReadAsStringAsync().Result;

is only appropriate for string results (XML or JSON as specified in our Accept Header) and is not correct if the return is binary data.  I will cover binary data in a subsequent blog when I discuss file upload and download.

Few things are less documented than the exact contents of the metadata feed returned by calls like this from the REST API.  On a high level it is an Atom feed which allows for a combination of collections of Property nodes (key/value pairs) and collections of Link nodes. The Property nodes are the metadata fields related to the list and the Link nodes are uri segments to guide additional REST API calls concerning whatever item is currently being queried.  Neither the properties nor the links are fixed; they vary from Sharepoint object to object and even between objects of the same type, depending on the values of the property fields. (For example, a list item on a list with versioning will contain a Link node for access to the versions; if the list items are not versioned then the Versions link will not be emitted.)

Rather than list XML directly I will use the visualization tool XMLSPY to display the XML in a “grid view”.  On a high level the entry for a list would look like:

[Screenshot: grid view of the entry node for a list]

The Links are on the root of the entry node and the properties are nested as entry/content/properties.  Note that the XML makes heavy use of XML namespaces and any creative manipulation of the entry requires some knowledge of XML, namespaces,  XPath or LINQ for XML.  I use LINQ for XML at my desk so I will use that idiom rather than XPATH to manipulate these objects.  If we expand the properties node it will look something like this:

[Screenshot: grid view of the expanded properties node]

There is a lot of data here about the list, most of it only lightly documented.  We can see however that the property key d:Title contains the name of our list, which we queried on, and d:Id contains the GUID for the list.  The latter never changes but the former can be renamed.

If we expand the Links collection  it would look something like this:

[Screenshot: grid view of the expanded Links collection]

Note item 10, the Items link. The href attribute contains the uri for the REST API call to retrieve all the items in this list, while the Fields link (item 7) is an uri for the Fields currently defined for this list. If we know the d:Id of a particular item (item IDs are not GUIDs but simple integers), say 6, we can retrieve a single item by appending the ID to the Items uri, in the form of:

Web/list(guid'850fae4-0cce-8c30-c2a85001e215')/Items(6)

What about the link with a blank title? For historical reasons this is blank but it represents the EDIT link.  To make my life simpler I translate the XML property and link collections into C# Dictionary objects and place them in a C# class with two supporting methods:

public class CEntry
{
    public Dictionary<string, string> Links;
    public Dictionary<string, string> Properties;

    public string GetLink(string key)
    {
        string value = string.Empty;
        Links.TryGetValue(key, out value);
        return value;
    }

    public string GetProperty(string key)
    {
        string value = string.Empty;
        Properties.TryGetValue(key, out value);
        return value;
    }
}

At this time I am not using any of the root nodes so I just discard them. I get to the Dictionary objects from the XML using LINQ for XML.  I learned what little I know about LINQ for XML from this book.  To brush up on your XML try this book.   For a XML tree containing entry node(s) my LINQ looks like this:

public static class CSPNamespace
{
    public static XNamespace metaDataNS = @"http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";
    public static XNamespace baseNS = @"http://www.w3.org/2005/Atom";
    public static XNamespace dataServicesNS = @"http://schemas.microsoft.com/ado/2007/08/dataservices";
}

XDocument xList = XDocument.Parse(respString);

IEnumerable<CEntry> group = from g in xList.Descendants(CSPNamespace.baseNS + "entry")
                            select new CEntry
                            {
                                Links = MakeLinkDictionary(g),
                                Properties = MakePropertyDictionary(g)
                            };

The IEnumerable collection needs special processing before it is accessed.  The following tests can help.

To see if the collection contains one or more entries:

group != null && group.Any();

Having passed that test, we can then use the simple Count function to see how many entries are in the collection:

group.Count()

To get the first (or only) entry from the collection:

group.First()

These last two calls will fail if the collection fails the test above.

CEntry cell = group.First();  //Assumes one and one only

Where MakeLinkDictionary and MakePropertyDictionary look like:

public static Dictionary<string, string> MakePropertyDictionary(XElement xs)
{
    Dictionary<string, string> pList = new Dictionary<string, string>();
    var group = from g in xs.Elements(CSPNamespace.baseNS + "content").Descendants(CSPNamespace.metaDataNS + "properties")
                select g;
    foreach (XElement property in group.Elements())
    {
        pList.Add(property.Name.LocalName, property.Value);
    }
    return pList;
}

public static Dictionary<string, string> MakeLinkDictionary(XElement xs)
{
    Dictionary<string, string> lList = new Dictionary<string, string>();
    IEnumerable<XElement> links = from g in xs.Elements(CSPNamespace.baseNS + "link")
                                  select g;
    foreach (XElement link in links)
    {
        string rel = string.Empty;
        string href = string.Empty;
        foreach (XAttribute att in link.Attributes())
        {
            if (att.Name.LocalName == "title")
            {
                if (string.IsNullOrEmpty(att.Value))
                {
                    rel = "Edit";  //the blank title is the EDIT link
                }
                else
                {
                    rel = att.Value;
                }
            }
            if (att.Name.LocalName == "href")
            {
                href = att.Value;
            }
        }
        lList.Add(rel, href);
    }
    return lList;
}

After this pre-processing the metadata can be accessed in a fairly straightforward manner:

var listMetaData = group.First();

string uri = listMetaData.GetLink("Fields");

string iD = listMetaData.GetProperty("Title");

We will turn to what to actually do with the metadata in the next post.

Sharepoint 2013 REST API: The C# Connection: Part 1 Using System.Net.Http.HttpClient

Sharepoint 2013 REST API: The C# Connection: Part 2 Query List or Item and Decoding The Meta-Data

Sharepoint 2013 REST API: The C# Connection: Part 3 Working With List Item Data

Sharepoint 2013 REST API: The C# Connection: Part 4 Document Libraries, Folders And Files
Sharepoint 2013 REST API: The C# Connection: Part 5 REST API More on Folders and Other Odds & Ends
