Archive for the ‘Sharepoint 2013’ Tag

Sharepoint 2013 REST API: Retrieving List Item Data   1 comment

I first discussed retrieving List Item data here: Sharepoint 2013 REST API: The C# Connection: Part 3 Working With List Item Data.  The simple REST API call:


functions like

select * from Table1

in a SQL database.  However Sharepoint, although resting on SQL Server for everything, will only return a limited number of rows from the underlying list.  The maximum number of items (rows) returned in a single call is governed by a system constraint.  On our out-of-the-box Sharepoint 2013 installation the limit was 100 rows!  Supposedly Sharepoint Server Admins can change the maximum value, but they cannot eliminate the constraint.  The naïve coder might think that this can be gotten around using the $skip parameter in the OData extensions to the Sharepoint REST API.  Alas, $skip is not implemented; instead M$ implemented its own undocumented skip parameter.  Not to worry, you don’t need to provide your own parameter values.  The initial call using


returns X rows of the table (in primary key index order), where X is equal to the page constraint discussed above.  If there are additional rows, the metadata in the returned data structure will contain a “next” hypermedia link whose value is a fully qualified Sharepoint REST API call for the next X rows (i.e. the next page).
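In the Atom XML representation, that paging link shows up as a link element with rel=”next”. A sketch of its shape (the server name and skip-token values here are illustrative placeholders, not real values):

```xml
<!-- illustrative only: href is a placeholder for the fully qualified URL
     Sharepoint returns, including its undocumented skip token -->
<link rel="next"
      href="https://{myserver}/_api/web/lists/GetByTitle('MyList')/items?$skiptoken=Paged=TRUE%26p_ID=100" />
```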


                               Google Barge Sinking Slowly in San Francisco Harbor after being infected with the Sharepoint 2013 Virus

We then need to set up a typical paging object to read items page by page until all items are read.  I will be leveraging the code patterns given in the series referenced above.  The page calls (…/items) return results in XML format, which is cast as an XDocument type.  Then we can probe for a “next” link as:

static string _GetLinkToNextPage(XDocument xDoc)
{
    const string matchString = "/Web/Lists";
    const string NEXT = "next";
    var links = from g in xDoc.Descendants().Elements(CSPNamespace.baseNS + "link")
                select g;
    foreach (XElement link in links)
    {
        string rel = string.Empty;
        string href = string.Empty;
        foreach (XAttribute att in link.Attributes())
        {
            if (att.Name.LocalName == "rel")
                rel = att.Value;
            if (att.Name.LocalName == "href")
                href = att.Value;
        }
        if (rel == NEXT)
        {
            //return just the relative path
            return href.Substring(href.IndexOf(matchString));
        }
    }
    //return null if no link is found
    return null;
}

The complete object template looks like:

public static class CItemPage
{
    public static List<CEntry> Exec(HttpClient client, CEntry listMetaData)
    public static List<CEntry> _Worker(HttpClient client, string next)
    static void _Accumulate(XDocument items, ref List<CEntry> allItems)
    static string _GetLinkToNextPage(XDocument xDoc)
    static XDocument _GetListMetaData(HttpClient client)
}


The entry point for the paging activity is CItemPage.Exec, which calls _Worker.  There we loop through the pages and (for my own purposes) accumulate the items from all pages in a single List<CEntry> using my method _Accumulate.

public static List<CEntry> _Worker(HttpClient client, string next)
{
    try
    {
        List<CEntry> allItems = new List<CEntry>();
        XDocument items = null;
        while (next != null)
        {
            items = _GetItems(client, next);
            next = _GetLinkToNextPage(items);
            _Accumulate(items, ref allItems);
        }
        return allItems;
    }
    catch (Exception x)
    {
        var b = x.Message;
        return null;
    }
}
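_Worker calls a _GetItems helper that is not shown in this post. A minimal sketch of what it might look like (hypothetical: it assumes the HttpClient already carries the XML Accept header and authorization as set up earlier in this series):

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Xml.Linq;

// Hypothetical helper: issues one page request and parses the Atom XML response.
// "next" is either the initial ".../items" fragment or the relative "next" link
// recovered by _GetLinkToNextPage.
static XDocument _GetItems(HttpClient client, string next)
{
    HttpResponseMessage resp = client.GetAsync(next).Result;
    string respString = resp.Content.ReadAsStringAsync().Result;
    if (resp.StatusCode != HttpStatusCode.OK)
        throw new ApplicationException(resp.StatusCode.ToString());
    return XDocument.Parse(respString);
}
```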

Easy and fun.  Should there be an easier, faster way? Yes.


This madness must stop.

Sharepoint 2013 REST API: Testing for the Existence of a Document In a Document Library   Leave a comment

This is just a short program note concerning the REST API for Sharepoint 2013.  If you want to test for the existence of a Document within a Document Library in Sharepoint you might be tempted to use this uri fragment:

web/getfilebyserverrelativeurl('/{document library name and folder path}/{document name}')

and indeed if the document exists this call will return an HTTP Status 200 code and the standard ATOM feed with one entry node with the meta data for the document.

<?xml version="1.0" encoding="utf-8"?>
<entry xml:base="…" xmlns="…" xmlns:d="…" xmlns:m="…" xmlns:georss="…" xmlns:gml="…">
    [many other links]
    <d:ServerRelativeUrl>/DLR Document Library/_f37-06_tc_big.svg</d:ServerRelativeUrl>
    [many other properties]
</entry>

Now here is today’s puzzle.  If the file does not exist, what does Sharepoint return for this call:

  1. Http Status Code 200 and an Atom Feed with no entry nodes;
  2. HTTP Status Code 404 (Not Found) and no Atom Feed; or
  3. HTTP Status Code 500 (Internal Server Error)
If you selected answer 1 you have been programming Sharepoint 2013 awhile but are still quite naïve.

If you selected answer 2 you are a RESTafarian and have no business programming Microsoft products.

If you selected answer 3 you have been programming this interface for way too long and should find another line of work.

The correct answer is 3!  Don’t ask me why.

To be consistent, we would like to query Sharepoint 2013 and get an Atom Feed whether the file exists or not, and expect that the feed will have zero entry nodes when the Document does not exist.  How to do this?  Query the Document Library and folder path using a completely different uri fragment for all files, and use an OData filter to limit the returned Atom Feed to zero entry nodes when the document does not exist.  Here is an example:

web/GetFolderByServerRelativeUrl('{document library name and folder path}')/Files?$filter=Name eq '{document name}'

Now you will get an HTTP Status code of 200 whether the document exists or not.  The Atom Feed will be returned in all cases but will differ depending on whether the Document exists (one entry node) or not (no entry nodes).
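A sketch of that pattern in C# (assuming an HttpClient already configured for XML output as in the C# Connection series; the method name and paths are illustrative, not from the original post):

```csharp
using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Xml.Linq;

// True if the document exists: counts <entry> nodes in the Atom response.
static bool DocumentExists(HttpClient client, string folderPath, string fileName)
{
    XNamespace atom = "http://www.w3.org/2005/Atom";
    string uri = string.Format(
        "web/GetFolderByServerRelativeUrl('{0}')/Files?$filter=Name eq '{1}'",
        folderPath, fileName);
    HttpResponseMessage resp = client.GetAsync(uri).Result;
    string respString = resp.Content.ReadAsStringAsync().Result;
    if (resp.StatusCode != HttpStatusCode.OK)
        throw new ApplicationException(resp.StatusCode.ToString());
    // zero entry nodes => the document does not exist
    return XDocument.Parse(respString).Descendants(atom + "entry").Any();
}
```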

Strange but true.  Don’t let this error happen to you.


Sharepoint 2013 REST API: The C# Connection: Part 5 REST API More on Folders and Other Odds & Ends   4 comments

Since our last post was so long I left a few odds and ends for this post.  Specifically I will touch on the following in this posting:

  • Testing for the Presence of a Document within a Document Library
  • Testing for the presence of a Folder within a Document Library
  • How to Create a Folder within a Document Library; and
  • How to Create a custom HTTP Exception class derived from the base Exception Class

Testing for the Presence of a Document within a Document Library

As with all things Sharepoint, the most important part of this task is composing the correct uri fragment.  We need three pieces of information to perform the test:

  • The Document Library Name
  • The Folder Path (if any) within the Library
  • The Document Name (root plus extension)

We then compose a uri fragment as:

web/GetFolderByServerRelativeUrl('/{Document Library}/{Folder Path}')/Files('{Document Name}')

So if we are testing for a Document myDocument.PDF in the folder path AdminFolder/ClaimsFolder in a Document Library called Accounting Documents, our uri fragment becomes:

web/GetFolderByServerRelativeUrl('/Accounting Documents/AdminFolder/ClaimsFolder')/Files('myDocument.PDF')

One then makes an HTTP Get call against the Sharepoint REST API.  An Http Status Code of OK (numeric value: 200) indicates that the file exists.

An Http Status Code of NotFound (numeric value: 404 ) indicates that the file is not found at that location.

This get call does NOT return the document itself to the caller.
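A minimal sketch of this status-code test (the method name is illustrative; the HttpClient is assumed to be configured as in Part 1 of this series):

```csharp
using System;
using System.Net;
using System.Net.Http;

// True if GET on the file's metadata returns 200 (OK); false on 404 (NotFound).
// Any other status is treated as an error.
static bool FileExistsByStatus(HttpClient client, string folderPath, string fileName)
{
    string uri = string.Format(
        "web/GetFolderByServerRelativeUrl('{0}')/Files('{1}')",
        folderPath, fileName);
    HttpResponseMessage resp = client.GetAsync(uri).Result;
    if (resp.StatusCode == HttpStatusCode.OK) return true;
    if (resp.StatusCode == HttpStatusCode.NotFound) return false;
    throw new ApplicationException(resp.StatusCode.ToString());
}
```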

Testing for the presence of a Folder within a Document Library

This test is even simpler.  We need two pieces of information:

  • The Document Library Name
  • The Folder Path (if any) within the Library

We compose the uri fragment as:

web/GetFolderByServerRelativeUrl('/{Document Library Name}/{Folder Path}')

One then makes an HTTP Get call against the Sharepoint REST API.  An Http Status Code of OK (numeric value: 200) indicates that the folder path exists.  An Http Status Code of NotFound (numeric value: 404) indicates that the folder path is not found within the library.

How to Create a Folder within a Document Library

In order to create a folder we need to:

Compose a proper uri fragment;

Compose a JSON formatted content body (System.Net.Http.HttpContent);

Get a REST API Digest Value (See Part 3 of this series on this) and include it in the header; and

Make a HTTP POST call to the Sharepoint REST API

So here we go.  The uri fragment takes the simple fixed form of:


The JSON HTTPContent format does the real work and takes the form of:

{ '__metadata': { 'type': 'SP.Folder' }, 'ServerRelativeUrl': '/{Document Library Name}/{Folder Path}' }

So if our Document Library is “Accounting Documents” and your folder name is “ClaimsFolder” our JSON looks like:

{ '__metadata': { 'type': 'SP.Folder' }, 'ServerRelativeUrl': '/Accounting Documents/ClaimsFolder' }

Having placed this value into a string object as:

string data = "{ '__metadata': { 'type': 'SP.Folder' }, 'ServerRelativeUrl': '/Accounting Documents/ClaimsFolder' }";

we create a  HTTPContent object as

System.Net.Http.HttpContent reqContent = new StringContent(data);

After adding the correct Digest Header and ContentType Header our post looks like:

var resp = client.PostAsync(uri, reqContent).Result;

string respString = resp.Content.ReadAsStringAsync().Result;

If the folder is created successfully we will get back an HTTP Status of Created (numeric: 201 )

To create a nested folder just expand the path within ServerRelativeUrl.
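For example, to create ClaimsFolder nested under AdminFolder, the request body might look like this (folder names illustrative; note each parent folder in the path is assumed to already exist):

```json
{ "__metadata": { "type": "SP.Folder" },
  "ServerRelativeUrl": "/Accounting Documents/AdminFolder/ClaimsFolder" }
```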

Now let’s turn to a topic that doesn’t really fit anywhere else, but I will just stuff it in here.

How to Create a custom HTTP Exception class derived from the base Exception Class

HTTP calls are parameterized calls, and when they fail there is some information we may wish to attach to the Exception object we create on errors.  Specifically: the URL which was called, the returned HTTP Status Code and the Response Body.  Here is a simple derived HTTP exception class which I use:

public class HTTPException : ApplicationException
{
    public string URL { get; set; }
    public string Response { get; set; }
    public HttpStatusCode Status { get; set; }

    public HTTPException(string message, HttpStatusCode status, string uRLString, string respString)
        : base(message)
    {
        URL = uRLString;
        Response = respString;
        Status = status;
    }

    public HTTPException(string message, HttpStatusCode status, string uRLString, Exception innerException)
        : base(message, innerException)
    {
        URL = uRLString;
        Response = string.Empty;
        Status = status;
    }
}

Assume a typical HTTP method call like:

public static HttpStatusCode RestGet(System.Net.Http.HttpClient client, string uri, List<HttpStatusCode> allowed, out string respString)
{
    respString = string.Empty;
    HttpResponseMessage resp = null;
    try
    {
        resp = client.GetAsync(uri).Result;
        respString = resp.Content.ReadAsStringAsync().Result;
        _DisplayDebugInfo(client, resp, uri, null, null, respString);
    }
    catch (Exception x)
    {
        throw new HTTP.Exceptions.HTTPException("RestGet", HttpStatusCode.ServiceUnavailable,
            client.BaseAddress.ToString() + "/" + uri, x);
    }

    if (resp.StatusCode != HttpStatusCode.OK)
        throw new HTTP.Exceptions.HTTPException("RestGet", resp.StatusCode,
            client.BaseAddress.ToString() + "/" + uri, respString);

    return resp.StatusCode;
}

We can pick this up in an outer try/catch block like:

try
{
    //make your HTTP call here
}
catch (HTTPException xo)
{
    //HTTP-specific handling: xo.URL, xo.Status and xo.Response are available here
}
catch (Exception x1)
{
    //all other failures
}
OK, that’s it for the REST API and the Client HTTP object.  Next up:  Preparing a WEB API Endpoint to be called FROM Sharepoint 2013 Workflow.

Sharepoint 2013 REST API: The C# Connection: Part 1 Using System.Net.Http.HttpClient

Sharepoint 2013 REST API: The C# Connection: Part 2 Query List or Item and Decoding The Meta-Data

Sharepoint 2013 REST API: The C# Connection: Part 3 Working With List Item Data

Sharepoint 2013 REST API: The C# Connection: Part 4 Document Libraries, Folders And Files


Sharepoint 2013 REST API: The C# Connection: Part 1 Using System.Net.Http.HttpClient   13 comments

This is the first of a multipart series on the Sharepoint REST API and will focus on using this API with C#.  At our shop we need to manipulate documents and list items between sites.  For our purposes we will be calling custom WEB API HTTP Endpoints from Sharepoint workflows; these endpoints will then manipulate Sharepoint objects via the Sharepoint REST API using C#.  If REST means nothing to you but a good sleep, start here. The topics we will cover include:

  • HTTPClient Object (Framework 4.5)
  • Understanding and Manipulating the Result Sets returned by the REST API (Here)
  • Downloading and Uploading Documents from Document Libraries Using The REST API
  • Developing WEB API Endpoints To package our units of work (Web API 2.X)
  • Calling the WEB API Endpoints from Sharepoint Workflows using GET HTTP Web Service Action

I will not be looking at calling the REST API directly from within Sharepoint Workflows.  How many blog posts will this be?  I am not sure; probably three posts in total.

Getting Our Feet Wet:  What an HTTP GET request to the Sharepoint REST API Looks like (10,000 foot view)

The new async HttpClient is a great advance over previous Microsoft Http clients.  You need to understand something about formal HTTP communications: specifically the roles of the different HTTP Verbs, Request Headers, HTTP Errors, and, for POST commands, Response Headers and the Response Body.  When we make AJAX calls in JavaScript all of our calls are passed through the machinery of the User Agent (Chrome, Firefox, etc.), and if you are using a helper library like JQuery, the helper library is doing a lot of the work for you.  When you switch to C# and the HttpClient there is no User Agent and the client itself is quite thin, so you have to do a lot of work to get a well engineered solution.  So let’s begin with some of the basics of an HTTP Get request.

There are four components of an HTTP Get request to be concerned about:

  • Target URL
    • What endpoint is the request headed for
  • Query String
    • What, if any, arguments are passed to the target
  • Accept Header
    • The format the results should be written back to the client
  • Authorization Header
    • What identity is the request being made under

For a Sharepoint REST API call we want the site URL for the target URL and append to this /_api/.  So we start with something like:

In a non-RESTful interface the query string takes the form of:


In a RESTful interface this becomes appended to the base URL itself as something like


Microsoft’s design for the REST API is a little eccentric and does not follow this pattern. A REST API call to get the meta-data for a list (not the list items) can look like this:


where: {myserver}/_api/ is the baseURL and web/lists/GetByTitle(‘MyList’) are the arguments and values used in the call.

A GET call for a List, or any Sharepoint object, returns a very complex object whose structure is (basically) identical for different object types, so once you understand the basic structure of the response it applies to many different objects.  We will not discuss that response object from a Sharepoint REST API Get request in this post.  Note that anywhere the REST API allows an argument to be enclosed in single quote marks we may use an argument value with embedded spaces without escaping the spaces.  Of concern to us at this point is the format of the response to the call. We have two choices: XML format or JSON format.  When we are in the browser world and are using JavaScript to process the response, typically JSON is the preferred format, but XML can also be processed using JavaScript and assorted helper libraries.  When we are using a language like C# we can work most effectively with XML output.  In my case I prefer to receive XML and process that response using LINQ for XML.
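As a sketch of what that LINQ for XML processing can look like (the namespace URIs are the standard Atom and OData data-services ones; the Title property is illustrative):

```csharp
using System.Linq;
using System.Xml.Linq;

// Parse an Atom response body (respString) and pull each item's Title property.
XNamespace atom = "http://www.w3.org/2005/Atom";
XNamespace d = "http://schemas.microsoft.com/ado/2007/08/dataservices";

XDocument xDoc = XDocument.Parse(respString);
var titles = from e in xDoc.Descendants(atom + "entry")
             from p in e.Descendants(d + "Title")
             select p.Value;
```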

Authentication for the Sharepoint REST API comes in three flavors:  OAuth token, Basic Authentication and (Windows) Integrated.  OAuth token only applies if you are calling from a Sharepoint APP (which means you are using JavaScript by definition) or from a Sharepoint Provider-Hosted High Trust App. (a very rare beast indeed and if you are using one you probably don’t need to be reading this post).  I will be concerned here with Basic and Integrated Authentication.

Now Lets Look at some Details Of A Sharepoint REST API call using C#

The HTTPClient object

Simple Anonymous HTTP GET call using HttpClient

System.Net.Http.HttpClient _Client = new System.Net.Http.HttpClient();
_Client.BaseAddress = new Uri(baseURL);
HttpResponseMessage resp = _Client.GetAsync(uriString).Result;
string respString = resp.Content.ReadAsStringAsync().Result;
if (resp.StatusCode != HttpStatusCode.OK)
    throw new ApplicationException("BAD");

Where baseURL is a string representation of the site we are calling into for the REST API:


uriString is a string representation of the actual REST API call we wish to make:


Since we are not setting an authorization method we are calling anonymously, and this call in its current form would return a failure HTTP Status; the response body will contain information in the REST API default return format.  The HTTP status is returned as a Framework enum, System.Net.HttpStatusCode.  In formal terms HTTP status codes are numeric values.  If you want to recover the actual value you can cast HttpStatusCode to an int.  Thus HttpStatusCode.OK cast as an integer is 200.  This call as written would return HttpStatusCode.Unauthorized (401).

We need to associate an identity with the HttpClient object in order to get the REST API to return data to us.  I will discuss two ways to do this (of the many options): BASIC Authorization and Windows Integrated Authorization.  Basic Authorization depends on adding an Authorization Header to the client with appropriate user information (typically domain, user name and password) in a very specific format.

Header Name:  Authorization

Argument: BASIC domain\username\password

where domain\username\password are encoded as a Base64 string:

static string _FormatBasicAuth(string domain, string user, string password)
{
    const string format0 = @"{0}\{1}";
    const string format1 = @"{0}:{1}";
    string userName = string.Format(format0, domain, user);
    return Convert.ToBase64String(Encoding.Default.GetBytes(
        string.Format(format1, userName, password)));
}

We then create the Authorization header on the HttpClient object:

System.Net.Http.HttpClient _Client = new System.Net.Http.HttpClient();

_Client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Basic", _FormatBasicAuth(Domain, User, Password));

Note that the password is NOT encrypted so BASIC Authorization should only be used over a secure transport layer like SSL.

An alternative is to use Microsoft’s Integrated Authorization (also known as challenge and response).  This takes network credentials (either of the active thread or created from initialization values) and allows the server to confirm the identity and validity of the caller without requiring a secure transport layer (the password itself is never passed over the wire).  Here is an example using Network Credentials created on the fly:

_Client = new System.Net.Http.HttpClient(new HttpClientHandler()
{
    Credentials = new NetworkCredential(User, Password, Domain)
});

Alternatively the NetworkCredential object can be created from the running thread of your C# program (you do not need to supply the password  and other parts of the credentials explicitly in this case).  When set up this way you do not use an Authentication Header.  The Windows challenge and response pattern (Http Status 401) are controlled on the client side by the HttpClientHandler.
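A sketch of that thread-credentials variant, using the handler’s UseDefaultCredentials switch rather than an explicit NetworkCredential:

```csharp
using System.Net.Http;

// Run the client under the identity of the current Windows thread;
// no password or Authorization header is supplied in code.
var client = new HttpClient(new HttpClientHandler()
{
    UseDefaultCredentials = true
});
```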

Assuming your REST API call is well formed and your credentials are valid, the response from the server will contain a representation of a Sharepoint entity (or entities).  I will discuss the structure of this data in a subsequent post.  You can control how the response is formatted by adding an Accept Header to the HttpClient object.  For the REST API you have two choices: XML or JSON.  If you do not supply a header the default format is XML.  For the REST API both the XML and JSON choices take a very specific format in the Accept header.  XML requires an Accept Header (if supplied) with an argument of:


while JSON requires an argument of:


The Accept Header for XML can be added to the HTTPClient as:

  _Client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue(@"application/atom+xml"));

Currently there is a bug in the HttpClient which prevents adding an Accept Header for REST API formatted JSON.  For JSON you need to use a different format:


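The work-around (covered in more detail in the companion post on Content and Accept headers) is to add the header as a raw string:

```csharp
// Bypasses MediaTypeWithQualityHeaderValue validation, which rejects
// the "odata=verbose" parameter required by the Sharepoint REST API.
_Client.DefaultRequestHeaders.Add("Accept", "application/json;odata=verbose");
```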
Normally I use the XML format of the response for C# programs (although using JSON.NET you could process the return in JSON format).  JavaScript coders will normally use the JSON format (although they could process the XML format).

My typical HTTP Get calls looks something like this:

System.Net.Http.HttpClient _Client = new System.Net.Http.HttpClient();
_Client.BaseAddress = new Uri(baseURL);

_Client.DefaultRequestHeaders.Accept.Add(
    new MediaTypeWithQualityHeaderValue(@"application/atom+xml"));

_Client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Basic", _FormatBasicAuth(Domain, User, Password));

HttpResponseMessage resp = _Client.GetAsync(uriString).Result;
string respString = resp.Content.ReadAsStringAsync().Result;

if (resp.StatusCode == HttpStatusCode.OK)
{
    //process the successful response here
}
else
{
    switch (resp.StatusCode)
    {
        //process Errors here

        //Note: if the error is low level (you never reached the REST API processor) the
        //respString is a simple string message;
        //if the error is returned by the REST API processor the format is XML
    }
}

OK, that is enough to get started.  In the next post we will turn to more complex REST API GET calls and how to process the XML entities returned from successful Sharepoint REST API calls.

Sharepoint 2013 REST API: The C# Connection: Part 1 Using System.Net.Http.HttpClient

Sharepoint 2013 REST API: The C# Connection: Part 2 Query List or Item and Decoding The Meta-Data

Sharepoint 2013 REST API: The C# Connection: Part 3 Working With List Item Data

Sharepoint 2013 REST API: The C# Connection: Part 4 Document Libraries, Folders And Files
Sharepoint 2013 REST API: The C# Connection: Part 5 REST API More on Folders and Other Odds & Ends

Sharepoint 2013, The REST API and The C# HTTPClient   4 comments

This is a short post to highlight two issues with how the C# HttpClient is implemented in Framework 4.0 and how to work around them.  The issues are the Content Type Header and the Accept Header for JSON data.  This post is NOT a full discussion of using the HttpClient with the Sharepoint 2013 REST API; it is limited solely to properly coding the Content Type Header and the Accept Header for this interface.

The new asynchronous web client System.Net.Http.HttpClient is a joy to work with.  Recently I was tasked with interfacing with the Sharepoint 2013 REST API using C# (don’t ask why). Most example code for the REST interface is written using JQuery in the browser.  Since we needed to call the API from within a C# program we attempted to use the HttpClient for this purpose.  Sharepoint data in the REST interface is in OData format.  To use the API we need to declare an Accept Header for the format we want the OData formatted in by Sharepoint.  If we are using POST, PUT or DELETE with the API we need to declare the Content Type Header to describe the format of the data we are sending to the Sharepoint REST API.  Our choices are:

Data Format   Header Value
XML           application/atom+xml
JSON          application/json;odata=verbose

Setting up the HTTPClient, for all verbs looks like this:

1) System.Net.Http.HttpClient _Client = new System.Net.Http.HttpClient();
2) _Client.BaseAddress = new Uri(baseURL);

// When the format we are using for incoming data is XML we add this line:

3) _Client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/atom+xml"));

When the format we are using for incoming data is JSON, if we replace “application/atom+xml” in line #3 with the required “application/json;odata=verbose”, line #3 will throw an Exception.  The work-around is to replace line #3 with:

_Client.DefaultRequestHeaders.Add("Accept", "application/json;odata=verbose");

//and off we go

4) HttpResponseMessage resp = _Client.GetAsync(uri).Result;
5) string respString = resp.Content.ReadAsStringAsync().Result;

When we are using the HTTP verbs POST, PUT or DELETE we need to send a Request Body with the data we want to send to the server, and set a Content Type Header to tell the server what format the data in the body is in.  The HttpClient holds the request body in its own object (System.Net.Http.HttpContent):

1) string myData = "your data (XML or JSON) goes here";

2) System.Net.Http.HttpContent reqContent = new StringContent(myData);

// We set the Content Type header on this object, NOT on the HttpClient object, as:

3) reqContent.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/atom+xml");

When the format for the HttpContent is JSON, if we replace “application/atom+xml” with the required “application/json;odata=verbose”, line #3 will throw an Exception.  The work-around is to replace line #3 with:

reqContent.Headers.ContentType = System.Net.Http.Headers.MediaTypeHeaderValue.Parse("application/json;odata=verbose");

//and off we go

4) var resp = _ClientAddItem.PostAsync(addItemURL, reqContent).Result;

5) string respStringOut = resp.Content.ReadAsStringAsync().Result;

Strange but true.  You are welcome.

Photo Processing on the Nexus 7   1 comment

We ARE excited to be in San Francisco for SPTechCon.  If nothing else, road trips give us an opportunity to play with some of our devices.  The last time I was in San Francisco I was using a Nexus 7 and Adobe Photoshop Express to post-process photos.  The results can be found here and here.  Six months later finds me back in San Francisco, still using the Nexus 7 as a platform but now using Adobe’s PS Touch and the Snapspeed app for post-processing.  My process to get photos from the Nikon D50 to the Nexus has been simplified but is still unacceptable.

Getting The photos to the Nexus

Moving photos from the Nikon D50 to the Nexus’s file system is a pain. I could simplify this process by

  • buying a new camera and using the Raw Vision App and a cable; or
  • Using a Bluetooth enabled SD card; or
  • Upgrading my Nexus to something with a front facing camera.

I love my Nikon and am too cheap to pay for the Bluetooth SD card, so I went the hardware route. Here is the setup:

Nexus Media Installer App (~$3)

Nexus compatible To Go cable (~$3)

A USB SD card reader (~$2)

Step One: The Physical World

SD to card reader to To Go Cable to Nexus port  


Our Topology: Cheapness Electronics, Cambridge Mass

Step Two: Copy the files from the card to your system

When the card is inserted into “the cable system” the Nexus Media Installer App will start and attempt to read the SD card directory.  I needed to be sure not to be running my file manager (File Manager HD App – free) at the same time.  You can preview all photos on the card.  No photo objects on the card are available as documents. Select the objects you want to copy and hit the save icon.  This will start a background job, one for each object, to transfer the photos to your file system into a “pictures” folder.  Multiplexed and asynchronous request fulfillment, oh my. Nice software.  This folder is visible to Gallery, PS Touch and Snapspeed.  Other apps may not make this folder accessible.  But you can always move files around with a File Manager like File Manager HD.

Post Processing the Photos

All of the software I have seen on the Nexus allows only JPEG processing.  Even Adobe PS Touch supports only JPEG. There is no RAW file type support available on the Nexus platform.  I looked at post-processing apps from three broad groups:

Editors with Theme filters only

Editors with adjustable photo enhancement filters; and 

Editors with spot adjustment filters and other features.

In my last blog post on this topic I was using Photoshop Express, a nice free editor with adjustable photo enhancement.  I liked the results but the app can only process files in the camera’s DCIM folder (which requires us to do file copies of imported photos twice!)

San Francisco Bay – Photoshop Express

Feeling plush, I spent $2.99 and bought the next-step Photoshop app: PS Touch. This provides a rich set of editing tools, layers (if you use Photoshop on a laptop you know what that means), a nice history stack and more.  The results are fine, but the interface is difficult to work with; some of that may be because its UI is exposed using primarily Surface Touch / Windows 8 conventions. This would flow much better on a ten-inch, preferably Windows, device (I am working on a 7-inch Nexus, not a 10-inch anything).  It’s not a lot of fun to use and it may be overkill unless you do photo post-processing and have no laptop or desktop version of Photoshop to use.  But if your Surface or 10-inch iPad is your primary device this is a good choice.  Not a fun, or quick, or easy choice.

SPTechCon Speaker Andrew Connel – PS Touch

A nice free alternative to Photoshop Express is the very simple but effective Snapspeed.  This is a lot of fun to use and combines theme filters and photo adjustment filters.  I did a bunch of post processing with this app and was very happy with the results. Try it.

Art Gallery – Snapspeed

A Nexus Gallery (PS Touch and Snapspeed Post Processing On the Nexus 7)

(This last one has been processed using Photoshop C5 on a laptop)

Sharepoint 2013: Get Me Out Of Here (Part 2): Cross Domain AJAX Calls in Sharepoint Apps   7 comments

The Bottom Line

You can find Part I here.

We are using Sharepoint 2013 RTM. We will be looking at Sharepoint Hosted Apps in this post and will deal with provider hosted apps in a later blog post. Our bottom line on Development Environments: we are using a developer ‘environment in the cloud’ from CloudShare. Our specific goal is to use the new, and sparsely documented, SP.WebProxy JavaScript function to do cross-domain calls into REST Endpoints which are external to Sharepoint. We will not be discussing the esoteric hidden IFrame solution discussed in Solving cross-domain problems in apps for SharePoint.

Forget about WCF and using Custom Restful Web Services like we used in Sharepoint 2010.  Although this technique did work to add proxy access to external data sources, the process was poorly documented, highly technical and fragile.  Our basic need is to use Sharepoint 2013 and to access data from other servers which we control and which hold our data.  For important reasons these do not fit into the B?? model and must be accessed either through proxy methods (see Part I of this series) or with cross-domain AJAX calls from the client.  When we transitioned to Sharepoint 2013 after our brief marriage to Sharepoint 2010 we were very excited about the whole App concept, or thing, or whatever it is.  We decided to write some, just as soon as we could figure out what they are, how to write them and how to set up a development environment.  Our bottom line on Sharepoint Apps: they come in three flavors: auto-hosted, Sharepoint Hosted and Provider Hosted.

Create a Development Environment

Wow.  Look at those specifications!  Eight gig plus of RAM, 64-bit quad core processor, Visual Studio 2013. Page after page of unclear and contradictory documentation, most of it for the Preview, not RTM.  Where did I put that old propeller beanie I used to wear in the 1990’s?  OK, looks like we might be able to try this with our desktop machines (they are monster machines), but just in case we also start looking at VMware to build these suckers on. The basic MSDN documents:

Well, this looks really hard and we may need to set this up more than once (it looks like we could trash the system more than once getting things right).  This clearly is not a simple, focused product like SQL Server.  So we started looking for more help and setup tips.  I trashed my physical machine a couple of times, and rebuilds are taking a long time.  We shift to Hyper-V as MS suggests; we still can’t quite get a system set up and still have to deal with the corporate AD.  Well, at least re-builds aren’t taking as long.  One of my co-workers takes a (physical) class.  When he returns he is pretty excited.  They used a cloud based configuration of Sharepoint 2013 (not Sharepoint Online or Office 365 or whatever the folks at Redmond are calling the Azure based system today) and he was pretty happy.  We shift to using CloudShare.  They have a three server template with fully licensed Sharepoint 2013, SQL Server and a separate AD server; plus all the Microsoft software you can shake a stick at (Visual Studio, Office 2013, etc.) at no extra cost.  Nice, not that expensive for a developer playpen; I can spin up an instance in about 4 minutes and can take snapshots along the way during any radical reconfigurations, so I can drop back to a stable version in about 15 minutes.  Access is via browser, and for a development environment it all runs quickly.  We are happy again.

Get The APP Documentation and Sample Code

Hmm again.  A lot of the stuff on MSDN and Technet is dated July, 2012 and is based on the preview drop and/or on the Azure based version of Sharepoint.  Not good.  Hey, how about Pluralsight?  They seem to be jacked pretty directly into Microsoft.  They have a whole bunch of Sharepoint courses, tons of Sharepoint 2010 courses; these must be helpful, right?  Sharepoint 2013 can’t be that big of a jump from 2010, can it?  And where are those Sharepoint 2013 app courses?  Oh ok, here they are, all dated 2012-11-05:

Twelve hours of Sharepoint 2013… stuff.  Part 6 is over three hours long.  Ok, let’s do it.  The installation courses are ok but they seem to leave a lot of stuff out (or perhaps we fell asleep at some point).  The blogs on Sharepoint App development often have the same problem: based on the preview or on the cloud based Azure version of Sharepoint, and often (this really gets me) just re-writes of the same baseline documents from MSDN.  Here are some basic papers we used (don’t blame us for the poor capitalization of the titles):

Create A Sharepoint Hosted App

The MSDN paper How to: Create a basic SharePoint-hosted app should be enough to get you up and running on a basic, low functionality Sharepoint hosted app.  Note this correction, however.  Replace these lines of code from the paper:

function sharePointReady() {
    ctx = new SP.ClientContext.get_current();
    $("#getListCount").click(function (event) {

with:

$(document).ready(function () {
    ctx = new SP.ClientContext.get_current();
    $("#getListCount").click(function (event) {

Once you have a Sharepoint hosted app which will do ANYTHING (this may take a while), start a new project based on these papers:

Cross Domain AJAX Calls

I don’t care what lies you have been told before: you can make cross domain AJAX calls from within a Sharepoint hosted app, with JavaScript, and they all involve calling, directly or indirectly, SP.WebProxy.invoke.  In the romantic technical documentation of MSDN this is defined as:

[ScriptTypeAttribute("SP.WebProxy", ServerTypeId = "{656a77c4-1634-415c-bf85-c6c0cb286e0e}")]
public static class WebProxy

WebProxy has a single method:

[RemoteAttribute]
public static ClientResult<WebResponseInfo> Invoke(ClientRuntimeContext context, WebRequestInfo requestInfo)

WebRequestInfo is defined as:

[ScriptTypeAttribute("SP.WebRequestInfo", ValueObject = true, ServerTypeId = "{71aa825d-bc12-422d-a177-d2e63fe68cd9}")]
public class WebRequestInfo : ClientValueObject

and has a plethora of properties and methods.


We use these objects and methods by including the Sharepoint JavaScript libraries (SP.Runtime.js) in our Sharepoint APP.

Call Cross Domain Using SP.WebProxy.invoke

Setting up the call in JavaScript:

var context = SP.ClientContext.get_current();
var request = new SP.WebRequestInfo();
//Set the Url, HTTP Method and Accept Headers
//We could also include OAuth information and parameters.
request.set_url("http://…WebApi005/api/Values");
request.set_method("GET");
request.set_headers({ "Accept": "application/json" });
var response = SP.WebProxy.invoke(context, request);
context.executeQueryAsync(successHandler, errorHandler);

HTTP Request:


Accept: */*

X-Requested-With: XMLHttpRequest

Content-Type: text/xml

X-RequestDigest: 0x1CB7B3E3FBBD705515A23DB5DEEDDF06FF5659232C5E1891205D2C10E5F772C13DE15FF53CC85FF76AA6552B4E5DA0C845C48F6C64DFFD825159A686B2E3561F,11 Feb 2013 18:13:54 -0000


Accept-Language: en-US

Accept-Encoding: gzip, deflate

User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)


Content-Length: 719

Connection: Keep-Alive

Pragma: no-cache

Cookie: WSS_FullScreenMode=false

POST Body:



This is an HTTP POST to _vti_bin/client.svc/ProcessQuery.  Note that the checksum in X-RequestDigest and the Accept and Content-Type headers refer to the call to ProcessQuery and not to our ultimate endpoint (…WebAPI005/api/values).  The endpoint url and headers were packaged in WebRequestInfo and appear in the body of the POST in XML format.  In this simple call we are only defining a Method (GET), an Accept type (application/json) and a Url (http://…WebApi005/api/Values).

A successful HTTP response might look like:

HTTP/1.1 200 OK

Cache-Control: private

Content-Type: application/json; charset=utf-8

Vary: Accept-Encoding

X-SharePointHealthScore: 0

SPClientServiceRequestDuration: 1753

SPRequestGuid: 36cafd9b-c9ea-1071-02d5-3352073a4f7c

request-id: 36cafd9b-c9ea-1071-02d5-3352073a4f7c

X-RequestDigest: 0x5ADEBB9D5BE8268193CA8B29902E1C16B1C11DC23CBD94B5FBD6E6A659D34D29765539025B8B34DFC3719279ED07B3306083414D88611E1D09000B30DACFDFB0,11 Feb 2013 18:13:55 -0000


X-AspNet-Version: 4.0.30319

X-Powered-By: ASP.NET

X-Content-Type-Options: nosniff

X-MS-InvokeApp: 1; RequireReadOnly


Date: Mon, 11 Feb 2013 18:13:56 GMT

Content-Length: 597

The body for our Restful Endpoint would look like:


The content-type response header specifies application/json and our data is also JSON.  Note also that the content-type of the Body element is defined by the JSON response element ResponseBody[2].Headers.Content-Type.  Note that the format of the response body is dependent upon the sender.  A response for a Sharepoint list will have a different format than the WebAPI sender we are using here.  See How to: Query a remote service using the web proxy in SharePoint 2013 for an example of processing Sharepoint output.
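As a concrete sketch of that shape, here is a small helper of ours (the function name is hypothetical; it assumes the parsed ProcessQuery result is an array whose element [2] carries the proxied response's StatusCode, Headers and Body, as described above):

```javascript
// Hypothetical helper: pull the cross-domain response out of the parsed
// ProcessQuery result. Assumes element [2] of the array holds the
// proxied response (StatusCode, Headers, Body) as described above.
function extractProxyResponse(responseBody) {
    var info = responseBody[2];
    var contentType = info.Headers["Content-Type"];
    return {
        statusCode: info.StatusCode,
        contentType: contentType,
        // Body arrives as a string; parse it when the endpoint sent JSON
        data: contentType.indexOf("application/json") === 0
            ? JSON.parse(info.Body)
            : info.Body
    };
}
```

With our WebAPI sender, extractProxyResponse(ResponseBody).data would then hold the deserialized values; for a Sharepoint list sender the Body format, and hence the parsing, would differ.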

Success and Error Routines

function successHandler() {
    if (response.get_statusCode() == 200) {
        var ResponseBody;
        var thing1;
        var thing2;
        ResponseBody = JSON.parse(response.get_body());
        //Do Your Thing with each value
    }
    else {
        var httpCode;
        var httpText;
        httpCode = response.get_statusCode();
        httpText = response.get_body();
        //Do your thing with the error response
    }
}

function errorHandler() {
    var httpText2 = response.get_body();
    //Do your thing with the error response
}
Call Cross Domain Using JQuery AJAX to Call SP.WebProxy.invoke

Here is the same call, but using a jQuery AJAX REST call to invoke SP.WebProxy:

var url = "";

$.ajax({
    url: "../_api/SP.WebProxy.invoke",
    type: "POST",
    data: JSON.stringify({
        "requestInfo": {
            "__metadata": { "type": "SP.WebRequestInfo" },
            "Url": url,
            "Method": "GET",
            "Headers": {
                "results": [{
                    "__metadata": { "type": "SP.KeyValue" },
                    "Key": "Accept",
                    "Value": "application/json;odata=verbose",
                    "ValueType": "Edm.String"
                }]
            }
        }
    }),
    headers: {
        "Accept": "application/json;odata=verbose",
        "Content-Type": "application/json;odata=verbose",
        "X-RequestDigest": $("#__REQUESTDIGEST").val()
    },
    success: successHandler,
    error: errorHandler
});
Note the X-RequestDigest header setup, which gets the digest value directly from the ASPX form element __REQUESTDIGEST; this header is required.
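A minimal sketch (the helper name is ours) for assembling those required headers from a digest value; the digest normally comes from the hidden form field, and a fresh one can be fetched by POSTing to ../_api/contextinfo and reading d.GetContextWebInformation.FormDigestValue from the JSON response:

```javascript
// Build the header object for the ../_api/SP.WebProxy.invoke REST call.
// The digest is normally read from $("#__REQUESTDIGEST").val(), or
// refreshed via a POST to ../_api/contextinfo.
function buildProxyHeaders(digest) {
    return {
        "Accept": "application/json;odata=verbose",
        "Content-Type": "application/json;odata=verbose",
        "X-RequestDigest": digest
    };
}
```

The $.ajax call above could then take headers: buildProxyHeaders($("#__REQUESTDIGEST").val()).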



Accept: application/json;odata=verbose

Content-Type: application/json;odata=verbose

X-RequestDigest: 0x622ED77A91DC009DACC720FEEA25768E3208E3C9FF1F6B64F7094C6461BC59B5675BEF0267B4AFD0B7F854484484706EBB8DA42BDB0E91AE536B2BC57C478824,11 Feb 2013 19:22:00 -0000

X-Requested-With: XMLHttpRequest


Accept-Language: en-US

Accept-Encoding: gzip, deflate

User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)


Content-Length: 292

Connection: Keep-Alive

Pragma: no-cache

Cookie: WSS_FullScreenMode=false


Note: the authentication is handled on the fly with NTLM negotiation.

JSON Content


Note here that the call body is in JSON, since this is a typical JSON AJAX call to the Sharepoint REST subsystem; it is used to pass the url, headers and any optional parameter values to the target Url.
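To avoid hand-building that nested structure each time, the requestInfo payload can come from a small helper (a sketch of ours, using only the property names shown in the call above):

```javascript
// Assemble the stringified requestInfo body for ../_api/SP.WebProxy.invoke,
// mirroring the JSON structure shown in the call above. A single Accept
// header is passed; further SP.KeyValue entries could be pushed onto
// the results array.
function buildRequestInfo(url, method, acceptValue) {
    return JSON.stringify({
        "requestInfo": {
            "__metadata": { "type": "SP.WebRequestInfo" },
            "Url": url,
            "Method": method,
            "Headers": {
                "results": [{
                    "__metadata": { "type": "SP.KeyValue" },
                    "Key": "Accept",
                    "Value": acceptValue,
                    "ValueType": "Edm.String"
                }]
            }
        }
    });
}
```

The data option in the $.ajax call would then be buildRequestInfo(url, "GET", "application/json;odata=verbose").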

HTTP Response

HTTP/1.1 200 OK

Cache-Control: private, max-age=0

Content-Type: application/json;odata=verbose;charset=utf-8

Expires: Sun, 27 Jan 2013 19:22:33 GMT

Last-Modified: Mon, 11 Feb 2013 19:22:33 GMT

Vary: Accept-Encoding

X-SharePointHealthScore: 0

SPClientServiceRequestDuration: 15009

SPRequestGuid: 24cefd9b-e96c-1071-02d5-32abbeb2cd48

request-id: 24cefd9b-e96c-1071-02d5-32abbeb2cd48

X-RequestDigest: 0xF7EEE4ED6914FDAD63C7C097117C21AEF182741021B51616C22A96D1CDD650A57C79D02D4C81F31B0EB2E956E6EC7254356CAB341E94F1B155F4E1C5A6AD1866,11 Feb 2013 19:22:33 -0000


Persistent-Auth: true

X-AspNet-Version: 4.0.30319

X-Powered-By: ASP.NET

X-Content-Type-Options: nosniff

X-MS-InvokeApp: 1; RequireReadOnly


Date: Mon, 11 Feb 2013 19:22:48 GMT

Content-Length: 1395

Response Body:


Note that here, when called this way, our response data is found as:



Strange but true.

Note also that no matter how you call the cross-domain endpoint you are getting back two status codes: one status code from the SharePoint call itself, and one status code, nested in the JSON response, from the cross-domain endpoint.  I am still experimenting with this.
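In other words, a robust handler has to check both codes. A minimal sketch (names are ours; locating the inner code at d.Invoke.StatusCode is our assumption about the verbose OData response shape, worth verifying against your own traces):

```javascript
// Both status codes matter: the outer HTTP status of the SharePoint
// call, and the inner status of the cross-domain endpoint, which is
// nested in the JSON response (assumed here to sit at d.Invoke).
function checkProxyResult(outerStatus, parsedResponse) {
    var innerStatus = parsedResponse.d.Invoke.StatusCode;
    return {
        sharePointOk: outerStatus === 200,               // ProcessQuery itself succeeded
        endpointOk: innerStatus >= 200 && innerStatus < 300, // remote endpoint succeeded
        endpointStatus: innerStatus
    };
}
```

Note that the outer call can return 200 even when the remote endpoint failed, so checking only the jQuery success path is not enough.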

Next Steps

My next step with SP.WebProxy will be to include security information in the call and to process this information on the REST endpoint.  See You Then.

