Archive for the ‘OData’ Category
I first discussed retrieving List Item data here: Sharepoint 2013 REST API: The C# Connection: Part 3 Working With List Item Data. The simple REST API call (…/items) is roughly the equivalent of:
select * from Table1
in a SQL database. However Sharepoint, although resting on SQL Server for everything, will only return a limited number of rows from the underlying list. The maximum number of items (rows) returned in a single call is governed by a system constraint. On our out-of-the-box Sharepoint 2013 installation the limit was 100 rows! Supposedly Sharepoint Server Admins can change the maximum value, but they can not eliminate the constraint. The naïve coder might think this can be gotten around using the $skip parameter in the OData extensions to the Sharepoint REST API. Alas, $skip is not implemented; instead M$ implemented its own undocumented skip parameters. Not to worry, you don't need to provide your own parameter values.
The initial call returns X rows of the table (in primary key index order), where X is equal to the page constraint discussed above. If there are additional rows, the metadata in the returned data structure will contain a "next" hypermedia link whose value is a fully qualified Sharepoint REST API call for the next X rows (i.e. the next page).
Google Barge Sinking Slowly in San Francisco Harbor after being infected with the Sharepoint 2013 Virus
We need then to set up a typical paging object to read items by page until all items are read. I will be leveraging the code patterns given in the series referenced above. The page calls (…/items) return results in XML format, which is cast as an XDocument type. Then we can probe for a "next" link as:
static string _GetLinkToNextPage(XDocument xDoc)
{
    const string matchString = "/Web/Lists";
    const string NEXT = "next";
    var links = from g in xDoc.Descendants().Elements(CSPNamespace.baseNS + "link") select g;
    foreach (XElement link in links)
    {
        string rel = string.Empty;
        string href = string.Empty;
        foreach (XAttribute att in link.Attributes())
        {
            if (att.Name.LocalName == "rel")
                rel = att.Value;
            if (att.Name.LocalName == "href")
                href = att.Value;
        }
        if (rel == NEXT)
            //return just the relative path
            return href.Substring(href.IndexOf(matchString));
    }
    //return null if no link is found
    return null;
}
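If you want to exercise the rel="next" probe outside of Sharepoint, the same search can be run against a canned Atom fragment. This is a minimal, stand-alone sketch; the namespace is the standard Atom one, and the URLs are made up for illustration:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

class NextLinkDemo
{
    static readonly XNamespace Atom = "http://www.w3.org/2005/Atom";

    // Probe every <link> element for rel="next", as _GetLinkToNextPage does,
    // and hand back its href (or null when we are on the last page).
    public static string GetNextLink(XDocument xDoc)
    {
        return xDoc.Descendants(Atom + "link")
                   .Where(l => (string)l.Attribute("rel") == "next")
                   .Select(l => (string)l.Attribute("href"))
                   .FirstOrDefault();
    }

    static void Main()
    {
        // Canned Atom feed fragment; the href values are illustrative only.
        string feed =
            "<feed xmlns='http://www.w3.org/2005/Atom'>" +
            "  <link rel='self' href='http://server/_api/Web/Lists/Items' />" +
            "  <link rel='next' href='http://server/_api/Web/Lists/Items?page=2' />" +
            "</feed>";
        Console.WriteLine(GetNextLink(XDocument.Parse(feed)));
    }
}
```

When no rel="next" link is present, FirstOrDefault() yields null, which is exactly the loop-termination signal the paging code below relies on.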
The complete object template looks like:
public static class CItemPage
{
    public static List<CEntry> _Worker(HttpClient client, string next)
    public static List<CEntry> Exec(HttpClient client, CEntry listMetaData)
    static void _Accumulate(XDocument items, ref List<CEntry> allItems)
    static string _GetLinkToNextPage(XDocument xDoc)
    static XDocument _GetListMetaData(HttpClient client)
}
The entry point for the paging activity is CItemPage.Exec, which turns around and calls _Worker, where we loop through the pages and (for my own purposes) accumulate the items from all pages in a single List<CEntry> using my method _Accumulate.
public static List<CEntry> _Worker(HttpClient client, string next)
{
    List<CEntry> allItems = new List<CEntry>();
    XDocument items = null;
    try
    {
        while (next != null)
        {
            items = _GetItems(client, next);
            next = _GetLinkToNextPage(items);
            _Accumulate(items, ref allItems);
        }
    }
    catch (Exception x)
    {
        var b = x.Message;
    }
    return allItems;
}
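_Worker leans on a _GetItems helper that is not shown in this post. A minimal sketch of what such a helper might look like follows; the method name, the blocking .Result calls, and the error handling are my assumptions here, following the HttpClient patterns from the series, not the original author's code:

```csharp
// Hypothetical helper: issues the GET for one page of list items and casts
// the Atom XML reply to an XDocument for the link probe above.
static XDocument _GetItems(HttpClient client, string next)
{
    // "next" is either the initial .../items call or the next-page
    // link probed out of the previous page's metadata.
    HttpResponseMessage resp = client.GetAsync(next).Result;
    resp.EnsureSuccessStatusCode();
    string respString = resp.Content.ReadAsStringAsync().Result;
    return XDocument.Parse(respString);
}
```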
Easy and fun. Should there be an easier, faster way? Yes.
This madness must stop.
This is a short post to highlight two issues with how the C# HttpClient is implemented in Framework 4.0, and how to work around them. The issues are with the Content Type Header and the Accept Header for JSON data. This post is NOT a full discussion of using the HttpClient with the Sharepoint 2013 REST API; it is limited solely to properly coding the Content Type Header and the Accept Header for this interface.
The new asynchronous web client System.Net.Http.HttpClient is a joy to work with. Recently I was tasked with interfacing with the Sharepoint 2013 REST API using C# (don't ask why). Most example code for the REST interface is written using JQUERY in the browser. Since we needed to call the API from within a C# program, we attempted to use the HttpClient for this purpose. Sharepoint data in the REST interface is in OData format. To use the API we need to declare an Accept Header for the format in which we want Sharepoint to return the OData. If we are using POST, PUT or DELETE with the API, we also need to declare a Content Type Header to describe the format of the data we are sending to the Sharepoint REST API. Our choices are "application/atom+xml" (XML) or "application/json;odata=verbose" (JSON).
Setting up the HttpClient, for all verbs, looks like this:
1) System.Net.Http.HttpClient _Client = new System.Net.Http.HttpClient();
2) _Client.BaseAddress = new Uri(baseURL);
// When the format we are using for incoming data is XML we add this line:
3) _Client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/atom+xml"));
When the format we are using for incoming data is JSON, if we replace, in line #3, "application/atom+xml" with the required "application/json;odata=verbose", line #3 will throw an Exception. The work around is to replace line #3 with:
3) _Client.DefaultRequestHeaders.Accept.Add(MediaTypeWithQualityHeaderValue.Parse("application/json;odata=verbose"));
//and off we go
4) HttpResponseMessage resp = _Client.GetAsync(uri).Result;
5) string respString = resp.Content.ReadAsStringAsync().Result;
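Why does line #3 throw for the JSON media type? The constructor validates its argument as a bare type/subtype token and rejects the trailing ;odata=verbose parameter, while the Parse factory accepts parameterized values and splits them out. A small stand-alone sketch, no Sharepoint required:

```csharp
using System;
using System.Net.Http.Headers;

class AcceptHeaderDemo
{
    // Returns true when the MediaTypeWithQualityHeaderValue constructor
    // rejects the given string with a FormatException.
    public static bool CtorThrows(string mediaType)
    {
        try { new MediaTypeWithQualityHeaderValue(mediaType); return false; }
        catch (FormatException) { return true; }
    }

    static void Main()
    {
        // A bare type/subtype token is fine...
        Console.WriteLine(CtorThrows("application/atom+xml"));           // False
        // ...but a value carrying a parameter blows up in the constructor...
        Console.WriteLine(CtorThrows("application/json;odata=verbose")); // True
        // ...while Parse splits the parameter out properly.
        var accept = MediaTypeWithQualityHeaderValue.Parse("application/json;odata=verbose");
        Console.WriteLine(accept.MediaType);                             // application/json
    }
}
```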
When we are using the HTTP verbs POST, PUT or DELETE, we need to send a Request Body with the data we want to send to the server, and set a Content Type Header to tell the server what format the data in the body is in. The HttpClient holds the request body in its own object (System.Net.Http.HttpContent):
1) string myData = "your data XML or JSON goes here";
2) System.Net.Http.HttpContent reqContent = new StringContent(myData);
//We set the Content Type header on this object, NOT on the HttpClient object, as:
3) reqContent.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/atom+xml");
When the format for the HttpContent is JSON, if we replace "application/atom+xml" with the required "application/json;odata=verbose", line #3 will throw an Exception. The work around is to replace line #3 with:
3) reqContent.Headers.ContentType = System.Net.Http.Headers.MediaTypeHeaderValue.Parse("application/json;odata=verbose");
//and off we go
4) var resp = _ClientAddItem.PostAsync(addItemURL, reqContent).Result;
5) string respStringOut = resp.Content.ReadAsStringAsync().Result;
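The Content Type side of the workaround can be wrapped in a tiny helper and verified without ever touching a server. The helper name and the placeholder body string are mine, not part of the original post:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;

class ContentTypeDemo
{
    // Builds the request body and stamps the verbose-JSON Content Type on it.
    // Parse, not the constructor, because of the ;odata=verbose parameter.
    public static HttpContent MakeJsonContent(string body)
    {
        var content = new StringContent(body);
        content.Headers.ContentType =
            MediaTypeHeaderValue.Parse("application/json;odata=verbose");
        return content;
    }

    static void Main()
    {
        var reqContent = MakeJsonContent("{ }");
        Console.WriteLine(reqContent.Headers.ContentType);
    }
}
```

The resulting HttpContent can be handed straight to PostAsync as in line #4 above.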
Strange but true. You are welcome.
QCON, the software conference hosted by InfoQ, will be meeting in San Francisco next month from November 14 through November 16. I attended last year's conference and am looking forward to attending again this year.
I am also a regular attendee of the Microsoft MIX Conferences. Even though both of these conferences focus on Web development, the contrast between them could not be greater. First off, MIX is much larger and is of course devoted to all things Microsoft. MVC 3 was the big push this year at MIX. This is a very strong development approach for Microsoft, doing its 'embrace and extend' dance which it does so well; in this case the source is Ruby on Rails and its approach to standard MVC development. MVC 3 (and Microsoft) approaches the Web from the perspective of the corporate developer of (basically) client server architecture. But it is not a bad or evil effort. Indeed the improved and streamlined HTTP pipeline used by IIS for MVC is fast, the tools and development environment are well thought out, and, once you drop down a level, the low level support for REST(ful) approaches, JSON and HTML templates is impressive. In addition to JSON and JQUERY, Microsoft is also a strong supporter of the emergent OData standard. I recommend MIX (and the Channel 9 videos of the conference) to anyone working with or considering Microsoft development tools. I always learn new things and gain important information on how to advance the web at MIX. You can read more details on the sessions here.
In terms of big metal companies, MVC 3 and Framework 4.0 are much stronger than anything Java EE has to offer. The biggest problem Microsoft has is that it can not seem to ship its HTML5 compatible browser, and so its development systems do not optimize for (or even in some cases take advantage of) the strongest and newest features of HTML5. In addition, try as they will, two things Microsoft will never be is cutting edge or free. Over in the LAMP and Rails and NOSQL world, QCon offers a look at how the world of the web will be (or at least could be) if any of the independent developers who make up most of QCON's speakers and attendees are able to hit the mark with the next big thing. It's always a mixed bag of nuts at QCon, a nice mixture of visionaries and hucksters, Rastafarians and Agile advocates. I like this conference because, in addition to providing me with some alternative voices to Google and Microsoft and Oracle, it also forces me both to re-evaluate the way I am doing things and to think independently about HOW we can do web development. And San Francisco is a much better venue than La$ Wage$. This is a hacker fest with the emphasis on cool technique, not on how to create the next big thing (product or Brand). This is NOT Web 2.0 Summit, which is about venture capitalism defining the web. Alexia Tsotsis will not be covering this.
By the way, if you are not reading InfoQ on the web regularly you ARE missing out.
If I had a gun for every ace I have drawn,
I could arm a town the size of Abilene.
And you know I’m only in it for the gold.
All that I am asking for is ten gold dollars
And I could pay you back with one good hand
You can look around about the wide world over
And you’ll never find another honest man.
Loser – Hunter/Garcia 1971
MIX 11 Day One
MIX 11 Day Two
Mix 11 Day Three
Microsoft continued to play the open source hand today, the final day of MIX 11. About fifty per cent of the attendees were absent today. I don't want to name names, but the remaining attendees do not work for the Mephistopheles of Redmond. Open Source, in the form of the very nicely developing NuGet, was very much on everyone's minds. The Hackers: Phil Haack and Scott Hanselman gave a well attended presentation, "NuGet In Depth: Emerging Open Source on the .Net Platform." The session tape of this presentation needs to be seen to be believed. The NuGet effort moves way beyond the effort in Codeplex as an open source store of open sourced Dot Net related development efforts. Continuing the theme in evidence yesterday, the packaging strategy and access methods for NuGet remind us of the GEM package system for Ruby AND, as embedded in VS2010, look a lot like the GEM interface found in the late great Aptana IDE for RadRails. Please do not misunderstand me. I love Ruby and Ruby on Rails, and I think that embedding these very powerful conventions and concepts into MVC3 and NuGet is a quantum leap forward in Microsoft's approach to software development. The fun in programming in Microsoft shops may well be back. We think the commitment to open source co-development of Dot Net Framework products is great.
OData and the new improved WCF was another hot topic of the day with four sessions devoted to these topics. The three sessions I attended and would recommend viewing the tapes of are:
Glenn Block: WCF Web APIs
Assad Khan and Maceleo Lopez Ruiz: Data In the HTML World
Jonathan Carter: OData Roadmap
When first pushed out, WCF was primarily an enterprise strength product, XML based and oriented toward server to server or server to (non Web) client communication, and it was fine in that role. But then AJAX services and JSON arrived and pretty much took over the real web, where real people and companies actually work. It has taken some time (and the development of LINQ, among other technologies) for MS to catch the wave again. WCF Data Services are not your older brother's Web services any more. With the death of the SVC extension, route mapping and the introduction of the lightweight WebGet meta tags, WCF services are back in the game as a rock solid Web AJAX source. The newish API is very rich. For example, a very interesting HttpResponseMessage object has been added to the framework to provide us with low level (read: header) control over the whole message exchange process, which gives us fine granularity, low weight and a greater ability to work in the standards world of HTTP 1.1 raw communication without needing obtuse code. Better REST and more restful. Glenn Block's presentation is a useful introduction to what is happening in this area.
So much for the browser side. Jonathan Carter's presentation, "OData Roadmap", brings this all together with the server side code. Carter introduced a very interesting open source Dot Net thingy: the WCF Data Services Toolkit. This Framework toolkit is open sourced, subject to ongoing community development, and is available via CodePlex. The toolkit introduces an IQueryable based OData object which is extensible and whose external call syntax is based on well known OData canonical URIs and data query conventions. The conventions are for ease of programming and consistency with current practices in the OData community (think Ruby on Rails again here) but can be overridden as needed. The translation from traditional rectangular database tables to OData is straightforward (well, a LOT of Entity Framework and LINQ is going on in the background, but the API user does not see this). Interestingly, one can nest OData mappings to different data sources within a single output OData message. Think server side mash ups here. A simple, clean programming model is provided which is extremely powerful. Keeping with the Ruby on Rails coloring of the whole MVC3/OData/WCF Toolkit, the demo for this presentation featured a mapping of a Mongo DB data source into OData (with an assist from a NuGet community developed Mongo DB helper assembly). Combine this with DataJS.js and you pretty much have what you want: a light, fast, extensible AJAX/REST messaging system. And who wouldn't want that? Now if we can get Web Socket support into the browsers…..
OK, Folks. I’ve gotta plane to catch. See you next year!