Binary Jam

Simon's blog, SharePoint, Arduino type things.

Braindump: Adding CORS support to an old SOAP Webservice

This has been a bit of a nightmare. Once you start, you will find hundreds of Stack Exchange articles about this and the problems you will have.

So here are some key things.

Below IE11 (maybe 10), CORS support was provided by the XDomainRequest (XDR) object, and this wasn't automatically used in libraries like jQuery, so your jQuery code won't work: older IE doesn't use the proper XMLHttpRequest object, or what it has is borked. Until now, at least. So, IE11 it is.
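For reference, the classic feature-detection dance looked something like this. This is a sketch, not code from the original post: XDomainRequest only ever existed in IE8/9, and checking for a withCredentials property is the usual way to spot a CORS-capable XMLHttpRequest.

```javascript
// Sketch of the old cross-browser CORS dance. XDomainRequest is the
// IE8/9-only object; a "withCredentials" property on XMLHttpRequest
// marks a properly CORS-capable browser (IE11, Chrome, Firefox, etc.).
function createCORSRequest(method, url) {
    var xhr = new XMLHttpRequest();
    if ("withCredentials" in xhr) {
        xhr.open(method, url, true);   // proper CORS-capable XHR
    } else if (typeof XDomainRequest !== "undefined") {
        xhr = new XDomainRequest();    // IE8/9: jQuery knows nothing about this
        xhr.open(method, url);
    } else {
        xhr = null;                    // no CORS support at all
    }
    return xhr;
}
```

jQuery's $.ajax never wrapped XDR, which is exactly why the pre-IE11 jQuery calls fail.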

There is some simple code you need in your .NET project.

This extract of system.webServer is needed (don't just paste this; you have to merge it into the existing section of your web.config).

This will allow ANYONE to connect, so go and read up on what each of these attributes does.
Together they get added to the HTTP headers returned to the client on every response, yes, including your .aspx pages. Feel free to work out how to restrict that; I'd had enough by this point, and my site only has two things on it, both needing this.

        <!-- CORS Enabled -->
        <!-- These <add> elements go inside <system.webServer><httpProtocol><customHeaders> -->
        <!-- Note: Allow-Credentials has no effect with a wildcard origin; credentialed requests need an explicit origin -->
        <add name="Access-Control-Allow-Origin" value="*" />
        <add name="Access-Control-Allow-Methods" value="GET,PUT,POST,DELETE,OPTIONS" />
        <add name="Access-Control-Allow-Headers" value="origin, content-type, accept" />
        <add name="Access-Control-Allow-Credentials" value="true" />
        <add name="Access-Control-Max-Age" value="31536000" />

Next you will need a global.asax  (not a global.aspx.cs like some guides refer to)

The function Application_BeginRequest is one of those that gets called as part of the request lifecycle.
What we are doing here is handling the case where you are making a non-standard request, which for SOAP services you will be, because the content type is "text/xml".

A non-standard request triggers the part of the CORS protocol called a pre-flight request: the browser asks the server, "is this allowed or what?", and here you are responding with an "A-OK, matey".

protected void Application_BeginRequest(object sender, EventArgs e)
{
    //Answer the CORS pre-flight (OPTIONS) request with a plain 200
    if (HttpContext.Current.Request.HttpMethod == "OPTIONS")
    {
        HttpContext.Current.Response.StatusCode = 200;
    }
}
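Put together, the pre-flight exchange ends up looking roughly like this. This is a sketch, not a verbatim capture; the service and origin names are made up:

```
OPTIONS /MyService.asmx HTTP/1.1
Host: myserver.example
Origin: https://myapp.example
Access-Control-Request-Method: POST
Access-Control-Request-Headers: content-type

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET,PUT,POST,DELETE,OPTIONS
Access-Control-Allow-Headers: origin, content-type, accept
```

Once the browser sees those Allow headers on the 200, it goes ahead and makes the real SOAP POST.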

That's it. That's all you need to do server-side.

A matching client-side request might look like this:

       $.ajax({
           url: serviceUrl,
           type: "POST",
           dataType: "xml",
           data: soapEnv,
           crossDomain: true,
           contentType: "text/xml; charset=utf-8"
       });

For me, at this point, it all started working. I did all my mappings and pushed my array into Knockout observables, and Chrome was working brilliantly.

Then came IE. The pain, the endless searches.

Whilst debugging this I could see my OPTIONS (pre-flight) request was being made, and IE reported no headers returned. Which was nonsense, because Chrome was doing it and working.

I rewrote that C# code over and over with many alternatives. I chatted to a Node.js bloke who showed me his code, which did exactly the same thing (well, close enough).

That code works.

I saw articles that said the website you're connecting to has to be in the Intranet security zone. Bullshit! If that were the case, how could you connect to Yahoo or any other CORS-compliant service?

So here's the one to watch when you're developing this stuff: SELF-CERT SSL.

If you have created a self-signed cert and you browse to the service in IE and it's got a RED untrusted cert (and it will, to begin with), this won't work. The confusing part is that Chrome doesn't care. Also, IE issues the OPTIONS HTTP request and shows a 200 status instead of throwing an error beforehand; it has to issue a request to know the connection isn't trusted. I suspect that's why it doesn't show any headers in the network analysis: at that point it just goes "eeek" and stops.

So export your certificate (don't just next-next-next it) and import it specifically into the "Trusted Root" store. * DO THAT AT YOUR OWN RISK; if you're not sure, don't do it, and go buy a proper cert to get this working.

Close your browser and check it worked by navigating to the service endpoint in the browser. If it's not red, it worked; if it is, go and do it again, properly this time.

With all this done, your Ajax call from IE11 should work with CORS.

If it don’t well good luck :-).



VS Code, Git, Gulp and SharePoint

UPDATE: Since writing this, VS Code 1.0 was released, which told me I was using the old Git package, so I updated Code and Git. The new Git has a better Windows credential manager and it's built in now; you may find the git config credential stuff in here is not needed, so experiment and don't do that bit unless you need to. Also, this was on Windows, not Mac or Linux, so some things might be different.

UPDATE 2: I created a framework for sandbox solutions based on Gulp and Git and BrowserSync and Yeoman and npm and Bower etc. Search for generator-sb-framework. It was done two weeks before they announced the SPFx framework; I was thinking along the same lines, but without access to the 365 back-end code 🙂 Mine will work on SP2010 and 2013 and 365, though. It's a work in progress; if anyone wants to help, let me know. Now on with the article.



I get asked to "knock stuff up" quite often: a single page that has to grab data and display it in a funky way and NOT look like SharePoint, whilst utilising SharePoint.

I could be deploying it to MOSS, 2010 or 2013, any of the three, so I tend to write with SPServices, and I find that REST isn't an option across all of them.

I tend to write these things as single htm pages hosted in a doc lib, so this lends itself to angular and knockout with bootstrap etc.

Of course, when you write stuff like this, what you're doing is essentially creating a mini website connecting to services that exist outside of your website, except that thanks to virtual folders like _layouts they are actually part of your site, so you just don't have to worry about CORS.

In the past I would just create the doc lib, open the folder (webDav) and just edit away.

This isn't good enough, though. I want my stuff to be under source control, natively, and you just can't get that working this way.

Today I took the time to investigate using VS Code, Git and Gulp to achieve what I wanted.

To do this stuff you will need VS Code, Git for Windows (GitHub for Windows is handy too, as it has a better GUI) and Node.js installed on your machine. You need npm too, but it comes with Node now, so there's no extra installation. You will also need a Git repository; I'm using VS Online, which has its own issues. Go ahead and install all that lot.

There are some things called Bower, and Grunt, and Yeoman, and all that. Not touching that lot, not yet anyway, so don't expect it here.

To start with, create a new Visual Studio Online project: name it, pick a methodology and pick Git as the version control.

Navigate to your project, because next I’m making it easier to connect by creating a credential.

Explore your project and you get a page a bit like this

There are various bits on this page, but first let’s click the Generate Git Credentials.

Then click "create a personal access token". This applies across all Git projects, so if you have one already you don't have to do this again, but you do need to store that username/password combo securely in something like LastPass or Dashlane.

These are handy cos you can revoke them and they have a lifespan.

Go back to your default code page, because now we want to clone the Git repository. This is easier in Git Bash, which works like Linux, so no "DIR" commands here.


Firstly, if you're in a company, chances are you have a proxy server, and you need to configure Git with proxy settings. Because Git doesn't have a bypass capability, I set my proxy to Fiddler, and then I can control what's going on with regards to bypass lists.

So, as I work in d:\dev, I changed directory and then set my proxies.
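The commands were along these lines (assuming Fiddler on its default 127.0.0.1:8888; swap in your own proxy address if yours differs):

```shell
cd /d/dev   # Git Bash path for d:\dev

# Point Git at the local Fiddler instance
git config --global http.proxy http://127.0.0.1:8888
git config --global https.proxy http://127.0.0.1:8888
```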


This sets your global Git config to point at the local Fiddler address (IF YOU CONFIGURED FIDDLER LIKE THAT). Feel free to set it to whatever proxy you want.


There is a slight problem when using VS Code and Git with a remote host: when you issue a remote command it needs credentials, but VS Code has no way of prompting for them, so it just hangs until you kill it.

I found that you can save your creds locally. I do that just before cloning; see the next lot of commands.
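On the Git for Windows of the time, that meant enabling the wincred credential helper (this is the "git config" credential stuff the update at the top says you may no longer need):

```shell
# Cache credentials in the Windows Credential Manager so remote
# commands issued from VS Code don't hang waiting for a prompt
git config --global credential.helper wincred
```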

Next, clone the Git repository you just created in Visual Studio Online. This creates the local version of the repo: you check in there, then sync to the master when you want, then branch and pull and push and whatever else Git does, which I'm still getting my head around.

Copy the clone command from Visual Studio Online's code page. Don't use mine: it's easy to guess what mine would be called, you won't get access, and if you do, MS and me will be having words.

Run it in Git Bash and enter your creds when prompted. You can use any username you like, but use the special credentials you created earlier as the password. This works; other ways I've tried don't.

Your creds are now cached in the Windows Credential Manager; look in Control Panel if you ever need to remove them.

Next let's open the folder in VS Code: File->Open Folder, and select your VSCodeDemo folder, inside of which is a .git folder.

Once open create a test.htm file and put some words in it, anything, this is a test. Save it.

Switch to the Git tab and check those changes in locally: give it a message of "Initial test" and click the tick at the top.


Now to sync up to the web. Look at the status bar at the bottom and click either the cloud-with-an-up-arrow icon (which means publish) or the recycle icon; you will see one or the other.


When you return to Visual Studio Online, your code tab now shows the branches you have published, synced, whatever that means.

That's the Git integration done. Have a play: add files, roll back, etc. Next up, configure VS Code and some handy plugins.

When running VS Code in an enterprise, if you want to use plugins you have to configure it to use a proxy server.

File->Preferences->User Settings

This opens two files for editing: the default settings (for reference, don't edit) and settings.json.

Here’s mine

{
    "editor.fontSize": 12,
    "editor.tabSize": 4,
    "http.proxyStrictSSL": false,
    "http.proxy": "",
    "https.proxy": ""
}


Again, I'm using Fiddler as my proxy server, because I have more control.

Next, hit F1 and let's install some extensions: F1, then type "ext install".

It will show you the available extensions; here's what I've installed.


You may want to control settings for Code and your plugins locally, per project, so open the workspace prefs: File->Preferences->Workspace Settings. Again this opens a settings.json file; here's mine.

{
    "minify.minifyExistingOnSave": true,
    "beautify.onSave": false,
    "indent_size": 4,
    "indent_char": " ",
    "css": {
        "indent_size": 2
    }
}


This turns on auto-minify to create those .min.js files for me automatically. You have to force-create one first by hitting F1 and typing "minify"; this creates the min file, and from that point on it's automatic. Play with this, because if you change its settings you can minify a whole directory of JS files.

Next on to task runner (gulp in vs code).

Make sure you have installed Node.js and that npm works, and if that's got proxy config to do, go do it (I'll let you figure it out).
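For the record, npm's proxy settings are set along these lines (again assuming Fiddler on its default 127.0.0.1:8888, as elsewhere in this post; adjust to your real proxy):

```shell
npm config set proxy http://127.0.0.1:8888
npm config set https-proxy http://127.0.0.1:8888
```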

In fact go follow this guide

OK so that made no sense, but I guess you at least installed gulp globally.
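If you haven't already, the global install is just this (we install local, per-project copies later):

```shell
npm install --global gulp
```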

So back to our project.  Open a Node.JS command shell and change directory to our project, now run the command

npm init

Enter the details at the prompts; you can next through most of them, but change the name to all lower case, as it complains if you don't. I'm sure these have proper uses, but for now, meh.

Now back in VSCode hit F1 and type “Configure Task Runner”

It opens the tasks.json file. Rip all that out and replace it with this (we will create the actual Gulp task in a minute):

{
    "version": "0.1.0",
    "command": "gulp",
    "isShellCommand": true,
    "args": [],
    "tasks": [
        {
            "taskName": "deploy",
            "isBuildCommand": true
        }
    ]
}

Save that. The task we are going to create is a robocopy, so on build it will copy the contents of our folders to another folder.

Switch back to the Node.js shell and install local versions of gulp and robocopy (because globals won't work):

npm install gulp
npm install robocopy

In the root of our project create a file  “gulpfile.js”

var gulp = require('gulp'),
    fs = require("fs");
var robocopy = require('robocopy');

gulp.task('deploy', function() {
    return robocopy({
        source: 'Web',
        // Note the doubled backslashes: '\t' in a JS string is a tab
        destination: 'd:\\tmp\\web',
        files: ['*.*'],
        copy: {
            mirror: false
        },
        file: {
            excludeFiles: ['packages.config'],
            excludeDirs: ['Forms']
        },
        retry: {
            count: 2,
            wait: 3
        }
    });
});

This creates a Gulp task called deploy (the one we matched in tasks.json) that uses robocopy to copy everything under the folder Web to another folder on this machine, d:\tmp\web. In VS Code, hit Ctrl-Shift-B and that builds and copies.

So in using this mechanism we have to assume some stuff. Firstly, that we build our deployable content under a folder under the root called Web. We do this cos of all the bloody rubbish node and git pish and paff and everything else needs.

Because we have all these extra files in our project, some of which shouldn't go into Git, we have to exclude them. This is easy.

Create a file in the root of the project called .gitignore, containing this line:

node_modules/

That excludes the node_modules folder, as that's all that's really incompatible.

And lastly, where's the SharePoint? You promised me SharePoint, Tocker!

Well, what we want is to deploy this Web folder into a document library in SharePoint (assuming we are allowed .js/.html/.htm file extensions; if not, save everything as a .aspx with no ASPX code in it).

Imagine that we have our site collection and doc library; its full path is this:

Well, we just have to copy to the WebDAV folder equivalent.

This is a slight change to the gulpfile.js robocopy task.

Edit that gulpfile.js file and change the destination from d:\tmp\web to the WebDAV path. Note the doubled backslashes a JavaScript string needs, and substitute your own server name (the server name here is a placeholder):

 destination: '\\\\yourserver\\davwwwroot\\sites\\mysite\\myDocLib',

That davwwwroot is the magic door.

That's it. Build SPAs that live in doc libs, with Git source control and minifying and all that lovely stuff.

Of course, this task-runner stuff lends itself to all sorts of tasks, like LESS/Sass compilation, or TypeScript compilation for Angular 2 (or just TypeScript). Have fun.






MDS Compliant Disp Template example

This is just a reminder for me, really. Take a look if you like, but it's not done; it's based on Martin Hatch's colour field example with Wictor's MDS detection.


/// <reference path="DefinitelyTyped/microsoft-ajax/microsoft.ajax.d.ts" />
/// <reference path="DefinitelyTyped/sharepoint/sharepoint.d.ts" />
/// <reference path="DefinitelyTyped/DisplayTemplateFieldColour.d.ts" />
"use strict";
//This is a work in progress, trying to come up with a kind of best practice, of best practices
//because the office pnp examples do not do things how other JS peeps might.

//Im not convinced about this either, this should be SOD'ed  / Leaving in for future reference
//(jQuery || document.write('//'));

//Creates the namespace and registers it so MDS knows it's there
Type.registerNamespace("BinaryJam.JSLink");

(function (ns) {
	ns.DisplayTemplateFieldColour = function () {
		//private members
		var overrides = {};
		overrides.Templates = {};
		overrides.Templates.Fields = {
			//Colour is the Name of our field
			'Colour': {
				'View': colour_FieldItemRender,
				'DisplayForm': colour_FieldItemRender
			}
		};

		//Create CSS classes
		var style = document.createElement('style');
		style.type = 'text/css';
		style.innerHTML = '.binaryJam_dt_FieldColour { display:inline-block; margin:3px; width:20px; height:20px; border:1px solid black; }';
		//Attach the style block to the page
		document.getElementsByTagName('head')[0].appendChild(style);

		function colour_FieldItemRender(ctx) {
			if (ctx !== null && ctx.CurrentItem !== null) {
				var divStyle = "style='background-color:" + ctx.CurrentItem['Colour'] + "'";
				var html = "<div class='binaryJam_dt_FieldColour' " + divStyle + "></div>" + ctx.CurrentItem['Colour'];
				return html;
			}
		}

		function registerTemplateOverrides() {
			SPClientTemplates.TemplateManager.RegisterTemplateOverrides(overrides);
		}

		function mdsRegisterTemplateOverrides() {
			var thisUrl = _spPageContextInfo.siteServerRelativeUrl + "js/jslink/test1.js";
			RegisterModuleInit(thisUrl, registerTemplateOverrides);
		}

		//Public interface
		this.RegisterTemplateOverrides = registerTemplateOverrides;
		this.MdsRegisterTemplateOverrides = mdsRegisterTemplateOverrides;
	};
})(BinaryJam.JSLink);

//This is the "official" way to check MDS
if ("undefined" != typeof g_MinimalDownload && g_MinimalDownload
	&& (window.location.pathname.toLowerCase()).endsWith("/_layouts/15/start.aspx")
	&& "undefined" != typeof asyncDeltaManager) {
	BinaryJam.JSLink.DisplayTemplateFieldColour.MdsRegisterTemplateOverrides();
} else {
	BinaryJam.JSLink.DisplayTemplateFieldColour.RegisterTemplateOverrides();
}


Adding a Post Feature Activation notification

Recently I've been doing some 2010 work on a site which needed a bunch of JavaScript and some CSS files. Most of these were deployed to the Style Library, but as you know, you often have to check them in etc. once deployed.

Rather than figure that out, I thought I'd just give a message to the user on activation of the feature.

Continue reading “Adding a Post Feature Activation notification”

Pseudo Synchronous Call Queuing in Angular with promises

Here is example code that batches async Ajax calls into blocks of 5 and waits before calling the next set, without blocking the UI thread.

Continue reading “Pseudo Synchronous Call Queuing in Angular with promises”

Posting to Pushbullet via ThingSpeak

A step-by-step guide on configuring ThingSpeak to auto-post to Pushbullet. I use ESP8266s, but you can use this for anything posting to ThingSpeak.

Continue reading “Posting to Pushbullet via ThingSpeak”

SharePoint 2013 App – AngularJs, BootStrap and ngGrid example

I've created a SP2013 app, including all the code, and published it on CodePlex so anyone can take a look and steal the code to get ideas.

It has an example of two data fetch techniques (see my earlier blog post for the third: external data using the proxy): using $http to get REST data, or $.ajax to get legacy web service data, with the appropriate Angular code to handle either. No antipatterns here.

Perhaps I should have used SPServices instead of $.ajax; it would have made it easier, and I would not have blatantly nicked Marc's getZrows function (thanks Marc 😉 ).


Last Update:14/10/2014
Organised into folders and separate files for controllers and services. Changed Service to Factory.
(Service code left in project for example purposes)



Detecting the Language in a DVWP

I have been hunting all morning for how to do this. I can do it in a CQWP with a ParameterBinding set correctly, but no-one, no-one, has put up any example code on how to do this in a DVWP. I've read many an article talking about ddwrt functions and lists of variables which are not accessible even if you include the XSLT namespace.

Eventually I figured something out.  It’s not pretty but then again, what SharePoint is doing isn’t exactly too far removed.

In order for SharePoint to know what browser is being used and what language it wants, the browser has to send this information via HTTP headers, so we have all the info we need.


In your DVWP, start by adding a parameter (I called mine Param1).


Set it to a Server Variable (the Accept-Language header comes through as HTTP_ACCEPT_LANGUAGE)


and give it a default value. Mine says "none", but set it to your default language, e.g. "en-GB".
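If you'd rather paste markup than click through the Designer dialogs, the parameter ends up as a ParameterBinding on the DVWP, something like this (a sketch; check the exact binding your version of Designer generates):

```xml
<ParameterBinding Name="Param1"
                  Location="ServerVariable(HTTP_ACCEPT_LANGUAGE)"
                  DefaultValue="en-GB" />
```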

That will get you the language codes you need into your XSLT.


However those strings can be quite large, e.g.

    en-GB,de-DE, {lots of other stuff}

The bit you're interested in, well, that's up to you. In my case I care about language, and being British there are no such things as regions, just ours and everyone else that's doing it wrong.

So I have in my XSLT
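It would have been something along these lines (a sketch; assuming the parameter is the Param1 created above):

```xml
<xsl:variable name="Lang" select="substring(string($Param1), 1, 2)" />
```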

This grabs the “en” from the string.  If I switch to German then I get “de”

So now I have my language code I can do conditional XSLT based on a language code.
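A sketch of the conditional bit, assuming the two-letter code was stored in a variable called $Lang; the German and English strings are just example content:

```xml
<xsl:choose>
  <xsl:when test="$Lang = 'de'">Willkommen</xsl:when>
  <xsl:otherwise>Welcome</xsl:otherwise>
</xsl:choose>
```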


This has allowed me to create multilingual content web parts in WSS 3.0. Which is handy.

It's simple. If someone can point me at a better way, please do. I haven't tested this thoroughly with lots of different browsers and languages, so I may need to get a bit cleverer than this simple example. But for the many of you searching for how to do it, this might be an option for you.



List of JS Libraries I’m using / looking into

This is more of a note to myself so that I can remember the JavaScript libraries I’m interested in and what they do and maybe a related article.  These are mainly in dealing with SharePoint but could be anything really.  Not everything is compatible with everything else remember 🙂


KnockoutJS – Data binding library, works with jQuery, great for a true single-page app, i.e. no routing, just data binding. Yes, I know SPA "pages" are called views.

AngularJS – Proper SPA framework. There's Durandal, but I'm not going there, even though I love Knockout.


    – Linq Like queries, does need server side integration, not for local objects

Bootstrap – For that nice CSS layout grids and things.

Bootstrap UI – Directives for Angular using bootstrap

Angular Maps – Google maps directives for angular, something I’ve used already.


LinqJs – LINQ for JavaScript; will work on JS objects, doesn't do the fetch for you

JS Loaders

jQuery – Well how could I not mention it.

jQuery form validators

Sliders

fontawesome – No idea really; something to do with a shed-load of CSS icons, vectors (no IE7)

Charts – jqPlot, Highcharts

Spinner – an SVG/VML spinner, easier than a GIF?


Stuff to investigate


Not a library but a list of handy SP2013 dev tools

Zimmergren's SP2013 Tools Page
