
Binary Jam

Simon's blog, SharePoint, Arduino type things.

Local hosts file use and Azure Web Apps

Sometimes you are working on a site and you need to test it with the real URL, but you are not ready to flip the DNS entry yet.

In IIS land you could add a host header and add an entry into your hosts file and it would just work.

Now enter Azure. You could have a dev site, perhaps even a deployment slot, that contains your website and you want to test it, but some of that code relies on the URL; perhaps you have a URL Rewrite rule.

In order to add a custom domain to Azure you need to be able to make changes to the DNS server: it asks you to create either a CNAME entry or an A record / TXT entry, and it validates that before adding the hostname binding.

So how do you test on dev with the real URL and a local hosts file entry?

Well it turns out that you only need the TXT entry to confirm ownership, which will then allow the addition of the Custom Domain to Azure.

My TXT DNS entry

How adding a hostname looks before the DNS entry. Oh well, you can spot my IP if you look hard enough. It’s not a real site anyway.

Once you have added the TXT record and propagation has happened, then even though you do not have an A record or CNAME, the custom domain can be added: just hit that Add hostname button and it’s there.

Of course you cannot get to it unless you add an entry to your DNS server or hosts file for testing.
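
For reference, the only thing you then need locally is a hosts file entry pointing the real name at your Azure site for testing. A rough sketch (the IP address and domain here are made up; the TXT record name and value are whatever the portal asks for):

# C:\Windows\System32\drivers\etc\hosts  (or /etc/hosts) - local testing only
13.69.0.10    www.example.com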


My JSLink best practice

There are lots of examples around about how to do JSLink stuff correctly, and I’ve nicked ideas from all of them.

I’ve not been happy with any of them and I still wonder about mine, but this is the best I’ve come up with.

It’s MDS compliant, and it includes a routine to automatically assign a view, which allows multiple JSLinks on a page and lets the same JSLink be applied to different web parts if required.

It’s written as a module.

It’s a work in progress and it will evolve, but I reckon I’m as far as I can get in this iteration.

Bits taken from Wictor Wilén, Martin Hatch, Paul Cimares.

https://github.com/binaryjam/DisplayTemplateExample
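
That repo has the full module, so what follows is not its code; it’s just a rough sketch of the usual MDS-safe registration shape this kind of module builds on (the renderer and the script path below are made up for illustration):

(function () {
    "use strict";

    // Hypothetical item renderer
    function renderItem(ctx) {
        return "<div>" + ctx.CurrentItem.Title + "</div>";
    }

    function registerTemplates() {
        SPClientTemplates.TemplateManager.RegisterTemplateOverrides({
            Templates: { Item: renderItem },
            BaseViewID: 1,
            ListTemplateType: 100
        });
    }

    // Register now, and ask MDS to re-run the registration after partial page loads
    registerTemplates();
    if (typeof RegisterModuleInit === "function") {
        // The path must match the JSLink URL set on the web part (made-up path here)
        RegisterModuleInit(SPClientTemplates.Utility.ReplaceUrlTokens("~sitecollection/SiteAssets/displaytemplate.example.js"), registerTemplates);
    }
})();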


Best video on Homeassistant.io build

Not really for anyone else but me, but if you’re interested in building a Pi with homeassistant.io then feel free.

Watch this video cos it’s the best I’ve seen so far.


Excellent article on IIS Export Application

Great article on creating an IIS export and configuring all the settings.

This will also help those trying to create a parameters.xml file.

How to: Create IIS Site Package with Web deploy


Not Self Hosting

Had enough of self hosting; for $13 a year WordPress can sort it all out and I keep my domain.

So it seems to be OK. DNS kicked in for me. Cheeky WordPress doesn’t do www, which I found out later, after I’d paid. But Google’s results are all being properly directed to the equivalent page on WordPress, so that’s nice.

So a saving of $11 a year is nowt really, but there’s none of the hassle of hosting and constant updates. The main reason I moved was a hack notice I got, after which I had to re-verify with Google. Who needs the hassle? Nowadays I might have 3 registered viewers and a couple of hundred search drop-ins, and the main reason for this blog is for me to keep a record of my most handy stuff, though I should pull my finger out and slap it all in GitHub.


BrowserSync, gulp-based script, handling middleware via a corporate proxy

Phew, that was a long title. So what’s this about?

I live in a land of corporate proxies with giant .pac scripts, of https services and authenticated proxies.

It …makes….all….this….js….dev… HELL.

I use browser-sync as my local testing server; it’s great. I need it to handle requests to remote APIs because of CORS and other security issues, at least until I can wrap a proxy around the remote system, and even then it’s handy to have the ability to proxy the API calls via a node server (browser-sync).

This becomes an absolute bloody nightmare when you have an authenticated corporate proxy server.

None of the JS tools play nice; there is no such thing as a centralised store for proxy settings, so you have to enter them in the .rc file of every tool: git, npm, bower, and now the custom middleware. This is where Windows got it right and Linux, well, sucks. Oh, I wish I still had ISA Server’s transparent client proxy.
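
For reference, the sort of thing you end up repeating per tool looks roughly like this (the proxy address is made up; substitute your own):

# npm (writes to your .npmrc)
npm config set proxy http://proxy.example.com:8080
npm config set https-proxy http://proxy.example.com:8080

# git
git config --global http.proxy http://proxy.example.com:8080

# bower wants the same two values ("proxy" and "https-proxy") in its .bowerrc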

So the example I have here is a gulpfile that configures browser-sync to run, calls into the middleware extension to handle proxying of API calls to my remote system, and makes that component play nice with the corporate proxy.

You need the agent, I tried without it and failed miserably.

gulpfile.js

var gulp = require('gulp');
var browserSync = require('browser-sync').create();
var proxy = require('http-proxy-middleware');
var HttpsProxyAgent = require('https-proxy-agent');

var proxyServer = "http://localhost:8080";   // Cos Fiddler yeh!

// Forward anything under /api/ to the remote host, sending the outbound
// request out through the corporate proxy via the agent
var jsonPlaceholderProxy = proxy('/api/', {
    target: 'https://www.binaryjam.com',
    changeOrigin: true,
    logLevel: 'debug',
    secure: true,
    agent: new HttpsProxyAgent(proxyServer)
});

gulp.task('default', function () {
    // Serve ./src, watch for changes, and bolt the proxy on as middleware
    browserSync.init({
        "port": 8000,
        injectChanges: true,
        "files": ["./src/**/*.{html,htm,css,js,json}"],
        "server": { "baseDir": "./src" },
        "middleware": jsonPlaceholderProxy
    });
});


DisplayTemplate: Hide edit fields based on a choice field

This is a simple display template (JSLink) that will hide fields based on the value of a choice field. I use it for pseudo content types, with the advantage of being able to switch between them.

It’s not MDS compliant.

In this example “ItemType” is the name of the choice field, and its value could be “Standard” or “PopUp”.

The hideMeItems arrays list the fields that are hidden when that choice value is selected.

The hiddenItems array starts empty and holds the hidden elements so they can be unhidden on a change of type.

It relies on jQuery being in the masterpage or somewhere on the page.

You store this as a JS file in a doclib, SP folder, site assets etc., edit the form page and point the JSLink setting to that file. Remember to use the tokens ~site or ~sitecollection, as JSLink doesn’t like fixed URLs.

$(function () {
    "use strict";

    // Elements we have hidden, kept so they can be shown again when the type changes
    var hiddenItems = [];

    // Field selectors to hide for each value of the ItemType choice field
    var hideMeItems = {
        Standard: ["[id^=PopUpBodyText]", "[id^=ReadMoreUrl]", "[id^=ReadMoreUrlTarget]"],
        PopUp: ["[id^=Teaser]", "[id^=TargetUrl]", "[id^=TargetUrlTarget]"]
    };

    $('[id^="ItemType"]').change(hideItems);
    hideItems();

    function hideItems() {
        var selected = $('[id^="ItemType"]').val();

        // Re-show whatever the previous selection hid
        $.each(hiddenItems, function () {
            this.show();
        });
        hiddenItems.length = 0;

        // Hide the table rows of the fields that don't apply to this selection
        $.each(hideMeItems[selected], function () {
            var tr = $(this).closest('tr');
            tr.hide();
            hiddenItems.push(tr);
        });
    }
});

Adventures in BrowserSync

I’m real new to browsersync and node development. So this has been a pretty steep learning curve, but I thought I’d document something I had to figure out as the documentation and guides on the web are hard to find or just missing.

For those who don’t know and are new to this javascript lark, browsersync is a tool that runs under node to create a mini web server, but it also injects javascript into your pages and talks back to the server when the file watcher sees a change to a file.

The effect of this is that you can configure it, point it at the files in the directory you are editing, and it will fire up a browser; as soon as you save a file it will reload the page. A really cool feature is hot reloading: in certain circumstances and configurations it can detect that you have changed, say, an image or CSS file and will only change that item in the page, using JS to swap it to the new version rather than doing a full page reload.

I’m using a wrapper around browsersync called lite-server, by John Papa, just because it was the one I came across first. I’ll be honest, I’m not sure what lite-server gives me over native browsersync; it’s just where I started. That said, you will spend a lot of time reading the browsersync docs, not the lite-server page.

The main point of me writing this article was that, as well as serving pages and auto reloading, browsersync gives me the ability to handle API calls and proxy them to local files (possibly another server, but I’m not there yet).

In the framework I’m writing, which mimics the new SP Framework experience (early days though) but on legacy stuff to deliver a sandbox WSP, the example code makes a call using the SPServices library (this could be REST); that call, as you may know, has the path _vti_bin in it. So my browsersync config has code in it (the config is javascript) that can intercept this and deliver my content instead.

Below is the bs-config.js file I wrote to achieve this.

The module.exports is the standard bit that configures bs with which files to watch and how to set up the mini server.

The special part is the middleware setting. I have set it so that a later slot in the middleware list points to my handleApiCall function. The reason I don’t use the first slot (that’s why it says “2:” rather than “0:”) is that if you overwrite the first entry it no longer logs to the console the items it’s serving, which is handy to keep.

As you can see the handleApiCall function is real simple: it detects “_vti_bin” in the path, reads a file from a specific place, and puts it out on the response stream along with the correct header for XML.

This could be improved, lots; it could read the request object and parse it to determine which file to send back (there’s a rough sketch of that after the config below).

Of course someone has probably already done something like this, but I needed to do something quickly and there is enough to learn.

Saying that, I will be looking into proxy-middleware, a module for express/browsersync that will likely proxy to a real server rather than just my local files.

Alternatively https://www.npmjs.com/package/apimock-middleware.

You learn there are so many OS projects out there in npm land that it’s hard to find the right things.

// jshint node:true

// Middleware: intercept SOAP calls containing _vti_bin and answer them
// with a canned XML file instead of hitting a real SharePoint server
function handleApiCall(req, res, next) {
    if (req.url.indexOf('_vti_bin') !== -1) {
        var fs = require("fs");
        fs.readFile("./WebComponents/.container/.mockapi/1.xml", function (err, data) {
            if (err) throw err;
            res.setHeader('Content-Type', 'text/xml');
            res.end(data.toString());
        });
    }
    else {
        next();
    }
}

module.exports = {
    'port': 8000,
    'files': [
        './WebComponents/src/**/*.*'
    ],
    'server': {
        'baseDir': './WebComponents/src',
        'middleware': { 2: handleApiCall }
    }
};
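
As a rough sketch only (the mock folder layout and file naming below are made up), the improvement mentioned above might look something like this:

// Hypothetical variation of handleApiCall: pick the mock file from the request URL
var fs = require("fs");
var path = require("path");

function handleApiCallByUrl(req, res, next) {
    if (req.url.indexOf('_vti_bin') === -1) {
        return next();
    }
    // e.g. a request to /_vti_bin/lists.asmx would look for .mockapi/lists.asmx.xml
    var name = req.url.split('/').pop().split('?')[0] || 'default';
    var mockFile = path.join('./WebComponents/.container/.mockapi', name + '.xml');

    fs.readFile(mockFile, function (err, data) {
        if (err) {
            // No mock recorded for this call, so fall through to normal handling
            return next();
        }
        res.setHeader('Content-Type', 'text/xml');
        res.end(data.toString());
    });
}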

Braindump: Adding CORS support to an old SOAP web service

This has been a bit of a nightmare; once you start you will find hundreds of Stack Exchange articles about this and the problems you will have.

So here are some key things.

Below IE11 (maybe 10), CORS support was provided via the XDR object, and that wasn’t automatically used by libraries like jQuery, so your jQuery stuff won’t work because older IE doesn’t use the proper XMLHttpRequest object, or what it has is borked. Till now at least. So IE11, right!

There is some simple code you need in your .NET project.

This extract of system.webServer is needed (don’t just paste this; you have to merge it into the existing section of your web.config).

This will allow ANYONE to connect, so go read up on what each of these attributes does.
Together they all get added to the HTTP headers returned to the client on every item, yes, including your aspx pages. Now feel free to work out how to restrict that; I’d had enough by this point and my site only has two things on it, both needing this. (There’s a sketch of one tightening below the config.)

  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <!-- CORS Enabled -->
        <add name="Access-Control-Allow-Origin" value="*" />
        <add name="Access-Control-Allow-Methods" value="GET,PUT,POST,DELETE,OPTIONS" />
        <add name="Access-Control-Allow-Headers" value="origin, content-type, accept" />
        <add name="Access-Control-Allow-Credentials" value="true" />
        <add name="Access-Control-Max-Age" value="31536000" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
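
If you do want to tighten it up later, the first thing to change is the wildcard origin; something like this, with your own calling site in place of the made-up one:

        <add name="Access-Control-Allow-Origin" value="https://intranet.example.com" />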

Next you will need a global.asax (not a global.aspx.cs like some guides refer to).

The function Application_BeginRequest is one of those that gets called as part of the request lifecycle.
What we are doing here is handling the case where you are making a non-standard request, which for SOAP services will actually be a “text/xml” content type.

Making a non-standard request triggers the part of the CORS protocol that does what they call a pre-flight request, to ask the server “is this allowed or what?”; here you are responding with an A-OK, matey.

protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Answer the CORS pre-flight (OPTIONS) request with a plain 200 so the
    // browser will go ahead and make the real POST to the SOAP service
    if (HttpContext.Current.Request.HttpMethod == "OPTIONS")
    {
        HttpContext.Current.Response.StatusCode = 200;
        HttpContext.Current.Response.End();
    }
}

That’s it. That’s all you need to do server side.

A matching client side request might look like this

$.ajax({
    url: serviceUrl,
    type: "POST",
    dataType: "xml",
    data: soapEnv,
    crossDomain: true,
    contentType: "text/xml; charset=utf-8"
})

For me all this started working. I did all my mappings, pushed my array into Knockout observables, and Chrome was working brilliantly.

Then came IE. The pain, the endless searches.

Whilst debugging this I could see my OPTIONS (preflight) request was being made, and IE reported no headers returned. Which was nonsense, cos Chrome was doing it and working.

I rewrote that C# code over and over with many alternatives.  Chatted to a nodejs bloke who showed me his code, which did exactly the same (well close enough).

That code works.

I saw articles that said the website you’re connecting to has to be in the Intranet security zone. Bullshit! If that were the case, how could you connect to Yahoo or any other CORS-compliant service?

So here’s the one to watch when you’re developing this stuff: SELF-CERT SSL.

If you have created a self-signed cert and you browse to the service in IE and it’s got a RED untrusted cert (and it will, to begin with), this won’t work. The confusing part is that Chrome doesn’t care, and also that IE issues and gets the OPTIONS HTTP request: instead of throwing an error beforehand it shows a 200 status, but it has to issue a request to know it’s not secure. That’s why it doesn’t show any headers in the network analysis, I suspect; at that point it just goes “eeek” and stops.

So export your certificate; don’t just next-next-next it, import it specifically into “Trusted Root”. * DO THAT AT YOUR OWN RISK; if you’re not sure, don’t do it, and go buy a proper cert to get this working.

Close your browser and check it worked by navigating to the service endpoint in the browser: if it’s not red it worked; if it is, go do it again, but right this time.

With all this done, your ajax call from IE11 should work with CORS.

If it don’t well good luck :-).

