programming

Make great git diagrams

If you’re a developer, sooner or later you find yourself having to talk about git branching strategies. Drawing these up in a drawing tool can be very time-consuming for most of us mere mortals. Luckily, there is a great little javascript library that exists purely for making git branching diagrams.

See the nice diagram below? It is being created in real time by gitgraph.js.

Setting things up is pretty straightforward. You can install with npm i --save @gitgraph/js, or simply import the CDN version:

<script src="https://cdn.jsdelivr.net/npm/@gitgraph/js" crossorigin="anonymous"></script>

Add a div to the page, then pass that DOM node into createGitgraph. You can roll with the defaults, or pass customisations in as a second argument.

Once your graph is created, you draw the diagram in the same way you would work with git branches.

  • Use .branch() against an existing branch to create a new branch
  • Use .commit() against a branch to create a commit
  • Use .tag() to tag a branch
  • Use destinationBranch.merge(yourBranch) to merge

Creating a Custom Template

To create a custom template, call GitgraphJS.templateExtend(templateName, options), where templateName is one of the two defaults:

  • GitgraphJS.TemplateName.Metro
  • GitgraphJS.TemplateName.BlackArrow

The options object provides all the overrides. To see which options you can override, take a look in the Gitgraph repo.

Pass the new template object into the createGitgraph() call:

createGitgraph(container, { template: yourTemplate })

Check out the code sample for more detail.

Browser Support

This library supports modern browsers - IE and Edge do not work. Edge will start working once Edge Chromium is out in the wild, but for now stick with Chrome or Firefox.

Code Sample


const graphContainer = document.getElementById("graph-container");

const gitTemplate = GitgraphJS.templateExtend(GitgraphJS.TemplateName.Metro, {
  commit: {
    message: {
      displayAuthor: false,
      font: "12pt sans-serif"
    },
  },
  branch: {
    lineWidth: 8,
    label: {
      font: "12pt sans-serif"
    }
  }
});

// Instantiate the graph.
const gitgraph = GitgraphJS.createGitgraph(graphContainer, {
  template: gitTemplate
});

const master = gitgraph.branch("master");
master.commit("Initial commit");

const develop = gitgraph.branch("develop");
develop.commit("new dev stream").commit("bug fix");

const feature = develop.branch("feature");
feature.commit("Display blog posts in 3d");
feature.commit("Add OpenGL option");
develop.merge(feature, "Pull Request: feature TO develop");

develop.tag("RC1");
master.merge(develop, "Pull Request: develop TO master");

/eof

Calling pcap from Swift (using a closure)

Swift is Apple’s clever new language. It is designed to be intuitive, modern, and readable. Which it is, until you want to make a call to a C library - such as pcap.h.

PCap (Packet Capture) is a C library for sniffing network packets. I wanted to try some packet sniffing on my Mac, so a command line app written in Swift seemed like a good way to try out a new language.

The Bridging Header

To interoperate with C (and C-derivatives), Apple have the concept of a “Bridging Header”. A bridging header is simply a C header file referencing any libraries you need to have available in your Swift app.

To add one, add a new Objective-C file to your project. Xcode will prompt you to create a bridging header. Let it create one, then you can delete the Objective-C file you added.

Here is my bridging header:

//
//  This nifty file lets us interop with C libraries
//  (or C++, or objective C if you have a fetish for square brackets)
//
#import <pcap/pcap.h>

With that in place, we can call pcap.

Calling a C function

Let’s make a call to pcap to create a new pcap session:


// pcap writes failure messages into this buffer; nil is fine for a quick
// experiment, but a real app should pass a PCAP_ERRBUF_SIZE char buffer
var error: UnsafeMutablePointer<CChar> = nil
let device = "en0"

// create a new pcap session via pcap.h
let pcapSession = pcap_create(device, error)

That’s pretty standard stuff. Xcode is smart enough to auto-complete parameters for us, which is nice.

Calling a C function with a callback

Where it gets more interesting is when we have a callback involved. Here is the header of the C function we want to call, along with the typedef:


int pcap_loop(pcap_t *p, int cnt, pcap_handler callback, u_char *user);
typedef void (*pcap_handler)(u_char *user, const struct pcap_pkthdr *h, const u_char *bytes);

In the code above, pcap_handler is the callback. So, how do we call pcap_loop from Swift? Back in the Swift 1.x days, you had to write your callback handlers in Objective-C - then call that Objective-C from your Swift class. In Swift 2.1 we can do it using closures. A closure is simply a self-contained block of code (and Apple have made the syntax a little confusing).


// here is the syntax
{(parameters being passed to closure) -> (returnType from closure) in statements}

// and here is how we use it
pcap_loop(pcapSession, numberOfPackets,
{
    (args: UnsafeMutablePointer<u_char>,
     pkthdr: UnsafePointer<pcap_pkthdr>,
     packet: UnsafePointer<u_char>) -> Void in
            // our code goes here!
            print("packet received!")
},
nil)

So, in this case we are passing some arguments to pcap_loop. The third argument is a closure (the pcap_handler in the C typedef). The closure’s parameters are pointers that pcap fills in for us: the first receives any user data passed via the last argument of pcap_loop, the second the packet header, and the third the packet payload.

At this point, we can run our app, but all we are going to be able to do is print “packet received!” to the console. We can’t do anything stateful, as our closure is contained (scoped) and can’t access variables or properties outside its own scope.

Passing data to our closure

There are two ways we can go here. The first is to pass something into the user argument of pcap_loop. That argument is passed to the callback function, and can be a reference to either a variable or a data structure.

The other approach is to reference an object outside of the closure scope. We could reference a global variable, but that isn’t very elegant. The singleton pattern fits our need perfectly. It provides an object to maintain state of our packet capture session, and fits more with the style of Swift (vs pointer dereferencing).

Here is our singleton class:


class PacketAnalyser {

  // this is how we create a singleton in swift
  static let sharedInstance = PacketAnalyser()

  var packetCount: Int = 0;

  // This is a basic test, so lets just print the packet
  // count to the console
  func Process() {
      packetCount++
      print("Packet count " + packetCount.description)
  }
}

Here is how we call it from within our closure:


pcap_loop(pcapSession, numberOfPackets,
{
    (args: UnsafeMutablePointer<u_char>,
     pkthdr: UnsafePointer<pcap_pkthdr>,
     packet: UnsafePointer<u_char>) -> Void in

            // singleton call
            let pa = PacketAnalyser.sharedInstance
            pa.Process()
},
nil)

Now you know how to call a C library from Swift with a callback. :)

You can find the test code here: SwiftPcap on GitHub

Please leave a comment if this was helpful, or if you find any errors.

/eof

Nestable knockout web components

Web Components are the Next-Big-Thing™, the second-coming, etc etc etc. Yeah, so we’ve all been hearing about web components. How can we use them in a nestable, compatible way?

Background

A few years back the Javascript community discovered databinding. KnockoutJS (a pretty great data binding library) was the in thing. Then AngularJS came onto the scene and I stopped hearing about Knockout. Recently, people have been talking about web components. Web Components are a great way to make your code modular, more readable, and solve all your problems. Unfortunately Chrome is the only browser with a native implementation.

A search for “Web Components” takes you to Google’s Polymer framework, which isn’t a framework - but does give you Web Components. Unfortunately you need IE10 or better. Those of us working for large corporations or government departments generally have to support IE9. What to do?

Well, good news! Knockout implements Web Components - but we can’t call them web components, they are “web component inspired” components that happen to behave indistinguishably from web components, and they also happen to work in IE9. Great!

Why do I want web components?

Simply put, you can write code that looks like this:


<html>
  <body>
    <todo-list params="color: 'green'">
      <checkable-item>Write blog post</checkable-item>
      <checkable-item>De-clutter house</checkable-item>
    </todo-list>
  </body>
</html>

Nice, readable code - with nestable custom HTML elements.

How to do it

checkable-item web component

This is the view for our checkable-item web component, which will be displayed in our to-do list web component.


<!-- checkable-item-view.html -->
<div style="border:1px solid blue; margin: 5px; padding: 5px;">
  <label data-bind="text: labelText"></label>
  <input data-bind="checked: checkbox" type="checkbox" />
  <input data-bind="value: textField" type="text" />
</div>


// checkable-item-viewmodel.js
define(['knockout'], function(ko) {
    function CheckableItemViewModel(params) {
        this.labelText = params.labelText;
        this.textField = ko.observable();
        this.checkbox = ko.observable();
    }
    return CheckableItemViewModel;
});

todo-list web component

Here is our ‘container’ web component. Notice the knockout template binding in the view? That is what allows HTML elements to be passed through - the nesting of HTML within components. The $componentTemplateNodes contains the DOM nodes nested in the markup.


<!-- todo-list-view.html -->
<div style="border:1px solid red; margin: 5px; padding: 5px;">
  <strong>
    <div data-bind="text: componentText">
    </div>
  </strong>
  <!-- The line below this is where the magic happens -->
  <!-- ko template: { nodes: $componentTemplateNodes } --><!-- /ko -->
</div>


// todo-list-viewmodel.js
define(['knockout'], function(ko) {
    function ToDoListViewModel(params) {
        this.componentText = params.componentText;
    }
    return ToDoListViewModel;
});

Host Page

Here is our HTML page. Note that for it to work, it depends on requireJS, textJS, and KnockoutJS. I’ve placed my web components under a “/Components/” folder. I’ve omitted the dependencies and requireJS configuration from the example below, to keep things brief.

The important part is the component registration.


<html>
  <body>

    <h1>How to create knockout nested components</h1>

    <checkable-item params='labelText:"component that isnt nested"'></checkable-item>

    <todo-list params='componentText:"text passed to component"'>
      <checkable-item params='labelText:"nested component"'></checkable-item>
    </todo-list>

    <script>
    // this is how we register components
    ko.components.register('todo-list', {
      viewModel: { require: 'components/todo-list/viewmodel' },
      template: { require: 'text!components/todo-list/view.html' }
    });
    ko.components.register('checkable-item', {
      viewModel: { require: 'components/checkable-item/viewmodel' },
      template: { require: 'text!components/checkable-item/view.html' }
    });
    </script>
  </body>
</html>

There you have it. Go forth and create Web Components.

/eof

Finding a javascript performance gremlin

Performance issues in a JS app can be frustrating… especially when your app is a convoluted soup of data-bound observables. Today I discovered a performance gremlin. I’ll walk you through the process of finding and rectifying it.

The Symptoms

My team have been working on a large javascript application. We deployed it to a new environment, and started going through some shakedown tests. All seemed to be going well… until I loaded a certain view. Then I waited… and waited. 7 seconds later, the view loaded. I reloaded with a different set of test data. This time the wait was 12 seconds.

That view had worked fine in our lower level environments (loading in under 1 second), what was going on?

Investigation

The first order of business was to fire up the Chrome profiler. The profiler has an option for tracking heap allocations. It works by taking a snapshot of the JS objects currently in memory, and comparing it to a second snapshot taken at the end of the profile.

The profiler gives us a way to examine all objects retained (not cleaned up by the garbage collector). This is advantageous in two scenarios:

  1. We have a memory leak
  2. The app is spending an unexpectedly long time doing something

With the profiler fired up, and a profile completed, I started examining the output. The Total Cost column shows the percentage of total profiled time spent on a line of code. With this in mind, I started at the top (100%) and looked for any large sudden drops in that value. A large drop indicates that (for the given line) a function call made in that code path has consumed a large amount of your total cost.

My culprit quickly became obvious:

chrome profiler

Notice the sudden drop from 100% to 50.52%? And a bit further down we can see the second code path - at 49.26%.

In both cases, our problem line is the place() method in the bootstrap-datetimepicker.js library.

The Cause

Here is the function concerned. I’ve highlighted the problem line:


if (!this.zIndex) {
    var index_highest = 0;
    $('div').each(function () {
        var index_current = parseInt($(this).css("zIndex"), 10);
        if (index_current > index_highest) {
            index_highest = index_current;
        }
    });
    this.zIndex = index_highest + 10;
}

That line runs a jQuery selector over every single div on the page to find the highest z-index. Once it has it, the datepicker adds 10 to the value and uses that as its own z-index. Why would it do that? Simple - so the popup calendar will always display on top of any other page content.
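Stripped of jQuery and the DOM, the scan amounts to the following (a sketch only; nextZIndex is my own name, and the array stands in for the z-index values the plugin reads via $(this).css("zIndex")):

```javascript
// What place() computes: one full pass over every candidate element's
// z-index, then highest + 10. This whole pass repeats for every date
// picker on the page.
function nextZIndex(zIndexValues) {
    var highest = 0;
    for (var i = 0; i < zIndexValues.length; i++) {
        var current = parseInt(zIndexValues[i], 10); // "auto" parses to NaN and is ignored
        if (current > highest) {
            highest = current;
        }
    }
    return highest + 10;
}

console.log(nextZIndex(["100", "auto", "250"])); // 260
```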

There are some problems with this approach:

  • the date picker calendar is going to display on top of all other elements - including dialogs, overlays, or any other obnoxious controls you are using.
  • block elements other than divs can have a z-index too, so the code might not get the highest z-index anyway.
  • Single page apps typically have a ton of views in memory at any time, which are in the DOM, but not currently visible. That could be a lot of divs to iterate through!

Just how many divs could this thing be looking at?

oh dear

1746.

Now consider that the jQuery selector is being run for every date picker on the page. Life gets painful fast.

The Fix

The current version of the date picker has a way to set the z-index via a property. If we set a z-index, we bypass that problematic jQuery selector.

I went ahead and set the z-index to a suitably high value, and reloaded the view.

BAM!

The view loaded in under 1 second, and I rode off into the sunset.

/eof

My first few hours with Visual Studio Code

Microsoft have released a new editor. This is Microsoft’s take on Sublime/Atom. It looks nice, it feels fast, and it can run on OSX or Linux (as well as Windows). I’ve been using it for work this morning - here are my impressions.

Visual Studio Code

You can download it here: Visual Studio Code Preview.

The big news is that Code is cross platform. It can run natively under OSX and Ubuntu, showing just how much Microsoft has changed direction. It will also excite my OSX-using co-worker - Jason - who constantly moans about having to start his Windows vm to run Visual Studio.

First Impressions

This is what you see when Code starts:

code main

My first thought is that it looks very much like Atom - which looks very much like Sublime.

When I jump in to update settings, Code takes me to a JSON file:

code settings

Settings are updated by overriding properties in either user or workspace based settings files. Very Sublime-like, and very nice.

To the left, we have a sidebar with our file explorer, search box, git integration, and debugger. Interestingly I didn’t see any options for TFS source control.

sidebar

The explorer is split into your working files and project view. Double clicking a file will open it and place it in your working files. Code has done away with tabs and is a much better editor because of it.

The Editor

The editor has syntax highlighting (as expected). It also has fantastic intellisense support. My last few projects have been javascript based, and I’ve found that in Sublime/Atom, getting intellisense-style completion going is very hit and miss. When it does work, it has always felt like a half-hearted hand-me-down compared to the joy of Visual Studio Intellisense.

Code’s intellisense is brilliant. It can negotiate require.js dependency trees. “Go to Definition” works. Peek works. “Go to symbol” works.

intellisense

I did find one annoyance. Code wasn’t quite sure about ES6, and gave some warnings about TypeScript (which I’m not using).

es6 intellisense

Given how fast the Javascript community moves, this should be fixed fairly quickly.

Looking to the top right of the editor, we see this row of buttons:

editor buttons

The buttons are:

  • split editor
  • Changes view
  • open preview

Splitting the editor gives you two panes (side by side). The changes view displays differences compared to the checked-in version in source control - a built-in diff tool - awesome!

Preview is enabled if you are in a html or markdown file. It splits the view, and gives you a preview pane on the right which updates as you make code changes.

split view

What about column editing?

It’s in there! If you hold alt and click, you get a second cursor… and a third… and a fourth… etc:

multi-cursor

This is the main feature that pulled me into Sublime editor. Code has a decent implementation (but I do miss being able to middle-click and drag - Sublime style).

Other nifty things

Here are a few other items I noticed:

  • there is a debugger, but only for node and mono. ASP.NET 5 support is coming soon. The debugger seemed to work ok when I gave it a test with a node.js app.
  • the search pane is great - better than any other editor I have tried. I can use regex, I can exclude and include files or folders and apply a filter - and I can do all of those things at the same time, easily.
  • it is built on GitHub’s Electron shell, the same base Atom uses.
  • themes are supported

Overall

How is Code in general use? If you use Atom or Sublime, you’re going to be right at home. I haven’t found anything that has pissed me off. Code looks nice, and the defaults are sane.

The only thing I can really fault it for is the name. Try searching “Microsoft Code” in a search engine. Go on. Just try.

I like that I can take Code with me if I move to a different OS.

Code uses the same base as Atom (Electron), but unlike Atom - it doesn’t have issues of random lock ups or slow down. Code feels like a native editor. It feels fast.

In summary - I love Code! I wrote this blog post in Code, I previewed it in Code, and I checked it in in Code. If you’re a front-end developer, go get Code. Go!

/eof

Accidental NPM Packaging (and JSPM!)

Last week I created an NPM package. I didn’t intend to. I just wanted to use the Microsoft SignalR javascript client, and I was surprised to see it wasn’t listed in the Node package repository.

How hard can it be to create a package anyway?

There are two great motivators for developers. Laziness and annoyance.

  • Laziness: “I bet I can automate this, so I can spend more time doing interesting things”.
  • Annoyance: “Why isn’t there an NPM package for this? What is this, the dark ages?!”.

I found myself annoyed when I decided to use Microsoft’s SignalR Javascript client. There was no JSPM or NPM package.

The package manager I was using is JSPM - a wonderful bundle of joy that can use either NPMs or git repos as a package source. Let’s make packages!

Pre Conditions

You need:

  • npm (node package manager)
  • jspm (javascript package manager)
  • a git client
  • github & npm accounts

Make a JSPM github package


c:\github> mkdir mypackage
c:\github> cd mypackage
c:\github\mypackage> git init

Add a .gitignore file, and set up your github remote repo as your origin.

Now you need a package.json file. This file defines what your package is, what it depends on, who maintains it, and where it lives. It can define a lot of other things too, as defined by the package.json spec. We can create one with npm.


c:\github\mypackage> npm init

name: (mypackage)
version: (1.0.0)
description:
entry point: (index.js)
test command:
git repository:
keywords:
author:
license: (ISC)
About to write to C:\github\mypackage\package.json:

{
  "name": "mypackage",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}


Is this ok? (yes)
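After filling in the blanks, a complete package.json might look something like this (every value below is a placeholder - substitute your own name, description, and repository URL):

```json
{
  "name": "mypackage",
  "version": "1.0.0",
  "description": "A one-line summary of what the package does",
  "main": "index.js",
  "repository": {
    "type": "git",
    "url": "https://github.com/youraccount/mypackage.git"
  },
  "keywords": ["example"],
  "author": "Your Name",
  "license": "MIT"
}
```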

Edit your package.json to suit your project. Run it through a json validator. Once you’re happy with it, commit and push it to github:


c:\github\mypackage> git commit -a
c:\github\mypackage> git push origin master

Now you can test it. Try adding it to another project, directly from your github repo:


c:\github\anotherproject> jspm install github:[youraccount]/[your repo]

Ok, hopefully that went well.

Create NPM package & add to NPM repo


c:\github\mypackage> npm publish

Update NPM package

/eof

Animating Aurelia

Have you heard of Aurelia? It’s Rob Eisenberg’s new framework. If you travelled forward in time two years and wrote a framework based on cutting edge tech, Aurelia would be the result. Aurelia is built on technology so new that the standards haven’t been ratified yet. I’m talking ES6, ES7, transpiling, a proper module loader, lambda expressions (arrow functions), computed properties, and more.

Aurelia

Now let’s use this massively powerful framework to animate a few boxes.


2015-04-28 Article Updated

This article had some incorrect code listed, and was not making proper use of the Aurelia-Animator. I have corrected the text and updated the code samples. Thanks to commenter “firens” for giving me a heads up.


Applying CSS Animations in Aurelia

What we are going to do is this:

Have new databound elements fade in as they are added to the DOM, using CSS transitions. Then, we will update a counter and animate it.

To achieve our goal, we start by installing Aurelia’s animation library. Open a command prompt and add it to your project as follows:


$ jspm install aurelia-animator-css

If you haven’t seen jspm before, check it out. It isn’t limited to jspm packages: jspm can also install and manage npm packages, as well as pulling modules down from github repositories.

CSS Animations

If you’re unfamiliar with CSS animations - don’t worry. They are straightforward. You define the key points in the animation - the starting state (0%) and the end state (100%) - and apply CSS styles for those key points. The browser will interpolate the rest. Here are the definitions for our “fade-in” and “flash” animations:


@keyframes fade-in {
  0%   { opacity: 0; }
  100% { opacity: 1; }
}
/* for compatibility with older browsers */
@-webkit-keyframes fade-in{
  0%   { opacity: 0; }
  100% { opacity: 1; }
}

@-webkit-keyframes flash {
  0%, 50%, 100% {
    opacity: 1;
  }

  25%, 75% {
    opacity: 0;
  }
}

/* Credit goes to http://daneden.github.io/animate.css/ */
@keyframes flash {
  0%, 50%, 100% {
    opacity: 1;
  }

  25%, 75% {
    opacity: 0;
  }
}

To use our new animations we need to create a CSS class to apply to our new DOM elements, and we need to define a class to animate our counter.


.fade-in-box {
  -webkit-animation: 0.5s fade-in;
  animation: 0.5s fade-in;

  border: 1px solid black;
  background: yellow;
}

.au-attention {
  -webkit-animation: 0.5s flash;
  animation:  0.5s flash;
}

Aurelia .js class

This is our view model.


import {inject} from 'aurelia-framework';
import {CssAnimator} from 'aurelia-animator-css';

// Use this decorator to inject the animator
@inject(CssAnimator)
export class ListExample{

  heading = 'List Example';

  listItems = [
    { listItem: 'pencils', qty: 2 },
    { listItem: 'glue', qty: 1 },
  ];

  // The inject decorator needs an appropriate constructor
  // to inject the animator.
  constructor(animator) {
    this.animator = animator;
  }

  addListItem() {
     this.listItems.push({listItem: 'packing tape',  qty: 1 });

     // The removeClass method returns a promise. We can use "then"
     // to chain it to addClass. This allows us to 'toggle' the class on the
     // element - and fire the animation. Note the arrow function: without it,
     // addClass would run immediately instead of after removeClass completes.
     this.animator.removeClass(this.elGridCount, 'au-attention')
        .then(() => this.animator.addClass(this.elGridCount, 'au-attention'));
  }

}

The code above creates a list of items and a button - which we bind our view to. Notice the nice ES6 syntax?

Aurelia .html view

This is our view.


<template>
  <section>
    <h2>${heading}</h2>

    <div><button click.trigger="addListItem()">Add List Item</button></div>

    <div role="grid">
      <div repeat.for="row of listItems" style="display: flex;">
        <div style="flex: 1 auto;" class="fade-in-box" >${row.listItem}</div>
        <div style="flex: 1 100px;" class="fade-in-box" >${row.qty}</div>
      </div>
    </div>

    Count: <div ref="elGridCount">${listItems.length}</div>

  </section>
</template>

In the view, we’ve defined a list, and used “repeat.for” to bind it to our view model list items. The button is bound to addListItem().

We’ve also added a count of our list items. Note the “ref” attribute on the counter. This attribute is used in the model to reference the element in the view.

Click!

When the user clicks the button, the following events occur:

Events related to new item addition

  1. The view model method addListItem() adds a new item to the list.
  2. Aurelia observes the new item being added, and triggers a DOM insert to add it to the grid.
  3. The browser begins the fade in animation for the new element

Events related to the counter

  1. The view model method addListItem() calls the aurelia animator’s removeClass method, then chains addClass via the returned promise.
  2. The Aurelia-Animator removes the “au-attention” class (if it exists), then follows the next function in the promise chain (addClass).
  3. The Aurelia-animator adds the “au-attention” class to the “elGridCount” element in the view.
  4. The browser renders the “flash” animation referenced in the “au-attention” css class.
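The toggle in steps 1-2 is worth dwelling on. Here is a minimal stand-alone sketch (fakeAnimator is a stand-in of my own, not the real CssAnimator) showing why addClass is chained through the promise returned by removeClass:

```javascript
// A stand-in for CssAnimator: both methods return promises, as the real
// animator does. Chaining addClass via then() guarantees the class has been
// fully removed before it is re-added - which is what re-triggers the animation.
const fakeAnimator = {
  classes: new Set(),
  removeClass(el, name) {
    return Promise.resolve().then(() => this.classes.delete(name));
  },
  addClass(el, name) {
    return Promise.resolve().then(() => this.classes.add(name));
  }
};

// toggle the class, the same shape as addListItem() uses
fakeAnimator.addClass(null, 'au-attention')
  .then(() => fakeAnimator.removeClass(null, 'au-attention'))
  .then(() => fakeAnimator.addClass(null, 'au-attention'))
  .then(() => console.log([...fakeAnimator.classes])); // [ 'au-attention' ]
```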

Pretty cool, huh? Some subtle, unobtrusive animations can really make a page seem slick. Just don’t go overboard, or you’ll take us back to the hellish days of animated gif wall papers.

You may be thinking “all the animator has done is add a css class to an element - the browser handled the animation part. What is the point of the animator?”.

Good question. The animator:

  • gives us events (such as animation start/end)
  • allows us to chain multiple animations together using promises
  • can also set/change the speed of animations
  • can stagger animations - one after the other

The animator is built on Aurelia’s animator service - which provides an animation interface. The interface allows alternative implementations to be swapped in; for example, the famo.us animation library could be integrated. Exciting times ahead!

Aurelia-animator Supported Events

The DOM events supported by Aurelia’s animator are:

  • enter - Element enters view
  • leave - Element leaves view
  • removeClass - Class removed from DOM
  • addClass - Class added to DOM

The “move” event is defined, but not currently supported. It will return a failed result if you try.

/eof

3 Awesome New C# Features

A new version of C# is on the way. Here are 3 new features to make your life easier.

New additions in C# 6 are focused on making developers more productive - getting more done with less code. Let’s start with a nice new operator.

Null Propagation Operator

Most C# devs will be familiar with this sort of code:


if ((businessThing != null) && (businessThing.OtherThing != null)) {
  if (businessThing.OtherThing.NestedThing != null) {
    businessThing.OtherThing.NestedThing.DoThings();
  }
}

A common repetitive task is to check that an object is not null, and if it is not, do something with one of the object’s properties.

Now we have this shiny new operator:

?.

This is the null propagation operator. It null checks a member and, if the check passes, accesses it. If the check fails, the rest of the statement is skipped and program execution continues without throwing an exception.

Our ugly block of code above, becomes this:

businessThing?.OtherThing?.NestedThing?.DoThings();

It really improves readability, as well as making your code more concise.

Nameof Expression

This tells us the name of a variable. Here’s a use case:


public void PrintFavourites(string animal, string food) {
  // using nameof
  if (animal == null)
    throw new ArgumentNullException(nameof(animal));

  // using magic strings, bad
  if (food == null)
    throw new ArgumentNullException("food");

  // do stuff
}  

Cool huh?

Auto Property Initialisers

Auto properties are great, but up until now we couldn’t auto-initialise them, which is less than great for a read-only property. Currently we need to:

  1. Create backing field
  2. Initialise the backing field in our constructor
  3. Explicitly create the property, referencing the backing field

C#6 brings us auto property initialisers, allowing declaration and initialisation in a single line:

public string Bob { get; } = "Dave";

Bam. Done.

How long do I have to wait?!?!

C#6 will be shipping with VS2015, which recently had its developer preview released. The release date isn’t finalised yet.

You can get more details on C#6 here, and more detail on VS2015 here.

/eof

SPA within a SPA? Durandal child routers

My current project had an interesting requirement come through: “We want the app to navigate to another single page app when the user clicks next on this page, but still have our widgets at the top”. The client wanted to utilise an existing Durandal SPA inside a new Durandal SPA.

Note
This post relates to the Durandal single page app framework. If you are not working with Durandal you are probably going to find it very dull.

This is the requirement I was given:

Ez-Anchor

A SPA within a SPA using Durandal. Why? Well, we have some existing SPAs that the client wants to reuse, but they need to run within a new SPA - as part of a multi-step process.

There are two ways we can achieve this:

  1. Use Compose and directly load the view in your current SPA
  2. Use a child router

Using compose is easier and faster, but it is only useful if you want a single view. If you have a bit of complexity in the second SPA, child routers are going to give you far more flexibility.

Child routers?

Exactly what they sound like! A router that is logically a child of another router. Child routers allow you to include an entire navigation structure from one SPA in a second SPA. As an added bonus, you can raise events from one SPA to the other.

As I have multiple pages and some complexity to deal with, I’ve gone with a child router (rather than trying to jam in multiple views with compose).

Here is my folder structure:

  • Website
    • Apps
      • SPA1
        • Views
          • spa1_page1.html (spa2 host)
          • spa1_page2.html
          • spa1_shell.html
        • ViewModels
          • spa1_shell.js (router is here)
          • spa1_page1.js (child router is here)
          • spa1_page2.js
        • DataModels
      • SPA2
        • Views
          • spa2_view1.html
        • ViewModels
          • spa2_view1.js
        • DataModels
      • Shared
      • Services

Here is the sequence of events when the user loads the SPA:

  1. User navigates to app/SPA2/page1.
  2. The router notices that the route matches a splat route: /SPA2/*. The splat route points to a page containing a child router. The router is intelligent enough to determine that it should hand off to the child router.
  3. The second SPA hosted on the page attempts to load its default route.

Easy as that.

Notes

  • Your child router should logically be set up as a sub folder (meaning if your site is logically under /, your child router will need routes for /derp/) - if you try to use a flat structure, you are going to see “route not found” a lot, even when it should work. If you do get it to work, you may find that when you try to navigate out of your child SPA you get stuck in the child SPA, ending up with nested child SPAs. It’s like dividing by 0, and can cause the end of the world. No one wants that.
  • SPA2 needs to be hosted within a block element in a view in SPA1.
  • The child router can be configured to prepend a path to all route destinations, allowing them all to be relative to a given page.
  • You may need to update your require.js conf to allow any services needed by SPA2 to be resolved.
  • If both SPAs rely on different libraries with the same name (for instance, both have a common.js within the SPA, with different content) you are going to have to refactor a little and give these libraries different names/Ids. Otherwise you will end up in dependency hell.
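For instance, the require.js tweak from the notes above might look something like this (the path names and module ids here are hypothetical - adjust them for your own folder structure):

```javascript
// require.js config in SPA1 - make SPA2's modules resolvable from the host SPA.
// Paths below are examples only.
requirejs.config({
    paths: {
        'spa2-services': '../apps/SPA2/Services',
        // both SPAs shipped their own common.js - SPA2's copy gets a distinct id
        // to avoid the dependency hell mentioned above
        'spa2-common': '../apps/SPA2/common'
    }
});
```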

Code Sample

Here is the code we need to make it all work.

spa1_shell.js (Main Router)


define(function (require) {
  "use strict";

  var router = require('plugins/router');
  var system = require('durandal/system');

  return {
    activate: activate
  };

  function activate() {

    var routes = [
        // Default view is the one with the empty string route
        { route: '', moduleId: 'Splash', title: 'Splash', nav: true },
        { route: '/SPA2/*', moduleId: 'PersonalDetailsHost', title: 'PD SPA', nav: true }
    ];

    return router.map(routes)
      .buildNavigationModel()
      .activate();
  }
});

spa1_page1.js (Child Router)


define(['plugins/router', 'knockout'], function (router, ko) {

  var childRouter = router.createChildRouter()
      .makeRelative({
        moduleId: '../../../../apps/SPA2/viewmodels',
        fromParent: true
      }).map([
            { route: '*SPA2', moduleId: 'spa2_view1', title: 'This is SPArta' }
      ]).buildNavigationModel();

  function continueClick() {
    // handle the "continue" click from the hosted SPA
  }

  return {
    router: childRouter, // the property on the view model must be called "router"
    continueClick: continueClick // we still want to capture this event
  };
});

spa1_page1.html (Second SPA host)


<h1>SPA 1</h1>

<p>Look, I could be a SPA 1 widget</p>

<!-- the child router renders SPA2 inside this block element -->
<div data-bind="router: { transition: 'entrance' }"></div>

/eof

Yes, the doc-type is still important

or… “why don’t my before/after pseudo-elements work in IE8?”. It may be 2014, but doctypes still matter.

If you have been a web developer longer than 3 minutes, you may remember the HTML doc type tag at the top of all your HTML markup. You probably just set it to <!DOCTYPE html> - and moved on with your life. That’s usually fine, but not always.

How I got burned today

For the last couple of weeks, I’ve been rectifying some HTML written under Dot Net 1.0 (yes, you did read that correctly). This code gets used by thousands of people every day. It is mostly reliable, but it is not accessible, and due to a change in government policy - my team needs to make it accessible (WCAG2 AA compliant).

Have you seen the movie Inception? This app is like that, but with never-ending levels of nested tables. One of my co-workers replaced most of the tables with DIVs, but the jury is out on whether that made things better or worse.

While bringing the CSS up to modern standards, I used the CSS :before pseudo-element. The style was quite simple:

.spiffy-form input.required-field:before {
    content: "*";
    font-weight: bold;
}

In Firefox, I had a nice star displayed.

Strangely, IE8 had no star.

IE8 does support the :before and :after pseudo-elements, so what was going on?

It turns out that IE8 doesn’t support :before and :after if its rendering engine is in quirks mode.
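You can check which mode the browser has picked from the developer tools console via document.compatMode. A small sketch of the logic (the helper function name is mine, not a browser API):

```javascript
// document.compatMode is "CSS1Compat" in standards mode,
// and "BackCompat" when the browser has dropped into quirks mode.
function renderingMode(compatMode) {
    return compatMode === "CSS1Compat" ? "standards" : "quirks";
}

// In the browser console: renderingMode(document.compatMode)
```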

Feeling Quirky

In the dark days of the web, IE and Netscape Navigator rendered things a little differently to each other. Each browser had a partial implementation of the HTML (and later, CSS) specification - as well as custom extensions to it. This continued well into the time when Netscape died and Firefox emerged.

The IE team had to move closer to standards compliance, but couldn’t risk breaking thousands (millions?) of sites to do so. The solution from the IE team (and later, other browsers) - was to implement “Quirks mode”. For IE, this means loading an earlier version of the rendering engine - one that has a bad/old/loose HTML compliance. You essentially get IE 5.5 with support for a few later features jammed in.

I’m going to emphasise that:

Quirks Mode == Internet Explorer 5.5

Obviously we don’t want to be using IE5.5 - because IE5.5 is crap.

Moving Forward

If it is a new(ish) site/app, go for the HTML5 doctype:

<!DOCTYPE html>

If you are updating some godawful mess from 2003, you are going to want the HTML4 doctype with loose standards compliance:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">

If you want to stick with HTML4, but you care about accessibility, this doctype will let you use ARIA attributes:

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML+ARIA 1.0//EN" "http://www.w3.org/WAI/ARIA/schemata/html4-aria-1.dtd">

Apply any of those doctypes, and your before/after pseudo-elements will start working. You will also have a more maintainable site and more consistent rendering.

Happy coding!

Further Reading

This great site goes into various DocTypes and their effects in a good level of detail.

/eof

Quick mobile optimisation for your blog

Here are some quick changes to make your blog (or other site) more readable on mobile devices. TL;DR: viewport and media queries.

Where to begin?

Here is my starting point:

Ugly mobile site

Straight away, we can identify the following issues:

  • The title is too small and hard to read
  • There are excessive side margins
  • The columns are too small to be readable

Not a good impression for a first time visitor.

Fix the Viewport

The viewport meta tag was originally introduced in Safari, by Apple. It is not part of any standards, but all major browsers now support it.

In a nutshell, the viewport meta tag allows developers to control the size and the scale of the web site display area. Mobile browsers have long used a “view port” to render the visible portion of a page. On larger websites, this allowed the viewport to be larger than the actual device - enabling the user to pan and zoom. This was browser controlled, and was usually a good thing. As most websites (at the time) were designed with 1024px wide screens as a minimum - the viewport enabled the user to have a good web experience.

Things were fine - until Apple introduced the Retina display. This upped the pixel density and other manufacturers soon followed. Websites that looked fine on mobile were suddenly one third smaller.

To deal with this, the CSS 2.1 spec included this line…

If the pixel density of the output device is very different from that of a typical computer display, the user agent should rescale pixel values

In other words, if a device has a display with a high pixel density, the browser should scale the page to allow it to display correctly.

Add this line to the HEAD section of your page:


<meta name="viewport" content="width=device-width, initial-scale=1.0">

The line above enables scaling to occur. Once we provide an initial scale, the device can scale as determined by device resolution and size (a high-res phone, for example, may have a scale of 1.75 device pixels = one “CSS” pixel).

Additionally, we have given the viewport a starting width based on the device screen width.

Please note that you should only use viewport if your site is responsive.

Now that our view port is properly set up, we can use Media Queries…

Media Queries are great

Media queries are very, very awesome. They allow us to target our CSS according to screen size. To see how useful this is, check this out. Say we have some CSS in our main content area that adds 50px of padding to each side:


.padSides {
  padding-left: 50px;
  padding-right: 50px;
}

Well crap, now our mobile viewers have padding that is too big. Let’s fix it by adding a media query:


@media (min-width: 800px)
{
  .padSides {
    padding-left: 50px;
    padding-right: 50px;
  }
}

The @media part indicates a media query. It is followed by a condition. In this case, the style(s) will be overridden when the viewport is 800px wide or greater. Remember that order is important in CSS. You need to put your media-query rules after your default rules - otherwise the media query rules will never be applied!
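If it helps to see the cascade as code, the effective padding works out like this (a plain-javascript sketch of the rule above, purely for illustration - not something you would ship):

```javascript
// Default rule gives no side padding; the media query overrides it at >= 800px.
function effectiveSidePadding(viewportWidthPx) {
    var padding = 0;              // default .padSides rule (mobile)
    if (viewportWidthPx >= 800) { // @media (min-width: 800px) override
        padding = 50;
    }
    return padding;               // in px
}
```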

Media queries can have rules for:

  • width/height
  • device type (TV or Handheld for example)
  • device orientation
  • and more…!

The Result

What does my blog look like now?

Nice looking mobile site

Much better, eh?

Note: To get the final result I fixed a few other items in my media query CSS (title sizes, columns). I have excluded the details to keep it simple.

References

/eof

Get Angular Running in 5 minutes

Want to try some Angular development? Need to get a dev environment set up? With a couple of automation tools, I’ll have you up and going in 5 minutes.

Prerequisites

You need the following two apps installed:

  1. Git
  2. Chocolatey package manager.

Instructions

We are going to use the Angular Seed project template.

Fire up a command window, then run the following commands:

cd\git   (change this to the folder you intend to use)
git clone https://github.com/angular/angular-seed.git
cd angular-seed
cinst nodejs.install
"c:\Program Files\nodejs\npm" install
"c:\Program Files\nodejs\npm" start

You should now be able to open http://localhost:8000/app/ in your web browser.

What just happened?

git clone https://github.com/angular/angular-seed.git

This cloned Angular Seed to your local drive.

cinst nodejs.install

Chocolatey installed nodejs and npm (node package manager).

"c:\Program Files\nodejs\npm" install

NPM found the node based project in your current folder, and automatically installed the node dependencies.

"c:\Program Files\nodejs\npm" start

NPM launched the node.js web server, which serves the app at http://localhost:8000/app/

Too easy! :)

For a full run down of how to use the template, go here: angular-seed on github.

/eof

Developing for Screen Readers is a special kind of hell

Picture the scenario… your manager asks you to add support for assistive technologies to your shiny new SPA (Single-Page-App - a development pattern that can be roughly described as a highly responsive javascript explosion). Your client wants to implement the WCAG2 AA accessibility standard.

You are going to pull out your IDE, make a few tweaks to the app, and the screen reader will happily read your web application out to appreciative users.

After adding a few ARIA tags, you fire up a screen reader….

You’ll likely discover that your screen reader will miss half of your content, read out things it shouldn’t, and spend a surprising amount of time making funny noises on punctuation symbols.

The Internet Sucks for Blind People

Until I started running a screen reader over a few random sites, I had no idea just how much the internet really really sucks for visually impaired users.

This is the internet for blind people:

How the internet feels for blind people

Most websites are not structured properly (those H1, H2, H3 in the wrong places, tab order not used, etc), and they have terrible keyboard navigation. You might be surprised at just how many websites have buttons and widgets bound to mouse clicks and no keyboard bindings.

Blind users often have to switch between multiple screen readers or browsers on the same website. Imagine how annoying that must be.

Scott Hanselman has an excellent interview with a blind technologist - Katherine Moss. Listen to it here. Katherine gives a good account of what the internet is like for the visually impaired.

How a screen reader works

A screen reader (such as JAWS or NVDA) is a complex beast. I had always assumed a screen reader would work by using a browser hook to retrieve a rendered HTML document, then parse it and iterate through all the visible text on the page. This is not quite right.

Here is how it actually works (this example uses a javascript triggered change):

Current state of accessibility

In the browser….

  1. A page item is updated
  2. As a result, the DOM is updated
  3. The DOM change might raise an event to the Browser’s accessibility API, as long as:
     • the browser supports the associated accessibility API (such as an ARIA tag on the item)
     • the change involves something we care about (hidden objects, for example, are ignored)
     • the browser doesn’t decide to randomly ignore it - the same type of DOM change that raised an event once may not raise it again, which is very frustrating
  4. If an accessibility event is raised, it is passed to subscribed screen readers or other tools.

In the screen reader…..

  1. Do we understand this API call? If not, discard it.
  2. Is the user configured to hear this event? If not, discard it.
  3. Is the context still relevant, or has the user moved past it? If not, discard it.
  4. Read it.
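The screen reader’s side of the pipeline can be sketched as a simple filter chain. This is only an illustration of the steps above (the object shapes are invented, not how any real screen reader is implemented):

```javascript
// Decide whether an incoming accessibility event gets read out.
// "event" and "user" are hypothetical shapes used only for this sketch.
function shouldRead(event, user) {
    if (!event.understood) return false;                // 1. unknown API call
    if (!user.hearsEventType(event.type)) return false; // 2. user configuration
    if (!event.contextStillRelevant) return false;      // 3. user has moved past it
    return true;                                        // 4. read it
}
```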

There are a whole lot of events that can stop a screen reader from reading something. As mentioned at step 3, sometimes the browser behaviour is inconsistent (I’m looking at you, Chrome) - at other times the screen reader just gets into a bad state and needs to be restarted.

… and along came Single Page Apps

Things get worse as we mix in more javascript.

Screen readers come from those quaint old times when javascript was only used for the occasional dialog and pages were generally static. In recent years, we have seen the rise of AJAX, JQuery, and the Single Page App pattern.

Consider the following: Your nice new single page app is dynamically loading a widget. You want to inform the user when that widget is ready. In theory, aria-live attributes should take care of it. In practice, browser support is patchy.

You would think that there is a method to have a screen reader read a given piece of text via a javascript call.

Nope.

There is no way to directly raise an event to the browsers accessibility API. Instead, you have to do something like this:


<style>
/* screen reader only - hide from everyone else */
.sr-only {
    position: absolute;
    width: 1px;
    height: 1px;
    padding: 0;
    margin: -1px;
    overflow: hidden;
    clip: rect(0,0,0,0);
    border: 0;
}
</style>

<div class="sr-only" id="screen-reader-text">
 <span role='alert' aria-live='assertive'>Accessibility Helper</span>
</div>

<script>
function readMessage(msg)
{
    $('#screen-reader-text').empty();
    $('#screen-reader-text').append("<span role='alert' aria-live='assertive'>" + msg + "</span>");
}

readMessage("Something happened");
readMessage("Something else happened");
</script>

The code above initialises an aria-live area (which is hidden off screen via CSS). We then clear it and re-insert a SPAN element to hopefully trigger an event the accessibility API will catch.

This is… less than ideal.

How to fix it

In some instances, it makes sense for a developer to directly interact with the browser’s accessibility API. Why not give them a means to do so?

Extend the BOM (Browser Object Model), and expose the accessibility API, allowing developers this additional path:


    // don't actually try this, because it won't work
    browser.accessibility.readtext("Your phone number has been updated");

It could work like this:

Current state of accessibility

The diagram shows how a developer could use Javascript, calling the BOM to directly signal the Accessibility API. This new feature would not replace the existing accessibility functionality - it would complement it. The screen reader would still make the final decision on how to present those events to the user.

The reason we do not already have this ability is ideological, not technical. The committee behind these accessibility standards holds the view that a vision-impaired user should be served exactly the same content as a sighted person - with some additional HTML attributes for a screen reader.

That is nice in theory, but as technology and patterns change and move forward, there will be more and more instances where messages/events do not reach the browser’s accessibility API.

Many developers will find hacky work-arounds. Many won’t bother. Either way, Accessibility will suffer.

How about extending some trust to developers, and giving us the option to directly message the Accessibility API when we need to? That way we can provide the best possible experience to our users - which is what application development is all about.

References

/eof

Every developer should learn regex

Being able to write Regex (otherwise known as Regular Expressions) is a skill every developer should learn. You can use Regex to find very specific text strings, to reformat files, and to do very specific replacements.

When combined with a good text editor, you are unstoppable.

Developers from the *nix world are usually familiar with regex - they have had the sed/awk commands at their fingertips for a very long time. Those with Perl experience are usually regex gurus as well. Perl seems to be a scripting language built on the premise that regex needed a friendly “wrapper”, and Perl developers tend to use regex to the point of insanity (“hey, look at what I can do in a single expression!”). There’s no hell quite like maintaining another developer’s uncommented Perl code.

Microsoft stack developers, (on the other hand), generally only learn about regex when they are watching a coworker doing some find replace in a file. The initial reaction is always great.

Developer1> *triggers the regex*
Developer2> *eyes go wide*
Developer2> "How did you do that?"
Developer1> "What?"
Developer2> "You know, that..! Replace all those lines at once!"

Developer2 then spends the next hour researching regex and badgering Developer1 for more information.

Back when I was Developer2, regex support used to be pretty rare in text editors/IDEs. These days pretty much any editor (other than Notepad.exe) supports it right out of the box. Here are some details on the common editors:

Visual Studio was - until recently - a special case. All versions up to and including Visual Studio 2010 used a bizarre custom syntax that was originally aimed at C++ developers. If you are using 2010 or earlier… don’t bother using the built in regex. Just don’t. Copy your text to a different editor with sane regex support and run your expressions there.

Visual Studio 2013 on the other hand (and 2012) both have proper regex support.

Regex is hard

As much as I love Regex, there is one thing you should be aware of. Regex can be hard. Very hard. Sure, a basic find/replace is easy… but at some point you will want to do more. You’ll end up with character classes to restrict matched characters, you’ll want to negate matches, you’ll learn what the term greedy means in the context of pattern matching.

Regex golf

When you are writing and debugging regular expressions, keep this in mind:

It should not take longer to write and debug your expression than it would take you to manually do the find/replace.

If it is taking that long - stop. Take a breather, and consider tackling the problem another way. Regular expressions are supposed to save you time and effort compared to manually replacing text.

Here is a practical example of using regex:

Transform comma delimited data into SQL inserts

I’ve used this exact case many times before. Given some comma delimited data, like this:


    34930,Bob,Jones,23 Developer St,Baker
    94084,Anne,Jackson,11 Side St,Plumber
    90385,Jean,Gray,5 Hollywood Place,Engineer

We can use a regular expression to transform it into SQL inserts.

Here is our match expression:


^(.*),(.*),(.*),(.*),(.*)\r\n

Here is our replace rule:


INSERT INTO ContactDetails (CardNumber,FirstName,LastName,Address,Occupation) Values ($1,'$2','$3','$4','$5')\r\n

Here is the result:


INSERT INTO ContactDetails (CardNumber,FirstName,LastName,Address,Occupation) Values (34930,'Bob','Jones','23 Developer St','Baker')
INSERT INTO ContactDetails (CardNumber,FirstName,LastName,Address,Occupation) Values (94084,'Anne','Jackson','11 Side St','Plumber')
INSERT INTO ContactDetails (CardNumber,FirstName,LastName,Address,Occupation) Values (90385,'Jean','Gray','5 Hollywood Place','Engineer')

Regex window in Visual Studio
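If your editor’s find/replace ever falls short, the same transformation is easy to script. Here is a sketch in javascript (using \n line endings - adjust the pattern for \r\n if your data has them):

```javascript
// Turn comma-delimited rows into SQL INSERT statements with one regex replace.
var input =
    "34930,Bob,Jones,23 Developer St,Baker\n" +
    "94084,Anne,Jackson,11 Side St,Plumber\n" +
    "90385,Jean,Gray,5 Hollywood Place,Engineer";

var sql = input.replace(
    /^(.*),(.*),(.*),(.*),(.*)$/gm,
    "INSERT INTO ContactDetails (CardNumber,FirstName,LastName,Address,Occupation) " +
    "Values ($1,'$2','$3','$4','$5')"
);

console.log(sql);
```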

So, where to from here, for the regular expression newbie?

Guides for getting started

These links should give you a good starting point to learn regex:

If you know a developer who doesn’t know regex - teach them! Spread the love. It’s too cool a skill to let other developers miss out on.

/eof

Create a tag cloud using Jekyll

I’ve recently switched from Blogger to Github pages, and I’m using the very cool Jekyll app to manage my blog posts. For those that don’t know, Jekyll allows you to edit your blog posts locally using Markdown syntax, then it translates your posts to HTML. From there, you commit them to a git repository.

One feature missing from Jekyll is tag clouds. I will show you how to add one.

Create a Jekyll Helper


To do the grunt work, we need to create a Jekyll helper. In your _includes/JB folder, create a new file called “tag_cloud”.

Copy and paste this into the file:


{% raw %}
    {% comment %}
        Creates a tag cloud on your page.
    {% endcomment %}
    {% for tag in site.tags %}
      <span class="tag-cloud-{{ tag | last | size | times: 10 | divided_by: site.tags.size }}">
        <a href="{{ BASE_PATH }}{{ site.JB.tags_path }}#{{ tag | first | slugize }}-ref">{{ tag | first }}</a>
      </span>
    {% endfor %}
{% endraw %}

There is nothing too clever here. We just iterate through all the tags on your site, then output a span with a link in it. To give a different weighting to each tag, we:

  1. Get the percentage the tag makes up of the total
  2. Round down to the nearest 10 (eg, 43% becomes 4)
  3. Append that number to the style name. In this case, 43% becomes 4 becomes “tag-cloud-4”
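In plain javascript, the weighting calculation from the Liquid template boils down to this (the function name is mine, purely for illustration):

```javascript
// Mirrors the Liquid filter chain: tag | last | size | times: 10 | divided_by: site.tags.size
// i.e. (number of posts with this tag * 10) / (number of distinct tags), integer division.
function tagCloudBucket(postsWithTag, totalTags) {
    return Math.floor((postsWithTag * 10) / totalTags);
}

// A tag used on 4 posts, on a site with 10 distinct tags, lands in bucket 4 -> .tag-cloud-4
```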

That oddly named filter - slugize - converts the human-readable tag name into a URL-friendly slug.

Add the styles to your CSS file

Go to your CSS file (check your themes folder, in my case it was styles.css). Append the following to the bottom of the file:


/* tag cloud */
.tag-cloud-0 { font-size: 1em; }
.tag-cloud-1 { font-size: 1.5em; }
.tag-cloud-2 { font-size: 1.5em; font-weight: bold; }
.tag-cloud-3 { font-size: 2em; }
.tag-cloud-4 { font-size: 2em; font-weight: bold; }
.tag-cloud-5 { font-size: 2.5em; }
.tag-cloud-6 { font-size: 2.5em; font-weight: bold; }
.tag-cloud-7 { font-size: 2.75em; }
.tag-cloud-8 { font-size: 2.75em; font-weight: bold; }
.tag-cloud-9 { font-size: 2.75em;
    font-weight: bold;
    font-style: italic;
}
.tag-cloud-10 { font-size: 3em; }
/* end tag cloud */

Feel free to tweak these styles to change the look and feel of your tag cloud.

Now update your template

Now you can add the tag cloud to your page. Open your default.html template file. Place the following where you want the tag cloud to appear:


{% raw %}
    <div>
        <h2 class='title'>Tag Cloud</h2>
        {% include JB/tag_cloud %}
        <div class='clear'></div>
    </div>
{% endraw %}

Now you can build your site, and enjoy your shiny new tag cloud. :)

/eof

Create a Single Page App in 2 Minutes using Ember App Kit

Single Page Apps (SPAs) are the current flavour of the month. They have seemingly appeared from nowhere, and now it seems like every developer is talking about them. Wikipedia describes the pattern as follows:

A single-page application (SPA), also known as single-page interface (SPI), is a web application or web site that fits on a single web page with the goal of providing a more fluid user experience akin to a desktop application. In a SPA, either all necessary code – HTML, JavaScript, and CSS – is retrieved with a single page load, or the appropriate resources are dynamically loaded and added to the page as necessary, usually in response to user actions.

In my experience they offer some advantages over traditional apps:

  • Very responsive
  • Rapid development time
  • Data is usually provided via a REST API - making integration to other systems easy

along with some disadvantages:

  • Usually implemented with javascript (with all the baggage it brings, such as horrendous function-passing syntax everywhere)
  • Can be harder to debug - you can end up in dependency hell
  • Common tasks such as continuous integration are harder than with more mature patterns such as MVC (example: making javascript unit tests play nice with TFS).

The high responsiveness alone makes SPAs worth investigating. Responsive web apps == happy users!

Now I’m going to show you how to build one.

Ok, I’m sold. What now?

So, where do we start? The SPA (and javascript) community is a fast-moving place. I have lost track of the number of new templates and libraries in use… Knockout, Angular, Durandal, Ember, Bootstrap, jQuery, etc, etc, etc. You can attempt to roll your own SPA from the ground up, or you can use a ready-made project template - such as the Ember App Kit.

Ember App Kit gives you:

  • A nice project structure
  • GruntJS automation for minification, compilation, and deployment
  • Ember SPA framework - with Handlebar templates, routing, controllers
  • Unit tests with QUnit
  • ECMAScript 6 (ES6) modules (this is very, very cool - a tool called a transpiler is used. ES6 takes javascript into the realm of modern languages, with a proper object model and sane module support)

Essentially, this kit gives us an out-of-the-box, ready-to-go application. You can use it as a starting point to customise and build on for your own application.

Let’s get started

  1. Go to Ember App Kit’s GitHub page and either download the template from https://github.com/stefanpenner/ember-app-kit/archive/master.zip (and unzip it to a folder) or clone the repo.
  2. Install Node JS if you don’t already have it. Ember App Kit uses Grunt to automate various tasks, and grunt uses Node.
  3. Once you have Node, open a command window and do a global install of Grunt using the following command: npm install -g grunt-cli
  4. Now install Bower. Bower is a client side package manager: npm install -g bower
  5. Still in the command window, change to your new folder from step one. Install your npm dependencies: npm install
  6. Install your Bower dependencies: bower install

Now run the app

The following command will run your app in debug mode and watch for file changes (restarting the app as needed): grunt server

…and this is the result:

Where to now?

You now have a fully working Ember app ready to build on. Check out the Ember Sherpas guide as well as the getting started guide to get some in depth detail on some of the features mentioned above. Be sure to leave a comment if you find this helpful (or not so helpful).

References

/eof

Fix Visual Studio taking a long time to load debug symbols

I lost a good couple of hours tracking this one down. If Visual Studio is taking a very long time to load debug symbols when debugging (think 5 minutes to 30 minutes), try this…

Delete all break points (Debug - > Delete All Break Points).

Sometimes something small like this can be a lot more frustrating than a big issue. Hopefully this saves someone from going crazy.

/eof