
The effect of technology on jobs


If you are afraid of losing your job because of technological advancement, then you are doing it wrong. You might have forgotten what a job is at the fundamental level.

A job is a relationship between a problem and an entity. The entity should solve the problem, and then the job is done. A solution may create new problems, as causality dictates in this world. If you are afraid of losing your job as a result of a lack of demand for it, then you go against the fundamentals of problem solving.

According to modern society, jobs exist to provide a source of living. This is true at the current level of development of society. The majority of people need an actual job to get money so they can live. Money is the greatest abstraction ever made by humanity. Its main purpose is to measure, exchange and effectively handle value. Most of the value extracted from our current world comes from solving problems. If you are afraid that there will be no problem for you to solve in the future, and so no value for you to extract, then why are you trying to solve current problems in the first place? If your goal is to conserve problems just for the sake of money and perceived security, then you are contributing to history’s biggest social illusion. We are already generating a bunch of bullshit jobs, and it is getting harder to distinguish between real value and jobs where there is no real problem to solve. What is valuable is of course highly subjective, and I don’t fully agree with the linked article, but it is a pretty interesting read.

I am a software engineer, and this industry is working hard to solve as many problems as possible, even the problem of the existence of software engineering. One day (probably in the near future) we won’t develop computer programs the way we are developing them today. We will arrive at a situation where there is no need for humans to write this software. Today is actually the stone age of technology: we are writing software by hand, like manufacturing common goods several hundred years ago. A few days ago Bill Gates held his third AMA (ask me anything) on reddit, and there was a question about this topic: is programming a safe future for a newcomer, or will we automate it so that we need fewer people to get the job done? I can answer this question very quickly: there is no such thing as a safe future job. And this is amazing. It means that we are going to solve problems faster than ever before. If you are going to be a programmer and you are afraid that there will be no big demand for programmers in, say, 10-20 years, then you should not be a programmer. As one commenter wrote: “Almost seems like a snake eating its own tail.” It is true if you do not accept the fundamentals of jobs and you want to live in an illusion where you are doing something without real value behind it. The fundamental thing is to solve the problem, not to keep it.

There are a lot of examples of new technology rendering old jobs unimportant. On the other hand, new technology creates new jobs. Let’s change the word technology to solution and job to problem: there are a lot of examples of new solutions rendering old problems unimportant, and on the other hand, new solutions create new problems. There are debates on the balance, so it is not clear whether technological development creates more jobs than it destroys. But I believe this is not a big deal. It is a short-term concern for sure, because of the state of societies, but in the long term why would we cling to old problems just to maintain our current playground? Even if the number of problems that can be solved by humans declines dramatically (which is hard to predict), there will be no Armageddon. We started as humans with problems like how to make fire, and we arrived at a time where we are afraid that we cannot get a proper job. Even the lack of jobs is a problem to solve. It just requires a higher level of thinking.

What are we going to do if there is no need for programmers, plumbers, farmers, bankers, postmen, pilots, taxi drivers or corporate lawyers? It depends only on us. Where do you extract value from today when you are not solving problems (doing your job)? Probably from your family, friends and hobbies. From music and arts through sports and games to the exploration of space, there will be an almost infinite number of activities you can do and get value from. The same things you do today, just on a bigger scale.

There is an ever-growing intersection of fun things and jobs. Believe me, there were not nearly as many enjoyable activities a century ago. The life of a single person is too short to notice that humanity as a whole is advancing at a rapid pace, and that you have far more possibilities to get value than your grandparents had. Financial crisis? Growing inequality? High unemployment? Yes, these are short-term concerns, but look at the bigger picture. If you had been born a century ago, your life would have been much more predetermined and restricted. This is true even for the poor regions of Earth, just in a slightly different way.

What is the role of money in this game? Money won’t go anywhere. Sorry, revolutionary folks, but money is the projection of human interaction, so it will only disappear if humanity disappears. It will certainly change, as it has been changing from the beginning. It must change, since the source of value is constantly changing too. What we have to do is embrace the basic value of humanity: the human. Humanity declares what is valuable, and so we can declare that a human, without anything else, is a valuable thing by itself. In the language of money this is called Unconditional Basic Income. Actually it is conditional: you have to be alive🙂. We need advanced societies to make it work; Switzerland may be the first. We need societies where life itself is valuable enough that we can assign money to it. There are critics of and concerns about UBI, but I think it is a much better approach than artificially creating or keeping jobs without real value behind them. It could also be better than QE at stimulating the economy. Once value is assigned to every single person in a society, they can start dealing with much better things. Things that you can’t even imagine today.


Prison of Legacy Code


A few weeks ago I experienced something interesting about legacy code in the office. A colleague was really upset by the fact that the legacy code base he has to work with makes it much harder to introduce new changes. The exact situation does not matter, but it inspired me to write this post.

I see a parallel between life and programming. Of course I do, because I am a programmer; everyone from every profession compares the world to their profession. This statement is maybe a cliché, but the idea gets much more interesting when you compare actual problems between the two contexts. The same skills are required to resolve legacy code issues as to move your life from A to B. You have to make small steps, and you have to make those steps constantly.

When someone says “this is our code and we cannot do anything about it”, it is exactly the same attitude you see in people who are constantly waiting for others to solve their problems and who always blame others for those problems. This is of course a dead end. In most situations you are the one who makes things happen. Let’s change things. Make it a little better. If you think that you cannot do it, then you are going to end up in the exact same situation over and over again. For some people this can be comfortable, just like prison. You know the rules. You may not like them, but you know them, and this gives you confidence. Once you have to leave your comfort zone, you find yourself in a new world with new rules that you may not understand on the first day. Just like Brooks Hatlen.

To push this analogy further, let’s define what a prison is. My understanding is that a prison is just a place where the rules are more restrictive than what you are used to. This is why it is a punishment for most of us. If you were born in a prison, you might not realize that it is actually a prison. If this definition is correct, then the free world is a prison too; only the rules are more flexible and the possibilities expanded. The real world supports this idea, because humanity as a whole always tries to expand our world to break out of the current prison, which is the current level of development. When you have to deal with a legacy code base, your current level of development is well under the possibilities. When you cannot make a change, it means the rules are too restrictive at the moment. In a real prison your best bet in this situation is to sit until they let you go, but fortunately you can freely break out of the legacy code prison and bend your own reality to expand your possibilities.

That is what we do. We expand the possibilities, we change the rules, we alter reality, and there are no prison guards except yourself. You should not run away; face the problem and solve it.


The Beauty of the 21st century


It is January 2014, winter in my home country, Hungary. We were swimming in the Andaman Sea with my friends when I decided to write this post.

We came to Thailand for a month-long trip in the middle of the Hungarian winter. Did I spend my whole yearly holiday allowance at the beginning of the year? Obviously not. I only took a few days off to get here, and I am going to work the whole month with the Hungarian team. This is possible because:

  1. I am a software developer
  2. We (the people) have the software to do this
  3. My company is really flexible
  4. I do not spend all my money on stuff

1. A software developer can work from anywhere. I only need a computer and an internet connection and I can do my work. There are definitely lots of jobs out there where you can do this, and there will be even more as the 21st century advances.

2. Software development has evolved to the point where we have systems that support this kind of remote work. My company actually develops this kind of software, so it was straightforward for me to use:

  • LogMeIn Hamachi for VPN (so I can access the network of my company)
  • LogMeIn Pro to access my office PC if I need it
  • join.me to attend company meetings
  • and of course I use Skype for international calls

3. LogMeIn allowed me to take this trip, which shows how friendly this company is. There is a 6-hour time difference between Hungary and Thailand, so I work with the guys from 4 pm to midnight (which means I can spend almost my whole day freely).


4. I am a regular reader of Leo Babauta’s Zen Habits blog, and the post I linked above is one of my favorites; I completely agree with it. People who know me well enough know that I love to live with less and less stuff. I love experiences instead.

These four factors are the main reasons why I am here. And I really love it. A few decades ago I could not have done this, and I believe it is only going to get better. The opportunity is here and growing, and it is available to more and more people. You can do this too, you just have to believe in yourself. Really.


A custom Unity lifetime manager


Lifetime management is an important responsibility of a dependency injection container: if the DI container serves the dependencies for you, then it should also manage the lifetime of the served objects.

The most common lifetime management strategies are transient (new instance per resolve), singleton (one instance per container) and per-request (new instance per web request). Most of the time these are enough, but don’t forget the possibilities: you can create any kind of lifetime manager.
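In Unity these strategies map to lifetime manager classes; here is a quick sketch (IClock and SystemClock are placeholder names, and since a later registration of the same mapping overwrites the earlier one, treat these lines as alternatives):

using Microsoft.Practices.Unity;

var container = new UnityContainer();

// transient: a new instance on every Resolve (Unity's default)
container.RegisterType<IClock, SystemClock>(new TransientLifetimeManager());

// singleton: one shared instance per container
container.RegisterType<IClock, SystemClock>(new ContainerControlledLifetimeManager());

// per-request lifetime ships with the Unity ASP.NET MVC integration package
// container.RegisterType<IClock, SystemClock>(new PerRequestLifetimeManager());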

What is the task of a lifetime manager in its most basic form? Actually it’s very simple. It should decide between two things:

  1. return an existing object or
  2. return nothing

If it returns an existing object, then the container will use that. If it returns nothing, then the container will create a new instance (based on the configuration) and call back to the lifetime manager with the created object (so it can do whatever it wants with it).
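In simplified C#, the contract looks something like this (a sketch of the idea, not the actual Unity internals):

using System;
using Microsoft.Practices.Unity;

static class ResolveSketch
{
	// roughly what a container does with a lifetime manager on Resolve
	public static object Resolve(LifetimeManager lifetime, Func<object> buildPlan)
	{
		object existing = lifetime.GetValue(); // 1. is there an existing object?
		if (existing != null)
			return existing;                   // yes: use it

		object created = buildPlan();          // 2. no: create a new instance...
		lifetime.SetValue(created);            // ...and call back with it
		return created;
	}
}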

I applied this idea a few weeks ago to a file-based component: it loads its information from files and stores it in memory. Actually it builds itself up from files. It can be anything; imagine a file-based localization library, for example.

Once the actual object graph is ready, we don’t need the files again until they change. It sounds like a singleton lifetime situation, and the obvious way to handle reloading would be to watch the files in the component implementation. But wait a minute: we have already implemented the loading. It happens when the object graph is created by the container. So why would we implement it again? The container created the object, so the container should decide when we receive a freshly built one.

So what we need is the following:

using System;
using System.Web;
using System.Web.Caching;
using Microsoft.Practices.Unity;

public class FileWatcherSingletonLifetimeManager 
	: ContainerControlledLifetimeManager
{
	private readonly string[] _fileNames;

	private readonly string _cacheKey;

	public FileWatcherSingletonLifetimeManager(
		params string[] fileNames)
	{
		_fileNames = fileNames;
		_cacheKey = CreateCacheKey();
	}

	protected override object SynchronizedGetValue()
	{
		// if the flag was evicted (a watched file changed),
		// return nothing so the container builds a new instance
		return HttpContext.Current.Cache[_cacheKey] == null 
			? null 
			: base.SynchronizedGetValue();
	}

	protected override void SynchronizedSetValue(object newValue)
	{
		base.SynchronizedSetValue(newValue);
		// store a marker object whose only job is to exist;
		// ASP.NET evicts it when any of the watched files change
		HttpContext.Current.Cache
			.Insert(_cacheKey, new object(), 
				new CacheDependency(_fileNames));
	}

	private static string CreateCacheKey()
	{
		// unique key per lifetime manager instance
		return "FileWatcherSingletonLifetimeManager:" 
			+ Guid.NewGuid();
	}
}

It builds on the fact that the .NET framework already has a feature to invalidate a cache entry when certain files change. We just pass a simple object to the cache, but we aren’t actually interested in the object itself: we are interested in its existence.

If we turn back to my earlier simplification, it does the following:

  1. if we don’t have a flag in the cache then we should return nothing (then the container will create a new instance)
  2. if we have the flag in the cache then we should return the object

The base of the solution is the ContainerControlledLifetimeManager, which is the singleton lifetime implementation in Unity. Our class is only a proxy in front of it, so what we get is a singleton-managed object which stays a singleton only until a certain file system event: if any of the given files changes, the container rebuilds the object and we restart the cache cycle.
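For completeness, registering a component with this lifetime manager could look like this (ILocalization and FileBasedLocalization are hypothetical names):

using System.Web.Hosting;
using Microsoft.Practices.Unity;

var container = new UnityContainer();

container.RegisterType<ILocalization, FileBasedLocalization>(
	new FileWatcherSingletonLifetimeManager(
		HostingEnvironment.MapPath("~/App_Data/strings.en.xml"),
		HostingEnvironment.MapPath("~/App_Data/strings.hu.xml")));

// behaves as a singleton until one of the files changes;
// the next Resolve then builds a fresh instance from the new files
var localization = container.Resolve<ILocalization>();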

What custom lifetime managers do you use?🙂


Secure your data with TrueCrypt and Cubby


Keeping your personal data secure while being able to access it anywhere can be challenging. In this cloud-based world you can access anything from anywhere. That is the easy part, because the cloud storage business is booming, so you can choose from several services with different advantages.

I like to keep all my stuff in the cloud for the reasons above, and for backup reasons too, but a few weeks ago I realized that I have too much unsecured personal information stored in the cloud, on my notebook and on my mobile. If anybody ever gains access to one of these, they gain access to all my stuff. I don’t mind about my music or ebooks, but I do mind about my coding projects, personal media and so on. So I came up with the following solution.

The first thing I need is to secure my data within the cloud storage. Cubby Locks is available in (uh, what a surprise) Cubby. You can create multiple cubbies within your system and you can lock them separately. In practice this means that you can have a simple folder for your casual stuff to sync everywhere, and another folder containing your personal data synchronized with the cloud. But here comes the difference: if you lock a cubby, your data is stored in the cloud in an encrypted format. This means that even if you leave your mobile or browser logged in to Cubby, your data cannot be accessed without the password, which is the key for the encryption.

For more technical details of Cubby Locks check this article: Technical deep dive into cubby locks.

This is the right tool to protect my data in the cloud, but it won’t protect it on my notebook (if you read the Cubby pages, you know that this is cloud-side encryption). So the second thing I need is to encrypt the data on the machine itself. One of my colleagues recommended a popular encryption tool, TrueCrypt, and it’s a mature, proven product.

With TrueCrypt you can create an encrypted container which can be mounted on the system as a virtual hard drive or removable drive. You may have already figured out that you can create a cubby inside this container and lock it, so it is encrypted in the cloud and encrypted on the machine too. For example, I have 3 different TrueCrypt containers:

[Screenshot: the three mounted TrueCrypt virtual drives M, P and W]

M is for media, P for personal documents and W for work. Inside these virtual drives I have the cubby:

[Screenshot: the secure_work cubby inside one of the mounted drives]

Working with the content of this folder is the same experience as working with a regular folder, but you can easily dismount the TrueCrypt drive, and then nobody has access to it without the password.

You should pay attention to the startup order of Cubby and TrueCrypt, and to the dismount order. For example, if you dismount the drive before you stop Cubby, you will have to re-add the folders later (there is a merge option, so at least you don’t have to re-upload everything). I think this is an odd behavior and I expect improvements here. Until then, I stop Cubby before any dismount (either manually or with scripts triggered by system events, like the sketch below) to avoid the re-add and merge cycle.
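A minimal sketch of such a script in C# (the Cubby process name and the TrueCrypt install path are assumptions for your machine, and /d and /q are TrueCrypt’s dismount and quit switches, if I remember them correctly):

using System.Diagnostics;

class SecureShutdown
{
	static void Main()
	{
		// stop Cubby first so it doesn't see the folder disappear
		foreach (var cubby in Process.GetProcessesByName("Cubby"))
		{
			cubby.Kill();          // a tray app rarely honors CloseMainWindow
			cubby.WaitForExit();
		}

		// then dismount drive W and let TrueCrypt exit
		Process.Start(@"C:\Program Files\TrueCrypt\TrueCrypt.exe", "/d W /q")
			.WaitForExit();
	}
}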

It can be a bit uncomfortable, but stolen personal data is far more uncomfortable in my opinion🙂


Duplex communication in ASP.NET MVC with Domain Events


We are going to talk about duplex communication in ASP.NET MVC web apps, without Flash, Java, Silverlight or anything like that. I assume that you have a good understanding of comet techniques, because we’ll look into a ready-to-use library, so most of the hard parts are abstracted away. This library is PokeIn, and I use it as the base library to provide real-time features on my gaming site, fumind.

This is not a step-by-step tutorial, but you may find it interesting if you are looking around for concepts related to the topic.🙂

I’m going to write about the solutions I used to provide these real-time updates and the gameflow. The site offers gomoku and gomoku-like games, so don’t expect some fancy action game; these are strictly turn-based affairs. Because of this we don’t need really fast solutions, so we can go with (or fall back to) classic comet solutions, as the speed of a quick ajax call is good enough for our needs.

Our plan is the following:

  1. Abstract the comet communication away
  2. Implement domain events to keep things simple
  3. Forward domain events to comet listeners

All three points are about single responsibility and abstraction. (1) We don’t want to tie our code to a specific comet solution, so we need some kind of abstraction, which is PokeIn in our case. (2) We don’t want to mess up our MVC actions with PokeIn calls, but we need a consistent way to notify our system that something happened, so that anybody interested in it gets a notification and can handle it in a separate component. (3) Once we have domain events, we can write listeners which directly notify the PokeIn clients.

The first one is really easy, as we only have to set up the PokeIn library (you can find concrete examples on the PokeIn website). The second requires some foresight, because this is the base of the two-way communication. Simply put, we expect all the players and observers to see almost the same things at the same moment. To achieve this in a consistent and maintainable way, I dedicated two channels to the real-time communication:

  1. Incoming actions
  2. Outgoing events

Incoming actions are plain MVC actions, and they return an OK signal every time they succeed, or some kind of error message if something went wrong. These actions raise domain events, so a listener can notify everybody on the outgoing channel, which is PokeIn. This is the key: because nobody receives the updates directly as the action response, we can treat everybody equally when updating the clients. This leads to a predictable environment, which is good for maintainability.

As you may see, this has several seams where I can hook in functionality. I can write my incoming actions without caring about any updates to the client, so I can focus on the action. I can write my domain event listeners separately from my MVC actions, so I can focus only on the client selection and notification for each kind of domain event. I can listen to everything on the client side without changing anything on the server side. This is possible because I route every PokeIn callback through a single JavaScript function; my custom code only subscribes to this central callback point, and every subscriber can decide whether it should do anything with the given information.

Let’s see some code from the fumind.com codebase. One of the most important parts of achieving what I’m talking about is the concept of domain events. You may find several definitions on the internet; I use it as a mediator pattern implementation.
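The contracts behind the broker aren’t listed in one place in this post, so here they are reconstructed from how they are used below (the real fumind code may differ in details):

using System;

// implemented by anything that wants to react to a domain event
public interface IEventSubscriber<in TPayload>
{
	void Receive(object sender, TPayload payload);
}

// ambient context: EventBroker.Current is set once at application startup
public abstract class EventBroker
{
	public static EventBroker Current { get; set; }

	public abstract void Send<TPayload>(object sender, TPayload payload);
}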

So, for example, when somebody makes a move on fumind, this action controls the flow:

[HttpPost]
[UnitOfWork]
[RestrictToAjax]
[ValidateAntiForgeryToken]
public ActionResult MakeMove(MakeMoveViewModel moveViewModel)
{
	Game game = Enter(moveViewModel.GameID);

	var user = _profileService.CurrentUser;
	var context = new MakeMoveContext(game, moveViewModel.Position, user);

	var result = _gameFlow.MakeMove(context);

	if (result == GameFlowActionResult.TimeOver)
	{
		OnSuccessfulUoW = () => EventBroker.Current.Send(this, new TimeOverPayload(game));
		_logger.Info("Time over in game: {0}", moveViewModel.GameID);
	}
	else if (result == GameFlowActionResult.Success)
	{
		OnSuccessfulUoW = () => EventBroker.Current.Send(this, new MoveMadePayload(game, game.LastMove));
		_logger.Info("move made in Game: {0}, Position: {1} by {2}", moveViewModel.GameID, moveViewModel.Position, user.UserName);
	}
	else
	{
		_logger.Warn("Move failed in game: {0}, Position: {1} by {2}", moveViewModel.GameID, moveViewModel.Position, user.UserName);
	}

	Exit(game);

	return Json(Constants.AjaxOk);
}

There are several things in this snippet which aren’t really relevant, but I wanted to post the whole thing to make it clear how I built it. You can see things highly specific to the makemove process, but what matters is the response and the way the unit of work helps us here. We declare this action as a unit of work and we respond to the client that we have succeeded:

return Json(Constants.AjaxOk);

Of course, only if we really succeeded. This is the incoming part of the flow. An extension point in this code is OnSuccessfulUoW, which is declared in a base controller; you can assign an action to it, and it will only run if the whole unit of work succeeded. This indirection is important to avoid false notifications, so every client stays valid. As you can see, I call into the EventBroker ambient context inside this callback, so basically when the UoW succeeds it raises a domain event:

OnSuccessfulUoW = () => EventBroker.Current.Send(this, new MoveMadePayload(game, game.LastMove));

With this, we are done with the first step: we have the action, and it has only one responsibility, delegating the makemove attempt to the domain logic. We handle the updates in a subscriber which listens to the MoveMadePayload.
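The [UnitOfWork] filter and the base controller aren’t shown in this post, but the extension point could look roughly like this (a sketch built on assumptions, not the actual fumind code):

using System;
using System.Web.Mvc;

public abstract class UnitOfWorkController : Controller
{
	// actions assign this; it must run only after a successful commit
	protected internal Action OnSuccessfulUoW { get; set; }
}

public class UnitOfWorkAttribute : ActionFilterAttribute
{
	public override void OnActionExecuted(ActionExecutedContext filterContext)
	{
		var controller = filterContext.Controller as UnitOfWorkController;
		if (controller != null && filterContext.Exception == null)
		{
			// commit the unit of work here, then fire the callback
			if (controller.OnSuccessfulUoW != null)
				controller.OnSuccessfulUoW();
		}
		base.OnActionExecuted(filterContext);
	}
}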

A few words about the EventBroker: it is based on the Unity DI container, so if you ever want to subscribe to a domain event, you need to implement the IEventSubscriber<TPayload> interface and register it:

container.RegisterType<IEventSubscriber<MoveMadePayload>, MoveMadeSubscriber>(typeof(MoveMadeSubscriber).FullName);

And the EventBroker will resolve it:

public class UnityEventBroker : EventBroker
{
	private readonly IUnityContainer _container;

	public UnityEventBroker(IUnityContainer container)
	{
		_container = container;
	}

	public override void Send<TPayload>(object sender, TPayload payload)
	{
		var subscribers = _container.ResolveAll<IEventSubscriber<TPayload>>();
		if (subscribers == null) return;
		foreach (var subscriber in subscribers)
		{
			subscriber.Receive(sender, payload);
		}
	}
}

This can be (and will be) improved to handle the case when a listener throws an exception, but it’s OK for now.
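One way to harden it, for example, is to isolate each subscriber (a sketch; the _logger field is an assumption, mirroring the logging style of the controller above):

public override void Send<TPayload>(object sender, TPayload payload)
{
	var subscribers = _container.ResolveAll<IEventSubscriber<TPayload>>();
	if (subscribers == null) return;
	foreach (var subscriber in subscribers)
	{
		try
		{
			subscriber.Receive(sender, payload);
		}
		catch (Exception ex)
		{
			// one faulty listener should not break the others
			_logger.Warn("Subscriber {0} failed: {1}", subscriber.GetType(), ex);
		}
	}
}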

So basically every domain event works through this event broker. You can easily see how we can subscribe to one or more events:

public class MoveMadeSubscriber : IEventSubscriber<MoveMadePayload>
{
	public void Receive(object sender, MoveMadePayload payload)
	{
		//define clients
		//build update data-transfer-objects (dto)
		//notify pokein listeners:
		//var json = PokeIn.JSON.Method("mindline.pokein", dto);
        //CometWorker.SendToClients(viewers.ToArray(), json);
	}
}

It really isn’t worth including the whole makemove notification logic here, so I just wrote the steps as comments. Of course we need to define the PokeIn clients to update (we can define different branches too; e.g., for makemove I handle observers and players separately), we need to build up our DTO (data transfer object; I build slightly different DTOs for observers and players) and call into the PokeIn library, as you can see.
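Fleshed out with hypothetical names, the subscriber could look like this (GetViewerClientIds and the payload properties are assumptions based on how MoveMadePayload is constructed above; the two PokeIn calls are taken from the comments):

public class MoveMadeSubscriber : IEventSubscriber<MoveMadePayload>
{
	public void Receive(object sender, MoveMadePayload payload)
	{
		// 1. define clients: the PokeIn ids of everybody watching this game
		string[] viewers = GetViewerClientIds(payload.Game);

		// 2. build a small DTO with only what the client needs
		var dto = new { gameId = payload.Game.ID, move = payload.Move };

		// 3. notify the PokeIn listeners through the central JS callback
		string json = PokeIn.JSON.Method("mindline.pokein", dto);
		CometWorker.SendToClients(viewers, json);
	}

	private static string[] GetViewerClientIds(Game game)
	{
		// application-specific lookup, omitted here
		return new string[0];
	}
}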

The only thing left is the client extension point. You can see in the subscriber that we use the mindline.pokein JavaScript function as the client-side endpoint. I use TypeScript as a JavaScript preprocessor, but this part is really simple:

/// <reference path="../typings/jquery.d.ts"/>

declare var PokeIn;

module mindline {

    var listeners: { (payload: any): void; }[] = [];

    export function addPokeInListener(listener: (any) => void ) {
        listeners.push(listener);
    }

    export function pokein(payload: any) {
        $.each(listeners, (k: any, listener: (any) => void ) => {
            listener(payload);
        });
    }
}

So anybody can hook into this client-side root point and handle any PokeIn calls, such as the updates from our makemove action.

I hope you can see the value gained from this architecture:

  • I can easily replace PokeIn with another library; it could work with websockets too
  • I can work on my action logic without having to think about the comet logic meanwhile
  • I can write as many domain event subscribers as I want, so I can easily extend the behavior of the existing application
  • I can hook into any comet callback on the client side without modifying a single line of the existing code
  • I can encapsulate the complete game flow on the server side, so the server controls what you see
  • Real-time comet callbacks are treated exactly the same as updates triggered by a scheduler event (time over, for example), so the code is easier to maintain (consistency).

You can do duplex communication in a lot of ways, but I think this one is really simple to start off with and provides a consistent environment for the upcoming tasks. And it is reliable🙂.

What do you think about it?


DeliveryTracker with ASP.NET MVC 4 RC


Well, this blog post is about how to make SPA work with the RC. The discussion started at http://aspnetwebstack.codeplex.com/discussions/358133

My plan is to share experiences with SPA (which is a little bit paused now by the team), so don’t expect a complete guide; this will be a series of posts as we move forward and resolve new issues.

As you may already know, SPA was removed from the ASP.NET MVC 4 stack and continued as a separate project. Another problem is the pause on it, so it’s a very unstable area. New problems come up day by day and old ones get resolved. For example, when I first wrote in the CodePlex discussion thread that we had resolved the incompatibility between SPA and the RC, we had some server-side problems that have since disappeared with the current nightly build, but now we don’t have OData filters🙂.

In this article I will try to make the DeliveryTracker work with the latest MVC builds. If you need the OData filters with SPA right now, you should go with an older changeset (as we do) or another OData lib. For more information check out this thread: http://aspnetwebstack.codeplex.com/discussions/359229

One more thing before we dive in: we don’t really use EntityFramework, and we haven’t explored everything about this stack. Ahh, and this is my first blog post, btw.🙂

Ok, first of all, you will need the DeliveryTracker sources: https://github.com/SteveSanderson/DeliveryTracker

To get the latest MVC nightly builds follow the instructions here: http://blogs.msdn.com/b/henrikn/archive/2012/06/01/using-nightly-asp-net-web-stack-nuget-packages-with-vs-2012-rc.aspx

You will also need the asp.net webstack sources, because the SPA related packages don’t have nuget builds: http://blogs.msdn.com/b/henrikn/archive/2012/04/09/getting-started-with-asp-net-web-stack-source-on-codeplex.aspx

Once you have all the pieces, open up the DeliveryTracker solution. You will see that it comes with the beta MVC packages. Update the following ones from the nightly source:

  • ASP.NET MVC 4
  • Web Api Client
  • Web Api Core
  • Web Api Web Host

If you try to run the solution now, you will get an application error. This is because the Web API has changed since the beta SPA.

Let’s make some changes!

We need four components for our SPA solution:

  • System.Web.Http.Data
  • System.Web.Http.Data.EntityFramework
  • System.Web.Http.Data.Helpers
  • upshot.js

You can find these packages in the aspnetwebstack source, but they now belong to the Microsoft.* namespace (probably because SPA continues as a separate project).

We will create 3 new projects inside our DT solution and clone the following 3 from the latest sources:

  • Microsoft.Web.Http.Data
  • Microsoft.Web.Http.Data.EntityFramework
  • Microsoft.Web.Http.Data.Helpers

This is because we will probably have to make some changes to make it work, or to keep our solution working after the next nightly update (not absolutely necessary now, but it’s safer).

Open up the Runtime.sln (which comes with the latest sources) and check out the Microsoft.Web.Http.Data project. You will find some linked files in the Common folder, so don’t forget to copy those too.

Include the files and do the same with Microsoft.Web.Http.Data.Helpers and Microsoft.Web.Http.Data.EntityFramework.

After you have the sources in the DT solution, you will probably have to add the nuget packages to the new projects too, such as:

  • ASP.NET MVC 4
  • Web Api Client
  • Web Api Core
  • Web Api Web Host

And some references from the .NET framework:

  • System.Runtime.Serialization
  • System.ComponentModel.DataAnnotations

If I missed something here, you will need to figure it out yourself (or ask me in a comment), but I hope this list is complete.

The next few steps are temporary solutions, but this whole thing we are doing here is temporary anyway. When you try to build, you will get a lot of internal-usage errors, but who needs internal protection when our goal is to hack around the problems? So change the members to public. Do it everywhere the compiler complains.

Another problem you will face here is the resource access in Errors.cs. You can figure out what’s going on there if you want, but I don’t care about the concrete error resource now (it’s really a temporary solution), so just change it as in the attached picture.

I don’t really remember exactly, but there is a

using System.ComponentModel.DataAnnotations.Schema;

directive somewhere in Microsoft.Web.Http.Data that we don’t need and the compiler can’t find. If you have ReSharper, it’s easier to track down.

You should have a successful build now (and I hope you do!).

Ok, now we have a DT solution with the latest MVC and the 3 SPA-related projects around it. You have to remove the old references from DT and add our new projects:

  • System.Web.Http.Data
  • System.Web.Http.Data.EntityFramework
  • System.Web.Http.Data.Helpers
  • System.Web.Http.Data.Helpers2

The problem now is that the new code lives in a different namespace, so change the references accordingly (System.* -> Microsoft.*).

Delete EntityFramework and install a new one from nuget, because we use a newer one in Microsoft.Web.Http.Data.Helpers and we don’t want version mismatch errors.

We also have to update the namespaces in our web.config files (in the root and under Views too).

Change
<add assembly="System.Web.Http.Data.Helpers, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
to
<add assembly="Microsoft.Web.Http.Data.Helpers" />

and under the Views
<add namespace="System.Web.Http.Data.Helpers" />
to
<add namespace="Microsoft.Web.Http.Data.Helpers" />

Ok, the code is ready and it should run now (though we will have client-side errors).

The new server-side code generates a different result, and our upshot.js is from the beta package. We need to update and fix that too.

Check out the SPA project in the Runtime solution. Upshot is made up of different components, so you have to build (combine) the files.

When I did this step for us, upshot.js had some incompatibilities with the current server code, but let’s check what the situation is now.
I made a little cmd tool to combine the files; you can find its source code here. Copy the exe into the SPA folder and run it. You should have an upshot.js now.

Overwrite the original with it and change the path in the _SpaScripts.cshtml partial.

Well, upshot still has incompatibility problems, so let’s fix them. The main difference between the old and the new JSON pushed by the server is the structure; you can inspect it yourself.

The other difference is the type handling. I’m not going to write about the fixes one by one; you can download the fixed upshot.js and the script file containing the actual fix functions here.
In upshot.js I marked the fixed parts with a comment (//FIX), so you can check them out.
You have to include SPAHacks.js before upshot to make it work. Refresh the page and voilà.

This is it. I hope everything worked out, but if not, you can always ask in the comments.

As you can see, this is not a foolproof way to build applications, so I suggest waiting until a stable release comes out. If you don’t want to wait (like me), you can play with it, and I’m going to write more posts as we run into more problems.