Friday, October 11, 2013

Moving from TFS(vss) workflow to Git Part II

In my previous post, I outlined our current VSS-inspired workflow.  Here, I will outline a Git workflow that I believe will provide the isolation that we need without the danger or overhead of cherry picking commits to merge up.

To start with, I believe we should use the fairly well established “git workflow” pattern with a few minor modifications. Most of this is covered in the pattern’s standard write-ups, but I will address a few issues here.

At the outset, we will have a Dev branch. This branch will represent finished features that have not yet been to QA. From this branch, each feature will branch (as a verb) down into a feature branch (noun). The feature branch will be maintained until the feature or fix is considered complete and ready to go to QA. During the developer’s daily process, the Dev branch will be merged down into his feature branch at least once a day, and possibly more often, depending on the amount of work going on elsewhere that needs to be integrated.  Once the developer’s feature is complete, he will merge it up to Dev, making it available for deployment to QA and for merging into other feature branches.
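To make this concrete, a developer’s day might look something like the following sketch (the branch names Dev and my-feature are mine, not a prescription):

    # start the feature from the tip of Dev
    git checkout Dev
    git pull
    git checkout -b my-feature

    # at least once a day, merge Dev down into the feature
    git checkout my-feature
    git merge Dev

    # when the feature is complete, merge it up to Dev
    git checkout Dev
    git pull
    git merge my-feature
    git push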

Two points to make about the above: (1) the Dev branch will always be in a state that is ready to push to QA, and (2) the reason the feature branches merge down from Dev so often is twofold. First, new completed features may be committed by other developers at any time, and QA fixes and hotfixes will be merged down into Dev, also at any time; the feature branches will want an accurate picture of Dev. Second, frequent down merges make the final up merge considerably easier (albeit at the cost of many small conflicts in the down merges; nevertheless, this is much preferred over having a bunch of conflicts to resolve all at once).

While I believe “git workflow” covers the QA and Prod workflow, a quick synopsis follows. At some point, someone says “Hey! I just finished my feature and would like to have it QA’d.” After he merges into Dev, he can then merge Dev into QA, knowing that all features in Dev are in an initial completed state and ready to move up to QA. QA may find errors with the new features. The feature owner, at this point, branches off of QA, fixes the problem, merges up to QA and down to Dev, and QA tests again. A similar process is used for hotfixes on production: branch -> fix -> up merge -> down merge to both QA and Dev.
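In command form, the QA-fix loop might look roughly like this (qa-fix-123 is an invented branch name):

    # branch off of QA and fix the problem
    git checkout QA
    git checkout -b qa-fix-123
    # ...fix, commit...

    # up merge to QA so it can be retested
    git checkout QA
    git merge qa-fix-123

    # down merge so Dev gets the fix too
    git checkout Dev
    git merge qa-fix-123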

So, now our developers can commit all day long, as they well should, without worrying that their commits will inadvertently make it to production. Perhaps an even nicer step would be to make available an environment in which a feature branch can be QA’d. This way, the back and forth of bug fixes could take place offline, and when a feature is merged with Dev, it would be in a much more complete state. However, I’m not sure what would be involved in getting multiple, disposable testing environments up.

One caveat is that all the features that are merged with Dev must be cleared to go into production in the same push.  Once they are in the main line there is no easy way to tease them back out.  There is a workflow that would facilitate this practice but (a) it is pretty much an incremental (albeit large) change to the proposed workflow so we can evaluate it later, and (b) it is somewhat complex and unless we are very sure that we need this flexibility I feel simpler is better.

In a future post I will look at some of the actual steps and commands that developers would be using given the proposed workflow.

Moving from TFS(vss) workflow to Git Part I

Here at work we currently use TFS for, well, everything.  We have made the wonderful (some would say only responsible) decision to move to Git.  This decision will require all manner of stuff to happen.  In this post, I will explore the developer experience.

We currently have certain workflows for check-ins, merges, etc. that are based on the insane way TFS encourages you to do these things, and these workflows will not work with a respectable source control system.  So, first I will lay out the way we currently do things.

We have a relatively small team: three backend developers and a frontend developer.  Nonetheless, all of us tend to be working on different things at any given time.  We all check in to our Dev branch whenever we feel like it.  We do not employ CI, so the state of our Dev branch is never really known until you get latest and build.  Fortunately, it tends to be in pretty good shape despite the absence of procedure or control.

The fun begins when we want to move our code into a QA branch.  The technique we use is to merge specific change sets.  As I mentioned, everyone checks in their changes ad hoc, meaning that everyone is in a different state of readiness.  So, when we want to merge one person’s changes up to QA and not the rest, we look at all the change sets and try to determine, either by reading the commit message or by actually looking at the code in the change set, which change sets are appropriate to merge.  This process is repeated when we move from QA to Prod.

On the one hand, this seems like an insane proposition that veritably begs for disaster.  On the other hand, due to the hard work and diligence of the team members, we make it work, and it facilitates our siloed development of different features and bug fixes.  As this is a rather ingrained workflow that has been in place since long before I joined the team, our switch to Git will have to permit some facsimile of this process without polluting or perverting our new source control system.

In the next post I will outline some of the practices that I believe will allow us to be successful with Git.

Wednesday, September 4, 2013

Embrace the frontend

In my last post I wrote about getting out of the “forms” business and into the “AJAX” business, which I view as a pivotal step in the development of Web applications that are both better managed and more responsive.

The problem is that once you are liberated from the tyranny of <form> you find that you can no longer be just a “backend developer” who surfaces web pages. You must learn the language of the frontend. I know you love your strongly typed, server-side, cozy blanket, and you can certainly go work for some monolithic death corp and write the same code for the rest of your life. But if you want to develop interesting applications and stay current in the flow of software technology, get over it and start learning javascript.

One’s first steps into javascript invariably go as follows:

1) Write a bunch of crap in a <script> tag in the HTML file itself.

2) Discover what a mess that is and how hard it is to debug, and move the same crap into a separate file.

At that point you think you are being responsible and call it done. A year goes by and you find that you have a lot of those files, with lots of duplicated code and/or duplicated functionality with different implementations. You consolidate that into some horrible library that contains every “helper function” in the world and include the whole file on every page. You probably minify it, so when it breaks it’s a pain to figure out what’s going on, and now you think you’re done.

Well, you’re not. You wouldn’t write your C# application like that. In fact, you’d probably go on some career-ending rant if you saw it being done (much as I am doing here with js). You must treat your javascript application with respect if you want it to respect you. Sure, it lives for perhaps ten seconds, but those are pivotal seconds, and that time can and will grow as you feel the joy of building an app on the client side.

So how do you treat it with respect? Well, the KEY to success is to employ a pattern, a nice pattern, and stick with it for every page. JS doesn’t have a very appealing object-oriented story out of the box, but there are a few patterns for creating objects or classes that are easier and more familiar to work with. Some are better than others. And there are several frameworks that provide implementations of these patterns and generally wrap them up with some other functionality.
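For instance, a minimal sketch of the module pattern, one of the patterns I have in mind (the names here are invented for illustration):

    var orderPage = (function () {
        // private state, invisible outside the module
        var items = [];

        // the public face of this page's javascript
        return {
            addItem: function (item) { items.push(item); },
            total: function () {
                var sum = 0;
                for (var i = 0; i < items.length; i++) { sum += items[i].price; }
                return sum;
            }
        };
    })();

    orderPage.addItem({ price: 5 }); // items itself is never exposed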

As I mentioned in my previous post (not smart enough to know how to point to the previous post here), I like Knockout.js very much. However, Knockout endorses a rather repugnant pattern of dispersing your javascript between your HTML and a simple javascript object. You essentially say, within Knockout’s attribute, “on click, fire this event in my ViewModel.” Their ViewModel is just a plain old javascript object with properties and functions on it. The idea of putting a bunch of behavior inside of HTML is most unpleasant. However, I am willing to concede the need for a two-way binding attribute. The payoff is so great and the offense rather minor. But when you start putting all manner of logic in the HTML attributes, it’s my loud opinion that you have crossed the line, Jack. Furthermore, the anemic Knockout ViewModel is not very helpful when it comes to organizing your code. Thus, I say that KO.js has an excellent two-way binding story, but you should leave it at that. Don’t let it take you down the dark road of HTML decoration.
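To be fair to the part I do like, the binding itself is tiny. A minimal sketch (the firstName property and the markup are mine):

    <input data-bind="value: firstName" />
    <span data-bind="text: firstName"></span>

    <script>
        // typing in the input updates firstName; setting firstName
        // updates both elements, and that's the whole two-way trick
        var viewModel = { firstName: ko.observable("Jack") };
        ko.applyBindings(viewModel);
    </script>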

I can see how KO would be compelled to provide other functionality so as to seem more like a “framework” than a “library,” and I can see how they would try to extend what is already working so well for them (e.g. the HTML decoration). I can see this, but you can’t make me use it. Other “frameworks,” Angular.js for one, have taken a similar direction. Angular is very popular, but their HTML wrangling makes KO look like a minor offender. I won’t go any further into my objections. If you want to debate, hit the comments.

In my next post I will write about how I use a combination of backbone.js and ko.js to create a best-of-both-worlds cocktail.

Furthermore, I am cross-posting this to my personal blog, if you like it so much you want to read it twice: http://cannibalcode.blogspot.com/

Tuesday, September 3, 2013

Shredding your forms

Using the <form> tag to wrap elements and then submit the data contained within worked in the 90’s. Hell, it worked in the early 2000’s. But with the advent of AJAX techniques, the <form> element is now really more of a liability than a help. The problems are as follows:

1) Through some method unbeknownst to me, when a <button type="submit"> is clicked, the <form> makes a post to the url found in one of its attributes. There is no hook into this process, not before it goes out and not after it comes back.

2) Add to this that if you have something you need to submit outside of the <form>, you’re pretty much out of luck.

3) Thus you get what’s in your form and nothing else, and you must redirect to a whole new page on return. No nice success or error message showing up smoothly, just a whole new page of HTML (which could include your messages, of course).

4) This leads to the highly unappealing practice of packing hidden elements within the form, and/or, even worse, updating element names when you dynamically create a new element. Shudder.

While there are workarounds for these problems that allow you to use AJAX to catch a form submit, take it from me, it can be a byzantine nightmare to try to customize.
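For the record, the usual workaround looks something like this jQuery sketch (the selector and url are invented):

    $("#orderForm").on("submit", function (e) {
        // stop the form's native post so nothing navigates away
        e.preventDefault();
        // post the harvested values ourselves and handle the reply in place
        $.post("/orders/save", $(this).serialize(), function (result) {
            $("#message").text("Saved!");
        });
    });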

So when you throw out your horrible <form> tags and start using AJAX to post and get data from the server, you will find that, while it did a rather crap job of it, the <form> tag did at least harvest your values from the elements contained. Without it, you must now query each element that contains data you want to post back. While this is exactly where you get the benefit, it is also a pain. Add to this the fact that, if you are using C# MVC, MVC expects that data to come back in a very precise and unintuitive manner, and you are now faced with a rather boring if not daunting task. In fact, it can be so daunting I may do a blog post explaining how to do it.

Luckily, the solution is not only beautiful, it is wonderful and awesome, all wrapped into one. By employing a model binding framework like Knockout.js or one of its lesser cousins, you can create two-way binding between your DOM elements and a JSON object (heretofore referred to as the ViewModel). This means that when you change the value in, say, a text box, the corresponding property on the ViewModel changes as well, and vice versa. So now, when you want to submit your data via AJAX, you don’t talk to the DOM at all. Instead, you just submit the ViewModel object. Again, the versa of this vice is that when you want to update the DOM, you must merely speak, in its native tongue, to the ViewModel.
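Once the state lives on the ViewModel, the submit is a call against it rather than against the DOM. A sketch using Knockout’s ko.toJSON helper (the url and properties are invented):

    var viewModel = {
        firstName: ko.observable("Jack"),
        age: ko.observable(42)
    };
    ko.applyBindings(viewModel);

    function save() {
        // ko.toJSON unwraps every observable and serializes the result,
        // so the server receives plain JSON rather than observable functions
        $.ajax({
            url: "/customers/save",
            type: "POST",
            contentType: "application/json",
            data: ko.toJSON(viewModel)
        });
    }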

Creating this two-way binding emancipates you, to some degree, from the business of mucking around in the DOM. I say to some degree because you most likely will still have to interact with the DOM to perform other actions: clicks, show/hide, fade out with pixels, etc. Still, if you can find a way to abstract that noise, you could quite possibly write tests for your javascript logic without the incredible hassle of spinning up a browser and mocking your HTML.
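For example, if a calculation lives on the ViewModel instead of in a click handler, you can exercise it with no HTML at all (a sketch; the cart and its logic are invented):

    var cart = { items: ko.observableArray([{ price: 10 }, { price: 20 }]) };

    // a computed that depends only on the ViewModel, never on the DOM
    cart.total = ko.computed(function () {
        var sum = 0;
        cart.items().forEach(function (i) { sum += i.price; });
        return sum;
    });

    // testable without spinning up a browser: no selectors, no markup
    console.assert(cart.total() === 30, "total should sum item prices");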

I have glossed over A LOT of the implementation details in favor of a much higher level (and a much shorter post). I would be happy to write a post on the details should anyone ask.

In my next post I will discuss the strategy that I have found to be quite fruitful for employing two-way binding without horribly polluting your HTML or creating a deep and vast plate of javascript spaghetti.

Monday, April 22, 2013

ClipX Still the best

ClipX is a great clipboard extender.  It remembers your last x number of Ctrl+C copies and has them available from a hotkey.  It saves images as well as text and does lots of other cool stuff I never bothered to mess with.  What’s more, it was developed in 2005!  The url is http://bluemars.org/clipx/

Here’s the caveat, though: it will set a couple of hotkeys for you, and one of them is Ctrl+Shift+N, which of course is “Navigate to File by File Name” in ReSharper.  You will be plugging away happy as pie, then Ctrl+Shift+N, and it will open a browser for you rather than the ReSharper command, and MAN will you be pissed.  Then you’ll waste copious amounts of time trying to figure out how the hell to get that shit back.  Well, I just did that step for you, so go to ClipX settings, change that bastard, and live happily ever after.

Thursday, November 1, 2012

Thoughts on being an Asshole

Last night I read a post by Nicholas C. Zakas on being nice, which in turn led me to a post by Tom McNichol on being an asshole, which itself caused me to start to muse on the nature of being an asshole.

The Tom McNichol article was largely about Steve Jobs and how he was successful despite being an asshole, rather than because he was an asshole.  One quote in particular caught my attention:

“Even people who worked with Jobs told me that they'd seen him make people cry many times, but that 80 percent of the time he was right” – actually that was McNichol quoting someone else.  I don’t know how to show that.

So this is concerning because I am right 95% of the time.  And while I’ve only made someone cry once in recent history, I do get the sense that I am perceived to be an asshole.  So my musings are as follows.

I believe there is room here for a discussion on intent; however, I don’t have time, and this is long and boring enough already.  But I will say this: regardless of how I am perceived, I am always surprised and saddened when I find I have hurt someone’s feelings.  Well, almost always.  I guess I see myself as eminently diplomatic.  And others?  They may see me as an asshole.

If one, in a certain situation, is correct, should that person accept an inferior solution, poor reasoning, or a misunderstanding of facts?  I find that very difficult to do.  In fact, at times, I feel that tacitly accepting poor thinking is a disservice to the thinker, and perhaps to the world.  So does the fact that you are right, in effect, make you an asshole?  (I know what you’re thinking: ‘It’s the fact that you think you are right that makes you an asshole’, but let’s just assume for the sake of argument that you are in fact right.)

I understand that it’s also an issue of presentation.  How do you go about correcting a situation?  If you have the ultimate say, it’s easy to be easygoing, but if you do not have the ultimate say and there is a very real possibility that a bad outcome may come to pass, then you really need to be more assertive.  You must make sure your points are taken into consideration and clean up the mess later.

I guess this internal rambling comes down to two points. 

Point 1) A) Do you follow the Christian ethic and turn the other cheek, hope to inherit the earth, and make an unpleasant situation pleasant at all costs, or B) do you follow a more Nietzschean track, which says that pity is a disservice to the person you are pitying, and that one must seek the higher road of intellectual authenticity before pandering to the feelings of those who may be offended?

Point 2) How important is the issue at hand?  I often find that appalling grammar, and sentences which do not actually convey the meaning the speaker believes they do, are in-fucking-tolerable (bad spelling is OK, though).  I believe this may fall into the he’s-an-asshole category.  But hell, it’s sooooo annoying.  Other issues, such as whether we should purchase this car or vote for this politician, may have much greater ramifications.

I guess I could go on here about how, in effect, nothing really matters, as one cannot reasonably predict what might happen next week, much less predict the course of human events, and so one should always opt for making people feel good about themselves.  But the fact is I’m a freakin optimist, a romantic if you will.  I have to believe that doing what is ‘right’ has some effect somehow.  Of course, what is ‘right’ conveniently lines up with my belief system perfectly.  Which causes, perhaps, an endless recursive loop.

Whatever

Thursday, December 15, 2011

A Better Ajax throbber

I’ve been messing with setting up an ajax throbber (the little waiting icon or animated gif.  That’s what they call it.  It ain’t me.).

Most examples show that you should put the show() logic right before you make your ajax call and then the hide() in the ajax complete callback.  This has a number of problems.

1) Unless you have just one single repository through which all ajax calls go (and I do), you end up putting the start-throbber logic all over the place, wherever you make a call.

2) Unless you use a setTimeout function, the throbber will always show, even if just for a flash.  It’s much better to have a pause; if the ajax call is quick enough, then you don’t see the throbber at all.

Number two has a major problem, though, and this is the meat of the post.  If you set your timeout for 1000 (one second) and the ajax call comes back in 500, then the ajax complete will fire before your setTimeout has fired.  Then the setTimeout will fire, show the throbber, and the throbber will never disappear.

One way to deal with this is to put the hide() logic in the repository callback method, but unless you have a single repository, as mentioned above, with a single OnComplete callback that then calls your real callback, you end up with code everywhere again.

So what I decided to do is have a global (or scoped-to-your-module) variable called showThrob, set to false.  Then in $.ajaxSetup’s beforeSend I put my timeout with a check for showThrob, and in $.ajaxSetup’s complete I set showThrob to false and hide the throbber.  So it looks like this:

$.ajaxSetup({
    // Any call finishing lowers the latch and hides the throbber.
    complete: function () {
        dci.showThrob = false;
        $("#ajaxLoading").hide();
    },
    // Before a call goes out, wait 500ms; only show the throbber if the
    // call is still outstanding (i.e. the latch is still raised).
    beforeSend: function () {
        setTimeout(function () {
            if (dci.showThrob) { $("#ajaxLoading").show(); }
        }, 500);
    }
});

In my ajax get (or post) I turn on showThrob:

ajaxGet: function (url, data, callback) {
    // Raise the latch; beforeSend's timeout will show the throbber
    // only if this is still true after the 500ms pause.
    dci.showThrob = true;
    $.get(url, data, function (result) {
        repositoryCallback(result, callback);
    });
}

This is, I believe, called a latch mechanism.