Test Driven Development in software projects

A couple of weeks ago I was in Oslo as part of my participation in the graduate programme at Visma Consulting. A fantastic trip where we, among other things, were introduced to the Visma group and met the Norwegian graduates ("Nytt Krutt", as they are called up there). Part of the trip was also spent on a course in Test Driven Development (TDD), where we were first asked to write a simple board game without any thought of testing. We were then asked to rewrite the game, this time writing unit tests for the game before writing the code itself. Finally we were introduced to mocking and to writing unit tests that way. For each of these approaches we were asked to program everything together with at least one other developer (pair programming).
Test Driven Development as a method

As part of my bachelor's degree I briefly became acquainted with eXtreme Programming (XP). Several of the ideas that were/are part of that method also seem to reappear in TDD. Both are agile software development methods, but as the name suggests, TDD focuses almost exclusively on testing. The main parallel between TDD and XP is a concept from XP called "test first", which says, among other things, that when you are about to write new code, you should start by writing the unit test. This idea permeates the whole development process in TDD:

Test Driven Development Cycle

At first it seemed pointless to me to write code for a unit test before writing any code that can be tested; in practice it is strongly reminiscent of the classic "chicken and egg" question. The basic idea behind writing the test first is that the developer gains an understanding of the requirements for what is to be developed, through immersion in use cases and user stories, before any code is written. That idea is quite good, but in my view you should start one step earlier and instead make it part of the use case or user story to attach a description of how each feature should be tested, or rather a description of the structure of each test. That way the developer still gains an understanding of what is to be developed, without spending time writing a test that does not yet have a reference to the subject of the test: the code to be written. Once the code has been written, the test can then be written based on, among other things, the use case.
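
The cycle can be sketched in a few lines of C#. This is my own minimal, hypothetical example (the Dice class is not from the course): the test is formulated first and fails to compile until the production code exists, after which just enough code is written to make it pass.

```csharp
using System;

// Step 2 of the cycle: the minimal production code, written
// only after the test below had been formulated.
public class Dice
{
    private readonly Random _random = new Random();

    // Returns a value from 1 to 6, like a normal die.
    public int Roll()
    {
        return _random.Next(1, 7);
    }
}

public static class DiceTests
{
    // Step 1 of the cycle: this test existed before the Dice
    // class and therefore failed ("red") to begin with.
    public static void Roll_IsAlwaysBetweenOneAndSix()
    {
        var dice = new Dice();
        for (int i = 0; i < 1000; i++)
        {
            int value = dice.Roll();
            if (value < 1 || value > 6)
                throw new Exception("Roll() returned " + value);
        }
    }

    public static void Main()
    {
        Roll_IsAlwaysBetweenOneAndSix();
        Console.WriteLine("Test passed");
    }
}
```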
Testing in software development projects
Let me first be completely honest: through my previous jobs as a software developer in various contexts I have unfortunately never come across a .NET based application that had tests incorporated. I should add that I have been very positively surprised by the focus on testing at Visma, and I think it shines through in the code as well.

There can be many reasons why testing is not prioritised from the start of a software development project. The project triangle shows the correlation between changes in time, cost (budget) and performance (goals) in a project. In this context testing can be seen as a performance goal and will therefore be affected by changes in time and cost. Likewise, reducing testing in a project frees up both time and resources that can be allocated to other requirements or goals of the project.

Basically, everything in a project can be described as a prioritisation between those three dimensions. The problem is that the premise of an agile software project in particular (and often all other kinds of projects too) is precisely that neither time, cost nor performance is constant. This also means that as soon as the complexity of the software increases, more defects will inevitably appear, at the same time as the requirements grow, the time to develop shrinks and the budget does the same. The number of defects, and thereby the time and resources needed to find and fix them, should be reducible by prioritising testing from the start of the project, as an important part of the development process and of the architecture. Defects can then be traced, identified and fixed faster, even as complexity rises and the dimensions change. A "bonus" is that when the project is finished, the software is handed over and the solution goes into operation, a solution built with a focus on testing will, all else being equal, also be less resource-intensive to maintain and extend, in both the short and the long run.
Improved software architecture

One of the things I noticed during the course in Oslo was that the code that came out of writing the test before the code itself had a far more well-thought-out, clear and "correct" architecture. Introducing mocking, and writing unit tests against interfaces as far as possible, that is, writing the test first as far as possible, also made it necessary to design the application logic (methods and so on) to handle abstractions through interfaces instead of concrete objects. This lays the groundwork for using dependency injection, and the architecture automatically ended up following several of the SOLID principles, especially the last one, the "Dependency inversion principle". In other words, using TDD automatically "forced" us to shape the software with a better architecture.
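As a sketch of what that looks like in practice (the `IDice`, `FakeDice` and `Game` names are my own hypothetical examples, not from the course), the game logic depends only on an interface, so a test can inject a predictable test double instead of a real random die:

```csharp
using System;

// The abstraction the game logic depends on.
public interface IDice
{
    int Roll();
}

// A hand-rolled test double that always returns a fixed value,
// making the test deterministic (a mocking framework could
// generate the same thing).
public class FakeDice : IDice
{
    private readonly int _value;
    public FakeDice(int value) { _value = value; }
    public int Roll() { return _value; }
}

// The production logic receives its dependency through the
// constructor (dependency injection) and never creates a
// concrete die itself.
public class Game
{
    private readonly IDice _dice;
    public Game(IDice dice) { _dice = dice; }

    // Moves a player and returns the new position.
    public int Move(int currentPosition)
    {
        return currentPosition + _dice.Roll();
    }
}

public static class GameTests
{
    public static void Main()
    {
        var game = new Game(new FakeDice(4));
        if (game.Move(10) != 14)
            throw new Exception("Move() did not add the roll");
        Console.WriteLine("Test passed");
    }
}
```

Because `Game` only knows the `IDice` abstraction, swapping the fake for a real implementation at runtime is exactly the kind of wiring a dependency injection container automates.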
Conclusion

There are a number of good elements in TDD, and a general focus on making testing an integrated part of software development projects is important in itself. As with other agile development methods, though, it may be necessary to adapt the method to the situation at hand. Overall, I have arrived at the following advantages and disadvantages of using TDD, and testing in general, in software projects.
Advantages:

  • A greater general focus on testing.
  • Better software architecture.
  • Easier maintenance and troubleshooting of the software, also after the project has ended.
Disadvantages:

  • An interesting general paradox of these kinds of software development methods (TDD, XP, Scrum and so on) is their tendency to be far too rigid in practice, despite the good intentions behind them and despite the fact that the methods try precisely to make projects less rigid. In many contexts you will therefore "borrow" the parts that provide the most meaning/value for the project and leave the rest.
  • TDD cannot stand alone and focuses too narrowly on unit tests alone. My suggestion of incorporating testing as part of the use case or similar might address this problem, since there you could describe any kind of test to be used for the case.
  • A risk of "over-engineering" the solution. It is easy to spend too much time making the software architecture more extensive and complex than necessary.

Indexing files in Solr using Tika

After installing the latest Solr release I noticed that the schema.xml file (the XML file that holds the information about how Solr should index and search your data) was already set up for something called Apache Tika, which can be used to index all sorts of documents and other files that hold all kinds of metadata (audio files, for instance).

See this post to learn how to install Solr on Windows: Installing Apache Solr on Windows

The great thing about Tika is that you don't need to do much to make it work – Tika does everything for you, or almost everything. You can also set it up to suit your needs just by adding a config XML file in the Solr directory and a reference in the Solr config XML file. I tried to get it to work, but found it quite difficult, partly because there wasn't much help to be found on Google. Because of all my problems getting it to work I created this post on StackOverflow. Before I got a reply, though, I found the solution myself, buried deep inside some forum posts about SolrNet.

This is how to get it to work inside Solr, using SolrNet as the client:

  1. Install Solr using the link from my earlier post above or something similar – the main thing is that you install it using the pre-built Solr release.
  2. Create a new folder called "lib" inside your Solr install folder.
  3. Copy the apache-solr-cell-3.4.0.jar file from the "dist" folder in the Solr zip file to the newly created "lib" folder in the folder where you installed Solr.
  4. Copy the contents of contrib\extraction\lib from the Solr zip to the same newly created "lib" folder.

Now Tika is installed in Solr! Remember to go to http://localhost:8080/solr and confirm that it is installed correctly.

To use it in a .NET client application you can use the newest release of SolrNet (currently the 0.4.0.2002 beta version) and add the DLLs to your .NET project (all of them – seriously!). This is an example of how to use it in C#:

// "Document" is the .NET type your Solr schema is mapped to
Startup.Init<Document>("YOUR-SOLR-SERVICE-PATH");
var solr = ServiceLocator.Current.GetInstance<ISolrOperations<Document>>();
 
using (FileStream fileStream = File.OpenRead("FILE-PATH-FOR-THE-FILE-TO-BE-INDEXED"))
{
   var response =
      solr.Extract(
         new ExtractParameters(fileStream, "doc1")
         {
            ExtractFormat = ExtractFormat.Text,
            ExtractOnly = false
         });
}
 
solr.Commit();

The response will hold all the metadata that has been extracted from the file using Apache Tika. ExtractParameters is given a FileStream object and an ID for the Solr index (here just "doc1"; it can be anything as long as it is unique). The ExtractOnly property can be set to true if you don't want Tika to index the data, but only want it to extract the metadata from the file that is sent. The file is streamed to the Solr API using HTTP POST. You can read more about that here: http://wiki.apache.org/solr/ExtractingRequestHandler

In the above code, the data sent to Solr is indexed in the last line, where the data is committed to Solr. If you would like Solr to index and commit the files as they are sent to the service, you can set the AutoCommit property to true in the initialization of ExtractParameters:

...
   var response =
      solr.Extract(
         new ExtractParameters(fileStream, "doc1")
         {
            ExtractFormat = ExtractFormat.Text,
            ExtractOnly = false,
            AutoCommit = true
         });
...

Because the commit is done every time you send a new file to the Solr API, you can search during the indexing and, of course, you don't need to call the solr.Commit() method after indexing.

You need a request handler inside your solrconfig.xml (inside {your-solr-install-path}/conf) to make Solr understand the request from the client. Below is an example of how solrconfig.xml looks when you haven't changed anything after installing Solr. See this for further information about configuring Tika inside Solr: http://wiki.apache.org/solr/ExtractingRequestHandler

  <requestHandler name="/update/extract" 
                  startup="lazy"
                  class="solr.extraction.ExtractingRequestHandler" >
    <lst name="defaults">
      <!-- All the main content goes into "text"... if you need to return
           the extracted text or do highlighting, use a stored field. -->
      <str name="fmap.content">text</str>
      <str name="lowernames">true</str>
      <str name="uprefix">ignored_</str>
 
      <!-- capture link hrefs but ignore div attributes -->
      <str name="captureAttr">true</str>
      <str name="fmap.a">links</str>
      <str name="fmap.div">ignored_</str>
    </lst>
  </requestHandler>

Your Solr schema.xml file (inside {your-solr-install-path}/conf) needs some fields in order to index the metadata from the files you send to Solr. You can provide the fields you need and index/store the metadata as required for the files you need to index. These are the fields that Solr is installed with:

   <field name="id" type="string" indexed="true" stored="true" required="true" /> 
   <field name="sku" type="text_en_splitting_tight" indexed="true" stored="true" omitNorms="true"/>
   <field name="name" type="text_general" indexed="true" stored="true"/>
   <field name="alphaNameSort" type="alphaOnlySort" indexed="true" stored="false"/>
   <field name="manu" type="text_general" indexed="true" stored="true" omitNorms="true"/>
   <field name="cat" type="string" indexed="true" stored="true" multiValued="true"/>
   <field name="features" type="text_general" indexed="true" stored="true" multiValued="true"/>
   <field name="includes" type="text_general" indexed="true" stored="true" termVectors="true" termPositions="true" termOffsets="true" />
 
   <field name="weight" type="float" indexed="true" stored="true"/>
   <field name="price"  type="float" indexed="true" stored="true"/>
   <field name="popularity" type="int" indexed="true" stored="true" />
   <field name="inStock" type="boolean" indexed="true" stored="true" />
 
   <!--
   The following store examples are used to demonstrate the various ways one might _CHOOSE_ to
    implement spatial.  It is highly unlikely that you would ever have ALL of these fields defined.
    -->
   <field name="store" type="location" indexed="true" stored="true"/>
 
   <!-- Common metadata fields, named specifically to match up with
     SolrCell metadata when parsing rich documents such as Word, PDF.
     Some fields are multiValued only because Tika currently may return
     multiple values for them.
   -->
   <field name="title" type="text_general" indexed="true" stored="true" multiValued="true"/>
   <field name="subject" type="text_general" indexed="true" stored="true"/>
   <field name="description" type="text_general" indexed="true" stored="true"/>
   <field name="comments" type="text_general" indexed="true" stored="true"/>
   <field name="author" type="text_general" indexed="true" stored="true"/>
   <field name="keywords" type="text_general" indexed="true" stored="true"/>
   <field name="category" type="text_general" indexed="true" stored="true"/>
   <field name="content_type" type="string" indexed="true" stored="true" multiValued="true"/>
   <field name="last_modified" type="date" indexed="true" stored="true"/>
   <field name="links" type="string" indexed="true" stored="true" multiValued="true"/>
 
   <!-- catchall field, containing all other searchable text fields (implemented
        via copyField further on in this schema  -->
   <field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>
 
   <!-- catchall text field that indexes tokens both normally and in reverse for efficient
        leading wildcard queries. -->
   <field name="text_rev" type="text_general_rev" indexed="true" stored="false" multiValued="true"/>
 
   <!-- non-tokenized version of manufacturer to make it easier to sort or group
        results by manufacturer.  copied from "manu" via copyField -->
   <field name="manu_exact" type="string" indexed="true" stored="false"/>
 
   <field name="payloads" type="payloads" indexed="true" stored="true"/>

The fields above are a one-size-fits-all setup, so with this you can index both audio and document files.

Supported formats in Tika can be found here:
http://tika.apache.org/1.0/formats.html

Installing Apache Solr on Windows

Apache Solr is a Java-based enterprise search platform built on top of the Apache Lucene search engine (the two projects have now been merged). It makes all the great search engine features available through a RESTful API (HTTP/XML and JSON): indexing, full-text search, hit highlighting, faceted search, dynamic clustering, database integration, rich document (e.g., Word, PDF) handling, and geospatial search. The best part is that it's open source and free for all.

I have gathered a lot of know-how about this great tool, both for home and business purposes, and in this blog post I would like to share how you can install Solr on a Windows system (Windows Vista / 7 / Server 2008 (R2)). This guide is written because I had a hard time finding a guide on this subject out on the web. So here we go.

First of all, you need to install a web server that can run Java servlets. I use the Apache Tomcat web server. Download the latest Tomcat server (the MSI installer is perfect for this – Binary Distributions -> Core -> 32-bit/64-bit Windows Service Installer) and install it on your system: http://tomcat.apache.org/download-70.cgi (right now the latest version is 7.0). After it is installed, check that it is correctly installed and running (go to http://localhost:8080/).

After you have checked that it is running correctly, go to the directory where you installed Tomcat and open the server.xml file in the conf folder (conf\server.xml). In it, add this attribute to the first Connector XML tag (Server -> Service -> Connector): URIEncoding="UTF-8".

Download and unzip the latest version of Solr into a temporary folder on your system – could be something like “C:/temp/solr” (I have experienced some problems running version 3.5 on Tomcat – use the 3.4 version for now): http://www.apache.org/dyn/closer.cgi/lucene/solr/

Create a folder on your file system where you would like Solr to be installed. Copy the content from the “C:\temp\solr\example\solr” folder into the folder you just created.

Stop the Tomcat service. If you installed using the MSI installer you can do this by going to the Tomcat folder inside All Programs in the start menu and click on “Configure Tomcat” (you might need to do this by right-clicking on it and choose to “Run as administrator”). Keep the Tomcat configuration window open after you have stopped the service. We are going to use it later.

Copy the *solr*.war file from “C:\temp\solr\dist” to the webapps folder inside your Tomcat installation folder. The .war file is called apache-solr-3.4.0.war for instance when you have the 3.4 version of Solr. When the file is copied, rename it to “solr.war”.

Now we need to configure Tomcat so that it recognizes the Solr install folder you created earlier. This is done by adding a Java option: open the Tomcat configuration window mentioned earlier and go to the "Java" tab. Here you have a "Java Options" textbox with a lot of lines in it. At the bottom of this textbox add the line "-Dsolr.solr.home={solr-install-folder}", where {solr-install-folder} is the path to your Solr install folder.

In the Tomcat configuration window, start the Tomcat service again. After starting the service, try to open a web-browser and navigate to this site (the local Solr administration site): http://localhost:8080/solr/admin. If the site starts nicely, Solr has been installed on your system.

CSV Parser

I’m currently working on a project where I needed a C# console application that was able to read through an Excel CSV (Comma Separated Values) file.

Basically, the CSV file format is just a text file with rows, where each column is separated by a comma (surprise!) or a semicolon. Besides the separator, the data in each column can optionally be “framed” by quotation marks.

I therefore started out with the following code, just as I would when reading through a normal text file:

try
{
    using (StreamReader readFile = new StreamReader(path))
    {
        // Do something here...
    }
}
catch (Exception e)
{
    // Do some error handling here...
}

This is, as you can see, really straightforward. First I declare a StreamReader object in a using statement. Through the “readFile” object I am then able to navigate the file. The using statement is important, as it does the cleanup for me by calling StreamReader.Dispose() when the statement finishes. I always wrap this kind of code in a try…catch, because when you work with files, errors occasionally happen.

Now, to read the data from the CSV file I add the following lines of code inside the using statement:

List<string[]> parsedData = new List<string[]>();
string line;
string[] row;

while ((line = readFile.ReadLine()) != null)
{
    row = line.Split(',');
    parsedData.Add(row);
}

It just declares a new List that can hold arrays of strings; the line and row variables are needed when traversing the file. I then use the readFile object to call the ReadLine() method of the StreamReader class in a while loop. When there are no more lines in the file, the line variable will be null. Inside the while loop I use the string.Split() method to split the line into an array of strings (my columns), and I then add this array to my List object (parsedData).

The problem then was that I didn’t know exactly what encoding the file would be in. What to do? I settled on a solution where I tell the StreamReader what encoding the file probably has, and it will then open the file with that encoding. This can be done by adding a parameter when calling the constructor of the StreamReader class, like this:

using (StreamReader readFile = new StreamReader(path, encoding))
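
If the file may begin with a byte order mark (BOM), the StreamReader constructor can additionally be asked to detect the encoding from it, falling back to the encoding you supply. A small sketch (the temporary file is just for illustration):

```csharp
using System;
using System.IO;
using System.Text;

class EncodingSketch
{
    static void Main()
    {
        string path = Path.Combine(Path.GetTempPath(), "bom-sample.csv");

        // Write a small file with a UTF-8 byte order mark.
        File.WriteAllText(path, "a;b;c", new UTF8Encoding(true));

        // Fall back to the given encoding, but let the reader
        // override it if a BOM is found at the start of the file.
        using (var readFile = new StreamReader(path, Encoding.Default, true))
        {
            Console.WriteLine(readFile.ReadLine()); // prints "a;b;c"
        }

        File.Delete(path);
    }
}
```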

Finally, all of this can be wrapped in a nice method. I also added a check to make sure that the file I want to parse actually exists. There you go:

public static List<string[]> ParseCSV(string path, Encoding encoding, char splitter)
{
    if (!File.Exists(path))
        return null;

    List<string[]> parsedData = new List<string[]>();

    try
    {
        using (StreamReader readFile = new StreamReader(path, encoding))
        {
            string line;
            string[] row;

            while ((line = readFile.ReadLine()) != null)
            {
                row = line.Split(splitter);
                parsedData.Add(row);
            }
        }
    }
    catch (Exception e)
    {
        // Do some error handling here...
    }

    return parsedData;
}
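
One caveat: a plain string.Split() breaks if a quoted column itself contains the separator. A minimal, hand-rolled quote-aware splitter (my own sketch, not part of the project code) could look like this:

```csharp
using System;
using System.Collections.Generic;
using System.Text;

static class CsvLine
{
    // Splits one CSV line, treating separators inside double
    // quotes as part of the field. Doubled quotes ("") inside a
    // quoted field become a single quote character.
    public static string[] Split(string line, char splitter)
    {
        var fields = new List<string>();
        var current = new StringBuilder();
        bool inQuotes = false;

        for (int i = 0; i < line.Length; i++)
        {
            char c = line[i];
            if (c == '"')
            {
                if (inQuotes && i + 1 < line.Length && line[i + 1] == '"')
                {
                    current.Append('"'); // escaped quote
                    i++;
                }
                else
                {
                    inQuotes = !inQuotes; // toggle quoted mode
                }
            }
            else if (c == splitter && !inQuotes)
            {
                fields.Add(current.ToString());
                current.Clear();
            }
            else
            {
                current.Append(c);
            }
        }
        fields.Add(current.ToString());
        return fields.ToArray();
    }

    static void Main()
    {
        string[] row = Split("\"a,b\",c", ',');
        Console.WriteLine(row.Length); // 2
        Console.WriteLine(row[0]);     // a,b
    }
}
```

Swapping the line.Split(splitter) call in ParseCSV for this method would make the parser handle quoted fields as well.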

The man without Facebook

I will start this post by telling you that I have a confession to make. Not a really nasty one, I think, but to some it might seem quite odd: I don’t have an account on Facebook, never had and never will. There, I said it to the world. “So why is that?”, you might ask.

Before going into all the pros and cons of Facebook and other “social networks”, I will start by sharing some experiences from my everyday life, my life without Facebook.

First, a little data. I have recently been told that around 60% of the adult population (18-65) in Denmark, where I live, has a Facebook account (that’s around 2 million people). I don’t know how reliable those statistics are; I once heard, for example, that one of my friends made an account for his dog. The number of young people in Denmark who have an account and use it frequently is declining. I have recently read articles in local newspapers (yes, they still exist) about young people who, after admitting to being somewhat addicted to constantly checking their account, have abandoned Facebook after realizing that there are other ways of talking with friends than through the Internet.

Almost every one of my friends still has a Facebook account today. When I started my current study (MSc in IT and Business) a little over a year ago and began to talk with my new fellow students, the picture was the same. Every single one of them had an account, so when I told them that I didn’t, they were quite stunned. How can a guy whose main interest is IT not be on Facebook? I have always explained politely why I don’t use Facebook, and realized that when people think it through, many of them don’t really have an explanation for using it and letting it be part of their everyday life. Some days ago, when I was at school for the first lecture of a new course, the teacher asked the class how many in the room had an account on Facebook, MySpace, Twitter and so on. Again I was the only one without a Facebook account. The girl sitting next to me said: “Oh yeah! You’re the guy who’s not on Facebook.” That kind of info spreads fast, I guess.

My close friends stopped asking me some time ago when I was going to get on Facebook. I think it’s great to know that the friends who are close to me, and have known me for many years, know that whenever they would like to see me they can phone me, send a text message (SMS) or write to me on Messenger. For me, that’s the way it should be. Of course, from time to time there are events on Facebook that just fly past me, and that’s quite okay.

People will say that I am on MySpace, that I have this blog, and that I have a profile on LinkedIn. Let me explain. On MySpace I tried for some time to publish some of my own music, but I haven’t used it for a long time. LinkedIn is a way to show others my professional profile; it is purely professional, and that’s what I like about it. This blog first started as a way for me to keep a log of tips and tricks in programming so that I could find them again when I needed them. It has since evolved, but I still have some great draft articles on programming that will be published when I find the time to finish them.

For me it’s quite simple. If you don’t have social contact with people in real life, what meaningful social interaction will come out of using Facebook and being linked even with people I no longer need to be social with? Throughout a person’s life, as the person changes, so do his or her social connections. Some you grow apart from, and others change their life’s focus and thereby their social circle. There is a reason why you stop seeing people in the first place. The other thing is that I don’t need to be social all the time and share my private life with the whole world. I like the fact that, most of the time on the Internet, I control what private data I give away about myself. People always tell me that I can decide what private information I share on Facebook, but over the sum of information shared from the time you start using Facebook until the time you stop or “delete” your profile, you have no control over where that information goes and what it is used for. That’s a fact, and it’s quite frightening to me, and again and again I have seen evidence of the lack of security on Facebook. Some days ago I read that the Danish government stated that it will never be possible for Danish citizens to contact the government through Facebook. Who thought of that as an opportunity in the first place? I can think of many reasons why contacting the local government through Facebook is a really bad idea. And being “friends” with the government? No way!

In general, it’s not about whether social networks are a bad idea. Facebook simply doesn’t add any value to my life, so why use it?

All things considered, with all the pros and cons in mind, I think I have made a rational decision in never being on Facebook. Now I will go out and be social in the real world with some real friends… 🙂

All users count

This morning a link landed in my mailbox to a blog post by Dorte Toft which I find very interesting. It is about her New Year’s wish that IT should be for everyone, including those who are not super users or don’t have the energy to dig into a big new IT system.

The blog post can be read here: http://www.business.dk/tech-mobil/nytaarsoenske-it-tumper

Basically, the post is about the importance of having all users of an IT system on board from the very first day of an IT project, and perhaps even earlier. I could not agree more. As Dorte Toft also writes, there are a number of examples, in both the private and the public sector, where the diversity of the users who will later be connected to a system has not been taken into account.

I do not, however, agree with Dorte Toft that you should not expect to find understanding for this among the consultants. It must surely be in everyone’s interest on a project that it becomes a success all around? Moreover, I believe it is important for a consultant on a project not only to serve his own interests, but of course first and foremost to give the customer what the customer actually needs, both when it comes to technology and usability, and the interaction between the two.

I got a cautionary example of all this in the autumn, in connection with my studies. In one course we had a guest lecturer from Dansk IT explain how they think the public sector in Denmark should be digitized. The conversation quickly turned to one of the big problems that arises again and again when this is to be realized, namely the very different abilities of the users of a public IT service. The guest lecturer from Dansk IT thought that the public sector should aim at getting “a man on the moon” in its digitization, and then live with the fact “that not everyone becomes as clever as Niels Bohr” – in other words, he meant you should aim to think as big as possible and then simply accept that not all Danes would be able to use the digital IT services. When I afterwards pointed out that it could not possibly be in Denmark’s interest to leave a very large part of the Danish population behind with this mindset, I got the answer that this was something you had to live with if you wanted to think as big as possible. The guest lecturer also had a general idea that things should be done now, and as fast as possible. Time and again with large public IT projects, it seems more important to do things quickly than to do them properly once and for all, and to spend the time needed to get the product right. To me, the Danish digital land registration system (“digital tinglysning”) is a cautionary example of exactly that.

See also this article and be surprised: http://www.business.dk/tech-mobil/danmark-er-klar-til-faa-mand-paa-maanen

Note, by the way, the comparison in the article above between Carlsberg and McAfee as their “good example”. It is frankly foolish. You simply cannot compare a manufacturing company and an IT company in that way. It is like comparing pears and bananas: they are both fruits, but that is where the similarities end.

Much of this is, as I see it, a good explanation of why so many public digitization projects here at home do not become the success they were intended to be. As Dorte Toft writes in her blog post, it is important to include everyone. Hopefully everyone can see the logic in not cutting a large part of the Danish population off from using public services, for example because of their IT skills? For the digitization of the public sector to succeed, it must be made possible for everyone to use the digital services made available. But the same goes, as Dorte Toft also touches upon, for those who serve the citizens, both in the municipal administrations and among the staff at the hospitals: the services they will spend a large part of their working day on should not primarily be designed for users who have time to immerse themselves in the many options the system offers, but should quickly and simply enable them to do their job in a way that is satisfactory for all parties.

In my work I have often had success stories when all types of users were involved early in the process of an IT project. That can be a real eye-opener for how to shape an IT system so that everyone can use it. Later in the process, unfortunately, it often turns into a prioritization of which type of user the system should really be made for, and this is where the chain often comes off. It is hard to make IT systems that focus on all types of users out of the box, but you can give users the possibility to choose what is important to them, instead of the developer choosing for them from the start.

WCF-services hosted in a Windows service

Some time ago I ran into one of those problems with no obvious solution. I was on a project where I needed a WCF service for a Silverlight solution. I started out by making a service hosted in IIS. That worked fine with a connection to a database, but the service was also going to open some physical files in a specific folder on the Windows server where it was hosted. This could not be done, as the service didn’t have access permissions to folders outside its own. So what to do?

I searched for a solution to the problem, and one that I found was to use Windows impersonation in the service. This "simulates" a user logged in on the server, with the rights given to that user. To me this wasn't an optimal solution for a number of reasons, first of all because it didn't seem very secure, so I quickly started to look for another way around the problem.

The solution I came up with was this: since I had administrator rights on the server, I could host my WCF-services inside a Windows service and install it as such. This way you can run the WCF-service outside of IIS, and you can run multiple WCF-services in the same Windows service as well. Another great thing is that if you install it right (as I will show below) it can get access to the file system while running as a service under Windows. The example below shows what the constructor of a Windows service can look like:

        public Service()
        {
            InitializeComponent();
 
            this.ServiceName = "Name of your Service";
            this.EventLog.Log = "Application";
 
            this.AutoLog = true;
            this.CanHandlePowerEvent = true;
            this.CanHandleSessionChangeEvent = true;
            this.CanPauseAndContinue = true;
            this.CanShutdown = true;
            this.CanStop = true;
        }

What happens here is that I provide the name of the service (in Windows this will be the name of the service) and tell it to write to the Application event log. I also tell it to log automatically to that event log when something happens to the service, and that it is okay to stop, shut down, pause and continue, among others.

The code example below shows the Main-method of the service. As with any other Windows application this has to be provided as the starting point of the application.

        static void Main(string[] args)
        {
            try
            {
                ServiceBase.Run(new Service());
            }
            catch (Exception ex)
            {
                // Some logging or error handling here...
            }
        }

Here I make the service run. Remember the try-catch block: if something bad happens while the service is being initialized, it won't bring everything down. Also notice the inheritance from the ServiceBase class. This is what makes our class a service, and it is needed later when we install it and make it run. To make the service do something when Windows service events are fired, i.e. when it starts, stops, continues, pauses or shuts down (for instance when the server it runs on shuts down), you can override the OnStart, OnStop, OnContinue, OnPause or OnShutdown methods respectively.

Up until now I haven't shown how I combine the Windows service and the WCF-services, but my next code examples show just that. First of all you need your WCF-services to start when the Windows service starts. This is done by overriding the OnStart method as mentioned above, and inside it hosting the WCF-services in service hosts and opening them. I learned that a good way to keep control of each service host, with its WCF-service inside, is to declare them as fields in the class. An example of this is provided below.

    partial class Service : ServiceBase
    {
        public ServiceHost serviceHostFirstWcfService = null;
        public ServiceHost serviceHostSecondWcfService = null;
        ....
    }

In my OnStart method I do the following:

        protected override void OnStart(string[] args)
        {
            try
            {
                if (this.serviceHostFirstWcfService != null)
                    this.serviceHostFirstWcfService.Close();
 
                this.serviceHostFirstWcfService = new ServiceHost(typeof(FirstWcfService));
 
                this.serviceHostFirstWcfService.Open();
 
                if (this.serviceHostSecondWcfService != null)
                    this.serviceHostSecondWcfService.Close();
 
                this.serviceHostSecondWcfService = new ServiceHost(typeof(SecondWcfService));
 
                this.serviceHostSecondWcfService.Open();
            }
            catch (Exception ex)
            {
                // Some exception handling...
            }
        }

First, for every WCF-service I check whether it has been initialized before. If it has, I close it, so I don't get an exception thrown when I try to open a service host that is already open. Then I create the service host, given the type of WCF-service it should host, and open it. As with everything else, I wrap it all in a try-catch block to prevent the Windows service from crashing.
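The natural counterpart is to close the hosts again when the service stops. A minimal sketch of a matching OnStop override, assuming the same host fields declared earlier (this is my own sketch along the same lines, not code from the original service):

```csharp
        protected override void OnStop()
        {
            try
            {
                // Close each host so the endpoints are released
                // before the Windows service itself stops.
                if (this.serviceHostFirstWcfService != null)
                {
                    this.serviceHostFirstWcfService.Close();
                    this.serviceHostFirstWcfService = null;
                }

                if (this.serviceHostSecondWcfService != null)
                {
                    this.serviceHostSecondWcfService.Close();
                    this.serviceHostSecondWcfService = null;
                }
            }
            catch (Exception ex)
            {
                // Some exception handling...
            }
        }
```

Setting the fields back to null afterwards means the null checks in OnStart behave correctly if the service is later continued or restarted.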

Finally you need to provide the normal configuration settings for your WCF-services inside an App.config file in the Windows service class library. The App.config file can look like the one below:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <system.serviceModel>
        <behaviors>
            <serviceBehaviors>
                <behavior name="">
                    <serviceMetadata httpGetEnabled="true" />
                    <serviceDebug includeExceptionDetailInFaults="false" />
                </behavior>
            </serviceBehaviors>
        </behaviors>
        <services>
            <service name="MyNamespace.FirstWcfService">
                <endpoint address="" binding="basicHttpBinding" contract="MyNamespace.IFirstWcfService">
                    <identity>
                        <dns value="Something" />
                    </identity>
                </endpoint>
                <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
                <host>
                    <baseAddresses>
                        <add baseAddress="http://MyFullBaseAddress/MyNamespace/FirstWcfService/" />
                    </baseAddresses>
                </host>
            </service>
            <service name="MyNamespace.SecondWcfService">
                <endpoint address="" binding="basicHttpBinding" contract="MyNamespace.ISecondWcfService">
                    <identity>
                        <dns value="Something" />
                    </identity>
                </endpoint>
                <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
                <host>
                    <baseAddresses>
                        <add baseAddress="http://MyFullBaseAddress/MyNamespace/SecondWcfService/" />
                    </baseAddresses>
                </host>
            </service>
        </services>
    </system.serviceModel>
</configuration>

There you have it! The service works fine like this, and if you have provided the correct base address the service can be called there. Remember that the address you provide in the baseAddress property is the full address of the service, so when you call it you don't need the .svc extension. Tip: you can provide multiple endpoints if you need multiple bindings.
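As a sketch of that tip, a second endpoint with another binding for the same service could look like the fragment below (all names are placeholders mirroring the configuration above, and the net.tcp address is just an example):

```xml
<service name="MyNamespace.FirstWcfService">
    <!-- The existing HTTP endpoint -->
    <endpoint address="" binding="basicHttpBinding" contract="MyNamespace.IFirstWcfService" />
    <!-- A hypothetical extra endpoint exposing the same contract over TCP -->
    <endpoint address="net.tcp://MyFullBaseAddress:8523/MyNamespace/FirstWcfService/"
              binding="netTcpBinding" contract="MyNamespace.IFirstWcfService" />
</service>
```

Each endpoint shares the one service implementation; clients simply pick the binding that suits them.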

The service worked great until I started to test it with the Silverlight application and realized that the service, of course, needed a clientaccesspolicy.xml file to grant access rights to the WCF-service. But where do you put this when your service isn't on IIS and you can't just drop the access policy file in the root directory? The key is to understand how the Silverlight application asks for the clientaccesspolicy.xml file when calling the WCF-service. When a Silverlight application calls a service in another domain, it automatically assumes that the clientaccesspolicy file is located in the root of the domain where the service resides (the base address provided in the configuration file, without the service name). If it isn't there, you get the standard 404 error ("File not found" or something like that) when you try to call the service from the application. So if you can "broadcast" a clientaccesspolicy.xml file on the root address, you have the solution, but how? The answer is to add a new WCF-service that streams the XML file as a message over HTTP GET. The interface for the new service should look like the one below. The UriTemplate property of the WebGetAttribute tells what the URI should be; in our example it is the name of the clientaccesspolicy file.

    [ServiceContract(Namespace = "http://YourService")]
    public interface IClientAccessPolicyService
    {
        [OperationContract]
        [WebGet(UriTemplate = "clientaccesspolicy.xml")]
        Message ProvidePolicyFile();
    }

Then, in the implementation of the service interface, you just open and read the clientaccesspolicy.xml file as a stream, load the contents into a StringReader, wrap that in an XmlReader, create a new instance of System.ServiceModel.Channels.Message from the XmlReader, and return it.

    public class ClientAccessPolicyService : IClientAccessPolicyService
    {
        public System.ServiceModel.Channels.Message ProvidePolicyFile()
        {
            try
            {
                string fileContent = string.Empty;
 
                StreamReader fileStream = new StreamReader(@"C:\thefullpathtoyourclientaccesspolicy.xml");
                fileContent = fileStream.ReadToEnd();
                fileStream.Close();
 
                StringReader sr = new StringReader(fileContent);
                XmlReader reader = XmlReader.Create(sr);
 
                System.ServiceModel.Channels.Message result = Message.CreateMessage(MessageVersion.None, "", reader);
                return result;
            }
            catch (Exception ex)
            {
                return null;
            }
        }
    }

There you have it! When you now call your service from a Silverlight application, you will see that it gets the clientaccesspolicy.xml from the service.
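One detail the snippets above leave implicit is how the policy service answers at the domain root. Because the operation uses WebGetAttribute, its endpoint needs webHttpBinding with the webHttp endpoint behavior, hosted at the root base address, and its ServiceHost must be opened in OnStart like the others. A sketch of what that configuration could look like (service and behavior names here are placeholders of my own):

```xml
<services>
    <service name="MyNamespace.ClientAccessPolicyService">
        <!-- Empty relative address + root base address means the UriTemplate
             "clientaccesspolicy.xml" resolves to
             http://MyFullBaseAddress/clientaccesspolicy.xml -->
        <endpoint address="" binding="webHttpBinding"
                  contract="MyNamespace.IClientAccessPolicyService"
                  behaviorConfiguration="webBehavior" />
        <host>
            <baseAddresses>
                <add baseAddress="http://MyFullBaseAddress/" />
            </baseAddresses>
        </host>
    </service>
</services>
<behaviors>
    <endpointBehaviors>
        <behavior name="webBehavior">
            <webHttp />
        </behavior>
    </endpointBehaviors>
</behaviors>
```

Without the webHttp behavior on the endpoint, the WebGet attribute is ignored and the file won't be served at the expected URI.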

After my service is done I can install it using a Windows installer or a basic console application. I chose to install it just by running a console application and you can see how this is done in this example: Installing a Windows service

Installing a Windows service

To install a service you need the ServiceProcessInstaller and ServiceInstaller classes. These handle the installation of the Windows service as a process with the information you provide. The Account property on the ServiceProcessInstaller tells what security context the service should run under once installed. I use LocalSystem here because I need my service to have access to the file system, but you should of course choose what best fits your situation. The Username and Password properties tell what user the service should run as.

For the ServiceInstaller you can specify how and when your service starts with the StartType property, set the service name, display name and a description, and then assign the ServiceProcessInstaller instance as parent of the ServiceInstaller instance. Next you specify the context the service should be installed in: the full path to the executable that holds the service, as a command line, plus a path to a log file if needed. Then you install it by calling the Install method on the ServiceInstaller instance. Finally, because we need to start the service, we take control of the installed one with the ServiceController class and call its Start method. That's it! Your service is installed.

ServiceProcessInstaller processInstaller =
    new ServiceProcessInstaller();
processInstaller.Account = ServiceAccount.LocalSystem;
processInstaller.Username = null;
processInstaller.Password = null;
 
ServiceInstaller serviceInstaller =
    new ServiceInstaller();
serviceInstaller.StartType = ServiceStartMode.Automatic;
serviceInstaller.ServiceName = ServiceName;
serviceInstaller.DisplayName = ServiceDisplayName;
serviceInstaller.Description = ServiceDescription;
serviceInstaller.Parent = processInstaller; 
 
String path = String.Format("/assemblypath={0}", ServiceExecutablePath);
String[] cmdline = { path };
String logFilePath = @"C:\pathtoyourlogfile.txt";
serviceInstaller.Context = new System.Configuration.Install.InstallContext(logFilePath, cmdline);
 
System.Collections.Specialized.ListDictionary state =
    new System.Collections.Specialized.ListDictionary();
serviceInstaller.Install(state);
 
ServiceController serviceController =
    new ServiceController(serviceInstaller.ServiceName);
 
serviceController.Start();

If you need to uninstall your service, for instance to update it (that's right, the service executable cannot be altered as long as the service is installed), you can use the code below. It should be quite straightforward: you take control of the service as in the code above, figure out if it is running, stop it if so (it has to be stopped before we can uninstall it), wait for it to stop, give the needed context information, and then uninstall by calling Uninstall on the ServiceInstaller object.

ServiceInstaller serviceInstaller =
    new ServiceInstaller();
serviceInstaller.ServiceName = ServiceName;
 
ServiceController serviceController =
    new ServiceController(serviceInstaller.ServiceName);
 
if ((serviceController.Status == ServiceControllerStatus.Running)
    || (serviceController.Status == ServiceControllerStatus.Paused))
{
    serviceController.Stop();
 
    serviceController.WaitForStatus(ServiceControllerStatus.Stopped, new TimeSpan(0, 0, 0, 15));
 
    serviceController.Close();
}
 
String path = String.Format("/assemblypath={0}", ServiceExecutablePath);
String[] cmdline = { path };
String logFilePath = @"C:\pathtoyourlogfile.txt";
serviceInstaller.Context = new System.Configuration.Install.InstallContext(logFilePath, cmdline);
 
serviceInstaller.Uninstall(null);

Boot up the virtual harddisk

In the fall of last year (2009) I attended a session with Scott Hanselman in Copenhagen, Denmark, where he talked about the, back then, brand new features in ASP.NET MVC 2.0. After getting to the conference room at the hotel 15 minutes late, because he went to the wrong hotel (yes, Copenhagen can be a large city to travel in 🙂), he started the session by telling about a cool new feature in Windows 7: you can make a bootable virtual harddisk that you attach in the Disk Manager and then boot from, just as if it were a normally installed OS.

This is cool because you don't have to run it in Virtual PC (as an x86 installation; why can't MS just make the free version of VPC x64 based!?), you can distribute it to multiple PCs, and because it boots normally it utilizes the hardware in the PC like the normal OS does. In that way it becomes the golden mean between a rigid Virtual PC installation and the mess of having two OS installations.

I knew I had to try this some time, and yesterday I got the opportunity. I needed to run Windows Server 2008 R2 Enterprise beside my Windows 7 installation to use some of the server's services for a little project I'm working on. Here I will explain what I did and what I learned.

First I have to tell you a very important thing I learned after my PC crashed and didn't want to boot again, even though I thought I had done everything exactly as in Scott's blog posts: YOU NEED TO HAVE WINDOWS 7 ULTIMATE TO GET THIS TO WORK! Otherwise you will find yourself stuck, as I was, with a PC that gives you a strange boot error, a Logitech USB/Bluetooth keyboard that doesn't work outside of Windows, and no PS2 keyboard anywhere near you to skip the error with, because PS2 was "so outdated". From now on, whenever I do something like this, I will have a PS2 keyboard no less than 1 meter away from me 🙂

The first thing to do is to make the bootable VHD (Virtual Hard Disk) from a Windows installation ISO. This blog from Scott will tell you what to do:
Step-By-Step: Turning a Windows 7 DVD or ISO into a Bootable VHD Virtual Machine

So I followed Scott's description of how to make the VHD and got it done. Remember the /SKU:SERVERENTERPRISE setting in the cscript command if you want an Enterprise server.

Then I moved on to this post from Scott telling about how to get it to appear when booting up the PC:
Less Virtual, More Machine – Windows 7 and the magic of Boot to VHD

If you already have a VHD you just need to go down to the paragraph titled "Setting up your Windows Boot Menu to boot to an Existing VHD" and start from there. This is quite straightforward, but remember one thing: go to the Disk Manager and attach the VHD before doing any of the things in this post! Otherwise it won't work.
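For reference, the bcdedit part of that post boils down to commands along these lines (run from an elevated command prompt; the description and VHD path are placeholders, and {guid} must be replaced with the GUID printed by the /copy command):

```
bcdedit /copy {current} /d "Windows Server 2008 R2 VHD"
bcdedit /set {guid} device vhd=[C:]\VHDs\server2008r2.vhd
bcdedit /set {guid} osdevice vhd=[C:]\VHDs\server2008r2.vhd
bcdedit /set {guid} detecthal on
```

The /copy command clones your current boot entry, the two vhd settings point the new entry at the virtual disk, and detecthal lets the new OS redetect the hardware on first boot.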

After running the bcdedit commands, verify that your VHD appears in the Windows Boot Manager (and be absolutely sure that everything looks right before restarting your system!). If the Windows Boot Manager finds that one of the entries is incorrect it simply won't boot anything, not even the OS entry that is correct! When you boot up now you will have a new OS in the Windows Boot Manager menu that you can choose. The first time you start the new OS it will have an Administrator user and prompt you to give it a password.

Then you are good to go! 🙂 The coolest thing about all this is that you have all your harddisks in the new system (the VHD will now be your C-drive) and you can move files from one harddisk to another. Even cooler: you can take the VHD, move it to another PC, add it to the Windows Boot Manager using the last post from Scott that I mentioned, and you have a fully functioning OS with everything you need installed.

Have fun with it 🙂

New features in .NET 4.0

I just found this on MSDN where you can see what’s new in the whole .NET 4.0 framework:
What’s New in the .NET Framework 4

It's still the RC framework, so be patient with it 🙂 As it says above, it's "subject to change". But I think the list is as close to final as it can be.

One of the cool things is optional parameters in methods, something C++ had that I had forgotten I missed in C#. This really saves time.

Another cool thing is Enum.TryParse. I have missed that so much, as it has been a pain to work with enums. Now I can safely parse enums as I like.

The new String.IsNullOrWhiteSpace method is also a very cool thing.
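A small sketch of the three features mentioned above in one place (the names Greet and Color are just made up for the example):

```csharp
using System;

class NewFeaturesDemo
{
    enum Color { Red, Green, Blue }

    // Optional parameters: callers can omit arguments that have defaults.
    static string Greet(string name, string greeting = "Hello")
    {
        return greeting + ", " + name;
    }

    static void Main()
    {
        Console.WriteLine(Greet("World"));            // uses the default: "Hello, World"
        Console.WriteLine(Greet("World", "Goodbye")); // overrides it: "Goodbye, World"

        // Enum.TryParse: no more try-catch around Enum.Parse.
        Color color;
        if (Enum.TryParse("Green", out color))
            Console.WriteLine(color);                 // prints "Green"

        // String.IsNullOrWhiteSpace also catches strings of only spaces/tabs.
        Console.WriteLine(String.IsNullOrWhiteSpace("   ")); // prints "True"
    }
}
```

Each of these removes a little piece of boilerplate that used to clutter everyday C# code.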