Simplest HttpServer

If you ever just need a simple HTTP server to serve up some HTML/JavaScript
(for instance because access to the local filesystem, camera, microphone etc. is disabled when using the file:// protocol),
just install Python 3.x and run

python -m http.server 8000

from the command line, and voilà.


SmartTarget, Render Engine Language and the Ambient Data Framework

The next major release of SmartTarget will, among other things, include support for REL (Render Engine Language).
One of the major use cases we wanted to support is:

Have a website built in a technology that isn't covered by our Content Delivery platform and show a webpage by:

  • Rendering the page through the Content Delivery Web service
  • Rendering targeted content on that page
  • Having promotions that can trigger on (front-end) visitor profile information

In the 2013 GA version of SDL Tridion, the Content Delivery Web service supports forwarding of claims in the ADF from another web application. This, together with the REL support in SmartTarget, is what makes it all possible.

I've chosen the Play framework as my website technology: it uses my favorite programming language, Scala, it can run in a Java-based application server, and it should be reasonably easy to get the ADF running there.

I'm assuming the Content Delivery Web service is set up and that Component Templates and Publication Targets are configured to publish REL content.

1. Set up the Play framework

  • Follow these instructions to download and install the Play framework (I'm using 2.1, as that is the latest version the play2war plugin supports)
  • Set up the play2war plugin by following the steps here
  • Configure it for the 2.5 servlet specification

2. Getting the Ambient Data Framework to run

Add the dependency on cd_ambient to project/Build.scala.

This requires you to have the CD jars deployed to a Maven or Ivy repository; alternatively, you could put the jars in a lib/ folder in the Play project.

import sbt._
import Keys._
import com.github.play2war.plugin._
object ApplicationBuild extends Build {
  val appName         = "playTridion"
  val appVersion      = "1.0-SNAPSHOT"
 
  val appDependencies = Seq(
    "com.tridion" % "cd_ambient" % "7.1.0-SNAPSHOT"
  )
 
  val main = play.Project(appName, appVersion, appDependencies)
    .settings(Play2WarPlugin.play2WarSettings: _*)
    .settings(
      Play2WarKeys.servletVersion := "2.5"
    )
}

* Note that I'm using the development snapshots here; replace these with the release version numbers once we've released.

Configure a web.xml

To configure the ADF, we need to add the filter information to the web.xml.
The play2war plugin will add any file placed in the war/ folder to the resulting WAR, so we can add the following web.xml in /war/WEB-INF/:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5"
xmlns="http://java.sun.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
  <filter>
    <filter-name>Ambient Data Framework</filter-name>
    <filter-class>com.tridion.ambientdata.web.AmbientDataServletFilter</filter-class>
    <async-supported>true</async-supported>
  </filter>
  <filter-mapping>
    <filter-name>Ambient Data Framework</filter-name>
    <url-pattern>/*</url-pattern>
  </filter-mapping>
  <listener>
    <listener-class>play.core.server.servlet25.Play2Servlet</listener-class>
  </listener>
  <servlet>
    <servlet-name>play</servlet-name>
    <servlet-class>play.core.server.servlet25.Play2Servlet</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>play</servlet-name>
    <url-pattern>/</url-pattern>
  </servlet-mapping>
</web-app>

And we need to add the cd_ambient_conf.xml to /war/WEB-INF/classes.

Test if it works

Play uses the MVC pattern: you set up routes, which are URL patterns with corresponding controller methods that return rendered views.
Let's show a list of claims on a page by setting up a route, a controller, and a view for it in Play.
In /conf/routes add the following line:

GET     /claims                     controllers.Application.claims()

This means that for every request to /claims the Application.claims() method will be called.
The claims() method in the controller looks like this:

def claims = {
    val cs = Option(AmbientDataContext.getCurrentClaimStore)
    Action {
      val claimsFuture = Future(cs match {
        case Some(cs) => {
          cs.getAll().map(c => (c._1.toString -> c._2))
        }
        case None => {
          Logger.error("no claim store found")
          mutable.Map[String, AnyRef]()
        }
      })
      Async {
        claimsFuture.map { claims => Ok(views.html.claims(claims)) }
      }
    }
  }

The first thing I do is get the ClaimStore. Play is asynchronous from the bottom up, and the ADF stores the ClaimStore in a ThreadLocal variable during the request, so once I'm inside the Action {} block I'm actually on another thread.
I wrap it in an Option, which is Scala's way of avoiding having to deal with null values.
Then I simply convert it from a Java HashMap<URI, Object> into a Scala mutable.Map[String, AnyRef] and pass it on to the view:

@(claims: scala.collection.mutable.Map[String, AnyRef])
 
<!DOCTYPE html>
<html>
<head>
    <title>Claims</title>
    <link rel="stylesheet" media="screen" href="@routes.Assets.at("stylesheets/main.css")" />
    <link rel="shortcut icon" type="image/png" href="@routes.Assets.at("images/favicon.png")" />
    <script src="@routes.Assets.at("javascripts/jquery-1.9.0.min.js")" type="text/javascript"></script>
</head>
<body>
    <table>
        <tr>
            <th>Claim URI</th>
            <th>Value</th>
        </tr>
        @claims.map { claim =>
            <tr>
                <td>@claim._1</td>
                <td>@claim._2</td>
            </tr>
        }
</table>
</body>
</html>

Calling the URL will show a table with the names and values of all claims in the ClaimStore.

If you have any cartridges for the ADF, simply add them as appDependencies in your /project/Build.scala, register them in your cd_ambient_conf.xml, and their claims should appear on this page.

  val appDependencies = Seq(
    "com.tridion" % "cd_ambient" % "7.1.0-SNAPSHOT",
    "com.tridion.smarttarget.ambientdata" % "session_cartridge" % "2.1.0-SNAPSHOT"
  )

3. Forwarding claims

Configuring the Web service

To forward claims from your web application to the Content Delivery Web service, you need to serialize them and pass them along with your request as a cookie. On the Web service tier, you need to whitelist the IPs and claims you accept and set the name of the cookie to use in your cd_ambient_conf.xml.

<Configuration>
  ...
  <Security>
    ...
    <WhiteList>
      <IPAddresses>
        <Ip>10.100.0.0-10.100.255.255</Ip>
        <Ip>127.0.0.1</Ip>
      </IPAddresses>
    </WhiteList>
    <GloballyAcceptedClaims>
      <Claim Uri="taf:claim:ambientdata:sessioncartridge:useragent:os" />
      <Claim Uri="taf:claim:ambientdata:sessioncartridge:useragent:os:version" />
      <Claim Uri="taf:claim:ambientdata:sessioncartridge:useragent:browser" />
    </GloballyAcceptedClaims>
  </Security>
  <Cookies>
    ...
    <Cookie Type="ADF" Name="TAFContext" />
  </Cookies>
  ...
</Configuration>

Adding the Claims as a cookie

CD has a helper method to serialize the claims: it converts the claims to JSON and then Base64 encodes them.
It also splits them up to avoid exceeding the maximum length a cookie can have. ClaimCookieSerializer is the Content Delivery class that does the serializing.

  private def getClaimsAsCookies():Seq[(String, String)] = {
    Option(AmbientDataContext.getCurrentClaimStore) match {
      case Some(cs) => {
        val claimCookieSerializer = new ClaimCookieSerializer("TAFContext")
        val serializedClaims = claimCookieSerializer.serializeClaims(cs.getAll())
 
        serializedClaims.map(cookie => {
          ("Cookie", cookie.getName + "=\"" + new String(cookie.getValue) + "\"")
        })
      }
      case None => {
        Logger.error("No claimstore, nothing to serialize")
        Nil
      }
    }
  }

This method returns a sequence of cookie headers that can be added to the request to the Web service.

4. Requesting a Page from Tridion

For simplicity I will just create a route that gets a page based on its TCMURI;
also for simplicity, I will not create a client model.

GET     /page/:id                   controllers.Application.page(id)

Then the controller method will look like this:

  private lazy val baseUrl =  Play.current.configuration.getString("tcdweb").get
  private val pagePattern = "/Pages(PublicationId=%s,ItemId=%s)/PageContent"
  def page(pageId:String) = {
    val headers = getClaimsAsCookies()
 
    Action {
      val pageUri = new TCMURI(pageId)
      val url = baseUrl + pagePattern.format(pageUri.getPublicationId, pageUri.getItemId)
 
      val responseFuture = WS.url(url).withHeaders(headers: _*).get()
      Async {
        responseFuture.map {
          resp => {
            Ok(views.html.page(getContentFromXml(resp.body, "Content")))
          }
        }
      }
    }
  }
 
  private def getContentFromXml(xmlText: String, tagName:String):String = {
    val xml = scala.xml.XML.loadString(xmlText)
    val content = xml \\ tagName filter ( z => z.namespace == "http://schemas.microsoft.com/ado/2007/08/dataservices")
    content.text
  }

The Web service URL is taken from the configuration, and again the getClaimsAsCookies() call is outside the Action to avoid the issue of the ClaimStore living on another thread.
As I'm rendering a complete page, my view template is very simple; the @Html() is there to avoid HTML entities being escaped.

@(content: String)
 
@Html(content)

5. SmartTarget REL Tag example

Just for reference, this is a simple Page Template that uses SmartTarget to output some promotions in a header and a sidebar region:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:tridion="http://www.tridion.com/ContentManager/5.0">
  <head>
    <!-- TemplateParam name="Page" type="boolean" value="true" -->
    <title>@@Page.Title@@</title>
  </head>
  <body style="font-family: Verdana; margin:0; padding:0; border:0;">
    <h1>Smart Target Promotions</h1>
    <tcdl:query publication="@@Publication.ID@@">
      <h1>Header</h1>
      <div>
        <tcdl:promotions region='Header' maxItems='5'>
          <tcdl:itemTemplate>
            <tcdl:promotionalItems>
              <tcdl:itemTemplate>
                <tcdl:ComponentPresentation ComponentURI='##componentUri##' TemplateURI='##templateUri##' Type='Dynamic'/>
              </tcdl:itemTemplate>
            </tcdl:promotionalItems>
          </tcdl:itemTemplate>
          <tcdl:fallbackContent>...
          </tcdl:fallbackContent>
        </tcdl:promotions>
      </div>
      <h1>Sidebar</h1>
      <div>
        <tcdl:promotions region='Sidebar' maxItems='5'>
          <tcdl:itemTemplate>
            <tcdl:promotionalItems>
              <tcdl:itemTemplate>
                <tcdl:ComponentPresentation ComponentURI='##componentUri##' TemplateURI='##templateUri##' Type='Dynamic'/>
              </tcdl:itemTemplate>
            </tcdl:promotionalItems>
          </tcdl:itemTemplate>
          <tcdl:fallbackContent>...
          </tcdl:fallbackContent>
        </tcdl:promotions>
      </div>
    </tcdl:query>
  </body>
</html>

Food for thought

To make my life a little easier, I cheated a bit by using a technology that is compatible with Java, so I could use Content Delivery's method to serialize claims, and of course the Ambient Data Framework itself.
If you were to do this in a technology that is further removed from Java, say JavaScript, Ruby or even PHP, you would need to serialize the claims yourself.
Luckily the format is very well described on SDL Live Content; a search for "JSON Cookie format" will give you all the information you need.
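If you do end up rolling your own, the general mechanics look something like the sketch below: serialize the claims to JSON, Base64 encode them, and chunk the result into cookie-sized pieces. This is only an illustration; the object name, the exact JSON layout, and the chunked cookie naming (TAFContext.0, TAFContext.1, ...) are my assumptions here, so check the Live Content documentation for the real format.

```scala
import java.util.Base64
import java.nio.charset.StandardCharsets.UTF_8

object ClaimCookieSketch {
  // Hypothetical sketch of claim serialization: JSON encode, Base64 encode,
  // then split into chunks so no single cookie exceeds maxLen characters.
  // The real wire format is defined by SDL; only the mechanics are shown.
  def claimsToCookies(claims: Map[String, String],
                      cookieName: String = "TAFContext",
                      maxLen: Int = 3000): Seq[(String, String)] = {
    // naive JSON: fine for plain string values, a real implementation
    // would use a proper JSON library with escaping
    val json = claims
      .map { case (k, v) => "\"%s\":\"%s\"".format(k, v) }
      .mkString("{", ",", "}")
    val encoded = Base64.getEncoder.encodeToString(json.getBytes(UTF_8))
    encoded.grouped(maxLen).zipWithIndex.map { case (chunk, i) =>
      (s"$cookieName.$i", chunk)
    }.toSeq
  }
}
```

Each (name, value) pair can then be sent as a Cookie header, analogous to the ClaimCookieSerializer output earlier.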

If JavaScript is your technology of choice, you might want to take a look at the CT4T project by Angel Puntero and some other MVPs, and the related blog articles by Will Price on Tridion Developer and Alexander Klock on Coded Weapon.

You can already do everything except showing targeted content with the SDL Tridion 2013 release; to be able to use REL for your SmartTarget content, you will have to be patient a little while longer.

The source code for this application can be found in my GitHub repository.
Oh, and besides being a developer at SDL, I'm also the Technical Product Owner for the SmartTarget product line within SDL.

Small Proxy server in node.js

If at some point you need a very simple proxy server, save the following to a file (e.g. proxy.js):

var url = require('url');
var http = require('http');
 
http.createServer().listen(9000).on('request', function(request, response) {
  try {
    var options = url.parse(request.url);
    options.headers = request.headers;
    options.method = request.method;
    options.agent = false;
 
    var connector = http.request(options, function(serverResponse) {
            response.writeHead(serverResponse.statusCode, serverResponse.headers);
            serverResponse.pipe(response, {end:true});
    });
    request.pipe(connector, {end:true});    
  } catch (error) {
    console.log("ERR:" + error);
  }
});

Install node.js and run

 node proxy.js

Git statuses in Fish command shell

I've been playing around with the fish shell for a while now and like it a lot.
The only thing I missed was the Git information I used to have in my prompt.

I solved this by adding a file, ~/.config/fish/config.fish, with the following content:

# in .config/fish/config.fish:
# Fish git prompt
set __fish_git_prompt_showdirtystate 'yes'
set __fish_git_prompt_showstashstate 'yes'
set __fish_git_prompt_showupstream 'yes'
set __fish_git_prompt_color_branch yellow

# Status Chars
set __fish_git_prompt_char_dirtystate '⚡'
set __fish_git_prompt_char_stagedstate '→'
set __fish_git_prompt_char_stashstate '↩'
set __fish_git_prompt_char_upstream_ahead '↑'
set __fish_git_prompt_char_upstream_behind '↓'

function fish_prompt
  set last_status $status

  set_color $fish_color_cwd
  printf '%s' (prompt_pwd)
  set_color normal
  printf '%s ' (__fish_git_prompt)
  set_color $fish_color_cwd
  printf '> '
  set_color normal
end

This should work on both Linux and OS X.
Also, on the Mac I'm using TotalTerminal, which gives me a nice shell dropping down from the top when pressing Ctrl twice.

Scala: extend or import

Yesterday evening at the amsterdam.scala meetup I learned a neat trick. I also learned a lot more about type aliases and type classes, but I have to get my head around those first.

The trick: instead of forcing users of your code to either extend or import, allow them to choose.

  trait MyTrait {
    def myMethod = ..
  }
 
  object MyTrait extends MyTrait

Now my users can either extend their class from the trait or use an import.
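Spelled out with made-up names (Greeter, greet, and the two call sites are just illustrations), both usage styles look like this:

```scala
trait Greeter {
  def greet(name: String): String = s"Hello, $name"
}

// the companion object makes every trait member reachable via an import
object Greeter extends Greeter

// Option 1: mix the trait in
class Welcome extends Greeter {
  def message: String = greet("extender")
}

// Option 2: import from the companion object instead
object Elsewhere {
  import Greeter._
  def message: String = greet("importer")
}
```

Both call sites end up using the exact same implementation; the library author writes the logic once, in the trait.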

First steps into Ceylon

A good friend of mine is part of the Ceylon development team, so when M2 came out with Java interoperability, I decided to have a go, dabble a bit with it, and misuse our friendship by badgering him with questions. He'll get something out of it too: mostly bugs to fix (and the start of an FAQ).

 

Around the same time I was looking into Netty and decided to port the small web server I had made with Netty to Ceylon.

 

During this exercise I ran into a few blocking issues, which is normal at this stage of the project, so I will not mention them; they will have been fixed by the time you read this.

First let's set up the project (I already have a Maven-based Java project; I'm adding my Ceylon sources to /src/main/ceylon, but I'm compiling from the command line).

As we have a dependency on Netty, for now we need to copy it into our local Ceylon repository:

  • Create the required directory structure
    ~/.ceylon/repo/org/jboss/netty/3.4.0/
  • Copy your downloaded Netty jar in there and rename it to org.jboss.netty-3.4.0.jar
  • Create a SHA-1 checksum by running (as sha1sum is not part of the Mac OS tools, I had to build it from source):
    sha1sum org.jboss.netty-3.4.0.jar > org.jboss.netty-3.4.0.jar.sha1
And import it in module.ceylon:
Module module {
    name='net.addictivesoftware.nbws';
    version='0.1';
    dependencies = {
       Import {
           name = 'org.jboss.netty';
           version = '3.4.0';
       }
    };
}

Next, let's have a look at the main class.

The Java version is on GitHub (it will open in a new window/tab for comparison).

    import org.jboss.netty.bootstrap {ServerBootstrap, Bootstrap}
    import org.jboss.netty.channel.socket.nio {NioServerSocketChannelFactory}
    import java.net {InetSocketAddress}
    import java.util.concurrent {Executors{newCachedThreadPool}}
 
    variable Integer port := 9000;
 
    shared void run() {
        String[] args = process.arguments;
        if ((nonempty args) && args.size == 1) {
            Integer p = parseInteger(args.first ? "-1") ? -1;
            if (p != -1) {
                 port := p;
            }
            print("Starting server on port: " + port);
            ServerBootstrap bootstrap = ServerBootstrap(
                NioServerSocketChannelFactory(
                    newCachedThreadPool(),
                    newCachedThreadPool()));
 
            bootstrap.pipelineFactory := HttpServerPipelineFactory();
            bootstrap.bind(InetSocketAddress(port));
            print("Server started");
        } else {
            print("specify port as the only argument");
        }
    }

The first thing you will notice is the imports, in particular the static method import of newCachedThreadPool(). By specifying it as

import your.package {Class{method}}

the method becomes a top-level method; optionally, you can give it a name by doing

import your.package {Class{name=method}}

In Ceylon there is no new keyword; when porting Java code, just remove the new and you'll be fine.

Command-line arguments are passed in via process.arguments. The nonempty check covers null or empty; this construction, in combination with the exists (not null) keyword, makes for pretty readable code.

Something I didn't like, though maybe I'm overlooking something, is the construction that parses the first argument to an Integer.

Integer p = parseInteger(args.first ? "-1") ? -1;
if (p != -1) {
    port := p;
}

This is because args.first returns a String? (the ? meaning it can be null), which I need to check for null (with the ? operator), because parseInteger requires a String (which cannot be null in Ceylon) and returns an Integer?, which I then need to check for null again.
This seems overly complex, and it is going to happen a lot, as you tend to restrict your parameters to not be null while allowing your return value to be null; also, as Java objects can be null, they are mapped to their [object]? counterparts in Ceylon. But like I said, I'm probably missing something obvious.
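For comparison, and since Scala is my usual frame of reference, the same parse-the-first-argument logic can be chained through Options instead of repeated null checks (a sketch, not part of the Ceylon port; portFrom is a made-up name):

```scala
import scala.util.Try

// headOption replaces the nonempty check, Try(...).toOption swallows the
// parse failure, and getOrElse supplies the default port in one chain
def portFrom(args: Seq[String], default: Int = 9000): Int =
  args.headOption.flatMap(s => Try(s.toInt).toOption).getOrElse(default)
```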

Next we need two more classes:

HttpServerPipelineFactory (Java version):

import org.jboss.netty.handler.codec.http{HttpChunkAggregator,HttpRequestDecoder,HttpResponseEncoder}
import org.jboss.netty.handler.stream{ChunkedWriteHandler}
import org.jboss.netty.channel{ChannelPipeline,ChannelPipelineFactory,Channels{staticChannelPipeline=pipeline}}
 
shared class HttpServerPipelineFactory() satisfies ChannelPipelineFactory {
    shared actual ChannelPipeline pipeline = staticChannelPipeline();
 
    pipeline.addLast("decoder", HttpRequestDecoder());
    pipeline.addLast("aggregator", HttpChunkAggregator(65536));
    pipeline.addLast("encoder", HttpResponseEncoder());
    pipeline.addLast("chunkedWriter", ChunkedWriteHandler());
    pipeline.addLast("handler", HttpServerHandler());
 
}

The last addLast call adds our own handler, which will handle the requests:

HttpServerHandler (the Java version is not completely converted to Ceylon yet):

import org.jboss.netty.channel{SimpleChannelUpstreamHandler, ChannelHandlerContext,
                        MessageEvent, Channel, ChannelFuture, ChannelFutureListener{close=iCLOSE}}
import org.jboss.netty.handler.codec.http{HttpRequest, DefaultHttpResponse, HttpResponse}
import org.jboss.netty.handler.codec.http{HttpVersion{http11=iHTTP_1_1}, HttpResponseStatus{ok=iOK}}
import org.jboss.netty.buffer{ChannelBuffers{copiedBuffer}}
import org.jboss.netty.handler.codec.http{HttpHeaders{isKeepAlive}}
 
shared class HttpServerHandler() extends SimpleChannelUpstreamHandler() {
 
    shared actual void messageReceived(ChannelHandlerContext? ctx, MessageEvent? e) {
        if (exists e) {
            if (is HttpRequest request=e.message) {
                HttpResponse response = DefaultHttpResponse(http11, ok);
                Channel ch = e.channel;
                ch.write(response);	
 
                String responseText = "Retrieved: " + request.uri;
                ChannelFuture writeFuture = ch.write(copiedBuffer(responseText, "UTF-8"));
 
                if (!isKeepAlive(request)) {
                    // Close the connection when the whole content is written out.
                    writeFuture.addListener(close);
                }
            }
        }
    }
}

Two things worth mentioning here:
1) The following structure:

if (is HttpRequest request=e.message)

will assign e.message to the request variable and check whether it is of type HttpRequest.
If it is, then inside the if block request is of type HttpRequest, no casts necessary. I really like this.
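Scala's pattern matching gives a similar cast-free flow, for comparison (a sketch with made-up types and a made-up describe function):

```scala
// matching on the runtime type binds an already-typed variable,
// much like Ceylon's `if (is HttpRequest request = e.message)`
def describe(message: Any): String = message match {
  case s: String => "string of length " + s.length // s is a String here
  case n: Int    => "int " + n                     // n is an Int here
  case _         => "something else"
}
```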

 

2) At this point, capitalized class names have a special meaning to the compiler; that is why you need to prefix them with i and assign them a name:

 import org.jboss.netty.handler.codec.http{HttpVersion{http11=iHTTP_1_1}}

 

To conclude, Ceylon allows for shorter and more readable code, which for me means fewer bugs and better maintainability. The Java interop still has some rough edges, but that will improve, and once the SDK is released we will be less dependent on the Java classes anyway.

The entire project is hosted on GitHub for those who want to have a closer look.
To compile the Ceylon classes, run

ceylonc -src src/main/ceylon net.addictivesoftware.nbws

from the commandline.
It will need a compiler built after the 14th of April to work.

to run it:

ceylon net.addictivesoftware.nbws/0.1

 

Next I'll be looking into reading a file from a resource stream, doing some reflection, and running unit tests with JUnit.

Developing for the Cloud

Lately I'm becoming more and more aware of the possibilities out there for developers to build, test, and run applications in the cloud.
So I changed a hobby project I'm working on to be built, tested, and deployed in the cloud, and I'd like to share my experiences here.

Currently I'm using three parties to complete the picture:

  1. GitHub – online Git repositories with nice collaboration features; if you know it, you'll be using it, and if you don't, have a look, you'll never look back
  2. Cloudbees – free PaaS with a Git repository and a Jenkins CI instance; you could also run your apps here, but there was no support for Scala/Lift at the time I looked at it
  3. Cloudfoundry – free-for-now (beta) PaaS from VMware; it supports Scala and Lift, which makes it easy for me

For this to work I needed to make a couple of changes to my application, which I will explain below.

Registering an app at Cloudfoundry

Once you have an account and have installed the command-line tools, you can claim your little application space at the hosted Cloudfoundry (or download and run a Micro Cloud Foundry instance in your local VMware system):

$ vmc target api.cloudfoundry.com
$ vmc login [email]
$ vmc push my-app --path target/war --url my-app.cloudfoundry.com --instances 2 --mem 512M --runtime java
$ vmc create-service mysql my-app-db my-app

What this means is: I'm pushing my WAR-based, Maven-built web application to the hosted Cloudfoundry as two instances, each with 512M of memory, on the Java runtime; I'm also binding it to a MySQL service called my-app-db.
To see the result of my actions I can type vmc apps:

$ vmc apps
+-------------+----+---------+---------------------------+-------------+
| Application | #  | Health  | URLS                      | Services    |
+-------------+----+---------+---------------------------+-------------+
| my-app      | 1  | RUNNING | my-app.cloudfoundry.com   | my-app-db   |
+-------------+----+---------+---------------------------+-------------+

The command-line tools also give me options to check logs and crashes, start and stop the application, etc.

Database connectivity

As cloud applications/instances/services are usually not easily identifiable by an IP address, your application needs to find out about services (like a database) in a different manner.
I'm doing this by using the org.cloudfoundry.runtime library to create a MySQL service for me:

  object CloudFoundryConnection extends ConnectionManager {
    def newConnection(name: ConnectionIdentifier): Box[Connection] = {
      try {
        import org.cloudfoundry.runtime.env._
        import org.cloudfoundry.runtime.service.relational._
        Full(new MysqlServiceCreator(new CloudEnvironment())
                        .createSingletonService().service.getConnection())
 
      } catch {
        case e : Exception => Empty
      }
    }
    def releaseConnection(conn: Connection) {conn.close}
  }

This object will return a connection if the app is running in the cloud; otherwise I fall back to the normal database connection, in my case an H2 database which I use for development.
So I also changed the DB initialisation part of Lift in its Boot class.
The order of DB discovery is now: 1) JNDI connection, 2) Cloudfoundry, 3) connection in the properties file, 4) H2 database.

if (!DB.jndiJdbcConnAvailable_?) {
      val connection = CloudFoundryConnection
      connection.newConnection(DefaultConnectionIdentifier) match {
        case Full(connection) => {
          DB.defineConnectionManager(DefaultConnectionIdentifier, connection)
        }
        case Empty => {
          val vendor = new StandardDBVendor(
            Props.get("db.class") openOr "org.h2.Driver",
            Props.get("db.url") openOr "jdbc:h2:lift_proto.db;AUTO_SERVER=TRUE",
            Props.get("db.user"),
            Props.get("db.password")
          )
          LiftRules.unloadHooks.append(vendor.closeAllConnections_! _)
          DB.defineConnectionManager(DefaultConnectionIdentifier, vendor)
        }
      }
    }

Maven integration

Cloudfoundry also provides a Maven plugin that allows most of this functionality to run as part of a Maven goal, which is ideal when you want to deploy from your Jenkins build.
Here are the changes to my pom.xml.
Add the repository for the org.cloudfoundry.runtime dependency:

<repository>
    <id>springsource-milestones</id>
    <name>SpringSource Milestones Proxy</name>
    <url>https://oss.sonatype.org/content/repositories/springsource-milestones</url>
</repository>

Add the dependency itself

<dependency>
    <groupId>org.cloudfoundry</groupId>
    <artifactId>cloudfoundry-runtime</artifactId>
    <version>0.6.1</version>
</dependency>

For the plugin add the plugin repository

<pluginRepository>
    <id>repository.springframework.maven.milestone</id>
    <name>Spring Framework Maven Milestone Repository</name>
    <url>http://maven.springframework.org/milestone</url>
</pluginRepository>

and the plugin itself

<plugin>
    <groupId>org.cloudfoundry</groupId>
    <artifactId>cf-maven-plugin</artifactId>
    <version>1.0.0.M1</version>
    <configuration>
        <server>mycloudfoundry-instance</server>
        <target>http://api.cloudfoundry.com</target>
        <url>medicate.cloudfoundry.com</url>
        <memory>512</memory>
        <instances>2</instances>
    </configuration>
</plugin>

Username and password information can be specified in the servers section of your .m2/settings.xml file:

<server>
    <id>mycloudfoundry-instance</id>
    <username>email@address.com</username>
    <password>s3cr3t</password>
</server>

Now you can update your deployed application with a simple

$ mvn cf:update

Continuous Integration on Cloudbees

First, set up an account at cloudbees.com, create a repository and a Jenkins instance, and make sure Cloudbees has your public key to allow Git pushes.

Then set up a Maven 2/3 build in Jenkins, setting the SCM section to watch for changes in your Cloudbees Git repository.

Make sure your build name doesn't have any spaces in it; Cloudbees will choke on that, at least with the Scala compiler.

As we need the login information from .m2/settings.xml, you can upload it through WebDAV to the repository-[username].forge.cloudbees.com/private directory.

Then, in our Jenkins build, set /private/[username]/settings.xml in the alternative Maven settings field (behind the Advanced button in the Maven section).

I've set the Maven goals to clean scala:doc cf:update, meaning it will compile, test, create the Scala docs, and deploy the application to Cloudfoundry.

To finish it up, you need to add a remote for Cloudbees in your local Git repository and push your changes; this will then trigger a build and deploy your app.

git remote add cloudbees ssh://git@git.cloudbees.com/[username]/my-app.git

To conclude: besides the two I've mentioned, there are a number of solutions out there that all work in similar ways, but each still has its own requirements. If you want to run Java, Ruby, Python, PHP, or node.js applications with MySQL, Postgres, MongoDB, CouchDB, or RabbitMQ, you can get it to work on one or more of these providers.

Last but not least, here is a picture from the actual application I'm building to visualize everything I've talked about above:
Develop for the Cloud

Unittesting custom tags with Selenium Webdriver

In the company I work for, we have some complex tags that are not easily unit-testable because of dependencies between them. The best option would be to refactor them; unfortunately, as that would break backward compatibility, we have a deprecation process to go through before we can actually do that.

But code with hardly any tests always makes me feel uncomfortable, so I decided to write some (almost integration) tests that run as part of the normal unit test suite.
I'm doing this by placing the tags on a JSP, running it on an embedded Jetty server, and using Selenium WebDriver to test the resulting HTML.

First the base class, which takes care of starting/stopping the embedded Jetty Server and setting up the web driver.

public abstract class BaseTest {
    private static int port = 9595;
    private static Server server = new Server(port);
 
    protected static String baseUrl = "http://localhost:" + port + "/";
    protected static WebDriver driver;
 
    @BeforeClass
    public static void setup() throws Exception {
 
        //configure jetty as an exploded war
        URL webAppUrl = BaseTest.class.getClassLoader().getResource("/");
        WebAppContext wac = new WebAppContext();
        wac.setContextPath("/");
        wac.setWar(webAppUrl.toExternalForm());
 
        server.setHandler(wac);
        server.setStopAtShutdown(true); //makes sure the server is stopped even if @AfterClass is never reached
        server.start();
 
        //the HtmlUnitDriver doesn't pop up a browser window, the rest of the drivers do
        driver = new HtmlUnitDriver();
    }
 
    @AfterClass
    public static void teardown() throws Exception {
        driver.close();
        server.stop();
    }
}

In my project I have set up a resources directory that contains a WEB-INF/ directory with a web.xml holding my custom tag definitions, and my TLD file, just as you would in a normal web project.
WebDriver comes with a set of drivers for different browsers (Firefox, Chrome, IE, iPhone, Android, etc.) for testing browser compatibility. I don’t need that here; the tags I’m testing do not output any HTML/CSS that might require it.
The HtmlUnitDriver runs in-process and doesn’t pop up a browser window, so it is ideal for my purpose.
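For reference, the taglib wiring in that web.xml could look something like the sketch below. The TLD filename custom.tld is a placeholder; the taglib URI matches the one used in the test JSP.

```xml
<!-- Sketch of WEB-INF/web.xml (servlet 2.5); custom.tld is a placeholder name -->
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="2.5">
    <jsp-config>
        <taglib>
            <taglib-uri>/customtaglib</taglib-uri>
            <taglib-location>/WEB-INF/custom.tld</taglib-location>
        </taglib>
    </jsp-config>
</web-app>
```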

Writing a test now becomes very easy:

In this case I have a custom tag that, when invoked, simply outputs “Hello World”.

my test jsp:

<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"  isELIgnored="false" %>
<%@ taglib uri="/customtaglib" prefix="custom" %>
<html>
  <head>
    <title>Hello World</title>
  </head>        
  <body>
    <p><custom:helloworld /></p>
  </body>
</html>

The Test class:

public class HelloWorldTest extends BaseTest {
 
    @Test
    public void testHelloWorldTag() {
        driver.get(baseUrl + "helloworld.jsp");
        Assert.assertEquals("Hello World", driver.getTitle());
 
        WebElement element = driver.findElement(By.cssSelector("body p"));
        Assert.assertEquals("Hello World", element.getText());
    }
}

Testing the resulting HTML is made very easy by the WebElement findElement() and List<WebElement> findElements() methods, combined with predicates from the By class, for instance By.cssSelector(), By.tagName(), etc.
It is also possible to test click-throughs and form submits by calling click() or submit() on a WebElement.

An added bonus is that although the tag code runs in the embedded Jetty server and not in the unit tests themselves, its coverage is still measured by our code coverage tools.

One thing to look out for: to make the embedded Jetty work with JSPs, you need to add Jetty’s version of the JSP spec artifacts to your project, not the javax.servlet.* ones.
My Maven dependencies look like this:

    <dependencies>
        <dependency>
            <groupId>org.seleniumhq.selenium</groupId>
            <artifactId>selenium-java</artifactId>
            <version>2.12.0</version>
        </dependency>
        <dependency>
            <groupId>org.mortbay.jetty</groupId>
            <artifactId>jetty</artifactId>
            <version>6.1.22</version>
        </dependency>
        <dependency>
            <groupId>org.mortbay.jetty</groupId>
            <artifactId>jsp-api-2.1</artifactId>
            <version>6.1.14</version>
        </dependency>
        <dependency>
            <groupId>org.mortbay.jetty</groupId>
            <artifactId>jsp-2.1</artifactId>
            <version>6.1.14</version>
        </dependency>
        <dependency>
             <groupId>junit</groupId>
             <artifactId>junit</artifactId>
             <version>4.7</version>
             <scope>test</scope>
        </dependency>
    </dependencies>

Show promotions based on any custom data with Tridion SmartTarget

In this blog post I hope to show how easy it is to use your own data within SDL SmartTarget. To execute this example you do need a working SmartTarget installation.

OK, let’s get started by imagining we have a web store that sells books and stores information about our visitors, and our marketeer would like to target certain offers to customers based on which categories they visit the most.
I’m not explaining the customer implementation; I’m assuming a method from the business layer can be called (the source code contains a dummy implementation that returns random categories).

Since SDL Tridion 2011 there is an Ambient Data Framework (ADF) on the Content Delivery side. This framework allows sharing of data between applications. You can create so-called cartridges that retrieve and store data within a request or session scope by specifying input and/or output claims.
The ADF will figure out any dependencies (if cartridge 1 has an input claim that is an output claim of cartridge 2, the ADF will make sure cartridge 2 is run first).
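That ordering is essentially a topological sort of the cartridges by their claims. The following is purely an illustration of the idea, not actual ADF code; the cartridge names and claim strings are made up:

```java
import java.util.*;

// Illustration of the ordering rule the ADF applies: a cartridge may only run
// once all of its input claims have been produced by earlier cartridges.
public class ClaimOrdering {

    // inputs/outputs map a cartridge name to the claims it consumes/produces
    public static List<String> order(Map<String, Set<String>> inputs,
                                     Map<String, Set<String>> outputs) {
        List<String> result = new ArrayList<>();
        Set<String> available = new HashSet<>();
        Set<String> remaining = new HashSet<>(inputs.keySet());
        while (!remaining.isEmpty()) {
            boolean progressed = false;
            for (Iterator<String> it = remaining.iterator(); it.hasNext(); ) {
                String cartridge = it.next();
                if (available.containsAll(inputs.get(cartridge))) {
                    result.add(cartridge);                    // all inputs satisfied: run it
                    available.addAll(outputs.get(cartridge)); // its outputs become available
                    it.remove();
                    progressed = true;
                }
            }
            if (!progressed) {
                throw new IllegalStateException("cyclic claim dependency between cartridges");
            }
        }
        return result;
    }
}
```

With cartridge1 consuming a claim that cartridge2 produces, order() returns cartridge2 before cartridge1, mirroring the example above.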

SDL SmartTarget comes with a session cartridge, which provides information like the client’s browser, OS, IP address, etc.
SDL Audience Management comes with its own cartridge, which provides information about the logged-in Audience Manager visitor.

What we need to do is:

  1. Get the most looked at category into the Ambient Data Framework
  2. Configure SmartTarget to use this data when doing a query for promotions
  3. Add it as a trigger type so we can use it as a trigger in a promotion

Writing a cartridge for the Ambient Data Framework
A cartridge consists of a configuration file and one or more ClaimProcessor classes.
The configuration file contains the input and output claims and which classes implement the functionality; in our case just one output claim and one implementing class:

<CartridgeDefinition Uri="com:tridion:smarttarget:samples:recommendedbooks" Description="Recommended books cartridge.">
	<ClaimDefinitions>
		<ClaimDefinition Uri="com:tridion:smarttarget:samples:recommendbooks:category"
                         Scope="REQUEST"
                         Subject="com:tridion:smarttarget:samples:recommendbooks"
                         Description="The most looked-at category" />
	</ClaimDefinitions>
	<ClaimProcessorDefinitions>
		<ClaimProcessorDefinition Uri="com:tridion:smarttarget:samples:recommendbooks"
				ImplementationClass="com.tridion.smarttarget.samples.recommendbooks.RecommendBooksClaimProcessor"
				Description="This will put the most looked at category into the claimstore">
			<RequestStart>
				<InputClaims />
				<OutputClaims>
					<ClaimDefinition Uri="com:tridion:smarttarget:samples:recommendbooks:category" />
				</OutputClaims>
			</RequestStart>
		</ClaimProcessorDefinition>
	</ClaimProcessorDefinitions>
</CartridgeDefinition>

and the ClaimProcessor looks like this:

public class RecommendBooksClaimProcessor extends AbstractClaimProcessor {
 
    private final static URI RECOMMENDED_CATEGORY_URI = URI.create("com:tridion:smarttarget:samples:recommendbooks:category");
 
    @Override
    public void onRequestStart(ClaimStore claimStore) throws AmbientDataException {
        String category = YourBusinessModel.getMostLookedAtCategory();
        claimStore.put(RECOMMENDED_CATEGORY_URI, category, true);
    }
}

So, very simply, we get the category from the business layer and put it on the ADF. The boolean at the end defines whether or not other cartridges can overwrite this value; true means it cannot be overwritten.

Then the cartridge needs to be added to cd_ambient_conf.xml:

<Configuration>
    <Cartridges>
        <Cartridge File="/recommendedbooks_cartridge.xml" />
        <!-- other cartridges removed for brevity -->
    </Cartridges>
</Configuration>

Step 2: configuring SmartTarget
Here we configure how the category will be added to the query, with a prefix to avoid name conflicts.

<Configuration Version="1.1.0">
    <!-- the rest of the configuration left out for brevity -->
    ..
    <SmartTarget>
        ..
        <AmbientData>
            ..
            <Prefixes>
                ..
                <com_tridion_smarttarget_samples_recommendbooks>rb</com_tridion_smarttarget_samples_recommendbooks>
            </Prefixes>
        </AmbientData>
    </SmartTarget>
</Configuration>

The name of the element should reflect the base part of the URI for the data stored in the ADF; the last part of the URI together with the prefix becomes the parameter name.
So in our case, storing “Science Fiction” in

com:tridion:smarttarget:samples:recommendbooks:category

will end up as

rb_category=Science+Fiction

in the query
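That naming rule can be illustrated with a few lines of Java. This is just a sketch of the convention described above, not SmartTarget code:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Illustration of how a claim URI plus the configured prefix maps to a query parameter.
public class PrefixDemo {

    static String toQueryParam(String claimUri, String prefix) {
        // the last URI segment becomes the parameter name, prefixed to avoid name clashes
        String name = claimUri.substring(claimUri.lastIndexOf(':') + 1);
        return prefix + "_" + name;
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        String param = toQueryParam("com:tridion:smarttarget:samples:recommendbooks:category", "rb");
        String value = URLEncoder.encode("Science Fiction", "UTF-8");
        System.out.println(param + "=" + value); // rb_category=Science+Fiction
    }
}
```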

Step 3: configuring the trigger-type
In the config directory of your Fredhopper installation there is a trigger-types.xml; in it all the custom trigger-types are configured.

<trigger-types xmlns="http://www.fredhopper.com/schema/knowledge-model/trigger/type/1.0">
    <!-- other trigger-types removed for brevity -->
    ...
    <trigger-type url-param="rb_category" name="Category (most looked at)" basetype="text">
        <list-of-values multiselect="true">
            <value>Fantasy</value>
            <value>Fiction</value>
            <value>Romance</value>
            <value>Science Fiction</value>
            <value>Thriller</value>
        </list-of-values>
    </trigger-type>
</trigger-types>

The multiselect="true" will create a list of checkboxes in the GUI, allowing a promotion to trigger on multiple categories.

If your list of categories (or any other enumeration) changes a lot, there is a REST interface that allows you to modify them at runtime (in Audience Management we use that, for instance, to propagate changes in Audience Management segments to SmartTarget).

The source code for the cartridge is available on GitHub, with the sample configurations.
To run it, you’ll need Maven and the cd_ambient.jar from the CD installation in your Maven repository.
Run mvn package, and copy the resulting jar to the lib directory of the website where you have the Ambient Data Framework configured.

I hope this has given you some ideas on how to better integrate SmartTarget with your own business data.