An early draft of the MVC 1.0 (JSR 371) specification has just been released for review. I'm very happy to see that people are interested in this JSR and are starting to take a deeper look at what MVC 1.0 will look like. At this point any kind of feedback is very valuable and welcome.

One of the questions asked most frequently is which view technologies MVC 1.0 will support. The answer is that MVC has built-in support for JSP and Facelets. This seems to disappoint people, especially because JSP isn't considered a "modern" view technology anymore.

I want to note a few things here. First, JSP isn't as bad as most people think. Sure, it has its weaknesses (like any other technology), but there are good arguments for using JSP. Every IDE, for example, supports JSP out of the box, and I think decent tooling is something many of the other view technologies lack. One of the major problems with JSP is that it allows you to do really bad things like embedding Java code in the view. But if you use JSP in a reasonable way, it works very well. And if you don't like JSP, you can choose Facelets, which has been around in the JSF world for quite some time and offers many great templating concepts.

However, I agree that there are many other great view technologies out there. And I fully understand people who prefer libraries like Thymeleaf as they offer many great features you don't get with JSP or Facelets.

One of the things I like most about MVC 1.0 is that it actually doesn't matter what view technologies it supports out of the box. Why? Because it is so easy to integrate arbitrary view technologies with MVC. You don't believe me? Ok, here is an example to convince you.

Creating a custom view engine

One of the most popular template engines for Node.js is Jade. Similar to Haml in the Ruby world, it is based on the idea of using indentation to define blocks, which basically means that you don't have to close elements manually anymore. There is even a Java implementation of the Jade language called jade4j. So let's have a look at the steps required to use jade4j as the view technology for your MVC 1.0 based application.

To integrate a template engine with MVC, you have to implement the ViewEngine interface, which consists of only two methods. MVC 1.0 uses CDI to discover all view engine implementations, so there is no need to register the implementation anywhere. Just add @ApplicationScoped to your implementation and MVC will find it.

OK, let's have a look at the code:

@ApplicationScoped
public class JadeViewEngine implements ViewEngine {

  @Inject
  private ServletContext servletContext;

  @Override
  public boolean supports(String view) {
    return view.endsWith(".jade");
  }

  @Override
  public void processView(ViewEngineContext context) throws ViewEngineException {

    try {

      String viewName = "/WEB-INF/views/" + context.getView();
      URL template = servletContext.getResource(viewName);

      String html = Jade4J.render(template, context.getModels(), true);

      context.getResponse().getWriter().write(html);

    } catch (IOException e) {
      throw new ViewEngineException(e);
    }

  }

}

That's all! This is everything you need to integrate jade4j with MVC 1.0. That's not much code, is it? Of course this code can be (and should be) improved with regard to error handling, caching and so on. But it is a great example to demonstrate the basics.
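To give an idea of what the caching improvement could look like, here is a small, hypothetical helper (not part of MVC or jade4j; all names are mine) that makes sure each template is only loaded and compiled once:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative cache: the loader is invoked only on the first lookup of a
// view name, subsequent requests reuse the cached result.
public class TemplateCache<T> {

  private final Map<String, T> cache = new ConcurrentHashMap<>();
  private final Function<String, T> loader;

  public TemplateCache(Function<String, T> loader) {
    this.loader = loader;
  }

  // computeIfAbsent guarantees the loader runs at most once per view name
  public T get(String viewName) {
    return cache.computeIfAbsent(viewName, loader);
  }
}
```

In the JadeViewEngine the loader function would compile the Jade template for the given view name; cache invalidation and size limits are left out for brevity.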

The supports() method is used by the MVC implementation to find the view engine responsible for the view name returned by a controller method or specified with the @View annotation. In this example our view engine will process all views with the file extension .jade.
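Just to illustrate the mechanics, a hypothetical MVC implementation could resolve the responsible engine roughly like this (the ViewEngine interface is redeclared here in simplified form for this sketch only):

```java
import java.util.List;
import java.util.Optional;

// Simplified stand-in for the real interface, just for this sketch
interface ViewEngine {
  boolean supports(String view);
}

// Hypothetical resolution logic: ask every discovered engine whether it
// supports the view name and pick the first one that does.
public class ViewEngineResolver {

  public static Optional<ViewEngine> resolve(List<ViewEngine> engines, String view) {
    return engines.stream()
        .filter(engine -> engine.supports(view))
        .findFirst();
  }
}
```

With the JadeViewEngine from above registered, a view name like hello.jade would be routed to it because supports() matches the .jade extension.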

The processView() method is where all the magic happens. The purpose of this method is to render the view and write the result to the response stream. Everything you need for this is available from the ViewEngineContext. The code above uses the ServletContext to load the Jade template from the default view folder /WEB-INF/views. Jade4J provides simple static methods you can use to evaluate templates. All you need is the template and a model. After that you can write the resulting HTML page to the output stream.

Using the view engine

Now let's check if the view engine works as expected. First we will create a simple controller that uses the MVC 1.0 API:

@Path("/hello")
public class HelloController {

  @Inject
  private Models models;

  @GET
  @Controller
  public String controller() {
    models.put("name", "Christian");
    return "hello.jade";
  }

}

As you can see, there is nothing special about this controller. It looks like any other MVC 1.0 controller. The only difference is that it returns the view name hello.jade instead of something like hello.jsp. Now let's have a look at the corresponding view:

!!! 5
html
  head
    title Jade4J Demo
  body
    h1.
      Hello #{name}

I guess this looks very weird if you have never seen anything like Jade or Haml before. It is a simple page that renders an h1 element containing a greeting, using EL-like expressions to reference values from the model. If you want to learn more about Jade, have a look at the Jade Language Reference.

So you see, the Jade integration works fine. And all we had to create was a single class. Easy, isn't it? :)

Conclusion

I hope this example shows you how easy it is to integrate custom view technologies with MVC 1.0. Ozark, the reference implementation of MVC 1.0, already provides a number of additional view engine implementations for template engines like Thymeleaf, FreeMarker, Velocity, Handlebars and Mustache. All of them are available from Maven Central.

I hope you enjoyed this blog post. If you want to give MVC 1.0 a try, I recommend having a look at todo-mvc, a small sample application I created to demonstrate what a typical MVC 1.0 application looks like.

Have fun!

I'm very proud to announce that I will be speaking at JavaLand 2015 in about a month. I'm really looking forward to it as I wasn't able to participate last year and (according to the feedback I got from various sources) JavaLand is an awesome conference.

The title of my talk is BYOC - Bring Your Own Container. Basically, it is about the concept of shipping the container with your application, which allows you to create executable JARs that no longer need to be deployed to an application server. This provides an improved developer experience like the one you get when building your application with Spring Boot or Dropwizard.

I will show a small open source project I created called Backset, which is basically an implementation of this concept. The core idea is similar to projects like Spring Boot or Dropwizard, but with one important advantage: you build your app using well-known Java EE APIs like CDI, JPA, JSF, etc.

I'm also looking forward to the various other talks on the program.

So, what is your excuse for not joining us at JavaLand 2015? :)

There has recently been much discussion about the missing binary downloads of JBoss AS 7.1.2.Final and 7.1.3.Final. To keep a long story short, here are just the most important facts: the latest version of AS7 available on the download page is 7.1.1.Final, which was released about a year ago. But if you have a look at the AS7 GitHub repository, you will notice tags for 7.1.2.Final and 7.1.3.Final.

To be honest, I don't fully understand the reason for this. But that's the current situation. So if you want JBoss AS 7.1.3.Final, you have to build it yourself.

It seems like most people think that the process of compiling AS7 is difficult. But it is not. It is actually pretty straightforward and only takes about 10 minutes.

The first step is of course to download the sources. Fortunately, GitHub provides downloadable ZIP files for all tags.

$ wget https://github.com/jbossas/jboss-as/archive/7.1.3.Final.zip
$ unzip 7.1.3.Final.zip
$ cd jboss-as-7.1.3.Final/

Now you have to start the build by entering this command:

$ ./build.sh -DskipTests -Drelease=true install

This command requires some explanation. The build.sh file is a helper script that performs some additional checks to ensure you are using the correct Maven version. I recommend using -DskipTests to skip the test suite, which reduces the overall build time dramatically. You also have to set -Drelease=true to ensure the build creates the distribution archives.

The compilation took about 9 minutes on my box. After the build has completed, you will find the ZIP distributions in the dist/target directory:

$ ls -1 dist/target/*.zip
dist/target/jboss-as-7.1.3.Final-src.zip
dist/target/jboss-as-7.1.3.Final.zip

That's all. Simple, isn't it? So you see, there is no reason not to build AS7 yourself. :)

I love git! And therefore I also love github.com! I use GitHub very often to publish both small and large projects and share them with others. As I mostly use Maven to build my Java projects, I recently searched for an easy way to publish Maven artifacts via GitHub. I learned that it is in fact very easy! Interested? Read on! :-)

The basic idea of hosting Maven repositories on GitHub is to use GitHub Pages. This GitHub feature offers a simple but powerful way of creating and hosting web sites on their infrastructure. Fortunately, this is all we need to create Maven repositories. I'll explain the process by example, showing how I created a repository for jsf-maven-util, one of my recent spare-time projects.

The first step is to create a separate clone of your GitHub repository in a directory next to your primary local repository:

$ pwd
/home/ck/workspace/jsf-maven-util
$ cd ..
$ git clone git@github.com:chkal/jsf-maven-util.git jsf-maven-util-pages
$ cd jsf-maven-util-pages

The GitHub Pages web site must be created in a branch named gh-pages in your repository. So let's create this branch and empty it. Refer to the GitHub Pages manual if you are interested in the exact meaning of these commands.

$ git symbolic-ref HEAD refs/heads/gh-pages
$ rm .git/index
$ git clean -fdx

We will place the Maven repository in a subdirectory of this new branch:

$ mkdir repository

We also want a pretty directory listing. Unfortunately, GitHub Pages doesn't have native support for this, so we will create our own directory listing with a simple bash script.

Create a file named update-directory-index.sh in the root of the new branch (next to the repository directory). This script walks recursively through the repository directory and creates index.html files in each subdirectory. Please be careful when using this script, as it overwrites all existing index.html files it finds.

#!/bin/bash

for DIR in $(find ./repository -type d); do
  (
    echo -e "<html>\n<body>\n<h1>Directory listing</h1>\n<hr/>\n<pre>"
    ls -1pa "${DIR}" | grep -v "^\./$" | grep -v "^index\.html$" \
        | awk '{ printf "<a href=\"%s\">%s</a>\n",$1,$1 }'
    echo -e "</pre>\n</body>\n</html>"
  ) > "${DIR}/index.html"
done

Congratulations! Your repository is ready. Now you will have to modify the distributionManagement section of your pom.xml to let Maven deploy your artifacts to the new repository. Go back to your primary repository clone and edit your pom.xml:

<distributionManagement>
  <repository>
    <id>gh-pages</id>
    <url>file:///${basedir}/../jsf-maven-util-pages/repository/</url>
  </repository>
</distributionManagement>

Now you are ready to deploy your first artifact to the repository:

$ mvn -DperformRelease=true clean deploy

You will see that Maven copies the artifacts to your local checkout of the GitHub Pages branch. After Maven has finished you'll have to update the directory listings, commit the changes made to the repository and push them to GitHub:

$ cd ../jsf-maven-util-pages/
$ ./update-directory-index.sh
$ git add -A
$ git commit -m "Deployed my first artifact to GitHub"
$ git push origin gh-pages

Now let's check the result. Please note that the first publish may take some time to appear on the web server.

Looks great, doesn't it? :-)

If you want to use your repository in another project, just add the following repository entry to the pom.xml:

<repository>
  <id>jsf-maven-util-repo</id>
  <name>jsf-maven-util repository on GitHub</name>
  <url>http://chkal.github.com/jsf-maven-util/repository/</url>
</repository>

As you can see deploying Maven artifacts to GitHub is very simple. You can also use a similar approach to publish your Maven generated project site to GitHub. But that's a different story.... :-)

I was recently confronted with the task of displaying the version of a JSF project in its page title. As the version was already contained in the project's pom.xml and I didn't want to duplicate this information in another file, I searched for a simple way to display the Maven artifact's version in the JSF page.

As there was no easy way to do this, I created a small library for this use case and named it jsf-maven-util. The main idea is to supply a JSF managed bean that lazily checks for the pom.properties files of Maven artifacts on the classpath. These files are created during the Maven packaging process and are stored in the META-INF/maven/ directory of the output archive.
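The underlying lookup boils down to reading a properties file from the classpath. Here is a minimal sketch of the idea (class and method names are mine, not the actual API of jsf-maven-util):

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Minimal sketch: read the version from the pom.properties file that Maven
// writes into META-INF/maven/<groupId>/<artifactId>/ of each archive.
public class MavenVersionLookup {

  // key is "groupId:artifactId", e.g. "org.jboss.weld:weld-core"
  public static String getVersion(String key) {
    String[] parts = key.split(":", 2);
    String resource = "/META-INF/maven/" + parts[0] + "/" + parts[1] + "/pom.properties";
    try (InputStream in = MavenVersionLookup.class.getResourceAsStream(resource)) {
      if (in == null) {
        return null; // artifact not on the classpath or packaged without pom.properties
      }
      Properties props = new Properties();
      props.load(in);
      return props.getProperty("version");
    } catch (IOException e) {
      return null;
    }
  }
}
```

The real library additionally caches the results and exposes them through a map-backed managed bean so they can be referenced from EL expressions.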

The library is very easy to use. A bean named maven is automatically placed in the application scope of your webapp. It contains a map which you can use to get the version of an artifact by using the groupId and artifactId (colon-separated) as the key.

This example shows how to display the version of a web application in its page title.

<head>
  <title>
    My Application #{maven.version['com.example.myapp:myapp-webapp']}
  </title>
</head>

You can also display the version of any of your project's dependencies as long as it includes a pom.properties in its archive:

<p>
  powered by Weld #{maven.version['org.jboss.weld:weld-core']}  
</p>

If you are interested in using this feature in your own project, add the following repository to your pom.xml:

<repository>
  <id>jsf-maven-util-repo</id>
  <name>jsf-maven-util Repository</name>
  <url>http://chkal.github.com/jsf-maven-util/repository/</url>
</repository>

Then add the following dependency to your project:

<dependency>
  <groupId>de.chkal.jsf</groupId>
  <artifactId>jsf-maven-util</artifactId>
  <version>1.1</version>
</dependency>

I pushed the source to a GitHub repository. Let me know if you have any issues.

I recently looked for a way to integrate the Google Analytics tracking code into the project site of my current spare time project Criteria4JPA. I'm using the Maven Site Plugin to automatically build the project page because it makes the process of creating a site very easy.

After some time I realized that there seems to be no easy way to do this. Somebody on the maven-user list proposed creating a copy of the original site template and modifying it to include the necessary JavaScript in the page header. But I think that solution is much too complicated for such a simple job. I also found an existing JIRA issue describing the problem, but it is still unresolved.

But after some more searching I discovered a very simple and elegant way to get the Google Analytics tracking code into the Maven site. I found the hint on the doxia-dev mailing list: someone mentioned a mysterious <head> element that can be used in the site.xml descriptor. The JIRA issue DOXIA-150 seemed to prove the existence of this feature.

I tried it and it worked. See the site.xml file of Criteria4JPA for an example:

<?xml version="1.0" encoding="ISO-8859-1"?>
<project name="Criteria4JPA">

  <body>

    <head>
      <!-- Google Analytics - Start -->
      <script type="text/javascript">
      var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
      document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
      </script>
      <script type="text/javascript">
      try {
      var pageTracker = _gat._getTracker("UA-1234567-8");
      pageTracker._trackPageview();
      } catch(err) {}</script>      
      <!-- Google Analytics - End -->
    </head>

    <!-- more stuff -->

  </body>
</project>

As you can see, adding the tracking code is very easy. Just place a <head> element inside the body element and copy the Google Analytics code in there. The default JavaScript code you get from Google Analytics is already correctly escaped, so you can copy and paste it into the XML descriptor without problems.

I don't know for sure which versions of Maven, Doxia and the Site Plugin are required for this, but I can confirm that Maven 2.2.0 together with the Maven Site Plugin 2.0-beta-7 works.

Happy tracking... :-)

I recently tried to set up the Eclipse TPTP Profiler on my two Linux boxes (Ubuntu 7.04 "Feisty Fawn" and 8.10 "Intrepid Ibex"). I thought this would only require installing some plugins from the Ganymede update site, but I learned that the installation can be more complex on Linux systems.

The most problematic part of the TPTP Profiler installation was setting up the Agent Controller. The Agent Controller is a native binary, and as we all know, native code does not follow the philosophy "compile once, run everywhere"! :-)

After installing the features "TPTP Tracing and Profiling Tools Project" and "TPTP Profiling for Web applications" from the Ganymede update site, I restarted Eclipse and tried to profile a simple web application via "Profile as -> Profile on Server". Unfortunately this failed with:

[Error: FATAL ERROR: JPIAgent can't load ACCollector]

Basic troubleshooting procedure

To debug such problems I recommend trying to start the Agent Controller directly from the command line. This way you can easily find problems related to the Agent Controller. I decided to start the Agent Controller binary ACServer instead of the startup shell script ACStart.sh, because I couldn't figure out which of the two Eclipse uses to execute the Agent Controller.

To start the Agent Controller from the command line you must find the directory of the Agent Controller bundle. For my Ganymede installation the process looks like this:

$ cd plugins/org.eclipse.tptp.platform.ac.linux_*/agent_controller/bin
$ ./ACServer
./ACServer: error while loading shared libraries: libtptpUtils.so.4: cannot open shared object file: No such file or directory

Your first try will probably fail like in this example because of a missing shared library. The library libtptpUtils resides in the library directory of the Agent Controller. Eclipse takes care of setting the corresponding paths, but as we are starting the Agent Controller from the command line, we have to set the library path ourselves as a temporary fix for this test:

$ export LD_LIBRARY_PATH=../lib

If you are lucky, the Agent Controller now starts without any problems. But on my systems the execution failed for different reasons described in the following sections.

Broken symlinks

There seems to be a problem with symlink creation during the installation of the Agent Controller bundle. This bug showed up on Ubuntu 7.04 but not on Ubuntu 8.10.

$ ./ACServer
./ACServer: error while loading shared libraries: libtptpUtils.so.4: cannot open shared object file: File too short

On my system there was a regular file named libtptpUtils.so.4 which contained only the string libtptpUtils.so.4.5.0. Instead of creating symlinks, the installation process seemed to create regular files containing the name of the referenced file.

I fixed the problem by manually removing the broken files and creating the symlinks:

$ rm libtptpUtils.so libtptpUtils.so.4
$ ln -s libtptpUtils.so.4.5.0 libtptpUtils.so.4
$ ln -s libtptpUtils.so.4 libtptpUtils.so

$ rm libxerces-c.so libxerces-c.so.26
$ ln -s libxerces-c.so.26.0 libxerces-c.so.26
$ ln -s libxerces-c.so.26 libxerces-c.so

$ rm libxerces-depdom.so libxerces-depdom.so.26
$ ln -s libxerces-depdom.so.26.0 libxerces-depdom.so.26
$ ln -s libxerces-depdom.so.26 libxerces-depdom.so

$ rm libtransportSupport.so libtransportSupport.so.4
$ ln -s libtransportSupport.so.4.5.0 libtransportSupport.so.4
$ ln -s libtransportSupport.so.4 libtransportSupport.so

Surprisingly, this problem only showed up on my first try at getting Eclipse and the TPTP Profiler to work. I tried to reproduce it with a clean Eclipse installation while working on this blog post, but the second time all symlinks were created successfully. This makes me suspect that the bug is fixed in current TPTP releases.

Missing shared libraries

The Ubuntu 8.10 box showed another problem that didn't occur on my old Ubuntu 7.04 installation:

$ ./ACServer
./ACServer: error while loading shared libraries: libstdc++-libc6.2-2.so.3: cannot open shared object file: No such file or directory

The Agent Controller requires an old libstdc++ library that wasn't installed on Ubuntu 8.10. Fixing this problem depends on the Linux distribution and its version. On my Ubuntu 7.04 box the library was already installed via the "libstdc++2.10-glibc2.2" package. Ubuntu 8.10 no longer ships a package containing this file, so I fixed the problem by manually downloading the package from an older Ubuntu release:

$ cd /tmp
$ wget "http://de.archive.ubuntu.com/ubuntu/pool/universe/g/gcc-2.95/libstdc++2.10-glibc2.2_2.95.4-24_i386.deb"
$ sudo dpkg -i libstdc++2.10-glibc2.2_2.95.4-24_i386.deb

Depending on your system there might be other libraries missing. The following list contains all shared library dependencies of the Agent Controller binary:

$ ldd ACServer
   linux-gate.so.1 =>  (0xffffe000)
   libtptpUtils.so.4 => ../lib/libtptpUtils.so.4 (0xb7f72000)
   libtptpLogUtils.so.4 => ../lib/libtptpLogUtils.so.4 (0xb7f65000)
   libtptpConfig.so.4 => ../lib/libtptpConfig.so.4 (0xb7f4b000)
   libprocessControlUtil.so.4 => ../lib/libprocessControlUtil.so.4 (0xb7f46000)
   libxerces-c.so.26 => ../lib/libxerces-c.so.26 (0xb7b3f000)
   libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb7b12000)
   libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb79d1000)
   libdl.so.2 => /lib/tls/i686/cmov/libdl.so.2 (0xb79cd000)
   libuuid.so.1 => /lib/libuuid.so.1 (0xb79ca000)
   libstdc++-libc6.2-2.so.3 => /usr/lib/libstdc++-libc6.2-2.so.3 (0xb7982000)
   libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb795a000)
   /lib/ld-linux.so.2 (0xb7f95000)

If some of these libraries are missing on your system, you will have to find and install the corresponding packages. A good place to start searching is the Ubuntu Packages Search, the Debian Package Search, the Fedora Package Database or another distribution's package directory.

Missing TEMP environment variable

This problem was the easiest to solve. The Agent Controller complains about the missing TEMP environment variable:

$ ./ACServer
The TEMP environment variable does not point to a valid directory.
Agent Controller will not start.

As already hinted, this problem is easy to solve. Just put this line in your .bashrc file:

export TEMP=/tmp

It should be mentioned that I did not test whether Eclipse takes care of setting this variable for the Agent Controller process. But it certainly does no harm to set it manually.

Finally...

If you don't stumble across other problems the Agent Controller should now start normally. In this case you won't see any output on the console.

$ ./ACServer
<no console output>

Now you can stop the Agent Controller by hitting CTRL+C and retry profiling an application in Eclipse. On my system profiling in Eclipse now worked as expected.

Conclusion

The installation of the TPTP Profiler can be quite problematic on a Linux box. I have presented solutions for the different problems I ran into. This short blog post should give you an idea of the most common problems regarding the Agent Controller and how to solve them.