Getting Started With The Spark Framework on Plesk 12

In the world of web application development, especially rapid web application development, the names you usually hear are PHP, Ruby, Ruby on Rails, Python, and, more recently, Go. The name you usually don’t hear, one rarely associated with web development, is Java.

Java is usually associated with big iron servers in stock exchanges, intensive mathematical calculations, or frontend GUI applications that try to offer the same user interface experience across all platforms, though rarely quite managing it.

Java is considered “enterprise software”: software that is typically complicated to develop, set up, test, and deploy. Recently, however, I came across a Java-based framework, one inspired by Sinatra, which just may change that impression. It’s called Spark.

In today’s post, I’m going to step you through setting it up on a Plesk 12 server, and show you how to create and deploy a basic web application.

What Is Spark

To keep it simple, here’s what the website has to say:

Spark is a simple and lightweight Java web framework built for rapid development. Spark’s intention isn’t to compete with Sinatra, or the dozens of similar web frameworks in different languages, but to provide a pure Java alternative for developers that want to, or are required to, develop in Java.

Spark focuses on being as simple and straight-forward as possible, without the need for cumbersome (XML) configuration, to enable very fast web application development in pure Java with minimal effort. It’s a totally different paradigm when compared to the overuse of annotations for accomplishing pretty trivial stuff seen in other web frameworks, for example, JAX-RS implementations.

Plesk Web Hosting Setup and Database Creation

The first thing we need to do is set up a new domain or sub-domain. If you’re not familiar with how to do this, check out the quick guide here on the Conetix blog, which will step you through everything.

The guide makes reference to creating a database, but in this tutorial we don’t need to worry about that. It will ask you to supply a set of values as you step through; here’s a handy set to help you out.

Quick Note

Setting    Value
Domain     spark.conetix.com
Path       <default>

I’ll assume, for the rest of this tutorial, that you’re using the following values for your account; though please change them to suit your setup.

Setting     Value
username    www-data
hostname    spark.conetix.com
path        /home/www-data/spark.conetix.com

Installing Java 1.8

To get everything set up, you need to be running at least Java 1.8. If you’re not sure what version you’re running, from the command line, or terminal, run the command java -version. This should output something similar to the following:

```bash
java version "1.8.0_31"
Java(TM) SE Runtime Environment (build 1.8.0_31-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.31-b07, mixed mode)
```

If the installed version is less than 1.8, you need to install it, and the best way is via the package manager. Assuming that the base operating system is CentOS/Red Hat, you can install Java using the following command:

```bash
sudo yum install java-1.8.0-openjdk-devel.x86_64
```

After that’s done, you’ll also have to create a new environment variable, JAVA_HOME, as it’s required by Maven (which we’ll install next) and isn’t created automatically by the package manager. To do so, I suggest you update /etc/profile, so that the change is universal, adding the following at the bottom of the file:

```bash
export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.31-1.b13.el6_6.x86_64
```
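To confirm the variable will be picked up, you can set it in your current shell and echo it back (on a fresh login, /etc/profile does this for you; the JDK path below is the one from this tutorial, so adjust it to match your actual install):

```shell
# Set JAVA_HOME in the current shell and confirm it's visible.
# On a fresh login, /etc/profile performs this export automatically.
export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.31-1.b13.el6_6.x86_64
echo "$JAVA_HOME"
```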

Installing Maven

With Java installed, or upgraded to at least version 1.8, we next need to install Maven. If you’re not familiar with Maven, that’s OK; over the longer term, though, you’ll need to become familiar with it if you want to develop more than hello-world-style applications with Spark.

But what is Maven? Paraphrasing the Maven website slightly:

Maven was originally started as an attempt to simplify the build processes in the Jakarta Turbine project. There was a desire to have a standard way to build projects, a clear definition of what the project consisted of, an easy way to publish project information, and a way to share JARs across several projects. The result is a tool that can now be used for building and managing any Java-based project.

Maven does the following five tasks for any Java project:

  • Makes the build process easy
  • Provides a uniform build system
  • Provides quality project information
  • Provides guidelines for best practices development
  • Allows transparent migration to new features

If you’d like to become more familiar with Maven, there’s an excellent five-minute tutorial which will bring you up to speed on it. Otherwise, download Maven (the latest version at the time of writing is 3.2.5) and uncompress it. To save time, you can do this with the following command:

```bash
curl http://mirror.arcor-online.net/www.apache.org/maven/maven-3/3.2.5/binaries/apache-maven-3.2.5-bin.tar.gz | tar zxv
```

This will result in the directory apache-maven-3.2.5 being created. With that done, move the directory to /opt and symlink it as /opt/apache-maven. You can use the following command to save time:

```bash
sudo mv apache-maven-3.2.5 /opt/ && cd /opt && sudo ln -s apache-maven-3.2.5 apache-maven
```

Then you need to update your environment to include the apache-maven/bin directory in your path. If you’re using Unix or Linux, update your path similarly to the below (note that MAVEN_HOME points at the symlink we created in /opt):

```bash
MAVEN_HOME=/opt/apache-maven; export PATH="$PATH:$MAVEN_HOME/bin";
```

Now we need to test that Maven’s available; which we can do by running the command below:

```bash
mvn -version
```

This should give you output similar to that below (the exact paths, locale, and OS details will reflect your own machine):

```bash
Apache Maven 3.2.5 (12a6b3acb947671f09b81f49094c53f426d8cea1; 2014-12-14T18:29:23+01:00)
Maven home: /usr/local/apache-maven
Java version: 1.8.0_31
Java home: /System/Library/Java/JavaVirtualMachines/1.8.0.jdk/Contents/Home
Default locale: en_US, platform encoding: MacRoman
OS name: "mac os x", version: "10.10.2", arch: "x86_64", family: "mac"
```

If you see the output above, then we’re ready to get started creating the application.

Creating An Application

We now have the infrastructure we need ready to go, so it’s time to start creating an application. You can save time by cloning the SparkServletExample repository from Simon Rice, which will form the basis of the code referenced throughout the remainder of the tutorial. To do so, run git clone https://github.com/simonrice/SparkServletExample.git.

Where you clone it doesn’t really matter, so long as it’s somewhere logical; though I suggest your domain/sub-domain directory, which was created earlier when you set up the new sub-domain. I’ll assume that you’ve cloned it there. Given that, there’ll be a new directory there, called SparkServletExample.

Go in to that directory and have a look around. There’s not a lot in the repository that you need to be concerned about, save for two files. These are:

  • src/main/java/com/simonrice/sparkservletexample/HelloWorld.java
  • pom.xml

HelloWorld.java, which you can see below, follows the Sinatra style. It declares three routes, '/', '/hello', and '/hello/:name', passing a Route instance (an anonymous class here; in later versions of Spark this can be a lambda) whose handle method responds when the route is requested. The first route, '/', redirects to '/hello', which just outputs the string “Hello World!”.

The third route, '/hello/:name', takes a named parameter, ':name', whose value is available from the Request object. Assuming that the user requested '/hello/matthew', the route will output 'Hello, matthew!'.

```java
package com.simonrice.sparkservletexample;

import spark.Request;
import spark.Response;
import spark.Route;
import spark.Spark;
import spark.servlet.SparkApplication;

public class HelloWorld implements SparkApplication {
  @Override
  public void init() {
    Spark.get(new Route("/") {
      @Override
      public Object handle(Request request, Response response) {
        response.redirect("/hello");
        return null;
      }
    });

    Spark.get(new Route("/hello") {
      @Override
      public Object handle(Request request, Response response) {
        return "Hello World!";
      }
    });

    Spark.get(new Route("/hello/:name") {
      @Override
      public Object handle(Request request, Response response) {
        return String.format("Hello, %s!", request.params(":name"));
      }
    });
  }
}
```
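The handler bodies themselves are plain Java, so their logic can be exercised in isolation. Here’s a small, stand-alone sketch (the class name is hypothetical, purely for illustration) of the formatting the '/hello/:name' handler performs:

```java
public class HelloFormatDemo {
    public static void main(String[] args) {
        // Simulate what the '/hello/:name' handler returns when
        // request.params(":name") yields "matthew".
        String name = "matthew";
        String greeting = String.format("Hello, %s!", name);
        System.out.println(greeting); // prints "Hello, matthew!"
    }
}
```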

The pom.xml Configuration File

Now let’s look at pom.xml. This file is the Maven project configuration file, providing everything Maven needs to know to manage the project. Let’s look over the key sections. The initial configuration, down to the repositories section, sets out the name of the project and the type of package to build (a WAR).

Then we have the build information, which lists the plugins required to build and deploy the project. In our case, we have the maven-compiler-plugin, which is responsible for compiling the code; the Jetty plugin (under the org.mortbay.jetty groupId), which is responsible for running the project; and a packaging plugin (under the org.apache.maven.plugins groupId), which gets the project ready to run.

Finally, we have the dependencies. These are all the libraries which the code references. In this case there’s only one, Spark, which is pulled from the central Maven repository. So if you need to add further dependencies, search for the package there, then copy the dependency XML from the package page.

Here’s an example of searching for logging.
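For instance, if you wanted a simple logging backend, the block you’d copy into the dependencies section of pom.xml would look something like the following (slf4j-simple is just one option, and the version shown is illustrative, so check the repository for the current one):

```xml
<!-- Example only: a simple SLF4J logging backend.
     Copy blocks like this from the Maven repository search results into pom.xml. -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>1.7.7</version>
</dependency>
```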

Deploying The Application

With that done, we now need to launch the deployment process, which will compile the source files, package them up into a WAR file, and launch the Jetty server, which listens for requests on port 8080. If you’re not familiar with Jetty, it’s a Java HTTP (web) server and Java servlet container.

To get it started, run the following command in the root of the project directory:

```bash
mvn jetty:run
```

You’ll likely see a lot of output, similar to that below, which I’ve truncated for the sake of space.

```bash
[INFO] <<< jetty-maven-plugin:8.1.5.v20120716:run (default-cli) < test-compile @ sparkServletExample <<<
[INFO] --- jetty-maven-plugin:8.1.5.v20120716:run (default-cli) @ sparkServletExample ---
[INFO] Configuring Jetty for project: Sample Spark Servlet Webapp
[INFO] webAppSourceDirectory /Users/mattsetter/Workspace/settermjd/Java/Maven/SparkServletExample/src/main/webapp does not exist. Defaulting to /Users/mattsetter/Workspace/settermjd/Java/Maven/SparkServletExample/src/main/webapp
[INFO] Started Jetty Server
```

If that’s what you see, then Jetty is ready and listening for requests, so open up your browser to http://spark.conetix.com:8080 where you can test that everything is working.

Setting Up Apache as a Reverse Proxy

Assuming that everything went well, let’s now set up Apache as a reverse proxy to the Jetty server, so that requests can be made without specifying the port.

This will have Apache pass all requests to the Spark sub-domain through to our Spark application. After that, we’ll create an initialisation shell script to make sure that our Spark application is started when the server boots.

Thanks to the smooth Plesk 12 interface, configuring Apache as a reverse proxy is quite straightforward. Under the hosting settings for the sub-domain, click Show More at the bottom.

Then click the second option, in the middle column, labeled Web Server Settings, which you see in the screenshot below.

[Screenshot: Plesk 12 web server settings]

Then, when the page’s loaded, under Additional directives for HTTP, add the following configuration directives:

```apache
ProxyRequests Off

<Proxy *>
    Order deny,allow
    Allow from all
</Proxy>

ProxyPass / http://localhost:8080/
ProxyPreserveHost On
```

[Screenshot: HTTP settings for the proxy in Plesk 12]

Service Initialisation Script

To keep this as simple as possible, use a script I created, available in this Gist. This will allow for the server to be both started and stopped as needed. Save that file in /etc/init.d as spark-service.
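If you’d like a sense of the shape of such a script before grabbing the Gist, here’s a minimal sketch of what it might contain; the application, pid, and log paths are assumptions based on this tutorial’s layout, and the Gist version should be treated as authoritative:

```shell
#!/bin/sh
# spark-service: illustrative sketch of an init script for the Jetty-hosted app.
# chkconfig: 345 99 01
# description: Starts the Spark example application via mvn jetty:run

# Assumed locations; adjust to your clone path and preferred pid/log files.
APP_DIR=/home/www-data/spark.conetix.com/SparkServletExample
PID_FILE=/var/run/spark-service.pid
LOG_FILE=/var/log/spark-service.log

ACTION="${1:-usage}"
case "$ACTION" in
    start)
        cd "$APP_DIR" || exit 1
        # Run Jetty in the background, detached from the terminal.
        nohup mvn jetty:run > "$LOG_FILE" 2>&1 &
        echo $! > "$PID_FILE"
        echo "spark-service started"
        ;;
    stop)
        if [ -f "$PID_FILE" ]; then
            kill "$(cat "$PID_FILE")" && rm -f "$PID_FILE"
        fi
        echo "spark-service stopped"
        ;;
    *)
        echo "Usage: spark-service {start|stop}"
        ;;
esac
```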

We now need to enable the script to be called at the correct runlevel. To do so, run the following commands:

```bash
chkconfig --level 345 spark-service on
chkconfig --list | grep spark-service
```

This will set up the script as required and quickly validate that it was done successfully. You should see the following output:

```bash
spark-service 0:off 1:off 2:off 3:on 4:on 5:on 6:off
```

Wrapping Up

OK, that’s a little bit of work, at least for the initial setup. But when it’s complete, you’ll have an environment in which you can deploy a large number of Spark/Java applications quickly and reliably; one that is standardised, effective, and efficient, and supported and maintained by a large number of people. I hope you’ll consider using the Spark framework for your next application.

Matthew Setter, Conetix