Author Archives: MiNio

Prime Linux devop server

Install Ansible

sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible
sudo nano /etc/ansible/hosts

Install Git

mkdir prime
cd prime/
mkdir ansible
cd ansible/

Update Ansible to run against localhost

sudo nano /etc/ansible/hosts

Add as the last line:

localhost ansible_connection=local

Download and run Git role

ansible-galaxy install geerlingguy.git

Create Ansible script for Git

nano git.yml

---
- hosts: all
  roles:
    - geerlingguy.git

sudo ansible-playbook -i "localhost," -c local git.yml

 

Elastic kart results

From OLAP to elastic

I found elasticsearch a couple of months ago and decided to give it a try. I plan to use it as a NoSQL, schema-less document database with full text search for my old kart racing result data. Elasticsearch uses a REST API to index and store the JSON documents and will run as a service on my Linux box.
The racing result data comes from my old project for kart race administration and result presentation that I ran for about 10 years. Data was presented on the web for the last five years and contains about 60000 result records. The data is stored in a star-shaped OLAP database because I wanted to do some statistics on the data. Well, statistics were simple but searching was a nightmare. I also tried to use Lucene for free text searching, but at that time I didn't manage to get satisfying results, so now I will try to do it with elasticsearch.

The star-shaped OLAP database contains a central fact table with the actual results and several dimension tables with data like driver name, event name, result date etc.

OlapStar

Elasticsearch Installation

Linux packages and server setup are still a bit of a mystery to me, so this is for those of us who grew up in Windows land.
This didn't work for me:

wget -qO - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -

So I had to download the key file separately and then install elasticsearch:

 sudo wget -O es.key http://packages.elasticsearch.org/GPG-KEY-elasticsearch
 sudo apt-key add es.key
#Download ES from https://gist.github.com/wingdspur/2026107
 wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.0.1.deb
 sudo dpkg -i elasticsearch-1.0.1.deb
 sudo update-rc.d elasticsearch defaults 95 10
 #Starting Elasticsearch Server
 sudo /etc/init.d/elasticsearch start

You should also download and install some tools like elasticsearch-HQ or Marvel (installed on the server), because you need some tooling to test indexing and searching. I use the Sense plugin for Chrome, but it is now part of Marvel.
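Both are installed with the elasticsearch plugin tool. Something along these lines should work for a Debian package install (paths and plugin coordinates may differ between versions):

sudo /usr/share/elasticsearch/bin/plugin -install elasticsearch/marvel/latest
sudo /usr/share/elasticsearch/bin/plugin -install royrusso/elasticsearch-HQ
sudo /etc/init.d/elasticsearch restart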

Now it was time for take II

After setting up the elasticsearch server on my Linux box I started to experiment with indexing. There are several pages about setting up and trying elasticsearch, like Joel Abrahamsson's blog post Elasticsearch 101. After trying the 'hello world' examples I needed to get some real data into the index. I started with a nice SQL join where the star-shaped data was converted to 'flat' data without relations. The generated data was inserted into the index by a simple Perl script (well, simple is a relative concept when it comes to Perl) that read from an exported csv file, converted it to JSON, and sent it to the elasticsearch service via the REST API. I had some problems with the Perl JSON converter because it insisted on adding quotes around numbers, so elasticsearch interpreted them as strings and term filters didn't work. After that I decided to generate the JSON by hand. So far so good, but the problem was that I had no idea what a document is!
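To illustrate the number problem (field names are from my result data): when the values come out quoted they get mapped as strings instead of numbers, and term filters no longer match the way I expected.

{ "Position": "1", "BestLapTime": "66.216" }   <- mapped as strings
{ "Position": 1, "BestLapTime": 66.216 }       <- mapped as numbers, as intended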

What is a document anyway?

My first assumption, after 20 years in the SQL swamp, was that a record is a document:

 "SKCC 2";"Malmö AK";"2010-05-08 00:00:00";"Träning 1 JUNIOR 60";Training;10;1;"JUNIOR 60";"Mark Hansson";"Jönköping KC";22;998

So after inserting some 2000 records into the index I started to think about how to show the search result. After a while I realized that a record was not a document after all! Trying to relate the documents in a NoSQL database was wrong, and using sub documents also seemed unnatural, so a document must be something else. After a long midday walk a new structure emerged. By thinking in bigger aggregates, a document could be all the results from a race event, like:

{
 "EventName":"SKCC 2",
 "EventClub":"Malmö AK",
 "EventDate":"2010-05-08 00:00:00",
 "RaceName":"Slutresultat final",
 "ResultType":"Final sum",
 "ClassName":"JUNIOR 60",
 "sortorder":51,
 "result":[
 {"StartNumber":2,
 "DriverName":"Will Smith",
 "ClubName":"Göteborgs KRC",
 "Position":1,
 "BestLapTime":66.216},
 {"StartNumber":3,
 "DriverName":"Mad Max",
 "ClubName":"Göteborgs KRC",
 "Position":2,
 "BestLapTime":66.431}
 ]
 ….
 }

Now the problem was that all the flat data had to be merged into documents, in Perl, with arrays of hashes of arrays of hashes… well, you get the picture. After some night coding in Perl all documents were ready to be inserted into the index server and it was possible to search the documents again.
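The actual merging was done in Perl, but just to show the idea, here is a rough sketch of the grouping in Java (columns and field names are simplified examples, not the real script):

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Rough sketch of turning flat result rows into nested race documents.
public class DocumentGrouper {

    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        // Flat rows: event, class, race, driver, position
        String[][] flatRows = {
            {"SKCC 2", "JUNIOR 60", "Slutresultat final", "Will Smith", "1"},
            {"SKCC 2", "JUNIOR 60", "Slutresultat final", "Mad Max", "2"}
        };

        // One document per event + class + race combination
        Map<String, Map<String, Object>> documents = new LinkedHashMap<String, Map<String, Object>>();
        for (String[] row : flatRows) {
            String key = row[0] + "|" + row[1] + "|" + row[2];
            Map<String, Object> doc = documents.get(key);
            if (doc == null) {
                doc = new LinkedHashMap<String, Object>();
                doc.put("EventName", row[0]);
                doc.put("ClassName", row[1]);
                doc.put("RaceName", row[2]);
                doc.put("result", new ArrayList<Map<String, Object>>());
                documents.put(key, doc);
            }
            Map<String, Object> result = new LinkedHashMap<String, Object>();
            result.put("DriverName", row[3]);
            result.put("Position", Integer.valueOf(row[4])); // keep numbers as numbers!
            ((List<Map<String, Object>>) doc.get("result")).add(result);
        }
        // Each value is now one JSON document ready to be posted to the index
        System.out.println(documents.values());
    }
}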

Using the Sense plugin for testing, a query could look like this:

POST /kartresultat/raceresult/_search
{
    "query": {
                "query_string": {
                   "query": "andersson"
                   }
    },
    "highlight" : {
        "fields" : {
            "driverName" : {},
            "className" :{}
        }
    }
}

Resulting in:

"hits": {
 "total": 202,
 "max_score": 0.5282536,
 "hits": [
 {
 "_index": "kartresultat",
 "_type": "raceresult",
 "_id": "lGc3E2H-R0ayK6_VMtwWoA",
 "_score": 0.5282536,
 "_source": {
 "eventName": "SKCC Deltävling 4",
 "eventClub": "Uddevalla KK",
 "eventDate": "2009-06-07",
 "className": "KZ2",
 "races": [
 {
 "raceName": "Slutresultat final",
 "resultType": "Final sum",
 "sortOrder": 51,
 "className": "KZ2",
 "results": [
 {
 "startNumber": 196,
 "driverName": "Viktor Öberg",
 "driverId": "SWE_MTk5MzAyMTUwNTc5",
 "clubName": "Borås MK",
 "position": 1,
 "bestLapTime": 47.952
 },
.....

The web client

The first attempt is to apply a driver perspective where you search on a driver name plus class, club, event etc. and get a hit list. With the help of the highlight function in the elasticsearch API the full driver name is added to the result data and can be displayed together with the search result.

I will use a Java web application running on JBoss AS with JSF and RichFaces to build the web client. After trying to do the mapping from the JSON search result into a POJO by hand (why do I always start down that road? It's 2014 now) I found Google GSON and it works like a charm. Just add the dependency to the pom file:

<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>1.7.1</version>
</dependency>

And in code

Gson gson = new Gson();
EventResult raceResult = gson.fromJson(hit.getSourceAsString(), EventResult.class);
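For GSON to do its job the POJO field names have to line up with the JSON keys in the indexed documents (the ones shown in the search result above). The real EventResult class is not shown here, but a minimal sketch could look like this; the id and highlight-hit handling used later are left out:

import java.util.List;

// Minimal sketch of a result document POJO for GSON mapping.
// Field names match the JSON keys of the indexed documents.
public class EventResult {
    private String eventName;
    private String eventClub;
    private String eventDate;
    private String className;
    private List<Race> races;

    public static class Race {
        private String raceName;
        private String resultType;
        private int sortOrder;
        private String className;
        private List<Result> results;
    }

    public static class Result {
        private int startNumber;
        private String driverName;
        private String driverId;
        private String clubName;
        private int position;
        private double bestLapTime;
    }
}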

Java client API

The elasticsearch project provides a Java API; the dependency is added to the pom file:

<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>1.0.1</version>
</dependency>

The connection from the application to the elasticsearch server was implemented as an application scoped bean. The bean is injected into the search beans and handles client creation and closing. Note that the Java client connects to port 9300, not to port 9200 that the web based clients use.

Client provider bean

@ApplicationScoped
public class ElasticSearchClient {
 
   private Client client;
 
   public Client getClient(){
      return client;
   }
 
   @PostConstruct
   public void init() {
      // The cluster name must match the cluster.name setting on the server
      Settings s = ImmutableSettings.settingsBuilder().put("cluster.name", "elasticsearch").build();
      // The transport client talks to the native transport port 9300, not the REST port 9200
      TransportClient tmp = new TransportClient(s);
      tmp.addTransportAddress(new InetSocketTransportAddress("ubuntu-01", 9300));
      client = tmp;
   }
 
   @PreDestroy
   public void destroy() {
      client.close();
      client = null;
   }
}

Searching

A QueryBuilder is used to set up the search query.

   public List<EventResult> findRaceResults(String searchString) {
      Gson gson = new Gson();
      List<EventResult> raceResults = new ArrayList<EventResult>();
      try {
         QueryBuilder queryBuilder = QueryBuilders.queryString(searchString).field("_all");
 
         // elasticSearchClient is the injected application scoped client provider bean
         SearchRequestBuilder searchRequestBuilder = elasticSearchClient.getClient().prepareSearch(INDEX_NAME);
         searchRequestBuilder.setTypes(RACE_TYPE_NAME);
         searchRequestBuilder.setSearchType(SearchType.DEFAULT);
         searchRequestBuilder.setQuery(queryBuilder);
         searchRequestBuilder.setFrom(0).setSize(20).setExplain(true);
         searchRequestBuilder.addSort("_score", SortOrder.DESC);
         searchRequestBuilder.addHighlightedField("driverName").addHighlightedField("className").addHighlightedField("clubName");
 
         SearchResponse response = searchRequestBuilder.execute().actionGet();
 
         if (response != null) {
            int documentCount = 0;
            for (SearchHit hit : response.getHits()) {
               EventResult raceResult = gson.fromJson(hit.getSourceAsString(), EventResult.class);
               raceResult.setId(hit.getId());
               raceResults.add(raceResult);
 
               documentCount++;
 
               for (String fieldName : hit.getHighlightFields().keySet()) {
                  HighlightField highlightField = hit.getHighlightFields().get(fieldName);
 
                  for (Text hitText : highlightField.getFragments()) {
                     raceResult.addHit(highlightField.getName(), hitText.string());
 
                  }
               }
            }
            log.info("Hits" + documentCount);
            return raceResults;
         }
 
      } catch (IndexMissingException ex){
         log.severe(ex.getMessage());
      }
      return null;
   }

Presentation

Search page

The search result page shows the hits with the highest score together with the highlight information.

Search

 

Document page

Result page


Wrap up

It has been very interesting to work with the elasticsearch server, and it was far easier than when I used Lucene on its own. The next project will be an attempt to make an ASP.NET web client.

Kafka Sharp

The beginning

It all started with Fred George's µService Architecture presentation at Öredev (http://vimeo.com/79866979)

Looking at the presentation gave me one of those moments when I think: this is COOL, where can I use this concept? It has to be an application that produces data continuously. Why not use it for my home data collection and control system? Well, it's not 250k messages per second, but it is a steady stream of data.

So now I have an application to test the concept on. I headed down to the cellar to find a box to run Linux on, because the Kafka server runs best on Linux. As Linux is my second operating system I had to struggle a bit to get the Kafka service to actually run as a service together with the ZooKeeper service (the configuration service). I decided to go for the latest version at the time, 0.8, and that gave me some more work later on, but I will get back to that. When everything is up and running you can start a couple of PuTTY terminals and run commands to set up a topic, produce data and consume it:

kafka-create-topic.sh --zookeeper localhost:2181 --replica 1 --partition 1 --topic kafkatopic
kafka-console-consumer.sh --zookeeper localhost:2181 --topic kafkatopic --from-beginning

Then in another terminal:

kafka-console-producer.sh --broker-list localhost:9092 --topic kafkatopic

A C# client please

Well, this is exciting for a while, but there was a project to do. To connect the data logger, running as a service written in C#, we need a C# client to connect to the bus. There is a Java client implementation bundled with the Kafka packages and there are clients in several languages here

But there is no C# client! I found one on GitHub but it's written for version 0.7. No problem, I started with that code (thanks to that programmer!) and modified it to use the Kafka 0.8 protocol. Well, the changes between 0.7 and 0.8 were not minor! There is a good description of the protocol at the Apache Kafka wiki. As the work went on I have done a lot of refactoring and large parts of the code are new.

After getting all the code in place I managed to connect to the bus on the Linux box and both produce and consume messages from my C# client, but I hadn't found any way to create a new topic like kafka-create-topic.sh does. There is a configuration option on the broker to make it auto create topics, but that didn't work. OK, I had the Java client running, so why not start up the old reliable Wireshark? After some digging in TCP conversations I found out that the Java client made two requests for metadata on the topic before it produced on it. The first request returned an error saying that there is no leader broker for this topic, but according to the server log it also triggered topic creation on the leader broker. So the next request tells which leader and which partition the topic can be found on. After changing the producer to do that dance at startup, it works just fine.
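For reference, the metadata lookup my producer now mimics looks roughly like this with the bundled 0.8 Java API (host, port and topic are just examples; this is a sketch, not the C# code):

import java.util.Collections;
import java.util.List;

import kafka.javaapi.PartitionMetadata;
import kafka.javaapi.TopicMetadata;
import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.TopicMetadataResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class LeaderLookup {
    public static void main(String[] args) {
        // Ask any known broker for metadata on the topic; on a fresh topic the first
        // answer reports "no leader" but triggers auto creation on the broker.
        SimpleConsumer consumer = new SimpleConsumer("192.168.0.105", 9092, 100000, 64 * 1024, "leaderLookup");
        try {
            TopicMetadataRequest request = new TopicMetadataRequest(Collections.singletonList("OneWireSensor"));
            TopicMetadataResponse response = consumer.send(request);

            List<TopicMetadata> metadata = response.topicsMetadata();
            for (TopicMetadata topic : metadata) {
                for (PartitionMetadata partition : topic.partitionsMetadata()) {
                    // The leader broker is the one the producer must send to for this partition
                    System.out.println("Partition " + partition.partitionId() + " leader: " + partition.leader());
                }
            }
        } finally {
            consumer.close();
        }
    }
}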

First µ-service

At last it was time for a µservice! I decided to pick up data from sensors on the 1-Wire bus and put it on the Kafka bus. Digging out the 1-Wire bus access code from the old logger (glad I made nicely structured code there) and whipping up some code to connect it to the Kafka bus resulted in less than 100 lines of code!

private const string TopicName = "OneWireSensor";
static readonly JavaScriptSerializer JavaScriptSerializer = new JavaScriptSerializer();
static readonly KafkaBusConnector BusConnector = new KafkaBusConnector("192.168.0.105", 9092, "KafkaConsole");
private const int UsbPortNumber = 1;
 
static void HandleSensorData(KafkaOneWireData data)
{
   var message = JavaScriptSerializer.Serialize(data);
   BusConnector.Produce(TopicName, -1, message);
}
 
static void Main(string[] args)
{
   var reader = new OwReader("{DS9490}", "USB" + UsbPortNumber);
   reader.Start(HandleSensorData);
   while (!Console.KeyAvailable)
   {
      Thread.Sleep(100);
   }
   reader.Close();
}

The future

My plan is to build a new logger and control ecosystem based on a couple of µ-services that each handle a task like:

  • Read sensor data
  • Process data like averaging and filtering
  • React on a value change and send out a command on the bus
  • React to a command and turn on the floor heating
  • Collect data from the bus and store it in the RRD.
  • Etc.

 

To be continued….


CI on PI

Do you, like me, think that no home is complete without a CI server and a deployment pipeline, but don't want yet another computer running 24/7? I guess so. Well, there is a solution for this problem: run it all on a Raspberry Pi. I just had to test the concept, and added a 250 GB hard disk to the Pi to make space for everything. The Linux OS is completely new to me, so this is not an installation guide. Almost every command has been issued twice, first without sudo and then with sudo.

Setup

Java

I started by installing Java. There are several guides on the web and by picking a hint here and a trick there I managed to execute the magic java -version command to get:

java version "1.7.0_40"
Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
Java HotSpot(TM) Client VM (build 24.0-b56, mixed mode)

I guess it became a bit messy because I insisted on keeping Java (and the other packages) on the hard disk, so I had to set up numerous links.

Tomcat 7

So with Java on board it was time for the application server. Installing Tomcat 7 was fairly easy but it needed some more links to keep the installation on the hard disk. After installing the Tomcat admin module it was time to fire up the browser to verify the installation. Success!

sudo apt-get install tomcat7
sudo apt-get install tomcat7-admin

Don't forget to adjust the PermSize; Jenkins will run out of memory with the standard setting. As stated in /usr/share/tomcat7/bin/catalina.sh, don't change that file. Instead create or edit /var/lib/tomcat7/bin/setenv.sh

Set JAVA_OPTS="-XX:PermSize=128m -XX:MaxPermSize=128m" and restart Tomcat.
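The setenv.sh only needs a single line (values as above); catalina.sh sources the file if it exists:

# /var/lib/tomcat7/bin/setenv.sh
JAVA_OPTS="-XX:PermSize=128m -XX:MaxPermSize=128m"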

sudo service tomcat7 restart 
tail -f /var/log/tomcat7/catalina.out

Nexus

What else does a CI server need? A repository manager of course! So download the Sonatype Nexus war file and deploy it to the server by copying the war file to /var/lib/tomcat7/webapps, roughly as shown below. Now the speed of the Pi, or lack thereof, becomes clear. It takes forever to start the Nexus application, but after about 30 minutes (yes, minutes) it appears. Initially my intention was to use the repository as a proxy for everything, but it will only be used as a local central repository for my own deployments.
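Something like this (the name you give the war file becomes the context path, which is why the repository URLs below contain /nexus-2.7.0-06/; your file name and version may of course differ):

wget http://www.sonatype.org/downloads/nexus-latest.war
sudo cp nexus-latest.war /var/lib/tomcat7/webapps/nexus-2.7.0-06.war
sudo service tomcat7 restart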

Maven

After installing Maven it was time to configure the project pom file. I struggled for a while with the settings and found most of the Maven documentation confusing, until I found the page Configure Maven to Deploy to Nexus. It was actually that simple!

Add this to the pom file:

<distributionManagement>
  <repository>
     <id>deployment</id>
     <name>Internal Releases</name>
     <url>http://pici:8080/nexus-2.7.0-06/content/repositories/releases/</url>
  </repository>
  <snapshotRepository>
     <id>deployment</id>
     <name>Internal Releases</name>
     <url>http://pici:8080/nexus-2.7.0-06/content/repositories/snapshots/</url>
  </snapshotRepository>
</distributionManagement>

This has to be added to Maven's settings.xml

<servers>
  <server>
    <id>deployment</id>
    <username>deployment</username>
    <password>deployment123</password>
  </server>
</servers>

The important part is that the id tag must be the same in both files.

With Maven and Nexus in place it was time for a gentle mvn clean. It works! Let's try mvn clean deploy.

Uploading: http://pici:8080/nexus-2.7.0-06/.....snapshots/se/minidev/savings/Savings/3.1-SNAPSHOT/Savings-3.1-20140115.224942-3.war
Uploaded: http://pici:8080/nexus-2.7.0-06/.....snapshots/se/minidev/savings/Savings/3.1-SNAPSHOT/Savings-3.1-20140115.224942-3.war (14887 KB at 809.3 KB/sec)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS

YES, the module is safely stored in the repository!

Jenkins

Time for Mr. Jenkins. Just download the war file from jenkins-ci.org and drop it in the webapps folder. Restart the server, wait half an hour for Nexus to start up, and then call Jenkins at http://pici:8080/jenkins/

Setting up the first job is quite simple and after a while it's all up and running. I got a strange error when I had a space in the job name: Jenkins failed to write to the /target folder.

Blue build!
jenkins

Next step

Now when everything runs, slow but lean, I just have to set up more jobs for release builds and deployment to my JBoss AS running on my Windows 2008 server. The deployment pipeline will be the next project, but I have to read up on that a bit more.

Setting up the Pi was an interesting experience, very different from my previous work on Windows servers, but nevertheless rewarding. If it's possible to run a CI / deployment pipeline on about 10 W, that's really nice. The next setup will use a 32 GB high speed SD card instead of an external hard disk, and eventually Nexus and Jenkins should run on separate Raspberry Pis.

The inspiration

Great thanks to Mattias Nyrén and Johan Rydström from Diversify for their DEV + OPS = Fun! presentation at the Diversify Competence Conference in Barcelona last year, which inspired me to try this.

Links

Setup Tomcat
https://www.mulesoft.com/apache-tomcat-linux-installation-and-set
Maven
http://maven.apache.org/download.cgi
Nexus
wget http://www.sonatype.org/downloads/nexus-latest.war

 

The House project pt1

Early days

Collecting and storing

Collecting data has always been interesting, and there have been several projects for collecting temperatures. It started with an AS3145KT, a small PIC board that read four DS1820 sensors and sent the temperature readings on a serial port (yes, it was the mid nineties). Indoor, outdoor, brine (heat exchange pump) and hot water out temperatures were monitored. The data was collected by a service running on a Windows NT 4 computer and stored in an Access database. The Access db was not that optimal, because after about three weeks the database file had grown to about three megabytes and the service came to a grinding halt, spending all its time trying to update the database. It was solved by copying the file and emptying the database, but it was a real mess to merge the data together afterwards. Then MySQL, a real database, appeared and after that the data size was not a problem.

Presentation

Now the presentation was a challenge. Wading through temperature readings at a 10 second resolution isn't nice! So some data consolidation / compression was needed. Fortunately I discovered RRDtool in time, before I managed to do something on my own: an excellent package providing both data consolidation and presentation. BUT it was written in old faithful C. Well, it was possible to wrap the program or start it as a process from inside the logger service, but that was too messy.

RRD From C to Java to C# (and back)

After some digging the RRD4J project surfaced, but Java was not my first choice. At that time I was a die-hard C++ / C# programmer in the MSDN trenches. Actually I was forced into Java (Java EJB3 on JBoss to be exact) at my assignment, so the obvious solution was to port RRD4J to C#! Thus RRD4N (.NET) was born and a new logger design started. The port was not optimal and I managed to introduce a nasty bug that showed up years later. I also made an effort to separate the data storage from the presentation, as we were told to do in 2005, but that made the implementation overcomplicated. Eventually I switched back to Java, but that's another post.

The old logger service was a simple piece of software that just listened on the serial port and wrote the data to the database every 10 seconds. The fine resolution was needed for the heater's hot water return but a waste when it comes to outdoor temperature. It was time for a bit more sophisticated logger. This time it had to be in C#, because it was 2008 and C# was still a preferred language.

To be continued…..


How to show the current version in a JBoss application

The problem

I want the current release version in the page footer of my JBoss/JSF application. After searching for a simple solution for a while I came up with something. I'm using Maven to build the release version and Subversion is used as the version control system. Maven has plugins for writing the pom version and svn revision number into the manifest file that ends up in the war archive. The problem was to read that manifest file. The manifest file can be loaded via the ServletContext, but I didn't find a way to get hold of the context. I'm still convinced that there is a simpler solution, but as this is (another) hobby project it's good enough.

Edit:
I found a much simpler solution to the problem of getting hold of the servlet context, thanks to this blog: http://www.mkyong.com/jsf2/how-to-get-servletcontext-in-jsf-2/

It's not that obvious and a lot of 'dot chaining', but I can get rid of the listener code:

ServletContext servletContext = (ServletContext) FacesContext.getCurrentInstance().getExternalContext().getContext();
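With that, the listener below becomes unnecessary; a minimal sketch of a version bean using this approach could look like the following (class and property names are my own, not from the original code):

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

import javax.enterprise.context.ApplicationScoped;
import javax.faces.context.FacesContext;
import javax.inject.Named;
import javax.servlet.ServletContext;

// Minimal sketch: read the version info from MANIFEST.MF via the ServletContext
// obtained from the FacesContext, no listener or web.xml entry needed.
@Named("versionBean")
@ApplicationScoped
public class VersionBean {

    public String getRevision() {
        ServletContext servletContext = (ServletContext) FacesContext
                .getCurrentInstance().getExternalContext().getContext();
        try {
            InputStream manifest = servletContext.getResourceAsStream("/META-INF/MANIFEST.MF");
            if (manifest == null) {
                return "Unknown";
            }
            Properties prop = new Properties();
            prop.load(manifest);
            String version = prop.getProperty("Implementation-Version");
            String revision = prop.getProperty("revision");
            return revision != null ? version + " Revision:" + revision : version;
        } catch (IOException e) {
            return "Unknown";
        }
    }
}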

The solution

First the following plugins have to be added or updated in the pom file, in the build section, to generate the revision:

<plugin>
	<groupId>org.codehaus.mojo</groupId>
	<artifactId>buildnumber-maven-plugin</artifactId>
	<version>1.2</version>
	<executions>
		<execution>
			<phase>validate</phase>
			<goals>
				<goal>create</goal>
			</goals>
		</execution>
	</executions>
	<configuration>
		<doCheck>false</doCheck>
		<doUpdate>false</doUpdate>
		<useLastCommittedRevision>true</useLastCommittedRevision>
	</configuration>
</plugin>
<plugin>
	<artifactId>maven-war-plugin</artifactId>
	<version>${version.war.plugin}</version>
	<configuration>
		<!-- Java EE 6 doesn't require web.xml, Maven needs to catch up! -->
		<failOnMissingWebXml>false</failOnMissingWebXml>
		<archive>
			<manifest>
				<addDefaultImplementationEntries>true</addDefaultImplementationEntries>
				<addDefaultSpecificationEntries>true</addDefaultSpecificationEntries>
			</manifest>
			<manifestEntries>
				<Dependencies>com.google.guava,org.slf4j</Dependencies>
				<revision>${buildNumber}</revision>
			</manifestEntries>
		</archive>
 
	</configuration>
</plugin>

Be sure to set the useLastCommittedRevision to true.

Running mvn package will generate a war file containing a MANIFEST.MF file in the META-INF folder. The manifest file will contain the following version information:

revision: 12
Implementation-Version: 0.0.1-SNAPSHOT (if addDefaultImplementationEntries = true)
Specification-Version: 0.0.1-SNAPSHOT (if addDefaultSpecificationEntries=true)

Get hold of the ServletContext

With the information in the manifest file we just have to read it and show it on the page, and here it becomes a bit tricky. When running Spring on a Tomcat server I can get the ServletContext injected into my bean, but I didn't find out how to do it on JBoss with CDI. The only way to get hold of the ServletContext was to set up a ServletContextListener and read the manifest file at application startup.

@ApplicationScoped
public class ServletContextProvider implements ServletContextListener {
	@Inject
	private Logger log;
 
	private static String version = "Unknown";
 
	@Produces
	@Named("revision")
	public String revision() {
		return version;
	}
 
	@Override
	public void contextInitialized(ServletContextEvent contextEvent) {
		ServletContext context = contextEvent.getServletContext();
		Properties prop = new Properties();
		try {
			prop.load(context.getResourceAsStream("/META-INF/MANIFEST.MF"));
			version = prop.getProperty("Implementation-Version");
			String revision = prop.getProperty("revision");
			if (revision.length() > 1)
				version += " Revision:" + revision;
		} catch (IOException e) {
			log.warning("Fail to read manifest file on ServletContext:"
					+ e.getMessage());
 
		}
	}
 
	@Override
	public void contextDestroyed(ServletContextEvent contextEvent) {
		log.info("Context Destroyed");
		version = "Unknown";
	}
}

This listener has to be configured in the web.xml file (not needed with the new solution):

	<listener>
		<listener-class>se.minidev.rrdserver.util.ServletContextProvider</listener-class>
	</listener>

And at last, show it in the page! I put it into the footer in the page template:
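With the @Named("revision") producer above, the footer only needs to reference the produced value, something along these lines:

<div class="footer">Version: #{revision}</div>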