Chapter 11. Development

Table of Contents

hvif2png Rendering Tool
Run from Apache Maven
Database Setup from Blank State
Building Locally On-Host and Run
Automated Testing
Integration Testing
Docker Image
Testing the Docker Image

This section covers how to set up the system for development purposes. First review the prerequisites.

hvif2png Rendering Tool

Software on the Haiku operating system uses the HVIF file format for representing icons. This format is vector-based rather than a bitmap representation. To render icons clearly at any size, Haiku Depot Server uses an external tool to render the HVIF data into PNG images for display in the web browser. The Haiku source code includes a tool called "hvif2png" that can be used for this purpose.

Work through building the Haiku operating system from source and then build the hvif2png tool using the command jam -q "<build>hvif2png". Note that you may need to install the "libpng" development libraries for this to build and yield the hvif2png tool as a build artifact. On a Debian-based host this can be achieved with "sudo apt-get install libpng-dev".

The build product in this case will be a binary for the build host.

To install this on a development or deployment host, you will need to generate a tar-ball that can be installed onto a Linux environment such as Debian. The script that yields the tar-ball is included in the Haiku Depot Server source at;

  • support/buildhvif2pngtarball.sh

The only argument to this script is the "generated" directory where the Haiku operating system build products are situated. It will yield a tar-ball in the "tmp" directory within the generated directory. The tar-ball can be unpacked anywhere on the target deployment or development host.
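For illustration, the following sketch shows the script being run and the resulting tar-ball being unpacked; the paths and the tar-ball name are assumptions and will vary with your Haiku checkout and build.

support/buildhvif2pngtarball.sh /path/to/haiku/generated
# the tar-ball name below is illustrative only
tar -xf /path/to/haiku/generated/tmp/hvif2png-*.tgz -C /opt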

You can then configure Haiku Depot Server to use the deployed hvif2png tool; see the configuration chapter for details.

Run from Apache Maven

The project consists of a number of modules. "haikudepotserver-webapp" is the application server module. You should configure a development configuration file at the following location relative to the top-level of the source; "haikudepotserver-webapp/src/main/resources/local.properties". See the configuration chapter for details on the format and keys for this configuration file.

To prevent the HDS application server from failing its health check on connectivity to a mail service, you may wish to set the configuration property "management.health.mail.enabled" to "false".
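As a sketch, a minimal "local.properties" for development might look like the following; the database-related keys follow Spring Boot conventions and the values are assumptions, so consult the configuration chapter for the authoritative list of keys.

spring.datasource.url=jdbc:postgresql://localhost:5432/haikudepotserver
spring.datasource.username=haikudepotserver
spring.datasource.password=haikudepotserver
management.health.mail.enabled=false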

To start up the application server for development purposes, issue the following commands from the top level of the project;

./mvnw clean package
./mvnw \
-Dfile.encoding=UTF-8 \
-Duser.timezone=GMT0 \
-Djava.awt.headless=true \
spring-boot:run

This may take some time to start up, especially the first time. Once it has started, it should be possible to connect to the application server using the following URL; "http://localhost:8080/". It is also possible to access the actuator endpoint at "http://localhost:8081/actuator".
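A quick way to check that the server has come up is to query the actuator from another shell; this assumes the standard Spring Boot health endpoint is exposed.

curl http://localhost:8081/actuator/health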

There won't be any repositories or data loaded, and because of this, it is not possible to view any data. See the section on setting up repositories for details on loading up some data to view.

Database Setup from Blank State

Assuming a blank Postgres installation with super-user access, create the role and database;

CREATE ROLE haikudepotserver WITH PASSWORD 'haikudepotserver' LOGIN;
CREATE DATABASE haikudepotserver OWNER haikudepotserver;
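These statements can be run using the standard "psql" client as the Postgres super-user; the host and super-user name here are assumptions.

psql -h localhost -U postgres -c "CREATE ROLE haikudepotserver WITH PASSWORD 'haikudepotserver' LOGIN;"
psql -h localhost -U postgres -c "CREATE DATABASE haikudepotserver OWNER haikudepotserver;"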
            

Building Locally On-Host and Run

The build process uses the Apache Maven build tool. The application comes with a script "mvnw" at the top level of the source which will download and use a specific version of Apache Maven.

From source code, you can obtain a clean build by issuing the following command from the UNIX shell; "./mvnw clean package".

This will produce build artifacts corresponding to the current state of the source code. Note that the first build may take some time because the process will need to download various dependencies from the internet.

You will need to set up configuration if you have not done so already. To launch the binary with 256 megabytes of heap memory, issue a command similar to the following;

${JAVA_HOME}/bin/java \
-Xmx256m \
-Dfile.encoding=UTF-8 \
-Duser.timezone=GMT0 \
-Djava.awt.headless=true \
-Dconfig.properties=file:///etc/haikudepotserver/config.properties \
-jar /opt/haikudepotserver/haikudepotserver-webapp-1.2.3.jar

Automated Testing

The build system has a number of automated tests. To skip automated testing, use the "-DskipTests" flag on the "./mvnw" command.
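For example, to build the project while skipping the automated tests;

./mvnw clean package -DskipTests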

Integration Testing

The module "haikudepotserver-core-test" contains the automated integration tests for the "haikudepotserver-core" module. It also provides (as a build artifact) a set of resources that can be used for testing other modules such as "haikudepotserver-webapp".

Most of the tests in the project so far are integration tests. This is because the ORM data objects are not POJOs and so they require a backing database of some description. This is a facet of the ORM technology and makes it difficult to create focused unit tests.

The integration tests can be run with the following Maven command; "./mvnw clean verify". These tests will execute against an actual Postgres database. The Postgres database is started by the test logic using the Testcontainers system, so a Docker daemon needs to be available on the build host.
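To run only the tests of a single module, the standard Maven module-selection flags can be used; this is a sketch in which "-pl" selects the module and "-am" also builds the modules it depends upon.

./mvnw -pl haikudepotserver-core-test -am clean verify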

Docker Image

The source can again be checked out at the correct tag; this provides a Dockerfile ready to go. The Docker tooling is then used to create the image. This will use the files in the HDS source as well as resources from the internet to assemble a Docker image that can be run in Docker on a suitable virtual environment. From the support/deployment directory;

docker build --tag haikudepotserver:x.y.z .
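The freshly built image can then be listed to confirm that it exists;

docker image ls haikudepotserver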

Testing the Docker Image

Testing the local Docker image before using it requires that a suitable database server is configured. If the database is not available to the test Docker container, it is possible to temporarily port-forward from the internal database to the external address so that the Docker container can access the service. For example;

ssh -L 192.1.1.1:5432:localhost:5432 user@192.1.1.1

A shared volume is used to convey runtime secrets to the Docker container. The volume contains a well-known file called "hds_secrets" which HDS reads as it starts up. First create the secrets volume;

docker volume create secrets

You can either edit the "hds_secrets" file on the host machine or edit it from a container that is running a shell. An example of the file can be found in the deployment sources of the HDS project. To edit it using a shell on a running container;

docker run -it -v secrets:/secrets <image-id> /bin/bash

The file to edit is /secrets/hds_secrets and, as the image has no editor, it may be necessary to write the file with the basic "cat" tool.

cat - > /secrets/hds_secrets

Use CTRL-D to terminate and write the output file. Example content can be found under ".../support/deployment/hds_secrets".

Now run the container;

docker run -v secrets:/secrets -p 8080:8080 -p 8081:8081 <image-id>

It should now be possible to access the running HDS system from a browser on the host system using "http://localhost:8080". It should also be possible to view the actuator information at "http://localhost:8081/actuator".
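As with the local development setup, a simple check can also be made from the host with "curl"; this assumes the standard Spring Boot health endpoint is exposed on the actuator port.

curl http://localhost:8080/
curl http://localhost:8081/actuator/health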