Practical DevOps for Big Data/Maritime Operations

Use Case Description
Posidonia Operations is a highly customizable Integrated Port Operation Management System that allows a port to optimize the maritime operational activities related to the flow of vessels in its service area, integrating all the relevant stakeholders and computer systems. In technical terms, Posidonia Operations is a real-time, data-intensive platform able to connect to AIS (Automatic Identification System), VTS (Vessel Traffic System) or radar feeds and automatically detect vessel operational events such as port arrival, berthing, unberthing, bunkering operations, tugging, etc. Posidonia Operations is a commercial software solution that currently tracks maritime traffic in Spain, Italy, Portugal, Morocco and Tunisia, thus providing service to different port authorities and terminals. The goals of this case study are adopting a more structured development policy (DevOps), reducing development and deployment costs, and improving the quality of our software development process.

In the use case, the following scenarios are considered: deploying Posidonia Operations on the cloud under different parameters, supporting different vessel traffic intensities, adding new business rules (which increase CPU demand), and running simulation scenarios to evaluate performance and quality metrics.

Business goals
Three main business goals have been identified for the Posidonia Operations use case. Posidonia Operations is offered in two deployment and operational modes: on-premises and on a virtual private cloud. When on-premises, having a methodology and tools that ease the deployment process will result in a shorter time to production, thus saving costs and resources. In the case of a virtual private cloud deployment, it is expected that the monitoring, analysis and iterative enhancement of our current solution will result in better hardware requirements specifications, which in the end translate into lower operational costs. Posidonia Operations is defined as a “glocal” solution for maritime operations: it offers a global solution for maritime traffic processing and analysis that can be configured, customized and integrated according to local requirements. In addition, the solution operates in real time, which makes tasks like testing, integration and releasing more critical. By applying the methodology explained in this book, these tasks are expected to improve, resulting in shorter development lifecycles and lower development costs. Several quality and performance metrics have been considered of interest for the Posidonia Operations use case. Monitoring, predictive analysis and ensuring reliability between successive versions will result in an iterative enhancement of the quality of service to our current customers. In summary, the business goals are:
 * Lower deployment and operational costs
 * Lower development costs
 * Improve the quality of service

Use Case Architecture
Posidonia Operations is an integrated port operations management system. Its mission is to “glocally” monitor vessels’ positions in real time in order to improve and automate port authorities’ operations. The image below shows the general architecture of Posidonia Operations. The architecture is based on independent Java processes that communicate with each other by means of a middleware layer that provides a Message Queue, a Publication and Subscription API and a set of Topics to exchange data among components.



An overview of the main components of Posidonia Operations:


 * Vessels in the service area of a port send AIS messages that include their location and other metadata to a central station. (This is out of the scope of the architecture diagram.)
 * An AIS Receiver (a spout) receives those messages and emits them through a streaming channel (usually a TCP connection).
 * The AIS Parser (a bolt) is connected to the streaming channel, parses each AIS message into a middleware message and publishes it to a Message Queue.
 * Other components (bolts) subscribe to the Message Queue to receive messages for further processing. As an example, the Complex Event Processing engine receives AIS messages in order to detect patterns and emit events to a different Message Queue.
 * The Posidonia Operations web client gives port employees a visual tool to track the location of vessels in real time. It shows on a map the vessels within the area of influence of a port, together with a list of the operations that are taking place.
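The receiver-to-parser hop above can be sketched with an in-memory stand-in for the middleware layer. The topic names and the simplified AIVDM field handling below are invented for illustration; they are not Posidonia's actual API, and real AIS decoding (six-bit de-armouring, multi-part sentences) is omitted.

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Consumer;

// Minimal in-memory stand-in for the middleware layer: named topics with a
// publication and subscription API.
class Middleware {
    private final Map<String, List<Consumer<String>>> topics = new ConcurrentHashMap<>();

    public void subscribe(String topic, Consumer<String> handler) {
        topics.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
    }

    public void publish(String topic, String message) {
        topics.getOrDefault(topic, Collections.emptyList()).forEach(h -> h.accept(message));
    }
}

// The AIS Parser "bolt": consumes raw NMEA sentences from the streaming
// channel topic and republishes only the armoured payload on a decoded topic.
class AisParser {
    AisParser(Middleware mw) {
        mw.subscribe("ais.raw", sentence -> {
            // An AIVDM sentence looks like: !AIVDM,1,1,,A,<payload>,0*hh
            String[] fields = sentence.split(",");
            if (fields.length > 5 && fields[0].endsWith("AIVDM")) {
                mw.publish("ais.decoded", fields[5]);
            }
        });
    }
}
```

A CEP bolt would subscribe to the decoded topic in exactly the same way, which is what keeps the Java processes independent of each other.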

Use Case Scenarios
There exist several common scenarios in which the Posidonia Operations development lifecycle can benefit from the knowledge explained in this book. These scenarios are a small subset of the possible ones, but they are representative of interesting situations and are based on our experience delivering a data-intensive application to port authorities and terminals.

Deployment Scenario
Currently, Posidonia Operations can be deployed in two ways:


 * On-premises: The port authority provides its own infrastructure and the platform is deployed on Linux virtual machines.


 * In the cloud: Posidonia Operations is also offered as a SaaS for port terminals. In this case, we use the Amazon Virtual Private Cloud (VPC) to deploy an instance of Posidonia Operations that gives support to different port terminals.

Apart from this, configuration varies depending on the deployment environment:
 * Hardware requirements (number of nodes, CPU, RAM, disk) for deploying Posidonia Operations at each port are based on team experience. For each deployment, the hardware requirements are calculated manually by engineers, considering the estimated number of vessels and the complexity of the rules applied to each message to be analysed. DICE tools can help to tune the appropriate hardware requirements for each deployment automatically.


 * Posidonia Operations deployment and configuration are performed by a system administrator and a developer, and they vary depending on the port authority. Although deployment and configuration are documented, DICE tools can help us adopt a DevOps approach in which both are modelled, not only so that different stakeholders better understand the system, but also to automate some tasks.


 * A DevOps approach can also help to provide test and simulation environments that improve our development lifecycle.
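The manual hardware sizing described in the first point above can be illustrated with a toy heuristic that turns an estimated vessel count and rule count into a core estimate. Every constant here is an invented placeholder, not Posidonia's real sizing formula; it only shows the shape of the calculation that DICE tools aim to automate and refine.

```java
// Toy capacity heuristic: all constants are assumptions for illustration.
class SizingEstimator {
    static final double MSGS_PER_VESSEL_PER_SEC = 0.2; // assumed AIS report rate
    static final double RULE_COST_MS = 0.5;            // assumed CPU cost per rule per message
    static final double CORE_BUDGET_MS = 1000.0;       // one core's CPU budget per second

    /** Estimated CPU cores needed for a port with the given load. */
    static int estimateCores(int vessels, int cepRules) {
        double msgsPerSec = vessels * MSGS_PER_VESSEL_PER_SEC;
        double cpuMsPerSec = msgsPerSec * cepRules * RULE_COST_MS;
        return (int) Math.ceil(cpuMsPerSec / CORE_BUDGET_MS);
    }
}
```

For example, under these assumed constants, 1000 vessels and 100 rules would call for roughly ten cores; the point of the DICE tooling is to replace such hand-picked constants with measured and simulated values.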

Support vessel traffic increases for a given port
Posidonia Operations’ core functionality is based on analysing a real-time stream of messages representing vessel positions in order to detect and emit events that occur in the real world (a berth, an anchorage, a bunkering, etc.). Different factors can make a port’s maritime traffic increase (or decrease), namely:
 * Weather conditions
 * Time of the day
 * Season of the year
 * Current port occupancy
 * etc.

This means that the number of messages per second to be analysed is variable, which can affect the performance of the system and the reliability of the detected events if the system cannot process the streaming data as it arrives. When it cannot, messages are queued, a situation that has to be avoided. We currently have tools to increase the speed of the streaming data in order to validate the behaviour of the system in a test environment. However, validating and tuning the system for a traffic increase is a tedious and time-consuming process, and DICE tools can help improve our current solution here.
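The stream-acceleration tooling mentioned above can be sketched as a replayer that divides the recorded inter-arrival gaps by a speed-up factor; the class and method names are invented for the sketch, not our actual tool.

```java
import java.util.List;

// Sketch of a stream accelerator: replays recorded messages, shrinking the
// original inter-arrival gaps by a configurable speed-up factor.
class StreamReplayer {
    /** Delay in ms to wait before emitting message i at the given speed-up. */
    static long delayMillis(List<Long> timestampsMillis, int i, double speedup) {
        if (i == 0) return 0L;
        long gap = timestampsMillis.get(i) - timestampsMillis.get(i - 1);
        return Math.round(gap / speedup);
    }

    /** Replays the recorded messages into a sink at speedup times real time. */
    static void replay(List<Long> timestampsMillis, List<String> messages,
                       double speedup, java.util.function.Consumer<String> sink)
            throws InterruptedException {
        for (int i = 0; i < messages.size(); i++) {
            Thread.sleep(delayMillis(timestampsMillis, i, speedup));
            sink.accept(messages.get(i));
        }
    }
}
```

Replaying a recorded day of AIS traffic at, say, 4x lets us observe whether the queues described above start to build up, without waiting for real traffic to peak.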

Add new business rules (CEP rules) for different ports
Analysis of the streaming data is done by a Complex Event Processing engine. This engine can be considered a “pattern matcher”: for each vessel position that arrives, it evaluates different conditions that, when satisfied, produce an event. The number of rules (i.e. computation) applied to each message can affect the overall performance of the system. In fact, the number and implementation of the rules vary from one deployment to another. DICE tools can help with quality and performance metrics, simulation, predictive analysis, optimization, etc., in order to tune our current solution.
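The “pattern matcher” behaviour can be illustrated with a toy rule engine. The rule shown below (near-zero speed interpreted as a berthing) is an invented example, not a production CEP rule, and the class names are ours.

```java
import java.util.*;
import java.util.function.Predicate;

// A vessel position as it would arrive from the AIS stream (simplified).
class Position {
    final String mmsi;
    final double lat, lon, speedKnots;
    Position(String mmsi, double lat, double lon, double speedKnots) {
        this.mmsi = mmsi; this.lat = lat; this.lon = lon; this.speedKnots = speedKnots;
    }
}

// Toy "pattern matcher": each rule is a named predicate over a position;
// when the predicate holds, an event with that name is emitted.
class CepEngine {
    private final Map<String, Predicate<Position>> rules = new LinkedHashMap<>();

    void addRule(String eventName, Predicate<Position> condition) {
        rules.put(eventName, condition);
    }

    /** Applies every rule to the position and returns the emitted event names. */
    List<String> process(Position p) {
        List<String> events = new ArrayList<>();
        rules.forEach((name, cond) -> { if (cond.test(p)) events.add(name); });
        return events;
    }
}
```

Because every rule runs against every incoming position, the per-message cost grows with the rule count, which is exactly why adding rules for a new port affects overall throughput.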

Give support to another port in the cloud instance of Posidonia Operations
Supporting an additional port (or terminal) in the cloud instance of Posidonia Operations usually means:


 * An increase in streaming speed (more messages per second)
 * An increase in computation (more CEP rules executed per second)
 * Deployment and configuration of new artefacts and/or nodes

In this case, DICE tools can also help Posidonia Operations by estimating the monetary cost of introducing a new port into the cloud instance.

Run a simulation to validate performance and quality metrics across versions
CEP rules (business rules) evolve from one version of Posidonia Operations to another, which means that the performance and quality of the overall solution can differ between versions. One of the main issues of the current situation is that measuring performance (both system performance and the quality of the data provided by the application) is done manually, and it is very costly to obtain an objective quantification. By using the DICE simulation tools, performance and reliability metrics can be predicted for different environment configurations, thus ensuring high-quality versions and non-regression. Some examples of validations we currently perform manually:
 * Performance: the new version of the CEP rules does not introduce a performance penalty on the system
 * Performance: the new version of the CEP rules does not produce queues
 * Reliability: the new version of the CEP rules provides the same output as the prior version (both detect the same events)
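The reliability check in the last item can be sketched as a comparison of the event streams produced by two rule versions on the same recorded input; any event present in one output but not the other is a regression candidate. The names below are illustrative.

```java
import java.util.*;

// Sketch of a non-regression check between two CEP rule versions: both
// versions are run over the same recorded AIS input, and the resulting
// event lists are compared for divergences.
class EventRegressionCheck {
    /** Returns the symmetric difference between the two versions' event outputs. */
    static Set<String> divergingEvents(List<String> oldVersion, List<String> newVersion) {
        Set<String> onlyOld = new HashSet<>(oldVersion);
        onlyOld.removeAll(newVersion);
        Set<String> onlyNew = new HashSet<>(newVersion);
        onlyNew.removeAll(oldVersion);
        onlyOld.addAll(onlyNew);
        return onlyOld; // empty set means the versions agree
    }
}
```

An empty result means the two rule versions detected exactly the same events on the recorded traffic; anything else is flagged for manual review.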

DICE Tools
The DICE Framework is composed of several tools (DICE Tools): the DICE IDE, the DICE/UML profile, the Deployment Design (DICER) tool and the Deployment Service provide the minimal toolkit to create and release a DICE application. To validate the quality of the application, the framework encompasses tools covering a broad range of activities, such as simulation, optimization, verification, monitoring, anomaly detection, trace checking, iterative enhancement, quality testing, configuration optimization, fault injection and repository management. Some of the tools are design-focused, others are runtime-oriented; some have both design and runtime aspects and are used in several stages of the development lifecycle.

Some of the DICE Tools have been used during the use case in order to achieve the business goals set for it. The following table summarizes the tools used during the use case and the benefits obtained from their use.

Conclusion
We affirm that the DICE methodology and the DICE framework are very useful in the Maritime Operations use case, providing a productivity gain when developing it. The results obtained from applying the DICE Tools to the use case can be summarized as follows:


 * Assessment of the impact on performance of changes in software or conditions. We can predict at design time the impact of changes in the software (number of rules, number of CEPs) and/or in the operating conditions (input message rate, with the Simulation Tool; CPU overloads, with the Fault Injection Tool). Moreover, bottlenecks and anomalies can be detected using the Anomaly Detection Tool.
 * Increased quality of the system. We can detect occasional performance problems with the Anomaly Detection Tool, and we can detect errors in the CEP component: lost rules, false rule detections, and delays between the detection of an event and the real time at which it happened.
 * Automatic extraction of relevant KPIs. Easy computation of application execution metrics (generic hardware metrics such as CPU, memory consumption, disk access, etc.) and of application-specific metrics, which have to do with the computational cost of rules, the number of messages processed per second, the location of events on a map, the quantification of application performance in terms of the percentage of port events correctly detected by the CEP(s), etc.
 * Automation of deployment. Much faster deployments and the possibility of deploying to different cloud providers.