Remedium was launched

The first beta version of Remedium was launched this week at

It has been a very long year moving this platform from my sketches onto a real implementation, as seen in the screenshot, but I am happy with the current result.

Remedium opens the door to a new portfolio of projects based on the Java platform and language.

It is also an opportunity to move away from the aging concept of applications restricted to the desktop and toward applications that take advantage of web browsers and remote management.

So, when accounting for the learning curve to program proficiently in Java, the effort to create a new platform from scratch, and the research work necessary to implement a new security concept, I would say that the development time invested since last year was indeed put to good use.

I wouldn't have made it this far without the help of Professor Benoit Morel from CMU and my close friend José Feiteirinha. The professor helped me keep my feet on the ground about the goals to reach, and José brought really cool ideas that helped make this platform a reality.

Unless you have been involved since the start, it is difficult to grasp the effort this work required. It took quite a few rewrites before it resulted in a simple solution.

The first phase has been completed. The Remedium client is indexing all files inside a given machine and placing the gathered data inside a set of databases.
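As a rough illustration of this indexing phase, the sketch below walks a directory tree and records each regular file with its size. The class name, the record shape, and the use of an in-memory map in place of Remedium's actual databases are all simplifying assumptions, not the real implementation.

```java
// Hypothetical sketch of a file-indexing pass: walk a directory tree and
// record every regular file with its size. A Map stands in for the set of
// databases the Remedium client actually uses.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Stream;

public class FileIndexer {
    // Returns a map of file path -> file size for everything under root.
    static Map<String, Long> index(Path root) throws IOException {
        Map<String, Long> records = new HashMap<>();
        try (Stream<Path> paths = Files.walk(root)) {
            paths.filter(Files::isRegularFile)
                 .forEach(p -> {
                     try {
                         records.put(p.toString(), Files.size(p));
                     } catch (IOException ignored) {
                         // Unreadable files are simply skipped in this sketch.
                     }
                 });
        }
        return records;
    }
}
```

A real client would also store hashes and timestamps, but the walk-and-record loop is the core of the idea.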

Indexing files is not an amazing feature in itself; creating a solid architecture was the real challenge. The platform provides enterprise functionality such as message queues, a web server, and component-based modules that will allow us and other developers to expand Remedium with more features.
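To give a feel for the component-plus-message-queue style described above, here is a minimal sketch. The `Component` interface, `MessageQueue` class, and all names are illustrative assumptions of mine; the actual Remedium API is not reproduced here.

```java
// Hypothetical sketch of component-based modules wired to a message queue.
// Names and interfaces are assumptions, not the real Remedium API.
import java.util.ArrayDeque;
import java.util.Queue;

interface Component {
    String getName();
    void onMessage(String message); // delivered by the dispatcher below
}

class MessageQueue {
    private final Queue<String> pending = new ArrayDeque<>();
    private final Component target;

    MessageQueue(Component target) { this.target = target; }

    void send(String message) { pending.add(message); }

    // Drain queued messages to the component; returns how many were delivered.
    int dispatch() {
        int delivered = 0;
        String msg;
        while ((msg = pending.poll()) != null) {
            target.onMessage(msg);
            delivered++;
        }
        return delivered;
    }
}

public class ComponentSketch {
    public static void main(String[] args) {
        Component indexer = new Component() {
            public String getName() { return "indexer"; }
            public void onMessage(String m) {
                System.out.println(getName() + " received: " + m);
            }
        };
        MessageQueue queue = new MessageQueue(indexer);
        queue.send("scan /home");
        queue.send("scan /tmp");
        queue.dispatch();
    }
}
```

The point of such a design is that new modules only need to implement the component contract to plug into the platform.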

The next two phases are equally challenging, if not more so. We will start working in a networked environment, aggregating data from multiple sources while also feeding multiple points.

In the end, we should have a system that allows users to exchange data and provide feedback on their own, using a fully decentralized structure.

A lot of work ahead, but it has been fun so far.

Little pieces.. Little pieces..

Today I am very happy. I solved a challenge that had been a concern for almost a year.

I needed to find a way of keeping two databases (more specifically, two tables) on different machines synchronized with each other. The task looked simple enough when designing the system.

As depicted in the image, the proposed concept was easy to understand and perfectly logical on paper. In implementation, however, the solution was not clean at all.

Keeping two databases synchronized was not difficult; the difficulty appeared when the pairs of databases to be maintained started multiplying, along with the nagging awareness that even more databases would appear in the future.

When designing system architectures, one can imagine in advance several problems that will be encountered. But when building a brand new system, there is scarce preparation for getting things up and running as intended from a blueprint. The proposed solution was too complicated for exchanging data, and common sense dictates that the more complex a system becomes, the more prone it is to errors and the harder it is to improve in the future.

I wasn’t happy. The implementation looked “ugly” from all possible perspectives, and to my despair it wouldn’t even scale to the exchange of gigabytes between remote machines, as necessary.

After five months of frustrating effort to make the designed concept work in a simple manner, it was time to throw in the towel and rethink my approach to the problem.

I gathered my notes on what I had learned over the past months, locked myself in the house for two days, and only stopped when it was finally working: the new solution is simple and functional.

Too many data containers needed synchronization, the server side had to manage each one manually, and the whole thing had grown into a huge mess.

The new solution is imaginative and breaks away from a rigid client-server concept toward a mixed model of peer-to-peer data exchange.

It is so simple that it required only two methods inside a single class to work as intended, scaling to whatever amount of data needs to be exchanged, up to tens of gigabytes.

In summary, we start with the traditional client-server approach to discover who is authorized to exchange data; after the initial handshake protocol, we ask each database container to “talk” with its counterpart container on the server side.

This method drastically reduced the administration burden. Each client container updates its server counterpart with new information as it becomes available, and vice versa. We maintain a supervising entity for transactions to provide security, while each container ensures that it correctly passes data to the other side.
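The two-phase idea described above can be sketched roughly as follows. The class name, method names, and the use of in-memory maps standing in for database containers are all assumptions for illustration; the real Remedium code is not reproduced here.

```java
// Hedged sketch of the two-phase exchange: a client-server handshake to
// check authorization, then each container "talks" to its counterpart and
// copies over whatever records the other side is missing, in both directions.
// Maps stand in for the actual database containers.
import java.util.HashMap;
import java.util.Map;

class ContainerSync {
    // Phase 1: traditional client-server step, deciding whether the remote
    // side is authorized to exchange data at all.
    boolean handshake(String clientId, Map<String, Boolean> authorized) {
        return authorized.getOrDefault(clientId, false);
    }

    // Phase 2: peer-style exchange between counterpart containers; returns
    // how many records were transferred in total.
    int synchronize(Map<String, String> local, Map<String, String> remote) {
        int transferred = 0;
        for (Map.Entry<String, String> e : remote.entrySet()) {
            if (!local.containsKey(e.getKey())) {
                local.put(e.getKey(), e.getValue());
                transferred++;
            }
        }
        for (Map.Entry<String, String> e : local.entrySet()) {
            if (!remote.containsKey(e.getKey())) {
                remote.put(e.getKey(), e.getValue());
                transferred++;
            }
        }
        return transferred;
    }
}
```

Because every container follows the same two steps, adding another container to the bulk requires no new server-side bookkeeping, which is the scaling property the text describes.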

So I no longer need to care about individual instances of containers; if more instances are added to the bulk, they follow the same rules and patterns as the previous ones.
Breaking the data exchange into little pieces helped reduce the complexity of this problem to a manageable solution.

Simple solutions are nowhere near simple to find. They take a lot of effort, time, and concentration. They truly bear a heavy price.

My advice is that you should do your best to go that extra mile and find them.

In the end, they are worth the effort.

(screenshot image credits to