all about java, coding and hacking

Caluga - The Java Blog

This blog covers topics around Java, open source, MongoDB and the like - especially Morphium, THE POJO mapper for MongoDB.


category: Computer --> programming --> MongoDB --> morphium

New Release of Morphium V3.1.7

2017-11-21 - Tags: java mongo

We just released V3.1.7 of Morphium, the caching MongoDB POJO layer.

  • performance increase: insert vs. upsert
  • fixed update handling of non-ObjectId ID fields (bugfix)
  • updated tests
  • new strategy for the buffered writer: WAIT
  • maxWait / timeout can now be set for the wait strategy in the buffered writer
  • moved ID creation into Morphium, implemented proper inserts, fixed bugs
  • fixed buffered writing on sharded environments
  • performance increase
  • MongoDB driver version update, fixed the checkForNew default

Details can be found on the project page on GitHub. You can easily add it to your projects using Maven:
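A dependency snippet might look like this. Note: the coordinates below (group `de.caluga`, artifact `morphium`) are my assumption of the published Maven Central coordinates - please verify them on the project page:

```xml
<!-- coordinates assumed; check the project page / Maven Central -->
<dependency>
    <groupId>de.caluga</groupId>
    <artifactId>morphium</artifactId>
    <version>3.1.7</version>
</dependency>
```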


category: Computer --> programming --> MongoDB --> morphium

new release of Morphium 3.1.5

2017-09-29 - Tags: morphium

This release is about tidying up things a bit and includes some minor fixes and tweaks.

  • fixed some statistics
  • moved the drivers into a separate project
  • improved byte array / binary data handling
  • fixed some tests
  • fixed checkForNew behaviour; it is now a bit easier to understand: if switched off globally, setting it at the annotation has no effect.

Available via Maven Central

category: global

git and optware fail after Qnap update to 4.3.3

2017-08-21 - Tags: git qnap storage

For quite some time now I have had a Qnap running in my basement, storing whatever my servers or family members might need to store.

The Qnap is also being used as a git server - which was totally fine for the last couple of years, but failed recently...

fail after update

I just saw that there was a firmware update pending for the TS, and that this is more or less the last one for this old model (hey, the Qnap is not that old - three years, maybe?). There was also a warning in the release notes that the switch to 64 bit might cause some apps to stop working...

So far, so uninteresting. The usual blah blah... Unfortunately, Optware (which installs additional open source software) is obviously only available in 32 bit.

But this is something you only learn the hard way: the software just would not work. Trying to access its GUI results in "internal server error", "page not found" or simply "Permission denied" - depending on what you just tried to make it work.

If you log in via ssh and try to use ipkg in the shell, you get a "file not found" error, even though you executed the file with its absolute path. The Linux gurus know: this means some lib is missing!

And in that case I did not need to dig further - the 32 bit libs are simply not there anymore.

That by itself would not be a problem, but everything installed by Optware relies on those libs and hence fails now... bummer!

to solve the problem..

.. you install an alternative to Optware called Entware. You download the file and install it via the Qnap GUI. Unfortunately this tool does not have a "nice" GUI (the so-called GUI of Optware was never nice either), just a command line tool called opkg (mind the o).

After that you only need to create symlinks for the git binaries (after fiddling with the sshd to enable public key authentication and more than just the admin user):

cd /bin
for i in /opt/bin/git*; do
   ln -s "$i"
done

And as always - happy hacking!

category: security

Virus scanners for the Mac or iPhone

2017-05-29 - Tags: virus security

originally posted on:

As some of my readers are not that good at reading and understanding German, I'll try to also write those of my posts that might be interesting in English. I hope everything is understandable so far. This is not a translation, just a rewrite in English. Let's start with the last post, about anti-virus software.

Anti-Virus software on the Mac or iPhone?

People are more and more concerned about viruses. Mac users, too, are starting to worry about that threat. So, is it necessary to install anti-virus software on the Mac? I have been asked that question several times lately...

First of all, this question is totally justified. Everyone should harden their computers and phones as far as they feel safe. But installing anti-virus software on the Mac would actually not produce much more than a feeling. As of now there is only a handful of harmful software known for the Mac, all of which is filtered by the Mac's own security mechanisms and thus is not really a threat anymore.

At the moment the Mac is safe - but soon...

"Soon it will be very bad for Mac users. Viruses will come..."

I hear that every year, whenever the new market share numbers are published and OSX gains. Then everybody tells me that the market share is soon reaching some magic percentage at which it will be so interesting for virus programmers to write viruses for Macs that there will be a flood of malware. Or will there?

Of course, market share definitely influences the amount of malware for a certain system. But in addition to that, you should take the necessary effort and feasibility into account. And the payoff... (in terms of malware: what could I gain? Keylogging? A botnet?)

I think one should take both into account: if a system is easy to hack, it will be hacked, even if almost nobody is using it. If a system's market share is not that high, but it is relatively simple to hack - it will be hacked! For example, the Microsoft Internet Information Server (IIS) is attacked far more often than the market share leader Apache. When a system is very hard to hack, you need a good incentive to take the effort. Which could be the reason why there is no real virus for Linux or OSX.

And when I write "hacked", I mean it in the virus sense - not remote hacking of user accounts. And it needs to be done more or less automatically by software, otherwise there will be no real virus or worm. If somebody wants to hack a certain machine and has the knowledge, he can do it - depending on resources, effort and motivation ;-) I once knew a hacker you could hire to hack the servers of a competitor, for example. Those things are always possible. But this is almost always an administrative problem. There is no real protection against those guys. You can hack any machine you can physically touch - resources and motivation required, of course. Best example: the jailbreaking of iOS! And if there is enough motivation, resources and knowledge, you're not really safe (see NSA & co). So it's a question of effort: hacking the machine of a 14-year-old student is definitely not as interesting as hacking the machine of a CEO of a big company or a politician.

The same is valid for malware and viruses: malware is not developed for the fun of it (well, at least most of the time it's not). People want to make money with it. This is the only reason why there are viruses! Maybe that's the reason why there is still the rumor that the anti-virus software vendors actually pay some virus developers to spread viruses every once in a while. Who knows... I cannot rule that out for sure. I met some Russian guys who claimed it to be true. If so, then I don't understand why there is so little malware for Linux and OSX. That would be a huge market for anti-virus software vendors - millions of users, a complete new market segment worth millions or billions of dollars.

I think viruses are only developed to MAKE MONEY, either directly (data theft, credit card fraud etc.) or indirectly (by spamming, using hacked machines as bots on the way to the real target, botnets etc.). And when money is involved, the effort and resources necessary must of course be lower than the estimated revenue. So we are back at the combination of effort and market share: market share influences the potential revenue (assuming that the more machines are hacked or affected by malware, the more money is being made), effort is the cost. And in some cases this is obviously not a positive figure...

malware in general

In the media and in the heads of non-IT people, all malware is called a "virus". But in order to protect yourself effectively, it is important to know exactly what kind of software pest you're dealing with. You can distinguish three different kinds of malware: viruses, trojans and worms - and there are mixtures of those in the wild, like a virus which spreads like a worm (hence the umbrella term "malware").

  • a virus is a little program which reproduces itself on the system and does its dirty work there. Most of the time viruses exploit some security hole in order to gain more privileges. Once those privileges are gained, the virus will do things you usually do not want it to do - like deleting files or sending data to a server...
  • a trojan is quite similar to a virus, but needs the user's help to get installed. Usually it looks like some useful piece of software, a tool of some kind, but in addition to the functionality you desire, it also installs some malware on the system. Usually the user is asked to grant the software more access - on OSX at least. But even if it does not seek privilege escalation, your data is still at risk. See wikipedia
  • a worm is a piece of malware that is capable of spreading itself over the network (either locally or over the internet, see wikipedia). You can easily protect yourself against worms if you just unplug the network from your computer (and/or disable WiFi) or at least disable internet access. Sounds insane, but I have been to offices and departments that do exactly that: the whole building is unplugged from the internet; only one specially secured room has internet access - but no access to the local network.
  • a new type of malware just got famous with WannaCry: ransomware. These are usually trojans which use bugs in the system to encrypt all data. And you can only decrypt it if you send a couple of bitcoin to the author.
  • of course, there are mixtures of all those types. Usually there is a trojan that acts like a virus on the system to gain root (or admin) access and uses that to spread itself over the network (worm).

on the Mac?

You always get such "warning messages" on the Mac if any malware wants to do something out of the ordinary that needs system privileges. Exactly that happened a couple of months ago when there was a trojan which was installed via Java and a security issue therein. But still, the users were asked to grant the software more privileges. And enough people just said "yes" to every question...

Please do not get me wrong, I do not want to play down malware. It is out there, and it causes a lot of harm and costs. But you can protect yourself from trojans more or less by using common sense:

  • Why does the new calculator app need access to my contacts?
  • Why does my new notes app need admin permissions?
  • why does software XY ask about this or that permission?
  • is it clever to download tools from an untrusted source, especially if this source offers cracks or exploits or the like?

It gets harder if the trojan uses its newly gained privileges to hack the system itself, maybe even exploiting additional security issues, so that the user is not asked at all. In that case a secure operating system architecture helps to avoid those kinds of things - which is usually implemented by all Unix OSes.

Viruses and worms cannot be avoided as easily, since they exploit bugs in the system. But even then, Unix-based systems are a bit better suited for that case than others.

This is due to a very strict separation between "system" and "user processes" and between the users themselves. And, especially on OSX, we have sandboxing as an additional means against malware. Also, the graphical user interface is not bound as tightly to the operating system kernel as it is in Windows NT, for example.

But overall, the admin of the system is the one who really determines how secure a system is. He should know about the problems his OS has and take countermeasures accordingly.

Malware on mobile devices

If we are talking about malware, we should also have a closer look at mobile devices. Smartphones and the like are attacked especially often, because they hold a lot of interesting data that is worth a lot of money. Or you can just make money directly (e.g. by sending expensive SMS).

To "break into" such a closed system, very often security-relevant bugs are exploited. But sometimes plain social engineering is also successful.

Usually the user is then made to perform some action that involves downloading something that installs a trojan on the system, or that opens the system up so that the attacker can install some malware. Or you just "replace" an official app in the corresponding app store.

Trojans on the smartphone are usually disguised as little useful tools, like a flashlight app. But they then copy the address book, send out expensive text messages, switch on video and audio for surveillance and so on.

It's hard to actually do something against that, because you do not know whether the app you install does something evil or not. Apple tries to address this problem with the mandatory review process that all apps in the App Store need to pass: an automated and a manual check before anyone can download them. For example, apps are not allowed to use unofficial APIs (for accessing the internals of the OS), and the app has to do exactly what its description tells the users it does.

This is no 100% protection, but it is quite good (at least, I do not know of any malware in the App Store right now).

But I would also call WhatsApp, Viber and the like malware. They do exactly what a trojan would do: grab data and upload it to a server. But here the user happily agrees and likes it... but that is a different topic.

On iOS users are a bit more secure than on Android (if you do not jailbreak your iPhone). Android is based on Unix, but some of the security mechanisms within Unix have been "twisted". So there is a "kind of" sandbox, created by adding a new user for every app on the device. Thus all processes are separated from each other. Sounds like a plan. But then you end up having problems with access to shared resources, like the SD card - which needs to be globally readable!

Also, app permissions could at the time only be granted "all or nothing" (that did change in later versions, at least a bit). So you could either grant the app all the permissions it wants, or no permissions at all.

The problem is, you need to set the permissions before actually using the app. This makes it very easy for malware programmers, as people are used to just allowing everything the app asks for.

In addition to that, Android apps have the option to download code over the internet - this is forbidden on iOS. And there is a reason for it: how should any reviewer find out whether the downloaded code stays the same after the review? Today I download weather data, tomorrow some malware which sends chargeable text messages?

Another problem is that there is not one single store for Android but more like a quadrillion of them. Hence you can install software from almost any source onto your Android device.

Of course, every OS has bugs which might be used to execute good or evil code on the device. Hence those OSes get updates on a regular basis, which should fix security-relevant bugs and issues. With iOS you can be sure that you get updates for your device and its OS for at least a couple of years (the current iOS still runs on 3 to 4 year old hardware). With Android it is not as easy to make such a statement, as the support strongly depends on the vendor. It may be that support for devices older than 1.5 years is stopped. Especially the cheap Android phones lose support quite early, which means there are still Android 2.x devices out there (and you can actually still buy new devices with that installed) - including all the bugs that the old OS version had, which makes it quite interesting for malware authors.

In combination with the somewhat more insecure system and the unsecured sources of software, this makes Android a lot more prone to being hacked or infected by malware. And that makes it especially interesting for the bad guys out there.

This leads to really ridiculous things like virus scanners and firewalls for smartphones. Read it here in German.

You can say about Apple what you want, but the approach of reviewing every app for the App Store is a good thing, at least for the end user (and by that I do not mean the power user who wants to have his own version of Winterboard installed). Even if you are not allowed to do "everything" with your phone - normal users usually do not need to.

And the power user can still jailbreak his iPhone - and if he knows what he is doing, that might be OK.

installed software as a gateway for malware

Unfortunately viruses, trojans and malware in general can use any bug of any software on the system, no matter if it is part of the OS or not. So a breach can happen via third-party software - like the "virus" that infected a couple of thousand Macs through an installed Java. In this case the user was again asked several times(!) whether he wanted to grant admin permissions to a Java app - if you agree to that, your system is infected. If not - well, nothing happens. Common sense is a great "intrusion prevention system" here.

Of course, OSX or any other operating system cannot prevent third-party software from doing dubious things - especially if the user agreed to it. But the malware can only gain the permissions that the software used as a gateway has. And on OSX and iOS all applications run in a sandbox with very limited permissions. If the app a malware uses as a gateway does not have admin permissions, well, the malware won't have them either.

If all 3rd party software you run on your system only has minimal permissions, then a malware that would use those as a gateway would also have minimal permissions, and could not do too much harm (and could easily be removed).

But the thing is, just getting access as a normal user is not the goal of such a virus vendor - they want your machine to be part of a botnet in order to sell your computing power or to use it in the next DDoS attack. Or just use it as a spambot.

It is also in the best interest of this virus vendor to make it as hard as possible to remove the software from the system. So everything needs to be buried deeply in the system files, where normally no user takes a closer look.

And this is usually only possible, if the malware would get admin permissions. It could use "privilege escalation" hacks in order to gain more permissions - best case, without the user knowing.

Usually the user should be asked if any process tries to gain more permissions, and the user may or may not agree (that happens every time a process tries to do something outside of the sandbox). Of course, that would be bad for the virus, as it would reduce its success. So virus vendors try hard to avoid this kind of informing or asking the user.

On Unix systems this is quite a hard task, or at least a lot harder than on Windows (see here or here). In almost all cases, on OSX the user is informed about software doing something strange.

But there is one thing we should think about even more: if any software could be used as a gateway, I should reduce the number of programs on the system to a minimum (especially those with network functionality... which is almost any app nowadays). In particular, I should keep the amount of software that runs with admin permissions to the absolute minimum - which is zero! Unfortunately, virus scanners, firewalls and similar "security" software need admin permissions to do their job. This is one of the reasons why anti-virus software is very often the target of attacks from malware and viruses, and ends up spreading the very thing it tries to protect us from (this has happened on Windows machines).

If you then factor in that anti-virus software can only detect viruses that have been publicly known for a while, installing it on your machine would actually not increase the protection a lot.

The same goes for firewalls, which unfortunately have their use on Windows systems, but not on Unixes or OSX. How come?

Well, on Unix systems the network services are usually disabled or not even installed! So the visible footprint of such a machine on the internet is quite low.

Windows, on the other hand, depends on some network services to run, even if you do not actively use them. Disabling those services (and SMB is one of them - this was used by WannaCry!) would affect the system in a bad way and some things would not run as expected (see here).

Hence, if your system has a minimal footprint - or attack surface - you do not need a firewall.

Btw: do not mix up this local firewall, with a real IP-filter firewall that is installed in routers!

Virus scanners on servers

So, there is a lot that explains why using virus scanners on the desktop (especially a Unix desktop) can have negative effects, or at least no effect. You're probably fine without them...

But on servers, things look a bit different.

If I have clients that are not well maintained, or that I just do not know (or that simply run Windows), I want to avoid storing data on my server that could infect them. Even if the viruses do not infect my server or my Mac, the mails could be read by other clients, which might then be infected. So, be nice to your neighbors...

Do not forget that virus scanners need resources - and sometimes a lot of them (they monitor every access to/from the system, which in return can or will slow it down to a certain extent).

Security is not for free

Whatever you do, security comes at a cost. In the "best" case, things get inconvenient to use, because you need to go through complex authentication or agree to a lot of popups that pop up every second (remember Windows Vista?).

In the worst case, there are errors because of the high complexity, or it gets expensive because you need additional hardware (iris scanners, external firewalls, application-level firewalls that scan data for viruses...) while still being inconvenient at the same time. And time-consuming (those systems need to be maintained).

So you need to decide what level of security you want, and what is sensible. An iris scanner for the bathroom is probably a bit over the top... don't you think?

common sense

The best weapon we have against malware is still the thing between our ears! Use it when surfing and when installing software. No software will ever be able to stop you from doing something stupid to your system.

So it is not OK to feel too safe just because you're on a Mac. That leads to sloppiness! Passwords, for example, need to be real passwords. If the password can easily be guessed, why should malware take the detour of hacking the system? It could just "log in", and you've lost your system to the bad guys...

I don't want you to get paranoid about this either! Just keep your eyes open. When installing software, only do it from trusted sources. And from time to time, have a closer look. There was malware available in the App Store for a couple of days or weeks before Apple removed it. Even the best system can be outwitted.

You should think about which apps you use and which you don't. Even apps that are not really malware per se can do harmful things - like WhatsApp and Viber. You should ask what is happening there! I mean, WhatsApp is uploading the address book to Facebook's servers, and the people whose data you upload there are not asked whether they like that... just a small example...

Just remember: if the product is free, then YOU are the product.

There is no such thing as free beer!


I tried not to be too anti-Microsoft - which is hard, because most of the security issues only exist on Windows systems. Unfortunately, on Windows it is the user who needs to make the system secure and stop it from doing harmful things.

Anti-virus software lulls the user into feeling safe, but most of it has a really lousy detection rate. And really new viruses are not detected at all.

So, should you install anti-virus software on a Mac? You need to decide for yourself, but I tend towards "no, you should not". There are valid reasons to see it differently, but I am not alone with my thoughts: see here and here.

But you should definitely distinguish between desktop and server: as you may be serving data to Windows machines as well, a virus scanner might be a useful thing there.

Almost everything I wrote here is valid for OSX as well as for Linux and other Unixes. Right now there is no known widespread malware for Unix-based systems, as far as I know.

category: Computer --> programming

Markdown - THE cool way of writing text?

2017-05-26 - Tags: markdown

What's that?

We all write a lot of text every day - everyone of us who works with computers. And we all struggle with formatting it. Text without proper indentation, emphasis or boldface is a bit boring to read, and you lack an additional means of expression.

Most editors, even on the web, nowadays support "WYSIWYG": What You See Is What You Get. This looks nice, and most of the time it actually works. But it is hard to use (you switch between mouse and keyboard very often). Well, it annoyed me at least. Especially when you edit text and afterwards the italics are not in the right place anymore, and vice versa.

So, why not put special sequences into normal text and have it rendered afterwards?

This is not really a "new" idea: not so long ago, this was one of the preferred ways of formatting text. You enrich standard text with so-called markup codes that define special formats and whatnot. Most of you probably use this without realizing it: HTML is the most popular markup language. Of course, nowadays you see the result in your browser, not the code. So it became a lot more than just "hypertext".

One other well-known (well, if you are a nerd) implementation of a markup language is LaTeX - a markup language especially focused on typesetting. You get a very good looking printout (and can use your favorite text editor). Maybe that is one of the reasons why it is so popular for master's theses and diplomas.

Those "languages" are a bit too complicated and complex for just writing an email or the like. And that is where Markdown comes to help.

why that?

But why would you want to have your text enriched with command sequences, if you can do it with the mouse cursor?

Yes, it works using a mouse. But there are people among us who type so quickly that switching between mouse and keyboard actually slows them down. And I am one of them.

And the beauty of Markdown is that the sequences used for formatting text are easy to reach, more or less standard characters (nothing special).

For example: if I want something to be emphasized (italics), I just put an underscore _ before and after the sequence - done!

Or I want a numbered list... I start a paragraph with 1. - this will be indented as expected, as will subsequent lines.
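The sequences mentioned above look like this in plain Markdown (a small illustration, following the standard syntax):

```markdown
This word is _emphasized_ and this one is **bold**.

1. first item of a numbered list
2. second item,
   with an indented continuation line

- a simple bullet list
- works with dashes or asterisks
```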

But there is a lot more; a list of what is possible can be viewed here.

I, for instance, write everything here in the blog in Markdown! So I did not have to use a WYSIWYG editor for the frontend. And the texts are stored as "simple" text and can be indexed quite easily. So no proprietary XML stuff or, even worse, some binary format.

Especially if you are a developer, Markdown has advantages. If it is properly configured, your source code (which is usually part of the documentation) can be highlighted:

Example Java code:

    public static void main(String[] args) {
        System.out.println("Markdown is colorful");
    }

Example bash shell script:

    echo "and understands different languages"
    for i in $(ls); do
        echo "This is rocking $i"
    done
I will add another post here some time, because in order to get the syntax highlighting to work with the markdown renderer library I use, I had to extend the software a bit.

cool, and now?

Well, if you want to use Markdown, you will have to learn those "commands" or sequences. But it is really worth doing, especially if you are a touch typist and fast at it.

There are a lot of tools to easily create stunning texts. And there is a huge community around Markdown that keeps coming up with new features and new tools. So there are extensions for Markdown like CriticMarkup, or even the extensions from MultiMarkdown.

All in all, this is a really powerful toolset which helps you concentrate on the thing that really matters: the text!

Here you can see a list of all standard functionality in Markdown.

Markdown on the Mac

On the Mac there is already a number of good tools that support Markdown or help with syntax highlighting. Those editors sometimes have a live preview and are able to export your text as HTML, RTF or PDF. I will create a couple of tool-test posts for those editors. Here is a little list of some tools I used:

Most IDEs also support Markdown (Xcode, IntelliJ & co). So, for creating documentation directly in your project, Markdown definitely is an option.

Unfortunately, support in mail clients is a bit of a problem right now. Apple Mail does not support Markdown at the moment, but you can use markdown-here to ease the pain.

Of course, that is a bit inconvenient to use. Better use mail clients that support Markdown natively, like MailMate or AirMail 2 (the latter has some severe data privacy issues, but that is a different topic).


Markdown has a lot of advantages, especially because of the excellent tools already available. You just type your text and concentrate on typing; formatting is done automagically. Afterwards you can export it as PDF, RTF or whatever.

So Markdown definitely is an alternative, not only for developers, but also for users who create a lot of text: books, reports, emails and whatnot.

But you will have to get used to the tools, and you need to add the rendering of the document to your time sheet as well....

category: global


2017-05-20 - Tags: english ergodox-ez

I was a little bit bored of always creating the layout by "hand", and I have to admit that I never got the Massdrop configurator to work properly. I also never managed to keep the overview PNG up to date with all of my changes. Hence I decided to create a little tool to help me with that.

  • it should create the keymap.c file
  • it should also be able to read it - no need for a file type of its own
  • it should support macros - there are only a couple of macro types that are useful, like "type this sequence of keys" or "if pressed, press these keys; if typed, type this sequence"
  • it should have a GUI that could also serve as documentation for the layout

So... the first milestone is finished: I created a little Java application that can read my keymap.c and show the layout graphically. Yes, I know, it does not look that good, but it is OK...
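The post does not include the parser's source, but as a rough illustration of the reading part - pulling keycode tokens like KC_A or MO(1) out of a keymap.c line with a regular expression - a sketch might look like this. The class, method and regex below are my own for illustration, not the actual tool's code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class KeymapSketch {

    // Matches plain keycodes (KC_TAB) and simple parameterized ones (MO(1), LT(2))
    private static final Pattern KEYCODE =
            Pattern.compile("[A-Z][A-Z0-9_]*(\\(\\d+\\))?");

    // Pull all keycode tokens out of one line of a keymap.c KEYMAP(...) block
    static List<String> extractKeycodes(String line) {
        List<String> codes = new ArrayList<>();
        Matcher m = KEYCODE.matcher(line);
        while (m.find()) {
            codes.add(m.group());
        }
        return codes;
    }

    public static void main(String[] args) {
        System.out.println(extractKeycodes("KC_TAB, KC_Q, MO(1), KC_TRNS,"));
        // prints: [KC_TAB, KC_Q, MO(1), KC_TRNS]
    }
}
```

The real parser of course has to track layers and macro definitions as well; this only shows the token extraction step.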

Here is the first Screenshot of this simple tool, showing my current osx_de layout:

Things still to be done:

  • proper C-file export
  • macro support
  • sorting of layers
  • KeyMapping
  • UI improvements...

So for now, this tool mainly helps with documenting keymaps. It reads in a keymap.c file and shows it in a more graphical way. This is an example:

The tool is available on Github:

Disclaimer: This is in the prototype phase. Not really more than a proof of concept. So use it at your own risk. Same goes for the code - it works, but there are some ugly parts in it...

Here is the documentation PNG of all layers put together. Creating it took 5 minutes.


Although some people thought this would be an April Fools' prank - it isn't. This tool really exists and really works! It is now capable of creating overview PNGs with the click of a button. This is the overview PNG for my own layout osx_de:

But it works with all other layouts so far as well. Like the default one:

This is an example where the parsing worked fine, but the file lacks some information: the layers do not have descriptive names. You can also see that a lot of macros are being called. Here most of them just output unicode special letters, but the ELG does not show them properly:

Things still to be done:

  • create a release, executable jar file, so that everybody can just test it.
  • fix display of keys - EXCLM should be shown as a !.
  • fix parsing of macros, especially the handling of unicode keys
  • once the display is correct, work on the input of keycodes
  • store as proper keymap.c

First BETA release available - go have a look here. This is a BETA release; not all functionality is implemented yet. But you can create your documentation overview PNG file...

It should run on all machines with Java installed (current JDK8!).

category: global

Tweet: got the first version of editing ready. Only Macro...

2017-05-20 - Tags: tweet

got the first version of editing ready. Only Macros missing for now. Next: keymap.c creation...

category: global

Tweet: New documentation PNG for the @ErgoDoxEZ layout os...

2017-05-20 - Tags: tweet

New documentation PNG for the @ErgoDoxEZ layout osx_de. was super easy to create now...

category: Computer --> programming --> Java

ErgodoxLayoutGenerator Documentation

2017-05-20 - Tags: english ergodox ergodox-ez java-2

If you read my blog, you might have noticed that I'm fond of cool keyboards. We IT guys use them the whole day, but most keyboards are just awful to work with. So I'm glad I found a "proper" ergonomic one, my ErgodoxEZ (look at for more information or read my review here).

One of the greatest things about the ErgodoxEZ is its programmability. But you actually need to know how to code in order to make use of it. And even if you do, it is not very intuitive to create a C program that runs on your keyboard and shows your layout.

There is a WYSIWYG editor at, but unfortunately I never got it to work properly - and it is somewhat limited in functionality (macro support etc.).

Hence, I started creating my own little tool for creating my layouts and optimizing them...

The ErgodoxLayoutGenerator

The ErgodoxLayoutGenerator (very clumsy name, but I could not think of anything better for now - let's just call it ELG for short) is programmed in Java, so it should work on all machines and OSs Java is available for. You need to have the latest version of Java8 installed... As I already got some feedback: it needs to be the latest official Oracle JDK or JRE; it does not work with OpenJDK out of the box. If you need any help using OpenJDK, just contact me...

The idea is pretty much similar to the one in massdrop, but the ErgodoxLayoutGenerator is built around the qmk firmware, and it generates a keymap.c file for your local installation!

Prerequisites and remarks

  1. you need a current version of Oracle Java installed on your system
  2. you should have a current version of the QMK-Firmware repository cloned to your machine for the LayoutGenerator to make any sense. It will still generate keymap.c files for you, but you cannot flash them to your keyboard.
  3. Although the qmk-firmware is available for a bunch of keyboards, the ErgodoxLayoutGenerator is only covering the Ergodox and ErgodoxEZ (hence the name).
  4. keep in mind that the ErgodoxLayoutGenerator is still in development and has not been heavily tested on different environments or OSs.
  5. If you have any feature requests or ideas you want to share, please visit the github project page here
  6. The ELG cannot reach the flexibility of a C program, of course. It is limited to the most basic functionalities. For example, it lacks support for UTF characters, and the GUI definitely needs to be refurbished. But I never wanted to win a design prize with it - it should just work.
  7. The ELG reads the keymap.c file and parses it to some extent. But that means it relies heavily on the structure of the file being more or less similar to the "official" ergodox layouts that come with the qmk-firmware. If you want to use the ELG to work with your layout, you should make sure that your keymap file follows this lead.
  8. There is support for custom macros, where you can just add C code to your layout. There is no checking whether that code is correct, no highlighting or whatever.
  9. The support for non-US keyboard layouts is a bit... clumsy. You can use the proper keycodes in most cases, but especially when creating macros, the automatic generation of macro strings does not work; it will create macros for the US layout. This limitation is due to the fact that there is no regulation on how to name the keycode definitions. For example, there could be a KC_ACUT, DE_ACCENT and DE_OSX_ACT all "meaning" the same key.
  10. As the ELG reads in your keymap.c file, things might get messy. If you use one of the standard keymaps, all should go fine. But if you use some of the advanced stuff, things will go wrong. It will probably parse the keymap and most of the functions will show up fine, but some things might go missing. At the moment there is no support for the FN[1-9] keys! So if your keyboard uses those, please make sure that you replace the functionality with a macro.

Parsing the C files has its drawbacks, but the great advantage is that the ELG can read in existing keymaps! That works most of the time.

Getting Started

Usually it should be ok to get the latest release from the github page. Download the JAR file attached to the release and double-click it. If Java is installed properly, it should start up fine and you should see a screen with an empty layout:

starting it from commandline

If the above does not work, you can try to run the jar file from the commandline with java -jar ergodoxgenerator-1.0BETA2.jar - or whatever file you downloaded. If it still does not work, you will at least get a proper error message. On the github page you can create an issue for that.

Sometimes it might help not to use the JAR start functionality of Java, but to run the main class manually: java -cp ergodoxgenerator-1.0BETA2.jar de.caluga.ergodox.Main. If you get the same error as above, please create an issue at github.

compiling it yourself

If you're a Java guy, you can compile it yourself (and hopefully contribute to the project). It is a standard maven project, so if you cloned the repository to your local machine, running mvn install should compile everything. Your executable will then be in the directory target, called ergodoxgenerator-1.0-SNAPSHOT-shaded.jar. This one should be executable...

Defining the QMK-Sourcedir

At the bottom of the main window there is a button called set qmk sourcedir. Here you should set the root directory of the qmk sources. This is necessary for putting the layout in the proper place in the end. If you do not specify this directory, you will always need to navigate there manually.

Opening a keymap

If you defined the QMK-Sourcedir, the open dialog will start in the correct directory for the ergodox layouts. You choose the directory of the keymap, as all keymap files are actually called keymap.c.

When the keymap was parsed successfully you should get a display of the base layer of this layout.

UI explanation

Layer chooser

Usually an ergodox layout consists of several different layers. It is like hitting ALT on a normal keyboard: all keys do something else. But here you are more or less free to define as many layers as you want (well, not really - your keyboard has limited memory). To switch to the different layers, you need to press or hold a key (see below). When changing the layer in this combobox, the layout will be shown accordingly.

Adding, renaming, deleting layers

When creating a new layout, you only have one layer called base defined. The buttons at the top let you create new layers, rename them or delete them. Attention: deleting layers while layer toggles or macros still reference them will cause unexpected behavior. Also, you should not rename the base layer, as this might cause problems later.

LED indicators

On the top right of the window you can see the 3 LEDs the ergodox has. You can switch them on and off per layer by clicking them. This reflects the behaviour of the LEDs when the layout is flashed to your keyboard.

the keys

The main portion of the screen is filled with keys. These represent the corresponding keys on the ergodox keyboard. If you select a key, it will be marked (green border) and a more detailed description of the key is shown in the lower part of the window. Assigning functionality to a specific key is done via the context menu. Just right-click on a key, it will be marked, and then you can:

  • clear: well, this usually means that the keycode KC_TRNS is assigned to that key. This code states that the key should behave as defined in the "previous" layer (usually base). This can of course not work in the base layer!
  • assign a key: you assign a keycode to that specific key. The keycode is represented as text. All keycodes starting with KC_ are the "default" ones. There are also different keycodes for different locales or OSs, like DE_OSX_. You can also assign a combination of keys here, like "Shift-A" or "CMD-S". If you specify more than one modifier, a macro will be created for you!
  • assign layertoggle: this will create a key that can toggle a certain layer on and off.
  • assign layertoggle/type: this temporarily toggles to a specific layer as long as the corresponding key is held. When the key is released, you return to the previous layer. If you just type the key, it will issue a different keycode. For example, in my layout y issues CTRL when held.
  • assign macro: this is the most feature-rich option, see below. Attention: the macro functionality is not 100% implemented yet!

the legend

In the lower part of the window there are representations of some colors and keys. These state, for example, that a green-marked key is of type layertoggle/type and hence shows two pieces of information: the first line is the key being typed, the second line is the layer to switch to as long as the key is held.

Save Button

There you will be asked for the file to store the keymap to. This file should always be named keymap.c and should be stored in the QMK-Sourcedir at keyboards/ergodox-ez/keymaps/YOUR_KEYMAP, where YOUR_KEYMAP needs to be replaced with the name of your keymap.

When you want to store a completely new keymap, you need to create this directory yourself. You can do that from within the save dialog.

Save Img

This will create a PNG showing all layers. This is useful to add to your layouts if you want to publish them and have them merged into the official qmk repository, as it makes it easier for others to use your layout. Like this one:


Open a keymap. You need to choose the directory, not the file!

reopen last

If you are a bit like me, you usually work on your own layout again and again. The button "reopen last" will open the file you last opened or saved!


Creates a completely new layout. Attention: there is no "are you sure" question yet! If you hit that button, you'll end up with a new, empty layout!

Assigning keys

When assigning keys, you first need to choose the "prefix" of the keycode names. Usually the prefix is related to the locale. For example, "DE_OSX" is the German OSX version of some keycodes. All keycodes starting with "KC_" are the default (US layout) keycodes.

You can add a modifier to the key if you want. There are 2 different ways these modifiers might work: all at once (like SHIFT-A for a capital A), or the modifier when held, the key when typed! For example, when holding the key Y, you hold CTRL; when typing it, it is just a plain old y.

assigning a layer toggle

For this functionality you only need to define the layer you want to switch to. Quite simple. When flashed to your keyboard, the corresponding key will switch a specific layer on when hit, and switch it off again when hit again. If you switch to a layer in that way, it is probably a good idea to set the LEDs accordingly.

assigning layer toggle / type

As already mentioned above, this will create a key that temporarily switches to a layer as long as the key is held. If you only type the key (= press it shortly), you will just type a normal key.

assigning a macro

Assigning macros is quite easy: you can just choose one from the dropdown and hit "assign macro". Of course, this only works if the file you opened has some macros defined.

If you hit the "new Macro" or "edit macro" button, the Macro editor is shown. You can create, delete, or edit the macros in this layout.

The ErgodoxLayoutGenerator supports these kind of macros:

  • TypeMacro: This is a typing macro. Which means, it will send a series of keystrokes when the key is pressed
  • HoldKeyMacro: this is very similar to the above, but there is a different set of actions that can be defined, when the key is released. Example: When the key is pressed, CMD+SHIFT is pressed, A is typed. When released, CMD+Shift are also released.
  • LongPressTypeMacro: different behavior depending on whether the key is pressed and held or only typed shortly
  • layer toggle macro: well, use a macro to toggle a layer
  • Custom macro: custom c-code

ATTENTION: The macros only support keycodes that do not represent a combination of keys. For example, the keycode DE_OSX_QUOT is actually a replacement for LSFT(DE_OSX_HASH). This will not work in a macro; it will only send the keycode DE_OSX_HASH without the modifier. If you want your macro to work in your locale, you need to be aware whether this key is typed with a modifier or not.

All actions a macro can do, are the following:

  • DOWN(KEYCODE). press down a key
  • UP(KEYCODE). release the key
  • TYPE(KEYCODE). Type this key
  • W(199) . Wait some milliseconds, 199 in this case
  • I(1). Change the Interval and set it to 1 in this case

You just string these together, separated by commas, and you have your macro actions for the specified case.
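For example, a TypeMacro sending a capital A (a hypothetical example, using the standard US keycode names) could combine the actions like this:

    DOWN(KC_LSFT), TYPE(KC_A), UP(KC_LSFT)

Shift is pressed down, A is typed while it is held, and shift is released again.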

Additional notes

This little project was first of all built to run on my machine and make it easier for me to tweak my layout. So it is only tested on a Mac OSX machine; I am not sure how it will work on windows or linux.

There are still a lot of things missing:

  • gui layout could be improved a lot. Especially in the Macro-Workflow
  • Errorhandling is completely missing
  • some "are you sure"-Dialogs need to be added.
  • track whether you changed something on the layout and, if so, ask before quitting or erasing...
  • code quality is... prototype grade!

Compiling it

The latest versions of the ELG have a "compile" button. When you have saved your keymap and set the qmk-sourcedir, you can compile it. This is done by running the commands make clean and make in the ergodox-ez directory of the qmk sources.

This can only work if your system is capable of compiling the firmware. Please ensure that you have everything installed and in your path. Take a look at the qmk github page for more information on how to prepare your system.

When the compilation has finished, you can read the log output. Usually that is not very interesting if everything worked fine; on errors you can check closely what went wrong.

When this dialog is dismissed, you will be asked whether you want to copy the .hex file - the result of the compilation - to the keymap directory. This is useful if you want to submit your code to the official github project. When your keymap also includes a .hex file, everybody can just download and use it without having to deal with compilation and stuff.

If you just compile for yourself, hit "no" there...

little Update:

The release candidate is here... complete with support for a new type of macro called ToggleLayerAndHold, which will switch to a layer as long as the key is held or, if the key is only typed, toggle the layer as with the TG() function call.

Also, the latest version adds a list of all macros with short descriptions to the PNG file... helpful for documenting things.

And now there is a compile button, which will compile your layout if everything is set correctly! The resulting .hex file can then be uploaded to your Ergodox or Ergodox-EZ!

Bugs and issues

If you find anything not properly working and you think it is a bug in the ELG, do not hesitate and create an issue at the github page. Please provide the following:

  • the log output (for compiling and such). If it is related to the ELG, the log is not shown by default. If you want to see the output, you need to start the ELG in the shell / commandline using java -jar ergodoxgenerator-VERSION.jar
  • what happened
  • what did you want to happen
  • what did you try to accomplish
  • maybe the resulting keymap

category: global


2017-05-20 - Tags: jblog

hi ho,

As you all know, the software here is quite new, and of course I found some bugs I did not notice until I went live...

Unfortunately I cannot deploy without downtime yet - sorry!

I will try to keep it to a minimum!


category: Computer --> programming --> Java

Jblog - Java Blogging Software

2017-05-20 - Tags: java blog

I explained here why PHP and Wordpress are a pain in the ass and that I decided to build new software on a stack I knew.

As I am the only user of this software, I did not have to care about multi-user role/permission management, theming or plugins. Just a straightforward web application. But it should meet the following requirements:

  • java as a basis - the language I speak best 😄
  • Freemarker as templating engine - there are a lot of good examples for that
  • It should do everything based on markdown. Wordpress never did that in a proper way; it was always a hassle. I use Flexmark - available at github.
  • mongodb as Storage
  • and of course Morphium as POJO Mapper (download it here)
  • Spring Web and Spring MVC
  • multilanguage support (well, 2 , I only speak english and German)
  • simple whitelabeling

Well... er... that's it. You see, quite simple actually. I used Bootstrap 4 in the frontend although it is not officially out yet. But I thought that by the time I finished this blog software, Bootstrap 4 would be out... ok...

The application runs within a Tomcat behind an NGinx, which also does SSL termination. All more or less standard.

Of course there will be changes over time, but for a first try it is not too bad... 😏

category: Computer --> programming --> Java

New Version of Morphium available V3.1.3

2017-05-17 - Tags: java morphium mongodb

This release contains some minor fixes and improvements:

  • fixing some testcases, adding new tests
  • bug in storelist fixed, where it does not honor the disableBufferedWriterForThread
  • improving the aggregator, making it easier to use (no need to call end() on group anymore)
  • adding aggregator functions $stdDevPop and $stdDevSamp
  • caching fix for ID-Cache, cache projection fix
  • avoiding ConcurrentModificationException when flushing the buffered writer
  • minor improvements to performance...

You can either get it from github or via maven central:


category: Computer

New blogging software

2017-05-16 - Tags: java jblog security

originally posted on:

I complained about wordpress several times (for example here). I took that as an opportunity to put my software development skills to use and spend a weekend or two building new blogging software. Well, the result is this wonderful (well... I hope so) page here.

PHP sucks

To stop all PHP fanbois from whining: I do not like PHP very much, mostly because I do not know it very well. Hence, wordpress is also kind of a mystery to me. Getting the configuration right is a matter of luck, let alone getting PHP to do what you want in a secure way.

My blog was hacked several times during the last year, and this is pissing me off! So I wanted to use a Java-based solution, but it seems there is no simple, easy-to-use one out there.

so why not do it yourself?

Exactly. That was my thought as well. Could not be that complicated, could it? So I wanted to create blogging software that

  • has a simple technology stack
  • does not need complex plugin functionality. If it cannot do what I want it to do, I rewrite it
  • themes or designs... well... er... could be better, but I think this is ok
  • security - that is the point. I created the blogging software (called it jblog - not really creative) myself, and it is not as complex as wordpress. So we should be ok, I guess. But I know for sure that the standard wordpress exploits won't work anymore!
  • internationalization... also a topic. jblog only does 2 languages, German and English (I do not speak more, so I don't need more for my blogs).
  • whitelabeling. I have a couple of domains I wanted to reuse / revive with this project.
  • one administration: I did not want to create the same thing 3 times; I wanted to have the same thing look like 3 different things. Hence there should only be one administration page.


I am quite ok with what I accomplished here. Although it took longer than one weekend, it was finished quite fast. I like that.

But please: if some links do not work anymore, or some images look strange or are missing - I will fix this eventually 😏

the different blogs - this blog here

the private main blog. Will cover topics like hobby, drones, games, gadgets etc. - the java blog

There I will put all my opensource stuff, like morphium. And all the other programming tips and tricks I wrote over time. Hmm... seems like 'java blog' is not the right term...

This should be a business site anyways. So here I will put topics about my professional career, Scrum, processes etc.


Well, this is going to be tough. I cannot produce content for 3 full blogs; even filling one is quite hard. But I will try, and we will see how that works.

technical discussion

as mentioned above - not here, but at 😏

category: Java --> programming --> Computer

new release of Morphium V3.1.0

2016-11-02 - Tags:

sorry, no english version available

category: Computer

First Beta release of Morphium 3.0

2016-04-07 - Tags: java-2 mongodb morphium-2

sorry, no english version available

category: Java --> programming --> Computer

Logging in Java - example in Morphium

2016-03-02 - Tags:

sorry, no english version available

category: Java --> programming --> Computer

Update on Morphium 3.0

2016-02-25 - Tags: english java-2 morphium-2

sorry, no english version available

category: Computer

It happened - this site was hacked... partly.

2016-02-15 - Tags:

sorry, no english version available

category: Java --> programming --> Computer

Morphium V3.0ALPHA

2016-01-18 - Tags:

sorry, no english version available

category: Computer

Java8 and Vector - yes, you can use it again!

2015-11-23 - Tags:

A colleague of mine came to me today and mentioned that the use of ArrayList would cause problems in multithreaded environments - and he's right! The occasion was a discussion about some internal cache of our application, where a missing object here and there is not a big deal. BUT: what we found out with his help is the following:

We were experimenting with locks and synchronized a bit and found that locks are way slower than using synchronized - in Java 8, that is. There seems to be a significant performance optimization of synchronization in the VM itself. So we wanted to compare access to a list in a multithreaded environment and measure the timings. Here is the method we used:

    // threadCount is a volatile int field of the surrounding class
    private void testIt(final List lst) {
        long start = System.currentTimeMillis();
        int threads = 300;
        threadCount = 0;
        for (int i = 0; i < threads; i++) {
            final int j = i;
            new Thread() {
                public void run() {
                    for (int k = 0; k < 1000; k++) {
//                        synchronized (lst) {
                        try {
                            lst.add("hello " + j + " - " + k);
                        } catch (Exception e) {
                            // ignore - with unsynchronized lists, adds may fail
                        }
//                        }
                    }
                    threadCount++;
                }
            }.start();
        }
        while (threadCount < threads) ; // busy-wait until all writers are done

        long dur = System.currentTimeMillis() - start;
        System.out.println("write took : " + dur);
        System.out.println("Counting   : " + lst.size() + " missing: " + ((threads * 1000) - lst.size()));

        threadCount = 0;
        start = System.currentTimeMillis();
        for (int i = 0; i < threads; i++) {
            final int j = i;
            new Thread() {
                public void run() {
                    for (int k = 0; k < 1000; k++) {
//                        synchronized (lst) {
                        try {
                            if (j * 1000 + k < lst.size())
                                lst.get(j * 1000 + k);
                        } catch (Exception e) {
                            // ignore
                        }
//                        }
                    }
                    threadCount++;
                }
            }.start();
        }
        while (threadCount < threads) ; // busy-wait until all readers are done

        dur = System.currentTimeMillis() - start;
        System.out.println("read took : " + dur);
    }
The code does not do much: it creates 300 threads, each storing data into a shared List of a certain type. After that, we create 300 threads reading those values (if they are there, that is - when using non-thread-safe data structures, you will end up with missing data!).

Here is the result:

Testing with ArrayList
write took : 83
Counting   : 255210 missing: 44790
read took : 22

Testing with Vector
write took : 64
Counting   : 300000 missing: 0
read took : 89

Testing with LinkedList
write took : 38
Counting   : 249998 missing: 50002
read took : 13367

Everybody knows it is not a good idea to use Vector - it's old and sluggish, slow and not useful; do your own synchronization... This was obviously true up to JDK 1.7 - we ran the same test with JDK 1.7, and Vector was at least 3 times as slow as ArrayList or LinkedList (only faster in reading).

We were shocked to see that in Java 8, Vector is actually faster than ArrayList! Significantly! And thread-safe! It is even faster than the same code with a synchronized block around the list access (see the commented-out synchronized statements in the code above):

Testing with ArrayList (synchronized block)
write took : 191
Counting   : 300000 missing: 0
read took : 80

Testing with Vector
write took : 68
Counting   : 300000 missing: 0
read took : 79

Testing with LinkedList (synchronized block)
write took : 178
Counting   : 300000 missing: 0

Of course, this is not an in-depth analysis, as we do not know for sure what is causing this performance increase. But it really is reassuring - love to see that Vector got some love again ;-) So, in a Java8 environment you could actually use Vector without having to think about performance issues...

Update: I just compared the creation times (Default constructor) of the different types also, these are the timings:

Duration vector    : 31ms
Duration ArrayList : 2ms
Duration LinkedList: 3ms

So, what remains is: use Vector, if you do not create too many instances of it 😉

2nd update: I just want to make things about this test a bit clearer. People keep telling me that "this is no proper test, no warmup phase, no proper threading... yadda yadda".

you might be surprised, YES I KNOW!

Instead of discussing the idea, they discuss the toolset... facepalm. My fault - I thought this was clear from the beginning. Sorry for that.

This piece does not try to prove anything. It just shows that there is a significant performance increase from Java 7 to Java 8 when it comes to Vector. Also, as already mentioned above, this code was not created for this purpose; it is just a "byproduct".

The rest of this was to put it in perspective. Agreed, this was not very clear. It shows that when your data structure is not synchronized, you might end up with data being lost. The test quantifies this loss with numbers, which is also interesting - but a different topic.

The results of this piece of code are reproducible, which means the numbers might differ from run to run, but the comparison stays in the same ballpark. Again, this is not a proper micro benchmark! That is better done with something else, I agree.

So the goal was never to prove something; it is only a hint that even Vector might be worth trying. It is still around, right? Not marked deprecated, and not used because it is "slow" - which is maybe not true to the extent it used to be.

But, to make things clear: further tests (done with the JMH testing framework) suggest that Collections.synchronizedList(new ArrayList<>()) often returns a better performing list than Vector.

But again: this whole thing here just wants to show that the huge performance loss you got when using Vector in JDK1.7 and before is now a bit smaller... and in some cases even gone!
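To illustrate the thread-safety side of all this, here is a minimal, self-contained sketch (class and method names are my own invention) showing that both Collections.synchronizedList and Vector keep every element under concurrent writes - unlike a plain ArrayList, which may lose some:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Vector;

public class SyncListDemo {
    // fills the given list from several threads and returns its final size
    static int fill(final List<String> lst, int threads, final int perThread)
            throws InterruptedException {
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread() {
                public void run() {
                    for (int k = 0; k < perThread; k++) lst.add("x");
                }
            };
            ts[i].start();
        }
        for (Thread t : ts) t.join(); // wait for all writers to finish
        return lst.size();
    }

    public static void main(String[] args) throws InterruptedException {
        // both synchronized variants serialize access, so no adds get lost
        System.out.println(fill(Collections.synchronizedList(new ArrayList<String>()), 10, 1000)); // 10000
        System.out.println(fill(new Vector<String>(), 10, 1000)); // 10000
    }
}
```

Using Thread.join() instead of the busy-wait counter from the benchmark above keeps the sketch short; the point is only that both wrappers end up with the full 10000 elements.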

category: Computer

Stephans Blog wieder online...

2015-06-12 - Tags: allgemein blog

originally posted on:

no english version available yet

Das war stressig. Zum Umzug kam noch hinzu, dass mein Server die Grätsche gemacht hat. Ich musste neu installieren. Was ja – dank Backups – eigentlich kein allzu großer Aufwand wäre, hätte ich nicht vergessen, ein Backup von der Datenbank zu machen… Deswegen jetzt der neue Start des alten Blogs ;-)

category: Computer

Feature Release Morphium 2.2.16

2015-01-22 - Tags: java-2 morphium-2

sorry, no english version available

category: MongoDB --> programming --> Computer

Additional Feature Release V2.2.10 morphium

2014-09-29 - Tags:

sorry, no english version available

category: MongoDB --> programming --> Computer

Feature Release of Morphium V2.2.9

2014-09-28 - Tags:

sorry, no english version available

category: Computer --> programming --> MongoDB --> morphium

Morphium Doku V3.0

2014-09-05 - Tags: morphium java mongo

want help translating / documenting / coding? Conctact us on github or via slack


Morphium Documentation

This documentation refers to Morphium version [%morphium_version] and mongodb [%mongodb_version]. It follows "MultiMarkdown" and was created using the MultiMarkdownComposer.

HTML Version here: MorphiumDoku If you just want to start right now, read [quick start]!

Ideas and concepts

When we started using MongoDB, there was no fully capable POJO mapper available. The only thing that came close to usable was Morphia (which is now developed by MongoDB). Unfortunately, Morphia had some issues and lacked some features we wanted, besides the usual ones (fast mapping, a reliable query interface and so on):

  • Thread safety
  • cluster awareness
  • declarative caching
  • profiling support
  • support for partial updates
  • reference support incl. lazy loading of references
  • adaptable API (need to implement special POJO Mappings, Cache implementation change etc)
  • Cache synchronization in cluster
  • Validation
  • Declarative Index specification
  • Aggregation support

At that time there was nothing available providing all those features, nor anything we could use as a basis to build them (we tried to implement them on top of Morphia, but its architecture was not built for customization).

So, we started creating our own implementation and called it "Morphium" to honor the project "Morphia" which was the best around at that time.

But Morphium is a completely new project, built totally from scratch. Even the POJO mapper is our own development (although there were some available at that point), because we had some special needs for Morphium's mapping.

The mapping takes place on a per-type basis. That means that, unless configured otherwise, the data of all objects of a certain type will be stored in a corresponding collection.

In addition to that, the mapping is aware of the object hierarchy and takes inherited annotations and settings into account.

Usually Morphium replaces camel case by underscore-separated strings. So an object of type MyEntity would be stored in the collection my_entity. This behaviour can be configured as you like; you could even store all objects in one collection (see [Polymorphism]).
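
To illustrate the naming convention, here is a small sketch of the camel-case-to-underscore translation. This is an illustrative re-implementation, not morphium's actual code:

```java
// Sketch of the default name translation described above
// (illustration only, not morphium's implementation).
public class NameConverter {
    // MyEntity -> my_entity, aStringValue -> a_string_value
    public static String toUnderscore(String camel) {
        StringBuilder sb = new StringBuilder();
        for (char c : camel.toCharArray()) {
            if (Character.isUpperCase(c)) {
                // separate words with an underscore, except at the start
                if (sb.length() > 0) sb.append('_');
                sb.append(Character.toLowerCase(c));
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }
}
```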

Changes in Version 3.0


Morphium 3.0 brings a lot of improvements and changes, most of them not really visible to the user, but unfortunately some of them make V3.x incompatible with V2.x.

The changes were triggered by the recent mongodb java driver update to 3.0, which brings a whole new API. This API is (unfortunately) also not backward compatible[^not quite true, the driver actually contains both versions, but the old API is usually marked deprecated]. This made it hard to incorporate the changes of the official driver into morphium. Some of the changes also made it impossible to implement certain morphium features as before. So the current implementation of morphium uses both old and new API - which will break eventually.

The next step was to become more independent from the driver, as those changes caused problems almost throughout the whole code of morphium. So, starting with V3.0 of morphium, the driver is encapsulated deep within morphium.

Unfortunately, even the basic document representation changed[^the old version used BasicDBObject, the new one uses Document]. The two are very similar, but the new one is a whole new implementation of BSON[^binary json - details can be found here].

Also, we had some dependency problems in maven, causing several versions of the mongodb driver to be installed on production - which then caused some weird effects, most of them not really good ones ;-)

This made us reduce all dependencies on the mongodb driver to a minimum - actually it is only used in the MorphiumDriver implementation for the official mongodb driver. But that also meant we needed to get rid of all usages of ObjectId and BasicDBDocument and move the remaining usages into the driver implementation within morphium.

The question was: do we need to introduce some new object type for representing a Map&lt;String,Object&gt;? We thought no, so we changed the whole code in morphium to internally use only the standard Java 8 API.

Yes, that is one more change: since Morphium 3.0 we're running on Java 8.


Now that you know the motivation, these are the changes:

  • Driver encapsulated and configurable - you can now implement your own driver for usage with morphium
  • no usage of MongoDb classes, replaced by type MorphiumId and simple Map<String,Object> - this might actually break your code!
  • (soon) MongoDB Dependency in maven will be set to be provided, so that you can decide, which Version of the driver you want to use (or none...)
  • Morphium 3.0 includes some own implementation of drivers (mainly for testing purpose):
    • Driver: This is the Implementation of MorphiumDriver using the official Mongodb driver (V3.x)
    • InMemoryDriver: Not connecting to any mongo instance, just storing into memory. Good for testing. Does not support Aggregation!
    • SingleConnectDirectDriver: Just connecting to a master node, no failover. Useful if you do not have a replicaset
    • SingleConnectThreaddedDriver: Same as above, but uses a thread for reading the answers - slightly better performance in multithreaded environments, but only useful if you don't run a replicaSet
    • MetaDriver: A full featured implementation of the MorphiumDriver interface, which can be used as a replacement for the mongodb driver implementation. It uses a pool of SingleConnectThreaddedDrivers to connect to mongodb.
  • Many changes in the internals
  • in references you can now specify the collection the reference should point to.
  • improvements in the internal caches, using the new improved features and performance of Java8[^see also here]
  • complete rewrite of the bulk operation handling
  • code improvements on many places, including some public interfaces (might break your code!)

quick start

Simple example on how to use Morphium:

First you need to create data to be stored in Mongo. This should be some simple class like this one here:

    @Entity
    public class MyEntity {
        @Id
        private MorphiumId myId;
        private int aField;
        private String other;
        private long property;
        //....  getter & setter here
    }
This given entity has a couple of fields which will be stored in Mongo according to their names. Usually the collection name is also derived from the ClassName (as most things in Morphium, that can be changed).

The names are usually translated from camel case (like aField) into lowercase with underscores (like a_field). This is the default behavior, but can be changed according to your needs.

In mongo the corresponding object would be stored in a collection named my_entity and would look like this:

    {
      _id: ObjectId("53ce59864882233112aa018df"),
      a_field: 123,
      other: "value"
    }
By default, null values are not serialized to mongo. So in this example, there is no field "property".

The next example shows how to store and access data from mongo:

    //creating connection
    MorphiumConfig cfg=new MorphiumConfig();
    cfg.setHostSeed("localhost:27018", "mongo1", "mongo3.home");
    //connect to a replicaset
    //if you want to connect to a sharded environment, you'd add the addresses of
    //the mongos-servers here
    //you can also specify only one of those nodes,
    //Morphium (or better: the mongodb driver) will figure out the others
    Morphium morphium=new Morphium(cfg);
    //Create an entity and store it
    MyEntity ent=new MyEntity();
    morphium.store(ent);
    //the query object is used to access mongo
    Query<MyEntity> q=morphium.createQueryFor(MyEntity.class);
    List<MyEntity> lst=q.asList();
    //or use the iterator
    for (MyEntity e : q.asIterable(100, 2)) {
        // iterate in windows of 100 objects
        // 2 windows lookAhead
    }
This gives a short glance at how Morphium works and how it can be used. But Morphium is capable of many more things...


Morphium is built to be very flexible and can be used in almost any environment. So the architecture needs to be flexible and sustainable at the same time. Hence it's possible to use your own implementation for the cache if you want to.

There are four major components of Morphium:

  1. the Morphium Instance: This is your main entry point for interacting with Mongo. Here you create queries and write data to mongo. All writes are forwarded to the configured writer implementation, all reads are handled by the Query object.
  2. Query-Object: you need a query object to do reads from mongo. It is usually created by calling Morphium.createQueryFor(Class<T> cls). With a Query, you can easily get data from the database or have some things changed (update) and alike.
  3. the Cache: For every request that should be sent to mongo, Morphium first checks whether this collection is to be cached and whether there is already a result stored for the corresponding request.
  4. The Writers: there are 3 different types of writers in Morphium: the default writer (MorphiumWriter) writes directly to the database, waiting for the response; the buffered writer (BufferedWriter) does not write directly - all writes are stored in a buffer which is then processed as a bulk; the last type is the asynchronous writer (AsyncWriter), which is similar to the buffered one, but starts writing immediately - only asynchronously. Morphium decides which writer to use depending on the configuration and the annotations of the given entities. But you can always make a call asynchronous just by adding an AsyncCallback implementation to your request.
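
The buffered-writer idea from point 4 can be sketched conceptually like this. It is an illustration of the idea only, not morphium's implementation; all class and method names here are made up:

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch of a buffered writer: writes are collected in a
// buffer and flushed as one bulk once a threshold is reached.
// Illustration only - not morphium's BufferedWriter.
public class BufferedWriteSketch {
    private final List<Object> buffer = new ArrayList<>();
    private final int flushSize;
    private int bulksSent = 0;

    public BufferedWriteSketch(int flushSize) { this.flushSize = flushSize; }

    public void store(Object entity) {
        buffer.add(entity);                   // do not write directly
        if (buffer.size() >= flushSize) flush();
    }

    public void flush() {
        if (buffer.isEmpty()) return;
        bulksSent++;                          // here the real writer would send one bulk operation
        buffer.clear();
    }

    public int getBulksSent() { return bulksSent; }
    public int getPending()   { return buffer.size(); }
}
```

The real BufferedWriter additionally flushes on a timeout (see writeBufferTime in the configuration below), not only on buffer size.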

Simple rule when using Morphium: you want to read -> use the Query object. You want to write -> use the Morphium object.

There are some additional features built upon this architecture:

  • messaging: Morphium has its own messaging system.
  • cache synchronization: synchronize caches in a clustered environment. Uses messaging.
  • custom mappers: you can tell Morphium how to map a certain type from and to mongodb. For example, there is a "custom" mapper implementation for mapping BigInteger instances to mongodb.
  • every one of those implementations can be changed: it is possible to set the class name for the BufferedWriter to a custom-built one (in MorphiumConfig). Also you could replace the object mapper with your own implementation by implementing the ObjectMapper interface and telling morphium which class to use instead. In short, these things can be changed in morphium / MorphiumConfig:
    • MorphiumCache
    • ObjectMapper
    • Query
    • Field
    • QueryFactory
    • Driver (> V3.0)
  • Object Mapping from and to Strings (using the object mapper)

Configuring Morphium

First let's have a look at how to configure Morphium. As you already saw in the example in the last chapter, the configuration of Morphium is encapsulated in one object of type MorphiumConfig. This object has reasonable defaults for all settings, so using it as described above should just work.

Configuration Options

There are a lot of settings and customizations you can do within Morphium. Here we discuss all of them:

  • loggingConfigFile: can be set, if you want Morphium to configure your log4j for you. Morphium itself has a dependency to log4j (see Dependencies).
  • camelCaseConversion: if set to false, the names of your entities (classes) and fields won't be converted from camelCase to underscore-separated strings. Default is true (conversion enabled)
  • maxConnections: Maximum Number of connections to be built to mongo, default is 10
  • houseKeepingTimeout: the timeout in ms between cache housekeeping runs. Defaults to 5 sec
  • globalCacheValidTime: how long cache entries are valid by default, in ms. Defaults to 5 sec
  • writeCacheTimeout: how long to pause between buffered writes in ms. Defaults to 5 sec
  • database: Name of the Database to connect to.
  • connectionTimeout: Set a value here (in ms) to specify how long to wait for a connection to mongo to be established. Defaults to 0 (⇒ infinite)
  • socketTimeout: how long to wait for sockets to be established, defaults to 0 as well
  • socketKeepAlive: if true, use TCP-Keepalive for the connection. Defaults to true
  • safeMode: Use the safe mode of mongo when set to true
  • globalFsync, globalJ: set fsync (file system sync) and j (journal) options. See the mongodb documentation for more information
  • checkForNew: This is something interesting related to the creation of ids. Usually ids in mongo are of type ObjectId. Anytime you write an object with an _id of that type, the document is either updated or inserted, depending on whether the id is already present or not. If it is inserted, the newly created ObjectId is returned and added to the corresponding object. But if the id is not of type ObjectId, this mechanism will fail: no ObjectId is created. This is no problem for newly created objects, but on updates you might not be sure whether the object actually is new or not. If this option is set to true, Morphium will check upon storing whether the object to be stored is already available in the database, and update it if so. Attention: Morphium 3.0 removed the mongodb driver classes from the codebase, hence there is no ObjectId for POJOs anymore. You should replace these with the new MorphiumId.
  • writeTimeout: this timeout determines how long to wait until a write to mongo has to be finished. Default is 0 ⇒ no timeout
  • maximumRetriesBufferedWriter: when writing buffered, how often to retry writing the data before an exception is thrown. Default is 10
  • retryWaitTimeBufferedWriter: Time to wait between retries
  • maximumRetriesWriter, maximumRetriesAsyncWriter: same as maximumRetriesBufferedWriter, but for direct storage or asynchronous store operation.
  • retryWaitTimeWriter, retryWaitTimeAsyncWriter: similar to retryWaitTimeBufferedWriter, but for the according writing type
  • globalW: W sets the number of nodes to have finished the write operation (according to your safe and j / fsync settings)
  • maxWaitTime: Sets the maximum time that a thread will block waiting for a connection.
  • writeBufferTime: Timeout for buffered writes. Default is 0
  • autoReconnect: if set to true connections are re-established, when lost. Default is true
  • maxAutoReconnectTime: how long to try to reconnect (in ms). Default is 0⇒ try as long as it takes
  • blockingThreadsMultiplier: There is a max number of connections to mongo, this factor determines the maximum number of threads that may be waiting for some connection. If this threshold is reached, new threads will get an Exception upon access to mongo.
  • mongoLogin,mongoPassword: User Credentials to connect to mongodb. Can be null.
  • mongoAdminUser, mongoAdminPwd: Credentials to do admin tasks, like get the replicaset status. If not set, use mongoLogin instead.
  • acceptableLatencyDifference: Latency between replicaset members still acceptable for reads.
  • autoValuesEnabled: Morphium supports automatic values being set in your POJOs. These are configured by annotations (@LastChange, @CreationTime, @LastAccess, ...). If you want to switch this off globally, you can set it in the config. Very useful for test environments, which should not tamper with production data
  • readCacheEnabled: Globally disable readcache. This only affects entities with a @Cache annotation. By default it's enabled.
  • asyncWritesEnabled: Globally disable async writes. This only affects entities with an @AsyncWrites annotation
  • bufferedWritesEnabled: Globally disable buffered writes. This only affects entities with a @WriteBuffer annotation
  • defaultReadPreference: whether to read from primary, secondary or nearest by default. Can be defined with the @ReadPreference annotation for each entity.
  • replicaSetMonitoringTimeout: time interval to update replicaset status.
  • retriesOnNetworkError: if you happen to have an unreliable network, maybe you want to retry writes / reads upon network error. This settings sets the number of retries for that case.
  • sleepBetweenNetworkErrorRetries: set the time to wait between network error retries.

In addition to those settings describing the behavior of Morphium, you can also define custom classes to be used internally:

  • omClass: here you specify the class that should be used for mapping POJOs (your entities) to DBObject. By default it uses the ObjectMapperImpl. Your custom implementation must implement the interface ObjectMapper.
  • iteratorClass: set the iterator implementation to use. By default MorphiumIteratorImpl is being used. Your custom implementation must implement the interface MorphiumIterator
  • aggregatorClass: this is Morphium's representation of the aggregator framework. This can be replaced by a custom implementation if needed. Implements the Aggregator interface
  • queryClass and fieldImplClass: these are used for queries. If you want to take control over how queries are built in Morphium and how fields within queries are represented, you can replace those two with your custom implementations.
  • cache: Set your own implementation of the cache. It needs to implement the MorphiumCache interface. Default is MorphiumCacheImpl. You need to specify a fully configured cache object here, not only a class object.
  • driverClass: Set the driver implementation you want to use. This is a string, so set the class name here, e.g. morphiumconfig.setDriverClass(MetaDriver.class.getName())

Morphium Config Directly

The most straightforward way of configuring Morphium is using the object directly: just call the getters and setters according to the variable names given above (like setMaxAutoReconnectTime()).

The minimum configuration is explained above: you only need to specify the database name and the host(s) to connect to. All other settings have sensible defaults, which should work for most cases.

Morphium Config From Property File

The configuration can be stored in and read from a Properties object.

MorphiumConfig.fromProperties(Properties p): call this method to set all values according to the given properties. You can also pass the properties to the constructor to have it configured.

To get the properties for the current configuration, just call asProperties() on a configured MorphiumConfig Object.

Here is an example property-file:

hostSeed=localhost\:27017, localhost\:27018, localhost\:27019

The minimal property file would define only hosts and database. All other values would be defaulted.
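
Such a minimal property set can also be built in code. The values below (hosts and database name) are purely illustrative; the commented-out line shows where MorphiumConfig.fromProperties would consume the object when morphium is on the classpath:

```java
import java.util.Properties;

// Building the minimal property set described above in code.
// Host names and database name are illustrative assumptions.
public class ConfigProps {
    public static Properties minimal() {
        Properties p = new Properties();
        p.setProperty("hostSeed", "localhost:27017, localhost:27018, localhost:27019");
        p.setProperty("database", "mydb"); // assumed name, for illustration only
        // MorphiumConfig cfg = MorphiumConfig.fromProperties(p);  // requires morphium on the classpath
        return p;
    }
}
```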

If you want to specify classes in the config (like the query implementation), you need to specify the fully qualified class name, e.g. de.caluga.morphium.customquery.QueryImpl

Morphium Config From Json File

The standard toString() method of MorphiumConfig creates a JSON string representation of the configuration. To set all configuration options from a JSON string, just call createFromJson.


Singleton Access

In some cases it's more convenient to use a singleton instance to access Morphium. You don't need to implement a thread safe Morphium singleton yourself, as Morphium already provides one.

The MorphiumSingleton is configured similar to the normal Morphium instance. Just set the config and you're good to go.

    MorphiumConfig config=new MorphiumConfig();
    //..configure it here
    MorphiumSingleton.setConfig(config);
    Morphium m=MorphiumSingleton.get();

Connection to mongo and initialization of Morphium are done at the first call of get.

POJO Mapping

When talking about POJO mapping, we say we marshall a POJO into a mongodb representation, or we unmarshall the mongodb representation into a POJO.

Marshalling and unmarshalling are of utmost importance for the functionality. They need to take care of the following things:

  • un/marshall every field. Easy if it's a primitive datatype: map it to the corresponding type in Mongo - mostly done by the mongodb java driver (or, since 3.0, the MorphiumDriver implementation)
  • when it comes to lists and maps, examine every value. Maps may only have strings as keys (a mongodb limitation), un/marshall the values
  • when a field contains a reference to another entity, take that into account: either store the id of the referenced entity, or create a lazy-loading proxy if configured
  • the POJO transformation needs to be 100% thread safe (Morphium itself is heavily multithreaded)

The ObjectMapper is the core of Morphium. It's used to convert every entity you want to store into a mongodb document (the java representation is a DBObject). Although it's one of the key things in Morphium, it's still possible to use your own implementation (see chapter [Configuring Morphium]).
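
What an object mapper conceptually does can be sketched with a few lines of reflection. This is purely illustrative, not morphium's ObjectMapperImpl; note how null values are skipped (as described above) and field names are translated:

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Conceptual marshalling sketch: walk the fields, translate the names,
// skip nulls. Illustration only - not morphium's ObjectMapperImpl.
public class MarshallSketch {
    public static Map<String, Object> marshall(Object pojo) {
        Map<String, Object> doc = new LinkedHashMap<>();
        try {
            for (Field f : pojo.getClass().getDeclaredFields()) {
                f.setAccessible(true);
                Object value = f.get(pojo);
                if (value == null) continue;             // null fields are omitted
                doc.put(toUnderscore(f.getName()), value);
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
        return doc;
    }

    static String toUnderscore(String camel) {
        return camel.replaceAll("([A-Z])", "_$1").toLowerCase();
    }

    // hypothetical entity, just for the usage example
    public static class Sample {
        public String aField = "x";
        public String other = null;
    }
}
```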

Querying Mongo

This is done by using the Query object. You need to create one for every entity you want to issue a query for. You could create one yourself, but the easiest way of doing so is calling the method .createQueryFor(Class class) in Morphium.

After that, querying is very fluent. You add one condition at a time; by default all conditions are AND-associated:

    Query q=morphium.createQueryFor(MyEntity.class);
    q=q.f("counter").lt(100).f("name").ne("test").f("other").eq("value");

The f method stands for "field" and returns a Morphium internal representation of mongo fields. There you can call the operators, in our case eq for equals, lt for less than and ne for not equal. There are a lot more operators you might use; all of them are defined in the MongoField interface:

    public Query all(List vals);
    public Query eq(Object val);
    public Query ne(Object val);
    public Query size(int val);
    public Query lt(Object val);
    public Query lte(Object val);
    public Query gt(Object val);
    public Query gte(Object val);
    public Query exists();
    public Query notExists();
    public Query mod(int base, int val);
    public Query matches(Pattern p);
    public Query matches(String ptrn);
    public Query type(MongoType t);
    public Query in(Collection vals);
    public Query nin(Collection vals);

    /**
     * return a sorted list of elements around point x,y
     * spherical distance calculation
     * @param x pos x
     * @param y pos y
     * @return the query
     */
    public Query nearSphere(double x, double y);

    /**
     * return a sorted list of elements around point x,y
     * @param x pos x
     * @param y pos y
     * @return the query
     */
    public Query near(double x, double y);

    /**
     * return a sorted list of elements around point x,y
     * spherical distance calculation
     * @param x pos x
     * @param y pos y
     * @return the query
     */
    public Query nearSphere(double x, double y, double maxDistance);

    /**
     * return a sorted list of elements around point x,y
     * @param x pos x
     * @param y pos y
     * @return the query
     */
    public Query near(double x, double y, double maxDistance);

    /**
     * search for entries with geo coordinates within the given rectangle - x,y upper left, x2,y2 lower right corner
     */
    public Query box(double x, double y, double x2, double y2);
    public Query polygon(double... p);
    public Query center(double x, double y, double r);

    /**
     * same as center() but uses spherical geometry for distance calc.
     * @param x - pos x
     * @param y - y pos
     * @param r - radius
     * @return the query
     */
    public Query centerSphere(double x, double y, double r);

    public Query getQuery();
    public void setQuery(Query q);
    public ObjectMapper getMapper();
    public void setMapper(ObjectMapper mapper);
    public String getFieldString();
    public void setFieldString(String fld);
Query definitions can be in one line or, as above, spread over several lines. Actually the current query object is changed with every call of an f(...).operator combination. The current object is always returned; to make the code more legible and understandable, you should assign the query as shown above. This makes clear: "the object changed".

If you need an "empty" query of the same type, you can call the method q. This method will return an empty query of the same type, using the same mapper etc., but without any conditions - just plain empty.

As already mentioned, the query by default creates AND-queries. If you need to create an OR-query, you can do so using the or method of the query object.

or takes a list of queries as argument, so a query might be built this way:

    Query q=morphium.createQueryFor(MyEntity.class);
    q.or(q.q().f("counter").lte(10), q.q().f("name").eq("Morphium"));

This would create an OR-query asking for all "MyEntities" that have a counter less than or equal to 10 OR whose name is "Morphium". You can add as many or-queries as you like. OR-queries can also be combined with AND-queries:

    Query q=morphium.createQueryFor(MyEntity.class);
    q=q.f("counter").gt(2);
    q.or(q.q().f("counter").lte(10), q.q().f("name").eq("Morphium"));

In that case, the query would be something like: counter is greater than 2 AND (counter is less than or equal to 10 OR name is "Morphium").

Combining and and or-queries is also possible, although the syntax would look a bit unfamiliar:

    Query q=morphium.createQueryFor(MyEntity.class);
    q=q.f("counter").lt(100).f("counter").mod(3, 0).f("value").eq("v");

This would create a query returning all entries that have a counter of less than 100 AND where the modulo to base 3 of the counter value equals 0 AND the value of the field value equals "v".

Quite complex, eh?

Well, there is more to it... it is possible to create a query using a "where" string: there you can add JavaScript code for your query. This code will be executed on the mongodb node executing your query:

    Query q=morphium.createQueryFor(MyEntity.class);
    q=q.where("this.counter > 10");

Attention: you can use JavaScript code in that where clause, but you cannot access the db object there. This was changed when switching to MongoDB 2.6 with the V8 JavaScript engine.

Declarative Caching

Using the @Cache annotation, you can define cache settings on a per type (= class) basis. This is done totally in background, handled by Morphium 100% transparently. You just add the annotation to your entities and you're good to go. See [Cache] and [Cache Synchronization]

Cache Synchronization

Cache synchronization was already mentioned above. The system of cache synchronization needs a messaging subsystem (see [Messaging] below). You just need to start the cache synchronizer yourself, if you want caches to be synchronized.

CacheSynchronizer cs=new CacheSynchronizer(morphium);

If you want to stop your cache synchronizing process, just call cs.setRunning(false); . The synchronizer will stop after a while (depending on your cache synchronization timeout).

By default no cache synchronizer is running.
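
The mechanism behind cache synchronization can be sketched conceptually: a write on one node publishes a "clear cache" message, and every node's synchronizer clears its local cache for that collection on receipt. This is an illustration only; morphium's CacheSynchronizer uses the mongodb-backed messaging system rather than the direct node references shown here:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Conceptual sketch of cache synchronization across cluster nodes.
// Illustration only - not morphium's CacheSynchronizer.
public class CacheSyncSketch {
    private final List<CacheSyncSketch> clusterNodes = new ArrayList<>(); // stand-in for messaging
    private final Map<String, List<Object>> localCache = new HashMap<>();

    public void join(CacheSyncSketch other) { clusterNodes.add(other); }

    public void cachePut(String collection, Object result) {
        localCache.computeIfAbsent(collection, k -> new ArrayList<>()).add(result);
    }

    public boolean isCached(String collection) { return localCache.containsKey(collection); }

    // called after a write: clear the own cache and notify the other nodes
    public void writeHappened(String collection) {
        localCache.remove(collection);
        for (CacheSyncSketch node : clusterNodes) node.onClearMessage(collection);
    }

    void onClearMessage(String collection) { localCache.remove(collection); }
}
```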

Cluster Awareness

Morphium is cluster aware in the sense that it periodically polls the state of the replicaset in order to know which nodes are alive and need to be taken into account. (The Java driver does the same; this information has now moved into the morphium driver implementation, so the double check is no longer necessary.)

Morphium also has support for clusters using it, like a cluster of tomcat instances. In this case, Morphium is able to synchronize the caches of those cluster nodes.


Messaging

Morphium supports a simple messaging system which uses mongodb as storage. The messaging is more or less transactional (to the extent that mongo allows) and works multithreaded. To use messaging you only need to instantiate a Messaging instance. You may add listeners to this instance to process messages, and you may send messages through it.

Messaging is 100% multithreaded and thread safe.

Bulk Operations Support

All operations regarding lists (list updates, writing lists of objects, deleting lists of objects) are implemented using the bulk operations available since mongodb 2.6. This gives a significant speed boost and adds reliability.

Actually, all method calls to mongo support a list of documents as argument. This means you can send a list of updates, a list of documents to be inserted, a list of whatever. The BulkOperationContext only gathers those requests on the java side, so that they can be sent in one call instead of several.

With Morphium 3.0 an own implementation of this bulk operation context was introduced.


You can add a number of Listeners to Morphium in order to be informed about what happens, or to influence the way things are handled.

  • MorphiumStorageListener: will be informed about any write process within Morphium. You can also veto if necessary. Works similar to [Lifecycle] methods, but for all entities.
  • CacheListener: Can be added to Morphium cache, will be informed about things to be added to cache, or if something would be updated or cleared. In all cases, a veto is possible.
  • ShutdownListener: if the system shuts down, you can be informed using this listener. It's not really Morphium specific.
  • ProfilingListener: will be informed about any read or write access to mongo and how long it took. This is useful if you want to track long requests or index misses.

In addition to that, almost all calls to mongo can be done asynchronously - either by defining that in the @Entity annotation or by defining it directly.

That means an asList() call on a query object can take an AsyncCallback as argument, which will be called when the result is ready (which also means the asList call will return null; the result will be passed on in the callback).
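
The shape of such an asynchronous call can be sketched in plain Java. Interface and names here are illustrative, not morphium's actual AsyncCallback signature; in morphium the callback fires later, from a background thread, once the result arrives:

```java
import java.util.List;

// Sketch of the asynchronous-call pattern: the method returns null
// immediately, the result is only delivered via the callback.
// Illustration only - not morphium's actual callback interface.
public class AsyncSketch {
    public interface ResultCallback<T> {
        void onResult(List<T> result);
    }

    public static <T> List<T> asList(List<T> data, ResultCallback<T> cb) {
        cb.onResult(data);   // morphium would invoke this from a background thread
        return null;         // the synchronous return value carries nothing
    }
}
```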

Support for Aggregation

Morphium has support for aggregation in mongo. The aggregation framework was introduced in mongo V2.6 as an alternative to MapReduce (which is still available). We implemented support for the new aggregation framework in Morphium. Up until now, there was no request for MapReduce - if you need it, please let me know.

Here is how the aggregation framework is used from Morphium (more info on the aggregation framework can be found at MongoDB):

This is the unit test for aggregation support in Morphium:

    @Test
    public void aggregatorTest() throws Exception {
        Aggregator<UncachedObject, Aggregate> a = MorphiumSingleton.get().createAggregator(UncachedObject.class, Aggregate.class);
        assert (a.getResultType() != null);
        //reduce the input data
        a = a.project("counter");
        a = a.match(MorphiumSingleton.get().createQueryFor(UncachedObject.class).f("counter").gt(100));
        //sort - needed for $first/$last
        a = a.sort("counter");
        //limit the data
        a = a.limit(15);
        //group by - in this case ALL, but could be any expression
        a = a.group("all").avg("schnitt", "$counter").sum("summe", "$counter").sum("anz", 1).last("letzter", "$counter").first("erster", "$counter").end();
        //project the result
        HashMap<String, Object> projection = new HashMap<>();
        a = a.project(projection);
        List<DBObject> obj = a.toAggregationList();
        for (DBObject o : obj) {
            log.info("Object: " + o.toString());
        }
        List<Aggregate> lst = a.aggregate();
        assert (lst.size() == 1) : "Size wrong: " + lst.size();
        log.info("Sum  : " + lst.get(0).getSumme());
        log.info("Avg  : " + lst.get(0).getSchnitt());
        log.info("Last : " + lst.get(0).getLast());
        log.info("First: " + lst.get(0).getFirst());
        log.info("count: " + lst.get(0).getAnzahl());
        assert (lst.get(0).getAnzahl() == 15) : "did not find 15, instead found: " + lst.get(0).getAnzahl();
    }

    public static class Aggregate {
        private double schnitt;
        private long summe;
        private int last;
        private int first;
        private int anzahl;
        @Property(fieldName = "_id")
        private String theGeneratedId;

        public int getAnzahl() {
            return anzahl;
        }

        public void setAnzahl(int anzahl) {
            this.anzahl = anzahl;
        }

        public int getLast() {
            return last;
        }

        public void setLast(int last) {
            this.last = last;
        }

        public int getFirst() {
            return first;
        }

        public void setFirst(int first) {
            this.first = first;
        }

        public double getSchnitt() {
            return schnitt;
        }

        public void setSchnitt(double schnitt) {
            this.schnitt = schnitt;
        }

        public long getSumme() {
            return summe;
        }

        public void setSumme(long summe) {
            this.summe = summe;
        }

        public String getTheGeneratedId() {
            return theGeneratedId;
        }

        public void setTheGeneratedId(String theGeneratedId) {
            this.theGeneratedId = theGeneratedId;
        }
    }
The class Aggregate is used to hold the result of the aggregation.


Validation

If javax.validation can be found in the class path, you can validate the values of your entities using the validation annotations. Those validations take place before the object is saved.

Technically it's implemented as a JavaxValidationStorageListener which is a storage listener and vetoes the write operation if validation fails.

An example on how to use validation:

    @Entity
    public class ValidationTestObject {
        @Id private MorphiumId id;
        private int theInt;
        @NotNull
        private Integer anotherInt;
        private Date whenever;
        @Pattern(regexp = "m[ueü]nchen")
        private String whereever;
        @Size(min = 2, max = 5)
        private List friends;
        @Email
        private String email;
    }

Those validation rules will be enforced upon storing the corresponding object:

    @Test(expected = ConstraintViolationException.class)
    public void testNotNull() {
        ValidationTestObject o = getValidObject();
        o.setAnotherInt(null);   //violates the @NotNull constraint
        MorphiumSingleton.get().store(o);
    }

It's possible to have different types of entities stored in one collection. Usually this only makes sense if those entities have something in common. In an object-oriented way: they are derived from one common super entity.

In order to make this work, you have to tell Morphium that you want to use a certain entity in a polymorph way (property of the @Entity annotation). If so, the fully qualified class name will be stored in the mongo document representing the entity. Actually, you can store any type of entity into one list, if each of those types is marked polymorph. Only reading them is a bit hard, as you would iterate over Objects and would have to decide on the type yourself.
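A sketch of what such a polymorph setup could look like (the class names and the collection name here are made up for illustration):

```java
// Both types are stored in the same "shapes" collection; since they are
// marked polymorph, Morphium records the fully qualified class name in
// each document so it can map the documents back to the right type.
@Entity(collectionName = "shapes", polymorph = true)
public class Circle {
    @Id
    private MorphiumId id;
    private double radius;
}

@Entity(collectionName = "shapes", polymorph = true)
public class Rectangle {
    @Id
    private MorphiumId id;
    private double width;
    private double height;
}
```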

Async API

Fully Customizable


On the following lines you get a more in-depth view of these features.

Names of entities and fields

Morphium by default converts all Java camelCase identifiers into underscore-separated strings. So, MyEntity will be stored in a collection called my_entity and the field aStringValue will be stored as a_string_value.

When specifying a field, you can always use either the transformed name or the name of the corresponding Java field. Collection names are always determined by the class name itself.

CamelCase conversion

But in Morphium you can of course change that behaviour. The easiest way is to switch off the CamelCase transformation globally by setting camelCaseConversionEnabled to false (see above: Configuration). If you switch it off, it's off completely - there is no way to switch it back on for just one collection.

If you need to have only several types converted, but not all, you have to have the conversion globally enabled, and only switch it off for certain types. This is done in either the @Entity or @Embedded annotation.

@Entity(translateCamelCase = false)
public class MyEntity {
    private String myField;
}

This example will create a collection called MyEntity (no conversion) and the field will be called myField in mongo as well (no conversion).

Attention: Please keep in mind that, if you switch off camelCase conversion globally, nothing will be converted!

using the fully qualified class name

You can tell Morphium to use the fully qualified class name as the basis for the collection name, not the simple class name. This would result in creating a collection de_caluga_morphium_my_entity for a class called de.caluga.morphium.MyEntity. Just set the flag useFQN in the @Entity annotation to true.

@Entity(useFQN = true)
public class MyEntity {
    // ...
}

The recommendation is not to use the fully qualified class name unless it's really needed.

Specifying a collection / fieldname

In addition to that, you can define custom names of fields and collections using the corresponding annotation (@Entity, @Property).

For entities you may set a custom name by using the collectionName value of the annotation:

@Entity(collectionName = "totallyDifferent")
public class MyEntity {
    private String myValue;
}

The collection name will be totallyDifferent in mongo. Keep in mind that camelCase conversion for fields will still take place. So in this case, the field name would be my_value (if camelCase conversion is enabled in the config).

You can also specify the name of a field using the property annotation:

@Property(fieldName = "my_wonderful_field")
private String something;

Again, this only affects this field (in this case, it will be called my_wonderful_field in mongo) and this field won't be converted to camelCase. This might cause a mix-up of cases in your mongodb, so please use this with care.

Accessing fields

When accessing fields in Morphium (especially in queries) you may use either the name of the field in Java (like myEntity) or the converted name, depending on the config (camelCased or not, or custom).

Using NameProviders

In some cases it might be necessary to calculate the collection name dynamically. This can be achieved using the NameProvider interface.

You can define a NameProvider for your entity in the @Entity annotation by specifying its type there. By default, the NameProvider for all entities is DefaultNameProvider, which actually looks like this:

    public final class DefaultNameProvider implements NameProvider {

        public String getCollectionName(Class type, ObjectMapper om, boolean translateCamelCase, boolean useFQN, String specifiedName, Morphium morphium) {
            String name = type.getSimpleName();
            if (useFQN) {
                name = type.getName().replaceAll("\\.", "_");
            }
            if (specifiedName != null) {
                name = specifiedName;
            } else {
                if (translateCamelCase) {
                    name = morphium.getARHelper().convertCamelCase(name);
                }
            }
            return name;
        }
    }

You can use your own provider to calculate collection names depending on time and date, or for example on the querying host name (like: create a log collection for each server separately, or create a collection storing logs for only one month each).

Attention: NameProvider instances will be cached, so please implement them thread-safe.
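As a sketch, a custom NameProvider implementing the "one log collection per month" idea mentioned above could look like this (the class name and the resulting collection name scheme are made up; the method signature is the one shown in DefaultNameProvider above):

```java
// Sketch: appends year and month to the converted class name,
// e.g. log_entry_2017_11, so a fresh collection is used every month.
public class MonthlyNameProvider implements NameProvider {

    @Override
    public String getCollectionName(Class type, ObjectMapper om, boolean translateCamelCase,
                                    boolean useFQN, String specifiedName, Morphium morphium) {
        String base = morphium.getARHelper().convertCamelCase(type.getSimpleName());
        java.util.Calendar cal = java.util.Calendar.getInstance();
        return base + "_" + cal.get(java.util.Calendar.YEAR)
                    + "_" + (cal.get(java.util.Calendar.MONTH) + 1);
    }
}
```

The provider would then be registered via the annotation, e.g. `@Entity(nameProvider = MonthlyNameProvider.class)`.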

Entity Definition

Entities in Morphium are just "Plain old Java Objects" (POJOs). So you just create your data objects, as usual. You only need to add the annotation @Entity to the class, to tell Morphium "Yes, this can be stored". The only additional thing you need to take care of is the definition of an ID field. This can be any field in the POJO identifying the instance. It's best to use MorphiumId as the type of this field, as those can be created automatically and you don't need to take care of them yourself.

If you specify your ID to be of a different type (like String), you need to make sure that the String is set when the object is written. Otherwise you might not find the object again. So the shortest entity would look like this:

@Entity
public class MyEntity {
    @Id private MorphiumId id;
    //.. add getter and setter here
}


Indexes are very important in mongo, so you should definitely define your indexes as soon as possible during development. Indexes can be defined on the entity itself; there are several ways to do so:

  • @Id always creates an index
  • you can add an @Index annotation to any field to have that field indexed:

@Index
private String name;
  • you can define combined indexes using the @Index annotation at the class itself:

    @Index({"counter, name", "value, thing, -counter"})
    public class MyEntity {

This would create two combined indexes: one on counter and name (both ascending) and one on value, thing and descending counter. You could also define single-field indexes using this annotation, but it's easier to read when the annotation is added directly to the field.

  • Indexes will be created automatically when you create the collection. If you want the indexes to be created even if there is already data stored, you need to call morphium.ensureIndicesFor(MyEntity.class). You may also create your own indexes, which are not defined in annotations, by calling morphium.ensureIndex(). As parameter you pass in a Map containing field names and order (-1 or 1) or just a prefixed list of strings (like "-counter", "name").

Every Index might have a set of options which define the kind of this index. Like buildInBackground or unique. You need to add those as second parameter to the Index-Annotation:

 @Index(value = {"-name, timer", "-name, -timer", "lst:2d", "name:text"}, 
            options = {"unique:1", "", "", ""})
public static class IndexedObject {

Here 4 indexes are created. The first two are more or less standard, whereas the lst index is a geospatial one and the index on name is a text index (available since mongo 2.6). If you need to define options for one of your indexes, you need to define them for all of them (here, only the first index is unique).

We're working on porting Morphium to Java 8; there it will be possible to have more than one @Index annotation, making the syntax a bit more legible.

capped collections

Similar to indexes, you can define your collection to be capped using the @Capped annotation. This annotation takes two arguments: the maximum number of entries and the maximum size. If the collection does not exist, it will be created as a capped collection using those two values. You can always ensureCapped your collection; unfortunately, only the size parameter will be honored then.
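A capped entity declaration might look like the following sketch (the class is made up; the parameter names maxSize and maxNumber follow the description of @Capped further below, and the values are arbitrary):

```java
// Sketch: collection is created capped at ~1 MB or 10000 documents,
// whichever limit is hit first - mongo then drops the oldest entries.
@Entity
@Capped(maxSize = 1024 * 1024, maxNumber = 10000)
public class LogLine {
    @Id
    private MorphiumId id;
    private String message;
}
```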


Querying is done via the Query-Object, which is created by Morphium itself (using the Query Factory). The definition of the query is done using the fluent interface:

    Query query=morphium.createQueryFor(MyEntity.class);
    query=query.f("id").eq(new MorphiumId());
    query=query.f("valueField").eq("the value");

In this example, I refer to several fields of different types. The Query itself is always of the same basic syntax:

    queryObject = queryObject.skip(NUMBER);  // skip a number of entries
    queryObject = queryObject.limit(NUMBER); // limit the result

As field name you may either use the name of the field as it is in mongo or the name of the field in java. If you specify an unknown field to Morphium, a RuntimeException will be raised.

For the definition of the query, it's also good practice to define enums for all of your fields. This makes it harder to have typos in a query:

    public class MyEntity {
        @Id private MorphiumId id;
        private Double value;
        private String personName;
        private int counter;
        //.... field accessors

        public enum Fields { id, value, personName, counter, }
    }

There is a plugin for IntelliJ creating those enums automatically. Then, when defining the query, you don't have to type in the name of the field, just use the field enum:
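Using the Fields enum from the example above, a query could then be written like this (a sketch, assuming the query object accepts enum constants in f(), which is how those generated enums are meant to be used; the values are made up):

```java
// Typos in field names become compile errors instead of runtime surprises
Query<MyEntity> q = morphium.createQueryFor(MyEntity.class);
q = q.f(MyEntity.Fields.personName).eq("John")
     .f(MyEntity.Fields.counter).gt(10);
List<MyEntity> result = q.asList();
```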


After you defined your query, you probably want to access the data in mongo. Via Morphium, there are several possibilities to do that:

  • queryObject.get(): returns the first object matching the query, only one. Or null if nothing matched
  • queryObject.asList(): returns a list of all matching objects. Reads all data into RAM. Useful for small amounts of data
  • Iterator<MyEntity> it = queryObject.asIterator(): creates a MorphiumIterator to iterate through the data, which does not read all data at once, but only a couple of elements in a row (default 10)

the Iterators

Morphium has support for special iterators which step through the data a couple of elements at a time. By default this is the standard behaviour. But the Morphium iterator is quite capable:

  • queryObject.asIterable() will step through the results batch by batch. The batch size is determined by the driver settings. This is the most performant option, but lacks the ability to "step back" out of the currently processed batch.
  • queryObject.asIterable(100) will step through the result list, 100 at a time, using a mongodb cursor iterator.
  • queryObject.asIterable(100, 5) will step through the result list, 100 at a time, and keep 5 chunks of 100 elements each as prefetch buffers. Those will be filled in the background.
  • queryObject.asIterable(100, 1) is actually the same as .asIterable(100) but uses a query-based iterator instead.
  • queryObject.asIterable(100, new PrefetchingIterator()): this is more or less the same as the prefetching above, but using the query-based PrefetchingIterator. It fetches the data chunks using the skip and limit functionality of mongodb, which showed a decrease in performance the higher the skip gets. It's still there for compatibility reasons.

Internally the default iterator creates queries that are derived from the sort of the original query; if no sort is specified, it will assume you want to sort by _id.

You can put each of those iterators into one of two classes:

  1. the iterator is using the Mongodb Cursor
  2. the iterator is using distinct queries for each step / chunk.

These have significantly different behaviour.

query based iterators

The query-based iterators use the usual query mechanism of Morphium, hence all related functionality works: caching, lifecycle methods etc. It is just as if you created those queries in a row, one by one.

cursor based iterators

Due to the fact that the query is executed portion by portion, there is no way of having things cached properly. These queries do not use the cache!


Storing is more or less a very simple thing: just call and you're done. Although there is a bit more to it:

  • if the object does not have an id (the id field is null), a new entry is inserted into the corresponding collection
  • if the object does have an id set (!= null), an update is issued to the db
  • you can call morphium.storeList(lst) where lst is a list of entities. These are stored in bulk, if possible, or updated in bulk in mongo. Even mixed lists (updates and inserts) are possible; Morphium will take care of sorting it out
  • there are additional methods for writing to mongo, like the update operations set, unset, push, pull and so on (update a value on one entity or for all elements matching a query), deleting objects or objects matching a query, and the like
  • the writer that actually writes the data is chosen depending on the configuration of this entity (see Annotations below)
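A short sketch of the write operations described above (entity, field names and values are made up; the set() parameters follow the signature used in the asynchronous example further below):

```java
// Insert vs. update is decided by the id field
UncachedObject o = new UncachedObject();
o.setCounter(42);; // id is null -> insert; on an already stored object -> update

// Bulk write: mixed inserts and updates are sorted out by Morphium
morphium.storeList(listOfEntities);

// Update operation on all objects matching a query, without loading them
Query<UncachedObject> q = morphium.createQueryFor(UncachedObject.class)
                                  .f("counter").lt(10);
morphium.set(q, "counter", 0, false, true); // insertIfNotExist=false, multiple=true
```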


A lot of things can be configured in Morphium using annotations. Those annotations may be added to classes, fields or both.


Perhaps the most important annotation, as it has to be put on every class whose instances you want to have stored in the database (your data objects).

By default, the name of the collection for data of this entity is derived from the name of the class itself, with camelCase converted to underscore-separated strings (unless the config is set otherwise).

These are the settings available for entities:

  • translateCamelCase: default true. If set, translate the name of the collection and all fields (only those, which do not have a custom name set)
  • collectionName: set the collection name. May be any value, camel case won't be converted.
  • useFQN: if set to true, the collection name will be built based on the full qualified class name. The Classname itself, if set to false. Default is false
  • polymorph: if set to true, all entities of this type stored to mongo will contain the fully qualified name of the class. This is necessary if you have several different entities stored in the same collection. Usually only used for polymorph lists, but you could store any polymorph-marked object into that collection. Default is false
  • nameProvider: specify the class of the name provider you want to use for this entity. The name provider is used to determine the name of the collection for this type. By default it uses the DefaultNameProvider (which just uses the class name to build the collection name). See above


Marks POJOs for object mapping that don't need to have an ID set. These objects will be marshalled and unmarshalled, but only as part of another object (subdocument). This has to be set at class level.

You can switch off camel case conversion for this type and determine, whether data might be used polymorph.


Valid at: Class level

Tells Morphium to create a capped collection for this object (see capped collections above).


  • maxSize: maximum size in bytes. Also used when converting to a capped collection
  • maxNumber: maximum number of entries for this capped collection


Special feature of Morphium: this annotation has to be added to at least one field of type Map<String,Object>. It makes sure that all data in mongo that cannot be mapped to a field of this entity will be added to the annotated Map property.

By default this map is read-only. But if you want to change those values or add new ones, you can set readOnly = false.


It's possible to define aliases for field names with this annotation (hence it has to be added to a field).

@Aliases({"stringList", "string_list"})
List<String> strLst;

In this case, when reading an object from mongodb, the field strLst might also be named stringList or string_list in mongo. When storing it, it will always be stored as strLst or str_lst, according to the config.

This feature comes in handy when migrating data.


Has to be added to both the class and the field(s) to store the creation time in. The value is set the moment the object is stored to mongo. The data type for the creation time might be:

  • long / Long: store as timestamp
  • Date: store as a date object
  • String: store as a string, you may need to specify the format for that
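Put together, a creation-time declaration could look like this sketch (class and field names are made up; note the annotation goes on both the class and the field, as described above):

```java
// The long field is filled with a timestamp the moment
// the object is first stored to mongo.
@Entity
@CreationTime
public class AuditedThing {
    @Id
    private MorphiumId id;

    @CreationTime
    private long created; // stored as timestamp
}
```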


Same as creation time, but stores the time of the last access of this type. Attention: this will cause every object read to be updated and written again with a changed timestamp.

Usage: find out, which entries on a translation table are not used for quite some time. Either the translation is not necessary anymore or the corresponding page is not being used.


Same as the two above, except the timestamp of the last change (to mongo) is being stored. The value will be set, just before the object is written to mongo.


Define the read preference level for an entity. This annotation has to be used at class level. Valid types are:

  • PRIMARY: only read from primary node
  • PRIMARY_PREFERED: if possible, use primary.
  • SECONDARY: only read from secondary node
  • SECONDARY_PREFERED: if possible, use secondary
  • NEAREST: I don't care, take the fastest


A very important annotation for a field of every entity: it marks that field to be the id which identifies the object. It will be stored as _id in mongo (and will get an index).

The Id may be of any type, though usage of ObjectId (or MorphiumId in Java) is strongly recommended.


Define indexes. Indexes can be defined for a single field. Combined indexes need to be defined on class level. See above.


If this annotation is present for an entity, this entity would only send changes to mongo when being stored. This is useful for big objects, which only contain small changes.

Attention: in the background your object is being replaced by a Proxy-Object to collect the changes.


Can be added to any field. This not only has documenting character, it also gives the opportunity to change the name of this field by setting the fieldName value. By Default the fieldName is ".", which means "fieldName based".


Mark an entity to be read only. You'll get an exception when trying to store.


If you have a member variable, that is a POJO and not a simple value, you can store it as reference to a different collection, if the POJO is an Entity (and only if!).

This also works for lists and maps. Attention: when reading objects from mongo, references will be de-referenced, which will result in one additional call to mongo each.

Unless you set lazyLoading to true, in that case, the child documents will only be loaded when accessed.


Do not store the field.


Usually, Morphium does not store null values at all. That means, the corresponding document just would not contain the given field(s) at all.

Sometimes that might cause problems, so if you add @UseIfNull to any field, it will be stored into mongo even if it is null.


Sometimes it might be useful to have an entity set to write only (logs). An exception will be raised if you try to query such an entity.


Specify the write safety for this entity when it comes to writing to mongo. This can range from "NONE" to "WAIT FOR ALL SLAVES". Here are the available settings:

  • timeout: set a timeout in ms for the operation - if set to 0, unlimited (default). If set to negative value, wait relative to replication lag
  • level: set the safety level:
    • IGNORE_ERRORS None, no checking is done
    • NORMAL None, network socket errors raised
    • BASIC Checks server for errors as well as network socket errors raised
    • WAIT_FOR_SLAVE Checks servers (at least 2) for errors, as well as raising network socket errors
    • MAJORITY Wait for at least 50% of the slaves to have written the data
    • WAIT_FOR_ALL_SLAVES: waits for all slaves to have committed the data. This is depending on how many slaves are available in replica set. Wise timeout settings are important here. See WriteConcern in MongoDB Java-Driver for additional information
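An entity using these settings could be declared like this (a sketch with a made-up class; the SafetyLevel value MAJORITY also appears in the Place example further below, the timeout value here is arbitrary):

```java
// Writes to this collection only return once a majority of the
// replica set has acknowledged them, or fail after 10 seconds.
@Entity
@WriteSafety(level = SafetyLevel.MAJORITY, timeout = 10000)
public class ImportantData {
    @Id
    private MorphiumId id;
    private String payload;
}
```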


If this annotation is present at a given entity, all write access concerning this type would be done asynchronously. That means, the write process will start immediately, but run in background.

You won't be informed about errors or success. If you want that, you don't need to set @AsyncWrites; just use one of the store methods taking a callback - those methods are all asynchronous.


Create a write buffer, do not write data directly to mongo, but wait for the buffer to be filled a certain amount:

  • size: default 0, max size of write Buffer entries, 0 means unlimited. STRATEGY is meaningless then
  • strategy: define what happens when write buffer is full and new things would be written. Can be one of WRITE_NEW, WRITE_OLD, IGNORE_NEW, DEL_OLD, JUST_WARN
    • WRITE_NEW: write all new incoming entries to the buffer directly to mongo, buffer won't grow
    • WRITE_OLD: take one of the oldest entries from the buffer, write it, queue the new entry to buffer. Buffer won't grow
    • IGNORE_NEW: do not add the new entry to the buffer and do not write it. Attention: possible data loss! Buffer won't grow
    • DEL_OLD: delete an old entry from the buffer, add new one. Buffer won't grow
    • JUST_WARN: just issue a warning via log4j, but add the new Object anyway. Buffer will grow, no matter what threshold is set!
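A buffered entity might be declared like this sketch (the class is made up; the parameter names size and strategy follow the description above, and the exact location of the strategy enum is an assumption):

```java
// Writes are collected in a buffer of up to 1000 entries; when it is
// full, the oldest entry is flushed to mongo to make room (WRITE_OLD).
@Entity
@WriteBuffer(size = 1000, strategy = WriteBuffer.STRATEGY.WRITE_OLD)
public class MetricSample {
    @Id
    private MorphiumId id;
    private long value;
}
```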


Read-Cache Settings for the given entity.

  • timeout: How long are entries in cache valid, in ms. Default 60000ms
  • clearOnWrite: if set to true (default) the cache will be cleared, when you store or update an instance of this type
  • maxEntries: Maximum number of entries in cache for this type. -1 means infinite
  • clearStrategy: when reaching the maximum number of entries, how to replace entries in cache.
    • LRU: remove the least recently used entry from cache, add the new
    • RANDOM: remove a random entry from cache, add the new
    • FIFO: remove the oldest entry from cache, add the new (default)
  • syncCache: Set the strategy for syncing cache entries of this type. This is useful when running in a clustered environment to inform all nodes of the cluster to change their caches accordingly. A sync message will be sent to all nodes using the Morphium messaging as soon as an item of this type is written to mongo.
    • NONE: No cache sync
    • CLEAR_TYPE_CACHE: clear the whole cache for this type on all nodes
    • REMOVE_ENTRY_FROM_TYPE_CACHE: remove an updated entry from the type cache of all nodes
    • UPDATE_ENTRY: update the entry in the cache on all nodes
    • Attention: this may cause heavy load on the messaging system. All sync strategies except CLEAR_TYPE_CACHE might result in dirty reads on some nodes.
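A cache declaration using those settings could look like this sketch (the class is made up, the values are arbitrary; the parameter names follow the list above, and the nesting of the ClearStrategy/SyncCacheStrategy enums is an assumption):

```java
// Read results for this type are cached for one minute, at most 1000
// entries, evicting least-recently-used entries; other cluster nodes
// clear their type cache via messaging whenever an instance is written.
@Entity
@Cache(timeout = 60000,
       maxEntries = 1000,
       clearOnWrite = true,
       clearStrategy = Cache.ClearStrategy.LRU,
       syncCache = Cache.SyncCacheStrategy.CLEAR_TYPE_CACHE)
public class CachedThing {
    @Id
    private MorphiumId id;
}
```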


Explicitly disable the cache for this type. This is important if you have a hierarchy of entities and you want the "super entity" to be cached, but not the entities inherited from it.


This is a marker annotation telling Morphium that in this type, there are some Lifecycle callbacks to be called.

Please keep in mind that all lifecycle annotations (see below) would be ignored, if this annotation is not added to the type.


If @Lifecycle is added to the type, @PostLoad may define the method to be called, after the object was read from mongo.


If @Lifecycle is added to the type, @PreStore may define the method to be called, just before the object is written to mongo. It is possible to throw an Exception here to avoid storage of this object.


If @Lifecycle is added to the type, @PostStore may define the method to be called, after the object was written to mongo.


If @Lifecycle is added to the type, @PreRemove may define the method to be called, just before the object would be removed from mongo. You might throw an exception here to veto the removal.


If @Lifecycle is added to the type, @PostRemove may define the method to be called, after the object was removed from mongo.


If @Lifecycle is added to the type, @PreUpdate may define the method to be called, just before the object would be updated in mongo. Veto is possible by throwing an Exception.


If @Lifecycle is added to the type, @PostUpdate may define the method to be called, after the object was updated in mongo.
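Putting the lifecycle annotations together, a sketch could look like this (class and method names are made up; remember that without the @Lifecycle marker on the class, the method annotations are ignored):

```java
@Entity
@Lifecycle
public class AuditedEntity {
    @Id
    private MorphiumId id;
    private String value;

    @PreStore
    public void beforeStore() {
        // throwing here vetoes the write, as described above
        if (value == null) {
            throw new RuntimeException("value must be set before storing");
        }
    }

    @PostLoad
    public void afterLoad() {
        // called after the object was read from mongo
    }
}
```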


Morphium does not have many dependencies:

  • log4j
  • mongo java driver (usually the latest version available at that time)
  • a simple json parser (json-simple)

Here is the excerpt from the pom.xml:


There is one kind of "optional" Dependency: If hibernate validation is available, it's being used. If it cannot be found in class path, it's no problem.

Code Examples

All those code examples are part of the Morphium source distribution; each of them is at least part of a unit test.

Simple Write / Read

for (int i = 1; i <= NO_OBJECTS; i++) {
    UncachedObject o = new UncachedObject();
    o.setCounter(i);
    o.setValue("Uncached " + i % 2);
    MorphiumSingleton.get().store(o);
}

Query<UncachedObject> q = MorphiumSingleton.get().createQueryFor(UncachedObject.class);
q = q.f("counter").gt(0).sort("-counter", "value");
List<UncachedObject> lst = q.asList();
assert (!lst.get(0).getValue().equals(lst.get(1).getValue()));

    q = q.q().f("counter").gt(0).sort("value", "-counter");
    List<UncachedObject> lst2 = q.asList();
    assert (lst2.get(0).getValue().equals(lst2.get(1).getValue())) : "Sorted";

    q = MorphiumSingleton.get().createQueryFor(UncachedObject.class);
    q = q.f("counter").gt(0).limit(5).sort("-counter");
    int st = q.asList().size();
    q = MorphiumSingleton.get().createQueryFor(UncachedObject.class);
    q = q.f("counter").gt(0).sort("-counter").limit(5);
    assert (st == q.asList().size()) : "List length differ?";


Query<ComplexObject> q = MorphiumSingleton.get().createQueryFor(ComplexObject.class);

    q = q.f("embed.testValueLong").eq(null).f("entityEmbeded.binaryData").eq(null);
    String queryString = q.toQueryObject().toString();
    assert (queryString.contains("embed.test_value_long") && queryString.contains("entityEmbeded.binary_data"));
    q = q.f("embed.test_value_long").eq(null).f("entity_embeded.binary_data").eq(null);
    queryString = q.toQueryObject().toString();
    assert (queryString.contains("embed.test_value_long") && queryString.contains("entityEmbeded.binary_data"));

Asynchronous Write

public void asyncStoreTest() throws Exception {
    asyncCall = false;
    waitForWrites();"Uncached object preparation");
    Query<UncachedObject> uc = MorphiumSingleton.get().createQueryFor(UncachedObject.class);
    uc = uc.f("counter").lt(100);
    MorphiumSingleton.get().delete(uc, new AsyncOperationCallback<Query<UncachedObject>>() {
        public void onOperationSucceeded(AsyncOperationType type, Query<Query<UncachedObject>> q, long duration, List<Query<UncachedObject>> batch, Query<UncachedObject> entity, Object... param) {
  "Objects deleted");
        }

        public void onOperationError(AsyncOperationType type, Query<Query<UncachedObject>> q, long duration, String error, Throwable t, Query<UncachedObject> entity, Object... param) {
            assert false;
        }
    });

    uc = uc.q();
    uc.f("counter").mod(3, 2);
    MorphiumSingleton.get().set(uc, "counter", 0, false, true, new AsyncOperationCallback<UncachedObject>() {
        public void onOperationSucceeded(AsyncOperationType type, Query<UncachedObject> q, long duration, List<UncachedObject> batch, UncachedObject entity, Object... param) {
  "Objects updated");
            asyncCall = true;
        }

        public void onOperationError(AsyncOperationType type, Query<UncachedObject> q, long duration, String error, Throwable t, UncachedObject entity, Object... param) {
  "Objects update error");
        }
    });

    waitForWrites();

    assert MorphiumSingleton.get().createQueryFor(UncachedObject.class).f("counter").eq(0).countAll() > 0;
    assert (asyncCall);
}

Asynchronous Read

public void asyncReadTest() throws Exception {
    asyncCall = false;
    Query<UncachedObject> q = MorphiumSingleton.get().createQueryFor(UncachedObject.class);
    q = q.f("counter").lt(1000);
    q.asList(new AsyncOperationCallback<UncachedObject>() {
        public void onOperationSucceeded(AsyncOperationType type, Query<UncachedObject> q, long duration, List<UncachedObject> batch, UncachedObject entity, Object... param) {
  "got read answer");
            assert (batch != null) : "Error";
            assert (batch.size() == 100) : "Error";
            asyncCall = true;
        }

        public void onOperationError(AsyncOperationType type, Query<UncachedObject> q, long duration, String error, Throwable t, UncachedObject entity, Object... param) {
            assert false;
        }
    });
    int count = 0;
    while (q.getNumberOfPendingRequests() > 0) {
        count++;
        assert (count < 10);
        System.out.println("Still waiting...");
        Thread.sleep(1000);
    }
    assert (asyncCall);
}


public void basicIteratorTest() throws Exception {

    Query<UncachedObject> qu = getUncachedObjectQuery();
    long start = System.currentTimeMillis();
    MorphiumIterator<UncachedObject> it = qu.asIterable(2);
    assert (it.hasNext());
    UncachedObject u =;
    assert (u.getCounter() == 1);"Got one: " + u.getCounter() + " / " + u.getValue());"Current Buffersize: " + it.getCurrentBufferSize());
    assert (it.getCurrentBufferSize() == 2);

    u =;
    assert (u.getCounter() == 2);
    u =;
    assert (u.getCounter() == 3);
    assert (it.getCount() == 1000);
    assert (it.getCursor() == 3);

    u =;
    assert (u.getCounter() == 4);
    u =;
    assert (u.getCounter() == 5);

    while (it.hasNext()) {
        u =;"Object: " + u.getCounter());
    }

    assert (u.getCounter() == 1000);"Took " + (System.currentTimeMillis() - start) + " ms");
}


public void messagingTest() throws Exception {
    error = false;

    final Messaging messaging = new Messaging(MorphiumSingleton.get(), 500, true);
    messaging.start();

    messaging.addMessageListener(new MessageListener() {
        public Msg onMessage(Messaging msg, Msg m) {
  "Got Message: " + m.toString());
            gotMessage = true;
            return null;
        }
    });
    messaging.storeMessage(new Msg("Testmessage", MsgType.MULTI, "A message", "the value - for now", 5000));

    Thread.sleep(1000);
    assert (!gotMessage) : "Message received from self?!?!?!";"Did not get own message - cool!");

    Msg m = new Msg("meine Message", MsgType.SINGLE, "The Message", "value is a string", 5000);
    m.setMsgId(new MorphiumId());
    m.setSender("Another sender");

    MorphiumSingleton.get().store(m);
    Thread.sleep(5000);
    assert (gotMessage) : "Message did not come?!?!?";

    gotMessage = false;
    Thread.sleep(5000);
    assert (!gotMessage) : "Got message again?!?!?!";

    messaging.setRunning(false);
    Thread.sleep(1000);
    assert (!messaging.isAlive()) : "Messaging still running?!?";
}

Cache Synchronization

public void cacheSyncTest() throws Exception {

    Morphium m1 = MorphiumSingleton.get();
    MorphiumConfig cfg2 = new MorphiumConfig();

    Morphium m2 = new Morphium(cfg2);
    Messaging msg1 = new Messaging(m1, 200, true);
    Messaging msg2 = new Messaging(m2, 200, true);
    msg1.start();
    msg2.start();

    CacheSynchronizer cs1 = new CacheSynchronizer(msg1, m1);
    CacheSynchronizer cs2 = new CacheSynchronizer(msg2, m2);

    //fill caches
    for (int i = 0; i < 1000; i++) {
        m1.createQueryFor(CachedObject.class).f("counter").lte(i + 10).asList(); //fill cache
        m2.createQueryFor(CachedObject.class).f("counter").lte(i + 10).asList(); //fill cache
    }
    //1 always sends to 2....

    CachedObject o = m1.createQueryFor(CachedObject.class).f("counter").eq(155).get();
    cs2.addSyncListener(CachedObject.class, new CacheSyncListener() {
        public void preClear(Class cls, Msg m) throws CacheSyncVetoException {
  "Should clear cache");
            preClear = true;
        }

        public void postClear(Class cls, Msg m) {
  "did clear cache");
            postclear = true;
        }

        public void preSendClearMsg(Class cls, Msg m) throws CacheSyncVetoException {
  "will send clear message");
            preSendClear = true;
        }

        public void postSendClearMsg(Class cls, Msg m) {
  "just sent clear message");
            postSendClear = true;
        }
    });
    msg2.addMessageListener(new MessageListener() {
        public Msg onMessage(Messaging msg, Msg m) {
  "Got message " + m.getName());
            return null;
        }
    });
    preSendClear = false;
    preClear = false;
    postclear = false;
    postSendClear = false;
    o.setValue("changed it");;

    Thread.sleep(1000);
    assert (!preSendClear);
    assert (!postSendClear);
    assert (postclear);
    assert (preClear);

    long l = m1.createQueryFor(Msg.class).countAll();
    assert (l <= 1) : "too many messages? " + l;
//        createCachedObjects(50);
//        Thread.sleep(90000); //wait for messages to be cleared
//        assert(m1.createQueryFor(Msg.class).countAll()==0);
}
public void nearTest() throws Exception {
    ArrayList<Place> toStore = new ArrayList<Place>();
//        MorphiumSingleton.get().ensureIndicesFor(Place.class);
    for (int i = 0; i < 1000; i++) {
        Place p = new Place();
        List<Double> pos = new ArrayList<Double>();
        pos.add((Math.random() * 180) - 90);
        pos.add((Math.random() * 180) - 90);
        p.setName("P" + i);
        p.setPosition(pos);
        toStore.add(p);
    }
    MorphiumSingleton.get().storeList(toStore);

    Query<Place> q = MorphiumSingleton.get().createQueryFor(Place.class).f("position").near(0, 0, 10);
    long cnt = q.countAll();
"Found " + cnt + " places around 0,0 (10)");
    List<Place> lst = q.asList();
    for (Place p : lst) {
"Position: " + p.getPosition().get(0) + " / " + p.getPosition().get(1));
    }
}

@Entity
@WriteSafety(level = SafetyLevel.MAJORITY)
public static class Place {
    @Id
    private MorphiumId id;

    public List<Double> position;
    public String name;

    public MorphiumId getId() {
        return id;
    }

    public void setId(MorphiumId id) { = id;
    }

    public List<Double> getPosition() {
        return position;
    }

    public void setPosition(List<Double> position) {
        this.position = position;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) { = name;
    }
}
the problems with Logging

today there is a whole bunch of logging frameworks, each one claiming to be more capable than the other. The most common are probably java.util.logging and log4j. Morphium used log4j for quite some time, but in our high-load environment we encountered problems with the logging itself. We also had the problem that every library used a different logging framework.

Since V2.2.21 Morphium uses its own logger. It can be configured using environment variables (on Linux e.g. export morphium_log_file=/var/log/morphium.log) or Java system properties (e.g. java -Dmorphium.log.level=5).

This logger is built for performance and thread safety, works fine in high-load environments, and has the following features:

  • it is instantiated with new - no singleton, hence fewer performance / synchronization issues
  • it has several options for configuration (see above). You can define global settings like morphium.log.file, but you can also define settings for a prefix of a fully qualified class name by appending it to the property. For example java -Dmorphium.log.level.de.caluga.morphium.messaging=5 would switch on debug logging only for the messaging package, while the default stays at level 2 (which is ERROR)
  • it is possible to define three things in the way described above (either globally or class-/package-specific): the file name (a real path, or STDOUT or STDERR), the log level (0=none, 1=FATAL, 2=ERROR, 3=WARN, 4=INFO, 5=DEBUG) and whether the output should be synced or buffered (synced=false)
  • if you want to use log4j or java.util.logging for the actual output, set the log file name to log4j or jul accordingly
  • if you want to use your own logging implementation, just give morphium the log delegate as the file name, e.g. morphium.log.file=de.caluga.morphium.log.MyLogDelegate
  • of course, all of this can also be configured in code.
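As a sketch of how that configuration could look in code (the property names morphium.log.file, morphium.log.level and morphium.log.synced are the ones listed above; the package-specific key and values are illustrative, not taken from the Morphium docs), system properties can simply be set before Morphium is initialized:

```java
public class MorphiumLogConfigDemo {
    public static void main(String[] args) {
        // global settings: log to STDOUT at level 4 (INFO), buffered output
        System.setProperty("morphium.log.file", "STDOUT");
        System.setProperty("morphium.log.level", "4");
        System.setProperty("morphium.log.synced", "false");

        // package-specific override: DEBUG (5) only for the messaging package
        System.setProperty("morphium.log.level.de.caluga.morphium.messaging", "5");

        System.out.println(System.getProperty("morphium.log.level"));
        System.out.println(System.getProperty("morphium.log.level.de.caluga.morphium.messaging"));
    }
}
```

Setting the same keys via -D on the command line has the same effect, since both end up as system properties.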

Switching to logback in V3.2.0

Yes, keeping our own additional logging framework alive is not the smartest or easiest thing to do. So we decided to use logback for the configuration of logging, while morphium itself now logs via slf4j (fortunately, in performance checks this seemed to have almost no negative impact).

So with the upcoming V3.2.0 the own logger implementation is gone...
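For V3.2.0 users this means logging is configured the standard logback way; a minimal logback.xml (the appender and pattern here are just the usual logback defaults, nothing Morphium-specific) might look like:

```xml
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- debug output only for morphium classes -->
  <logger name="de.caluga.morphium" level="DEBUG"/>

  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```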

category: global

New Release of Morphium V2.2.6

2014-09-02 - Tags:

sorry, no english version available

category: Computer

New release V2.2.4 of #morphium - the #MongoDB POJO #mapper

2014-08-20 - Tags:

sorry, no english version available

category: Computer

New Morphium Release V2.2.3

2014-08-08 - Tags:

sorry, no english version available

category: Java --> programming --> Computer

New Release of #Morphium V2.1.1 - THE #MongoDB POJO Mapper (German version)

2014-04-16 - Tags:

sorry, no english version available

category: global

New Release of #Morphium V2.1.1 - THE #MongoDB POJO Mapper

2014-04-16 - Tags:

sorry, no english version available
