This blog covers topics around Java, open source, MongoDB and the like. Especially Morphium, THE POJO mapper for MongoDB.
latest blog entries
2020-11-27 - Tags:
First of all: you usually do not need to care about indices in MongoDB manually, you just define them in your entity code:
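A minimal sketch of what such a definition might look like (the entity and field names are made up for illustration; `@Entity`, `@Id` and `@Index` are Morphium annotations, so check your version of the docs for details):

```java
import de.caluga.morphium.annotations.Entity;
import de.caluga.morphium.annotations.Id;
import de.caluga.morphium.annotations.Index;
import de.caluga.morphium.driver.MorphiumId;

// class-level index: a compound index on name (ascending) and age (descending)
@Entity
@Index({"name,-age"})
public class Person {
    @Id
    private MorphiumId id;

    @Index // single-field index
    private String name;

    private int age;
}
```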
There are more options for defining indices, please consult the morphium documentation for details.
As already mentioned, Morphium usually takes care of index creation following a simple rule:
The reason: if you have big collections, creating new indices on write might cause problems and affect the whole system. But if the collection does not exist yet, this is quite a cheap operation.
But there's a catch: this check costs time, because on every write morphium checks whether the collection exists.
You can change this behaviour by switching automatic index creation off in MorphiumConfig
Starting with morphium V4.2.3, you can also have morphium check those indices only on startup.
This will issue a warning in log, if an index for an entity is missing. There are more options, you can choose from:
NO_CHECK: do not check, if indices match
WARN_ON_STARTUP: check indices for all Entities on startup/connect. This will slow down startup depending on your code.
CREATE_ON_WRITE_NEW_COL: create missing indices, when writing to a new collection (Default behaviour)
CREATE_ON_STARTUP: check for indices only once when starting up / connecting to mongo.
Hint: There is no option to warn on write, this would slow down mongodb access significantly!
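In MorphiumConfig this could look roughly like the following; the exact setter and enum names may differ between versions, so treat them as an assumption and verify against the Morphium documentation:

```java
import de.caluga.morphium.Morphium;
import de.caluga.morphium.MorphiumConfig;

MorphiumConfig cfg = new MorphiumConfig();
cfg.setDatabase("testdb");
cfg.addHostToSeed("localhost:27017");
// assumed setter/enum names - check your Morphium version:
cfg.setIndexCheck(MorphiumConfig.IndexCheck.WARN_ON_STARTUP);
Morphium morphium = new Morphium(cfg);
```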
Warnings for missing indices look like this:
Collection 'person' for entity 'de.caluga.test.mongo.suite.TextIndexTest$Person' does not exist.
Morphium handles the capped status of a collection together with indices. That means: if the collection does not exist, but the @Capped annotation is used, the collection is created capped. This is done together with the indices, so if you check those, the capped information is checked as well. Depending on your settings, the collection might even be converted to a capped collection automatically!
The easiest way to create indices is, to call the method
This will create all indices, that are defined in code for this entity (using the @Index annotation).
If you also want to ensure that the corresponding collection is capped (if defined accordingly), call
Both methods will not create indices or capped collections if no index is defined in code. Otherwise they call ensureIndex on MongoDB.
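The code snippets referenced above presumably boil down to calls like these (method names as in recent Morphium versions; double-check against your release):

```java
// create all indices defined via @Index for this entity
morphium.ensureIndicesFor(Person.class);

// additionally make sure the collection is capped, if @Capped is present
morphium.ensureCapped(Person.class);
```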
Hint: when an index is created for a non-existent collection, an empty collection will be created in MongoDB.
But you can also create indices more manually:
The index map contains the document that describes the index (similar to defining it directly in MongoDB) and the options are the corresponding options for this index.
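A sketch of such a manual index creation (the ensureIndex signature is an assumption based on the description above; field names are invented):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// index definition, like in the mongo shell: { "name": 1, "age": -1 }
Map<String, Object> index = new LinkedHashMap<>();
index.put("name", 1);
index.put("age", -1);

// options for this index, e.g. a uniqueness constraint
Map<String, Object> options = new LinkedHashMap<>();
options.put("unique", true);

// assumed signature - check your Morphium version:
morphium.ensureIndex(Person.class, index, options);
```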
In order to get just the information on missing indices for a specific entity (or all of them), you can retrieve this information starting with Morphium V4.2.3:
checkIndices() returns a Map with the entity class as key and a list of missing index definitions as value. You can use this map to manually create an index:
checkIndices() also adds missing capped information to the result map!
checkIndices() runs through the whole classpath, which can take some milliseconds. If you want to reduce the load, you can use checkIndices(filter) and filter classes according to your needs:
checkIndices(classInfo -> !classInfo.getPackageName().startsWith("de.caluga.morphium"));
Database performance strongly depends on proper index creation. With morphium you can define all indices in code, so Java engineers do not need to know how to create indices in MongoDB manually, and performance improves.
2020-09-14 - Tags: java mongodb morphium
We just released Morphium V4.2.0 including a lot of fixes and features:
2020-08-10 - Tags: morphium java mongodb
We released Morphium 4.1.4. This includes, as usual, a bunch of improvements and fixes. Here is the changelog since V4.1.0:
AutoClosable support - simplifies usage
null handling: only overwrite a POJO value when mongo delivers a value (including null); otherwise keep the default.
A warm thanks goes out to all who helped build this release! Not only with code and pull requests, but also by filing an issue! Thanks a lot!
2020-03-03 - Tags: java mongodb morphium
We just released morphium V4.1.2 via maven central. The long list of changes contains:
In addition to those features, some fixes:
ObjectId as key in entities would lead to exceptions
To use morphium, include this in your pom.xml dependencies:
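Presumably the usual Maven coordinates with the version from this release (4.1.2), analogous to the dependency snippets in the older posts below:

```xml
<dependency>
    <groupId>de.caluga</groupId>
    <artifactId>morphium</artifactId>
    <version>4.1.2</version>
</dependency>
```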
2020-02-25 - Tags: java morphium mongo
Because I've now received this question a few times, here is a brief summary of Morphium messaging and how to use it:
The first parameter in the constructor is always the Morphium instance. This determines which Mongo is used, and some settings of Morphium also affect the messaging subsystem:
threadPoolMessagingCoreSize: determines the core size of the thread pool in messaging.
threadPoolMessagingMaxSize: the maximum size of the thread pool in messaging
threadPoolMessagingKeepAliveTime: Time that a thread in the pool can survive unused in ms.
This thread pool is only used to process messages in their own threads. If the maximum size of the thread pool (or the window size) is reached, processing pauses until enough capacity is available in the pool again.
Then we may have other parameters in the constructors:
queueName: the name of the queue (!) to be used. The collection name is then usually mmsg_ plus the queue name, to avoid collisions with the "normal" names. If no name is set, "msg" is used as collection and queue name. The collection name can be retrieved with getCollectionName() on the messaging instance.
pause in ms: this time determines how often to poll for new messages, or, if you are connected to a replica set and can thus use ChangeStreams, how much time should pass between two checks for older messages. Such messages can occur when message processing is paused in between: all messages reported via the ChangeStream during a pause are ignored and not processed. So that these are not "lost", they are searched for every pause ms. In general, if you are connected to a replica set, the pause should be increased to minimize the load on the Mongo. If polling has to be used, the time should be set relatively short to guarantee fast response times.
processMultiple: if true, several messages are always marked for processing by this messaging (locking)
multithreaded: determines whether processing (calling the MessageListener) should take place in a separate thread. If set, a thread pool is created according to the threadPoolMessaging settings above.
windowSize: how many messages should be processed at once. Determines the maximum number of messages that can be marked at the same time, or how many threads can be active in the thread pool at the same time.
useChangeStream: this can be used to force the use of a ChangeStream. If set, Mongo informs you about new messages via push. If not set, polling takes place.
Again, I would like to point out that using a replica set for messaging is a much better solution!
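Putting the constructor parameters together, a minimal setup might look like this sketch (constructor signature and Msg fields as in Morphium 4.x; verify against your version):

```java
import de.caluga.morphium.messaging.Messaging;
import de.caluga.morphium.messaging.Msg;

// default queue "msg", pause=100ms, processMultiple=true,
// multithreaded=true, windowSize=10
Messaging messaging = new Messaging(morphium, 100, true, true, 10);
messaging.addMessageListener((msgSystem, m) -> {
    System.out.println("got message: " + m.getMsg());
    return null; // no answer message
});
messaging.start();

// somewhere else: send a broadcast message (name, message, value)
messaging.sendMessage(new Msg("name", "a message", "value"));
```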
What is there to take into account with these values? processMultiple and multithreaded are a bit related: processMultiple only makes sense, especially for listeners that really do some work, if you also set multithreaded to true. Otherwise, several messages are marked for processing by this messaging instance but are still processed one after the other. Setting both to false reduces the load in the system (single threaded, one by one), but can in turn lead to high latencies. If processMultiple is set to false, only one message is marked and processed at a time.

windowSize is strongly related to the thread pool; ideally the values should be about the same, because windowSize specifies the maximum number of threads created at the same time, and it is best if the thread pool allows that as well, otherwise it won't work. Therefore windowSize should be about the same as threadPoolMessagingMaxSize (to be on the safe side, you could give the thread pool a few more threads).
All in all you can't do much wrong. Just have a look at the tests to get some sample code.
2020-02-23 - Tags: morphium java messaging
Morphium can also do messaging ... wow ... right?
Why would you still need such a solution? How does it compare to others, such as RabbitMQ, MQSeries etc.? Those are the top dogs ... why another solution?
I would like to list some differences here
Morphium messaging is designed as a "persistent messaging system". The idea is to be able to "distribute" data more easily within MongoDB. Especially in connection with @Reference it is very easy to "transfer" complex objects via messaging.
Furthermore, the system should be easy to use, additional hardware should not be needed if possible. This was an important design criterion, especially for the cluster-aware cache synchronization.
Morphium is to be understood as a persistent queue, which means that new consumers can be added to a cluster at any time and they also receive older, unprocessed messages.
Because Morphium persists the messages, you can use any MongoDB client to see what is happening in terms of messaging. There is a rudimentary implementation of Morphium messaging in Python, which is used e.g. to display a messaging monitor showing the current status.
Morphium is super easy to integrate into existing systems; easiest, of course, if you already use Morphium. Because then it's just a three-liner:
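That three-liner presumably looks roughly like this (assuming an existing Morphium instance and the Messaging constructor from Morphium 4.x):

```java
Messaging messaging = new Messaging(morphium, 100, true);
messaging.addMessageListener((msgSystem, m) -> { System.out.println(m.getMsg()); return null; });
messaging.start();
```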
To get a "simple" producer-consumer setup in e.g. RabbitMQ, more lines and more setup are necessary.
The concept of Morphium also provides for an "answer": every MessageListener can simply return a message as a response. This answer is then sent directly to the recipient. Something that is not easily achieved in other messaging systems.
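A sketch of how answering might look (the helper createAnswerMsg() and sendAndAwaitFirstAnswer() are assumptions based on the Morphium 4.x API; verify against your version):

```java
// consumer side: reply to incoming messages
messaging.addMessageListener((msgSystem, m) -> {
    Msg answer = m.createAnswerMsg();
    answer.setValue("pong");
    return answer; // returned answers are sent back automatically
});

// producer side: send and wait for the first answer (timeout in ms)
Msg answer = sender.sendAndAwaitFirstAnswer(new Msg("ping", "ping", "ping"), 5000);
```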
An important feature is the synchronization of caches in a cluster. This runs via annotations on the entity; all you have to do is start the CacheSynchronizer, and everything runs automatically in the background.
Morphium also provides a way to "reject" an incoming message: every listener can throw a MessageRejectedException. The message is then no longer processed by the current messaging instance and is marked so that it can be processed by other recipients. This also happens if an error/exception occurs during message processing.
Morphium also supports JMS, but there is a bit of a logical and conceptual "break" ...
JMS sends messages to e.g. topics or queues ... in Morphium there is no, or only a limited, distinction: in that nomenclature, each Morphium message can be either a topic or a queue message.
If you send a message in Morphium messaging that is marked as "non-exclusive" (the default), then it is a broadcast: every participant can receive the message within its TTL (time to live). Whether you receive it is only decided by whether you have registered a listener or not.
Every Morphium messaging listener can get topics, queues, channels, and direct messages. That is more or less determined by the sender: the sender decides whether the message is exclusive (processed by exactly one recipient) or a broadcast.
"Consumed messages are gone" is something you read again and again about e.g. RabbitMQ. That is not the case with Morphium: the messages remain in the queue for a while and delete themselves when reaching their TTL. If I put a broadcast message with a lifespan of one day into the queue, this message can still be processed within that day. And it will be, e.g. when new "consumers" register (replay of messages).
This does not apply to exclusive messages, as you explicitly don't want to process them multiple times, i.e. the message is deleted after successful processing (only until V4.1.2 - after that, messages are only deleted when TTL is reached).
A Morphium message queue is therefore always filled to a certain extent, and that's a good thing.
Morphium messaging does not want to and cannot be a "replacement" for existing message systems and solutions. That was not the direct goal in development at all. There was a specific problem that could be solved most easily and efficiently.
Nevertheless, the areas of application of Morphium messaging are similar or overlap with other messaging systems. But that's in the nature of things. A migration from one system to another should definitely be possible in finite time. Morphium supports e.g. JMS, both as a client and as a server. This allows cache synchronization to be implemented using other, possibly already existing messaging solutions without having to forego the convenience of Morphium annotation-based cache definition. Or integrate Morphium into your own architecture as a messaging solution via JMS.
| Description | Morphium | RabbitMQ |
| ---- | :----: | :-----: |
| runs without additional hardware | X | - |
| nondestructive peek | X | - |
| high speed | - | X |
| high security | X | X |
| simple to use | X | depending on the use case |
| persistent messages | X | not mandatory |
| get pending messages on connect | X | - |
2020-01-09 - Tags: java
There will be a new feature in the next version of Morphium: automatic encryption and decryption of data.
This ensures that the data is only stored in the database in encrypted form and is decrypted when it is read. This also works with sub-documents. Technically, complex data structures are encoded as JSON and this string is stored encrypted.
We currently support two types of encryption: AES (default) and RSA.
here is the test for AES encryption.
There are a few classes you should know:
AESEncryptionProvider: an implementation of the interface EncryptionProvider offering AES encryption
RSAEncryptionProvider: an implementation of the interface EncryptionProvider offering RSA encryption
DefaultEncryptionKeyProvider: you store the keys yourself
PropertyEncryptionKeyProvider: the keys are read from properties; the keys themselves can be stored in encrypted form (AES encryption)
MongoEncryptionKeyProvider: reads the keys from the Mongo; collection and encryption key can be specified
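As a sketch of how this might be wired up (the @Encrypted annotation name and the key provider method are assumptions based on the classes listed above; check the Morphium docs for the exact API):

```java
import de.caluga.morphium.annotations.Entity;
import de.caluga.morphium.annotations.Id;
import de.caluga.morphium.driver.MorphiumId;

@Entity
public class Account {
    @Id
    private MorphiumId id;

    // assumed annotation: encrypt this field with the default (AES) provider
    @Encrypted
    private String creditCardNumber;
}

// register a key, e.g. with the DefaultEncryptionKeyProvider
// (method name assumed):
DefaultEncryptionKeyProvider keyProvider = new DefaultEncryptionKeyProvider();
keyProvider.setEncryptionKey("Account", "1234567890abcdef".getBytes());
```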
2019-08-20 - Tags: morphium java
A new Morphium version was released this week, and it contains one feature which I want to show here:
Morphium Messaging via JMS!
Here is a little proof-of-concept test from the source:
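The original snippet is not included here; a proof of concept along these lines might look roughly like the following (the JMSConnectionFactory entry point is an assumption; the rest is the standard javax.jms API):

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

// assumed Morphium JMS entry point, built on top of an existing Morphium instance
JMSConnectionFactory factory = new JMSConnectionFactory(morphium);
Connection connection = factory.createConnection();
connection.start();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

MessageProducer producer = session.createProducer(session.createTopic("test"));
TextMessage msg = session.createTextMessage("Hello JMS via Morphium!");
producer.send(msg);

connection.close();
```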
Consider this only a teaser; it is still BETA or even ALPHA, not production ready yet.
Unfortunately, some features of morphium won't work using JMS:
But again: this is not production ready, BETA state at best!
2019-07-29 - Tags: jblog
Some time ago I "dared" to leap away from Wordpress and reported about it (https://caluga.de/v/2017/5/19/jblog_java_blogging_software). Now it's been over 2 years (:astonished:) and it is time to draw a conclusion and report a little on how it went.
The good news first: the server was not hacked. The hits aimed at PHP vulnerabilities and the like all "bounced off" ineffectively; that's what I wanted to achieve. The update topic is also solved: there were none :smirk: as I took care of the further development myself. However, there were a few bugs, and some are a bit annoying. Also, some things have turned out to be less practical: the quite elaborate versioning of the individual blog posts is actually unnecessary and only complicates everything.
What is currently behind Jblog:
Yes, that was it already; the stack has become quite lean. I.e. the application currently runs as a Spring Boot app behind an nginx.
I built my blog the way I wanted it, very much tailored to my way of blogging, which may be the reason why nobody forked it on GitHub and I changed the project to private. :sob:
Jblog is consistently bilingual. Theoretically, more languages would be possible, but I wanted to focus on German and English. Since every element on the website is translated, all blog entries must be written in two languages.
For this a MessageSource is used which reads the data from Mongo. The nice thing: if a translation is not found, a link to the translation mask is issued instead. I.e. when I run Jblog on an empty translation database, I get a screen full of links that take me to the translation tool, and gradually everything gets filled in.
I use Morphium to access the Mongo, so these translations can also be cached (just add the @Cache annotation to the class).
The nice thing is that the cache, if wanted, is cluster-aware. I.e. I can run multiple instances of Jblog behind a load balancer, and they keep their caches in sync via the Mongo; there are no dirty reads even with a round-robin setting!
Markdown is indeed the hit lately for entering text. I think that's pretty cool too, especially since I type quite fast and reaching for the mouse always slows me down. That's why it's so easy to just type everything. (And whoever, like me, used to do word processing with LaTeX knows this :smirk:.)
Jblog is completely designed for Markdown. I.e. you enter your posts in Markdown and see the preview right away. I have also made some extensions, especially to easily integrate emojis (:confused: :smile: :sweat:) or embed a nicely formatted Java code block.
I also implemented the embedding of pictures.
I used a Markdown implementation called [Markingbird](https://github.com/pmattos/Markingbird) and adjusted it a bit: I built something to embed emojis and images more easily.
Each blog post is written in Markdown and will be rendered on first access. And the result is cached, of course: smirk:
When typing, I have a live preview and can see exactly how the post will look at the end.
new articles are published via Twitter. Some things have been taken into consideration: specify the time of the tweet, only for new articles etc.
And it is tweeted in 2 languages, German and English
Yes, you can actually use Jblog with several users. A few friends and acquaintances drop by here from time to time and might post something, but otherwise ... it is a very rarely used feature, which is why I deactivated it.
At the moment you cannot register; the registration process is disabled. If you want to leave a comment, you can do that via Disqus.
The implementation has changed a bit over the last 2 years. Launched on JDK 1.8 as a classic Spring Web project with WAR deployment, it is now a Spring Boot application with integrated Tomcat 9 on JDK 11.
I use AdoptOpenJDK V11 here because I need long-term support, so I do not have to rebuild my blog every 6 months.
Of course, this should also be a small "showcase" for morphium, so here are a few examples:
In order to keep the application simple multilingual, it is advisable to keep the translation data in the database. This has many advantages:
Of course these features come with a price tag:
Just the last point annoyed me, so I wrote the localization so that, if no translation is found, a link to the appropriate maintenance interface is issued. Although this had its strange effects (a link in a link, a link in the title etc.), on the whole it was really easy.
An important point was that with one mouse click I can show only those entries in the translation mask that still lack a translation. This is a quick way to a complete translation.
This is the message source:
And this is the service returning the html-link if the translation is missing:
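The original listings are not included here; a minimal sketch of such a Mongo-backed MessageSource, including the fallback link for missing translations, might look like this (the TranslationService and its methods are invented for illustration; AbstractMessageSource is the real Spring extension point):

```java
import java.text.MessageFormat;
import java.util.Locale;

import org.springframework.context.support.AbstractMessageSource;

public class MongoMessageSource extends AbstractMessageSource {
    private final TranslationService translations; // hypothetical Morphium-backed service

    public MongoMessageSource(TranslationService translations) {
        this.translations = translations;
    }

    @Override
    protected MessageFormat resolveCode(String code, Locale locale) {
        // look up the translation in Mongo (cached via Morphium)
        String text = translations.get(code, locale.getLanguage());
        if (text == null) {
            // missing translation: emit a link to the maintenance interface
            text = "<a href=\"/translate?key=" + code + "\">" + code + "</a>";
        }
        return new MessageFormat(text, locale);
    }
}
```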
This is very handy if you bring in new features.
If you did that for a bigger website, it could be awkward. Especially if you want to support more than 2 languages. Therefore, in such a case, an import / export of the data is recommended, preferably in a readable format for translation agencies!
There is another "problem" in translations: orphans! It happens quite often that during the development, translations are made for something that becomes obsolete over time. Then you have translations in the database that nobody needs anymore. It is not so easy to identify them because the keys for the translations are partly dependent on data and only available at runtime.
For such a case, Morphium offers the option of placing a last-access timestamp in a field. This lets you see when entries were last read. (Attention: use this only in conjunction with caching, since otherwise every read access to these elements results in a write access!)
If you put it all together, the following entity comes out:
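The entity listing is missing here; putting cache, cache sync and the last-access timestamp together, it presumably looked something like this (field names invented; annotation names as in Morphium 4.x, so verify against your version):

```java
import de.caluga.morphium.annotations.Entity;
import de.caluga.morphium.annotations.Id;
import de.caluga.morphium.annotations.LastAccess;
import de.caluga.morphium.annotations.caching.Cache;
import de.caluga.morphium.driver.MorphiumId;

@Entity
@Cache(clearOnWrite = true, syncCache = Cache.SyncCacheStrategy.CLEAR_TYPE_CACHE)
public class Translation {
    @Id
    private MorphiumId id;

    private String key;
    private String lang;
    private String text;

    @LastAccess
    private long lastAccess; // updated on every (uncached) read
}
```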
If there is a write access to the translations, the cache is completely emptied.
If you run Jblog in a cluster, you need a way to keep the caches in sync on all nodes. For this you define a SyncCacheStrategy. The following options exist:
SyncCacheStrategy.CLEAR_TYPE_CACHE: clears the whole cache for this entity
SyncCacheStrategy.REMOVE_ENTRY_FROM_TYPE_CACHE: removes the entry from all search results in the cache (caution: the element will not be output in cached search results, although it might fit)
SyncCacheStrategy.UPDATE_ENTRY: updates the entry in the cache (caution: the item in the cache may be counted in search results even though it no longer fits the criteria, a dirty read!)
SyncCacheStrategy.NONE: well ... just do not sync :smirk:
But for this to work properly, the application needs to know which language is being used. Various steps are used for this:
This was implemented with an interceptor:
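The interceptor code is not included here; a Spring HandlerInterceptor determining the language could look roughly like this sketch (class and attribute names invented):

```java
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.web.servlet.HandlerInterceptor;

public class LanguageInterceptor implements HandlerInterceptor {
    @Override
    public boolean preHandle(HttpServletRequest request,
                             HttpServletResponse response,
                             Object handler) {
        // 1. an explicit request parameter wins, 2. fall back to Accept-Language
        String lang = request.getParameter("lang");
        if (lang == null) {
            lang = request.getLocale().getLanguage();
        }
        // only German and English are supported; default to English
        if (!lang.equals("de")) {
            lang = "en";
        }
        request.getSession().setAttribute("lang", lang);
        return true; // continue request processing
    }
}
```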
To calculate the tag cloud we simply use the aggregation framework built into Mongo, and of course there is also support for that in morphium:
Here a simple aggregation is run on the BlogEntry entries stored in Mongo. The whole thing is filtered by white label (wl) (Jblog supports several white labels; for me it is boesebeck.biz, boesebeck.name and caluga.de) and only posts in the status LIVE are considered.
On this data, a projection is performed to extract the individual categories ($arrayElemAt). Then we count the occurrences of these categories and voila! A cloud ...
And so that the whole thing is mapped correctly in Java, you can put the result of the aggregation into an entity of its own. Then only the list of WordCounts is displayed in the frontend, with the corresponding size Sz; and to make it look "funny", the order is randomized ...
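A sketch of what such an aggregation might look like with Morphium's Aggregator (method names from the Morphium 4.x API, entity and field names invented; verify against your version):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import de.caluga.morphium.aggregation.Aggregator;

// result type (sketch): @Entity public class WordCount { String word; int sz; }
Aggregator<BlogEntry, WordCount> agg =
        morphium.createAggregator(BlogEntry.class, WordCount.class);

// filter: only LIVE posts for this white label
agg = agg.match(morphium.createQueryFor(BlogEntry.class)
                        .f("wl").eq("caluga.de")
                        .f("status").eq("LIVE"));

// projection: pull the first category out of the categories array
Map<String, Object> projection = new LinkedHashMap<>();
projection.put("word", Map.of("$arrayElemAt", List.of("$categories", 0)));
agg = agg.project(projection);

// count occurrences per category
agg = agg.group("$word").sum("sz", 1).end();

List<WordCount> cloud = agg.aggregate();
```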
Jblog uses caching quite often to keep performance high. In particular, the translations are cached. On the one hand you get the ability to change translations at any time via the frontend, without re-deploying or restarting; on the other hand, this data rarely changes, so it is good to keep it in memory to minimize database accesses and thus maximize performance.
With Morphium this is quite simple: just add the @Cache annotation to the entity.
Apart from the fact that morphium keeps its own statistics, e.g. the cache efficiency of individual entities (retrieved via morphium.getStatistics(), which returns a Map<String, Double> containing, among other things, the number of elements in the cache, the cache hit ratio, etc.), a blog needs some information about the accesses to it.
The whole thing is handled in Jblog by a service of its own, which counts the daily accesses and uses inc commands to increase the number of hits/visitors: `morphium.createQueryFor(Stats.class).f(Stats.Fields.id).eq(dateString).inc(Stats.Fields.hits, 1);`
Packing that into the Mongo is a bit awkward, but it is enough for my mini-blog. If I had more traffic, I would not do it that way, but use a dedicated time series DB, like Influx or Graphite.
This is a feature nobody really needs, and it can certainly be filed under "over-engineering". The idea: you have a history of changes for each blog post and can jump back to every change. That's quite nice, but in my case completely useless :smirk:
Here is the entity for a BlogPost:
and the embedded revisions:
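Those listings are missing here; a sketch of how an entity with embedded revisions might look (names invented; @Embedded marks a type that is stored inside the parent document):

```java
import java.util.ArrayList;
import java.util.List;

import de.caluga.morphium.annotations.Embedded;
import de.caluga.morphium.annotations.Entity;
import de.caluga.morphium.annotations.Id;
import de.caluga.morphium.driver.MorphiumId;

@Entity
public class BlogEntry {
    @Id
    private MorphiumId id;
    private String title;
    private String content;
    // full history of changes, stored within the same document
    private List<Revision> revisions = new ArrayList<>();
}

@Embedded
public class Revision {
    private long timestamp;
    private String content;
}
```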
As mentioned in the introduction, this is a feature that I will probably remove again. It does not really bring much, but it's nice to have something like that implemented.
Jblog is certainly not suitable for every blogger; the many features and the adaptability of e.g. Wordpress are completely missing here! If I want my blog to look different, I have to roll up my sleeves and rebuild the thing myself. There are no plugins.
However, I have to say that I've had virtually no problems with Jblog since it's been used. And that's also something ... for me as a private hobby blogger certainly one of the most important, if not the most important feature at all.
If anyone is interested in taking a closer look at Jblog, I'll put it online again; just leave a comment here, or send me a mail. At the moment the whole project runs as a private project on GitHub, because that makes it simple to work with e.g. deployment keys etc.
2019-02-26 - Tags: Apple MacMini OSX
originally posted on: https://boesebeck.name
I have been a Mac user for quite some time now and have always been happy this way. Apple managed to deliver a combination of operating system and hardware that is stable, secure and easy to use (even for non-IT guys) but also has a lot of features for power users.
(i already described my IT-history at another place https://boesebeck.name/v/2013/4/19/meine_it_geschichte)
My iMac, which I had used for quite some time (since the beginning of 2011), died in a rather spectacular way (for a Mac): it just made a little "zing" and was off. I could not switch it back on; broken beyond repair ... :frown:
So I needed some new hardware. Apple unfortunately missed the opportunity at the last hardware event at the end of 2018 to put newer hardware into the current iMacs; there is still a more than 2-year-old CPU working in there. Not really "current" tech, but quite expensive.
The pricing of Apple products is definitely something you could argue about. Hardware prices were increased for almost everything, same as the costs for new iPhones. This is kind of outrageous ...
In this context, the new MacMini is a very compelling alternative. The "mini Mac" always was the entry level mac and was the cheapest one in the line up. Well, you need to have your own keyboard, mouse and screen.
Now the MacMini finally got some love from Apple. The update is quite good: a recent CPU, a lot of useful (and fast) ports: 4x Thunderbolt 3, 2x USB-A 3.0, HDMI. This is the Mac for people who want a fast desktop but do not want to pay 5000€ for an iMac Pro.
I was a bit put off by the MacMini at first, because it does not have a real GPU. Well, there is one from Intel, but you could hardly call it a Graphics Processing Unit.
That always was the problem with the MacMini: if you want to use it as a server, fine (I have one to back up the photo library). But as a media PC? Or even a gaming machine? No way ... as soon as decent graphics were involved, the MacMini failed.
But with thunderbolt 3 you can now solve this "problem" using an eGPU (external graphics card). How should that work? External is always slower than internal, right?
Well, not always. Thunderbolt 3 is capable of delivering up to 40 GBit/s transfer speed and current GPUs only need 32 GBit/s (PCI Express x16). This sounds quite ok ... (although there is some overhead in the communication)
And it is quite ok. I bought the MacMini with an eGPU and I am astonished how much power this little machine has. Granted, all the connectors, cables, dongles etc. do not look as tidy as the good old iMac. And the best thing: if you want to upgrade because there is a better eGPU, fine ... or upgrade the MacMini and keep the eGPU. A gain in flexibility!
Of course, my 8 year old iMac cannot keep up with the current MacMini, that would be an unfair comparison. But I have to admit that the 2011 iMac was a lot quicker when it comes to graphics performance. So for gaming the Mini is not the right choice.
The built-in hard disk is, of course, an SSD. Unfortunately it is soldered in and cannot be replaced. But it is blazingly fast and reads/writes at up to 2000 MB/sec.
If I look at my GeekBench results for the Mini, the single-core benchmark is similar to the current iMac Pro with a Xeon processor. That is truly impressive. But, of course, in the multi-core benchmark the Mini can't keep up; it just does not have enough cores to compete with an 8-core machine. I have the "bigger" MacMini with the current generation i7 CPU.
I plugged in (or rather onto) an external Vega 64 eGPU. This way I could compare the graphics performance with other current machines using the Unigine benchmarks. In those benchmarks, my Mini has about the same speed as an iMac Pro with the Vega 64. This is astonishing.
Well, how much does all this performance cost? Is it cheaper than a good speced iMac 27"?
The calculation is relatively simple. To get something comparable in an iMac you need to take the i7 processor, although this one is about 2 generations behind. As SSD storage, 128GB is probably not enough; 512GB sounds more reasonable. Anything else can be attached via Thunderbolt 3. A Samsung X5 SSD connected via Thunderbolt 3 is even faster than the internal SSD, so no drawback here.
You should upgrade the memory yourself, as Apple is very expensive here: doing it yourself, an upgrade to 32GB costs about 300€; Apple charges 700€!
But for the comparison the RAM is not important, as I would do exactly the same with the iMac. So let's put it together. Right now, an eGPU case is about 400€, then a Vega 64, also about 400€; the MacMini is about 1489€, plus 250€ for a screen (LG 4k, works great) and an additional 100€ for mouse and keyboard. All in all you end up at about 2639€ +/- 200€!
Just for comparison: the iMac would cost about 2839€, but in this configuration it would be slower than the Mini. With the Vega 64 and a comparable CPU, the Mini in this configuration is more comparable to the base model of the iMac Pro, which costs 5499€ (but still has a slower GPU!).
The new MacMini is definitely worth a thought, considering the costs in comparison to other Macs, especially since you do not have to buy everything at once (buy the MacMini now, the RAM upgrade 3 months later, the eGPU case another 3 months later, and the graphics card after that). The biggest disadvantage of the Mini is that you now have more cables on your desk compared to the iMac ...
I do have the Mini now running for some months and I love it! If you need a desktop, the MacMini is worth a try! Even compared with a MacBook!
2018-11-09 - Tags: mongo mongodb java morphium
What was that again? Morphium is a sophisticated Object Mapper and Messaging System for MongoDB. It adds a lot of features to MongoDB, for example a dedicated InMemoryDriver, so that you can run all your Tests just in RAM without the need to install a MongoDB.
And good things need some time ... so we are happy to announce that we just released Morphium 4.0.0. This release contains a ton of changes and improvements.
this is a big update which took 8 Release Candidates to test.
Morphium is available at maven central:
or at github.
2018-10-19 - Tags: mongodb java morphium
We went quite a long way to get here, but... eventually we will
We put a lot of time and effort into this new Release Candidate #5 of Morphium, and we are getting close to the major release.
Morphium started as a simple object mapper for MongoDB 1.0 which made querying in Java simple using a fluent API. In addition there were a lot of features unique at that time, like automatic dereferencing and lazy loading of references, caching, cluster-aware cache synchronization etc.
Messaging was implemented in an early stage in order to get cache sync to work.
Messaging became more and more of a core feature of morphium and one USP of the API. So we gave this feature some love with this release ...
With this major release we added a lot of features and enhancements. We still need to adapt documentation though, but we will. And then we issue the release - promise
This is some of the new features of Morphium 4.0.0:
Using MongoDB's watch functionality we achieve push functionality for messaging: maximum performance, minimum load (only available on replica sets)
The codebase is already in a very well-tested state and is used in production environments in some places.
Morphium is OpenSource and can be downloaded (including Releases and Release Candidates) from github.
Or, even easier, from maven central:
<dependency>
    <groupId>de.caluga</groupId>
    <artifactId>morphium</artifactId>
    <version>4.0.0-RC5</version>
</dependency>
2018-08-15 - Tags: java mongodb morphium
We are still working on getting Morphium 4.0.0 done. We are a bit behind schedule, but want to explain here what is going on at the moment:
Unfortunately V4.0.0 is not 100% compatible with its predecessors. It might be necessary to migrate data - although that is not very probable. If you need assistance with that, please contact us on google-groups or github.
All of that is taking more time than estimated. But BETA4 of Morphium 4.0.0 can be downloaded from maven central:
<dependency>
    <groupId>de.caluga</groupId>
    <artifactId>morphium</artifactId>
    <version>4.0.0-BETA4</version>
</dependency>
In order for Morphium to work, you need to add the MongoDB driver libs to your dependencies. We tested Morphium with V3.8.0 of the driver libs, but later versions should also work. (This is one of the reasons why Morphium does not declare a direct dependency on the driver! Use the version you already have in your project; if you have none, use 3.8.0.)
<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>bson</artifactId>
    <version>3.8.0</version>
</dependency>
<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongodb-driver-core</artifactId>
    <version>3.8.0</version>
</dependency>
<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongodb-driver</artifactId>
    <version>3.8.0</version>
</dependency>
2018-07-17 - Tags: morphium java mongo
MongoDB released the long awaited V4.0 just recently. And of course there are a lot of new features, but the most exciting one was probably the support for multi-document ACID transactions!
The mongo driver has supported all those features since V3.8.0 - and Morphium supports all of that with V4.0.0, to match the MongoDB version. The current BETA1 should be available via maven central or from github.com.
Of course, this had to be added. But it is quite simple to use:
Attention: if your MongoDB does not support transactions, this will cause an exception.
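A minimal sketch of how transaction usage might look (the method names startTransaction/commitTransaction/abortTransaction on the Morphium instance are how I remember the API - check the current javadoc; Account is a hypothetical entity used only for illustration, and a transaction-capable replica set is required):

```java
import de.caluga.morphium.Morphium;

public class TransactionSketch {
    public static void transfer(Morphium morphium, Account from, Account to, double amount) {
        morphium.startTransaction();          // begin a multi-document transaction
        try {
            from.setBalance(from.getBalance() - amount);
            to.setBalance(to.getBalance() + amount);
            morphium.store(from);             // both writes...
            morphium.store(to);               // ...belong to the same transaction
            morphium.commitTransaction();     // make them visible atomically
        } catch (Exception e) {
            morphium.abortTransaction();      // roll everything back on error
        }
    }
}
```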
We put a lot of effort into messaging, simplified the API at some places and added some new features. Messaging now supports kind of "push" notifications, if connected to a replicaset.
Using the new watch functionality of mongod, which is nothing but another way to access the oplog, we could implement a feature to synchronize the cache via database push.
It works similarly to the messaging approach, except that messaging is not used: the notification about changes comes directly from the database. The
@Cache annotation is used to determine how to delete / update the cache.
we introduced a new cache synchronizer for that:
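Wiring that up might look like this - note that the class name WatchingCacheSynchronizer is my assumption based on Morphium's naming conventions; please check the current API:

```java
// sketch, assuming a watch-based cache synchronizer class exists in morphium 4
WatchingCacheSynchronizer sync = new WatchingCacheSynchronizer(morphium);
sync.start(); // listens to the change stream and clears/updates caches on writes
```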
We re-included and improved the In-Memory-Driver in Morphium. So you can more easily mock access to MongoDB, which is very important when testing.
Transactions are not working (yet) and aggregation might come in some later version of the driver. All other, more basic functionality is working.
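Switching a test setup to the in-memory driver could look roughly like this (the constructor arguments and the setDriverClass method name are assumptions on my part - consult the MorphiumConfig documentation):

```java
// sketch: run tests completely in RAM, no mongod needed
MorphiumConfig cfg = new MorphiumConfig("testdb", 10, 60000, 10000);
cfg.addHostToSeed("inMemory");                          // placeholder host
cfg.setDriverClass(InMemoryDriver.class.getName());     // assumption: driver is selected by class name
Morphium morphium = new Morphium(cfg);
```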
2018-07-17 - Tags:
originally posted on: https://boesebeck.name
there is no english version of this text available
Note: This text was provided by homepage-erstellen.de
Once you have mastered the first hurdles of creating a homepage, you have to take care of a suitable layout. A clear and appealing layout ensures that relevant content can be found more easily and that visitors are more likely to return.
When it comes to the layout of a homepage, you first have to consider what purpose the homepage is supposed to serve. Is a product being presented? Do you want to inform about a company's services? Or do you use the homepage to raise awareness of a personal concern? It is important that all relevant information can be found at any time. A good layout consists of headings, images, footers and columns. This way, information is sensibly pre-filtered and can be grasped at a glance. This increases usability for visitors and the likelihood that they will visit the site again later. Colors and shapes are perceived first in a layout. A colorful layout can be suitable for a portfolio that is meant to express creativity, for example, but hardly fits certain companies or service providers. For those, it is important that the information on every product can be found immediately.
According to www.homepage-erstellen.de, a sidebar can prove very useful for visitors. However, it should not contain the most important content, but mainly summarize supplementary information. The alignment does not play a big role; the sidebar can be placed on the right as well as on the left side. A logo should be placed in the upper left corner. On e-commerce sites, the shopping cart is usually placed in the right corner. The search field is often located right next to, or in close proximity to, the cart.
2018-05-20 - Tags: java mongodb morphium cache
Since its first version, Morphium has provided an internal cache for all entities marked with the annotation
@Cache. This cache was configured mainly via those annotations.
This cache has proven its usefulness in countless projects and the synchronizing of caches in clustered environments via Morphium Messaging is working flawlessly.
But there are new and more sophisticated cache implementations out there. It would not be clever to build all those features into Morphium as well; better to leverage those projects. So we decided to include JCache support (JSR 107) in Morphium.
Of course, we had to adapt some things here and there, especially the
MorphiumCache interface needed to be overhauled.
Morphium itself always had the option to use a custom MorphiumCache implementation. But this was not always easy to achieve in your own projects. Hence we use that now in order to be able to offer both the old, proven implementation and the new, JCache-based one.
As always, Morphium can be used out of the box, so we implemented a JCache version of our cache in Morphium as well.
With the upcoming V3.2.2BETA1 (via maven central or on github), Morphium will use the JCache-compatible implementation. If you want to switch back to the old, proven version of caching, you just need to change the config:
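The config change might look roughly like this - the setter name and implementation class names are assumptions on my part, so verify them against the release:

```java
// sketch: explicitly use the old, proven internal cache instead of the JCache-based default
MorphiumConfig cfg = new MorphiumConfig();
cfg.setCache(new MorphiumCacheImpl()); // assumption: config accepts a MorphiumCache instance
```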
If you create your
MorphiumConfig via properties or via JSON, you need to set the class name accordingly:
If you leave all those settings at their defaults, the JCache API is used. By default the cache creates the cache manager using
Caching.getCachingProvider().getCacheManager(). This way you get the default cache manager of the default caching provider.
If you want to configure the cache on your own (ehcache properties for example), you just need to pass on the CacheManager to the morphiumCache:
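A sketch of that, assuming the MorphiumCache interface exposes a setter for the JCache CacheManager (the setter name is my assumption):

```java
import javax.cache.CacheManager;
import javax.cache.Caching;

// get a CacheManager from your preferred provider (e.g. ehcache), configured as you like
CacheManager mgr = Caching.getCachingProvider().getCacheManager();
morphium.getCache().setCacheManager(mgr); // assumption: setter name may differ
```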
Of course, in this example no additional options are set, but I think you see how that might work.
BTW: the Morphium-internal JCache implementation can also be used via the JCache API in your application, if you want to. Just add the system setting
-Djavax.cache.spi.CachingProvider=de.caluga.morphium.cache.jcache.CachingProviderImpl and with
Caching.getCachingProvider() you will get the Morphium implementation of the cache.
Attention: all JCache implementations support expiration of the oldest / least recently used entries in the cache. Unfortunately Morphium's policy is a bit more complex (especially regarding the number of entries), so Morphium implements its own JCache housekeeping for now.
Additional info: Whatever cache implementation you use, you might still use the CacheSynchronizer in order to synchronize caches. And this synchronization should work via mongo even if you are not storing any entities and just use the cache as an application cache!
<dependency>
    <groupId>de.caluga</groupId>
    <artifactId>morphium</artifactId>
    <version>3.2.2BETA1</version>
</dependency>
There are some minor known bugs in the current Beta, you might want to know:
2018-05-06 - Tags: java programming morphium
One of the many advantages of Morphium is the integrated messaging system. This is used for synchronizing the caches in a clustered environment, for example. It has been part of Morphium for quite some time; it was introduced with one of the first releases.
Messaging uses a sophisticated locking mechanism in order to ensure that messages intended for a single recipient are only processed by that recipient. Unfortunately this is usually solved using polling, which means querying the DB every now and then. Since Morphium V3.2.0 we can use the OplogMonitor for messaging. This creates a kind of "push" for new messages, which means that the DB informs clients about incoming messages.
This reduces load and increases speed. Let's have a look how that works...
As mentioned above, with V3.2.0 we need to distinguish two cases: are we connected to a replica set (only then is there an oplog the listener could listen to) or not?
"No replica set" is also true if you are connected to a sharded cluster via mongos. Here messaging also uses polling to get the data. Of course, this can be configured: how long should the system pause between polls, should messages be processed one by one or in bulk...
It all comes down to the locking. The algorithm looks like this (you can have a closer look at
Messaging.java for more details):
The OplogMonitor has been part of Morphium for quite a while now. It uses a
TailableCursor on the oplog to get informed about changes. A tailable cursor stays open even if there are no more matching documents, and sends all incoming documents to the client. So the client gets informed about all changes in the system.
With Morphium 4.0 we use the change stream instead of the oplog to get informed about messages. This works just as efficiently, but does not need admin access.
So why not use a TailableCursor directly on the Msg collection then? For several reasons:
Messaging based on the OplogMonitor looks quite similar to the algorithm above, but the push approach simplifies things a bit. On new messages, this happens:
Usually, when an update on messages comes in, nothing interesting happens. But to be able to reject messages (see below), we just start the locking mechanism to be sure.
Well, that is quite simple. Just create an instance of
Messaging and start it.
Of course, you could instantiate it using Spring or something similar.
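A sketch of the setup (the constructor parameters - polling pause in ms and multithreaded processing - are how I remember the Messaging constructors; check the current javadoc):

```java
// create messaging on top of an existing, connected Morphium instance
Messaging messaging = new Messaging(morphium, 100, true);
messaging.start(); // starts the processing thread (polling or push, depending on setup)
```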
To send a message, just do:
This message has a TTL (time to live) of 5 secs. The default TTL is 30 secs. Older messages are automatically deleted by MongoDB.
Messages are broadcast messages by default, meaning all listeners may process them. If you set a message to be exclusive, only one of the listeners gets the permission to process it (see locking above).
This message will only be processed by one recipient!
And the sender does not read its own messages!
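Putting that together, sending might look like this - the Msg constructor taking name/message/value/ttl and the setExclusive setter are how I remember the API, so treat this as a sketch:

```java
// broadcast message with a ttl of 5 seconds
messaging.sendMessage(new Msg("testMsg", "a message", "a value", 5000));

// exclusive message: only one listener will get to process it
Msg m = new Msg("testMsg", "exclusive message", "a value", 5000);
m.setExclusive(true);
messaging.sendMessage(m);
```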
Of course, you can send a message directly to a specific recipient. This happens automatically when sending answers, for example. To send a message to a specific recipient you need to know its UUID. You can get that from messages being sent (the sender field, for example), or you implement some kind of discovery...
In the integration tests of Morphium both methods are used. The difference is quite simple:
storeMessage stores the message directly to MongoDB, whereas
queueMessage works asynchronously - which might be the better choice when it comes to performance.
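In code, the difference is just the method you call (sketch, assuming both methods take a Msg):

```java
messaging.storeMessage(msg); // synchronous: message is in mongodb when the call returns
messaging.queueMessage(msg); // asynchronous: message is written in the background
```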
Just register a message listener with the messaging system:
Here, messaging is the messaging system and
message is the message that was sent. This listener returns
null, but it could also return a Message that would be sent back as an answer to the sender.
Using the messaging object, the listener can also publish its own messages, which need not be answers.
In addition to that, the listener may "reject" a message by throwing a
MessageRejectedException - then the message is unlocked so that all clients may process it again (unless it was sent directly to the rejecting client).
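A listener sketch - the addMessageListener method and the listener's shape are how I remember the API, and canHandle/process are hypothetical helpers for illustration:

```java
messaging.addMessageListener((msg, m) -> {
    if (!canHandle(m)) {
        // unlock the message again for other clients (unless it was sent directly to us)
        throw new MessageRejectedException("cannot process this right now");
    }
    process(m);  // hypothetical business logic
    return null; // or return a Msg to answer the sender
});
```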
Within Morphium, the
CacheSynchronizer uses messaging. It needs a messaging system in the constructor.
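Wiring it up is basically a one-liner (sketch; the argument order of the constructor may differ in your version):

```java
// from now on, writes to cached entities trigger clear-cache messages in the cluster
CacheSynchronizer sync = new CacheSynchronizer(messaging, morphium);
```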
The implementation is not that complicated. The CacheSynchronizer just registers as a
MorphiumStorageListener, so that it gets informed about all writing accesses (only then do caches need to be synchronized).
On write access, it checks if a cached entity is affected, and if so, a
ClearCacheMessage is sent using messaging. This message also contains the strategy to use (like clear the whole cache, update the element and so on).
Of course, incoming messages also have to be processed by the CacheSynchronizer. But that is quite simple: when a message comes in, erase the corresponding cache mentioned in the message according to the strategy.
You can also send those clear messages manually by accessing the CacheSynchronizer directly.
And we should mention that you can be informed about all cache sync activities using a specific listener interface.
The messaging feature of Morphium is not well known yet. But it can be used as a simple replacement for full-blown messaging systems, and with the new
OplogMonitor feature it is even better than it ever was.
2018-05-02 - Tags: morphium java mongodb mongo POJO
A new pre-release of Morphium is available now: V3.2.0Beta2. It includes a lot of minor bugfixes and one big feature: messaging now uses the OplogMonitor when connected to a replica set. This means no polling anymore - the system gets informed via a kind of push!
This is also used for cache synchronization.
Release can be downloaded here: https://github.com/sboesebeck/morphium
The beta is also available on maven central.
This is still Beta, but will be released soon - so stay tuned
2017-11-21 - Tags: java mongo
We just released V3.1.7 of Morphium - the caching MongoDB POJO layer.
Details can be found at the project page on github. You can easily add it to your projects using maven:
<dependency>
    <groupId>de.caluga</groupId>
    <artifactId>morphium</artifactId>
    <version>3.1.7</version>
</dependency>
2017-09-29 - Tags: morphium
This release is about tidying up things a bit and includes some minor fixes and tweaks.
Available via Maven Central
2017-08-21 - Tags: git qnap storage
For quite some time now, I have had a QNAP running in my basement, serving whatever storage needs my servers or family members might have.
The QNAP is also used as a git server - which was totally fine for the last couple of years, but failed recently...
I just saw that there was a firmware update pending for the TS and that this is more or less the last one for this old model (hey, the QNAP is not that old, is it? 3 years, maybe?). There also was a warning in the release notes that the switch to 64 bit might cause some apps not to work anymore...
So far, so uninteresting. The usual blah blah... Unfortunately,
Optware (which installs additional open source software) is obviously only available in 32 bit.
But this is something you will only learn the hard way: the software just would not work. Trying to access its GUI will just result in "internal server error", "page not found" or simply "permission denied" - depending on what you just tried to make it work.
If you log in via ssh and try to use
ipkg in the shell, you will get a
file not found error, even though you executed the file with its absolute path.
The linux gurus know - this means some lib is missing!
And in that case, I do not need to dig further - the 32 bit libs are not there anymore.
That alone would not be a problem, but everything installed by Optware also relies on those libs and hence fails now... bummer!
... you install an alternative to Optware called Entware. You download the file and install it via the QNAP GUI. Unfortunately this tool does not have a "nice" GUI for this (the so-called GUI of Optware was never nice), just a command line tool called
opkg (mind the o).
After that you only need to create symlinks for the git binaries (after fiddling with the sshd to enable PK auth and more than just the admin user):
And as always - happy hacking!
2017-05-29 - Tags: virus security
originally posted on: https://boesebeck.name
As some of my readers are not that good at reading and understanding German, I'll try to write some of my posts which might be interesting in English as well. I hope everything is understandable so far - this is not a translation, just a rewrite in English. Let's start with the last post, about anti-virus software.
People are more and more concerned about viruses. Also Mac users are starting to worry about that threat. So, is it necessary to install anti-virus software on the Mac? I have been asked that question several times lately...
First of all, this question is totally justified. Everyone should harden their computers and phones as far as they feel necessary. Actually, installing anti-virus software on the Mac would not produce much more than a feeling. As of now there is only a handful of harmful software known for the Mac, all of which is filtered by the Mac's own security mechanisms and thus is not really a threat anymore.
"Soon it will be very bad for Mac users. Viruses will come..."
I hear that every year, when the new market share numbers are published and OSX gains. Then everybody tells me that the market share will soon reach some magic percentage at which it will be so interesting for virus programmers to write viruses for Macs that there will be a flood of malware. Or will there?
Of course, market share definitely influences the amount of malware for a certain system. But in addition to that, you should take the necessary effort and feasibility into account. And the payoff... (in terms of malware: what could I gain? Keylogging? A botnet?)
I think one should take both into account: if a system is easy to hack, it will be hacked, even if almost nobody is using it. If a system's market share is not that high, but it is relatively simple to hack - it will be hacked! For example: the Microsoft Internet Information Server (IIS) is attacked far more often than the market share leader Apache. When a system is very hard to hack, you need a good incentive to take the effort. Which could be the reason why there is no real virus for Linux or OSX.
And when I write "hacked", it is more in the sense of a virus - not remote hacking of user accounts. And: it needs to be done more or less automatically by software, otherwise there will be no real virus or worm. If somebody wants to hack a certain machine and has the knowledge, he can do it - depending on resources, effort and motivation ;-) I once knew a hacker you could hire to hack the servers of a competitor, for example. Those things are always possible. But this is almost always an administrative problem; there is no real protection against those guys. You can hack any machine you can physically touch - resources and motivation required, of course. Best example: the jailbreaking of iOS! And if there is enough motivation, enough resources and enough knowledge, you're not really safe (see NSA & co). So it's a question of effort: hacking the machine of a 14 year old student is definitely not as interesting as hacking the machine of the CEO of a big company or a politician.
The same is true for malware and viruses: malware is not developed for the fun of it (well, at least most of the time it's not). People want to make money with it. This is the only reason why there are viruses! Maybe that's the reason why there is still the rumor that the anti-virus software vendors actually pay some virus developers to spread viruses every once in a while. Who knows... I cannot rule that out for sure. I met some Russian guys who claimed that to be true. If so, then I don't understand why there is so little malware for Linux and OSX. That would be a huge market for anti-virus software vendors - millions of users, a complete new market segment worth millions or billions of dollars.
I think viruses are only developed to directly (data theft, credit card fraud etc.) or indirectly (spamming, using hacked machines as bots on the way to the real target, botnets etc.) MAKE MONEY! And when money is involved, the effort and resources necessary must of course be lower than the estimated revenue. So we are back at the combination of effort and market share. Market share influences the potential revenue (assuming that when more machines are hacked or affected by malware, more money is made); effort is the cost. And in some cases this is obviously not a positive figure...
First of all, you need to distinguish between the different kinds of malware. In the media and in the heads of non-IT people, every piece of malware is called a "virus". But in order to protect yourself effectively, it is important to know exactly what you're dealing with. You can classify three different kinds of malware: viruses, trojans and worms - and there are some mixtures of those in the wild, like a virus which spreads like a worm - hence the umbrella term "malware".
A virus is a little program which reproduces itself on the system and does its dirty work there. Most of the time, viruses exploit some security holes in order to gain more privileges. Once those privileges are gained, the virus will do things you usually do not want it to do - like deleting things, or sending data to a server...
A trojan is most similar to a virus, but needs the user's help to get installed. Usually it looks like some useful piece of software, a tool of some kind, but in addition to the functionality you desire, it also installs some malware on the system. Usually the user is asked to grant the software more access - on OSX at least. But even if it does not seek privilege escalation, your data is still at risk. See wikipedia.
A worm is a piece of malware that is capable of spreading itself over the network (either locally or over the internet, see wikipedia). You can easily protect yourself against worms by just unplugging the network from your computer (and/or disabling WiFi) or at least disabling internet access. Sounds insane, but I myself have been at offices and departments that do exactly that: they are unplugged from the internet in the whole building; only a certain room, which is specially secured, has internet access - but no access to the local network.
Ransomware is usually a trojan which then uses bugs in the system to encrypt all data. And you can only decrypt it if you send a couple of bitcoin to the author.
You always get such "warning messages" on the Mac if any malware wants to do something out of the ordinary that needs system privileges. Exactly that happened a couple of months ago, when there was a trojan that was installed using Java and a security issue therein. But still, the users were asked to grant the software more privileges. And enough people just said "yes" to every question...
Please do not get me wrong, I do not want to downplay malware. It is out there, and it causes a lot of harm and costs. But you can protect yourself from trojans more or less by using common sense:
It gets harder if the trojan uses its newly gained privileges to hack the system itself, maybe even exploiting additional security issues, so that the user is not asked. Then a secure operating system architecture helps to avoid those kinds of things - which is usually implemented by all Unix OSes.
Viruses and worms cannot be avoided so easily, since they exploit bugs in the system. But even then, Unix-based systems are a bit better suited for that case than others.
This is due to a very strict separation between system and user processes, and between the users themselves. And, especially on OSX, we have sandboxing as an additional means against malware. Also, the graphical user interface is not bound as tightly to the operating system kernel as it is in Windows NT, for example.
But overall, the admin of the system is the one who really determines how secure a system is. He should know about the problems his OS has and take countermeasures accordingly.
If we are talking about malware, we should also have a closer look at mobile devices. Smartphones and the like are often attacked because they hold a lot of interesting data that is worth a lot of money. Or you can just make money directly (e.g. by sending expensive SMS).
To "break into" such a closed system, security-relevant bugs are very often exploited. But sometimes plain social engineering is also successful.
Usually the user is then made to perform some action that involves downloading something which installs a trojan on the system, or which opens the system so that the attacker can install some malware. Or you just "replace" an official app in the corresponding app store.
Trojans on the smartphone are usually disguised as little useful tools, like a flashlight app. But they then copy the address book and send out expensive text messages, switch on video and audio for surveillance, and so on.
It's hard to actually do something against that, because you do not know whether the app you install does something evil or not. Apple is trying to address this problem with the mandatory review process that all apps in the App Store need to pass. All apps need to pass an automated and a manual check before anyone can download them. The apps are, for example, not allowed to use unofficial APIs (for accessing the internals of the OS), and an app must do exactly what its description tells the users it does.
This is no 100% protection, but it is quite good (at least, I do not know of any malware in the App Store right now).
But I would also call WhatsApp, Viber and the like malware. They do exactly what a trojan would do: grab data and upload it to a server. But here the user happily agrees and likes it... but that is a different topic.
On iOS, users are a bit more secure than on Android (if you do not jailbreak your iPhone). Android is based on Unix, but some of the security mechanisms within Unix have been "twisted". So there is a "kind of" sandbox, created by adding a new user for every app on the device. So all processes are separated from each other. Sounds like a plan. But then you end up having problems with access to shared resources, like the SD card. This needs to be globally readable!
Also, the security settings of apps could at that time only be "all or nothing" (that changed in later versions, at least a bit). So you can either grant the app all the permissions it wants, or no permissions at all.
The problem is, you need to set the permissions before actually using the app. This makes it very easy for malware programmers, as people are used to just allowing everything the app asks for.
In addition to that, Android apps have the option to download code over the internet - this is forbidden on iOS. And there is a reason for it: how should any reviewer find out whether the downloaded code stays the same after the review? Today I download weather data, tomorrow some malware which sends chargeable text messages?
Another problem is that there is not one single store for Android, but more like a quadrillion of them. Hence you can install software from almost any source onto your Android device.
Of course, every OS has bugs which might be used to execute good or evil code on the device. Hence there are regular updates for those OSes which should fix security-relevant bugs and issues. With iOS you can be sure that you get updates for your device and its OS for at least a couple of years (current iOS versions still run on 3 to 4 year old hardware). With Android it is not as easy to make such a statement, as support strongly depends on the vendor. It may be that support for devices no older than 1.5 years is stopped. Especially the cheap Android phones lose support quite early, which means there are still Android 2.x devices out there (and you actually can still buy new devices with that installed) - including all the bugs that the old OS version had, which makes it quite interesting for malware authors.
In combination with the somewhat more insecure system and the insecure sources of software, this makes Android a lot more prone to being hacked or infected by malware. And this makes it especially interesting for the bad guys out there.
This leads to really ridiculous things like virus scanners and firewalls for smartphones. Read about it here in German.
You can say about Apple what you want, but the approach of reviewing every app for the App Store is a good thing, at least for the end user (and by that I do not mean the power user who wants to have his own version of Winterboard installed). Even if you are not allowed to do "everything" with your phone - normal users usually do not need to.
And the power user can still jailbreak his iPhone - and if he knows what he is doing, it might be OK.
Unfortunately viruses, trojans and malware in general can use any bug of any software on the system, no matter if it is part of the OS or not. So a breach can happen via third-party software - like the "virus" that infected a couple of thousand Macs through an installed Java. In this case, the user was again asked several times(!) whether he wants to grant admin permission to a Java app - if you agree to that, your system is infected. If not - well, nothing happens. Common sense is a great "intrusion prevention system" here.
Of course, OSX or any other operating system cannot prevent third-party software from doing dubious things - especially if the user agreed to it. But the malware is only able to gain the permissions that the software used as a gateway has. And on OSX and iOS all applications run in a sandbox with very limited permissions. If the app a malware uses as a gateway does not have admin permissions, well, the malware won't have them either.
If all third-party software you run on your system only has minimal permissions, then malware that uses it as a gateway would also have minimal permissions, could not do too much harm, and could easily be removed.
But the thing is, just getting access as a normal user is not the goal of such a virus vendor - they want your machine to be part of a botnet in order to sell your computing power or to use it in the next DDoS attack. Or just use it as a spambot.
It is also in the best interest of the virus vendor to make it as hard as possible to remove the software from the system. So everything needs to be buried deeply in the system files, where normally no user takes a closer look.
And this is usually only possible if the malware gets admin permissions. It could use "privilege escalation" hacks in order to gain more permissions - in the best case, without the user knowing.
Usually the user should be asked when any process tries to gain more permissions, and the user may or may not agree (that happens every time a process tries to do something outside of the sandbox). Of course, that would be bad for the virus, as it would reduce its success. So virus vendors try hard to avoid this kind of informing or asking the user.
On Unix systems this is quite a hard task, or at least a lot harder than on Windows (see here or here). In almost all cases, on OSX the user is informed about software doing something strange.
But there is one thing, we should think about even more: if any software could be used as a gateway, I should reduce the number of programs on it to a minimum (especially those, with network functionality... which is almost any app nowadays). Especially I should keep software that runs with admin permissions do the absolute minimum - which is 0! Unfortunately, virus scanners and firewalls and such "security" software, need admin permissions to do their job. This is one of the reasons, why anti virus software is very often target of attacks from malware and viruses and end up as spreading the very thing they try to protect us from. (this has happened on windows machines)
Add to that the fact that antivirus software can only detect viruses that have been publicly known for a while, and you actually do not increase your protection much by installing it on your machine.
The same goes for firewalls, which unfortunately have their use on Windows systems, but not on unixes or OSX. How come?
Well, on unix systems the network services are usually disabled, or not even installed! So the visible footprint of such a machine on the internet is quite low.
Windows, on the other hand, depends on some network services to run, even if you do not actively use them. Disabling those services (and SMB is one of them - this was used by WannaCry!) would affect the system in a bad way, and some things would not run as expected (see here).
Hence, if your system has a minimal footprint - or attackable surface - you do not need a firewall.
Btw: do not mix up this local firewall with a real IP-filter firewall installed in routers!
So, there is a lot that explains why using virus scanners on the desktop (especially a unix desktop) can have negative effects, or at least no effect. You're probably fine without them...
But on servers, things look a bit different.
If I have clients that are not well maintained, or that I just do not know (or that just run Windows), I want to avoid storing data on my server that could infect them. So even if the viruses do not infect my server or my Mac, the mails could be read by other clients, which might then be infected. So, be nice to your neighbors...
Do not forget, virus scanners do need some resources. And sometimes a lot of them (they monitor every access to/from the system, which in return can or will slow it down to a certain extent).
Whatever you do, security comes with a cost. In the "best" case, things get inconvenient to use, because you need to do complex authentications or need to agree to a lot of popups that pop up every second (remember Windows Vista?).
In the worst case, there are errors because of the high complexity, or things get expensive because you need additional hardware (iris scanners, external firewalls, application-level firewalls that scan data for viruses...) while still being inconvenient at the same time. And time consuming (those systems need to be maintained).
So, you need to decide what level of security you want, and what is sensible. The use of an iris scanner for the bathroom is probably a bit over the top... don't you think?
The best weapon in our hands against malware is still the thing between the ears! Use it when surfing and when installing software. No software will ever be able to stop you from doing something stupid to your system.
So, it is not OK to feel too safe when being on a Mac. This leads to sloppiness! Passwords, for example, need to be real passwords. If a password can easily be guessed, why should malware take the detour of hacking the system? It could just "enter", and you have lost your system to the bad guys...
I don't want you to get paranoid about this either! Just keep your eyes open. When installing software, only do it from trusted sources. And, from time to time, take a closer look. There was malware available in the AppStore for a couple of days / weeks before Apple removed it. Even the best system can be outwitted.
You should think about which apps you use and which not. Even apps that are not really malware per se can do harmful things - like WhatsApp and Viber. You should ask what is happening there! I mean, WhatsApp uploads the address book to Facebook's servers, and the people whose data you upload there are not asked if they like that... just a small example...
Just remember: if the product is for free, then YOU are the product
There is no such thing as free beer!
I tried not to be too anti-Microsoft - which is hard, because most of the security issues only exist on Windows systems. Unfortunately, on Windows the user needs to make the system secure and stop it from doing harmful things.
Antivirus software lulls the user into feeling safe, but most of it really has a lousy detection rate. And really new viruses are not detected at all.
So, should you install antivirus software on a Mac? You need to decide for yourself, but I tend to "no, you should not". There are valid reasons to see it differently, but I am not alone with my thoughts: see here and here.
But you definitely should distinguish between desktop and server: as you may be serving data to Windows machines as well, a virus scanner might be a useful thing there.
Almost everything I wrote here is valid for OSX as well as for Linux and other unixes. Right now, there is no known widespread malware out for unix-based systems - at least none that I know of.
2017-05-26 - Tags: markdown
We write a lot of texts every day - every one of us who works with computers. And we all struggle with formatting texts. Text without proper indentation, emphasis or boldface is a bit boring to read, and you lack an additional means of expression.
Most editors, even on the web, nowadays support "WYSIWYG": What You See Is What You Get. This looks nice, and most of the time it actually works. But it is hard to use (you switch between mouse and keyboard very often). Well, it annoyed me at least. Especially if you edit a text and, after editing it, the italics are not in the right place anymore, and vice versa.
So, why not put special sequences into normal text and have it rendered afterwards?
This is not really a "new" idea: not so long ago, this was one of the preferred ways of formatting text. You enrich standard text with so-called markup codes that make it possible to define special formats and whatnot. Most of you probably use this without realizing it: HTML is the most popular markup language. Of course, nowadays you see the result in your browser, not the code. So it became a lot more than just "hypertext".
One other well-known (well, if you are a nerd) implementation of a markup language is LaTeX - a markup language especially focused on typesetting. So you get a very good looking printout (but can use your favorite text editor). Maybe that is one of the reasons why it is so popular for writing master's theses or diplomas.
Those "languages" are a bit too complicated and complex for just writing an email or so. And that is where Markdown comes to help.
But why would you want to have your text enriched with command sequences, if you can do it with the mouse cursor?
Yes, it works using a mouse. But there are people among us who type so quickly that switching between mouse and keyboard actually slows them down. And I am one of them.
And the beauty of Markdown is that the sequences used for formatting text are easy-to-reach, more or less standard characters (nothing special).
For example: if I want something to be emphasized (italics), I just put an underscore _ before and after the sequence - done!
Or I want a numbered list... I start a paragraph with a 1. - it will be indented as expected, and all subsequent lines accordingly.
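A few of these sequences in action (a minimal sample of plain Markdown, as any common renderer would accept it):

```markdown
This is _emphasized_ and this is **bold**.

1. first item
2. second item, wrapped lines are indented to match
3. third item
```

That is all there is to it - the raw text stays perfectly readable even before rendering.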
But there is a lot more; a list of what is possible can be viewed here.
I, for instance, write everything here in the blog in Markdown! So I did not have to use a WYSIWYG editor for the frontend. And the texts are stored very "simply" and can be indexed quite easily. So no proprietary XML stuff, or even worse, some binary format.
Especially if you are a developer, Markdown has advantages. If it is properly configured, your source code (which is usually part of the documentation) can be highlighted:
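Most renderers with highlighting support only need a fenced code block with a language tag, for example (the class itself is just an illustration):

````markdown
```java
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello, Markdown!");
    }
}
```
````

The renderer picks the highlighter based on the tag after the opening fence.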
I will add another post here some time, because in order to get the syntax highlighting to work with the markdown renderer library I use, I had to extend the software a bit.
Well, if you want to use Markdown, you will have to learn those "commands" or sequences. But it is really worth doing. Especially if you are a touch typist and fast at it.
There are a lot of tools to easily create stunning texts. And there is a huge community around Markdown, which comes up with new features and new tools. So there are extensions for Markdown like CriticMarkup, or even the extensions from MultiMarkdown.
All in all, this is a really powerful toolset which helps you concentrate on the thing that really matters: the text!
Here you can see a list of all standard functionality in Markdown.
On the Mac there is already a list of good tools that support Markdown or help with syntax highlighting. Those editors sometimes have a live preview and are able to export your text as HTML, RTF or PDF. I will create a couple of tool-test posts for those editors. Here is a little list of some tools I used:
Also, most IDEs support Markdown (Xcode, IntelliJ & Co). So, to create documentation directly in your project, Markdown definitely is an option.
Unfortunately, support in mail clients is a bit of a problem right now. Apple Mail does not support Markdown. But you can use markdown-here to help ease the pain.
Of course, that is a bit inconvenient to use. Better use mail clients that support Markdown natively, like MailMate or AirMail2 (the latter has some severe data privacy issues, but that is a different topic).
Markdown has a lot of advantages, especially because of the excellent tools already available. You just type your text and concentrate on typing. Formatting is done automagically. Afterwards you can export it as PDF, RTF or whatever.
So Markdown definitely is an alternative, not only for developers, but also for users who create a lot of texts like books, reports, emails and whatnot.
But you will have to get used to the tools, and you need to add the rendering of the document to your time sheet as well...
2017-05-20 - Tags: english ergodox-ez
I was a little bit bored of always creating the layout by "hand", and I have to admit that I never got the Massdrop configurator to work properly. And I never managed to get the overview PNG up to date with all of my changes. Hence I decided to create a little tool to help me with that.
So... the first milestone is finished: I created a little Java application that can read my keymap.c and show the layout graphically. Yes, I know, it does not look that good, but it is OK...
Things still to be done:
So for now, this tool actually helps with documenting keymaps. It reads in a keymap.c file and shows it in a more graphical way. This is an example:
The tool is available on Github: https://github.com/sboesebeck/ErgodoxLayoutGenerator
Disclaimer: this is in the prototype phase - not really more than a proof of concept. So use it at your own risk. Same for the code: it works, but there are some ugly parts in it...
Although some people thought this would be an April Fools' prank - it isn't. This tool really exists and really works! It is now capable of creating overview PNGs with the click of a button. This is the overview PNG for my own layout:
But it works with all other layouts so far as well. Like the default one:
This is an example where the parsing worked fine, but the file lacks some information: the layers do not have descriptive names. You can also see that a lot of macros are being called. Here most of them just produce unicode output of special letters, but the ELG does not show them properly:
Things still to be done:
EXCLM should be a
First BETA release available - go have a look here. This is a BETA release; not all functionality is implemented yet. But you can create your documentation overview PNG file...
It should run on all machines with Java installed (a current JDK8!).
2017-05-20 - Tags: tweet
got the first version of editing ready. Only Macros missing for now. Next: keymap.c creation... https://t.co/Ftc4bUl5Fp
2017-05-20 - Tags: tweet
2017-05-20 - Tags: english ergodox ergodox-ez java-2 keyboard tastatur
If you read my blog, you might have noticed that I'm fond of cool keyboards. We IT guys use them the whole day, but most keyboards are just awful to work with. So I'm glad I found a "proper" ergonomic one, my ErgodoxEZ (look at http://ergodox-ez.com for more information, or read my review here).
One of the greatest things about the ErgodoxEZ is its programmability. But you actually need to know how to code in order to make use of that easily. And even if you do, it is not very intuitive to create a C program that runs on your keyboard and shows your layout.
There is a WYSIWYG editor at massdrop.com, but unfortunately I never got it to work properly - and it is somewhat limited in functionality (like macro support etc.).
Hence, I started creating my own little tool for creating my layouts and optimizing them...
The ErgodoxLayoutGenerator (a very clumsy name, but I could not think of anything better for now - let's just call it ELG for short) is programmed in Java, so it should work on all machines and OSs Java is available for. You need to have the latest version of Java 8 installed... As I already got some feedback on this: it needs to be the latest official Oracle JDK or JRE; it does not work with OpenJDK out of the box. If you need any help using OpenJDK, just contact me...
The idea is pretty similar to the one at Massdrop, but the ErgodoxLayoutGenerator is built around the qmk firmware, and it generates a keymap.c file for your local installation!
The Massdrop configurator, in contrast, creates keymap.c files for you, but you cannot flash them to your keyboard.
The ELG reads in an existing keymap.c file and parses it to some extent. But that means it relies heavily on the structure of the file being more or less similar to the "official" ergodox layouts that come with the qmk-firmware. If you want to use the ELG with your layout, you should make sure that your keymap file follows this pattern.
DE_OSX_ACT all "meaning" the same key.
If the ELG cannot fully parse the keymap.c file, things might get messy. If you use one of the standard keymaps, all should go fine. But if you use some of the advanced stuff, things will go wrong. It will probably parse the keymap, and most of the functions will show up fine, but some things might go missing. At the moment there is no support for the FN[1-9] keys! So, if your keyboard uses those, please make sure that you replace the functionality with a macro.
Parsing the C files does have its drawbacks, but the great advantage is that the ELG can read in existing keymaps! That works most of the time.
Usually it should be OK to get the latest release from the GitHub page. Download the JAR file attached to the release and double-click it. If Java is installed properly, it should start up fine and you should see a screen with an empty layout:
If the above does not work, you can try to run the jar file from the command line with java -jar ergodoxgenerator-1.0BETA2.jar - or whatever file you downloaded. If it still does not work, you will get a proper error message then. On the GitHub page you can create an issue for that.
Sometimes it might help not to use the JAR start functionality of Java, but to run the main class manually: java -cp ergodoxgenerator-1.0BETA2.jar de.caluga.ergodox.Main. If you get the same error as above, please create an issue on GitHub.
If you're a Java guy, you can compile it yourself (and hopefully contribute to the project). The project is a standard maven project, so if you cloned the repository to your local machine, running mvn install should compile everything. Your executable will then be in the target directory, called ergodoxgenerator-1.0-SNAPSHOT-shaded.jar. This one should be executable...
At the bottom of the main window, there is a button called "set qmk sourcedir". Here you should set the root directory of the qmk sources. This is necessary for putting the layout in the proper place in the end. If you did not specify this directory, you always need to navigate there manually.
If you defined the QMK sourcedir, the open dialog will start in the correct directory for the ergodox layouts. You choose the directory of the keymap, as all keymap files are actually called keymap.c.
When the keymap was parsed successfully you should get a display of the base layer of this layout.
Usually an ergodox layout consists of several different layers - like when holding ALT on a keyboard, all keys do something else. But here you are more or less free to define as many layers as you want (not really - your keyboard has limited memory). To switch to the different layers, you need to press or hold a key (see below). When changing the layer in this combobox, the layout will be shown accordingly.
When creating a layout, you only have one layer, called base, defined. The buttons at the top let you create new layers, rename them or delete them. Attention: deleting layers while still having layer toggles or macros referencing them will cause unexpected behavior. Also, you should not rename the base layer, as this might cause problems later.
At the top right of the window, you can see the 3 LEDs the ergodox has. You can switch them on and off for the selected layer by clicking them. This reflects the behaviour of the LEDs when the layout is flashed to your keyboard.
The main portion of the screen is filled with keys. These represent the corresponding keys on the ergodox keyboard. If you select a key, it will be marked (green border) and a more detailed description of the key is shown in the lower part of the window. Assigning functionality to a specific key is done via the context menu. Just right-click on a key; it will be marked, and then you can:
Initially, KC_TRNS is assigned to that key. This code states that the key should behave as defined in the "previous" layer (usually base). This of course cannot work in the base layer!
Keycodes starting with KC_ are the "default" ones. There are also different keycodes for different locales or OSs, like DE_OSX_. You can also assign a combination of keys here, like "Shift-A" or "CMD-S". If you specify more than one modifier, a macro will be created for you!
In the lower part of the window there are representations of some colors and keys. These state that a green-marked key would be of type Layertoggle / type, and hence shows two pieces of information: the first line is the key being typed, the second line is the layer to switch to as long as the key is held.
There you will be asked for the file to store the keymap to. This file should always be named keymap.c and should be stored in the QMK sourcedir at keyboards/ergodox-ez/keymaps/YOUR_KEYMAP, where YOUR_KEYMAP needs to be replaced with the name of your keymap.
When you want to store a completely new keymap, you need to create this directory yourself. You can do that from within the save dialog.
This will create a PNG showing all layers. This is useful to add to your layouts, if you want to publish them and have them merged to the official qmk repository as it makes it easier for others to use your layout. like this one:
Open a keymap. You need to choose the directory, not the file!
If you are a bit like me, you usually work on your own layout again and again. The button "reopen last" will open the file you last opened or saved!
Creates a completely new layout - Attention: there is no "are you sure" question yet! If you hit that button now, you'll end up with a new, empty layout!
When assigning keys, you first need to choose the "prefix" of the keycode names. Usually the prefix is related to the locale; "DE_OSX", for example, is the German OSX version of some keycodes. All keycodes starting with "KC_" are the default (US layout) keycodes.
You can add a modifier to the key if you want. There are two different ways these modifiers might work: all at once (like SHIFT-A for a capital A), or the modifier when held and the key when typed - like holding the key Y gives you CTRL, while typing it gives just a plain old y.
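In keymap.c terms, the two behaviours look roughly like this (a sketch using qmk's standard wrappers; the concrete keycodes are just examples, not taken from my layout):

```c
/* "all at once": Shift is applied together with A - types a capital A */
LSFT(KC_A)

/* "modifier when held, key when typed": acts as Ctrl while held,
   types a plain z when tapped (qmk's mod-tap wrapper) */
CTL_T(KC_Z)
```

These entries go into the layer definitions of your keymap like any plain keycode.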
For this functionality you only need to define the layer you want to switch to. Quite simple. When flashed to your keyboard, the corresponding key will switch a specific layer on when hit, and switch it off again when hit again. If you switch to a layer that way, it is probably a good idea to set the LEDs properly.
As already mentioned above, this will create a key that temporarily switches to a layer as long as it is pressed. If you only type the key (= press it shortly), you will just type a normal key.
Assigning macros is quite easy: you can just choose one from the dropdown and then hit "assign macro". This only works like that if the file you opened has some macros defined.
If you hit the "new macro" or "edit macro" button, the macro editor is shown. There you can create, delete, or edit the macros in this layout.
The ErgodoxLayoutGenerator supports these kind of macros:
ATTENTION: the macros only support keycodes that do not represent a combination of keys. For example, the keycode DE_OSX_QUOT is actually a replacement for LSFT(DE_OSX_HASH). This will not work in a macro; it will only send the keycode DE_OSX_HASH without the modifier. If you want your macro to work in your locale, you need to be aware whether this key is typed with a modifier or not.
All actions a macro can do, are the following:
This little project was first of all only built to run on my machine and to make it easier for me to tweak the layout. So it is only tested on a Mac OSX machine; I am not sure how it will work on Windows or Linux.
There are still a lot of things missing:
The latest versions of the ELG have a "compile" button. When you have saved your keymap and the qmk sourcedir is set, you can compile it. This is done by running the commands make clean and make in the qmk directory of the ergodox-ez.
This can only work if your system is capable of compiling it. Please ensure that you have everything installed and in your path. Take a look at the qmk GitHub page for more information on how to prepare your system.
When the compilation has finished successfully, you can read the log output. Usually that is not very interesting if everything worked fine. On errors, you can check closely what went wrong.
When this dialog is dismissed, you will be asked if you want to copy the .hex file - which is the result of the compilation - to the keymap directory. This is useful if you want to submit your code to the official GitHub project: when your keymap also has a .hex file, everybody can just download and use it without having to deal with compilation and stuff. If you just compile for yourself, hit "no" there...
The release candidate is here... complete with support for a new type of macro called ToggleLayerAndHold, which toggles a layer as long as the key is pressed or, if the key is only typed, toggles the layer as with the TG() function call.
And now there is a compile button, which will compile your layout if everything is set up correctly! The resulting .hex file can then be uploaded to your Ergodox or Ergodox-EZ!
If you find anything not working properly and you think it is a bug in the ELG, do not hesitate to create an issue on the GitHub page. Please provide the following:
java -jar ergodoxgenerator-VERSION.jar
2017-05-20 - Tags: jblog
As you all know, the software here is quite new, and of course I found some bugs I did not notice until I went live...
Unfortunately, I cannot deploy without downtime yet - sorry!
I will try to keep it to a minimum!
2017-05-20 - Tags: java blog
I explained here why PHP and WordPress are a pain in the ass, and why I decided to build new software on a stack I knew.
As I am the only user of this software, I did not have to care about multi-user role/permission management, theming or plugins. Just a straightforward web application. But it should do the following:
Flexmark - available on GitHub.
Well... er.... that's it. You see, quite simple actually. I used Bootstrap 4 in the frontend, although it is not officially out yet. I did think that by the time I finished this blog software, Bootstrap 4 would be out... OK...
The application runs in a Tomcat, behind an Nginx which also does SSL termination. All more or less standard.
Of course there will be changes over time, but for a first try it is not too bad...
2017-05-17 - Tags: java morphium mongodb
This release contains some minor fixes and improvements:
end() on group anymore)
ConcurrentModificationException when flushing buffered writer
You can either get it from github or via maven central:
<dependency>
    <groupId>de.caluga</groupId>
    <artifactId>morphium</artifactId>
    <version>3.1.3</version>
</dependency>