Caluga
all about java, coding and hacking

Caluga - The Java Blog

This blog covers topics all around Java, open source, MongoDB and the like. Especially Morphium, THE POJO mapper for MongoDB.




category: Computer --> programming --> MongoDB --> morphium

Morphium version 5

2022-11-15 - Tags: java mongodb morphium


Morphium V5

Morphium originated in 2017 and has had 4 major releases since then. Now it's time to update some more and tweak some internals. This update is a major one, so parts of the internals have been completely rewritten while keeping Morphium itself source-code compatible.

Targets of the rewrite

The aim is to cut off some old "braids" (i.e. legacy code) and modernize the code. It should remain as compatible as possible so that migration is easy.

Furthermore, the complexity should be reduced. There are some features in Morphium that are rarely used, if at all, such as "optimistic locking" via automatic versioning. These "features" are removed and the code is streamlined.

Furthermore, Morphium should remain compatible with the older versions. That means the changes to Morphium itself will be limited. The concepts will not change either; at most they will be tightened up a bit. Morphium remains the linchpin of the API: every write access will continue to run via Morphium and every read access via Query (exception: aggregation with the help of the Aggregator). In short: the use of Morphium essentially does not change.

JDK 11

The JDK used should also be updated during the refresh. Morphium V4 stays on JDK 1.8; from V5 on we support JDK 11 and later. Of course, the code should also make use of the new features of JDK 11.
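As an illustration (not code from Morphium itself), here are a few of the language and library features that become available with the move past JDK 1.8; which of them Morphium actually uses internally is not stated in this post:

```java
import java.util.List;

public class Jdk11Features {
    public static void main(String[] args) {
        // local variable type inference (JDK 10) and immutable collection factories (JDK 9)
        var versions = List.of("4.2.13", "5.0.0-SNAPSHOT");

        // new String methods from JDK 11: strip() is Unicode-aware,
        // isBlank() checks for whitespace-only content
        String trimmed = "  5.0.0  ".strip();
        boolean blank = "   ".isBlank();

        System.out.println(versions + " / " + trimmed + " / " + blank);
    }
}
```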

Morphium Object Mapping

In current tests, the Morphium object mapper is still one of the fastest and most feature-rich object mappers we have found. Compared to Jackson, the higher performance and the features for handling lists and maps (i.e. handling generics) deserve particular mention; this makes the MorphiumObjectMapper really indispensable.

During the rewrite, however, some features that are also reflected in the object mapping were removed, which simplifies the code and makes it easier to maintain.

Morphium Cache and WriteBuffer

Morphium has had declarative caching as part of the system from the start, and it continues to be a key component. The cache implements the JCache API and could therefore be used for other purposes as well. Conversely, you can also plug a JCache implementation into Morphium for caching.

During the rewrite we will adapt the code to JDK11 and improve the structure of the code.

Morphium Messaging

Another key component is messaging. Particular attention is being paid to messaging in order to improve stability and reduce load, specifically by using the driver API and the wire protocol efficiently. Another change concerns connection handling: currently, messaging uses Morphium itself as a connector to MongoDB. Since none of Morphium's features are necessary for messaging (some are even a hindrance), the messaging in V5 communicates directly with MongoDB. In the current V5 implementation, messaging uses a dedicated connection to MongoDB for the push notifications (watch) and a dedicated connection pool for the rest of the communication. This means that messaging is a little more detached from Morphium, generates less load and can be configured separately. This is implemented, and the tests show that performance increases while the load on MongoDB is reduced.

MorphiumConfig

This "thing" has also grown over the years and, with its hundreds of getters and setters, is currently hard to use. The settings have to be split up a bit into different categories, in particular because Morphium's own wire-protocol driver requires different settings than, e.g., the InMemDriver.

Focus on Community Edition

However, we only implement this for the Community Edition; the enterprise features are not considered in the rewrite. This may also mean that the drivers do not work properly with the Enterprise MongoDB version. The same currently applies to MongoDB Atlas support, which will certainly be the first to follow. Support for the Enterprise Edition features may also be implemented in a later version if required.

Rewrite bottom up

The rewrite is done bottom-up: first the Mongo driver (see below) was implemented, which implements the wire protocol (currently only usable with MongoDB V5 and later! For older MongoDB versions please use Morphium V4.x!).

The necessities and the structure of this driver are mainly determined by MongoDB and propagate the architecture "up". Along the way, unused features are removed (e.g. auto-versioning); others are improved simply by adapting them to the new drivers.

In the end, a Morphium should come out that is almost 100% source compatible with the old version, but much more modern, slimmer, more beautiful!

Morphium driver

The Morphium driver is more or less a replacement for the existing MongoDB driver, with a lot more features and, in some places, significantly more safety and/or performance.

Currently (i.e. in Morphium V4.x) there are 2 drivers: MongoDriver and InMemDriver. The MongoDriver in particular creates a bit of overhead because it is based on the official MongoDB driver, which unfortunately is by now not just a driver, but also an object mapper. Many functions of that driver are not required at all, while some others are missing. So the aim is to introduce our own wire-protocol driver (the wire protocol is the protocol MongoDB uses for communication), optimized for the needs of Morphium. This can result in a significant performance advantage (currently between 10% and 20%).
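To give an idea of what "implementing the wire protocol" means at the lowest level: every MongoDB wire message starts with a standard 16-byte header of four little-endian int32 values (messageLength, requestID, responseTo, opCode; the modern OP_MSG message has opcode 2013). A minimal sketch of building such a header (my own illustration, not Morphium's actual driver code):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class MsgHeader {
    // Builds the 16-byte MongoDB wire protocol message header:
    // messageLength, requestID, responseTo, opCode - all little-endian int32
    public static byte[] header(int messageLength, int requestId, int responseTo, int opCode) {
        ByteBuffer buf = ByteBuffer.allocate(16).order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt(messageLength).putInt(requestId).putInt(responseTo).putInt(opCode);
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] h = header(16, 1, 0, 2013); // 2013 = OP_MSG
        System.out.println("header length: " + h.length);
    }
}
```

The body of an OP_MSG then follows this header as BSON-encoded sections; the real driver obviously has to handle that part as well.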

2 drivers are implemented in V5:

  • SingleConnectDriver: a driver that keeps only a single connection to MongoDB, but monitors it and also implements a failover if necessary. This is used, for example, in the new messaging substructure (see below)
  • PooledDriver: a full-featured cluster connector with connection pooling and automatic failover.

InMemory Driver

Strictly speaking, there is a third driver, the InMemoryDriver, which makes it possible to run Morphium with an "in memory" Mongo. This is particularly useful for tests. We use it as a test DB in some projects, comparable to the in-memory databases implemented for other systems for test purposes. The InMemory database should be (nearly) feature-compatible with a real MongoDB installation. We're not 100% there yet, but we've reached a good 90%. Most features already work, but some things take a little longer and might come later (such as "real" text indexes).

Why not the MongoDB driver

This question is asked more often and I would like to say a few words about it:

Features of the driver

The official MongoDB Java driver is not only a driver, but "embedded" in the driver is also an object mapper. This is probably super useful for many applications and completely sufficient.

Unfortunately not for us, because we want to configure some things rather than program them (declarative indices, etc.).

Two different ways of thinking collide, which, in the case of the MongoDB driver and Morphium, has repeatedly led to... discrepancies. Some changes to the driver API have meant a lot of work in Morphium.

The MorphiumDriver project has been around for a few years. It was brought to life because we had problems with the official MongoDB driver, especially in the area of failover. Also, some features of the driver are, in my opinion, at least a little questionable.

These features, especially the object mapping, make access via Morphium a bit unwieldy. And some things happen unnecessarily twice. And that's exactly why we want to try to develop our own driver that does exactly what we need - and nothing more.

When accessing Mongo, the stack trace currently shows a good 20 calls within the MongoDB driver. With the new driver that's a maximum of 5!

new feature: InMemoryServer

Morphium in version 5 got its own server, which is based on the InMemory database. This is naturally intended exclusively for test purposes and does not offer any security or the like. However, it is always helpful for testing to have a (simple) in-memory MongoDB (with the limitations of the InMemoryDriver). If you want to test it now:

java -cp morphium-5.0.0-SNAPSHOT.jar de.caluga.morphium.server.MorphiumServer

The MorphiumServer understands the following command line options:

  • -mt or --maxThreads: maximum number of threads in the server (= maximum number of connections)
  • -mint or --minThreads: minimal number of threads (kept ready)
  • -h or --host: which host does this server report back as (in the response to the hello or isMaster command)
  • -p or --port: which port should be opened (all network IPs)
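The options above come in short/long pairs, each followed by a value. A hypothetical sketch of how such paired options could be parsed (the option names are taken from the list above, but this is not MorphiumServer's actual implementation):

```java
import java.util.HashMap;
import java.util.Map;

public class ArgParser {
    // maps short and long option names to a canonical setting name
    private static final Map<String, String> ALIASES = Map.of(
            "-mt", "maxThreads", "--maxThreads", "maxThreads",
            "-mint", "minThreads", "--minThreads", "minThreads",
            "-h", "host", "--host", "host",
            "-p", "port", "--port", "port");

    public static Map<String, String> parse(String[] args) {
        Map<String, String> opts = new HashMap<>();
        // options always come as "flag value" pairs
        for (int i = 0; i + 1 < args.length; i += 2) {
            String key = ALIASES.get(args[i]);
            if (key == null) {
                throw new IllegalArgumentException("unknown option: " + args[i]);
            }
            opts.put(key, args[i + 1]);
        }
        return opts;
    }

    public static void main(String[] args) {
        System.out.println(parse(new String[]{"-p", "17017", "--host", "localhost"}));
    }
}
```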

I would like to reiterate that this server is purely for testing purposes and is certainly not intended to be a replacement for any MongoDB installations.

migration

Migration should be relatively easy since Morphium's API itself hasn't changed. However, the configuration has changed:

  • the class-name settings were removed from the configuration. It makes no sense to plug in your own query implementation; at least we were not aware that anyone had done that. The same applies to the configurable factories etc.
  • The drivers are no longer referenced via the class name; now the DriverName is specified. Valid drivers currently are: InMemoryDriver, SingleConnectionDriver, PooledDriver, MongoDriver. Unfortunately, the MongoDriver is currently not working, and it is not 100% certain that it will remain part of Morphium
  • direct access to the driver, in case someone did that, is still possible, but the driver API has changed significantly. So you have to expect major rework here.
  • the optimistic locking feature has been removed. Unfortunately, everyone who used this feature now has to find their own implementation

Current status

Currently, at the end of August 2022, Morphium V5 is available as an ALPHA version on Maven Central and can be used for your own tests. However, some features are not yet 100% implemented: for example, Morphium V5 works only with MongoDB versions 5.0 and later, and the transaction feature currently does not work.

plan

In the coming weeks these things will be addressed, in this order of priority:

  1. Test stability / failover etc and adjust if necessary
  2. Implement auto retry read/write everywhere
  3. Finalise InMemoryDriver - add missing features, increase performance.
  4. Rewrite MorphiumConfig and remove old settings, adapt new ones, possibly make them more modular.
  5. Implement transaction handling (at driver level, morphium has supported this for a long time)
  6. Complete or remove MongoDriver.


category: Computer --> programming --> MongoDB --> morphium

Morphium V4.2.13

2021-11-15 - Tags: morphium java mongodb

Morphium V4.2.13

We have just released a new version, which again contains some improvements and fixes:

  • Feature: EarlyProcessed - allows incoming messages to be marked as "processed" before the listener is called. The default behavior is to mark them only after a successful call. Useful for longer-running processes
  • Feature: messaging StatusInfo. If you send a message named `morphium.status_info`, all connected messaging systems respond with status information. Useful for debugging and monitoring. The feature can be deactivated and the name can be customized.
  • Fix: handling of entities with maps without a generic definition; maps without generics that could contain a list led to a NullPointerException
  • Improvement: store() is gradually being replaced by save() to better match the MongoDB commands
  • Improvement: messaging no longer handles messages that don't have a listener
  • Adjustment of some tests
  • minor improvements

Installation

Morphium V4.2.13 is available via Maven Central and on https://github.com/sboesebeck/morphium

Morphium works with all MongoDB Java drivers starting with V4.1.0, so it can easily be included in projects that already use MongoDB.

Maven

Include this snippet in the pom.xml:

Morphium does not introduce new dependencies; you might need to define them yourself:

    <properties>
        <mongodbDriver.version>[4.1.0,)</mongodbDriver.version>
    </properties>

    <dependency>
        <groupId>de.caluga</groupId>
        <artifactId>morphium</artifactId>
        <version>4.2.13</version>
    </dependency>
    <dependency>
        <groupId>org.mongodb</groupId>
        <artifactId>bson</artifactId>
        <version>${mongodbDriver.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.mongodb</groupId>
        <artifactId>mongodb-driver-sync</artifactId>
        <version>${mongodbDriver.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.mongodb</groupId>
        <artifactId>mongodb-driver-core</artifactId>
        <version>${mongodbDriver.version}</version>
    </dependency>


category: Computer --> Test of Tools

Passwordmanager: PASS

2021-08-24 - Tags: shell

originally posted on: https://boesebeck.name

PASS - the standard Unix passwordstore

I already reported on pass when I looked at password managers.

What I forgot to mention is that pass is super extensible. Since everything is a bit shell-related, I will leave out the details here, but extensions are easy to install.

Useful extensions

The functionality of pass can be greatly expanded with extensions. You can find a useful list of extensions here. And now one from me:

pass-index

Unfortunately, searching through the entries in pass with pass grep was quite slow. This was mainly because it had to decrypt every single entry, querying the GPG agent every time, etc.

With this extension I create an index file that contains all searchable fields. Only this one file has to be searched, which runs about 100 times faster than the pass grep version.

pass-file

A very useful extension: this is how you get the missing "attachment" functionality. The file is base64-encoded, stored encrypted, and can thus also be read back. Practical. There are two versions; I use the linked one.

pass-pwned

This allows you to check whether your password is still secure or whether it has already turned up somewhere on the Internet. No password is sent around the world, only the password's checksum. That's bad enough, but if you suspect a leak, it makes perfect sense.

Alfred Integration

There are already a few Alfred integrations, but they didn't go far enough for me. Most of the time you could just read the passwords, copy them to the clipboard, and that was it.

I addressed the problem and built a more sophisticated version that can also search (with pass-index).

The integration supports some keywords:

  • pass TEXT: a search is made for text, and matching directories as well as entries are displayed


  • pfind PATTERN: searches the passwordstore
  • psync: sync with git repository
  • pupdate: update the index, if pass-index is installed
  • pgen path: generate a new random password

So you can use Alfred as a quasi-GUI for pass, at least for the most important functions.

It is still planned:

  • some kind of integration into the browser
  • store user-defined fields.


category: Computer --> Test of Tools

Passwordmanager

2021-08-21 - Tags: tools osx

originally posted on: https://boesebeck.name

1Password

I've been using 1Password for a long time and I've always been happy with it. However, there are now some reasons that speak in favor of switching to something else (purely subjective, everyone can evaluate things differently):

  • The costs: you switch from the normal "I buy a software" model to an "I rent a software" model. Which is understandable, but the prices of the OSX version are increasing dramatically.
  • There is no lifetime license. That means I have to take out a subscription if I want to use 1Password V8!
  • Those two points would still be bearable if it weren't for the quasi-online compulsion. Sure, there is no compulsion, I can simply choose not to use the features I paid for 😏 And should I save my passwords online, no matter where, together with those of thousands or millions of others? That makes it particularly exciting for hackers. No, thank you...
  • And last but not least, new "features" have been added that somehow just get on your nerves. The 1Password extension for Safari now shows an input dialog for every form, and usually in such a way that the field can no longer be used. That used to be better.

But don't forget what's good about 1Password:

  • clearly the "prettiest" of the password managers
  • the operation is simple and the sync works flawlessly
  • good support in the browsers and also in iOS

Change to where?

So, it's time to look for a decent password manager. The following is important (to me):

  • Offline! It should store a file for me somewhere that I can open even when the Internet is down, ideally even without the software itself! This is practically impossible to find at the moment, as almost all password managers now work online only.
  • Possibility to sync via e.g. iCloud, Dropbox or Mega, but encrypted please! If the password store is encrypted on the iCloud drive, that should be enough
  • A reasonably usable interface, although I am also fine with the shell (see below)
  • iOS client! And the sync should also work there.
  • and it should be possible to import the data from 1Password more or less completely. I have a lot of data in there, not just passwords. That doesn't exactly make it easier.

Importing data from 1Password can work in more or less 2 ways:

  1. export / import of the "1Password-Export-Format" named 1pif. This is recommended, as attachments are also included
  2. export / import in CSV or tab-separated text format. Information can be lost in the process. In particular the attachments, but also with certain data types, the export becomes difficult.

The problem with the 2nd variant is that the fields differ strongly between the different types:

  • a credit card has a PIN but not necessarily a password
  • A bank account has no password at all, but has an owner
  • A login requires a URL

The fields are therefore different and the import is really difficult. The only reasonably useful approach is to export each category individually with the fields that are important for that type and then map them accordingly in the "recipient program".

candidates

With these prerequisites there aren't that many left, because I also exclude those that treat the offline mode as a mere add-on. With those I have to assume that they will soon jump completely on the online bandwagon.

But a few are left after all:

KeePassXC

This is one of the programs from the KeePass family. They are all open source and relatively powerful. However, they are also relatively ugly, and unfortunately rather limited beyond the basics.

The security features leave little to be desired: you can define a key file, specify the encryption algorithm, the bit width, and so on. This is really exemplary, albeit complex.

Most KeePass clients do not offer any import for 1Password files. KeePassXC has an import for "1Password Vaults", but I have no idea what to select for it to import anything; I didn't get it working.

But there are also quite a few clients for iOS and extensions for most browsers.

The import of CSV files works, and then you run into other limits: I didn't manage to import the different files that I exported into _the same_ database; a new DB is created on each import. I'm probably doing something wrong there, but the bigger problem is that a lot of things can't be imported properly because everything is treated as a password.

You can theoretically create your own fields, but only per individual entry, not e.g. for a group, and not during the import. That leaves you feeling cheated.

Conclusion

Pro: security features, client diversity, open source. Con: looks range from super gruesome to ugly, import of non-password data not really possible, 1Password import not easily possible.

Therefore, after a few days of trying around, I refrained from using KeePass. Especially because I just couldn't get the data in.

Enpass

I bought Enpass ages ago, and luckily so, because as a former "Pro" user I can now use Enpass without having to take out a subscription.

Basically, however, this is to be rated rather negatively: here too they switched to the subscription model and just didn't want to scare away their existing customer base. Who knows whether it will stay that way. We know similar statements from 1Password, and a few years later they were no longer worth anything.

The costs with Enpass are significantly lower than with 1Password (about half) and there is (still) a lifetime license! Since Enpass works completely offline, it is definitely worth considering. I have also not read anything about plans for an online service that can only be used via subscription. So in this respect it is still a good alternative, even if you are not a Pro user.

As far as security is concerned, we are also at the forefront here, as a key file can also be used here, which increases security even more (for example, it could be stored on a secured USB storage device or something).

There is also synchronization. This does not work via its own server but via iCloud, Dropbox and similar services.

You can also create several "vaults", e.g. to separate work from private life.

And there we are already at a catch: to sync 2 vaults via iCloud you need 2 iCloud accounts. This is a bit strange, but probably has a technical background. It is not really important to me, as I never used this functionality with 1Password either, but I can imagine that it is a no-go for some.

The import of the data from 1Password was... complicated, to say the least:

  • the export file in 1PIF format was not recognized by Enpass. Something must have changed, because these 1pif "files" are not files but directories (similar to the Photos media library).
  • you have to copy the data out of the 1pif directory (right click -> Show Package Contents) and then point Enpass at the directory containing these files.
  • but that didn't work for everything ("found nothing to import"). I had to export all categories separately again, then it worked.

Now I have all the data from 1Password in Enpass, including the attachments and the bank accounts or notes!

Conclusion

Pro: cheaper than 1Password, lifetime license, 1Password import works including the non-password data types, iOS client, good browser support, iCloud sync. Con: the look could be better, subscription model (who knows how long the lifetime licenses will keep working), strange iCloud account handling, sometimes extremely slow (a search can take longer than 5 seconds).

Enpass is really worth considering, and I'll keep an eye on this password manager for a while. Right now it's one of my favorites.

Secrets

Also a nice app, straight from the App Store, and it offers an iOS counterpart as well. The prices are one-time prices, no subscription model. Secrets also looks quite appealing, prettier than Enpass.

Secrets also works really fast, much faster than Enpass.

The sync with iOS went smoothly.

The import of the 1password data went smoothly, there is also a browser plug-in, the sync works via the iCloud. All in all, a really solid password manager.

However, I could find out little on the website about how Secrets actually works. It appears to work completely offline and to sync via iCloud if desired. Which encryption is used is not mentioned either; the site has little information in general.

Conclusion

Pro: nice GUI, fast, easy to use, one-time payment. Con: a few entry types are missing, the search is somewhat limited, the browser plug-in can fill out logins but not save them.

It's super easy to use, free in the App Store, and can be upgraded to "pro" with a one-off payment. The iOS app costs the same.

pass - Unix passwordstore

pass is an oddball among the password managers, but a really interesting one. pass is actually nothing more than a shell script, but one that has it all. First, though, I have to back up a little.

GPG

The "GNU Privacy Guard" was created years ago as an open-source counterpart to PGP (Pretty Good Privacy) and is particularly popular for encrypting email.

GPG and PGP rely on the so-called "public key" procedure. You don't have one key, but two. One of the keys (the public key) is used to encrypt, and its counterpart decrypts. So everyone can get the public key (hence the name) and encrypt data or texts that can only be decrypted with the associated private key.

Pass now makes use of this functionality: it uses the commandline version of gpg to securely encrypt the passwords.

The passwords are encrypted with the public key, similar to an email to me. I am the only one who has the associated private key and only I can decrypt these password files.

The encryption methods on which GPG is based are still considered extremely secure even after all these years, and for this use case the public-key approach has advantages over purely symmetric encryption methods (such as AES256).

So if you want to use GPG for the safe filing of important information, you can do it "like this" even without aids. The command line tool simply takes any text and encrypts it with the public key - this is also what the mailers that support GPG do.
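The public/private key round trip described above can be sketched with Java's built-in RSA support. This is a plain illustration of the asymmetric principle only, not what GPG does internally (GPG actually combines asymmetric and symmetric encryption):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class PublicKeyDemo {
    // encrypt with the public key, decrypt with the private key, return the result
    public static String roundTrip(String message) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        // anyone holding the public key can encrypt...
        Cipher enc = Cipher.getInstance("RSA");
        enc.init(Cipher.ENCRYPT_MODE, pair.getPublic());
        byte[] ciphertext = enc.doFinal(message.getBytes(StandardCharsets.UTF_8));

        // ...but only the holder of the private key can recover the plaintext
        Cipher dec = Cipher.getInstance("RSA");
        dec.init(Cipher.DECRYPT_MODE, pair.getPrivate());
        return new String(dec.doFinal(ciphertext), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("my secret password"));
    }
}
```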

pass now uses these tools to store passwords securely. And that works amazingly well. But not only passwords: because pass "only" deals with encrypted text files, you can store any content. The only rule: the first line is the password; after that you can add fields in the format Name: Value. Multi-line values work too:

    Name: Line 1>
      Line 2>
      Line 3
    OtherName: Value2
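As a sketch, here is a parser for this entry format. This is my own illustration, assuming, as in the example above, that a trailing `>` marks a continued line; the actual pass apps may parse entries differently:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PassEntry {
    public final String password;
    public final Map<String, String> fields = new LinkedHashMap<>();

    public PassEntry(String raw) {
        String[] lines = raw.split("\n");
        password = lines[0]; // rule: the first line is the password
        String pendingName = null;
        StringBuilder pendingValue = new StringBuilder();
        for (int i = 1; i < lines.length; i++) {
            if (pendingName != null) {
                // continuation of a multi-line value; '>' at the end means "more follows"
                String s = lines[i].strip();
                boolean more = s.endsWith(">");
                pendingValue.append('\n').append(more ? s.substring(0, s.length() - 1) : s);
                if (!more) {
                    fields.put(pendingName, pendingValue.toString());
                    pendingName = null;
                }
            } else {
                int sep = lines[i].indexOf(": ");
                String name = lines[i].substring(0, sep);
                String value = lines[i].substring(sep + 2);
                if (value.endsWith(">")) { // start of a multi-line value
                    pendingName = name;
                    pendingValue = new StringBuilder(value.substring(0, value.length() - 1));
                } else {
                    fields.put(name, value);
                }
            }
        }
    }
}
```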

It is important to stick to this syntax; then you can also use the Pass app for iOS. There are also implementations for Android.

For synchronization, pass uses git, which was actually intended for developers. There are umpteen public servers, or you can host your own somewhere. If it is also secured via SSH or other mechanisms, it is easy to back up. And it is completely transparent. The nice thing is that you automatically get a history of the passwords and, if you are familiar with git, you can restore an old version at any time. It can also be used to share the password store with others. More complicated setups with several sub-repositories are also possible, e.g. to separate shared company passwords from private ones.

That's what I like best about pass: it stores sensitive data, so it is of course an advantage if it is based on standards that are known to be secure. And it gives me more freedom than the alternatives.

In theory, with this approach, I can store everything securely. And if I want, I put my private key on a USB drive and voila: nobody can get at the data unless the USB drive is connected.

Operating it on the command line is of course a bit cumbersome (but also practical if you need a password in the shell). But there are some tools that help you enter the passwords. I myself wrote a workflow for Alfred that helps you find and copy the passwords you need from pass.

Importing from 1Password was also... well, actually not possible. So I helped myself and wrote a small script that imports a 1pif file exported by 1Password.

Conclusion

Pro: open source, uses known standards, very powerful, iOS support, very flexible, very secure. Con: complex setup, not really easy to use, hardly any iOS support, no real GUI, sync infrastructure has to be set up first; without shell experience it is not advisable.

You could say that pass is clearly not for beginners. But the technology and the possibilities speak for themselves. I will keep using pass for a while.


category: Computer --> programming --> shell scripting

pass Alfred workflow

2021-08-20 - Tags:

This is a simple workflow for using the pass password store; Alfred is the main GUI here.

Several keywords:

  • pass: search for entries, hit enter to see details and copy fields accordingly
  • pgen: generate a password in a category
  • psync: sync your pass storage
  • pb: browse data

Download workflow: AlfredPass


category: Computer --> programming --> MongoDB --> morphium

Morphium V4.2.12 released

2021-08-13 - Tags: java morphium mongodb

The latest version of Morphium (V4.2.12) was released a couple of days ago. As usual, the changes contain bug fixes and improvements.

  • Tests were changed so that they run more smoothly overall, with fewer side effects
  • Tests were simplified
  • BugFix: CamelCase conversion did not work properly in some cases
  • BugFix: fieldNames were not used properly in some cases
  • added some convenience methods
  • BugFix: updateUsingFields properly uses settings from the @Property annotation and camelCasing
  • BugFix: cursors sometimes were not closed properly at the end of an iteration
  • BugFix: handling of capped collections
  • Timeout adjustments for MongoDB 5.0
  • BugFix: some fixes for typeId handling; fix for serializing entities in maps

the latest version of Morphium is available on github and maven central.


category: Computer --> programming --> MongoDB --> morphium

Morphium V4.2.8 released

2021-06-15 - Tags: morphium java mongodb

Morphium V4.2.8 was released including the following features:

  • FEATURE: basic support for aggregation with the InMemoryDriver using Expr
  • FEATURE: support for Expr in queries
  • FIX: improved support for sharding
  • FIX: increased InMemoryDriver's thread safety
  • FIX: improved tests, reduced side effects
  • FIX: CacheSynchronizer could cause a message storm under certain circumstances
  • FIX: set(Entity e, String field, Object value, boolean upsert, Callback) does not need a multiple parameter, it does not make sense
  • IMPROVEMENT: increased number of tests to >900
  • IMPROVEMENT: refactoring of auto variable handling in MorphiumWriterImpl
  • IMPROVEMENT: better error handling for index creation during startup
  • IMPROVEMENT: code improvements for increased readability, stability and/or performance. Added a couple of convenience methods

morphium V4.2.8 is available on maven central, or on github.


category: global --> Games

Text Attack From Outer Space

2021-03-18 - Tags: Game Java

Somehow you have a little too much time during the pandemic. And since I type a lot, I wanted to write a game with which you can improve your typing a little... although the focus was less on the typing and more on trying things out.

Idea

The idea was to write a very simple game in Java that lets random words fall on you (similar to Space Invaders). Every word has to be typed in time, before it "hits" the ground 😏

Then there should be something like different levels so that the level of difficulty adapts.

Then a few effects, sounds etc., and that's it 😏 well, almost.

implementation

The whole thing was amazingly easy. It was more complicated to dig up difficult words from somewhere on the Internet. And to look for sound effects on [soundbible](https://soundbible.com).

The program actually consists of 2 classes:

  • the MainFrame class is the executable one: it starts the game window and adds the 2nd class
  • the 2nd class is GamePanel, which draws the play area.

Then there are a few objects to be displayed. These implement an interface called Obj:

public interface Obj {

    void draw(Graphics2D g);

    // returns true, if it needs to be deleted
    boolean animate();

    int getX();

    int getY();
}

A timer then runs in the GamePanel, which calls repaint() every 15ms. In the paint method, all objects are iterated and draw is called on each of them. Then the surrounding UI is also drawn, such as the current score, how many ships are still to be destroyed, etc.

the animations

In the MainFrame there is a function that - also controlled by a Swing Timer - calls the animate() method for all Obj in the game panel. If animate() returns true, the corresponding object is removed from the game panel because the animation has ended and the object is no longer needed.
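The per-tick update described above boils down to iterating all objects and removing the ones whose animation has finished. Here is a small self-contained sketch of that loop (the `FiniteAnimation` helper and the `tick()` method are made up for illustration; only the `Obj` interface follows the article):

```java
import java.awt.Graphics2D;
import java.util.ArrayList;
import java.util.List;

public class AnimationLoopSketch {

    // the interface as shown in the article
    public interface Obj {
        void draw(Graphics2D g);
        boolean animate(); // true -> the object should be removed
        int getX();
        int getY();
    }

    // dummy object whose animation finishes after a fixed number of ticks
    static class FiniteAnimation implements Obj {
        private int ticksLeft;
        FiniteAnimation(int ticks) { this.ticksLeft = ticks; }
        @Override public void draw(Graphics2D g) { /* no-op in this sketch */ }
        @Override public boolean animate() { return --ticksLeft <= 0; }
        @Override public int getX() { return 0; }
        @Override public int getY() { return 0; }
    }

    private final List<Obj> objects = new ArrayList<>();

    public void add(Obj o) { objects.add(o); }
    public int size() { return objects.size(); }

    // in the real game a Swing Timer calls something like this every 15 ms,
    // followed by repaint() so paint() can draw the surviving objects
    public void tick() {
        objects.removeIf(Obj::animate);
    }

    public static void main(String[] args) {
        AnimationLoopSketch loop = new AnimationLoopSketch();
        loop.add(new FiniteAnimation(1));
        loop.add(new FiniteAnimation(3));
        loop.tick();
        System.out.println(loop.size()); // first object finished and was removed
        loop.tick();
        loop.tick();
        System.out.println(loop.size()); // now the second one is gone too
    }
}
```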

Every object does its animation "by itself", i.e. normally an Obj does not know any other. Only the "shot", i.e. the projectile that is fired at the enemy ships, knows its target object and "animates itself towards the target", so to speak :smirk: That is also the reason why the objects have getters for X and Y. An example of the animation is Star - these are the asterisks that buzz around in the background (of which 150 are created at the start):


public class Star implements Obj {
    private int x = 0;
    private int y = 0;
    private int sz = 1;
    private int vy = 0;


    public Star(int x, int y, int r, int v) {
        this.x = x;
        this.y = y;
        this.vy = v;
        this.sz = r;
    }

    @Override
    public void draw(Graphics2D g) {
        if (sz > 1) {
            g.setColor(Color.gray);
            g.fillOval(x - sz, y - sz, sz * 2, sz * 2);

            g.setColor(Color.WHITE);
            g.fillOval(x - (sz / 2), y - (sz / 2), sz, sz);
        } else {
            g.setColor(Color.WHITE);
            g.fillOval(x - sz, y - sz, 2 * sz, 2 * sz);
        }
    }

    @Override
    public boolean animate() {
        y = y + vy; // move down by the star's velocity
        if (y > 1090) { // fell off the bottom edge of the playfield
            x = (int) (1920 * Math.random()); // respawn at a random x position...
            y = 0;                            // ...back at the top
        }
        return false; // stars are never removed
    }

    @Override
    public int getX() {
        return x;
    }

    @Override
    public int getY() {
        return y;
    }
}

Well, and that's actually it for the entire animation. A KeyListener is then attached to the MainFrame, which goes through the currently visible ships and "shoots" accordingly.

The game can be found on [Github](https://github.com/sboesebeck/text_attack_from_outer_space), where you can also download current releases. You can start the game using your local Java installation with java -jar TextAttack.jar - but be aware that you need a recent JDK version installed (>= JDK15)

The whole project took me about 6 hours in total, from concept to implementation. So please be gentle if you find anything buggy or suboptimal... Thanks.


category: Computer --> programming --> MongoDB --> morphium

Indices in Morphium

2020-11-27 - Tags:

Morphium and Indices

Defining Indices

First of all, you usually do not need to care about indices in MongoDB manually, just define them in your entity code:

@Entity
public class MyEntity {
    @Id
    private MorphiumId id;
    @Index
    private String myIndexedString;
    @Index(decrement = true)
    private Integer aNumber;
}

There are more options for defining indices, please consult the morphium documentation for details.

Index creation

As already mentioned, Morphium usually takes care of creating indices following a simple rule:

  • when writing, check whether the collection already exists; if not, create the indexes
  • do nothing if the collection is already there

The reason is that on big collections, creating new indices on write might cause problems - it can affect the whole system. But if the collection does not exist yet, index creation is quite a small operation.
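The rule can be sketched as a tiny decision helper (a hypothetical illustration, not Morphium's internal code; the `existingCollections` set stands in for a real collection lookup against MongoDB):

```java
import java.util.HashSet;
import java.util.Set;

public class IndexOnWriteSketch {

    private final Set<String> existingCollections = new HashSet<>();

    // returns true if indexes were (newly) created for this write
    public boolean beforeWrite(String collection) {
        if (existingCollections.contains(collection)) {
            return false;  // collection exists: do nothing, never touch its indexes
        }
        existingCollections.add(collection); // collection is new: create it...
        createIndexes(collection);           // ...and its indexes, cheap while empty
        return true;
    }

    private void createIndexes(String collection) {
        // placeholder for the actual createIndex calls against MongoDB
    }

    public static void main(String[] args) {
        IndexOnWriteSketch sketch = new IndexOnWriteSketch();
        System.out.println(sketch.beforeWrite("person")); // first write creates indexes
        System.out.println(sketch.beforeWrite("person")); // collection already exists
    }
}
```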

But there's a catch: this check costs time - on every write, Morphium checks whether the collection exists.

You can change this behaviour by switching automatic index creation off in MorphiumConfig: config.setAutoIndexAndCappedCreationOnWrite(false).

Starting with Morphium V4.2.3, you can also have Morphium check those indices only on startup: morphiumConfig.setIndexCappedCheck(IndexCappedCheck.WARN_ON_STARTUP)

This will issue a warning in the log if an index for an entity is missing. There are more options you can choose from:

  • NO_CHECK: do not check, if indices match
  • WARN_ON_STARTUP: check indices for all Entities on startup/connect. This will slow down startup depending on your code.
  • CREATE_ON_WRITE_NEW_COL: create missing indices, when writing to a new collection (Default behaviour)
  • CREATE_ON_STARTUP: check for indices only once when starting up / connecting to mongo.

Hint: There is no option to warn on write, this would slow down mongodb access significantly!

Warnings for missing indices look like this: Collection 'person' for entity 'de.caluga.test.mongo.suite.TextIndexTest$Person' does not exist.

Capped collections

Morphium handles the capped status of a collection together with indices. That means, if the collection does not exist but the @Capped annotation is used, the collection is created capped. This is done together with the indices, so if you check those, the capped information is also checked. Depending on settings, the collection might even be automatically converted to a capped collection!

programmatically creating indices

The easiest way to create indices is to call the method ensureIndicesFor(ENTITYCLASS). This will create all indices that are defined in code for this entity (using the @Index annotation). If you also want to ensure that the corresponding collection is capped (if defined accordingly), call morphium.ensureCapped(ENTITYCLASS).

Both methods will not create indices or capped collections if no index is defined in code, but they will still call ensureIndex on MongoDB.

Hint: when an index is created for a non-existent collection, an empty collection will appear in MongoDB.

But you can also create indices more manually:

  public <T> void ensureIndex(Class<T> cls, String collection, Map<String, Object> index, Map<String, Object> options, AsyncOperationCallback<T> callback);

The index map contains the document that describes the index (as it would directly in MongoDB) and the options map contains the corresponding options for this index, e.g.

 morphium.ensureIndex(UncachedObject.class, "uncached_object",Utils.getMap("counter",1),Utils.getMap("unique","true"),null);

get Information about missing indices

In order to get just the information on missing indices for a specific entity (or all of them), starting with Morphium V4.2.3 you can use:

Map<Class<?>, List<Map<String, Object>>> missing = morphium.checkIndices();

This returns a Map with the entity class as key and a list of missing index definitions as value. You can use this map to manually create an index:

Map<Class<?>, List<Map<String, Object>>> missing = morphium.checkIndices();
morphium.ensureIndex(EntObject.class, missing.get(EntObject.class).get(0));

ATTENTION:

  • the checkIndices() call also adds missing capped information to the result map!
  • checkIndices() scans the whole classpath, which can take some milliseconds. If you want to reduce the load, you can use checkIndices(filter) and filter classes according to your needs: checkIndices(classInfo -> !classInfo.getPackageName().startsWith("de.caluga.morphium"));

conclusion

The performance of databases strongly depends on proper index creation. With Morphium you can define all indices in code, which spares Java engineers the need to know how to create indices in MongoDB manually and hence improves performance.


category: Computer --> programming --> MongoDB --> morphium

Morphium V4.2.0 released

2020-09-14 - Tags: java mongodb morphium

We just released Morphium V4.2.0 including a lot of fixes and features:

  • Feature: new MongoDB Driver 4.1 is used and so we do support mongodb 4.4 now
  • Feature: direct support of all aggregation stages
  • Feature: Expr-Language
  • Feature: Collation support
  • Feature: improved Geospatial searches
  • Feature: iterable aggregation
  • Feature: sending of exclusive messages to a list of recipients
  • Fix: centralize id creation
  • Fix: improved handling of fields, which have concrete list or map implementation as type (like HashMap or ArrayList)
  • Fix: InMemoryDriver throws an Exception, if a feature is not possible inMem
  • Fix: added lots of tests
  • Fix: Sequences more stable now
  • Fix: InMemoryDriver handling of $in on lists

In addition to that, the documentation was improved a lot. It is available as HTML and Markdown in the project itself (on GitHub) or here.


category: Computer --> programming --> MongoDB --> morphium

Morphium 4.1.4 released

2020-08-10 - Tags: morphium java mongodb

We released Morphium 4.1.4. This includes, as usual, a bunch of improvements and fixes. Here is the changelog since V4.1.0:

V4.1.4:

  • Morphium is AutoClosable now - simplifies usage
  • Checking for field existence in sub-documents was causing problems with aggregation. This was fixed; only an error is logged now instead of an exception being thrown.
  • Improved null handling: only overwrite a POJO value when mongo delivers a value (including null), keep the default otherwise.
  • Replaced the morphium driver property acceptableLatencyDifference with localThreshold, because only the latter exists in the mongo driver; additional morphium property serverSelectionTimeout

V4.1.3:

  • adding complexQueryCount to the Query interface and implementation
  • messaging now has a flag whether polling is necessary or not. This reduces load.
  • the stats now honor this flag
  • Bugfix: a little bug that caused unnecessary load in messaging
  • Bugfix: fixing a bug in messaging regarding pausing/unpausing with exclusive messages, reducing load on mongo from messaging, finally fixing the log format
  • Bugfix: must not change read preference SECONDARY_PREFERRED to SECONDARY, or reading from a cluster that only has a primary (one-node cluster) will fail
  • SSL/TLS Support for Morphium (tested with AWS DocumentDB)
  • Bugfix: fixing a bug in changestream monitor for the InMemoryDriver
  • Dump InMemory DBs

V4.1.2:

  • Bugfix: fixing changestream handling for InMemoryDriver, adapting tests
  • reducing write security for tests, making all work with enterprise mongo inMem
  • update of some dependent libs
  • code improvements, removed some unnecessary stuff
  • Bugfix: fixing message rejection

V4.1.1/V4.1.0:

  • Bugfix: fixing exclusive message handling
  • making rsmonitor sync host seed in config.
  • Bugfix: get messaging and changestream monitor to exit gracefully.
  • JMS Support for messaging
  • Bugfix: Validation
  • changing DriverTailableIterationCallback to have better control over stopping the tail
  • Build improvements
  • reorganizing tests, speeding up messaging

A warm thanks goes out to all who helped build these releases! Not only with code and pull requests, but also by filing an issue! Thanks a lot!


category: Computer --> programming --> MongoDB --> morphium

Morphium V4.1.2

2020-03-03 - Tags: java mongodb morphium

Release Morphium V4.1.2

We just released Morphium V4.1.2 via Maven Central. The long list of changes includes:

  • complete overhaul of messaging for increased stability, speed and less load on mongodb
  • new Feature Encrypted Fields:
    @Entity
    public static class EncryptedEntity {
        @Id
        public MorphiumId id;

        @Encrypted(provider = AESEncryptionProvider.class, keyName = "key")
        public String enc;

        @Encrypted(provider = AESEncryptionProvider.class, keyName = "key")
        public Integer intValue;

        @Encrypted(provider = AESEncryptionProvider.class, keyName = "key")
        public Float floatValue;

        @Encrypted(provider = AESEncryptionProvider.class, keyName = "key")
        public List<String> listOfStrings;

        @Encrypted(provider = AESEncryptionProvider.class, keyName = "key")
        public Subdoc sub;


        public String text;
    }

In addition to those features, some fixes:

  • fixing a bug which could lead to a StackOverflowError when using the InMemoryDriver
  • fixing a bug which could lead to errors and deadlocks when using the changestream with the InMemoryDriver
  • using ObjectId as key in entities would lead to exceptions
  • fixing some bugs which did not detect errors properly
  • reducing memory consumption of messaging

To use morphium, include this in your pom.xml dependencies:

<dependency>
    <groupId>de.caluga</groupId>
    <artifactId>morphium</artifactId>
    <version>4.1.2</version>
</dependency>


category: Computer --> programming --> MongoDB --> morphium

Morphium Messaging Options

2020-02-25 - Tags: java morphium mongo

Because I've now received this question a few times, here is a brief summary of Morphium messaging and how to use it:

The first parameter in the constructor is always the Morphium instance. This determines which MongoDB is used, and some settings of Morphium also affect the messaging subsystem:

  • threadPoolMessagingCoreSize: determines the core size of the thread pool in messaging.
  • threadPoolMessagingMaxSize: the maximum size of the thread pool in messaging
  • threadPoolMessagingKeepAliveTime: Time that a thread in the pool can survive unused in ms.

This thread pool is only used to process messages in their own threads. If the maximum size of the thread pool (or the window size) is reached, processing is paused until enough capacity is available in the pool again.
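The three settings correspond to the core size, maximum size and keep-alive time of a standard `java.util.concurrent.ThreadPoolExecutor`. A sketch of how such a pool could be constructed (illustrative only, not Morphium's actual wiring):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class MessagingPoolSketch {

    public static ThreadPoolExecutor buildPool(int coreSize, int maxSize, long keepAliveMs) {
        // coreSize    ~ threadPoolMessagingCoreSize
        // maxSize     ~ threadPoolMessagingMaxSize
        // keepAliveMs ~ threadPoolMessagingKeepAliveTime (ms an idle thread survives)
        // note: the queue choice determines when threads beyond coreSize are
        // actually created; an unbounded queue keeps the pool at coreSize
        return new ThreadPoolExecutor(coreSize, maxSize,
                keepAliveMs, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>());
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = buildPool(2, 10, 1000);
        System.out.println(pool.getCorePoolSize());
        System.out.println(pool.getMaximumPoolSize());
        pool.shutdown();
    }
}
```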

Then there may be other parameters in the constructors:

  1. always the Morphium instance
  2. queueName - the name of the queue to be used. The collection name is then usually mmsg_ plus the queue name, to avoid collisions with the "normal" names. If no name is set, "msg" is assumed as the collection and queue name. The collection name can be found with getCollectionName() on the messaging instance.
  3. pause in ms: this determines how often to poll for new messages or, if you are connected to a replica set and can thus use ChangeStreams, how much time should pass between two checks for older messages. Such messages can occur while message processing is paused: messages reported via the ChangeStream during that time are ignored and not processed. So that they are not "lost", they are searched for every pause ms. In general, when connected to a replica set, the pause should be increased to minimize the load on MongoDB. If polling has to be used, the time should be set relatively short to guarantee fast response times.
  4. processMultiple: if true, several messages are always marked for processing by this messaging instance (locking)
  5. multithreaded: determines whether processing (calling the MessageListener) should take place in a separate thread. If set, a thread pool is created according to the settings in MorphiumConfig.
  6. windowSize: how many messages should be processed at once. Determines the maximum number of messages that can be marked at the same time, or how many threads can be active in the thread pool at the same time.
  7. useChangeStream: this can be used to force the use of a ChangeStream. If set, you will be informed about new messages by MongoDB via push. If not set, polling takes place. The default is morphium.isReplicaSet().

Again, I would like to point out that using a replica set for messaging is a much better solution!

What to take into account when choosing these values

  • if you set the pause too small (with or without ChangeStream), the load on MongoDB increases. Something around 100ms turned out to be a good value. If the value is too large, it can lead to long delays between sending and receiving a message. If this latency exceeds the TTL of the message, the message is no longer processed.
  • processMultiple and multithreaded are somewhat related, because processMultiple only makes sense - especially for listeners that really do some work - if you also set multithreaded to true. Otherwise several messages are simply marked for processing by this messaging instance, but are processed one after the other. Setting both to false reduces the load on the system (single threaded, one by one), but can in turn lead to high latencies.
  • the windowSize is strongly related to the thread pool. The values should be about the same, because windowSize specifies the maximum number of threads created at the same time; the thread pool should allow at least that many, otherwise it won't work. Therefore, windowSize should be about the same as threadPoolMessagingMaxSize (to be on the safe side, you could also give the thread pool a few more threads). If processMultiple is set to false, the windowSize is 1.

All in all you can't do much wrong. Just have a look at the tests to get some sample code.


category: Computer --> programming --> MongoDB --> morphium

Morphium messaging vs messaging systems

2020-02-23 - Tags: morphium java messaging

Morphium can also do messaging ... wow ... right?

Why do you still need such a solution? How does it compare to others, such as RabbitMQ, MQSeries etc.? Those are the top dogs ... why another solution?

I would like to list some differences here.

Conceptually

Morphium messaging is designed as a "persistent messaging system". The idea is to be able to "distribute" data more easily within MongoDB. Especially in connection with @Reference it is very easy to "transfer" complex objects via messaging.

Furthermore, the system should be easy to use, additional hardware should not be needed if possible. This was an important design criterion, especially for the cluster-aware cache synchronization.

Features

Morphium is to be understood as a persistent queue, which means that new consumers can be added to a cluster at any time and they also receive older, unprocessed messages.

Because Morphium persists the messages, you can use any MongoDB client to see what is happening in terms of messaging. There is a rudimentary implementation of Morphium messaging in Python; it is used to e.g. display a messaging monitor showing what is happening right now.

Morphium is super easy to integrate into existing systems - easiest, of course, if you already use Morphium, because then it's just a three-liner:

Messaging messaging = new Messaging(morphium);
messaging.start();

messaging.addMessageListener(...);

To get a comparable "simple" producer-consumer setup with e.g. RabbitMQ, more lines and more setup are necessary.

Morphium's concept also provides for an "answer". This means that every MessageListener can simply return a message as a reply; this message is then sent directly to the original sender. Something that is not easily achieved in other messaging systems.

An important feature is the synchronization of the caches in a cluster. This runs via annotations in the entity; you just start the CacheSynchronizer and everything runs automatically in the background.

Messaging msg = new Messaging(morphium, 100, true);
msg.start();
MessagingCacheSynchronizer cs = new MessagingCacheSynchronizer(msg, morphium);

Morphium also provides a solution to "reject" an incoming message: every listener can throw a MessageRejectedException. The message is then no longer processed by the current messaging instance and is marked so that it can be processed by other recipients. This also happens if an error/exception occurred during message processing.

JMS

Morphium also supports JMS, but there is a bit of a logical and conceptual "breach" ...

JMS sends messages to topics or queues ... in Morphium there is no such distinction, or only a limited one. In this nomenclature, each message can be either a topic or a queue message.

If you send a message in Morphium messaging that is marked as "non-exclusive" (the default), then it is a broadcast: every participant can receive the message within its TTL (time to live). Whether a participant receives it is only decided by whether it has registered a listener or not.

Every Morphium messaging listener can receive topics, queues, channels and direct messages. That is more or less determined by the sender, who decides whether the message is

  • sent directly to one recipient (direct messaging)
  • sent to all listeners (usually for a certain message name) - i.e. a broadcast (default)
  • or sent to one of the listeners, no matter which one (exclusive message, similar to topics in JMS)
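The three delivery modes can be modeled in a few lines; this is a toy simulation to illustrate the semantics, not Morphium's API (all class names here are made up):

```java
import java.util.ArrayList;
import java.util.List;

public class DeliveryModesSketch {

    static class Message {
        final String name; final boolean exclusive; final String recipient;
        Message(String name, boolean exclusive, String recipient) {
            this.name = name; this.exclusive = exclusive; this.recipient = recipient;
        }
    }

    static class Listener {
        final String id; final List<String> received = new ArrayList<>();
        Listener(String id) { this.id = id; }
        void onMessage(Message m) { received.add(m.name); }
    }

    // dispatch according to the three modes described above
    static void dispatch(Message m, List<Listener> listeners) {
        if (m.recipient != null) {                      // direct message to one recipient
            for (Listener l : listeners) if (l.id.equals(m.recipient)) l.onMessage(m);
        } else if (m.exclusive) {                       // exclusive: exactly one, "no matter which"
            if (!listeners.isEmpty()) listeners.get(0).onMessage(m);
        } else {                                        // broadcast (default): all listeners
            for (Listener l : listeners) l.onMessage(m);
        }
    }

    public static void main(String[] args) {
        List<Listener> ls = List.of(new Listener("a"), new Listener("b"));
        dispatch(new Message("bcast", false, null), ls);
        dispatch(new Message("excl", true, null), ls);
        dispatch(new Message("direct", false, "b"), ls);
        System.out.println(ls.get(0).received); // broadcast + the exclusive message
        System.out.println(ls.get(1).received); // broadcast + the direct message
    }
}
```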

Empty message-queue = healthy message queue

This is what you read again and again about e.g. RabbitMQ. It is not the same with Morphium: the messages remain in the queue for a while and delete themselves when the TTL is reached. If I put a broadcast message with a lifespan of one day into the queue, then this message can still be processed within that day. And it will be, e.g. when new "consumers" register (replay of messages).

This does not apply to exclusive messages, as you explicitly don't want to process them multiple times, i.e. such a message is deleted after successful processing (only until V4.1.2 - after that, messages are only deleted when the TTL is reached).

A Morphium message queue is therefore always filled to a certain extent, and that's a good thing.

Conclusion

Morphium messaging does not want to, and cannot, be a "replacement" for existing messaging systems and solutions. That was never the direct goal of development; there was a specific problem, and this was the easiest and most efficient way to solve it.

Nevertheless, the areas of application of Morphium messaging are similar to or overlap with those of other messaging systems. But that's in the nature of things. A migration from one system to another should definitely be possible in finite time. Morphium supports e.g. JMS, both as a client and as a server. This allows cache synchronization to be implemented using other, possibly already existing messaging solutions without having to forego the convenience of Morphium's annotation-based cache definition. Or you can integrate Morphium as a messaging solution into your own architecture via JMS.

comparison table

| Description                      | Morphium | RabbitMQ                  |
| -------------------------------- | -------- | ------------------------- |
| runs without additional hardware | X        | -                         |
| nondestructive peek              | X        | -                         |
| high speed                       | -        | X                         |
| high security                    | X        | X                         |
| simple to use                    | X        | depending on the use case |
| persistent messages              | X        | not mandatory             |
| get pending messages on connect  | X        | -                         |


category: Computer --> programming --> MongoDB --> morphium

field-based encryption in mongo using Morphium

2020-01-09 - Tags: java

There will be a new feature in the next version of Morphium: automatic encryption and decryption of data.

This ensures that the data is only stored in encrypted form in the database and is decrypted when it is read. This also works with sub-documents. Technically, more complex data structures are encoded as JSON and this string is stored encrypted.

We currently have support for two types of encryption: AES (default) and RSA.
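Under the hood the idea is: serialize the value, encrypt the resulting string, store only the ciphertext. A minimal standalone AES round trip with `javax.crypto` illustrates the principle (this is not Morphium's AESEncryptionProvider, just a sketch; for brevity it uses the default cipher mode, real code should prefer an authenticated mode such as AES/GCM):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class AesRoundTrip {

    static byte[] crypt(int mode, byte[] key, byte[] data) throws Exception {
        // AES with a 16-byte (128-bit) key
        Cipher c = Cipher.getInstance("AES");
        c.init(mode, new SecretKeySpec(key, "AES"));
        return c.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "1234567890abcdef".getBytes(StandardCharsets.UTF_8); // demo key only
        String plain = "Text to be encrypted";

        byte[] cipherText = crypt(Cipher.ENCRYPT_MODE, key,
                plain.getBytes(StandardCharsets.UTF_8));
        String decrypted = new String(crypt(Cipher.DECRYPT_MODE, key, cipherText),
                StandardCharsets.UTF_8);

        System.out.println(plain.equals(decrypted));              // round trip succeeds
        System.out.println(new String(cipherText).equals(plain)); // stored form differs
    }
}
```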

Here is the test for AES encryption:

    @Test
    public void objectMapperTest() throws Exception {
        morphium.getEncryptionKeyProvider().setEncryptionKey("key", "1234567890abcdef".getBytes());
        morphium.getEncryptionKeyProvider().setDecryptionKey("key", "1234567890abcdef".getBytes());
        MorphiumObjectMapper om = morphium.getMapper();
        EncryptedEntity ent = new EncryptedEntity();
        ent.enc = "Text to be encrypted";
        ent.text = "plain text";
        ent.intValue = 42;
        ent.floatValue = 42.3f;
        ent.listOfStrings = new ArrayList<>();
        ent.listOfStrings.add("Test1");
        ent.listOfStrings.add("Test2");
        ent.listOfStrings.add("Test3");

        ent.sub = new Subdoc();
        ent.sub.intVal = 42;
        ent.sub.strVal = "42";
        ent.sub.name = "name of the document";

        Map<String, Object> serialized = om.serialize(ent);
        assert (!ent.enc.equals(serialized.get("enc")));

        EncryptedEntity deserialized = om.deserialize(EncryptedEntity.class, serialized);
        assert (deserialized.enc.equals(ent.enc));
        assert (ent.intValue.equals(deserialized.intValue));
        assert (ent.floatValue.equals(deserialized.floatValue));
        assert (ent.listOfStrings.equals(deserialized.listOfStrings));
    }


    @Entity
    public static class EncryptedEntity {
        @Id
        public MorphiumId id;

        @Encrypted(provider = AESEncryptionProvider.class, keyName = "key")
        public String enc;

        @Encrypted(provider = AESEncryptionProvider.class, keyName = "key")
        public Integer intValue;

        @Encrypted(provider = AESEncryptionProvider.class, keyName = "key")
        public Float floatValue;

        @Encrypted(provider = AESEncryptionProvider.class, keyName = "key")
        public List<String> listOfStrings;

        @Encrypted(provider = AESEncryptionProvider.class, keyName = "key")
        public Subdoc sub;


        public String text;
    }


    @Embedded
    public static class Subdoc {
        public String name;
        public String strVal;
        public Integer intVal;
    }

There are a few classes you should know:

  • AESEncryptionProvider: an implementation of the interface EncryptionProvider offering AES encryption
  • RSAEncryptionProvider: an implementation of the interface EncryptionProvider offering RSA encryption
  • All encryption providers access the EncryptionKeyProvider, which can be set in the MorphiumConfig:
    • DefaultEncryptionKeyProvider: you store the keys yourself
    • PropertyEncryptionKeyProvider: the keys are read from properties; the keys themselves can be stored in encrypted form (AES encryption)
    • MongoEncryptionKeyProvider: reads the keys from MongoDB; collection and encryption key can be specified
  • Of course, all of these interfaces can also be implemented yourself.
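Implementing a key provider yourself is straightforward. A sketch of a simple map-backed provider follows; note that the `KeyProvider` interface shape shown here is assumed for illustration (only the setter names appear in the test above), so check Morphium's actual `EncryptionKeyProvider` interface before implementing:

```java
import java.util.HashMap;
import java.util.Map;

public class MapKeyProviderSketch {

    // assumed provider interface shape -- illustrative only
    interface KeyProvider {
        byte[] getEncryptionKey(String name);
        byte[] getDecryptionKey(String name);
        void setEncryptionKey(String name, byte[] key);
        void setDecryptionKey(String name, byte[] key);
    }

    // holds encryption and decryption keys by name in plain maps
    static class MapKeyProvider implements KeyProvider {
        private final Map<String, byte[]> enc = new HashMap<>();
        private final Map<String, byte[]> dec = new HashMap<>();
        public byte[] getEncryptionKey(String name) { return enc.get(name); }
        public byte[] getDecryptionKey(String name) { return dec.get(name); }
        public void setEncryptionKey(String name, byte[] key) { enc.put(name, key); }
        public void setDecryptionKey(String name, byte[] key) { dec.put(name, key); }
    }

    public static void main(String[] args) {
        KeyProvider p = new MapKeyProvider();
        p.setEncryptionKey("key", "1234567890abcdef".getBytes());
        p.setDecryptionKey("key", "1234567890abcdef".getBytes());
        System.out.println(new String(p.getEncryptionKey("key")));
    }
}
```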


category: Computer --> programming --> MongoDB --> morphium

Morphium messaging JMS implementation

2019-08-20 - Tags: morphium java

Morphium 4.0.6.2 was released this week and this version contains one feature, which I want to show here: Morphium Messaging via JMS!

Here is a little proof-of-concept test from the source:


public class BasicJMSTests extends MongoTest {

    @Test
    public void basicSendReceiveTest() throws Exception {
        JMSConnectionFactory factory = new JMSConnectionFactory(morphium);
        JMSContext ctx1 = factory.createContext();
        JMSContext ctx2 = factory.createContext();

        JMSProducer pr1 = ctx1.createProducer();
        Topic dest = new JMSTopic("test1");

        JMSConsumer con = ctx2.createConsumer(dest);
        con.setMessageListener(message -> log.info("Got Message!"));
        pr1.send(dest, "A test");

        ctx1.close();
        ctx2.close();
    }

    @Test
    public void synchronousSendRecieveTest() throws Exception {
        JMSConnectionFactory factory = new JMSConnectionFactory(morphium);
        JMSContext ctx1 = factory.createContext();
        JMSContext ctx2 = factory.createContext();

        JMSProducer pr1 = ctx1.createProducer();
        Topic dest = new JMSTopic("test1");
        JMSConsumer con = ctx2.createConsumer(dest);

        final Map<String, Object> exchange = new ConcurrentHashMap<>();
        Thread senderThread = new Thread(() -> {
            JMSTextMessage message = new JMSTextMessage();
            try {
                message.setText("Test");
            } catch (JMSException e) {
                e.printStackTrace();
            }
            pr1.send(dest, message);
            log.info("Sent out message");
            exchange.put("sent", true);
        });
        Thread receiverThread = new Thread(() -> {
            log.info("Receiving...");
            Message msg = con.receive();
            log.info("Got incoming message");
            exchange.put("received", true);
        });
        receiverThread.start();
        senderThread.start();
        Thread.sleep(5000);
        assert (exchange.get("sent") != null);
        assert (exchange.get("received") != null);
    }
}

Consider this only a teaser; it is still BETA or even ALPHA, not production ready yet.

Unfortunately, some features of morphium won't work using JMS:

  • via JMS you probably can't send direct messages to nodes
  • via JMS you can't use custom message classes
  • overall, JMS seems a lot less flexible than Morphium messaging itself. Morphium messaging allows much more detailed control over what message is sent where and how.

But again: this is not production ready, BETA state at best!


category: Computer --> programming --> MongoDB --> morphium

Morphium based blogsoftware

2019-07-29 - Tags: jblog

Morphium based blog software Jblog

Some time ago I "dared" to take the leap away from Wordpress and reported about it (https://caluga.de/v/2017/5/19/jblog_java_blogging_software). Now it's been over 2 years :astonished: and it is time to draw a conclusion and report a little on how it went.

Jblog - the good the bad the ugly

The good news first: the server was not hacked. The hits that were aimed at PHP vulnerabilities and the like all "bounced off" ineffectively - that's what I wanted to achieve. The update topic is also solved: there were none :smirk: I took care of the further development myself. However, there were a few bugs, and some are a bit annoying. Also, some things have turned out to be less practical: the quite elaborate versioning of the individual blog posts is actually unnecessary and only complicates everything.

  But otherwise...

Technology

What is currently behind Jblog:

  • Morphium V4.0.6
  • Spring Boot
  • Apache Freemarker
  • Bootstrap 4
  • JDK11
  • MongoDB 4.0

Yes, that's already it - the stack has become quite lean. The application is currently running as a Spring Boot app behind an nginx.

features

I knitted my blog the way I wanted it, very much tailored to my way of blogging - which may be the reason why nobody forked it on GitHub, so I changed the project to private. :sob:

multilingualism

Jblog is consistently bilingual. Theoretically more languages would be possible, but I wanted to focus on German and English. That means every element on the website is translated, and all blog entries must be written in 2 languages.

For this, a MessageSource is used which reads the data from MongoDB. The nice thing: if a translation is not found, a link to the translation mask is emitted instead. That means when I run Jblog on an empty translation database, I get a screen full of links that take me to the translation tool, and gradually everything gets filled in.
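The fallback behaviour can be sketched without Spring: look the key up in the database-backed translation map and, when nothing is found, return a link to the maintenance mask instead (the class and the `/admin/translate` URL here are made up for the sketch):

```java
import java.util.HashMap;
import java.util.Map;

public class FallbackLocalizationSketch {

    // stands in for translations loaded from MongoDB
    private final Map<String, String> translations = new HashMap<>();

    public void put(String key, String value) { translations.put(key, value); }

    // returns the translation, or a link to the (hypothetical) translation mask
    public String translate(String key) {
        String value = translations.get(key);
        if (value != null) {
            return value;
        }
        return "<a href=\"/admin/translate?key=" + key + "\">" + key + "</a>";
    }

    public static void main(String[] args) {
        FallbackLocalizationSketch loc = new FallbackLocalizationSketch();
        loc.put("title", "Caluga - The Java Blog");
        System.out.println(loc.translate("title"));  // found: plain translation
        System.out.println(loc.translate("footer")); // missing: link to the mask
    }
}
```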

By using Morphium to access MongoDB, these translations can also be cached (just add the @Cache annotation to the class).

The nice thing is that the cache, if desired, is cluster-aware. That means I can run multiple instances of Jblog behind a load balancer, and they keep their caches in sync via MongoDB, i.e. there are no dirty reads even with a round-robin setup!

Markdown

Markdown has indeed been the hit lately for entering text. I think that's pretty cool too, especially since I type quite fast and reaching for the mouse always slows me down. With Markdown it's easy to just type everything. (And whoever, like me, used to do word processing with LaTeX knows this :smirk:)

Jblog is completely designed around Markdown. You enter your posts in Markdown and then see the preview right away. I have also made some extensions, especially to easily integrate emojis (:confused: :smile: :sweat:) or embed a nicely formatted Java code block.

I also implemented the embedding of pictures.

I used a Markdown implementation called [Markingbird](https://github.com/pmattos/Markingbird) and adjusted it a bit. I built something to embed emojis more easily and to display images more easily.

Each blog post is written in Markdown and is rendered on first access. And the result is cached, of course :smirk:

When typing, I have a live preview and can see exactly how the post will look at the end.

Twitter connection

New articles are announced via Twitter. A few things are taken into consideration: the time of the tweet can be specified, only new articles are tweeted, etc.

And it is tweeted in two languages, German and English.

multiuser capability

Yes, you can actually use Jblog with several users. A few friends and acquaintances do drop by from time to time and might post something, but otherwise... it is a very rarely used feature, which is why I deactivated it.

At the moment you cannot register; the registration process is disabled. If you want to leave a comment, you can do that via Disqus.

Implementation

The implementation has changed a bit over the last 2 years. What started on JDK1.8 as a classic Spring Web project with WAR deployment is now a Spring Boot application with embedded Tomcat 9 on JDK11.

I use AdoptOpenJDK V11 here because I need long-term support, so I do not have to rebuild my blog every 6 months.

morphium

Of course, this should also be a small "showcase" for morphium, so here are a few examples:

Translations

In order to keep the application simple multilingual, it is advisable to keep the translation data in the database. This has many advantages:

  • You can change the translations at runtime
  • You can also adjust number formats, etc. at runtime and thus change the appearance of the application
  • if you allow it, you can also incorporate links, IMG tags etc in the translation table and thus achieve even more flexibility

Of course these features come with a price tag:

  • a little programming effort
  • The performance could suffer, because for each page build several entries must be read from the database. This is where the caching feature of morphium helps us
  • You always need a database just to render a page at all; without one you only get nonsense. And maintaining the translations can be a bit awkward.

Just that last point annoyed me, so I wrote the Localization handling so that whenever no entry is found, a link to the corresponding maintenance screen is emitted instead. This had its strange effects (a link inside a link, a link in the title, etc.), but on the whole it was really convenient.

An important point was that with one mouse click I can show only those entries in the translation mask that still lack a translation. This is a quick way to a complete translation.

This is the message source:

@Service
public class LocalizationMessageSource extends AbstractMessageSource {
    @Autowired
    private LocalizationService localizationService;


    @Override
    protected MessageFormat resolveCode(String s, Locale locale) {

        String lang = locale.getLanguage();
        if (!LocalizationService.supportedLangs.contains(lang)) {
            return null;
        }
        return createMessageFormat(localizationService.getText(lang, s), locale);
    }


}

And this is the service returning the html-link if the translation is missing:


@Service
public class LocalizationService {
    public final static List<String> supportedLangs = Arrays.asList("de", "en");

    @Autowired
    private Morphium morphium;


    public String getText(String locale, String key) {
        key = key.replaceAll("[ \"'?!`´<>(){}-]", "_");
        Localization lz = morphium.findById(Localization.class, key);
        if (lz == null) {
            lz = new Localization();
            lz.setKey(key);
            for (String l : supportedLangs) {

                lz.getTxt().put(l, l + "_" + key);

            }
            morphium.store(lz);
        }

        if (key.startsWith("format_") && lz.getTxt().get(locale).equals(locale + "_" + key)) {
            lz.getTxt().put(locale, "yyyy-MM-dd HH:mm:ss 'default'");
            morphium.store(lz);
            return lz.getTxt().get(locale);
        }

        if (lz.getTxt().get(locale).equals(locale+"_"+key)){
            return "<a href=/admin/translations/query?queryKey="+key+">"+key+"</a>";
        }
        return lz.getTxt().get(locale);
    }


}

This is very handy if you bring in new features.

If you did that for a bigger website, it could get awkward, especially if you want to support more than 2 languages. In such a case, an import/export of the data is recommended, preferably in a format readable for translation agencies!
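Such an export does not need much code. A sketch (the Localization data is represented here as a plain map from key to per-language texts; the class and method names are made up):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class TranslationExport {
    /** Renders "key;de;en" lines, sorted by key, so a translation agency can work on them. */
    public static List<String> toCsv(Map<String, Map<String, String>> translations) {
        List<String> lines = new ArrayList<>();
        lines.add("key;de;en");
        // TreeMap gives a stable, sorted order for diffing between export rounds
        for (Map.Entry<String, Map<String, String>> e : new TreeMap<>(translations).entrySet()) {
            Map<String, String> txt = e.getValue();
            lines.add(e.getKey() + ";" + txt.getOrDefault("de", "") + ";" + txt.getOrDefault("en", ""));
        }
        return lines;
    }
}
```

Filling the input map from the Localization entities (key and txt fields) is a simple query over the collection.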

There is another "problem" with translations: orphans! It happens quite often during development that translations are created for something that becomes obsolete over time. Then you have translations in the database that nobody needs anymore. They are not easy to identify, because the keys for the translations partly depend on data and only exist at runtime.

For such a case, morphium offers the option of writing a last-access timestamp into a field. This lets you see when entries were last read. (Attention: use this only in conjunction with caching, since otherwise every read access to these elements results in a write access!)

If you put it all together, the following entity comes out:

@Entity
@Cache(maxEntries = 10000, timeout = 60 * 60 * 1000, clearOnWrite = true, strategy = Cache.ClearStrategy.FIFO, syncCache = Cache.SyncCacheStrategy.CLEAR_TYPE_CACHE)
@LastAccess
public class Localization {
    @Id
    private String key;
    @LastAccess
    @Index
    private long lastAccess;
    private Map<String, String> txt;


    public String getKey() {
        return key;
    }

    public void setKey(String key) {
        this.key = key;
    }

    public Map<String, String> getTxt() {
        if (txt == null) txt = new HashMap<>();
        return txt;
    }

    public void setTxt(Map<String, String> txt) {
        this.txt = txt;
    }

    public long getLastAccess() {
        return lastAccess;
    }

    public void setLastAccess(long lastAccess) {
        this.lastAccess = lastAccess;
    }
}

If there is a write access to the translations, the cache is completely emptied.

If you run Jblog in a cluster, you need a way to keep the caches in sync across the nodes. For this you define a SyncCacheStrategy. The following happens:

  • all Jblog instances initialize a messaging system in morphium
  • every update on cached elements sends an update message via the messaging
  • depending on the strategy, the individual nodes update their cache.
  • SyncCacheStrategy.CLEAR_TYPE_CACHE clears the whole cache for this entity
  • SyncCacheStrategy.REMOVE_ENTRY_FROM_TYPE_CACHE removes the entry from all search results in the cache (Caution: the element will not be output in cached search results, although it might fit)
  • SyncCacheStrategy.UPDATE_ENTRY updates the entry in the cache (Caution: can cause the item in the cache to be counted as search results, even though it does not really fit the criteria anymore - dirty read!)
  • SyncCacheStrategy.NONE - well ... just do not sync :smirk:
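The effect of the individual strategies on a node's local cache can be sketched roughly like this (a simplified model that uses a plain map as the per-type cache; this is not morphium's actual implementation):

```java
import java.util.Map;

public class CacheSyncSketch {
    public enum SyncCacheStrategy { NONE, CLEAR_TYPE_CACHE, REMOVE_ENTRY_FROM_TYPE_CACHE, UPDATE_ENTRY }

    /** Applies an incoming "element changed" message to the local cache of one entity type. */
    public static void onUpdateMessage(SyncCacheStrategy strategy, Map<String, Object> typeCache,
                                       String key, Object newValue) {
        switch (strategy) {
            case CLEAR_TYPE_CACHE:
                typeCache.clear();            // drop everything cached for this entity
                break;
            case REMOVE_ENTRY_FROM_TYPE_CACHE:
                typeCache.remove(key);        // may hide the element from cached results
                break;
            case UPDATE_ENTRY:
                typeCache.put(key, newValue); // risk of dirty reads, see above
                break;
            case NONE:
            default:
                break;                        // no synchronization at all
        }
    }
}
```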

But for this to work properly, the application needs to know which language is being used. Various steps are used for this:

  • if there is a lang parameter in the URL, it overrides everything else that is set
  • if there is a cookie with the language, we take that
  • if the browser sends info about its supported languages, we use that
  • if all of that fails, we take English :smirk:

This was implemented with an interceptor:


@Component
public class ValidLocaleCheckInterceptor extends HandlerInterceptorAdapter {
    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        String lang = "de";
        if (request.getParameter("lang") != null) {
            String ln = request.getParameter("lang");
            if (ln.contains("_")) {
                ln = ln.substring(0, ln.indexOf("_"));
            }
            ln = ln.toLowerCase();
            if (ln.isEmpty()) {
                ln = "de";
            }

            if (!LocalizationService.supportedLangs.contains(ln)) {
                StringBuilder url = new StringBuilder();
                url.append(request.getRequestURI());
                Map<String, String[]> map = request.getParameterMap();
                String sep = "?";
                for (String n : map.keySet()) {
                    if (n.equals("lang")) {
                        continue;
                    }
                    for (String v : map.get(n)) {
                        url.append(sep);
                        sep = "&";
                        url.append(n);
                        url.append("=");
                        url.append(v);
                    }

                }
                url.append(sep);
                url.append("lang=de"); // default locale
                response.sendRedirect(url.toString());
                //Utils.showError(404,"unsupported language",response);
                return false;
            }
            lang = ln;
        } else {
            Cookie[] cookies = request.getCookies();
            boolean found = false;
            if (cookies != null) {
                for (int i = 0; i < cookies.length; i++) {
                    if (cookies[i].getName().equals("lang")) {
                        lang = cookies[i].getValue();
                        found = true;
                        break;
                    }
                }
            }
            if (!found) {
                LocaleResolver localeResolver = RequestContextUtils.getLocaleResolver(request);
                Locale l = localeResolver.resolveLocale(request);
                if (l != null) {
                    lang = l.getLanguage();
                }
            }
        }
        request.getSession().setAttribute("lang", lang);
        Cookie c = new Cookie("lang", lang);
        c.setPath("/");
        c.setMaxAge(365 * 24 * 60 * 60);
        response.addCookie(c);
        return true;
    }
}

Tag Cloud

To calculate the tag cloud we simply use MongoDB's aggregation framework, and of course there is support for that in morphium as well:

 Aggregator<BlogEntry,WordCloud> a=morphium.createAggregator(BlogEntry.class,WordCloud.class);
 a.match(morphium.createQueryFor(BlogEntry.class).f(BlogEntry.Fields.state).eq(Status.LIVE).f("wl").eq(wl));

 Map<String,Object> projection=new HashMap<>();
 Map<String,Object> arrayElemAt=new HashMap<>();
 ArrayList<Object> l = new ArrayList<>();
 l.add("$category_path");
 l.add(0);
 arrayElemAt.put("$arrayElemAt", l);
 projection.put("category",arrayElemAt);
 a.project(projection);

 a.group("$category").sum("count",1).end();
 a.sort("-count");
 log.debug("Preparing category cloud");
 List<WordCloud> cCloud = a.aggregate();

Here a simple aggregation is run on the BlogEntry documents stored in Mongo. The whole thing is filtered by white label (wl) (Jblog supports several white labels; for me these are boesebeck.biz, boesebeck.name and caluga.de) and only posts in the status LIVE are considered.

On this data, a projection is performed to extract the individual categories ($arrayElemAt). Then we count the occurrences of these categories and voilà: a cloud ...

And so that the whole thing is mapped correctly in Java, you can put the result of the aggregation into an @Entity:

    @Entity
    public class WordCloud {
        @Id
        public String word;
        public int count;
        public int sz=0;

        public String getWord() {
            return word;
        }

        public void setWord(String word) {
            this.word = word;
        }

        public int getCount() {
            return count;
        }

        public void setCount(int count) {
            this.count = count;
        }

        public int getSz() {
            return sz;
        }

        public void setSz(int sz) {
            this.sz = sz;
        }
    }

Now the list of WordCloud entries just has to be displayed in the frontend with the corresponding size sz - and to make it look "funny", the whole thing is also randomized ...
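The mapping from occurrence count to the display size sz, plus the randomization, is plain Java. A sketch of how it might be done (the bucketing into 5 sizes is made up for illustration; Jblog may do it differently):

```java
import java.util.Collections;
import java.util.List;

public class WordCloudSizing {
    /** Maps an occurrence count to one of 5 font-size buckets (1 = smallest, 5 = largest). */
    public static int sizeFor(int count, int maxCount) {
        if (maxCount <= 0) return 1;
        return 1 + (int) Math.round(4.0 * count / maxCount); // linear scaling into 1..5
    }

    /** Shuffles the cloud so it looks "funny" instead of being sorted by count. */
    public static <T> void randomize(List<T> cloud) {
        Collections.shuffle(cloud);
    }
}
```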

Finished!

caching

Jblog uses caching quite extensively to keep performance high. In particular, the translations are cached. On the one hand, this gives you the ability to change the translations at any time via a frontend without having to re-deploy or restart; on the other hand, this data rarely changes, so it is good to keep it in memory to minimize database access and thus maximize performance.

In Morphium this is quite simple: just add the annotation @Cache to any Entity.

@Cache
@Entity
public class Translation {
...
}

statistics

Apart from the fact that morphium provides its own statistics, e.g. about the cache efficiency of individual entities (retrieved via morphium.getStatistics(); this returns a Map<String, Double> with, among other things, the number of elements in the cache, the cache hit ratio, etc.), a blog also needs some information about the accesses to it.

The whole thing is handled in Jblog by its own service, which counts the daily accesses:

  • The stats are stored in a separate collection, and the date is used as _id
  • on these stats, inc commands are then run together with the date to increase the number of hits / visitors: `morphium.createQueryFor(Stats.class).f(Stats.Fields.id).eq(dateString).inc(Stats.Fields.hits, 1);`
  • so that the numbers are not skewed by bots, you should filter bots out as far as possible. To do this, look at the request header; the UserAgent provides some information (at least a little - you will not catch 100% of them).
  • I also log the maximum number of concurrent users. By my definition for my applications, this is the number of unique session ids that have had a hit in the last 3 minutes. To measure this, the application has to record timestamps and aggregate the data accordingly.

Packing all that into Mongo is a bit crude, but it is enough for my mini-blog. If I had more traffic, I would not do it that way, but use a dedicated time series DB like Influx or Graphite.
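The concurrent-user definition above (unique session ids with a hit in the last 3 minutes) can be computed from a map of session id to last-hit timestamp. A minimal sketch (names are made up; Jblog's actual bookkeeping may differ):

```java
import java.util.Map;

public class ConcurrentUsers {
    private static final long WINDOW_MS = 3 * 60 * 1000; // 3 minutes

    /** Counts sessions whose last hit lies within the window before 'now' (millis). */
    public static long count(Map<String, Long> lastHitBySession, long now) {
        return lastHitBySession.values().stream()
                .filter(ts -> now - ts <= WINDOW_MS)
                .count();
    }
}
```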

Versioning of BlogPosts

This is a feature that nobody really needs, and it can certainly be filed under "over-engineering". The idea: you have a history of changes for each blog post and can jump back to every revision. That's quite nice, but in my case completely useless :smirk:

Here is the entity for a BlogPost:

@Entity
@CreationTime
@LastChange
@Lifecycle
@Cache(timeout = 60000 * 10, maxEntries = 1000)
@Index(value = {"text.de:text,text.en:text,title.de:text,title.en:text", "state,visible_since"})
public class BlogEntry extends BlogEntryEmbedded {
    @Id
    protected MorphiumId id;
    private List<BlogEntryEmbedded> revisions;
    private BlogEntryEmbedded currentEdit;

    public List<BlogEntryEmbedded> getRevisions() {
        return revisions;
    }

    public void setRevisions(List<BlogEntryEmbedded> revisions) {
        this.revisions = revisions;
    }

    public BlogEntryEmbedded getCurrentEdit() {
        return currentEdit;
    }

    public void setCurrentEdit(BlogEntryEmbedded currentEdit) {
        this.currentEdit = currentEdit;
    }

    public MorphiumId getId() {
        return id;
    }

    public void setId(MorphiumId id) {
        this.id = id;
    }

    @PreStore
    public void preStore() {
        if (year == 0) {
            GregorianCalendar cal = new GregorianCalendar();
            year = cal.get(Calendar.YEAR);
            month = cal.get(Calendar.MONTH) + 1;
            day = cal.get(Calendar.DAY_OF_MONTH);
        }
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        //        if (!(o instanceof BlogEntry)) {
        //            return false;
        //        }
        if (!super.equals(o)) {
            return false;
        }

        BlogEntry entry = (BlogEntry) o;

        if (id != null ? !id.equals(entry.id) : entry.id != null) {
            return false;
        }
        if (revisions != null ? !revisions.equals(entry.revisions) : entry.revisions != null) {
            return false;
        }
        return currentEdit != null ? currentEdit.equals(entry.currentEdit) : entry.currentEdit == null;
    }

    @Override
    public int hashCode() {
        int result = super.hashCode();
        result = 31 * result + (id != null ? id.hashCode() : 0);
        result = 31 * result + (revisions != null ? revisions.hashCode() : 0);
        result = 31 * result + (currentEdit != null ? currentEdit.hashCode() : 0);
        return result;
    }

    public enum Fields {tweetAboutIt, tweetedAt, titleEn, creator, created, lastUpdate, year, month, day, tags, textDe, textEn, renderedPreviewDe, renderedPreviewEn, visibleSince, state, categoryPath, titleEnEscaped, titleDeEscaped, titleDe, revisions, tweetId, id}
}

and the embedded revisions:

@Embedded
public class BlogEntryEmbedded {

    protected Map<String, String> title;
    //@Reference(lazyLoading = true)
    protected MorphiumId creator;

    @Index
    protected String wl;

    @CreationTime
    protected Long created;
    @LastChange
    protected Long lastUpdate;
    protected int year;
    protected int month;
    protected int day;
    protected List<String> tags;


    protected Map<String, String> text;
    protected Map<String, String> renderedPreview;
    protected long visibleSince;
    @Index
    protected Status state = Status.NEW;
    private List<String> categoryPath;
    @Index
    private Map<String, String> titleEscaped;
    private boolean tweetAboutIt;
    private long tweetedAt;
    private long tweetId;
    private List<String> additionalVisibleOn;

    public MorphiumId getCreator() {
        return creator;
    }

    public void setCreator(MorphiumId creator) {
        this.creator = creator;
    }

    public long getTweetId() {
        return tweetId;
    }

    public void setTweetId(long tweetId) {
        this.tweetId = tweetId;
    }

    public Long getCreated() {
        return created;
    }

    public void setCreated(Long created) {
        this.created = created;
    }

    public Long getLastUpdate() {
        return lastUpdate;
    }

    public void setLastUpdate(Long lastUpdate) {
        this.lastUpdate = lastUpdate;
    }

    public int getYear() {
        return year;
    }

    public void setYear(int year) {
        this.year = year;
    }

    public int getMonth() {
        return month;
    }

    public void setMonth(int month) {
        this.month = month;
    }

    public int getDay() {
        return day;
    }

    public void setDay(int day) {
        this.day = day;
    }

    public List<String> getTags() {
        if (tags == null) tags = new ArrayList<>();
        return tags;
    }

    public void setTags(List<String> tags) {
        this.tags = tags;
    }



    public long getVisibleSince() {
        return visibleSince;
    }

    public void setVisibleSince(long visibleSince) {
        this.visibleSince = visibleSince;
    }

    public Status getState() {
        return state;
    }

    public void setState(Status state) {
        this.state = state;
    }

    public Map<String, String> getTitle() {
        if (title == null) title = new HashMap<>();
        return title;
    }

    public void setTitle(Map<String, String> title) {
        this.title = title;
    }

    public Map<String, String> getText() {
        if (text == null) text = new HashMap<>();
        return text;
    }

    public void setText(Map<String, String> text) {
        this.text = text;
    }

    public Map<String, String> getRenderedPreview() {
        if (renderedPreview == null) renderedPreview = new HashMap<>();
        return renderedPreview;
    }

    public void setRenderedPreview(Map<String, String> renderedPreview) {
        this.renderedPreview = renderedPreview;
    }

    public List<String> getCategoryPath() {
        if (categoryPath == null) categoryPath = new ArrayList<>();
        return categoryPath;
    }

    public void setCategoryPath(List<String> categoryPath) {
        this.categoryPath = categoryPath;
    }

    public Map<String, String> getTitleEscaped() {
        if (titleEscaped == null) titleEscaped = new HashMap<>();
        return titleEscaped;
    }

    public void setTitleEscaped(Map<String, String> titleEscaped) {
        this.titleEscaped = titleEscaped;
    }

    public String getWl() {
        return wl;
    }

    public void setWl(String wl) {
        this.wl = wl;
    }

    public boolean isTweetAboutIt() {
        return tweetAboutIt;
    }

    public void setTweetAboutIt(boolean tweetAboutIt) {
        this.tweetAboutIt = tweetAboutIt;
    }

    public long getTweetedAt() {
        return tweetedAt;
    }

    public void setTweetedAt(long tweetedAt) {
        this.tweetedAt = tweetedAt;
    }
    
    public List<String> getAdditionalVisibleOn() {
        if (additionalVisibleOn == null) additionalVisibleOn = new ArrayList<>();
        return additionalVisibleOn;
    }

    public void setAdditionalVisibleOn(List<String> additionalVisibleOn) {
        this.additionalVisibleOn = additionalVisibleOn;
    }
}

As mentioned in the introduction, this is a feature that I will probably remove again. It does not really add much. But it's nice to have implemented something like that.

Conclusion

Jblog is certainly not suitable for every blogger; in particular, the many features and the adaptability of, say, Wordpress are completely missing here! If I want my blog to look different, I have to roll up my sleeves and rebuild the thing myself. There are no plugins.

However, I have to say that I've had virtually no problems with Jblog since it has been in use. And that counts for something ... for me as a private hobby blogger it is certainly one of the most important features, if not the most important one at all.

If anyone is interested in taking a closer look at Jblog, I'll put it online again - just leave a comment here or send me a mail. At the moment the whole thing runs as a private project on GitHub, simply because that makes it easy to work with deployment keys etc.


category: Computer --> Apple

MacMini 2018

2019-02-26 - Tags: Apple MacMini OSX

originally posted on: https://boesebeck.name

MacMini2018

I have been a Mac user for quite some time now and I have always been happy with it. Apple managed to deliver a combination of operating system and hardware that is stable, secure and easy to use (even for non-IT people) but also has a lot of features for power users.

(i already described my IT-history at another place https://boesebeck.name/v/2013/4/19/meine_it_geschichte)

My iMac, which I used for quite some time (since the beginning of 2011), died in a rather spectacular way (for a Mac): it just made a little "zing" and was off. I could not switch it back on again; broken beyond repair... :frown:

So I needed some new hardware. Unfortunately, Apple missed the opportunity at the last hardware event at the end of 2018 to put newer hardware into the current iMacs. They still ship a more than 2-year-old CPU. Not really "current" tech, but quite expensive.

The pricing of Apple products is definitely something you could argue about. Hardware prices were increased for almost everything, just as with the new iPhones. This is kind of outrageous...

In this context, the new MacMini is a very compelling alternative. The "mini Mac" has always been the entry-level Mac and the cheapest one in the line-up. Of course, you need to bring your own keyboard, mouse and screen.

Now the MacMini finally got some love from Apple. The update is quite good: a recent CPU and a lot of useful, fast ports (4x Thunderbolt 3, 2x USB-A 3.0, HDMI). This is the Mac for people who want a fast desktop but do not want to pay 5000€ for an iMac Pro.

I was a bit put off by the MacMini at first, because it does not have a real GPU. Well, there is one from Intel - but you could hardly call it a Graphics Processing Unit.

That has always been the problem with the MacMini - if you want to use it as a server, fine (I have one to back up my photo library). But as a media PC? Or even a gaming machine? No way ... as soon as decent graphics is involved, the MacMini failed.

But with Thunderbolt 3 you can now solve this "problem" using an eGPU (external graphics card). How is that supposed to work? External is always slower than internal, right?

Well, not always. Thunderbolt 3 is capable of delivering up to 40GBit/s transfer speed, and current GPUs only need 32GBit/s (PCI Express x16). This sounds quite OK ... (although there is some overhead in the communication)

And it is quite OK. I bought the MacMini with an external eGPU and I am astonished how much power this little machine has. Admittedly, all the connectors, cables, dongles etc. do not look as tidy as the good old iMac. And the best thing: if you want to upgrade your eGPU because there is a better one, fine ... or upgrade the MacMini and keep the eGPU - a gain in flexibility!

Performance comparison

Of course, my 8-year-old iMac cannot keep up with the current MacMini; that would be an unfair comparison. But I have to admit that the 2011 iMac was a lot quicker when it comes to graphics performance. So for gaming, the bare Mini is not the right choice.

The built-in hard disk, of course, is an SSD. Unfortunately, it is soldered in and cannot be replaced. But it is blazingly fast and reads/writes at up to 2000MB/sec.

Looking at my GeekBench results for the Mini, the single-core benchmark is similar to the current iMac Pro with a Xeon processor. That is truly impressive. But, of course, in the multicore benchmark the Mini can't keep up - it just does not have enough cores to compete with an 8-core machine. (I have the "bigger" MacMini with the current-generation i7 CPU.)

I plugged in (or rather, onto) an external Vega64 eGPU. This way I could compare the graphics performance with other current machines using the Unigine benchmarks. In those benchmarks, my Mini is about as fast as an iMac Pro with the Vega64. This is astonishing.

Costs

Well, how much does all this performance cost? Is it cheaper than a well-specced iMac 27"?

The calculation is relatively simple. To get something comparable in an iMac, you need to take the i7 processor - although that one is about 2 generations behind. As SSD storage, 128GB is probably not enough; 512GB sounds more reasonable. Anything else can be attached via Thunderbolt 3. A Samsung X5 SSD connected via Thunderbolt 3 is even faster than the internal SSD - so no drawback here.

You should upgrade the memory yourself, as Apple is very expensive here. This way an upgrade to 32GB costs about 300€ - Apple charges 700€!

But for the comparison the RAM does not matter, as I would do exactly the same with the iMac. So let's put it together. Right now, an eGPU case is about 400€, then a Vega64, also about 400€; the MacMini is about 1489€, plus 250€ for a screen (LG 4k, works great) and an additional 100€ for mouse and keyboard. All in all you end up with 2539€ +/- 200€!

Just for comparison: the iMac would cost about 2839€ - but in this configuration it would be slower than the Mini. With a Vega64 and a comparable CPU, the Mini in this configuration is more comparable to the base model of the iMac Pro, which costs 5499€ (but still has a slower GPU!).

conclusion

The new MacMini is definitely worth a thought, considering the costs in comparison to other Macs - especially since you do not have to buy everything at once (like buying the MacMini first, the RAM upgrade 3 months later, the eGPU case again later, and finally the GFX card). The biggest disadvantage of the Mini is that you now have more cables on your desk compared to the iMac ...

I have had the Mini running for some months now and I love it! If you need a desktop, the MacMini is worth a try - even compared with a MacBook!


category: Computer --> programming --> MongoDB --> morphium

Morphium 4.0.0 released

2018-11-09 - Tags: mongo mongodb java morphium

What was that again? Morphium is a sophisticated Object Mapper and Messaging System for MongoDB. It adds a lot of features to MongoDB, for example a dedicated InMemoryDriver, so that you can run all your Tests just in RAM without the need to install a MongoDB.

And good things need some time... so we are happy to announce that we just released Morphium 4.0.0. This release contains a ton of changes and improvements.

  • Simplifying the API at a lot of places in code
  • a lot of new features and improvements for the Messaging System (rejecting of messages, pausing/unpausing of message processing, message replay upon startup, ...)
  • Code quality improvements
  • adding a new jackson based ObjectMapper (has to be enabled in Settings)
  • Transaction Support
  • Improvement of Enum handling
  • lots of improvements with aggregation
  • the InMemDriver does now support ChangeStreams
  • and countless other changes

This is a big update which took 8 release candidates to test.

Morphium is available at maven central:

<dependency>
    <groupId>de.caluga</groupId>
    <artifactId>morphium</artifactId>
    <version>4.0.0</version>
</dependency>

or at github.


category: Computer --> programming --> MongoDB --> morphium

Morphium V4.0.0-RC5

2018-10-19 - Tags: mongodb java morphium

We went quite a long way to get here, but... eventually we will get there :smirk:

We put a lot of time and effort into this new Release Candidate #5 of Morphium, and we are getting close to the major release.

What was morphium again?

Morphium started as a simple object mapper for Mongodb 1.0 which made querying in Java simple using a fluent API. In addition, it had a lot of features unique at that time, like automatic dereferencing and lazy loading of references, caching, cluster-aware cache synchronization etc.

Messaging was implemented in an early stage in order to get cache sync to work.

Messaging became more and more a core feature of morphium and one USP of the API. So we gave this feature some love with this release...

Morphium V4.0.0

With this major release we added a lot of features and enhancements. We still need to adapt the documentation, but we will. And then we issue the release - promise :smirk:

These are some of the new features of Morphium 4.0.0:

  • enhanced messaging. Using the watch functionality we can achieve a push functionality for messaging. Maximum performance, minimum load (only available in ReplicaSets)
  • added priorities in messages
  • re-implemented object mapper on base of jackson
  • stream-lined api, removed unused parts
  • full support for Mongodb 4.0 multi-document transactions

The codebase is already in a well-tested state and is used in production environments in some places.

Availability

Morphium is OpenSource and can be downloaded (including Releases and Release Candidates) from github.

Or, even easier, from maven central:

<dependency>
    <groupId>de.caluga</groupId>
    <artifactId>morphium</artifactId>
    <version>4.0.0-RC5</version>
</dependency>


category: Computer --> programming --> MongoDB --> morphium

Morphium 4.0.0 - work in progress

2018-08-15 - Tags: java mongodb morphium

We are still working on getting morphium 4.0.0 done. We are a bit behind schedule, but want to explain here what is going on at the moment:

  • complete new ObjectMapper implementation based on Jackson. Using this, we are about 30% faster on average (even twice as fast in some cases). And we use a standard lib here
  • new messaging functionalities. This is going to be the new core of Morphium, the Messaging! We put a lot of effort in this component, improved a lot, made it faster
  • de-cluttering of the API and of the code on a lot of places

Unfortunately V4.0.0 is not 100% compatible with its predecessors. It might be necessary to migrate data - although that is not very probable. If you need assistance with that, please contact us on google-groups or github.

All of that is taking more time than estimated. But BETA4 of morphium 4.0.0 can be downloaded from maven central:

<dependency>
    <groupId>de.caluga</groupId>
    <artifactId>morphium</artifactId>
    <version>4.0.0-BETA4</version>
</dependency>

In order for Morphium to work, you need to add the mongo driver libs to your dependencies. We tested morphium with V3.8.0 of the driver libs, but later versions should also work. (This is one of the reasons why morphium does not declare a direct dependency on the driver! Use the version you already have in your project; if you have none, use 3.8.0.)

<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>bson</artifactId>
    <version>3.8.0</version>
</dependency>
<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongodb-driver-core</artifactId>
    <version>3.8.0</version>
</dependency>
<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongodb-driver</artifactId>
    <version>3.8.0</version>
</dependency>

Happy hacking!


category: global

Morphium 4.0.0 in the works

2018-07-17 - Tags: morphium java mongo

MongoDB released the long-awaited V4.0 just recently. Of course there are a lot of new features, but the most exciting one is probably the support for multi-document ACID transactions!

The mongo driver has supported all those features since V3.8.0 - and morphium supports them as of V4.0.0, the version number matching the mongodb release. The current BETA1 is available via maven central or from github.com.

new Features in Morphium 4.0.0

Transaction Support

Of course, this had to be added. It is quite simple to use:

//start a transaction via
morphium.startTransaction();
//all calls in this thread use the transaction now

morphium.commitTransaction(); //commit
morphium.abortTransaction();  //abort

MorphiumTransactionContext ctx = morphium.getTransaction();

//to have other threads use the transaction, you need to pass on the transaction context
//transactions are thread-local - so you need to set the context in other threads accordingly

morphium.setTransaction(ctx);

Attention: if your mongodb does not support transactions, this will cause an exception
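The thread-local nature of the transaction context can be illustrated with a small, self-contained sketch. Note that `TxContext` here is a hypothetical stand-in for `MorphiumTransactionContext`; no Morphium classes are involved:

```java
// Sketch: a transaction context held in a ThreadLocal, mirroring what the
// text above describes. TxContext is a stand-in, NOT the real Morphium class.
public class TxContextDemo {
    static class TxContext {
        final String id;
        TxContext(String id) { this.id = id; }
    }

    private static final ThreadLocal<TxContext> current = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        current.set(new TxContext("tx-1")); // like morphium.startTransaction()

        // a second thread does NOT see the context automatically...
        Thread t1 = new Thread(() ->
                System.out.println("worker sees: " + current.get()));
        t1.start();
        t1.join();

        // ...unless the context is passed on explicitly,
        // like morphium.setTransaction(ctx)
        TxContext ctx = current.get();
        Thread t2 = new Thread(() -> {
            current.set(ctx);
            System.out.println("worker sees: " + current.get().id);
        });
        t2.start();
        t2.join();
    }
}
```

The first worker prints `worker sees: null` because thread-locals are not inherited here; only after passing the context explicitly does the second worker see `tx-1`.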

Improved Messaging

We put a lot of effort into messaging, simplified the API in some places and added some new features. Messaging now supports a kind of "push" notification when connected to a replicaset.

Improved Cluster-Aware Caching

Using the new watch functionality of mongod, which is essentially another way to access the oplog, we could implement cache synchronization via database push. It works similarly to the messaging-based approach, except that messaging is not used: the notification about changes comes directly from the database. The @Cache annotation determines how to delete / update the cache.

We introduced a new cache synchronizer for that:

WatchingCacheSynchronizer sync = new WatchingCacheSynchronizer(morphium);
sync.start();

inMemoryDriver

We re-included the In-Memory-Driver in morphium and improved it. This makes it much easier to mock access to mongo, which is very important when testing.

MorphiumConfig cfg = new MorphiumConfig();
cfg.addHostToSeed("inMem");
cfg.setDatabase("test");
cfg.setDriverClass(InMemoryDriver.class.getName());
cfg.setReplicasetMonitoring(false);

Transactions are not working (yet) and aggregation might come in some later version of the driver. All other, more basic functionality is working.


category: global

What matters in a homepage layout

2018-07-17 - Tags:

originally posted on: https://boesebeck.name

there is no english version of this text available

Note: This text was provided by homepage-erstellen.de

What matters in a homepage layout

Once you have mastered the first hurdles of creating a homepage, you need to take care of a suitable layout. A clear and appealing layout ensures that relevant content can be found more easily and that visitors are more likely to return.

What makes a good layout

When designing the layout of a homepage, the first thing to consider is the purpose the homepage is supposed to serve. Is a product being presented? Do you want to inform people about a company's services? Or is the homepage used to raise awareness of a personal concern? What matters is that all relevant information can be found at any time. A good layout consists of headings, images, footers and columns. This way, information is sensibly pre-filtered and can be grasped at a glance. For visitors, this increases usability and the likelihood that they will visit the site again later. Colors and shapes are perceived first in a layout. A colorful layout may be suitable for a portfolio that is meant to express creativity, for example, but it hardly fits certain companies or service providers. For those, it is important that the information on every product can be found immediately.

A sidebar can be very useful

According to www.homepage-erstellen.de, a sidebar can prove very useful for visitors. However, it should not contain the most important content but mainly summarize supplementary information. Its alignment does not play a big role; the sidebar can be placed on the right as well as on the left side. A logo should be placed in the upper left corner. On e-commerce sites, the shopping cart is usually placed in the right corner. The search field is often located directly next to or in the immediate vicinity of the shopping cart.


category: Computer --> programming --> MongoDB --> morphium

Custom Caching in Morphium

2018-05-20 - Tags: java mongodb morphium cache

Since its first version, Morphium has provided an internal cache for all entities marked with the Cache annotation. This cache is configured mainly via those annotations.

This cache has proven its usefulness in countless projects and the synchronizing of caches in clustered environments via Morphium Messaging is working flawlessly.

But there are newer and more sophisticated cache implementations out there. It would not be clever to build all those features into morphium as well; better to leverage those projects. So we decided to add JCache support (JSR107) to morphium.

Of course, we had to adapt some things here and there; in particular the MorphiumCache interface needed to be overhauled.

Morphium always had the option to plug in your own MorphiumCache implementation, but this was not always easy to achieve in your own projects. We now use that mechanism to offer both the old, proven implementation and the new JCache-based one.

As always, morphium can be used out of the box, so we also ship a JCache version of our cache with morphium.

How to use

With the upcoming V3.2.2BETA1 (via maven central or on github), morphium will use the JCache-compatible implementation. If you want to switch back to the old, proven version of caching, you just need to change the config:

    MorphiumConfig cfg = new MorphiumConfig();
    cfg.setCache(new MorphiumCacheImpl());

If you create your MorphiumConfig via properties or via JSON, you need to set the class name accordingly:

  cacheClassName=de.caluga.morphium.cache.MorphiumCacheImpl

JCache Support

If you leave all those settings at their defaults, the JCache API is used. By default the cache creates the cache manager using Caching.getCachingProvider().getCacheManager(). This way you get the default of the default 😏

If you want to configure the cache on your own (ehcache properties for example), you just need to pass on the CacheManager to the morphiumCache:

  CachingProvider provider = Caching.getCachingProvider();
  morphium.getCache().setCacheManager(provider.getCacheManager());

Of course, in this example no additional options are set, but I think you can see how that might work.

BTW: the morphium-internal JCache implementation can also be used via the JCache API in your application, if you want to. Just add the system property -Djavax.cache.spi.CachingProvider=de.caluga.morphium.cache.jcache.CachingProviderImpl and Caching.getCachingProvider() will return the Morphium implementation of the cache.

Attention: All JCache implementations support expiration of the oldest / least recently used entries in the cache. Unfortunately morphium's policy is a bit more complex (especially regarding the number of entries), so morphium implements its own JCache housekeeping for now.

Additional info: Whatever cache implementation you use, you can still use the CacheSynchronizer to synchronize caches. This synchronization works via mongo even if you are not storing any entities and are using the cache purely as an application cache!

Maven settings

    <dependency>
        <groupId>de.caluga</groupId>
        <artifactId>morphium</artifactId>
        <version>3.2.2BETA1</version>
    </dependency>

known bugs

There are some minor known bugs in the current Beta, you might want to know:

  • the CacheListener callbacks do not seem to work properly with JCache implementations - at least when using EHCache. The morphium-internal implementation works
  • there is a bug with global cache override settings, which are not properly passed on to the underlying caches
  • Messaging sometimes seems to be affected by that as well; for some reason, we get a mongo exception here and there


category: Computer --> programming --> MongoDB --> morphium

MongoDB Messaging via Morphium

2018-05-06 - Tags: java programming morphium

One of the many advantages of Morphium is the integrated messaging system. It is used, for example, to synchronize the caches in a clustered environment. It has been part of Morphium for quite some time; it was introduced with one of the first releases.

Messaging uses a sophisticated locking mechanism to ensure that messages intended for one recipient are processed only there. Unfortunately this is usually solved by polling, i.e. querying the db every now and then. Since Morphium V3.2.0 we can use the OplogMonitor for messaging. This creates a kind of "push" for new messages: the DB informs clients about incoming messages.

This reduces load and increases speed. Let's have a look at how that works...

Messaging in Morphium - how it works

As mentioned above, with V3.2.0 we need to distinguish two cases: are we connected to a replicaset (only then is there an oplog the listener could listen to) or not?

no Replicaset => Polling

"No replicaset" is also the case if you are connected to a sharded cluster via MongoS. Here, too, messaging uses polling to get the data. Of course, this can be configured: how long should the system pause between polls, should messages be processed one by one or in bulk...

It all comes down to the locking. The algorithm looks like this (have a closer look at Messaging.java for more details):

  1. send a command to mongo which locks all messages that are either sent directly to me (= this messaging instance) or addressed to all and exclusive (should only be processed once). Every system can be identified by a unique UUID, and this id is used for locking, too. The command locks either one or all matching messages - depending on whether you want to process one or all
  2. read in all locked messages
  3. process them
  4. mark message as processed by me (UUID->processed_by)
  5. do a pause (configured) and go to 1.
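The locking step boils down to an atomic "set the lock owner only if nobody holds it yet" update. Here is a minimal, self-contained in-memory sketch of that idea; `Msg` and `tryLock` are illustrative stand-ins, not the real Morphium classes, and an `AtomicReference` plays the role of mongo's atomic update:

```java
import java.util.UUID;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the exclusive-message locking idea: a lock succeeds only for the
// first instance that claims it (atomic check-and-set), mirroring the atomic
// update command Morphium sends to mongo. Not the real Morphium Msg class.
public class LockingDemo {
    static class Msg {
        final String id;
        final AtomicReference<String> lockedBy = new AtomicReference<>(null);
        Msg(String id) { this.id = id; }
    }

    static boolean tryLock(Msg m, String instanceId) {
        // succeeds for exactly one caller - the atomicity is the whole point
        return m.lockedBy.compareAndSet(null, instanceId);
    }

    public static void main(String[] args) {
        Msg m = new Msg("msg-1");
        String a = UUID.randomUUID().toString(); // messaging instance A
        String b = UUID.randomUUID().toString(); // messaging instance B

        System.out.println("A locks: " + tryLock(m, a)); // first claim wins
        System.out.println("B locks: " + tryLock(m, b)); // already locked
        System.out.println("locked by A: " + m.lockedBy.get().equals(a));
    }
}
```

Only instance A acquires the lock; B's attempt fails because the check-and-set is atomic, which is exactly why a plain "read, then update" sequence would not be safe here.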

Replicaset => OpLogMonitor or ChangeStreamListener

The OplogMonitor has been part of Morphium for quite a while now. It uses a tailable cursor on the oplog to get informed about changes. A tailable cursor stays open even if there are no more matching documents, and sends all incoming documents to the client. So the client gets informed about all changes in the system.

With morphium 4.0 we use the change stream instead of the oplog to get notified about messages. This is just as efficient, but does not need admin access.

So why not use a tailable cursor directly on the Msg collection then? For several reasons:

  1. it only works with capped collections. Which is not a showstopper in our case, but unpleasant
  2. it only informs about new entries in the collection. But the locking algorithm depends on the update being atomic - hence this is not working. We could try to lock messages by erasing old ones and creating new ones, but this is not atomic and will lead to misreads.

Messaging based on the OplogMonitor looks quite similar to the algorithm above, but the push notification simplifies things a bit. On new messages, this happens:

  1. if the incoming message is an exclusive one, just try the locking described above. As we now have the ID, this is a lot simpler and more efficient
  2. if it is non-exclusive (and not sent by myself), just process it
  3. if it is an exclusive message sent directly to me, process it

Usually, when an update to a message comes in, nothing interesting happens. But to be able to handle rejected messages (see below), we just start the locking mechanism to be sure.
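The dispatch rules for an incoming message can be sketched as a small, self-contained decision function. The field names (`exclusive`, `sender`, `recipients`) are illustrative, not the real Msg API:

```java
import java.util.List;

// Sketch of the incoming-message dispatch rules described above.
// Names are illustrative stand-ins, not the real Morphium Msg API.
public class DispatchDemo {
    enum Action { TRY_LOCK, PROCESS, IGNORE }

    static Action decide(boolean exclusive, String sender,
                         List<String> recipients, String myId) {
        if (sender.equals(myId)) return Action.IGNORE;        // never read own messages
        if (recipients.contains(myId)) return Action.PROCESS; // sent directly to me
        if (exclusive) return Action.TRY_LOCK;                // compete for the lock
        return Action.PROCESS;                                // broadcast: just process
    }

    public static void main(String[] args) {
        System.out.println(decide(true, "other", List.of(), "me"));     // exclusive broadcast
        System.out.println(decide(false, "other", List.of(), "me"));    // plain broadcast
        System.out.println(decide(true, "other", List.of("me"), "me")); // exclusive, direct
        System.out.println(decide(false, "me", List.of(), "me"));       // my own message
    }
}
```

An exclusive broadcast triggers the locking step, a direct or non-exclusive message is processed right away, and a messaging instance never processes its own messages.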

how to use Messaging

Well, that is quite simple. Just create an instance of Messaging and hit start 😏

   Messaging messaging=new Messaging(morphium, 500, true);
   messaging.start();

Of course, you could also instantiate it using spring or the like.

Message sending

To send a message, just do:

    messaging.storeMessage(new Msg("Testmessage", "A message", "the value - for now", 5000));

This message has a ttl (time to live) of 5 secs; the default ttl is 30 secs. Expired messages are automatically deleted by mongo.

Messages are broadcast messages by default, meaning all listeners may process them. If you set a message to be exclusive, only one of the listeners gets permission to process it (see locking above).

        Msg m = new Msg();
        m.setExclusive(true);
        m.setName("A message");

this message will only be processed by one recipient!

And the sender does not read his own messages!

Of course, you can send a message directly to a specific recipient. This happens automatically when sending answers, for example. To send a message to a specific recipient you need to know its UUID. You can get that from messages being sent (the sender, for example) or by implementing some kind of discovery...

        Msg m = new Msg("testmsg1", "The message from M1", "Value");
        m.addRecipient(recipientId);
        messaging.storeMessage(m);

storeMessage vs queueMessage

In the integration tests of Morphium both methods are used. The difference is quite simple: storeMessage stores the message directly in mongodb, whereas queueMessage works asynchronously - which might be the better choice when it comes to performance.

receiving messages

Just register a message listener with the messaging system:

    messaging.addMessageListener((messaging, message) -> {
        log.info("Got Message: " + message.toString());
        gotMessage = true;
        return null;
    });

Here, messaging is the messaging system and message is the message that was sent. This listener returns null, but it could also return a message that should be sent back as an answer to the sender.

Using the messaging object, the listener can also publish messages of its own that are not meant as answers.

In addition, the listener may "reject" a message by throwing a MessageRejectedException - the message is then unlocked so that all clients may process it again (unless it was sent directly to me).

usage of messaging - cache synchronization

Within Morphium, the CacheSynchronizer uses Messaging; it takes a messaging system in its constructor.

The implementation is not that complicated. The CacheSynchronizer just registers as a MorphiumStorageListener, so that it gets informed about all write accesses (only then do caches need to be synchronized).

public class CacheSynchronizer implements MessageListener, MorphiumStorageListener {

}

On write access, it checks whether a cached entity is affected and, if so, a clear-cache message is sent via messaging. This message also contains the strategy to use (clear the whole cache, update the element, and so on).

Of course, incoming messages also have to be processed by the CacheSynchronizer. But that is quite simple: when a message comes in, erase the corresponding cache mentioned in the message according to the strategy.

You can also send those clear messages manually by accessing the CacheSynchronizer directly.

We should also mention that you can be informed about all cache sync activities via a dedicated listener interface.

conclusion

The messaging feature of morphium is not well known yet, but it can serve as a simple replacement for full-blown messaging systems - and with the new OplogMonitor feature it is better than ever.


category: Computer --> programming --> MongoDB --> morphium

New Release Morphium 3.2.0Beta2 - Java Mongo Pojo Mapper

2018-05-02 - Tags: morphium java mongodb mongo POJO

Hi,

a new pre-release of morphium is available now: V3.2.0Beta2. It includes a lot of minor bugfixes and one big feature: messaging now uses the OplogMonitor when connected to a replicaset. This means no more polling - the system is informed via a kind of push!

This is also used for cache synchronization.

Release can be downloaded here: https://github.com/sboesebeck/morphium

The beta is also available on maven central.

This is still beta, but will be released soon - so stay tuned 😏


category: Computer --> programming --> MongoDB --> morphium

New Release of Morphium V3.1.7

2017-11-21 - Tags: java mongo

We just released V3.1.7 of morphium - the caching mongodb POJO layer.

  • performance increase insert vs upsert
  • update handling of non-mongoid ID-fields (bugfix)
  • updated Tests
  • new strategy for buffered writer: WAIT
  • setting maxwait / timeout for waitstrategy in bufferedwriter
  • moving id creation to morphium, implementing proper inserts, fixing bugs
  • fixing buffered writing on sharded environments
  • performance increase
  • mongodb driver version update, checkfornew default fix

Details can be found at the project page on github. You can easily add it to your projects using maven:

<dependency>
    <groupId>de.caluga</groupId>
    <artifactId>morphium</artifactId>
    <version>3.1.7</version>
</dependency>


category: Computer --> programming --> MongoDB --> morphium

new release of Morphium 3.1.5

2017-09-29 - Tags: morphium

This release is about tidying up things a bit and includes some minor fixes and tweaks.

  • fixed some statistics
  • removing drivers into different project
  • improving byte array / binary data handling
  • fixing some tests
  • fixing checkForNew behaviour, is now a bit easier to understand. If switched off globally, setting it at the annotation does not have an effect.

Available via Maven Central


category: global

git and optware fail after Qnap update to 4.3.3

2017-08-21 - Tags: git qnap storage

For quite some time now, I have had a qnap running in my basement, covering whatever storage needs my servers or family members might have.

The qnap is also used as a git server - which worked totally fine for the last couple of years but failed recently...

fail after update

I just saw that there was a firmware update pending for the TS and that it is more or less the last one for this old model (hey, the qnap is not that old - 3 years maybe?). There was also a warning in the release notes that the switch to 64 bit might cause some apps to stop working...

So far, so uninteresting - the usual blah blah... Unfortunately, Optware (which installs additional opensource software) is obviously only available in 32 bit.

But this is something you only learn the hard way: the software just would not work. Trying to access its GUI results in "internal server error", "page not found" or simply "Permission denied" - depending on what you just tried to make it work.

If you log in via ssh and try to run ipkg in the shell, you get a "file not found" error, even when executing the file via its absolute path. Linux gurus know: this means some lib is missing!

And in that case, I do not need to dig further - the 32 bit libs are not there anymore.

That alone would not be the problem, but everything installed by Optware also relies on those libs and hence fails now... bummer!

to solve the problem..

.. you install an alternative to Optware called Entware. You download the file and install it via the Qnap GUI. Unfortunately this tool does not have a "nice" GUI for this (the so-called GUI of Optware was never nice either), just a command line tool called opkg (mind the o).

After that you only need to create symlinks for the git binaries (after fiddling with sshd to enable public-key auth and logins for more than just the admin user):

cd /bin
for i in /opt/bin/git*; do
    ln -s "$i"
done

And as always - happy hacking!


category: security

Virus scanners for Mac or iPhone

2017-05-29 - Tags: virus security

originally posted on: https://boesebeck.name

As some of my readers are not that good at reading and understanding German, I'll try to write some of my posts that might be interesting in English as well. I hope everything is understandable so far 😏 - this is not a translation, just a rewrite in English. Let's start with the last post, about anti-virus software.

Anti-Virus software on the Mac or iPhone?

People are more and more concerned about viruses, and Mac users too are starting to worry about that threat. So, is it necessary to install anti-virus software on the mac? I have been asked that question several times lately...

First of all, this question is totally justified. Everyone should harden their computers and phones until they feel safe. But a feeling is actually all that installing anti-virus software on the mac would produce. As of now there is only a handful of harmful software known for the mac, all of it filtered by the mac's own security mechanisms and thus not really a threat anymore.

At the moment the Mac is safe - but soon...

"Soon it will be very bad for Mac users. Viruses will come..."

I hear that every year, whenever new market share numbers are published and OSX gains. Then everybody tells me that the market share will soon reach some magic percentage at which it becomes so interesting for virus programmers to write viruses for Macs that there will be a flood of malware. Or will there?

Of course, market share definitely influences the amount of malware for a certain system. But in addition, you should take the necessary effort and feasibility into account. And the payoff... (in malware terms: what could I gain? Keylogging? A botnet?)

I think one should take both into account: if a system is easy to hack, it will be hacked, even if almost nobody is using it. If a system's market share is not that high but it is relatively simple to hack - it will be hacked! For example, the Microsoft Internet Information Server (IIS) is attacked far more often than the market share leader Apache. When a system is very hard to hack, you need a good incentive to make the effort - which could be the reason why there is no real virus for Linux or OSX.

And when I write "hacked", I mean it in the virus sense - not remote hacking of user accounts. Also, it needs to be done more or less automatically by software; otherwise there is no real virus or worm. If somebody wants to hack a certain machine and has the knowledge, he can do it - depending on resources, effort and motivation ;-) I once knew a hacker you could hire to hack the servers of a competitor, for example. Those things are always possible, but that is almost always an administrative problem; there is no real protection against those guys. You can hack any machine you can physically touch - resources and motivation required, of course. Best example: the jailbreaking of iOS! If there is enough motivation, resources and knowledge, you're not really safe (see NSA & co). So it's a question of effort: hacking the machine of a 14-year-old student is definitely not as interesting as hacking the machine of a CEO of a big company or a politician.

The same is true for malware and viruses: malware is not developed for the fun of it (well, at least most of the time it's not). People want to make money with it - this is the only reason viruses exist! Maybe that's why the rumor persists that anti-virus software vendors pay virus developers to spread viruses every once in a while. Who knows... I cannot rule that out for sure; I met some Russian guys who claimed it to be true. If so, I don't understand why there is so little malware for Linux and OSX. That would be a huge market for anti-virus software vendors - millions of users, a completely new market segment worth millions or billions of dollars.

I think viruses are only developed to MAKE MONEY, either directly (data theft, credit card fraud etc.) or indirectly (spamming, using hacked machines as bots on the way to the real target, botnets etc.). And when money is involved, the effort and resources necessary must of course be lower than the estimated revenue. So we are back at the combination of effort and market share: market share influences the potential revenue (assuming that the more machines are hacked or affected by malware, the more money is made), while effort is the cost. And in some cases the balance is obviously not positive...

malware in general

First of all, you need to distinguish between the different kinds of malware. In the media and in the heads of non-IT people, all malware is called a "virus". But it's necessary to know what kind of software pest is out there in order to protect yourself against it effectively.

You can classify three different kinds of malware: viruses, trojans and worms - though there are mixtures of those in the wild, like a virus that spreads like a worm (hence the umbrella term "malware").

  • a virus is a little program which reproduces itself on the system and does its dirty work there. Most of the time, viruses exploit security holes to gain more privileges. Once those privileges are gained, the virus does things you usually do not want it to do - like deleting files or sending data to a server...
  • a trojan is quite similar to a virus, but needs the user's help to get installed. Usually it looks like a useful piece of software, a tool of some kind, but in addition to the functionality you desire, it also installs malware on the system. Usually the user is asked because the software needs more access - on OSX at least. But even if it does not seek privilege escalation, your data is still at risk. See wikipedia
  • a worm is a piece of malware that is capable of spreading itself over the network (either locally or over the internet, see wikipedia). You can easily protect yourself against worms by unplugging the network from your computer (and/or disabling WiFi) or at least disabling internet access. Sounds insane, but I have seen offices and departments that do exactly that: the whole building is unplugged from the internet; only one specially secured room has internet access - and no connection to the local network.
  • a new type of malware just got famous with WannaCry: ransomware. These are usually trojans which then use bugs in the system to encrypt all your data - and you can only decrypt it if you send a couple of bitcoin to the author.
  • of course, there are mixtures of all those types. A typical case is a trojan that acts like a virus on the system to gain root (or admin) access and uses that to spread itself over the network (worm).

on the Mac?

On the mac you always get such "warning messages" if any malware wants to do something out of the ordinary that needs system privileges. Exactly that happened a couple of months ago when a trojan was installed via Java and a security issue therein. But still, the users were asked because the software needed more privileges - and enough people just said "yes" to every question...

Please do not get me wrong, I do not want to downplay malware. It is out there and causes a lot of harm and cost. But you can protect yourself from trojans more or less by using common sense:

  • Why does the new calculator app need access to my contacts?
  • Why does my new notes app need admin permissions?
  • Why does software XY ask for this or that permission?
  • Is it clever to download tools from an untrusted source, especially if this source offers cracks or exploits?

It gets harder if the trojan uses its newly gained privileges to hack the system itself, maybe even exploiting additional security issues, so that the user is not asked at all. Here a secure operating system architecture helps to avoid that kind of thing - which is usually the case with all unix OSes.

Viruses and worms cannot be avoided so easily, since they exploit bugs in the system. But even then, unix-based systems are a bit better prepared for that case than others.

This is due to a very strict separation between "system" and "user processes" and between the users themselves. Especially on OSX, sandboxing is an additional means against such malware. And the graphical user interface is not bound as tightly to the operating system kernel as it is in Windows NT, for example.

But overall, the admin is the one who really determines how secure a system is. He should know about the problems his OS has and take countermeasures accordingly.

Malware on mobile devices

If we are talking about malware, we should also have a closer look at mobile devices. Smartphones in particular are often attacked, because they hold a lot of interesting data that is worth a lot of money. Or you can make money directly (e.g. by sending expensive SMS).

To "break into" such a closed system, security-relevant bugs are very often exploited. But sometimes plain social engineering is also successful.

Usually the user is then made to perform a certain action that involves downloading something which installs a trojan on the system, or opening up the system so that the attacker can install some malware. Or you just "replace" an official app in the corresponding app store.

Trojans on the smartphone are usually disguised as little useful tools, like a flashlight app. But they then copy the address book, send out expensive text messages, switch on video and audio for surveillance and so on.

It's hard to actually do something against that, because you do not know whether the app you install does something evil or not. Apple tries to address this problem with the mandatory review process that all apps in the App Store need to pass: an automated and a manual check before anyone can download them. For example, apps are not allowed to use unofficial APIs (for accessing the internals of the os), and they must do exactly what the app description tells the users they do.

This is no 100% protection, but it is quite good (at least, I do not know of any malware in the App Store right now).

But I would also count WhatsApp, Viber and the like as malware. They do exactly what a trojan would do: grab data and upload it to a server. But here the user happily agrees and likes it... that is a different topic, though.

On iOS users are a bit more secure than on android (if you do not jailbreak your iphone). Android is based on unix, but some of the security mechanisms within unix have been "twisted". There is a "kind of" sandbox, created by giving every app on the device its own user, so all processes are separated from each other. Sounds like a plan. But then you end up having problems with access to shared resources, like the SD card, which needs to be globally readable!

Also, the security settings of apps are currently "all or nothing" (that changed in later versions, at least a bit): you can either grant the app all the permissions it wants, or no permissions at all.

The problem is that you need to grant the permissions before actually using the app. This makes life very easy for malware authors, as people are used to simply allowing everything an app asks for.

In addition to that, Android apps have the option to download code over the internet - this is forbidden on iOS, and for good reason: how should any reviewer verify that the downloaded code stays the same after the review? Today I download weather data, tomorrow some malware which sends chargeable text messages?

Another problem is that there is not one single store for Android but more like a quadrillion of them. Hence you can install software from almost any source onto your Android device.

Of course, every OS has bugs which might be used to execute good or evil code on the device. Hence those OSes receive regular updates which should fix security-relevant bugs and issues. With iOS you can be sure that you get updates for your device and its OS for at least a couple of years (current iOS versions still run on 3 to 4 year old hardware). With Android it is not as easy to make such a statement, as support depends strongly on the vendor. Support for devices may be dropped after less than 1.5 years. Especially the cheap Android phones lose support quite early, which means there are still Android 2.x devices out there (and you can actually still buy new devices with it installed) - including all the bugs that old OS version had, which makes it quite interesting for malware authors.

In combination with the somewhat less secure system and the insecure sources of software, this makes Android a lot more prone to being hacked or infected by malware. And that makes it especially interesting for the bad guys out there.

This leads to really ridiculous things like virus scanners and firewalls for smartphones - read about it here, in German.

Say about Apple what you want, but the approach of reviewing every app for the App Store is a good thing, at least for the end user (and by that I do not mean the power user who wants his own version of Winterboard installed). Even if you are not allowed to do "everything" with your phone - normal users usually do not need to.

And the power user can still jailbreak his iPhone - and if he knows what he is doing, that might be fine.

installed software as a gateway for malware

Unfortunately viruses, trojans, and malware in general can use any bug of any software on the system, no matter whether it is part of the OS or not. So a breach can happen via third-party software - like the "virus" that infected a couple of thousand Macs through an installed Java. In that case, the user was again asked several times(!) whether he wants to grant admin permissions to a Java app. If you agree, your system is infected; if not - well, nothing happens. Common sense is a great "intrusion prevention system" here.

Of course, OSX or any other operating system cannot prevent third-party software from doing dubious things - especially if the user agreed to it. But the malware can only gain the permissions of the software that was used as the gateway. And on OSX and iOS all applications run in a sandbox with very limited permissions. If the app the malware uses as a gateway does not have admin permissions, the malware won't have them either.

If all the third-party software you run on your system has only minimal permissions, then malware using it as a gateway would also have only minimal permissions, could not do too much harm, and could easily be removed.

But the thing is, just getting access as a normal user is not the goal of such a virus vendor - they want your machine to be part of a botnet, in order to sell your computing power, to use it in the next DDoS attack, or simply to use it as a spam bot.

It is also in the best interest of the virus vendor to make the software as hard as possible to remove. So everything needs to be buried deep in the system files, where normally no user takes a closer look.

And that is usually only possible if the malware gains admin permissions. It could use "privilege escalation" exploits to gain more permissions - in the best case (for the attacker) without the user noticing.

Usually the user should be asked whenever a process tries to gain more permissions, and he may or may not agree (this happens every time a process tries to do something outside of its sandbox). For the virus, of course, that would be bad, as it would reduce its rate of success. So virus vendors try very hard to avoid this kind of informing or asking the user.

On Unix systems this is quite a hard task, or at least a lot harder than on Windows (see here or here). In almost all cases on OSX, the user is informed about software doing something strange.

But there is one thing we should think about even more: if any software can be used as a gateway, I should reduce the number of programs on my system to a minimum (especially those with network functionality... which is almost every app nowadays). In particular, I should keep the amount of software running with admin permissions at the absolute minimum - which is zero! Unfortunately, virus scanners, firewalls and similar "security" software need admin permissions to do their job. This is one of the reasons why antivirus software is so often the target of attacks from malware and viruses, and ends up spreading the very thing it is supposed to protect us from (this has happened on Windows machines).

Add to that the fact that antivirus software can only detect viruses that have been publicly known for a while, and installing it does not actually increase your protection by much.

The same goes for firewalls, which unfortunately have their use on Windows systems, but not on Unixes or OSX. How come?

Well, on Unix systems the network services are usually disabled, or not even installed - so the visible footprint of such a machine on the internet is quite small.
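You can check this "footprint" yourself: if nothing is listening on a port, a connection attempt is simply refused, and there is nothing for a firewall to block. A small Python sketch (port numbers are just examples):

```python
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# On a typical Unix desktop with no services installed, classic
# Windows-era ports such as SMB (445) are simply closed.
for port in (22, 139, 445):
    print(port, is_listening("127.0.0.1", port))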

Windows, on the other hand, depends on some network services to run even if you do not actively use them. Disabling those services (SMB is one of them - this is what WannaCry used!) would affect the system badly and some things would not run as expected (see here).

Hence, if your system has a minimal footprint - or attack surface - you do not need a firewall.

Btw: do not confuse this local firewall with a real IP-filtering firewall installed in routers!

Virus scanners on servers

So there is a lot that explains why using virus scanners on the desktop (especially a Unix desktop) can have negative effects, or at least no effect. You're probably fine without them...

But on servers, things look a bit different.

If I have clients that are not well maintained, or that I just do not know (or that simply run Windows emoji people:smirk ), I want to avoid storing data on my server that could infect them. Even if the viruses do not infect my server or my Mac, the mails could be read by other clients, which might then be infected. So, be nice to your neighbors...

Do not forget that virus scanners need resources - and sometimes a lot of them (they monitor every access to and from the system, which in turn can or will slow it down to a certain extent).

Security is not for free

Whatever you do, security comes at a cost. In the "best" case, things merely become inconvenient to use, because you need complex authentication or have to agree to a lot of popups that appear every second (remember Windows Vista? emoji people:smirk )

In the worst case there are errors because of the high complexity, or it gets expensive because you need additional hardware (iris scanners, external firewalls, application-level firewalls that scan data for viruses...) while still being inconvenient at the same time. And time consuming - those systems need to be maintained.

So you need to decide what level of security you want and what is sensible. An iris scanner for the bathroom is probably a bit over the top... don't you think?

common sense

The best weapon we have against malware is still the thing between our ears! Use it when surfing and when installing software. No software will ever be able to stop you from doing something stupid to your system.

So it is not OK to feel too safe just because you are on a Mac - that leads to sloppiness! Passwords, for example, need to be real passwords. If a password can easily be guessed, why should malware take the detour of hacking the system? The bad guys can just "walk in", and you have lost your system to them...
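A back-of-the-envelope calculation shows why guessability matters. Assuming the characters are chosen randomly (a big assumption - real human passwords are far worse), the entropy of a password is simply log2(charset_size ^ length):

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    """Rough entropy of a randomly chosen password: log2(charset^length)."""
    return length * math.log2(charset_size)

weak = entropy_bits(26, 8)                     # lowercase only, ~37.6 bits
strong = entropy_bits(26 + 26 + 10 + 32, 12)   # mixed charset, ~78.7 bits
print(weak, strong)
```

Every extra bit doubles the number of guesses an attacker needs, so the difference between those two is a factor of roughly 2^41 - which is the difference between "guessed over lunch" and "not worth trying".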

I don't want you to get paranoid about this either! Just keep your eyes open. When installing software, do it only from trusted sources. And from time to time, take a closer look - there was malware available in the App Store for a couple of days or weeks before Apple removed it. Even the best system can be outwitted.

Think about which apps you use and which you don't. Even apps that are not really malware per se can do harmful things - like WhatsApp and Viber. You should ask what is happening there! WhatsApp uploads your address book to Facebook's servers, and the people whose data you upload there are not asked whether they like that... just a small example.

Just remember: if the product is for free, then YOU are the product

There is no such thing as free beer!

conclusion

I tried not to be too anti-Microsoft - which is hard, because most of these security issues exist only on Windows systems. Unfortunately, on Windows it is up to the user to make the system secure and stop it from doing harmful things.

Antivirus software lulls the user into feeling safe, but most of it really has a lousy detection rate. And really new viruses are not detected at all.

So, should you install antivirus software on a Mac? You need to decide for yourself, but I tend towards "no, you should not". There are valid reasons to see it differently, but I am not alone with my thoughts: see here and here.

But you should definitely distinguish between desktop and server: if you serve data to Windows machines as well, a virus scanner might be a useful thing.

Almost everything I wrote here is valid for OSX as well as for Linux and other Unixes. Right now there is no known, widespread malware for Unix-based systems - that I know of.
