Data serialization format used by this receiver to parse the incoming traffic. The receiver format must correspond to the transmitter (appender) format.


xml is the classic log4j 1.2 XML DTD format. This format is used by most log4xxx frameworks such as log4j, log4net, log4php, NLog and others.


json is a proprietary JSON format used by the logFaces Java appenders.


gelf is the Graylog Extended Log Format, a convenient format used by a number of tools such as Graylog, Logstash, and Fluentd.


Access to this server can be secured with SSL using your own certificates and keys. The private key and certificate chain are stored in a key store in the server configuration directory. The store file is password protected; you will be asked to supply this password during the import.


Certificates should come in X.509 DER encoding. Sometimes there is a chain of certificates which need to be used together; in such a case, make sure the entire chain is submitted.


The private key should come in PKCS#8 DER encoding. If your private key is not in this format, make sure to convert it to PKCS#8; otherwise the server will not be able to import it.
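
For example, if your key and certificate are currently in PEM format, OpenSSL can convert them to the required DER encodings (the file names below are placeholders):
openssl pkcs8 -topk8 -inform PEM -outform DER -in private-key.pem -out private-key.der -nocrypt
openssl x509 -in certificate.pem -outform DER -out certificate.der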


Access to this server can be secured with SSL using a self-signed certificate generated by this server instance. Use this option for testing and debugging environments. Accessing such a server instance over SSL will produce warnings in browsers.

Web access to this server can be secured with SSL. Once enabled, access to this server at the given port will be strictly over the HTTPS protocol. This affects admin, desktop and API clients.


The SSL certificates are stored in the key store file in the local configuration directory. You can use your own or self-signed certificates. Go to security settings for more details.

If the server is installed on a computer with several network cards, you can bind server sockets to a particular address or host name. This is a good idea if you need physical separation of the transport.


When 'any-address' is specified, the server will pick the default binding; this allows access to the server by any address it supports. Otherwise, access will be strictly by the address you select.

All web receivers hosted in logFaces are mapped to /receivers/* URLs. Each web receiver corresponds to a single servlet whose path completes the URL above.


For example, if this receiver path is specified as xxx, then the full URL for posting log data to this receiver from outside will be http://this-host:8050/receivers/xxx.

There are two modes of administrator authentication. It is used whenever users try to log in to this application.


In local mode, the authentication of admin users will be performed against locally stored credentials. This is the default authentication mode.


In ldap mode, the authentication of admin users will be delegated to your LDAP server. Make sure to specify the admin user name; this is what will be sent to your LDAP server. The password is not required.

The receiver will listen for incoming logs on this port.


Application appenders intending to use this receiver must set this port number in their configuration. In most cases the appender parameter is named 'port'.
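
As an illustration only, a log4j 1.x style properties configuration might look like the sketch below; the appender class name is hypothetical, so substitute the appender shipped with your logging framework and match the port to this receiver:
# the appender class below is illustrative - use the one provided by your framework
log4j.rootLogger=INFO, LFS
log4j.appender.LFS=com.moonlit.logfaces.appenders.AsyncSocketAppender
log4j.appender.LFS.remoteHost=your-logfaces-host
log4j.appender.LFS.port=55200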


When specified, the receiver will override the application (domain) name of the log events it receives. This should be used with applications which are unable to provide a consistent application name.


Leave blank to keep the original names.


Because the syslog standard is very loose in terms of formats, you may want to provide your own interpretation of syslog messages by setting a regular expression pattern for this receiver. It will then extract the relevant information from the named groups you specify.


For example, you may want to include an MDC context variable when using RFC 5424, or even with the older RFC 3164 by adding some words to the message and then extracting them with the pattern.


Use the patterns library to build and test patterns; see the user manual for details. If no pattern is specified, logFaces will try its best to structure the incoming data. Note that this is not always possible and depends very much on the log data source.
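
For illustration only, a pattern for an RFC 3164 style line might look like the sketch below; the pattern names used here (SYSLOGTIME, HOST, GREEDY) are assumed to be defined in your patterns library:
%{SYSLOGTIME:loggerTimeStamp} %{HOST:hostName} %{WORD:domainName}: %{GREEDY:message}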


Leave blank if the name of the application is properly transmitted by the syslog source.


When the name of the application is not transmitted by syslog and cannot be extracted from the message body, this name will be used as the default substitute.


If no default name is specified and the name can't be resolved from the messages, logFaces will use 'appliances' as a default.


Note that it is good practice to always have a meaningful application name. Properly tagged events help clients display the structure of your logs.


This option allows mapping of the remote address to the host name used in logFaces tables. Syslog sources often don't follow the standard, and it can be difficult to figure out how to interpret the originating address. The following settings come to the rescue:


standard - the host name will be taken from the message header as specified in syslog standards


ip - will use remote IP address from the client socket


dns - will attempt a reverse DNS lookup of the client IP. Use with caution! Reverse DNS lookups can be extremely expensive in many circumstances.


Enable this option if you want to see exactly what your sources transmit over the wire. The logFaces server will log incoming traffic in its internal log file. This option should help you pick the best pattern and structure the data for indexing.


When something gets recorded into the internal log, simply pick up the raw text and use the pattern debugger to create matching patterns, or use the 'test' link on this page to inject the message directly into the receiver.


Make sure to enable verbose logging to see the traces. For better performance, do not leave verbose logging enabled in production.


This is another destination to which this receiver can forward received logs. Leave blank if not used.


The destination should be in the form protocol:hostip:port, where protocol is either tcp or udp, hostip is the host name or IP of the destination server, and port is the destination server's listening port.
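
For example:
tcp:10.0.0.15:1468
udp:syslog-collector:514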


Use this option to specify which time stamp should be used when a log event is received.


With source time, the server will preserve the original time stamp of the log event. Most logging frameworks use the local time of the host where the event is generated. This is the default option.


With server time, the server will use its local time to stamp received events. This could be the preferred option for highly distributed systems where time is not fully synchronized among hosts.


A drop zone is a monitored location where you can drop raw text files for import. These locations are relative to the /dropzone directory of your server installation.


Files can be in any format provided that they can be parsed by means of regular expressions.


Files will be permanently deleted from this location as soon as the server attempts to process them.


When a file gets partially processed, the server will create a special file containing the lines which failed parsing. It will be located under the /unprocessed directory in this drop zone location. You can then examine the unprocessed entries, adjust the patterns and re-drop the file.


The server looks at the head and tail of each dropped file (its first and last bytes), calculates their CRC checksums and keeps track of every record. This parameter defines the length of the head and tail in bytes to be used for calculating the CRC. It must be a positive, non-zero value.


CRC is used for dealing with duplicated and appended content. Looking at the head/tail CRC, the server decides whether the content is new, partially processed or already processed in the past. For example, if several lines were added to the file since its last import, the server will detect and import only those lines which were added.


The server keeps a small local database where all CRCs are recorded. Every processed file's CRCs get registered in this database. If you want to clean up this database, remove the directory named /dzcache under your server installation. By default the size of this database is 10000 records, as specified in the /conf/environment.properties file. When this size is reached, the database starts rotating by removing older records while inserting new ones.


This is a regular expression pattern to match the text in dropped files and extract log data. It may be a conventional regular expression for matching event attributes, or a combination of pre-built patterns.


We rely heavily on regular expression named groups. Use the patterns library to build and test complex regular expressions.


See the user manual for more information on how to construct patterns and use them throughout logFaces.
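
As a sketch only, a line such as '2024-03-01 14:22:05,123 INFO com.acme.Service - started' might be matched with a pattern like the one below; apart from %{LEVEL}, the pattern names (TIMESTAMP, JAVACLASS, GREEDY) are assumed to exist in your patterns library:
%{TIMESTAMP:loggerTimeStamp} %{LEVEL:loggerLevel} %{JAVACLASS:loggerName} - %{GREEDY:message}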


If you expect to process exception stack traces, which are normally multi-line fragments of pre-formatted text, consider specifying this pattern to extract that structure.


Typical Java-like stack traces can be matched with the pre-built pattern %{JEX}. If your stack traces look different, consider adding a pattern to the pattern library and re-using it.


Here you specify the expected date/time format of the logs processed in this drop zone. Look here for supported formats. Internally, logFaces uses this format to convert the parsed text into numeric epoch time (the number of milliseconds passed since 1970).
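
For example, assuming Java SimpleDateFormat-style tokens, a time stamp such as 2024-03-01 14:22:05,123 would correspond to the format:
yyyy-MM-dd HH:mm:ss,SSS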


If no time format is specified, the server will import all logs with the current server time, incrementing each event by one millisecond.


These are the default names which the parser should use when this information is not available in the parsed log files.


Note that the names of applications, hosts and loggers are very important for extracting information later. Consider supplying some names for your own convenience.


If nothing is specified, the parser will use the word 'default' as a substitute.


A drop zone is a monitored location where you can drop raw text files for import. These locations are relative to the /dropzone directory of your server installation.


Files can be in any format provided that they can be parsed by means of regular expressions. You will have to carefully select an appropriate Pattern for each drop zone individually. If your logs contain exception stack traces, use the X Pattern to match them; this is optional.


Use the patterns library to build complex regular expressions; see the user manual for details.


The Time format is very important for parsing the time stamps in imported files; make sure to select it carefully. If your logs don't have time stamps, leave this field blank - the server will use the time of import and increment each entry by 1 msec.


Application, Host and Logger are default names which will be used to fill in those attributes in case your logs don't have them. Leave blank if not relevant.

Flood detection ensures that only a specified number of events (matching the criteria) may be present within the time window specified here.


When the time window contains more events than specified by the threshold, the detector will discard any further events matching the criteria until there is room in the time window.


The value is in seconds.

Flood detection ensures that only a specified number of events (matching the criteria) may be present within the time window specified here.


When the time window contains more events than specified by the threshold, the detector will discard any further events matching the criteria until there is room in the time window.


The concept of a criteria filter is used throughout logFaces. It describes a set of rules to match log events for various purposes. Criteria are a collection of Boolean rules which you can manipulate to achieve a finely tuned filter. When any of the rules is 'true', the criteria qualifies the event for whatever purpose the criteria is used.


Each rule in turn may contain one or several conditions. When all conditions in a rule are 'true', the whole rule is 'true'.


Conditions are based on event attributes and the operations applied to the values they carry. Below is the list of log event attributes which you can use to construct conditions:
domainName - Name of the application originating the logs, configured in your appenders
hostName - Host name of the event origin
loggerName - Typically a class name in log4j apps or Facility with syslogs
loggerLevel - Severity of the event
message - Message body text
threadName - Thread originating the log event or process ID with structured syslogs
throwableInfo - Stack trace of thrown exceptions
thrown - An indication of whether the event is a thrown exception
loc... - Attributes related to the location info where the log event originated from
ndc - Nested Diagnostic Context
... - Other names you may have mapped as Mapped Diagnostic Context (MDC)

This is a collection of regular expression patterns which you can use to construct fairly complex patterns by combining them to match text structures.


The format of the library is properties-like, with a space delimiter between key and value. The key is the pattern name, which you can reference in other patterns or final expressions. The value is the actual regular expression, which may combine other patterns from this library. Comments are allowed and must begin with #. Each line must contain exactly one pattern.


Use %{NAME:group} notation where 'NAME' is one of the patterns from this library and 'group' is the name of the group to be matched (optional) and passed to parsers.


For example, an expression like %{LEVEL:loggerLevel} can be used to match severity levels and extract the value into the 'loggerLevel' group.


As another example, an expression like %{JEX} can be used to match the exception stack traces usually seen in Java apps.
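
For illustration, library entries could look like the sketch below; the regular expressions shown here are simplified examples rather than the shipped definitions:
# severity levels
LEVEL (TRACE|DEBUG|INFO|WARN|ERROR|FATAL)
# a single word token
WORD \w+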


Feel free to adjust or add your own patterns; keep in mind that these patterns are global to this instance of the server.


Before you can put your regular expressions to work, you may want to test them and see what results they produce.


Here you can input a sample of your text to parse along with the pattern to use for parsing.


The server will respond with a JSON-like collection of log events, structured and ready to be processed.


Look at the results and adjust your patterns to fit the expected input.

Connecting clients will be prompted to log in when authentication is enabled. Note that this is not the same as admin access; the authentication here is dedicated to users getting access to the log data hosted by this server instance. There are two authentication methods available:


Simple authentication is basic password protection for anyone trying to access this server's data. The password is stored on the local disk in obfuscated form.


LDAP authentication allows integration with an existing name directory in your organization. Users supply their user name and password, which are delegated to your name directory for validation by binding with the supplied credentials. Note that the logFaces server does not have any knowledge of user credentials and doesn't store or validate them; they are blindly delegated to your name directory for authentication.


Before enabling, please make sure that you have all relevant information about the structure of your name directory.


The user name and password for connecting clients are stored in the /conf/real.properties file on the local disk. Used for simple authentication.


Enable this option if your LDAP server is configured to use TLS/SSL traffic encryption for client connections.


For the logFaces server to be able to establish SSL connections, it needs to be configured to use a trust store holding the SSL certificate. Go to security settings and follow the instructions to create the trust store.


Note that the logFaces server must be prepared to use the SSL certificate before you can enable this option.

This is the distinguished name for binding to your LDAP server.


logFaces will use this DN in order to gain access to the user base. Usually these credentials are obtained from the LDAP server administrator and must have permissions for walking the user base tree.


logFaces doesn't perform any operations on the directory tree other than delegating incoming credentials for validation.


The corresponding password for the binding.


logFaces will use this password in order to gain access to the user base. This password is stored locally in obfuscated form.


Distinguished name corresponding to the location of users to be authenticated.


This is the location where logFaces will look for the users to be authenticated.


LDAP filter for matching users in the user base DN. This parameter gives a very sophisticated way to match users in the user base. The default value attr={0} will match any user whose user ID is mapped to the attribute named 'attr'. This attribute name varies between LDAP implementations; for example, in Apache DS it is normally uid, while in MS Active Directory it shows up as sAMAccountName. Note the {0} parameter – it must always be present to match the actual user ID supplied by the user.


When you want to do more complex matching of users, you can specify fairly complex LDAP filters in this field – please refer to LDAP documentation for the syntax details.


For example, the filter (&(ou=SALES)(uid={0})) will only match users from the SALES organizational unit. Even when a user is part of the user base (uid={0}), authentication will only be attempted when the user belongs to the SALES unit; otherwise the authentication will fail.


This way, even with a fairly large user base DN, only the users relevant for accessing logFaces are let through.


Location of the user groups sub-tree. Groups are used for authorization; if you don't need authorization, leave this field at its default value.


LDAP filter for matching user membership in groups. This parameter is very similar to the user filter except that it acts on user groups. The default value is attr={0}, which will match group members mapped to the attribute named attr. In most LDAP implementations this attribute is named member.


For example, a filter granting authorities where the group description matches (USA) and the user ID matches the one provided by the actual user during login will look like this: (&(description=USA)(member={0}))


If you don't need this flexibility, just leave the group filter at its default member={0}.


When enabled, the server will count database records in the main storage without restriction. This is done during startups, connection recoveries, client requests and other use cases. Counting SQL database records is benign with relatively small storages.


However, with large databases this option should be disabled because it may severely affect performance. When disabled, the only way to get the actual number of records is from the status admin page.

The retention period is specified in days of logs. If you specify "1 week", for example, then the latest week of data will always be available. As time goes on, older records are automatically removed while new ones are appended.


You should carefully specify this value according to your needs; it affects overall performance as well as disk space usage. When using MongoDB with capped or TTL collections, this field will be automatically set to 'unlimited' to let MongoDB manage the storage size automatically.


Number of days to retain collected log data. As time goes on, older records are automatically removed while new ones are appended.


You should carefully specify this value according to your needs; it affects overall performance as well as disk space usage. When using MongoDB with capped or TTL collections, this field will be automatically set to 'unlimited' to let MongoDB manage the storage size automatically.


With an RDBMS, the logFaces server can automatically create the database schema based on the SQL template defined in its environment. When you need to manage the schema externally, select No. When Auto is specified, the server will validate and attempt to create a working schema automatically.


The default is Auto.


This is the size of the buffer used to insert log statements into the database as a batch. The smaller the buffer, the more frequently commits will be performed. Depending on the data inflow intensity, the buffer should be adjusted so that it commits less frequently. On the other hand, a large commit buffer size could be stressful for the database.


Optimal sizes are usually in the range of 50 - 500. Use a higher number if your system has frequent spikes of log data; this will improve the overall performance of the server.


Half-full commit buffers will be committed by a timer job running every minute.


Specifies how many commits are allowed to fail in a row before the recovery mechanism is triggered.


The recovery mechanism is designed specifically for situations when the database goes down for maintenance or is temporarily unavailable for some other reason.


While recovery is taking place, incoming data is directed to temporary storage on the local disk. This storage will be flushed into the database when recovery succeeds.

During recovery, reconnection attempts are made periodically at the specified frequency. This parameter specifies how frequently to try reconnecting to the database when it goes offline.


The value should be specified in minutes.

Specifies how many recovery attempts should be made before giving up on the database and switching to router mode. In router mode, incoming traffic is delegated to listening clients and may activate triggers; nothing is persisted.


Use this parameter in combination with the recovery rate to determine how much time you wish to allow your database to be offline before giving up on it.


Specify 0 to ensure that the recovery process continues indefinitely until successful.


Enables or disables the recording of host names in the repository.


You may want to disable the recording of host names when you don't use this information in your workflow, or when your system often gets re-deployed to other hosts (e.g. in the cloud) while the rest of the repository stays the same.


Maximum size of the repository collection. When the repository grows larger than specified, the server will issue nagging warning messages to do a cleanup. Leave the value blank to disable this mechanism.


The maximum length of the message body is meant to prevent flooding the storage with unusually sized log events. The server will trim messages longer than this value. Trimming is done only on the 'message' part of log events.


Value must be specified in kilobytes.


The maintenance schedule is an optional cron expression which triggers the database maintenance job.


If the /conf/maintenance.sql file is present, each line of this file will be executed as an SQL statement in a separate transaction. Feel free to specify whatever needs to be done in this file. Typical usage is to manage indexes, remove orphaned data, etc.


Depending on the database and the script, the operation may take time to execute; use with caution. Make sure to schedule this job during off-peak hours.
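
As a sketch only, assuming PostgreSQL and the default lfs_log table, /conf/maintenance.sql could contain lines such as:
REINDEX TABLE lfs_log;
VACUUM ANALYZE lfs_log;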

This is the name of the database logFaces will create in MongoDB during initialization. All necessary collections will be automatically created under this name.


These are the MongoDB connection end points - a collection of host:port pairs.


If you work with more than one MongoDB instance (such as replica sets or shards), you can specify several pairs separated by commas.


The driver will select and use the relevant end point automatically.
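
For example, a single instance could be specified as localhost:27017, while a replica set could be specified as mongo1:27017,mongo2:27017,mongo3:27017 (27017 is the default MongoDB port; the host names are placeholders).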


If your database requires client authentication, select 'Yes' and provide credentials for the logFaces server to authenticate itself as a client.


Note that this parameter must match what your database actually requires. If the database is not secured and you select 'Yes' (or vice versa), the connection will be rejected.


The password is kept locally on the server in obfuscated form.

Enable this option if your database is configured to use TLS/SSL traffic encryption for client connections.


For the logFaces server to be able to establish SSL connections, it needs to be configured to use a trust store holding the SSL certificate. Go to security settings and follow the instructions to create the trust store.


Note that the logFaces server must be prepared to use the SSL certificate before you can enable this option.

The MongoDB write concern controls the write behaviour as well as error handling during commits. This directly affects the overall write throughput of this node.


For more details, refer to the MongoDB documentation.


MongoDB 3.2 introduces the readConcern query option for replica sets and replica set shards. It allows clients to choose a level of isolation for their reads. Use DEFAULT with older versions of MongoDB.


For more details, refer to the MongoDB documentation.


Name of the database where the user is defined.


When authentication is enabled, the logFaces server will authenticate itself with this user name and password against this database. Make sure that the user credentials and proper permissions are stored in this database when enabling authentication. For more information about enabling MongoDB access control, refer to the MongoDB documentation.


Effective only when replica sets are used. Read preference describes how MongoDB clients route read operations to members of a replica set.


primary: All read operations use only the current replica set primary. This is the default. If the primary is unavailable, read operations produce an error or throw an exception.


primaryPreferred: In most situations, operations read from the primary member of the set. However, if the primary is unavailable, as is the case during fail-over situations, operations read from secondary members.


secondary: Operations read only from the secondary members of the set. If no secondaries are available, then this read operation produces an error or exception.


secondaryPreferred: In most situations, operations read from secondary members, but in situations where the set consists of a single primary (and no other members), the read operation will use the set's primary.


nearest: The driver reads from the nearest member of the set according to the member selection process. Reads in the nearest mode do not consider the member's type. Reads in nearest mode may read from both primaries and secondaries. Set this mode to minimize the effect of network latency on read operations without preference for current or stale data.


Disabling sorted results may improve query time in some situations, especially with large databases.


However, this may also result in unordered result sets, particularly when several logFaces nodes share the same database or there are many applications spread across different hosts.


The default is 'Yes'; don't modify it if unsure.


There are several types of collections you can use to store log data in MongoDB. Make sure you pick the correct option.


A regular collection (default) applies no automatic data retention policy. Make sure that you set a proper retention period, or manage retention with your own external scripts, which is the preferable option with very large databases.


A capped collection is very fast for inserts and manages retention by the storage size you specify. logFaces can convert a regular collection to a capped collection, but not the other way around.


TTL collections are convenient for retention by time. You will have to specify how old the data in the collection may be before MongoDB removes it. Note that this collection automatically allocates an additional index for tracking time, so a TTL collection will normally take more database space. logFaces can convert a regular collection to a TTL collection and vice versa.


Partitioned store is our proprietary implementation of data storage where the entire data set is partitioned into a collection of databases, each holding a predefined number of days of data. Use partitions when your storage will be very large or span many days. Dropping an entire database is a much easier way to manage storage size, and it may be more efficient with indexes.


The project ID is obtained from your Google Cloud account when you start the project. The project can be a real one or a sandbox project for evaluations. You will find the project ID in your project settings.


More details


A dataset is contained within a specific project. Datasets are top-level containers that are used to organize and control access to your tables and views. logFaces will create its schema tables in the dataset you specify here.


More details


Geographical location of the data center storing the dataset. After the dataset is created, the location cannot be changed, but you can copy the dataset to a different location, or manually move (recreate) the dataset in a different location. The list of supported locations is frequently updated; use the link below to see the available locations.


More details


Credentials are used by logFaces to securely communicate with your cloud storage. The credentials obtained from your Google service account come in the form of a key file which you request in JSON format. This file is kept in your logFaces /conf directory under the name bq.credentials.json. Make sure to keep it safe.


More details


There are two distinct client APIs this server instance can be set up with for sending your data to BigQuery. Use the one which is more suitable for your environment. More details here.


HTTP REST calls use the original, legacy API.


gRPC streaming is a more efficient approach, better suited for larger data volumes.


Size of the thread pool used for streaming log data from this server instance to the BQ storage in the cloud. Raise this value when the inflow of log data is high to take advantage of parallelism.


This number should be in the range of 1-100.


Reports are scheduled cron jobs, and here you define the cron expression. It specifies when the report is triggered and how it repeats.


Here you will find some examples of commonly used cron expressions.


For example, a daily report triggered at 8 AM every day will look like this: 0 0 8 1/1 * ? *
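
As another example, assuming the same cron syntax, a report triggered every Monday at 6 AM would look like this: 0 0 6 ? * MON *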


When triggered, the report will perform a database query based on the criteria you define. The time coverage parameter specifies the time range to cover, starting from the current trigger time and looking back X hours.


Time coverage is specified in hours. Fractions of an hour are also allowed; for example, to cover the past 15 minutes use 0.25.


Attached content is a raw text log file built as a result of a query or real-time interception of logs. Here you specify the layout format of the log file.


logFaces uses log4j formatting rules for laying out raw text logs.


Feel free to improvise!


When looking at received emails it is often convenient to have the subject stand out. This parameter lets you choose the most expressive subject. It can be plain text or an expression.


Use ${variable} notation where variable could be 'domainName', 'hostName', 'loggerName', 'loggerTimeStamp', 'serverTime', 'message', 'throwableInfo' and any of the mapped MDC names.


When the email is built, the variable will be substituted with the corresponding value taken from the first log statement in the report.
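
For example, a subject defined as ${domainName} on ${hostName}: ${message} would be filled in with the domain, host and message of the first log statement in the report.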


The do-not-disturb period specifies when the trigger should NOT send a notification even when all of its rules are met.


The period is defined by means of a cron expression where you can set ranges on particular time fields. Leave blank to disable the DND period. See the practical examples below:


The cron expression '* 0-59 0-11 ? * *' will be effective between hours 00:00 - 11:59 on any day


The cron expression '* 0-59 14-18 ? * MON-WED' will be effective between hours 14:00 - 18:59 on Monday, Tuesday and Wednesday


Regular expression pattern with named groups for matching variables in log messages captured by the trigger. These variables can then be referenced in the notification subject or message body to make the notification more descriptive with specific context.


Refer to the user manual for the usage of the Regular Expression tools.


For example, a pattern like 'login by %{USER:user} detected' will attempt to extract 'user' from the captured log message. The extracted value can then be used in the trigger subject or message body by referencing it like this: ${user}.


This technique can help automate tasks on the side receiving the triggered notifications.


When looking at received notifications it is often convenient to have the subject stand out. This parameter lets you choose the most expressive subject. It can be plain text or an expression.


Use ${variable} notation where variable could be 'domainName', 'hostName', 'loggerName', 'loggerTimeStamp', 'serverTime', 'message', 'throwableInfo', any of the mapped MDC names or split expression group.


When the notification is built, the variable will be substituted with the corresponding value taken from the trigger context.


If you want to have a customized notification message text for this trigger, this is the place to specify it. Leaving this field blank will result in default messages constructed from the context at hand.


It is possible to use variables taken from the trigger context. Use the ${variable} notation, where variable could be 'domainName', 'hostName', 'loggerName', 'message', 'throwableInfo', any of the mapped MDC names or a split expression group (if specified).


When the email is built, the variable will be substituted with the corresponding value taken from the trigger context.


When a trigger email is generated, it is possible to include an attachment of the log events which actually caused this trigger to fire the notification.


If an attachment is required, an additional parameter defining the text layout of the attached log file will be used to generate the attachment text.


Simple triggers count the number of matching events within a specified time window.


Split triggers are like simple triggers, but they are capable of tracing context and firing separately per captured context.


Silence triggers detect that no events matching the criteria arrived within the specified time window.


The trigger will fire only when at least this many events are trapped by the criteria during the specified time window.


The trigger will fire only when events are captured within this time frame (in minutes). If used with a silence-detecting trigger, it means the opposite - how long the 'silence' stretches for.


For example, when the counter is set to 10 and the time window to 1, the following will occur: the server will count captured events and, when reaching 10, examine whether they took place within 1 minute. If so, the trigger will fire, but not necessarily result in an email - see the frequency limit.


The time window is important for capturing certain patterns of behavior and preventing noise. To ignore the time window, set its value to zero.


Frequency limitation is here to prevent email floods. It means that an email notification will not be sent more often than specified even if the trigger fires.


For example, if you set the frequency limitation to 5 minutes, it is guaranteed that you will not receive notifications more often than every 5 minutes even if the trigger fired more often.


To ignore the frequency limitation, set this value to zero. This ensures that trigger emails are queued for delivery immediately. Queued emails are sent out on a 1-minute basis.



Log event attribute to use for extracting the triggering value. If specified, the trigger will try to split incoming events into groups using a regular expression.


Background:
Split triggers are designed to differentiate notifications based on the actual content and not only on the number of captured events. The content is extracted from the log event by means of regular expression groups. For example, a split trigger can be used to detect which user logged in, provided that the 'user name' can be extracted from one of the log event attributes.



Regular expression for extracting the triggering value. It must contain a valid regular expression with a named group. The name of the group will be used for splitting the events captured by this trigger, and it can be used as a context variable to construct email subjects and bodies.


For example:
The regex %{WORD:serialNumber} will match a single word and assign its value to a variable named 'serialNumber'. The serialNumber in this case is called the triggering value because the trigger will fire only when a certain number of serialNumbers are detected. The same trigger may fire several notifications - one for each different serialNumber. This is why these types of triggers are called split triggers. To use this variable in the email body or subject, simply use '${serialNumber}' - it will be replaced with the actual value when the trigger fires.

When the trigger fires, logFaces will send an email to the specified recipients using trigger-specific options. It is possible to use several recipients, priorities and file attachments.


Make sure that outgoing emails are properly configured in the SMTP section.

logFaces can be integrated with Slack and forward notifications to its channels. The integration involves obtaining the webhook URL from your Slack account. logFaces will post customized messages to this URL.


Read about Slack integration for more details here.

logFaces can POST JSON content to any HTTP/s URL of your choice. The content has the following format:

{subject: 'string', message : 'string'}


Where message and subject are the parameters specified in this trigger.
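
For example, a trigger with subject 'High error rate' and message '25 errors in the last minute' would result in a POST of:
{subject: 'High error rate', message : '25 errors in the last minute'}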

logFaces can invoke any of the existing plugins when the trigger fires. The plugin will be called with all the events captured by the trigger.


When this trigger fires, the logFaces server can optionally raise an Alert - a persistent state which stays in server memory until acknowledged by a user.


Alerts inherit their name and description from the trigger raising them. They also have severity levels and the time stamp of when the trigger raised them.


This information is then used to present Alerts in the client UI so that users can review, acknowledge or query the log events which contributed to the alert being raised.


This is the webhook URL for integrating with the Slack platform.


logFaces will post the specified message body to this URL. Read about Slack integration for more details here.

This is an optional channel name for integrating with the Slack platform. When not specified, the message will go to the default channel.


Read about Slack integration for more details here.

By default (no pattern specified), the payload sent to the Slack channel will be what is specified in the "message" field of this trigger.


When a relay pattern is specified, the server will forward to the Slack channel logs formatted in accordance with log4j patterns.


For example:
[%-5p] %d{dd MMM HH:mm:ss} %-20C{1} - %m%n

logFaces will perform a conventional HTTP(S) POST to this URL.


The payload posted to the server is a JSON object containing subject and message attributes.


The maximum heap size is specified in the /bin/lfs.conf file and defines the maximum amount of RAM the JVM is allowed to claim from the operating system. It corresponds directly to the JVM -Xmx parameter.


Total heap size is the amount of RAM currently claimed from the OS and used by the JVM. This value floats between the minimum (-Xms) and maximum (-Xmx) specified in /bin/lfs.conf.


Free heap size is the amount of RAM still available to the JVM from the total allocation. This value floats between the minimum and the total allocated values.


The logFaces server uses a small, fixed number of threads for its internal purposes. Along with this, additional threads are created on demand for TCP appenders and client sessions.


It is possible to set up a warning threshold in order to get a notification when the number of threads gets too high. Look in /conf/environment.properties for com.moonlit.logfaces.monitoring.highThreadCount.


received - total number of log events received by this server since its start. Note that black-listed events are not counted.


committed - total number of log events committed to the data store since server start.


Measured throughput of the server on the applications (appenders) side. It indicates the number of log events per second coming through the logFaces server from the appenders.


The inflow rate directly affects server performance and resource allocation. The actual value depends mostly on the average size of log events and on network performance.


Normally you want to keep this value below the database throughput to prevent continuous overflow.


Current number of live TCP connections held by the server.


users - number of clients using the server right now


apps - number of applications (appenders) using the server right now.


Measured value that indicates how many log events per second the database commits.


Keep an eye on this metric; it should be higher than the inflow rate most of the time. When database throughput is significantly lower than the inflow rate for a long time, the data will be sent to local disk storage. Normally this is an expensive operation and may result in higher than usual CPU and IO use.


Overload is the percentage ratio of the total number of events that went through the overflow buffer on local disk to the total committed to the database. This ratio is very important for detecting a database bottleneck. When the overload gets too high, it will be emphasized in red. The default threshold is set to 10%, but you can adjust it in the environment properties. You will also see a flag icon indicating that the server is currently handling its internal overflow cache, trying to push it into the database.


Overflow is a mechanism designed to guard against inflow spikes and prevent data loss when logs can't be committed to your database. This mechanism engages when the database is unavailable or slower than the inflow. Your database may be very capable, but when a massive inflow spike takes place, we buffer the impact to prevent major disruption. When this happens, the overflow mechanism delegates incoming data to temporary local storage and then tries flushing it whenever the database permits.


Note that the overflow buffer is limited in the number of log events it can hold. You specify this in the environment configuration file; the default is 500K. When this number is crossed, logFaces will start losing data as unsustainable.


Whenever new disk space is required, the overflow directory is grown in chunks of 32MB or more. Note that the total allocated disk space is not returned to the operating system unless explicitly requested. You can do this manually here; make sure that the actual cache is empty when you release the disk space.


Total number of log events stored in the lfs_log table (RDBMS) or the log collection (MongoDB).


Click on update count to get the most recent value.


Physical size of the database storage on disk. Only available for MongoDB and embedded databases.


If the /conf/maintenance.sql file is present, each line of this file will be executed as an SQL statement in a separate transaction. Feel free to specify whatever needs to be done in this file. Typical usage is to manage indexes and remove orphaned data.


If the maintenance script is not present, the server will attempt to rebuild the existing indexes where applicable.


Depending on the database size, this operation may take time to execute; use with caution. Do not run it on live, busy systems.