DataFlux Observability

DataFlux Collector

What you collect is what you are.

We’ve created a microservice that collects metrics from different interfaces. DataFlux Collector has a plugin-based architecture and is used to monitor the load on virtual machines, collect metrics, and send them to Kafka for threshold-based or machine-learning anomaly detection.

Great collection for outstanding monitoring

DataFlux Collector monitors virtual-machine load: it is the component that collects the data and sends it to Kafka, where the data is processed and threshold-based or ML anomaly detection is performed.

Perfect your collection

Make the difference with five special collector plugins
Plugin-Based Architecture

The core of the program is independent of the applications built on top of it. Where necessary, we write use-case plugins for the context in which the program is used. Plugins can be customized for a specific use case, but common plugins can also be developed and shared across use cases.
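
Below is a minimal Python sketch of such a core: the plugin contracts are abstract base classes, and the collector itself only orchestrates them. The class and method names are illustrative assumptions, not the actual DataFlux interfaces.

```python
from abc import ABC, abstractmethod
import time


class MetadataReader(ABC):
    @abstractmethod
    def read(self) -> dict:
        """Return metadata for the message header."""


class MetricReader(ABC):
    @abstractmethod
    def read(self) -> dict:
        """Return the current metric values."""


class OutputFormatter(ABC):
    @abstractmethod
    def format(self, metadata: dict, metrics: dict,
               start_time: float, stop_time: float) -> str:
        """Combine header metadata, metrics, and timestamps into one message."""


class OutputLogger(ABC):
    @abstractmethod
    def write(self, message: str) -> None:
        """Write the formatted message to a repository."""


class Collector:
    """The use-case-independent core: it only orchestrates the plugins."""

    def __init__(self, metadata_readers, metric_readers, formatter, loggers):
        self.metadata_readers = metadata_readers
        self.metric_readers = metric_readers
        self.formatter = formatter
        self.loggers = loggers

    def collect_once(self) -> None:
        start_time = time.time()  # start of the reading
        metadata = {}
        for reader in self.metadata_readers:
            metadata.update(reader.read())
        metrics = {}
        for reader in self.metric_readers:
            metrics.update(reader.read())
        stop_time = time.time()   # end of the reading
        message = self.formatter.format(metadata, metrics, start_time, stop_time)
        for logger in self.loggers:
            logger.write(message)
```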

Metadata Reading Plugin

The program output is a message with a “header” that contains various metadata. The metadata is loaded by one or more metadata readers, which can be added depending on the use case. For example, sometimes information about the machine on which the program is running (processor, operating system) matters to us; sometimes information about the network (IP, MAC address) matters; and sometimes we use both.
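
The two example readers might be sketched as follows, building on the hypothetical MetadataReader base class shown above; the class names and returned fields are assumptions for illustration.

```python
import platform
import socket
import uuid


class MachineMetadataReader(MetadataReader):
    """Metadata about the machine the program is running on."""

    def read(self) -> dict:
        return {
            "hostname": socket.gethostname(),
            "processor": platform.processor(),
            "os": f"{platform.system()} {platform.release()}",
        }


class NetworkMetadataReader(MetadataReader):
    """Metadata about the network identity of the machine."""

    def read(self) -> dict:
        mac = uuid.getnode()  # 48-bit MAC address as an integer
        return {
            "ip": socket.gethostbyname(socket.gethostname()),
            "mac": ":".join(f"{(mac >> s) & 0xff:02x}" for s in range(40, -1, -8)),
        }
```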

Metric Reading Plugin

The main part of the output is the metric field, generated by one or more metric reading plugins. A metric reader queries the interfaces of the virtual machine on which it runs (CPU, memory, disk, network traffic status), or the interfaces of other machines, for example an application running on a computer where installing the agent is not possible; in that case, the agent on one machine monitors the application or service on the other.
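
The sketch below shows both flavors under the same assumptions as before: a local reader built on the Python standard library (a production agent might use a richer library such as psutil for CPU, memory, and network counters), and a remote reader that polls a hypothetical metrics endpoint on another machine.

```python
import json
import os
import shutil
import urllib.request


class LocalMetricReader(MetricReader):
    """Queries the interfaces of the machine the agent runs on."""

    def read(self) -> dict:
        load1, load5, load15 = os.getloadavg()  # Unix only
        disk = shutil.disk_usage("/")
        return {
            "load_1m": load1,
            "load_5m": load5,
            "load_15m": load15,
            "disk_used_pct": 100.0 * disk.used / disk.total,
        }


class RemoteMetricReader(MetricReader):
    """Monitors a service on a machine where installing an agent is not
    possible; the metrics URL polled here is hypothetical."""

    def __init__(self, url: str):
        self.url = url

    def read(self) -> dict:
        with urllib.request.urlopen(self.url, timeout=5) as response:
            return json.load(response)  # expects a JSON object of metrics
```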

Output Formatting Plugin

Depending on the use case and the system into which the agent is integrated, the message must conform to a particular format. It is therefore possible to write a plugin that formats the message produced by the other plugins, combining the metadata, the metric source, the metrics themselves, and the timestamps: the start and end of the reading (start time, stop time) and the moment the output is sent (trans time).
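
Continuing the sketch, a JSON formatter might look like this; the exact field names (start_time, stop_time, trans_time) are assumptions derived from the description above, not a documented schema.

```python
import json
import time


class JsonOutputFormatter(OutputFormatter):
    """Combines metadata, metrics, and timestamps into one JSON message."""

    def format(self, metadata: dict, metrics: dict,
               start_time: float, stop_time: float) -> str:
        return json.dumps({
            "header": metadata,          # metadata readers' output
            "metrics": metrics,          # metric readers' output
            "start_time": start_time,    # reading started
            "stop_time": stop_time,      # reading finished
            "trans_time": time.time(),   # output is being sent
        })
```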

Output Logging Plugin

The program output can be written to different repositories. These can be databases (SQL, NoSQL), rolling logs, CSV files, data streams (Kafka!), etc. The output logging plugin receives the formatted output and writes it to a specific repository. Again, depending on the use case, custom (or shared) plugins can be developed for each type of repository.
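
To round off the sketch, here are two illustrative loggers: a rolling-log writer using only the Python standard library, and a Kafka writer that assumes the third-party kafka-python client (any client with a similar send API would do). The broker address and topic name in the wiring example are placeholders.

```python
import logging
import logging.handlers

from kafka import KafkaProducer  # third-party: pip install kafka-python


class RollingLogOutputLogger(OutputLogger):
    """Writes each formatted message to a size-capped rolling log file."""

    def __init__(self, path: str):
        handler = logging.handlers.RotatingFileHandler(
            path, maxBytes=10_000_000, backupCount=5)
        self._log = logging.getLogger("dataflux.output")
        self._log.addHandler(handler)
        self._log.setLevel(logging.INFO)

    def write(self, message: str) -> None:
        self._log.info(message)


class KafkaOutputLogger(OutputLogger):
    """Publishes each formatted message to a Kafka topic."""

    def __init__(self, servers: str, topic: str):
        self.topic = topic
        self.producer = KafkaProducer(
            bootstrap_servers=servers,
            value_serializer=lambda v: v.encode("utf-8"))

    def write(self, message: str) -> None:
        self.producer.send(self.topic, message)


# Wiring the sketched plugins together for one collection cycle:
collector = Collector(
    metadata_readers=[MachineMetadataReader(), NetworkMetadataReader()],
    metric_readers=[LocalMetricReader()],
    formatter=JsonOutputFormatter(),
    loggers=[KafkaOutputLogger("localhost:9092", "metrics")],
)
collector.collect_once()
```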