[Live Demo Site](http://netdata.firehol.org)
### Realtime data collection and charts!
**Netdata** is a daemon that collects data in realtime (up to once per second) and presents a web site to view and analyze it.
The presentation is full of charts that precisely render all system values, in realtime.
It has been designed to be installed on every system, without disrupting its operation:
1. It will just use some spare CPU cycles.
You can even control its CPU consumption by lowering its data collection frequency.
It also runs at the lowest possible priority.
2. It will use only the memory you want it to have.
You can control the memory it will use by sizing its in-memory round robin database.
3. It does not use disk I/O.
Its entire database is kept in memory; it is only saved to disk and loaded back when netdata restarts. You can also disable the access log of its embedded web server.
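The bounded-memory design above can be sketched as a fixed-size ring buffer (an illustrative Python sketch, not netdata's actual C data structures):

```python
from collections import deque

class RoundRobinDB:
    """Illustrative fixed-size ring buffer, similar in spirit to
    netdata's in-memory round robin database (not its actual layout)."""
    def __init__(self, entries):
        # deque with maxlen drops the oldest entry automatically,
        # so memory stays bounded no matter how long we collect
        self.values = deque(maxlen=entries)

    def collect(self, value):
        self.values.append(value)

db = RoundRobinDB(entries=3600)      # e.g. one hour of per-second data
for second in range(10000):          # collect far more than the buffer holds
    db.collect(second)
print(len(db.values))                # → 3600 (memory stays bounded)
print(db.values[0], db.values[-1])   # → 6400 9999 (oldest and newest kept)
```

Sizing `entries` is exactly what "sizing its round robin in memory database" means: memory usage is fixed by the number of entries, not by uptime.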
You can use it to monitor all your applications, servers, Linux PCs or Linux embedded devices.
Out of the box, it comes with plugins for data collection about system information and popular applications.
- **highly optimized C code**
It only needs a few milliseconds per second to collect all the data.
It runs nicely even on a Raspberry Pi with just one CPU core, or any other embedded system.
- **extremely lightweight**
It only needs a few megabytes of memory to store all its round robin database.
Although `netdata` does all its calculations using `long double` (128-bit) arithmetic, it stores all values using a **custom-made 32-bit number**. This custom-made number stores values from -167772150000000.0 to 167772150000000.0 in 29 bits, with a precision of 0.00001 (yes, it is a floating point number, so higher integer values have less decimal precision), and uses 3 bits for flags (2 are currently used and 1 is reserved for future use). This provides an extremely optimized memory footprint with just 0.0001% max accuracy loss (run `./netdata --unittest` to see it in action).
It also supports KSM memory deduplication to lower its memory requirements even further.
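The idea behind this custom number can be sketched as a mantissa plus a decimal exponent. The sketch below is illustrative Python, not netdata's actual bit layout; the 24-bit mantissa width is an assumption chosen to match the stated range:

```python
def pack(value, mantissa_bits=24):
    """Sketch: store a value as (sign, mantissa, decimal exponent).
    netdata's real 32-bit storage_number differs in detail."""
    limit = (1 << mantissa_bits) - 1
    exp = -5                              # start at 0.00001 precision
    scaled = round(abs(value) * 10 ** -exp)
    while scaled > limit:                 # drop decimal digits until it fits
        exp += 1
        scaled = round(abs(value) * 10 ** -exp)
    sign = -1 if value < 0 else 1
    return sign, scaled, exp

def unpack(sign, mantissa, exp):
    return sign * mantissa * 10 ** exp

v = 123456.789012
sign, mantissa, exp = pack(v)
restored = unpack(sign, mantissa, exp)
print(abs(restored - v) / v < 1e-6)       # relative error well under 0.0001%
```

Larger values keep fewer decimal digits, which is exactly the "higher integer values have less decimal precision" behavior described above.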
- **per second data collection**
Every chart, every value, is updated every second. Of course, you can control the collection period per module.
**netdata** can perform several calculations on each value (dimension) collected:
- **absolute**, stores the collected value, as collected (this is used, for example, for the number of processes running, the number of connections open, the amount of RAM used, etc.)
- **incremental**, stores the difference between the collected value and the previously collected value (this is used, for example, for the bandwidth of interfaces, disk I/O, i.e. for counters that always get incremented). **netdata** automatically interpolates these values to the second boundary, using nanosecond calculations, so that small delays at the data collection layer will not affect the quality of the result. It also detects arithmetic overflows and presents them properly in the charts.
- **percentage of absolute row**, stores the percentage of the collected value, over the sum of all dimensions of the chart.
- **percentage of incremental row**, stores the percentage of this collected value, over the sum of the **incremental** differences of all dimensions of the chart (this is used, for example, for system CPU utilization).
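The four algorithms can be sketched in Python. This is illustrative only; the 32-bit counter size and the wrap handling are assumptions for the example, not netdata's exact implementation:

```python
def absolute(collected):
    return collected                       # stored exactly as collected

def incremental(prev, curr, max_counter=2**32):
    """Difference since the last collection, detecting counter overflow
    (assumed 32-bit wrap-around for this sketch)."""
    if curr >= prev:
        return curr - prev
    return curr + max_counter - prev       # counter wrapped around

def percentage_of_absolute_row(values):
    total = sum(values)
    return [100.0 * v / total for v in values]

def percentage_of_incremental_row(prev, curr):
    diffs = [incremental(p, c) for p, c in zip(prev, curr)]
    total = sum(diffs)
    return [100.0 * d / total for d in diffs]

print(incremental(4294967290, 6))                             # → 12 (wrapped)
print(percentage_of_incremental_row([100, 300], [150, 450]))  # → [25.0, 75.0]
```

The incremental example shows why overflow detection matters: a raw subtraction across a counter wrap would produce a huge negative spike instead of the true delta.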
- **visualizes QoS classes automatically**
If you also use FireQOS for QoS, it collects class names automatically.
- **appealing web site**
The web site uses Bootstrap and Google Charts for a very appealing result.
It works even on mobile devices and adapts to screen size changes and rotation (responsive design).
- **web charts respect your browser resources**
The charts adapt to show only as many points as are required for a clear view.
Also, the JavaScript code respects your browser resources (it stops refreshing when the window loses focus, when scrolling, etc.).
- **highly configurable**
All charts and all features can be enabled or disabled.
The program generates its configuration file based on the resources available on the system it runs on, for you to edit.
- It reads and renders charts for all these:
  - `/proc/net/dev` (all network interfaces for all their values)
  - `/proc/diskstats` (all disks for all their values)
  - `/proc/net/snmp` (total IPv4, TCP and UDP usage)
  - `/proc/net/netstat` (more IPv4 usage)
  - `/proc/net/stat/nf_conntrack` (connection tracking performance)
  - `/proc/net/ip_vs/stats` (IPVS connection statistics)
  - `/proc/stat` (CPU utilization)
  - `/proc/meminfo` (memory information)
  - `/proc/vmstat` (system performance)
  - `/proc/net/rpc/nfsd` (NFS server statistics for both v3 and v4 NFS)
  - `tc` classes (QoS classes - [with FireQOS class names](http://firehol.org/tutorial/fireqos-new-user/))
- It supports **plugins** for collecting information from other sources!
Plugins can be written in any computer language (pipe / stdout communication for data collection).
It ships with 2 plugins: `apps.plugin` and `charts.d.plugin`:
  - `apps.plugin` is a plugin that attempts to collect statistics per process. It groups the entire process tree based on your settings (for example, mplayer, kodi, vlc are all considered `media`) and for each group it attempts to find CPU usage, memory usage, physical and logical disk reads and writes, number of processes, number of threads, number of open files, number of open sockets, number of open pipes, minor and major page faults (major = swapping), etc. 15 stackable (per group) charts in total.
  - `charts.d.plugin` provides a simple way to script data collection in BASH. It includes example plugins that collect values from:
    - `nut` (UPS load, frequency, voltage, etc., for multiple UPSes)
    - `sensors` (temperature, voltage, current, power, humidity, fan rotation sensors)
    - `cpufreq` (current CPU clock frequency, for all CPUs)
    - `postfix` (e-mail queue size)
    - `squid` (web proxy statistics)
    - `mysql` (MySQL global statistics)
    - `opensips` (OpenSIPS statistics)
  Of course, you can write your own using BASH scripting.
- netdata is a web server, supporting gzip compression
It serves its own static files and dynamic files for rendering the site.
It does not support authentication or SSL - limit its access using your firewall.
It does not allow `..` or `/` in the files requested (so it can only serve files stored in the web directory `/usr/share/netdata/web`).
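The filename restriction described above can be sketched as follows (illustrative Python; netdata implements this check in C):

```python
import os

WEB_DIR = "/usr/share/netdata/web"   # web directory from the text above

def is_request_allowed(name):
    """Reject any requested filename containing '..' or '/', so only
    files directly inside the web directory can ever be served."""
    return ".." not in name and "/" not in name

def resolve(name):
    if not is_request_allowed(name):
        return None                  # request refused
    return os.path.join(WEB_DIR, name)

print(resolve("index.html"))         # → /usr/share/netdata/web/index.html
print(resolve("../etc/passwd"))      # → None (path traversal rejected)
```

Because both `/` and `..` are refused outright, a request can never name a subdirectory or escape the web root.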
## How it works

1. You run a daemon on your Linux system: netdata.
This daemon is written in C and is extremely lightweight.
   - Spawns threads to collect all the data for all sources
   - Keeps track of the collected values in memory (no disk I/O at all)
   - Generates JSON and JSONP HTTP responses containing all the data needed for the web graphs
   - Is a standalone web server.
   For example, you can access JSON data by using:

       http://127.0.0.1:19999/data/net.eth0

   This will give you the JSON data for traffic on eth0.
   The above is equivalent to:

       http://127.0.0.1:19999/data/net.eth0/3600/1/average/0/0

   In this URL:

   - 3600 is the number of entries to generate.
   - 1 is the grouping count: 1 = every single entry, 2 = half the entries, 3 = one every 3 entries, etc.
   - `average` is the grouping method. It can also be `max`.
   - 0/0 are the `before` and `after` timestamps, allowing panning on the data.
2. If you need to embed a **netdata** chart on your web page, you can add a few JavaScript lines and a `div` for every graph you need. Check [this example](http://195.97.5.206:19999/datasource.html) (open it in a new tab and view its source to get the idea).
3. Graphs are generated using the Google Charts API (so your client needs to have internet access).
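The grouping count and grouping method parameters of the data URL above can be illustrated with a short sketch (illustrative Python, not netdata's implementation):

```python
def group(values, count, method="average"):
    """Reduce every `count` consecutive entries to one point, the way
    the data URL's grouping count and grouping method parameters do."""
    agg = max if method == "max" else (lambda chunk: sum(chunk) / len(chunk))
    return [agg(values[i:i + count]) for i in range(0, len(values), count)]

samples = [1, 3, 5, 7, 9, 11]        # pretend per-second collected values
print(group(samples, 2))             # → [2.0, 6.0, 10.0] (average of pairs)
print(group(samples, 3, "max"))      # → [5, 11] (max of each triple)
```

So `/3600/1/average/...` returns all 3600 entries untouched, while `/3600/2/average/...` would return 1800 points, each the average of two consecutive entries.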
## Automatic installation
Before you start, make sure you have `zlib` development files installed.
To install them on Ubuntu, you need to run:

    apt-get install zlib1g-dev
You also need to have a basic build environment in place. You will need packages like
`gcc`, `autoconf`, `autogen`, `automake`, `pkg-config`, etc.
Then do this to install and run netdata:

    git clone https://github.com/ktsaou/netdata.git netdata.git
    cd netdata.git
    ./netdata-installer.sh
The script `netdata-installer.sh` will build netdata and install it to your system.
Once the installer completes, the file `/etc/netdata/netdata.conf` will be created.
You can edit this file to set options. To apply the changes you made, you have to restart netdata.
- You can start netdata by executing it with `/usr/sbin/netdata` (the installer will also start it).
- You can stop netdata by killing it with `killall netdata`.
You can stop and start netdata at any point. On exit, netdata saves its round robin
database to `/var/cache/netdata`, so that it will continue from where it stopped the last time.
To access the web site for all graphs, go to:

    http://127.0.0.1:19999/
You can get the running config file at any time, by accessing `http://127.0.0.1:19999/netdata.conf`.
To start it at boot, just run `/usr/sbin/netdata` from your `/etc/rc.local` or equivalent.