5. Max cardinality: Resource consumption comparison
We would also like to know the max cardinality TickTockDB and InfluxDB can handle. So we gradually increased the number of clients (and correspondingly the number of devices: 5k, 10k, 12k, 100k, 140k) to see when TickTockDB/InfluxDB would saturate one of the OS resources, or when the whole test would take longer than 6 hours to finish (meaning that, on average, operations could not finish within their 10-second interval).
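For reference, cardinality here is simply devices multiplied by sensors per device (10 sensors per device in this setup). A minimal sketch of that arithmetic and the 6-hour cutoff, for the device counts above (illustration only, not the actual test driver):

# Sketch only: cardinality = devices x sensors/device; planned duration = 6 hours.
for devices in 5000 10000 12000 100000 140000; do
    echo "devices=$devices -> cardinality=$((devices * 10))"
done
echo "planned duration: $((6 * 3600)) seconds"   # 21600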
The following figures show the OS resource consumption during the tests. Both TickTockDB and InfluxDB consume more and more resources as cardinality grows, almost proportionally. Please refer to the figures below for details.
5.1 CPU
InfluxDB saturated CPU with 12k devices (i.e., 120K cardinality): CPU idle was almost 0%.
TickTockDB consumes much less CPU than InfluxDB. With 100k devices (i.e., 1M cardinality), CPU idle was 40%-50%. With 140k devices (i.e., 1.4M cardinality), CPU idle was 10%-20%, so there was still a little headroom left in CPU.
5.2 IO Util
As we just saw, InfluxDB saturated CPU with 12k devices (i.e., 120K cardinality). At that load, IO util was only about 50%, even lower than the IO util (80%) with 10k devices. This means that IO was not the bottleneck: writes slowed down because CPU was already saturated, and InfluxDB couldn't handle 12k devices (i.e., 120K cardinality). In fact, the whole test took 22458.01 seconds, much longer than the planned 21600 seconds. So we concluded that the max cardinality InfluxDB can handle is 100K under this experimental setup (10 sensors per device, 10-second sleep, 10% reads vs 90% writes, etc.).
TickTockDB consumes much less IO than InfluxDB at the same load. With 10k devices, IO util was almost negligible. With 100k devices (i.e., 1M cardinality), IO util was about 10%. With 140k devices (i.e., 1.4M cardinality), IO util was mostly below 30%.
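For readers who want to reproduce the CPU and IO numbers in these figures, standard Linux tools expose the same metrics. A minimal sketch, assuming a 10-second sampling interval and mmcblk0 as the SD-card device name (both are assumptions, not what the tests necessarily used):

# CPU idle: the "id" column of vmstat, sampled every 10 seconds.
vmstat 10
# IO util and write throughput: the %util and wkB/s columns of iostat (sysstat package).
iostat -x -d 10 mmcblk0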
5.3 Write bytes rate
Write bytes rate patterns are very similar to IO util. For InfluxDB, the write bytes rate with 12k devices was even lower than with 10k devices. This is because CPU was already saturated, so it couldn't handle 12k devices.
For TickTockDB, write bytes rates went up proportionally with device number. The max was less than 2.3MB/second, which happened with 140k devices (1.4M cardinality). Note that we used a V30 SanDisk SD card. We tested its write and read bytes rates using dd; they were 22.8MB/s and 22.6MB/s, respectively. So the write bytes rate was still far from saturation.
ylin30@orangepizero2:~$ dd if=/dev/zero of=./test bs=512k count=4096
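Only the write test is shown above; the read test presumably ran dd in the other direction, something like the following (the cache drop is our assumption, to make sure the read is served from the card rather than from the page cache):

ylin30@orangepizero2:~$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
ylin30@orangepizero2:~$ dd if=./test of=/dev/null bs=512k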
5.4 Read bytes rate
We can see in the figure above that read bytes rates went up as cardinality increased. But the read bytes rate was relatively small (less than 1MB/s).
5.5 Memory
RSS memory of both InfluxDB and TickTockDB went up proportionally with cardinality. InfluxDB used 550MB at its max cardinality (120K = 12k devices * 10 sensors/device), leaving about 450MB of memory available. TickTockDB used 750MB at its max cardinality (1.4M = 140k devices * 10 sensors/device), leaving about 250MB available (the Orange Pi Zero 2 has 1GB of RAM in total).
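RSS here is the resident set size of the server process. One simple way to sample it over time, assuming the InfluxDB daemon name influxd (swap in the TickTockDB process name as needed; the 10-second interval is also an assumption):

# Print the server's RSS in MB every 10 seconds.
while true; do
    ps -o rss= -C influxd | awk '{printf "influxd RSS: %.0f MB\n", $1/1024}'
    sleep 10
done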
5.6 Summary
InfluxDB saturated with 12k devices (i.e., 120K cardinality). CPU was the bottleneck, completely saturated, while IO util was only about 50% (even lower than the 80% at 10k devices). The whole test took 22458.01 seconds, much longer than the planned 21600 seconds. So we concluded that the max cardinality InfluxDB can handle is 100K under this experimental setup (10 sensors per device, 10-second sleep, 10% reads vs 90% writes, etc.).
TickTockDB was close to saturation with 140k devices (i.e., 1.4M cardinality). CPU was again the bottleneck, while other resources (memory, IO, read/write rates) were still available. We consider TickTockDB's max cardinality to be 1.4M under the same setup.
(To be continued)