
7 Best Time Series Databases for IoT & Monitoring (2026)

7 tools compared
Top Picks
<p>Time series data is growing faster than any other data type. Every IoT sensor, every infrastructure metric, every application trace generates a stream of timestamped data points that traditional databases were never designed to handle. A typical IoT deployment with 10,000 sensors reporting every second produces <strong>864 million data points per day</strong>. An infrastructure monitoring stack watching 500 microservices generates similar volumes. At that scale, your choice of time series database isn't a technical preference — it's the foundation that determines whether your dashboards load in milliseconds or minutes.</p><p>The time series database market has matured significantly, valued at roughly $2.5 billion in 2024 and projected to reach $5.1 billion by 2033. But that maturity has also created confusion. There are now dozens of options, each claiming to be the fastest, most scalable, or most cost-effective. The reality is that <strong>no single TSDB is best for every workload</strong> — and the one that benchmarks fastest for ingestion might be the worst choice for your specific query patterns, team skills, or operational constraints.</p><p>Here's what actually separates good TSDB choices from bad ones in 2026:</p><ul><li><strong>Write throughput vs. query patterns</strong> — Some databases optimize for ingestion speed (IoT firehoses), others for complex analytical queries (business dashboards). Few do both well.</li><li><strong>Cardinality tolerance</strong> — The cross-product of your tags (device × region × metric × firmware version) can produce millions of unique series. Some databases handle this gracefully; others hit a "cardinality wall" where performance collapses.</li><li><strong>Compression and retention</strong> — The difference between 10:1 and 70:1 compression ratios means paying 7x more for storage. 
Automated downsampling and tiered retention policies are essential for cost control.</li><li><strong>Query language ecosystem</strong> — SQL, PromQL, Flux, or custom DSL? Your team's existing skills and tooling ecosystem (Grafana, alerting rules, BI tools) should drive this choice more than benchmark numbers.</li><li><strong>Operational overhead</strong> — A database that's 20% faster but requires a dedicated DevOps engineer to run isn't a win for most teams.</li></ul><p>This guide evaluates seven <a href="/categories/data-warehousing">time series databases</a> specifically for IoT telemetry and infrastructure monitoring — the two most common TSDB use cases. Each tool is assessed on write performance, query flexibility, compression, scalability, and ecosystem fit. Whether you're building an edge-to-cloud IoT pipeline or replacing a Prometheus setup that's running out of steam, one of these will fit your architecture.</p>
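The scale and cardinality math above is easy to verify with a back-of-the-envelope calculation. Here is a minimal sketch (plain Python, no database required; the numbers are the illustrative figures from the intro) showing how sensor count and reporting interval drive daily volume, and how the tag cross-product drives series cardinality:

```python
# Back-of-the-envelope sizing for a TSDB deployment.

def daily_points(sensors: int, interval_s: float) -> int:
    """Data points per day for N sensors each reporting every interval_s seconds."""
    return int(sensors * 86_400 / interval_s)

def series_cardinality(*tag_value_counts: int) -> int:
    """Unique series count = cross-product of the distinct values of each tag."""
    total = 1
    for n in tag_value_counts:
        total *= n
    return total

# 10,000 sensors reporting once per second -> 864 million points/day
print(daily_points(10_000, 1.0))              # 864000000

# devices x metrics x regions x firmware versions
print(series_cardinality(10_000, 50, 10, 4))  # 20000000 unique series
```

Twenty million unique series from four modest-looking tags is exactly the "cardinality wall" territory discussed below, which is why planning the tag cross-product upfront matters more than any single benchmark number.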

Full Comparison

PostgreSQL++ for time-series data, analytics, and AI workloads

💰 Usage-based cloud pricing starting around $10/month. Free 30-day trial. Open-source self-hosted option available at no cost.

<p><a href="/tools/timescaledb">TimescaleDB</a> takes the top spot for a reason most benchmarks won't tell you: <strong>it's PostgreSQL</strong>. Every tool your team already uses — pgAdmin, psql, Grafana's PostgreSQL plugin, your ORM, your backup scripts, your monitoring queries — works unchanged. For IoT and monitoring teams that also need to JOIN time series data with relational business data (device inventory, customer records, SLA definitions), this isn't just convenient — it's a genuine architectural advantage that no other TSDB on this list offers.</p><p>The performance story is compelling too. TimescaleDB's hypertable architecture automatically partitions data by time and provides up to 289x faster queries compared to vanilla PostgreSQL on time-series workloads. Continuous aggregates pre-compute rollups in the background, so your real-time dashboards query materialized results instead of scanning raw data. The advanced compression engine achieves 94-97% compression ratios, and tiered storage automatically moves older data to cheap object storage while keeping recent data on fast SSDs.</p><p>For IoT specifically, TimescaleDB shines when you need <strong>geospatial queries alongside time series</strong>. PostGIS integration means you can query "show me all temperature readings from sensors within 5km of this GPS coordinate in the last hour" — a single query that would require multiple databases elsewhere. The trade-off is that TimescaleDB isn't the fastest for pure write throughput — TDengine and QuestDB both benchmark higher. But for teams that value ecosystem compatibility, query flexibility, and the ability to handle mixed workloads, TimescaleDB delivers the best overall package.</p>
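The hypertable idea is easier to grasp with a toy model: rows are routed into time-aligned chunks, and a time-range query only scans the chunks that overlap the range (chunk pruning). This is an illustrative Python sketch of the concept, not TimescaleDB's API or internals, and the one-day chunk size is an arbitrary assumption:

```python
from collections import defaultdict
from datetime import datetime

# Toy sketch of hypertable-style chunking: rows land in time-aligned
# chunks, and range queries visit only the chunks that overlap the range.

def chunk_key(ts: datetime) -> datetime:
    # Align a timestamp down to its chunk boundary (one chunk per day here)
    return datetime(ts.year, ts.month, ts.day)

chunks: dict[datetime, list[tuple[datetime, float]]] = defaultdict(list)

def insert(ts: datetime, value: float) -> None:
    chunks[chunk_key(ts)].append((ts, value))

def query(start: datetime, end: datetime) -> list[float]:
    # Chunk pruning: skip whole chunks outside [start, end) before
    # filtering individual rows.
    return [v for key, rows in chunks.items()
            if key >= chunk_key(start) and key < end
            for ts, v in rows if start <= ts < end]

insert(datetime(2026, 1, 1, 12), 20.5)
insert(datetime(2026, 1, 2, 8), 21.0)
insert(datetime(2026, 1, 9, 8), 19.0)
print(len(chunks))                                        # 3 chunks
print(query(datetime(2026, 1, 1), datetime(2026, 1, 3)))  # [20.5, 21.0]
```

The real engine adds indexes, compression, and parallelism per chunk, but the pruning step above is the core reason time-partitioned queries avoid full scans.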
Hypertables & Automatic Partitioning · Continuous Aggregates · Advanced Compression · Tiered Storage · Full SQL Compatibility · Horizontal Scaling · Real-Time Analytics · PostgreSQL Ecosystem Integration

Pros

  • Full PostgreSQL compatibility means zero learning curve and instant integration with existing tools, ORMs, and workflows
  • Continuous aggregates deliver pre-computed real-time dashboards without re-scanning millions of raw data points
  • 94-97% compression with automatic tiered storage dramatically reduces IoT data storage costs
  • PostGIS integration enables geospatial + time-series queries in a single database for fleet and asset tracking
  • Open-source community edition is free with no feature gates on core time-series functionality

Cons

  • Write throughput doesn't match purpose-built TSDBs like TDengine or QuestDB for high-frequency sensor firehoses
  • Promscale (PromQL bridge) was deprecated — monitoring teams using PromQL need a separate solution
  • Cloud pricing is usage-based and can be hard to predict for variable IoT workloads

Our Verdict: Best overall choice for teams that value PostgreSQL compatibility, mixed workload flexibility, and the ability to JOIN time series data with relational business data in a single database.

Purpose-built time series database for metrics, events, and real-time analytics

💰 Free tier available, usage-based cloud plans, dedicated plans for enterprise

<p><a href="/tools/influxdb">InfluxDB</a> is the most widely deployed purpose-built time series database, and its ecosystem is the primary reason. <strong>Telegraf</strong>, InfluxData's open-source collection agent, has over 300 input plugins that collect metrics from virtually anything — cloud services, databases, IoT protocols (MQTT, SNMP, Modbus), system metrics, and application frameworks. For teams that need to start collecting data quickly from diverse sources, no other TSDB matches this breadth of out-of-the-box data collection.</p><p>InfluxDB 3.0 represents a significant architectural shift: the engine was rewritten in Rust with Apache Parquet columnar storage and native SQL support (alongside the legacy InfluxQL). This addresses two historical criticisms — the proprietary Flux query language (now being deprecated) and storage efficiency. The new engine targets faster queries and better compression, though it's still maturing and lacks some features from 2.x like continuous aggregates.</p><p>For IoT deployments, InfluxDB's <strong>edge data replication</strong> is a standout feature — collect and process data at the edge with automatic replication to a central cloud instance when connectivity is available. The free tier allows experimentation without commitment, and the usage-based cloud pricing scales naturally with data volume. The limitation to be aware of is <strong>cardinality</strong>: InfluxDB's inverted index approach struggles with millions of unique series, which can happen quickly in large IoT deployments where each device generates many distinct metrics.</p>
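Both Telegraf and direct writes to InfluxDB use the line protocol format (`measurement,tags fields timestamp`). The following is a hypothetical helper sketching that wire format for numeric fields only, not the official client library, which handles type suffixes, escaping, and batching:

```python
def to_line_protocol(measurement: str, tags: dict, fields: dict, ts_ns: int) -> str:
    """Format one point as InfluxDB line protocol:
    measurement,tag=val,... field=val,... timestamp
    Simplified sketch: numeric fields only, no escaping of special chars."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = to_line_protocol(
    "temperature",
    {"device": "sensor-42", "region": "eu"},
    {"value": 21.5},
    1760000000000000000,
)
print(line)
# temperature,device=sensor-42,region=eu value=21.5 1760000000000000000
```

Notice that every tag key/value pair becomes part of the series identity: each distinct combination is a new series, which is where the cardinality concern in the paragraph above comes from.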
Time Series Data Engine · Flux Query Language · Telegraf Data Collection · Grafana Integration · Cloud Serverless · Edge Data Replication · Continuous Queries · REST API & Client Libraries

Pros

  • Telegraf's 300+ input plugins provide the broadest device and service data collection ecosystem of any TSDB
  • Edge data replication enables IoT edge-to-cloud pipelines with automatic sync during connectivity windows
  • Free tier and usage-based pricing make it easy to start small and scale without upfront commitment
  • Largest TSDB community means extensive documentation, tutorials, and third-party integrations
  • InfluxDB 3.0's native SQL support removes the Flux learning curve for new deployments

Cons

  • Cardinality limitations degrade performance with millions of unique series — a real concern for large IoT fleets
  • Horizontal scaling locked to paid tiers — self-hosted OSS version is single-node only
  • InfluxDB 3.0 still lacks continuous aggregates and materialized views available in competitors

Our Verdict: Best for teams that need the broadest data collection ecosystem and fastest time-to-value. Telegraf's 300+ plugins and edge replication make it the most versatile IoT data platform.

High-performance open-source time-series database for demanding workloads

💰 Open Source (free), Enterprise (custom pricing)

<p><a href="/tools/questdb">QuestDB</a> is the performance-first time series database, built from scratch in Java, C++, and Rust for one purpose: <strong>maximum ingestion speed with minimum query latency</strong>. Independent benchmarks consistently show QuestDB ingesting 12-36x faster than InfluxDB 3 Core and 6-13x faster than TimescaleDB, while delivering 43-418x faster analytical queries. For IoT workloads where sensors generate millions of data points per second and dashboards need sub-second response times, QuestDB's raw speed is its defining advantage.</p><p>Despite its performance focus, QuestDB uses <strong>standard SQL with time-series extensions</strong> — no proprietary query language to learn. ASOF JOINs (correlating data across different time intervals) and SAMPLE BY (time-bucket aggregation) handle the queries that are awkward or impossible in standard SQL. Nanosecond-precision timestamps support financial and scientific workloads. The PostgreSQL wire protocol means existing PostgreSQL drivers and tools connect without modification.</p><p>QuestDB also supports the <strong>InfluxDB Line Protocol</strong> for ingestion, so you can switch from InfluxDB to QuestDB without changing your data collection pipeline. The built-in web console provides a browser-based SQL editor with visualization for quick exploration. The trade-off is ecosystem maturity — QuestDB has a smaller community and fewer integrations than InfluxDB or TimescaleDB, and its monitoring-specific tooling (alerting, PromQL) is limited compared to purpose-built monitoring databases.</p>
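ASOF JOIN semantics can be confusing the first time you meet them, so here is the logic in miniature: for each row on the left, pick the most recent right-side row at or before its timestamp. QuestDB does this natively in SQL; this Python sketch only illustrates the matching rule, with made-up trade/quote data:

```python
import bisect

def asof_join(left, right):
    """For each (ts, value) in left, attach the latest right value
    whose timestamp is <= ts. Both inputs must be sorted by ts."""
    right_ts = [ts for ts, _ in right]
    out = []
    for ts, lval in left:
        i = bisect.bisect_right(right_ts, ts) - 1   # last right row at or before ts
        rval = right[i][1] if i >= 0 else None       # None if no prior row exists
        out.append((ts, lval, rval))
    return out

trades = [(100, "buy"), (205, "sell")]
quotes = [(90, 10.0), (200, 10.5), (300, 11.0)]
print(asof_join(trades, quotes))
# [(100, 'buy', 10.0), (205, 'sell', 10.5)]
```

In SQL terms this is the query "join each trade to the prevailing quote", which is awkward to express with standard joins but a one-liner with ASOF JOIN.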
High-Speed Data Ingestion · SQL Query Engine · Nanosecond Timestamps · ASOF Joins · Materialized Views · Apache Parquet Support · Multi-Tier Storage Engine · Built-in Web Console · Time-Partitioned Storage · ILP & PostgreSQL Wire Protocol

Pros

  • Benchmark-leading ingestion speed at 4M+ rows/second per instance — ideal for high-frequency IoT sensor firehoses
  • Standard SQL with time-series extensions (ASOF JOIN, SAMPLE BY) eliminates proprietary query language lock-in
  • InfluxDB Line Protocol support enables drop-in migration from InfluxDB without changing collection pipelines
  • Nanosecond timestamp precision for financial trading, scientific instrumentation, and high-frequency IoT
  • Open-source with no feature gates — full performance available in the free community edition

Cons

  • Smaller ecosystem and community compared to InfluxDB or TimescaleDB — fewer tutorials and third-party integrations
  • No native alerting or PromQL support — not a direct replacement for monitoring-focused databases
  • Younger project with less production deployment history than established alternatives

Our Verdict: Best for teams where raw ingestion speed and query performance are the top priorities. Ideal for high-frequency IoT sensor data and financial time series workloads.


Simple & Reliable Monitoring for Everyone

💰 Free open-source Community edition with all core features; Enterprise and Cloud plans starting at ~$190/month with tiered support (Silver, Gold, Platinum)

<p><a href="/tools/victoriametrics">VictoriaMetrics</a> exists because Prometheus — the industry standard for monitoring — has fundamental limitations that become painful at scale: single-node only, short-term storage (15-30 days), high memory usage, and poor performance at high cardinality. VictoriaMetrics solves all four while maintaining <strong>full PromQL compatibility</strong>, which means your existing Grafana dashboards, alerting rules, and recording rules work unchanged. For monitoring teams that have outgrown Prometheus, VictoriaMetrics is the most seamless upgrade path.</p><p>The efficiency numbers are striking: VictoriaMetrics uses <strong>up to 10x less RAM</strong> than Prometheus, 7x less disk space, and can store 70x more data points per gigabyte than TimescaleDB. In production at organizations like CERN, Wix, and Adidas, it handles billions of active time series across clusters. The single-binary deployment model (download, configure, run) makes it operationally simple compared to distributed systems like Thanos or Grafana Mimir.</p><p>For IoT use cases, VictoriaMetrics supports <strong>both pull and push ingestion models</strong> — a critical advantage over Prometheus's pull-only approach. IoT devices behind NAT firewalls or with intermittent connectivity can push metrics via remote write, InfluxDB Line Protocol, or OpenTelemetry. The built-in anomaly detection component automatically identifies irregular patterns in metrics data, reducing the manual alert tuning that plagues monitoring teams. MetricsQL extends PromQL with convenience functions (range_first, range_last, histogram_quantile improvements) that make complex monitoring queries simpler.</p>
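Downsampling, one of the retention features mentioned above, is conceptually simple: collapse fine-grained samples into coarser time buckets so long-term tiers store rollups instead of raw points. VictoriaMetrics does this server-side; this sketch just shows the bucketing idea with assumed one-minute buckets:

```python
from collections import defaultdict

def downsample(samples, bucket_s):
    """samples: list of (unix_ts, value). Returns per-bucket averages
    as (bucket_start_ts, avg), sorted by bucket start."""
    buckets = defaultdict(list)
    for ts, v in samples:
        buckets[ts - ts % bucket_s].append(v)   # align ts down to bucket start
    return [(b, sum(vs) / len(vs)) for b, vs in sorted(buckets.items())]

raw = [(0, 1.0), (10, 3.0), (70, 5.0)]
print(downsample(raw, 60))
# [(0, 2.0), (60, 5.0)]
```

Applied at scale, turning one-second samples into five-minute averages shrinks long-term storage by roughly 300x for that tier, which is why tiered retention policies dominate the cost discussion later in this guide.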
High-Performance Time Series Database · PromQL & MetricsQL Support · Grafana Integration · Anomaly Detection · Horizontal & Vertical Scaling · Multi-Protocol Ingestion · VictoriaLogs & VictoriaTraces · Downsampling & Multiple Retentions · Kubernetes Native · Long-Term Storage

Pros

  • Drop-in Prometheus replacement with full PromQL compatibility — existing dashboards and alerts work unchanged
  • 10x less RAM and 7x less disk than Prometheus with superior compression for long-term metric storage
  • Push and pull ingestion models support both traditional monitoring (scraping) and IoT device push patterns
  • Built-in anomaly detection reduces manual alert threshold tuning for monitoring teams
  • Single-binary deployment with optional cluster mode — operationally simpler than Thanos or Mimir

Cons

  • MetricsQL extensions create mild vendor coupling — queries using VM-specific functions won't port back to Prometheus
  • Smaller commercial backing than InfluxData or ClickHouse Inc. — enterprise support depends on a smaller company
  • Downsampling and multi-tenancy require the Enterprise edition — not available in open-source

Our Verdict: Best Prometheus replacement for monitoring teams that need long-term storage, better compression, and horizontal scaling. The most resource-efficient TSDB on this list.

Fast open-source columnar database for real-time analytics

<p><a href="/tools/clickhouse">ClickHouse</a> isn't technically a time series database — it's a columnar analytics engine. But it's increasingly chosen for time series workloads because it solves the problem that purpose-built TSDBs struggle with most: <strong>high cardinality</strong>. When your IoT deployment has millions of unique device/metric/tag combinations, databases using inverted time-series indexes (InfluxDB, Prometheus) degrade sharply. ClickHouse's columnar architecture handles this by design — Tesla runs 1 billion rows per second of ingestion through ClickHouse, and organizations like Uber, Cloudflare, and Netflix rely on it for observability at massive scale.</p><p>The recent launch of <strong>ClickStack</strong> positions ClickHouse directly in the monitoring space: a unified observability stack that handles logs, metrics, traces, and session replay in a single engine, fully OpenTelemetry-native. For teams tired of running separate databases for each telemetry type (Prometheus for metrics, Elasticsearch for logs, Jaeger for traces), ClickStack consolidates everything into one queryable system with a rich SQL dialect.</p><p>ClickHouse Cloud offers a fully managed experience with automatic scaling, starting from $25/month. The open-source version can be self-hosted on any infrastructure. The trade-off for IoT teams is that ClickHouse requires more configuration for time-series-specific patterns (no native PromQL, no built-in downsampling policies, no time-series-specific compression codecs by default). It's the right choice when you need <strong>analytical depth across massive datasets</strong> rather than a turnkey time-series-specific solution.</p>
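The columnar advantage described above comes down to memory layout. This toy illustration (not ClickHouse internals) shows the core idea: each column is stored contiguously, so an aggregate over one column never touches the others, and low-distinct columns compress well:

```python
# Toy row store vs column store. In a columnar engine, an AVG over one
# column scans a single contiguous array, regardless of how many other
# columns (or how many distinct tag values) the table has.

rows = [
    ("sensor-1", "eu", 21.5),
    ("sensor-2", "eu", 19.0),
    ("sensor-3", "us", 22.0),
]

# Column store: one array per column instead of one tuple per record.
columns = {
    "device": [r[0] for r in rows],
    "region": [r[1] for r in rows],
    "temp":   [r[2] for r in rows],
}

# AVG(temp) touches only the temp array, never device or region.
avg_temp = sum(columns["temp"]) / len(columns["temp"])
print(round(avg_temp, 2))   # 20.83

# Repetitive columns are highly compressible (few distinct values):
print(len(set(columns["region"])), "distinct regions vs", len(rows), "rows")
```

Because there is no per-series index to blow up, adding a millionth distinct device value just adds entries to the `device` column; it does not degrade scans over `temp`. That is the structural reason ClickHouse sidesteps the cardinality wall.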
Columnar storage with advanced compression · Sub-second query performance on billions of rows · SQL-compatible query language · Real-time data ingestion · Distributed query processing across clusters · Materialized views (incremental and refreshable) · 100+ integrations with data tools · ClickHouse Cloud fully managed service · Automatic scaling of compute and storage · Window functions and complex JOINs · Time-series and observability optimizations · Secondary data skipping indexes

Pros

  • Handles high cardinality by design — no 'cardinality wall' that plagues inverted-index TSDBs
  • Sub-second analytical queries across billions of rows with advanced columnar compression
  • ClickStack unifies logs, metrics, traces, and session replay in one OpenTelemetry-native engine
  • Rich SQL dialect with JOINs, window functions, and materialized views — familiar to analytics teams
  • Proven at extreme scale: Tesla, Uber, Cloudflare, Netflix, Deutsche Bank

Cons

  • Not a native TSDB — requires more configuration for time-series patterns like retention policies and downsampling
  • No native PromQL support — monitoring teams must translate existing dashboards and alerts to SQL
  • Heavier resource footprint than purpose-built TSDBs for simple metric storage and retrieval

Our Verdict: Best for teams that need analytical queries across massive, high-cardinality datasets. The right choice when your data volume or tag cardinality exceeds what purpose-built TSDBs can handle.

AI-powered time-series database for Industrial IoT

💰 Open-source core is free. Cloud from $975/month (5K tags). Self-hosted enterprise from $5,000/year.

<p><a href="/tools/tdengine">TDengine</a> is the only database on this list built specifically for <strong>Industrial IoT</strong>. Its "one table per device" data model (supertables) maps directly to how IoT systems are structured — each device gets its own table, and supertables define shared schemas across device types. This eliminates the tag-explosion cardinality problems that plague other TSDBs in large IoT deployments and enables queries like "aggregate all temperature sensors of type X across all factories" without scanning irrelevant data.</p><p>The all-in-one architecture is TDengine's most distinctive feature. It bundles a <strong>TSDB engine, stream processing, caching, and data subscription</strong> in a single deployment — replacing what would otherwise require InfluxDB + Kafka + Redis + a stream processor. Built-in stream processing handles windowed aggregations with millisecond latency, the latest-value cache eliminates a separate Redis layer, and Kafka-like pub/sub distributes data to downstream consumers. For industrial teams that want fewer moving parts, this consolidation is compelling.</p><p>Performance benchmarks consistently show TDengine writing <strong>1.8-16x faster than InfluxDB</strong> with 2.3-25.8x better compression. Native connectors for MQTT, OPC-UA, OPC-DA, and OSIsoft PI System mean industrial data sources connect directly without middleware. The AI copilot adds anomaly detection and forecasting for predictive maintenance scenarios. The limitation is ecosystem: TDengine has a smaller community outside China, and its cloud pricing starts at $975/month — steep for small teams experimenting with IoT.</p>
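The supertable model is worth making concrete. In this sketch, each device gets its own sub-table that shares a schema plus static tags, so a fleet-wide aggregate filters sub-tables by tag instead of multiplying series. This is an illustration of the data model, not TDengine's SQL or storage format:

```python
# Sketch of the supertable idea: sub-tables share one schema, each
# carries static tags, and fleet queries filter by tag.

supertable = {"schema": ("ts", "temperature"), "subtables": {}}

def create_device(device_id, tags):
    supertable["subtables"][device_id] = {"tags": tags, "rows": []}

def insert(device_id, ts, temp):
    supertable["subtables"][device_id]["rows"].append((ts, temp))

def avg_where(**tag_filter):
    """Average temperature across all sub-tables matching the tag filter."""
    vals = [t for sub in supertable["subtables"].values()
            if all(sub["tags"].get(k) == v for k, v in tag_filter.items())
            for _, t in sub["rows"]]
    return sum(vals) / len(vals)

create_device("d1", {"factory": "berlin", "type": "thermo"})
create_device("d2", {"factory": "munich", "type": "thermo"})
insert("d1", 1, 20.0)
insert("d1", 2, 22.0)
insert("d2", 1, 30.0)
print(avg_where(factory="berlin"))   # 21.0
print(avg_where(type="thermo"))      # 24.0
```

Because tags live once per device rather than once per data point, "aggregate all thermo sensors in the Berlin factory" prunes whole sub-tables up front, avoiding the per-point tag indexing that causes cardinality explosions elsewhere.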
High-Performance Time-Series Ingestion · Built-in Stream Processing · Built-in Caching · Built-in Data Subscription · AI Copilot · Industrial Data Modeling · Native SQL Support · Industrial Protocol Connectors · Storage Efficiency · Multi-Cloud Deployment

Pros

  • Purpose-built 'supertable' data model maps perfectly to IoT device hierarchies without cardinality explosion
  • All-in-one architecture replaces separate TSDB, Kafka, Redis, and stream processing with a single deployment
  • 1.8-16x faster writes and 2.3-25.8x better compression than InfluxDB in independent benchmarks
  • Native industrial protocol support for MQTT, OPC-UA, OPC-DA, and OSIsoft PI System
  • Built-in AI copilot for anomaly detection and forecasting on industrial time-series data

Cons

  • Cloud pricing starts at $975/month — expensive for small teams and early-stage IoT projects
  • Smaller community outside China with fewer English-language resources and third-party integrations
  • Optimized exclusively for time-series — not suitable for mixed workloads or general-purpose queries

Our Verdict: Best for Industrial IoT deployments that need native MQTT/OPC-UA support, edge-to-cloud replication, and an all-in-one architecture that eliminates middleware complexity.

Open-source monitoring and alerting toolkit for cloud-native environments

💰 Free and open-source under the Apache 2.0 license

<p><a href="/tools/prometheus">Prometheus</a> isn't the fastest TSDB, the most scalable, or the most feature-rich. It's ranked seventh on raw database capability. But it's ranked first in the <strong>Kubernetes monitoring ecosystem</strong> — and that ecosystem advantage makes it the most deployed monitoring solution in cloud-native environments. PromQL is the industry-standard monitoring query language. Thousands of pre-built exporters cover every major service and infrastructure component. Grafana dashboards, alerting rules, and recording rules are overwhelmingly written for Prometheus first.</p><p>As a CNCF graduated project, Prometheus has the broadest community support of any monitoring tool. <strong>Service discovery</strong> automatically detects and monitors new Kubernetes pods, services, and nodes as they appear — no manual configuration required. Alertmanager handles alert routing, silencing, grouping, and deduplication with production-tested reliability. The pull-based scraping model provides strong control over collection intervals and eliminates the risk of being overwhelmed by rogue metric producers.</p><p>The limitations are well-known: <strong>single-node only</strong> (no native clustering), short-term storage (15-30 days by default), and performance degradation at high cardinality. For most teams, the answer isn't replacing Prometheus but <strong>extending it</strong>. Use Prometheus for collection and short-term queries, and pair it with VictoriaMetrics, Thanos, or Grafana Mimir for long-term storage and horizontal scaling. This architecture gives you the best monitoring ecosystem with none of the scaling limitations.</p>
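Much of PromQL's value boils down to a handful of functions like `rate()`, which converts a monotonically increasing counter into a per-second rate over a window. Here is a deliberately simplified sketch of that calculation (the real `rate()` also handles counter resets and extrapolates to window boundaries), with made-up scrape data:

```python
def rate(samples):
    """Per-second increase of a counter over a window.
    samples: list of (unix_ts, counter_value), oldest first.
    Simplified: no counter-reset handling, no boundary extrapolation."""
    (t0, v0), (tn, vn) = samples[0], samples[-1]
    return (vn - v0) / (tn - t0)

# http_requests_total scraped every 15s over a 1-minute window:
window = [(0, 100), (15, 130), (30, 160), (45, 190), (60, 220)]
print(rate(window))   # 2.0 requests/second
```

This is why dashboards graph `rate(http_requests_total[1m])` rather than the raw counter: the counter only ever grows, while the rate is the signal you actually want to alert on.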
PromQL Query Language · Multi-Dimensional Data Model · Alerting with Alertmanager · Service Discovery · Pull-Based Metrics Collection · Exporters & Integrations · Grafana Integration · Built-in Expression Browser

Pros

  • Industry-standard PromQL and the broadest ecosystem of exporters, dashboards, and alerting rules
  • CNCF graduated project with massive community — virtually every cloud-native tool has a Prometheus exporter
  • Automatic Kubernetes service discovery detects and monitors new pods and services without manual configuration
  • Alertmanager provides battle-tested alert routing, grouping, silencing, and deduplication
  • Completely free and open-source with no licensing costs or paid tiers

Cons

  • Single-node architecture with no native clustering — requires Thanos, Mimir, or VictoriaMetrics for scale
  • Default 15-30 day retention — not suitable as a long-term time series database without external storage
  • Pull-based scraping model doesn't suit IoT devices behind NAT firewalls or with intermittent connectivity

Our Verdict: Best for Kubernetes and cloud-native infrastructure monitoring. Pair with VictoriaMetrics or Thanos for long-term storage to build a complete, production-grade monitoring stack.

Our Conclusion

<p>The right time series database depends on your specific workload pattern, team expertise, and where you are on the scale curve.</p><p><strong>If your team already uses PostgreSQL</strong>, <a href="/tools/timescaledb">TimescaleDB</a> is the safest and most productive choice. You keep your existing tools, skills, and ecosystem while gaining purpose-built time series performance. The ability to JOIN time series data with relational business data in a single query is a genuine differentiator that no other TSDB matches.</p><p><strong>If you need the broadest data collection ecosystem</strong>, <a href="/tools/influxdb">InfluxDB</a> with Telegraf's 300+ plugins gives you the fastest path from zero to collecting metrics from virtually any source. It's the most battle-tested option with the largest community.</p><p><strong>If raw ingestion speed is your priority</strong> — high-frequency sensor data, financial tick data, or dense telemetry — <a href="/tools/questdb">QuestDB</a> delivers benchmark-leading write throughput with familiar SQL syntax. It's the performance pick.</p><p><strong>If you're outgrowing Prometheus</strong>, <a href="/tools/victoriametrics">VictoriaMetrics</a> is the drop-in replacement that gives you long-term storage, better compression, and horizontal scaling while keeping your existing PromQL dashboards and alerting rules intact.</p><p><strong>If you need analytics across massive datasets</strong> with high cardinality (millions of unique device/metric combinations), <a href="/tools/clickhouse">ClickHouse</a> handles what purpose-built TSDBs choke on. 
Its columnar architecture was designed for exactly this kind of analytical workload.</p><p><strong>If you're building Industrial IoT</strong> with MQTT devices, OPC-UA connections, and edge-to-cloud pipelines, <a href="/tools/tdengine">TDengine</a>'s purpose-built industrial data model eliminates the middleware layer that other databases require.</p><p><strong>If you're running Kubernetes monitoring</strong> and want the industry-standard approach, <a href="/tools/prometheus">Prometheus</a> remains the default. Pair it with VictoriaMetrics or Thanos for long-term storage, and you have a production-grade monitoring stack.</p><p>One common mistake: don't benchmark on small data and extrapolate. Performance characteristics change dramatically at scale — test with realistic cardinality and retention windows before committing. Most of these databases offer free tiers or open-source editions, so prototype your actual workload before choosing.</p><p>For related guides, see our roundup of <a href="/best/best-open-source-bi-data-visualization-tools">best open-source BI and data visualization tools</a> or browse all <a href="/categories/data-warehousing">data warehousing tools</a> in our directory.</p>

Frequently Asked Questions

What is a time series database and when do you need one?

A time series database (TSDB) is a database optimized for storing and querying data that's indexed by time — sensor readings, server metrics, stock prices, or any measurement that changes over time. You need one when a general-purpose database like PostgreSQL or MySQL starts struggling with the write volume (typically millions of inserts per second), query speed on time-range aggregations, or storage costs of your timestamped data. If you're running IoT sensors, infrastructure monitoring, or real-time analytics on more than a few hundred metrics, a purpose-built TSDB will outperform a general-purpose database by 10-100x on these workloads.

What is the cardinality problem in time series databases?

Cardinality refers to the number of unique time series in your database — the cross-product of all your tags and labels. For example, 1,000 devices × 50 metrics × 10 regions = 500,000 unique series. Some databases (especially those using inverted indexes like InfluxDB or Prometheus) degrade sharply as cardinality grows into the millions, causing slow queries, high memory usage, and compaction problems. ClickHouse and QuestDB handle high cardinality better by design, while TimescaleDB and VictoriaMetrics offer good middle-ground performance. Planning your cardinality upfront is critical when choosing a TSDB.

Should I use Prometheus or a dedicated TSDB for monitoring?

Prometheus is excellent for Kubernetes and cloud-native monitoring with short-term retention (15-30 days). If you need longer retention, multi-cluster monitoring, or higher cardinality, pair Prometheus with a long-term storage backend like VictoriaMetrics (best compression and PromQL compatibility), Thanos (multi-cluster with object storage), or Grafana Mimir (horizontally scalable). For IoT monitoring where devices push data rather than being scraped, consider InfluxDB, TDengine, or VictoriaMetrics (which supports push via remote write) instead of Prometheus's pull model.

How much does it cost to run a time series database at scale?

Costs vary dramatically based on compression, retention, and deployment model. For self-hosted open-source options (Prometheus, VictoriaMetrics, QuestDB, TimescaleDB Community), the primary cost is infrastructure — typically $200-500/month for a moderate monitoring workload on cloud VMs. Managed cloud services range from free tiers (InfluxDB, TimescaleDB, ClickHouse) to $500-3,500/month for production workloads. The biggest hidden cost is storage: a database with 10:1 compression stores 10x less than one with baseline compression, directly impacting monthly cloud storage bills. Automated downsampling and tiered retention policies can reduce long-term storage costs by 80-90%.
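To make the storage point concrete, here is a back-of-the-envelope model combining retention and compression. The 16 bytes/point raw size is an assumed figure for illustration; real per-point footprints depend heavily on the engine and encoding:

```python
def stored_gb(points_per_day, bytes_per_point, retention_days, compression_ratio):
    """Total stored data in GB after compression, for a fixed retention window."""
    raw_bytes = points_per_day * bytes_per_point * retention_days
    return raw_bytes / compression_ratio / 1e9

# 864M points/day (the intro's 10K sensors at 1s), 1 year retention:
baseline = stored_gb(864_000_000, 16, 365, 1)     # no compression
compressed = stored_gb(864_000_000, 16, 365, 10)  # 10:1 compression

print(round(baseline))     # ~5046 GB uncompressed
print(round(compressed))   # ~505 GB at 10:1 -- a 10x smaller storage bill
```

Layering downsampling on top (e.g. keeping only five-minute rollups past 30 days) shrinks the long-tail tier further, which is where the 80-90% savings figure above comes from.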

Which time series database is best for IoT edge deployments?

For edge-to-cloud IoT pipelines, TDengine is the strongest option with native MQTT support, edge-node deployment, and built-in data replication to cloud. InfluxDB also offers edge deployment through its Edge Data Replication feature with Telegraf as a lightweight collector. For lightweight edge needs, VictoriaMetrics has a small resource footprint suitable for edge gateways. The key consideration is intermittent connectivity — your edge database needs to buffer data locally during outages and sync when connectivity resumes, which TDengine and InfluxDB handle natively.