Recap: Performant time-series data management & analytics with Postgres

Watch the webinar recording to learn more about TimescaleDB’s inception and how it improves insert rates by 20x over vanilla Postgres

As you may already know, time-series databases are one of the fastest-growing segments of the database market, spreading across industries and use cases. Unfortunately, many developers working with time-series data think they have to turn to NoSQL databases for storage at scale, and to relational databases for managing associated metadata and key business data. (Don’t do this!)

This approach leads to engineering complexity, operational challenges, and even referential integrity concerns. So this week, we hosted a webinar titled “Performant Time-Series Data Management and Analytics with Postgres,” which dives deeper into how we built TimescaleDB on top of Postgres to manage all of your time-series data.

More specifically, in this webinar we touched on several topics, including:

  • The new wave of computing driven by the rise of machine data
  • Background on time-series databases and what makes them unique compared to traditional relational databases
  • The story of how we re-engineered Postgres to serve as a general data platform
  • Details on the TimescaleDB features that allow it to improve insert rates by 20x over vanilla Postgres and achieve much faster queries

If this sounds interesting to you, watch the video below!

WATCH NOW:

At the end of the webinar we had some time for a Q&A. Here is a selection of the questions and our answers:

What is the lowest level of granularity that timestamps support? For example, can a timestamp have microsecond or nanosecond granularity?

One of our main principles is creating flexibility for users: we don’t try to force your data model into our particular storage model. That said, we offer timestamps with or without time zones. Internally, the timestamp data type is stored with microsecond precision; however, you can also use an integer time column, which lets you represent finer granularities such as nanoseconds.
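To make that concrete, here is a minimal sketch (the table and column names are illustrative, not from the webinar). A TIMESTAMPTZ column gives you microsecond precision out of the box; an integer column can hold, say, nanoseconds since the Unix epoch, in which case the chunk interval must be given in the same units:

```sql
-- Timestamp-based hypertable: Postgres TIMESTAMPTZ has microsecond precision.
CREATE TABLE conditions (
  time        TIMESTAMPTZ NOT NULL,
  device_id   TEXT,
  temperature DOUBLE PRECISION
);
SELECT create_hypertable('conditions', 'time');

-- Integer-based hypertable: here `time` holds nanoseconds since the Unix epoch,
-- so chunk_time_interval must also be expressed in nanoseconds (one day below).
CREATE TABLE events (
  time  BIGINT NOT NULL,
  value DOUBLE PRECISION
);
SELECT create_hypertable('events', 'time',
       chunk_time_interval => 86400000000000);
```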

Is any latency added to writes when a new chunk is created or required?

We’ve found that the latency added by creating a chunk is trivial, if noticeable at all. We typically recommend sizing chunks to hold millions (or tens of millions) of rows, so chunks are created rarely relative to the number of inserts. Overall, we’ve heavily optimized the chunk-creation path within the database.
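For context, chunk size is governed by the chunk time interval, which you can set when creating the hypertable and adjust later. A minimal sketch, with illustrative names (pick an interval that yields millions of rows per chunk at your ingest rate):

```sql
-- Set the chunk interval up front...
SELECT create_hypertable('conditions', 'time',
       chunk_time_interval => INTERVAL '1 day');

-- ...or change it later; the new interval applies to newly created chunks.
SELECT set_chunk_time_interval('conditions', INTERVAL '12 hours');
```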

How do “day” aggregates get calculated when users are in different time zones?

This is typically a use-case-specific problem. We often suggest that people store data in UTC and then add an optional field that records the local timestamp or time zone.
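As a sketch of that pattern (names are illustrative): store a TIMESTAMPTZ in UTC and shift it into the user’s time zone at query time before bucketing by day, using plain Postgres functions:

```sql
-- Daily averages in a user's local time zone; `conditions.time` is stored in UTC.
SELECT date_trunc('day', time AT TIME ZONE 'America/New_York') AS day,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY day
ORDER BY day;
```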

If the data is chronological, but doesn’t arrive in chronological order and requires frequent updates and inserts, would TimescaleDB still be a good choice?

Yes. TimescaleDB fully supports updates and upserts, so you could define a unique constraint and upsert incoming data. Just make sure you pay attention to how much data you are inserting at once, and throttle your insert rate to avoid taking up too much memory.
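For illustration (the names here are hypothetical), the standard Postgres upsert pattern works: create a unique index that includes the time column, then use INSERT ... ON CONFLICT:

```sql
-- Unique constraint over (time, device_id); note it must include the time column.
CREATE UNIQUE INDEX ON conditions (time, device_id);

-- Late-arriving or corrected readings overwrite the existing row.
INSERT INTO conditions (time, device_id, temperature)
VALUES ('2018-01-01 12:00:00+00', 'dev_1', 21.5)
ON CONFLICT (time, device_id)
DO UPDATE SET temperature = EXCLUDED.temperature;
```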

When first getting started with TimescaleDB, what’s the best way to ask the team questions?

There are two main venues. If you’ve found a bug or want to request an enhancement, we invite you to file an issue on GitHub. If you just want to talk to the broader community and our engineers, our Slack channel is a great place for that.

If you have other questions, feel free to leave a comment in the section below.
