
TimescaleDB Is Slow

Symptoms

  • Dashboard pages load slowly (5s+ wait)
  • Measurement API endpoints time out (504 Gateway Timeout)
  • Charts take very long to load data
  • Report generation times out

Possible Causes

  1. Chunks are too large, or uncompressed chunks have filled the disk
  2. Missing or corrupt indexes — the query planner falls back to full table scans
  3. Compression has never run, so older data is uncompressed
  4. The retention policy is not running and stale data has accumulated
  5. The connection pool is exhausted, so new connections queue
  6. VACUUM has not run for a long time and dead tuples have piled up
  7. Disk I/O is saturated (slow disk or IOPS limit)

Diagnostic Steps

1. Check Active Queries and Their Durations

# Find active queries running longer than 5 seconds
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
SELECT pid, state, wait_event_type, wait_event,
NOW() - query_start AS duration,
LEFT(query, 150) AS query_text
FROM pg_stat_activity
WHERE datname = 'zeus_db'
AND state = 'active'
AND query_start < NOW() - INTERVAL '5 seconds'
ORDER BY query_start;
"

# Check for blocked queries (pg_blocking_pids, PostgreSQL 9.6+, avoids
# the fragile self-join on pg_locks)
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
SELECT blocked.pid AS blocked_pid,
blocking.pid AS blocking_pid,
LEFT(blocked.query, 100) AS blocked_query,
LEFT(blocking.query, 100) AS blocking_query
FROM pg_stat_activity blocked
JOIN pg_stat_activity blocking
ON blocking.pid = ANY (pg_blocking_pids(blocked.pid));
"

2. Check Chunk Sizes

# List hypertable chunk details (in TimescaleDB 2.x the chunks view has
# no size columns; sizes come from chunks_detailed_size())
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
SELECT c.chunk_name,
pg_size_pretty(s.total_bytes) AS total_size,
pg_size_pretty(s.table_bytes) AS table_size,
pg_size_pretty(s.index_bytes) AS index_size,
c.is_compressed,
c.range_start,
c.range_end
FROM timescaledb_information.chunks c
JOIN chunks_detailed_size('measurements') s ON s.chunk_name = c.chunk_name
WHERE c.hypertable_name = 'measurements'
ORDER BY c.range_end DESC
LIMIT 20;
"

# Check the total hypertable size (pg_total_relation_size on the root
# table excludes chunk data, so use hypertable_size)
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
SELECT pg_size_pretty(hypertable_size('measurements')) AS total_size;
"
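When interpreting the sizes above, TimescaleDB's general guidance is that recent chunks plus their indexes should fit in roughly 25% of available memory. A small sketch of that sizing heuristic (the RAM and ingest figures are illustrative, not measured from this deployment):

```python
# Chunk-interval sizing heuristic: one chunk (plus indexes) should fit
# in ~25% of RAM. The inputs below are example numbers, not measurements.

def suggested_chunk_interval_hours(ram_gb: float,
                                   ingest_gb_per_day: float,
                                   budget_fraction: float = 0.25) -> float:
    """Chunk interval (hours) that keeps one chunk within the memory budget."""
    budget_gb = ram_gb * budget_fraction
    return budget_gb / ingest_gb_per_day * 24

# Example: 16 GB of RAM, ~8 GB of new measurement data per day
print(suggested_chunk_interval_hours(16, 8))  # -> 12.0
```

If the chunks listed above are far larger than this budget, consider lowering `chunk_time_interval` via `set_chunk_time_interval` (affects new chunks only).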

3. Check Compression Status

# Check the compression policy settings
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
SELECT * FROM timescaledb_information.compression_settings
WHERE hypertable_name = 'measurements';
"

# Count compressed vs. uncompressed chunks
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
SELECT c.is_compressed, COUNT(*) AS chunk_count,
pg_size_pretty(SUM(s.total_bytes)) AS total_size
FROM timescaledb_information.chunks c
JOIN chunks_detailed_size('measurements') s ON s.chunk_name = c.chunk_name
WHERE c.hypertable_name = 'measurements'
GROUP BY c.is_compressed;
"

# Check the compression job status (run statistics live in the
# job_stats view, not in jobs)
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
SELECT j.job_id, j.schedule_interval, j.max_runtime,
s.last_run_status, s.last_run_duration,
s.next_start
FROM timescaledb_information.jobs j
JOIN timescaledb_information.job_stats s USING (job_id)
WHERE j.proc_name = 'policy_compression';
"

4. Check the Slow Query Log

# Check the PostgreSQL slow-query log threshold
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
SHOW log_min_duration_statement;
"

# Search the container log for slow queries
docker logs --tail 500 zeus-postgres 2>&1 | grep -i "duration\|slow\|LOG:"

# Show statistics for the slowest queries (requires the
# pg_stat_statements extension)
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
SELECT LEFT(query, 100) AS query_text,
calls,
ROUND(total_exec_time::numeric, 2) AS total_time_ms,
ROUND(mean_exec_time::numeric, 2) AS mean_time_ms,
rows
FROM pg_stat_statements
WHERE dbid = (SELECT oid FROM pg_database WHERE datname = 'zeus_db')
ORDER BY mean_exec_time DESC
LIMIT 10;
"
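If the query above fails with `relation "pg_stat_statements" does not exist`, the extension is not set up. A minimal setup sketch (assuming a stock TimescaleDB image; the library must be preloaded, which requires a server restart, before the extension can be created):

```
# postgresql.conf — append pg_stat_statements to the preload list
shared_preload_libraries = 'timescaledb,pg_stat_statements'

-- then, inside zeus_db:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```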

5. Check Disk and Memory

# Check disk usage of the PostgreSQL data directory
docker exec zeus-postgres df -h /var/lib/postgresql/data

# Check the PostgreSQL memory settings
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'effective_cache_size',
'maintenance_work_mem', 'max_connections');
"

# Check the cache hit ratio (a low value suggests too little memory)
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
SELECT
sum(heap_blks_read) AS disk_reads,
sum(heap_blks_hit) AS cache_hits,
ROUND(sum(heap_blks_hit) * 100.0 / NULLIF(sum(heap_blks_hit) + sum(heap_blks_read), 0), 2)
AS cache_hit_ratio
FROM pg_statio_user_tables;
"

Remediation Steps

Run Compression Manually

# Compress uncompressed chunks older than 2 days (the original
# chunk_name comparison never matched show_chunks' schema-qualified
# output; if_not_compressed handles already-compressed chunks instead)
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
SELECT compress_chunk(i, if_not_compressed => true)
FROM show_chunks('measurements', older_than => INTERVAL '2 days') i;
"

# Verify that the compression policy is in place
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
SELECT * FROM timescaledb_information.jobs
WHERE proc_name = 'policy_compression';
"

Run VACUUM ANALYZE

# VACUUM ANALYZE the measurements table
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
VACUUM (VERBOSE, ANALYZE) measurements;
"

# VACUUM the whole database (can take a long time on large databases)
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
VACUUM ANALYZE;
"

# Check dead-tuple statistics
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
SELECT relname AS table_name,
n_live_tup AS live_tuples,
n_dead_tup AS dead_tuples,
ROUND(n_dead_tup * 100.0 / NULLIF(n_live_tup + n_dead_tup, 0), 2) AS dead_ratio,
last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE n_dead_tup > 1000
ORDER BY n_dead_tup DESC
LIMIT 10;
"
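To judge whether autovacuum is keeping up with the dead-tuple counts above, compare `n_dead_tup` against PostgreSQL's trigger formula. A sketch assuming the default settings (`autovacuum_vacuum_threshold = 50`, `autovacuum_vacuum_scale_factor = 0.2`; check `pg_settings` for the actual values):

```python
# Dead-tuple count above which autovacuum queues a VACUUM for a table:
# threshold + scale_factor * n_live_tup (PostgreSQL defaults assumed).

def autovacuum_trigger(n_live_tup: int,
                       threshold: int = 50,
                       scale_factor: float = 0.2) -> float:
    """Autovacuum fires once n_dead_tup exceeds this value."""
    return threshold + scale_factor * n_live_tup

# Example: a table with 1M live rows
print(autovacuum_trigger(1_000_000))  # -> 200050.0
```

If `n_dead_tup` sits far above this value and `last_autovacuum` is old, autovacuum is blocked or starved, which is when the manual VACUUM above is warranted.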

Check the Retention Policy (730 days)

# Check the current retention policy
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
SELECT * FROM timescaledb_information.jobs
WHERE proc_name = 'policy_retention';
"

# Create/update the retention policy if it is missing or misconfigured
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
-- 730-day retention policy (2 years)
SELECT add_retention_policy('measurements', INTERVAL '730 days', if_not_exists => true);
"

# Manually drop old data (use with caution — this deletes data permanently)
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
SELECT drop_chunks('measurements', older_than => INTERVAL '730 days');
"

Rebuild Indexes

# List the indexes on the measurements table
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
SELECT indexname, pg_size_pretty(pg_relation_size(indexname::regclass)) AS size
FROM pg_indexes
WHERE tablename = 'measurements'
ORDER BY pg_relation_size(indexname::regclass) DESC;
"

# Rebuild corrupt indexes without blocking writes (PostgreSQL 12+).
# Note: on a hypertable this targets the root table; individual chunk
# indexes may need to be reindexed separately.
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
REINDEX TABLE CONCURRENTLY measurements;
"

Increase the Connection Pool

# Check current connection usage
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "
SELECT count(*) AS total,
count(*) FILTER (WHERE state = 'active') AS active,
count(*) FILTER (WHERE state = 'idle') AS idle,
count(*) FILTER (WHERE state = 'idle in transaction') AS idle_in_transaction
FROM pg_stat_activity
WHERE datname = 'zeus_db';
"

# Check the PostgreSQL max_connections setting
docker exec -it zeus-postgres psql -U zeus -d zeus_db -c "SHOW max_connections;"

# Update the backend connection pool settings
# in docker-compose.yml:
# DATABASE_POOL_SIZE=30 (default: 10)
# DATABASE_MAX_OVERFLOW=50 (default: 20)
# DATABASE_POOL_TIMEOUT=60 (default: 30)
docker compose restart backend
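Before raising the pool settings, check that the worst case still fits under `max_connections`. A sanity-check sketch, assuming SQLAlchemy-style pool semantics for `DATABASE_POOL_SIZE`/`DATABASE_MAX_OVERFLOW` and that each backend worker process opens its own pool (the worker count and reserve are illustrative):

```python
# Worst-case backend connections = (pool_size + max_overflow) per worker.
# Keep that under max_connections minus a reserve for maintenance sessions.

def fits_under_max_connections(pool_size: int, max_overflow: int,
                               workers: int, max_connections: int,
                               reserved: int = 10) -> bool:
    """True if the worst-case pool usage leaves headroom for other sessions."""
    worst_case = (pool_size + max_overflow) * workers
    return worst_case <= max_connections - reserved

# The suggested settings with 2 workers do NOT fit under PostgreSQL's
# default max_connections=100 — raise max_connections as well:
print(fits_under_max_connections(30, 50, 2, 100))  # -> False
print(fits_under_max_connections(30, 50, 2, 200))  # -> True
```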

Escalation

Escalate to the DBA or DevOps team in the following situations:

  • Disk usage exceeds 90% — urgent disk expansion or old-data cleanup is required
  • The cache hit ratio drops below 95% — a memory increase may be needed
  • Chunk corruption is suspected — contact TimescaleDB support
  • Queries take longer than 30 seconds and index/vacuum fixes did not help
  • The connection pool is constantly exhausted and the backend needs scaling out
  • Data loss is suspected (unreadable chunks, consistency problems)