I want to split my queries into slices of a million IDs each, for scalability and concurrency, and I want to fetch all rows whose updated_at date lies some days in the past. Top performance so far is about 100 seconds on Heroku's Ronin hardware.
I'm looking for suggestions; I haven't yet managed to make this as efficient as possible.
TRY#1
EXPLAIN ANALYZE SELECT count(*) FROM objects
WHERE (date(updated_at)) < (date(now())-7) AND id >= 5000001 AND id < 6000001;
INDEX USED: (date(updated_at),id)
268578.934 ms
TRY#2
EXPLAIN ANALYZE SELECT count(*) FROM objects
WHERE ((date(now()) - (date(updated_at)) > 7)) AND id >= 5000001 AND id < 6000001;
INDEX USED: primary key
335555.144 ms
TRY#3
EXPLAIN ANALYZE SELECT count(*) FROM objects
WHERE (date(updated_at)) < (date(now())-7) AND id/1000000 = 5;
INDEX USED: (date(updated_at),(id/1000000))
243427.042 ms
TRY#4
EXPLAIN ANALYZE SELECT count(*) FROM objects
WHERE (date(updated_at)) < (date(now())-7) AND id/1000000 = 5 AND updated_at IS NOT NULL;
INDEX USED: (date(updated_at),(id/1000000)) WHERE updated_at IS NOT NULL
706714.812 ms
TRY#5 (for one month of stale data)
EXPLAIN ANALYZE SELECT count(*) FROM objects
WHERE (EXTRACT(MONTH from date(updated_at)) = 8) AND id/1000000 = 5;
INDEX USED: (EXTRACT(MONTH from date(updated_at)),(id/1000000))
107241.472 ms
TRY#6
EXPLAIN ANALYZE SELECT count(*) FROM objects
WHERE (date(updated_at)) < (date(now())-7) AND id/1000000 = 5;
INDEX USED: ( (id/1000000 ) ASC ,updated_at DESC NULLS LAST)
106842.395 ms
TRY#7 (see: http://explain.depesz.com/s/DQP)
EXPLAIN ANALYZE SELECT count(*) FROM objects
WHERE id/1000000 = 5 and (date(updated_at)) < (date(now())-7);
INDEX USED: ( (id/1000000 ) ASC ,date(updated_at) DESC NULLS LAST);
100732.049 ms
Second try: 87280.728 ms
TRY#8
EXPLAIN ANALYZE SELECT count(*) FROM objects
WHERE (date(updated_at)) < (date(now())-7) AND id/1000000 = 5 AND updated_at IS NOT NULL;
INDEX USED: ( (id/1000000 ) ASC ,date(updated_at) ASC NULLS LAST);
129133.022 ms
TRY#9 (partial index as per Erwin's suggestion, see:
http://explain.depesz.com/s/p9A)
EXPLAIN ANALYZE SELECT count(*) FROM objects
WHERE id BETWEEN 5000000 AND 5999999 AND (date(updated_at)) < '2012-10-23'::date;
INDEX USED: (date(updated_at) DESC NULLS LAST)
WHERE id BETWEEN 5000000 AND 6000000 AND date(updated_at) < '2012-10-23'::date;
73861.047 ms
TRY#10 (CLUSTER, as per Erwin's suggestion)
CREATE INDEX ix_8 on objects ( (id/1000000 ) ASC ,date(updated_at) DESC NULLS LAST);
CLUSTER objects USING ix_8;
EXPLAIN ANALYZE SELECT count(*) FROM objects
WHERE id/1000000 = 5 and (date(updated_at)) < (date(now())-7) ;
4745.595 ms
EXPLAIN ANALYZE SELECT count(*) FROM objects
WHERE id/1000000 = 10 and (date(updated_at)) < (date(now())-7) ;
17573.639 ms
==> This solution looks like the winner. I still have to test thoroughly for side effects throughout the application.
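One way to watch for the CLUSTER effect wearing off over time (a check of my own, using the standard pg_stats view, not something from the tries above) is the physical-vs-logical ordering statistic per column:

-- correlation near 1.0 (or -1.0) means the heap is still well ordered
-- for that column; it drifts toward 0 as later writes scatter rows
SELECT attname, correlation
FROM   pg_stats
WHERE  tablename = 'objects';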
Database settings:
select name, min_val, max_val, boot_val from pg_settings;
name | min_val | max_val | boot_val
--------------------------------+-----------+--------------+-------------------
allow_system_table_mods | | | off
application_name | | |
archive_command | | |
archive_mode | | | off
archive_timeout | 0 | 2147483647 | 0
array_nulls | | | on
authentication_timeout | 1 | 600 | 60
autovacuum | | | on
autovacuum_analyze_scale_factor | 0 | 100 | 0.1
autovacuum_analyze_threshold | 0 | 2147483647 | 50
autovacuum_freeze_max_age | 100000000 | 2000000000 | 200000000
autovacuum_max_workers | 1 | 536870911 | 3
autovacuum_naptime | 1 | 2147483 | 60
autovacuum_vacuum_cost_delay | -1 | 100 | 20
autovacuum_vacuum_cost_limit | -1 | 10000 | -1
autovacuum_vacuum_scale_factor | 0 | 100 | 0.2
autovacuum_vacuum_threshold | 0 | 2147483647 | 50
backslash_quote | | | safe_encoding
bgwriter_delay | 10 | 10000 | 200
bgwriter_lru_maxpages | 0 | 1000 | 100
bgwriter_lru_multiplier | 0 | 10 | 2
block_size | 8192 | 8192 | 8192
bonjour | | | off
bonjour_name | | |
bytea_output | | | hex
check_function_bodies | | | on
checkpoint_completion_target | 0 | 1 | 0.5
checkpoint_segments | 1 | 2147483647 | 3
checkpoint_timeout | 30 | 3600 | 300
checkpoint_warning | 0 | 2147483647 | 30
client_encoding | | | SQL_ASCII
client_min_messages | | | notice
commit_delay | 0 | 100000 | 0
commit_siblings | 1 | 1000 | 5
constraint_exclusion | | | partition
cpu_index_tuple_cost | 0 | 1.79769e+308 | 0.005
cpu_operator_cost | 0 | 1.79769e+308 | 0.0025
cpu_tuple_cost | 0 | 1.79769e+308 | 0.01
cursor_tuple_fraction | 0 | 1 | 0.1
custom_variable_classes | | |
DateStyle | | | ISO, MDY
db_user_namespace | | | off
deadlock_timeout | 1 | 2147483 | 1000
debug_assertions | | | off
debug_pretty_print | | | on
debug_print_parse | | | off
debug_print_plan | | | off
debug_print_rewritten | | | off
default_statistics_target | 1 | 10000 | 100
default_tablespace | | |
default_text_search_config | | | pg_catalog.simple
default_transaction_isolation | | | read committed
default_transaction_read_only | | | off
default_with_oids | | | off
effective_cache_size | 1 | 2147483647 | 16384
effective_io_concurrency | 0 | 1000 | 1
enable_bitmapscan | | | on
enable_hashagg | | | on
enable_hashjoin | | | on
enable_indexscan | | | on
enable_material | | | on
enable_mergejoin | | | on
enable_nestloop | | | on
enable_seqscan | | | on
enable_sort | | | on
enable_tidscan | | | on
escape_string_warning | | | on
extra_float_digits | -15 | 3 | 0
from_collapse_limit | 1 | 2147483647 | 8
fsync | | | on
full_page_writes | | | on
geqo | | | on
geqo_effort | 1 | 10 | 5
geqo_generations | 0 | 2147483647 | 0
geqo_pool_size | 0 | 2147483647 | 0
geqo_seed | 0 | 1 | 0
geqo_selection_bias | 1.5 | 2 | 2
geqo_threshold | 2 | 2147483647 | 12
gin_fuzzy_search_limit | 0 | 2147483647 | 0
hot_standby | | | off
ignore_system_indexes | | | off
integer_datetimes | | | on
IntervalStyle | | | postgres
join_collapse_limit | 1 | 2147483647 | 8
krb_caseins_users | | | off
krb_srvname | | | postgres
lc_collate | | | C
lc_ctype | | | C
lc_messages | | |
lc_monetary | | | C
lc_numeric | | | C
lc_time | | | C
listen_addresses | | | localhost
lo_compat_privileges | | | off
local_preload_libraries | | |
log_autovacuum_min_duration | -1 | 2147483 | -1
log_checkpoints | | | off
log_connections | | | off
log_destination | | | stderr
log_disconnections | | | off
log_duration | | | off
log_error_verbosity | | | default
log_executor_stats | | | off
log_hostname | | | off
log_line_prefix | | |
log_lock_waits | | | off
log_min_duration_statement | -1 | 2147483 | -1
log_min_error_statement | | | error
log_min_messages | | | warning
log_parser_stats | | | off
log_planner_stats | | | off
log_rotation_age | 0 | 35791394 | 1440
log_rotation_size | 0 | 2097151 | 10240
log_statement | | | none
log_statement_stats | | | off
log_temp_files | -1 | 2147483647 | -1
log_timezone | | | UNKNOWN
log_truncate_on_rotation | | | off
logging_collector | | | off
maintenance_work_mem | 1024 | 2097151 | 16384
max_connections | 1 | 536870911 | 100
max_files_per_process | 25 | 2147483647 | 1000
max_function_args | 100 | 100 | 100
max_identifier_length | 63 | 63 | 63
max_index_keys | 32 | 32 | 32
max_locks_per_transaction | 10 | 2147483647 | 64
max_prepared_transactions | 0 | 536870911 | 0
max_stack_depth | 100 | 2097151 | 100
max_standby_archive_delay | -1 | 2147483 | 30000
max_standby_streaming_delay | -1 | 2147483 | 30000
max_wal_senders | 0 | 536870911 | 0
password_encryption | | | on
port | 1 | 65535 | 5432
post_auth_delay | 0 | 2147483647 | 0
pre_auth_delay | 0 | 60 | 0
random_page_cost | 0 | 1.79769e+308 | 4
search_path | | | "$user",public
segment_size | 131072 | 131072 | 131072
seq_page_cost | 0 | 1.79769e+308 | 1
server_encoding | | | SQL_ASCII
server_version | | | 9.0.8
server_version_num | 90008 | 90008 | 90008
session_replication_role | | | origin
shared_buffers | 16 | 1073741823 | 1024
silent_mode | | | off
sql_inheritance | | | on
ssl | | | off
ssl_renegotiation_limit | 0 | 2097151 | 524288
standard_conforming_strings | | | off
statement_timeout | 0 | 2147483647 | 0
superuser_reserved_connections | 0 | 536870911 | 3
synchronize_seqscans | | | on
synchronous_commit | | | on
syslog_facility | | | local0
syslog_ident | | | postgres
tcp_keepalives_count | 0 | 2147483647 | 0
tcp_keepalives_idle | 0 | 2147483647 | 0
tcp_keepalives_interval | 0 | 2147483647 | 0
temp_buffers | 100 | 1073741823 | 1024
temp_tablespaces | | |
TimeZone | | | UNKNOWN
timezone_abbreviations | | | UNKNOWN
trace_notify | | | off
trace_recovery_messages | | | log
trace_sort | | | off
track_activities | | | on
track_activity_query_size | 100 | 102400 | 1024
track_counts | | | on
track_functions | | | none
transaction_isolation | | |
transaction_read_only | | | off
transform_null_equals | | | off
unix_socket_group | | |
unix_socket_permissions | 0 | 511 | 511
update_process_title | | | on
vacuum_cost_delay | 0 | 100 | 0
vacuum_cost_limit | 1 | 10000 | 200
vacuum_cost_page_dirty | 0 | 10000 | 20
vacuum_cost_page_hit | 0 | 10000 | 1
vacuum_cost_page_miss | 0 | 10000 | 10
vacuum_defer_cleanup_age | 0 | 1000000 | 0
vacuum_freeze_min_age | 0 | 1000000000 | 50000000
vacuum_freeze_table_age | 0 | 2000000000 | 150000000
wal_block_size | 8192 | 8192 | 8192
wal_buffers | 4 | 2147483647 | 8
wal_keep_segments | 0 | 2147483647 | 0
wal_level | | | minimal
wal_segment_size | 2048 | 2048 | 2048
wal_sender_delay | 1 | 10000 | 200
wal_sync_method | | | fdatasync
wal_writer_delay | 1 | 10000 | 200
work_mem | 64 | 2097151 | 1024
xmlbinary | | | base64
xmloption | | | content
zero_damaged_pages | | | off
(195 rows)
Answer #1
First of all, is this correct? You write:

I want to fetch all rows whose updated_at date lies some days in the past.

But your WHERE condition is:

(date(updated_at)) < (date(now())-7)

Wouldn't that have to be > ?

Index
For top performance you can ...

partition the index
exclude irrelevant rows from the index
recreate indexes automatically with an updated predicate during off hours
Your indexes should look something like this:
CREATE INDEX objects_id_updated_at_0_idx ON objects ((updated_at::date) DESC NULLS LAST)
WHERE id BETWEEN 0 AND 999999
AND updated_at > '2012-10-01 0:0'::timestamp;  -- some minimum date

CREATE INDEX objects_id_updated_at_1_idx ON objects ((updated_at::date) DESC NULLS LAST)
WHERE id BETWEEN 1000000 AND 1999999
AND updated_at > '2012-10-01 0:0'::timestamp;  -- some minimum date

...
The second condition excludes irrelevant rows from the index right away, which makes it smaller and faster, depending on your actual data distribution. As per my initial comment, I assume you actually want the newer rows.
The condition also automatically excludes NULL values in updated_at, which you seem to allow in your table and obviously want to exclude from your queries. The usefulness of the index deteriorates over time, since the queries always target the newest entries. Recreate the index with an updated WHERE clause at regular intervals. Plain index creation takes a lock on the table that blocks writes, so do it during off hours. There is also CREATE INDEX CONCURRENTLY, which avoids blocking writes:

CREATE INDEX CONCURRENTLY objects_id_up_201211_idx; -- create new idx
DROP INDEX objects_id_up_201210_idx; -- then drop old
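A fuller version of that rotation, sketched under the assumptions above (the ID slice and the cutoff date are illustrative; note that CREATE INDEX CONCURRENTLY cannot run inside a transaction block):

CREATE INDEX CONCURRENTLY objects_id_up_201211_idx
ON objects ((updated_at::date) DESC NULLS LAST)
WHERE id BETWEEN 5000000 AND 5999999
AND updated_at > '2012-11-01 0:0'::timestamp;  -- new minimum date

DROP INDEX objects_id_up_201210_idx;  -- then drop the old one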
Related answer on SO: wrong order
To optimize further, you can use CLUSTER, which was mentioned in the comments. But you need a full index for that; CLUSTER does not work with partial indexes. You would create it temporarily:

CREATE INDEX objects_full_idx ON objects ((id/1000000), (updated_at::date) DESC NULLS LAST);

This full index matches the sort order of the partial indexes above.

CLUSTER objects USING objects_full_idx;
ANALYZE objects;
This will take a while, because the table is physically rewritten; it is effectively a VACUUM FULL as well. It requires an exclusive lock on the table, so do it during off hours, if you can afford that. There is, again, a less intrusive alternative: pg_repack.

Afterwards you can drop the index again; it is a one-time effect. I would try it at least once to see how much your queries benefit from it. The effect deteriorates with subsequent write operations. If you see a substantial effect, you can repeat the procedure during off hours.
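If you keep the full index around (the paragraph above suggests dropping it, in which case recreate it first), subsequent runs are simpler, since Postgres remembers which index the table was last clustered on. A minimal sketch:

CLUSTER objects;   -- re-clusters on the index used last time
ANALYZE objects;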
If the table gets a lot of write operations, you have to weigh the costs and benefits of this step. For many UPDATEs, consider a FILLFACTOR below 100, and set it before you run CLUSTER (see the sketch after the next paragraph).

This related answer presents a more advanced technique of index partitioning: Can spatial index help a "range - order by - limit" query?
It provides example code for automatic index (re)creation.
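A minimal sketch of the FILLFACTOR setting mentioned above (the value 70 is an assumption; tune it to your write pattern):

-- leave ~30% of each heap page free so UPDATEs can stay on the same
-- page (HOT updates); only affects pages written afterwards, e.g. by CLUSTER
ALTER TABLE objects SET (fillfactor = 70);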
PostgreSQL 9.2+ brings several new features for you. Index-only scans alone would make the upgrade worthwhile.
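For instance, the count from TRY#9 could then be answered from the index alone, provided the visibility map is current (a sketch under that assumption):

VACUUM ANALYZE objects;  -- keeps the visibility map fresh for index-only scans

EXPLAIN ANALYZE SELECT count(*) FROM objects
WHERE id BETWEEN 5000000 AND 5999999 AND date(updated_at) < '2012-10-23'::date;
-- the plan may now show "Index Only Scan" instead of heap fetches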
Make sure autovacuum is running properly. The huge gain you report for CLUSTER may be owed in part to the implicit VACUUM FULL effect that comes with CLUSTER. Maybe Heroku sets this up automatically, I am not sure, and the settings in your question look good. So this is probably not the issue here, and CLUSTER really is that effective.

Declarative partitioning has finally matured in Postgres 12. I would consider using it now instead of (or at least in addition to) manual index partitioning: range partitioning with updated_at as the partition key. (That is on top of various general performance improvements, in particular for big data and for btree indexes.)
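A minimal sketch of what that could look like (Postgres 12+; the column list and monthly bounds are illustrative assumptions, not the actual table definition):

CREATE TABLE objects_part (
    id         bigint NOT NULL,
    updated_at timestamp
    -- ... remaining columns ...
) PARTITION BY RANGE (updated_at);

-- one partition per month; stale months can later be detached or dropped cheaply
CREATE TABLE objects_2012_10 PARTITION OF objects_part
    FOR VALUES FROM ('2012-10-01') TO ('2012-11-01');
CREATE TABLE objects_2012_11 PARTITION OF objects_part
    FOR VALUES FROM ('2012-11-01') TO ('2012-12-01');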
Comments

The table has about 20-30 columns: some foreign-key integers, strings, text, booleans. The index definitions are listed next to INDEX USED in the post above (I posted the 8 indexes the queries used). I also have a few more indexes to speed up updates and selects for my application. Finally, I am on a cloud database and cannot change any settings. What I would most like to know is whether my index definitions were good enough before these optimizations. I will update the post with more information, though.