Using Redis

Redis support requires additional dependencies. You can install both Celery and these dependencies in one go with pip install -U "celery[redis]".

Configuring the connection

A direct connection:

app.conf.broker_url = 'redis://localhost:6379/0'
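For reference, a minimal sketch of an app wired to this broker URL; the app name, module, and the add task are placeholders, not from the original text:

from celery import Celery

# Point the broker at a local Redis instance, database 0.
app = Celery('tasks', broker='redis://localhost:6379/0')

@app.task
def add(x, y):
    return x + y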

The URL format is:

redis://:password@hostname:port/db_number

All fields after the scheme are optional; by default it connects to localhost on port 6379 and uses database 0.
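As a sketch, a URL with every field filled in might look like the following; the password, hostname, port, and database number are made-up values:

app.conf.broker_url = 'redis://:s3cret@redis.example.com:6380/1'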

Using a Unix socket

The format:

redis+socket:///path/to/redis.sock

The database number to use can also be specified via the virtual_host parameter:

redis+socket:///path/to/redis.sock?virtual_host=db_number
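For example, assuming the socket file lives at /var/run/redis/redis.sock and database 1 should be used (both values are hypothetical):

app.conf.broker_url = 'redis+socket:///var/run/redis/redis.sock?virtual_host=1'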

Connecting to Sentinel

The format:

app.conf.broker_url = 'sentinel://localhost:26379;sentinel://localhost:26380;sentinel://localhost:26381'
app.conf.broker_transport_options = { 'master_name': "cluster1" }

Additional options can be passed to Sentinel via sentinel_kwargs:

app.conf.broker_transport_options = { 'sentinel_kwargs': { 'password': "password" } }
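Putting the pieces together, a full Sentinel broker configuration might look like the sketch below; the hostnames, master name, and password are placeholders:

app.conf.broker_url = (
    'sentinel://sentinel1.example.com:26379;'
    'sentinel://sentinel2.example.com:26379;'
    'sentinel://sentinel3.example.com:26379'
)
app.conf.broker_transport_options = {
    'master_name': 'mymaster',                        # name of the monitored master
    'sentinel_kwargs': {'password': 'sentinel-pass'}, # credentials for the Sentinel nodes
}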

Visibility Timeout

The visibility timeout defines the number of seconds to wait for the worker to acknowledge the task before the message is redelivered to another worker. In other words, if the worker has not acknowledged the task (acknowledgement here meaning that the task has been received by a worker) within this period, the task will be redelivered to another worker. Be sure to read the Caveats section below.

This option is set via broker_transport_options:

app.conf.broker_transport_options = {'visibility_timeout': 3600}  # 1 hour.

For Redis, the default value of this option is one hour, i.e. 3600 seconds.

Storing results

To store task results in Redis, configure it as follows:

app.conf.result_backend = 'redis://localhost:6379/0'

For the complete list of settings for the Redis result backend, see: Redis backend settings.
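With the result backend configured, results can be fetched through the usual Celery API; a minimal sketch, where add is a placeholder task:

result = add.delay(4, 4)   # returns an AsyncResult
result.get(timeout=10)     # blocks until the result is stored in Redis, then returns 8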

When using Sentinel, the master_name must be specified via the result_backend_transport_options setting:

app.conf.result_backend_transport_options = {'master_name': "mymaster"}

The official documentation does not provide an example result backend configuration for Sentinel.
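As a sketch only, and assuming result_backend accepts the same sentinel:// scheme as broker_url (this is an assumption, not taken from the official docs), a Sentinel result backend might be configured like this:

app.conf.result_backend = 'sentinel://localhost:26379/0;sentinel://localhost:26380/0'
app.conf.result_backend_transport_options = {'master_name': 'mymaster'}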

Connection timeouts

To configure connection timeouts for the Redis result backend, use the retry_policy key of the result_backend_transport_options setting:

app.conf.result_backend_transport_options = {
    'retry_policy': {
       'timeout': 5.0
    }
}

See retry_over_time() for the possible retry policy options.
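Assuming the retry_policy keys map onto the keyword arguments of retry_over_time() (an assumption based on the reference above), a more complete policy might look like this; the numbers are illustrative only:

app.conf.result_backend_transport_options = {
    'retry_policy': {
        'timeout': 5.0,        # overall connection timeout in seconds
        'max_retries': 3,      # give up after three attempts
        'interval_start': 0,   # retry immediately the first time
        'interval_step': 0.5,  # wait 0.5s longer after each attempt
        'interval_max': 2.0,   # never wait more than 2s between attempts
    }
}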

Caveats

Visibility timeout

If a task is not acknowledged within this time, it will be redelivered to another worker and executed.

This is a problem for ETA/countdown/retry tasks whose delay exceeds the visibility timeout; in fact, if that happens, the task will be executed over and over again. You therefore need to increase the visibility timeout to match the longest such delay you plan to use.

Note that Celery redelivers messages at worker shutdown, so a long visibility timeout will only delay the redelivery of tasks lost due to a power failure or forcefully terminated workers.

Periodic tasks are not affected by the visibility timeout, as this is a concept separate from ETA/countdown. You can increase the timeout by configuring the transport option of the same name:

app.conf.broker_transport_options = {'visibility_timeout': 43200}

The value must be an int describing the number of seconds.
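For instance, with the 12-hour timeout above, a countdown task scheduled 6 hours ahead stays safely below the limit; a sketch, where add is a placeholder task:

app.conf.broker_transport_options = {'visibility_timeout': 43200}  # 12 hours

# Safe: the 6-hour countdown is well below the 12-hour visibility timeout.
add.apply_async((2, 2), countdown=6 * 60 * 60)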

Key eviction

Redis will evict keys from the database in some situations. If this happens, you may see an error like the following:

InconsistencyError: Probably the key ('_kombu.binding.celery') has been removed from the Redis database.

If so, you may want to configure the redis-server not to evict keys, by setting the following in the Redis configuration file:

  • the maxmemory option
  • the maxmemory-policy option to noeviction or allkeys-lru

See Redis server documentation about Eviction Policies for details:

https://redis.io/topics/lru-cache

This problem is caused by Redis's key eviction policy and should not come up under normal circumstances.

Group result order

Versions of Celery up to and including 4.4.6 used an unsorted list to store result objects for groups in the Redis backend. This can cause those results to be returned in a different order to their associated tasks in the original group instantiation. Celery 4.4.7 introduced an opt-in behaviour which fixes this issue and ensures that group results are returned in the same order the tasks were defined, matching the behaviour of other backends. In Celery 5.0 this behaviour was changed to be opt-out. The behaviour is controlled by the result_chord_ordered configuration option, which may be set like so:

# Specifying this for workers running Celery 4.4.6 or earlier has no effect
app.conf.result_backend_transport_options = {
    'result_chord_ordered': True    # or False
}
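To illustrate what group result order means here, a sketch of a group whose results should come back in the same order the tasks were defined; group is the standard Celery primitive, add a placeholder task:

from celery import group

job = group(add.s(i, i) for i in range(10))
result = job.apply_async()
result.get(timeout=10)   # e.g. [0, 2, 4, ..., 18], in task-definition order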

This is an incompatible change in the runtime behaviour of workers sharing the same Redis backend for result storage, so all workers must follow either the new or the old behaviour to avoid breakage. For clusters with some workers running Celery 4.4.6 or earlier, this means that workers running 4.4.7 need no special configuration and workers running 5.0 or later must have result_chord_ordered set to False. For clusters with no workers running 4.4.6 or earlier but some workers running 4.4.7, it is recommended that result_chord_ordered be set to True for all workers to ease future migration. Migration between behaviours will disrupt results currently held in the Redis backend and cause breakage if downstream tasks are run by migrated workers, so plan accordingly. In short, the 5.0 change is incompatible with earlier versions; read the above carefully before upgrading.