Two celery workers got the lock at the same time #93

@fz-gaojian

Description

I have deployed Celery on two machines running the same code, with the same Redis instance as the broker. I only want the celery beat task to execute once per schedule, so I used python-redis-lock.

But two celery workers got the lock at the same time.

I configured the task to execute every 10 minutes; the task code is roughly like this:

from redis import Redis
import redis_lock

conn = Redis()
# expire is set just under the 10-minute schedule interval, so the lock
# auto-releases shortly before the next run is due
lock = redis_lock.Lock(conn, "test-lock", expire=60 * 10 - 1)
if lock.acquire(blocking=False):
    print("Doing some work ...")

Looking at the logs, most runs show the expected result, but I noticed one run that did not meet expectations.
One machine's logs:

[2022-04-08 18:10:00,463: INFO/MainProcess] Received task: test_task[xxxxxxxxxxxxxxxxx1]
[2022-04-08 18:10:00,467: INFO/ForkPoolWorker-3] Got lock for 'lock:test-lock'.
[2022-04-08 18:10:00,493: INFO/ForkPoolWorker-3] Task test_task[xxxxxxxxxxxxxxxxx1] succeeded in 0.027071014046669006s: xxxxxxxxxxx

And the other machine's logs:

[2022-04-08 18:10:00,323: INFO/MainProcess] Received task: test_task[xxxxxxxxxxxxxxxxx2]
[2022-04-08 18:10:00,326: INFO/ForkPoolWorker-3] Got lock for 'lock:test-lock'.
[2022-04-08 18:10:00,546: INFO/ForkPoolWorker-3] Task test_task[xxxxxxxxxxxxxxxxx2] succeeded in 0.22096195071935654s: xxxxxxxxxxx

If you can tell me the reason, I would be grateful.
