Commit e7a6cec

Use run endpoint instead of schedule
1 parent 8e7d604 commit e7a6cec

File tree

4 files changed, +20 -20 lines changed

README.rst

Lines changed: 11 additions & 11 deletions
@@ -88,7 +88,7 @@ Jobs instance is described well in ``Jobs`` section below.

 For example, to schedule a spider run (it returns a job object)::

-    >>> project.jobs.schedule('spider1', job_args={'arg1':'val1'})
+    >>> project.jobs.run('spider1', job_args={'arg1':'val1'})
     <scrapinghub.client.Job at 0x106ee12e8>

 Project instance also has the following fields:
@@ -151,7 +151,7 @@ Like project instance, spider instance has ``jobs`` field to work with the spider's jobs.

 To schedule a spider run::

-    >>> spider.jobs.schedule(job_args={'arg1':'val1'})
+    >>> spider.jobs.run(job_args={'arg1':'val1'})
     <scrapinghub.client.Job at 0x106ee12e8>

 Note that you don't need to specify spider name explicitly.
@@ -174,30 +174,30 @@ Also there's a shortcut to get same job with client instance::

     >>> job = client.get_job('123/1/2')

-schedule
-^^^^^^^^
+run
+^^^

-Use ``schedule`` method to schedule a new job for project/spider::
+Use ``run`` method to run a new job for project/spider::

-    >>> job = spider.jobs.schedule()
+    >>> job = spider.jobs.run()

 Scheduling logic supports different options, like

 - job_args to provide spider arguments for the job
-- units to specify amount of units to schedule the job
+- units to specify the number of units to run the job with
 - job_settings to pass additional settings for the job
 - priority to set higher/lower priority of the job
 - add_tag to create a job with a set of initial tags
 - meta to pass additional custom metadata

-For example, to schedule a new job for a given spider with custom params::
+For example, to run a new job for a given spider with custom params::

-    >>> job = spider.jobs.schedule(units=2, job_settings={'SETTING': 'VALUE'},
+    >>> job = spider.jobs.run(units=2, job_settings={'SETTING': 'VALUE'},
     ...     priority=1, add_tag=['tagA','tagB'], meta={'custom-data': 'val1'})

-Note that if you schedule a job on project level, spider name is required::
+Note that if you run a job on project level, spider name is required::

-    >>> job = project.jobs.schedule('spider1')
+    >>> job = project.jobs.run('spider1')

 count
 ^^^^^
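
In short, every documented call site swaps ``schedule`` for ``run``. A minimal end-to-end sketch of the new spelling, assuming a ``client`` created as shown earlier in the README, with an illustrative project id and spider name::

    >>> project = client.get_project(123)  # illustrative project id
    >>> job = project.jobs.run('spider1', job_args={'arg1': 'val1'})
    >>> job.key  # illustrative output
    '123/1/2'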

scrapinghub/client/jobs.py

Lines changed: 6 additions & 7 deletions
@@ -179,9 +179,9 @@ def list(self, count=None, start=None, spider=None, state=None,
                           lacks_tag=lacks_tag, startts=startts, endts=endts)
         return list(self.iter(**params))

-    def schedule(self, spider=None, units=None, priority=None, meta=None,
-                 add_tag=None, job_args=None, job_settings=None, cmd_args=None,
-                 **params):
+    def run(self, spider=None, units=None, priority=None, meta=None,
+            add_tag=None, job_args=None, job_settings=None, cmd_args=None,
+            **params):
         """Schedule a new job and return its job key.

         :param spider: a spider name string
@@ -200,7 +200,7 @@ def schedule(self, spider=None, units=None, priority=None, meta=None,

         Usage::

-            >>> project.jobs.schedule('spider1', job_args={'arg1': 'val1'})
+            >>> project.jobs.run('spider1', job_args={'arg1': 'val1'})
             '123/1/1'
         """
         if not spider and not self.spider:
@@ -218,10 +218,9 @@ def schedule(self, spider=None, units=None, priority=None, meta=None,
         update_kwargs(params, units=units, priority=priority, add_tag=add_tag,
                       cmd_args=cmd_args, job_settings=job_settings, meta=meta)

-        # FIXME improve to schedule multiple jobs
+        # FIXME improve to run multiple jobs
         try:
-            response = self._client._connection._post(
-                'schedule', 'json', params)
+            response = self._client._connection._post('run', 'json', params)
         except BadRequest as exc:
             if 'already scheduled' in str(exc):
                 raise DuplicateJobError(exc)
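
Note the rename is a hard break: existing callers of ``schedule()`` will now raise ``AttributeError``. If a transition period were wanted, a thin shim along these lines (hypothetical, not part of this commit) could forward the old name::

    import warnings

    def schedule(self, *args, **kwargs):
        # Hypothetical compatibility shim, not in this commit: keep the old
        # method name working while steering callers towards run().
        warnings.warn("schedule() is deprecated, use run() instead",
                      DeprecationWarning, stacklevel=2)
        return self.run(*args, **kwargs)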

scrapinghub/client/projects.py

Lines changed: 1 addition & 1 deletion
@@ -71,7 +71,7 @@ def summary(self, state=None, **params):
         :param state: a string state or a list of states.
         :return: a list of dictionaries: each dictionary represents a project
             summary (amount of pending/running/finished jobs and a flag if it
-            has a capacity to schedule new jobs).
+            has a capacity to run new jobs).
         :rtype: list[dict]

         Usage::
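
Only the docstring wording changes here. For context, a ``summary()`` call returns one dictionary per project, roughly of this shape (a sketch; the exact key names come from the jobq summary endpoint and may differ)::

    >>> client.projects.summary()
    [{'project': 123, 'pending': 0, 'running': 1,
      'finished': 42, 'has_capacity': True}]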

scrapinghub/legacy.py

Lines changed: 2 additions & 1 deletion
@@ -45,7 +45,8 @@ class Connection(object):
         'eggs_list': 'eggs/list',
         'as_project_slybot': 'as/project-slybot',
         'as_spider_properties': 'as/spider-properties',
-        'schedule': 'schedule',
+        'run': 'run',
+        'schedule': 'schedule',  # deprecated in favour of run
         'items': 'items',
         'log': 'log',
         'spiders': 'spiders/list',
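
The legacy ``Connection`` resolves logical method names to endpoint paths through this mapping, so ``'run'`` and the deprecated ``'schedule'`` can coexist. A standalone sketch of the lookup pattern (simplified; the URL shape is illustrative, not the library's exact builder)::

    API_METHODS = {
        'run': 'run',
        'schedule': 'schedule',  # deprecated in favour of run
    }

    def build_url(base_url, method):
        # Resolve a logical API method name to its endpoint URL (illustrative).
        return '%s/%s.json' % (base_url.rstrip('/'), API_METHODS[method])

    print(build_url('https://app.scrapinghub.com/api', 'run'))
    # -> https://app.scrapinghub.com/api/run.json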
