
Commit 229e0f0

Update schedule with run in README, fix title
Parent: 30d46ba

2 files changed: +18 -14 lines

README.rst

Lines changed: 6 additions & 2 deletions
@@ -1,5 +1,9 @@
-Scrapinghub command line client
-===============================
+====================================
+Client interface for Scrapinghub API
+====================================
+
+.. image:: https://secure.travis-ci.org/scrapinghub/python-scrapinghub.png?branch=master
+   :target: http://travis-ci.org/scrapinghub/python-scrapinghub
 
 The ``scrapinghub`` is a Python library for communicating with the `Scrapinghub API`_.
 
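For reference, a minimal sketch of how the library described in this README is typically set up; the ``ScrapinghubClient`` import path, the API-key argument, and ``get_project`` are assumptions based on the new client interface documented below, and are not part of this diff::

    >>> from scrapinghub import ScrapinghubClient   # assumed import path for the new client
    >>> client = ScrapinghubClient('APIKEY')        # assumed: authenticate with your Scrapinghub API key
    >>> project = client.get_project(123)           # assumed helper returning a project object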

docs/client/overview.rst

Lines changed: 12 additions & 12 deletions
@@ -55,7 +55,7 @@ Jobs instance is described well in ``Jobs`` section below.
 
 For example, to schedule a spider run (it returns a job object)::
 
-    >>> project.jobs.schedule('spider1', job_args={'arg1':'val1'})
+    >>> project.jobs.run('spider1', job_args={'arg1':'val1'})
     <scrapinghub.client.Job at 0x106ee12e8>>
 
 Project instance also has the following fields:
@@ -118,7 +118,7 @@ Like project instance, spider instance has ``jobs`` field to work with the spide
 
 To schedule a spider run::
 
-    >>> spider.jobs.schedule(job_args={'arg1': 'val1'})
+    >>> spider.jobs.run(job_args={'arg1': 'val1'})
     <scrapinghub.client.Job at 0x106ee12e8>>
 
 Note that you don't need to specify spider name explicitly.
@@ -141,30 +141,30 @@ Also there's a shortcut to get same job with client instance::
 
     >>> job = client.get_job('123/1/2')
 
-schedule
-^^^^^^^^
+run
+^^^
 
-Use ``schedule`` method to schedule a new job for project/spider::
+Use ``run`` method to run a new job for project/spider::
 
-    >>> job = spider.jobs.schedule()
+    >>> job = spider.jobs.run()
 
 Scheduling logic supports different options, like
 
-- spider_args to provide spider arguments for the job
-- units to specify amount of units to schedule the job
+- job_args to provide spider arguments for the job
+- units to specify amount of units to run the job
 - job_settings to pass additional settings for the job
 - priority to set higher/lower priority of the job
 - add_tag to create a job with a set of initial tags
 - meta to pass additional custom metadata
 
-For example, to schedule a new job for a given spider with custom params::
+For example, to run a new job for a given spider with custom params::
 
-    >>> job = spider.jobs.schedule(units=2, job_settings={'SETTING': 'VALUE'},
+    >>> job = spider.jobs.run(units=2, job_settings={'SETTING': 'VALUE'},
         priority=1, add_tag=['tagA','tagB'], meta={'custom-data': 'val1'})
 
-Note that if you schedule a job on project level, spider name is required::
+Note that if you run a job on project level, spider name is required::
 
-    >>> job = project.jobs.schedule('spider1')
+    >>> job = project.jobs.run('spider1')
 
 count
 ^^^^^
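
Taken together, the renamed interface from the updated overview reads like this; a minimal sketch assuming ``project`` and ``spider`` objects obtained as described in the overview docs, with placeholder names and values (``job.key`` is an assumed attribute, not shown in this diff)::

    >>> # project-level run: the spider name must be passed explicitly
    >>> job = project.jobs.run('spider1', job_args={'arg1': 'val1'})
    >>> # spider-level run: the spider name is implicit and extra options are accepted
    >>> job = spider.jobs.run(units=2, job_settings={'SETTING': 'VALUE'},
                              priority=1, add_tag=['tagA', 'tagB'],
                              meta={'custom-data': 'val1'})
    >>> job.key   # assumed attribute with the job id, e.g. '123/1/2', usable with client.get_job()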
