Google BigQuery (Experimental)
------------------------------

The :mod:`pandas.io.gbq` module provides a wrapper for Google's BigQuery
analytics web service to simplify retrieving results from BigQuery tables
using SQL-like queries. Result sets are parsed into a pandas
DataFrame with a shape and data types derived from the source table.
Additionally, DataFrames can be appended to existing BigQuery tables if
the destination table is the same shape as the DataFrame.

For specifics on the service itself, see `here <https://developers.google.com/bigquery/>`__.

As an example, suppose you want to load all data from an existing BigQuery
table ``test_dataset.test_table`` into a DataFrame using the
:func:`~pandas.io.gbq.read_gbq` function.

.. code-block:: python

    # Insert your BigQuery Project ID Here
    # Can be found in the Google web console
    projectid = "xxxxxxxx"

    data_frame = pd.read_gbq('SELECT * FROM test_dataset.test_table', project_id=projectid)

You will then be authenticated to the specified BigQuery account
via Google's OAuth2 mechanism. In general, this is as simple as following
the prompts in a browser window, which will be opened for you. Should the
browser not be available, or fail to launch, a code will be provided to
complete the process manually. Additional information on the authentication
mechanism can be found `here <https://developers.google.com/accounts/docs/OAuth2#clientside/>`__.

You can define which column from BigQuery to use as an index in the
destination DataFrame, as well as a preferred column order, as follows:

.. code-block:: python

    data_frame = pd.read_gbq('SELECT * FROM test_dataset.test_table',
                             index_col='index_column_name',
                             col_order=['col1', 'col2', 'col3'], project_id=projectid)

Finally, you can append data to a BigQuery table from a pandas DataFrame
using the :func:`~pandas.io.gbq.to_gbq` function. This function uses the
Google streaming API, which requires that your destination table already
exists in BigQuery. Given that the table exists, your DataFrame should
match the destination table in column order, structure, and data types.
DataFrame indexes are not supported. By default, rows are streamed to
BigQuery in chunks of 10,000 rows, but you can pass other chunk values
via the ``chunksize`` argument. You can also see the progress of your
post via the ``verbose`` flag, which defaults to ``True``. The HTTP
response code from Google BigQuery can indicate success (200) even if the
append failed. For this reason, if there is a failure to append to the
table, the complete error response from BigQuery is returned, which can
be quite long given that it provides a status for each row. You may want
to start with smaller chunks to test that the size and types of your
DataFrame match your destination table, which will make debugging simpler.

.. code-block:: python

    df = pd.DataFrame({'string_col_name': ['hello'],
                       'integer_col_name': [1],
                       'boolean_col_name': [True]})
    df.to_gbq('my_dataset.my_table', project_id=projectid)

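To make the ``chunksize`` and ``verbose`` options described above concrete,
here is a minimal sketch, assuming the ``my_dataset.my_table`` destination
table from the previous example already exists:

.. code-block:: python

    # Stream the DataFrame in smaller chunks of 500 rows per request;
    # verbose=True (the default) prints progress as each chunk is posted.
    df.to_gbq('my_dataset.my_table', project_id=projectid,
              chunksize=500, verbose=True)
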
The BigQuery SQL query language has some oddities; see `here <https://developers.google.com/bigquery/query-reference>`__.

While BigQuery uses SQL-like syntax, it has some important differences
from traditional databases in functionality, in API limitations (size and
quantity of queries or uploads), and in how Google charges for use of the
service. You should refer to the Google documentation often, as the
service is changing and evolving. BigQuery is best for analyzing large
sets of data quickly, but it is not a direct replacement for a
transactional database.

You can access the management console to determine project IDs at
<https://code.google.com/apis/console/b/0/?noredirect>.

.. warning::

    To use this module, you will need a valid BigQuery account. See
    <https://cloud.google.com/products/big-query> for details on the
    service.

.. _io.stata:
