The Hippo standalone version supports batch export of table data to CSV files.
curl -u shiva:shiva -XPOST 'localhost:8902/hippo/v1/_standalone_export_data?pretty' -H 'Content-Type: application/json' -d'{
  "database_name": "default",
  "table_name": "book",
  "file": "/tmp/book-1.csv",
  "csv_options": {
    "with_header": true
  },
  "output_fields": ["book_id", "book_intro"],
  "expr": "word_count >= 11000",
  "limit": 1000,
  "wait_for_completion": true
}'
Result:
{
  "job_type" : "export_data",
  "table" : "default#book",
  "file_path" : "/tmp/book-1.csv",
  "expr" : "word_count >= 11000",
  "limit" : 1000,
  "output_fields" : [
    "book_id",
    "book_intro"
  ],
  "count" : 90
}
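For scripted use, the same request can be issued from Python. The sketch below (helper names are hypothetical; the endpoint, field names, and defaults come from the example above) separates payload construction from the HTTP call so the body can be inspected or logged before sending:

```python
# Default standalone endpoint from the curl example above.
HIPPO_URL = "http://localhost:8902/hippo/v1/_standalone_export_data"

def build_export_payload(table_name, file_path, output_fields,
                         database_name="default", expr=None, limit=None,
                         with_header=True, wait_for_completion=True):
    """Assemble the JSON body for _standalone_export_data."""
    payload = {
        "database_name": database_name,
        "table_name": table_name,
        "file": file_path,
        "csv_options": {"with_header": with_header},
        "output_fields": output_fields,
        "wait_for_completion": wait_for_completion,
    }
    # expr and limit are optional filters; omit them when unset.
    if expr is not None:
        payload["expr"] = expr
    if limit is not None:
        payload["limit"] = limit
    return payload

def export_table(session, **kwargs):
    """POST the export request. `session` is a requests.Session
    pre-configured with auth=("shiva", "shiva")."""
    return session.post(HIPPO_URL, json=build_export_payload(**kwargs))
```

A call equivalent to the curl example would be `export_table(session, table_name="book", file_path="/tmp/book-1.csv", output_fields=["book_id", "book_intro"], expr="word_count >= 11000", limit=1000)`.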
Parameter description:
Parameter | Description | Required |
---|---|---|
database_name | Database name | No; defaults to the "default" database |
table_name | Table name | Yes |
file | Local file path of the exported CSV file | Yes |
csv_options | CSV-related options | No |
array_separator (csv_options) | Separator between array elements | No; defaults to "," |
separator (csv_options) | Field separator of the CSV file | No; defaults to ";" |
with_header (csv_options) | Whether the output CSV file contains a header row | No; defaults to false |
output_fields | Fields to include in the exported file | Yes |
expr | Filter condition for the exported data | No; defaults to empty (no filter) |
limit | Maximum number of rows to export | No; defaults to all matching data |
wait_for_completion | Whether to wait until the job is done | No; defaults to true; if set to false, a job ID is returned instead |
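When consuming the exported file, the `separator` and `array_separator` values must match the `csv_options` used at export time. A minimal stdlib sketch (function name and the inline sample are illustrative; defaults follow the parameter table above) that parses a header-bearing export and splits array-valued columns:

```python
import csv
import io

def read_exported_csv(text, separator=";", array_separator=",",
                      array_fields=()):
    """Parse a Hippo CSV export written with with_header=true.
    Defaults mirror the parameter table; pass the values actually
    used in csv_options if they were overridden."""
    reader = csv.DictReader(io.StringIO(text), delimiter=separator)
    rows = []
    for row in reader:
        # Array-valued fields arrive as one string; split them back out.
        for field in array_fields:
            row[field] = row[field].split(array_separator)
        rows.append(row)
    return rows

# Tiny inline sample standing in for the contents of /tmp/book-1.csv.
sample = "book_id;book_intro\n1;0.1,0.2\n2;0.3,0.4\n"
rows = read_exported_csv(sample, array_fields=["book_intro"])
```

In practice the `text` argument would come from `open("/tmp/book-1.csv").read()` on the host where the export ran.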