docs/cloud-integration.md (2 changes: 1 addition & 1 deletion)
@@ -329,7 +329,7 @@ It is not available on Hadoop 3.3.4 or earlier.

IBM provide the Stocator output committer for IBM Cloud Object Storage and OpenStack Swift.

- Source, documentation and releasea can be found at
+ Source, documentation and release can be found at
[Stocator - Storage Connector for Apache Spark](https://github.com/CODAIT/stocator).


docs/streaming/apis-on-dataframes-and-datasets.md (2 changes: 1 addition & 1 deletion)
@@ -114,7 +114,7 @@ Here are the details of all the sources in Spark.
<td><b>Rate Source</b></td>
<td>
<code>rowsPerSecond</code> (e.g. 100, default: 1): How many rows should be generated per second.<br/><br/>
- <code>rampUpTime</code> (e.g. 5s, default: 0s): How long to ramp up before the generating speed becomes <code>rowsPerSecond</code>. Using finer granularities than seconds will be truncated to integer seconds. <br/><br/>
+ <code>rampUpTime</code> (e.g. 5s, default: 0s): How long to ramp up before the generating speed becomes <code>rowsPerSecond</code>. Using finer granularity than seconds will be truncated to integer seconds. <br/><br/>
<code>numPartitions</code> (e.g. 10, default: Spark's default parallelism): The partition number for the generated rows. <br/><br/>

The source will try its best to reach <code>rowsPerSecond</code>, but the query may be resource constrained, and <code>numPartitions</code> can be tweaked to help reach the desired speed.
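
For reference, a minimal sketch of reading from the rate source with the options described in this hunk. The app name and the console sink are illustrative assumptions, not taken from the Spark docs being edited here.

```scala
import org.apache.spark.sql.SparkSession

// Illustrative only: the app name and console sink are assumptions for this sketch.
val spark = SparkSession.builder()
  .appName("RateSourceSketch")
  .getOrCreate()

// rowsPerSecond: target generation rate; rampUpTime: warm-up period before the
// full rate is reached (sub-second values are truncated to whole seconds);
// numPartitions: how many partitions the generated rows are spread across.
val rateDF = spark.readStream
  .format("rate")
  .option("rowsPerSecond", "100")
  .option("rampUpTime", "5s")
  .option("numPartitions", "10")
  .load()

// The rate source emits a streaming DataFrame with `timestamp` and `value` columns.
val query = rateDF.writeStream
  .format("console")
  .start()

query.awaitTermination()
```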