
Commit cfc6c7a

docs: migrate Dgraph content (#98)
* initial port all files unlisted at this point * . * Update index.mdx * update paths * . * . * . * Update trunk.yaml * . * Update overview.mdx * . * . * . * Update schema.mdx * . * Update tips.mdx * . * . * Update javascript.mdx * . * . * Update indexes.mdx * Update docs.json * Update docs.json * . * . * . * . * . * . * . * . * . * . * . * . * . * Update introduction.mdx * Update http.mdx * . * . * . * . * . * Update types-and-operations.mdx * . * . * . * . * . * . * reduce Dgraph Cloud references * . * . * meta titles * guides * . * update why dgraph * remove Dgraph Cloud * Update overview.mdx * Update trunk.yaml * Update provision-backend.mdx --------- Co-authored-by: William Lyon <[email protected]>
1 parent 6bcd40c commit cfc6c7a

File tree: 460 files changed, +42048 −141 lines


.trunk/configs/.vale.ini

Lines changed: 1 addition & 0 deletions
@@ -14,4 +14,5 @@ BasedOnStyles = Vale, Google
 Google.Exclamation = OFF
 Google.Parens = OFF
 Google.We = OFF
+Google.Passive = OFF
 CommentDelimiters = {/*, */}

.trunk/trunk.yaml

Lines changed: 7 additions & 7 deletions
@@ -2,7 +2,7 @@
 version: 0.1

 cli:
-  version: 1.22.10
+  version: 1.22.11

 plugins:
   sources:
@@ -18,11 +18,11 @@ runtimes:

 lint:
   enabled:
-    - renovate@39.192.0
-
-    - vale@3.9.6
+    - renovate@39.210.1
+
+    - vale@3.10.0

-
+
     - git-diff-check

@@ -31,8 +31,8 @@ lint:
     - "@mintlify/[email protected]"


-
-    - yamllint@1.35.1
+
+    - yamllint@1.36.2
   ignore:
     - linters: [ALL]
       paths:

README.md

Lines changed: 12 additions & 12 deletions
@@ -38,8 +38,8 @@ The design and hosting of our docs site is provided by
 [Mintlify](https://mintlify.com/). The vast majority of configuration is in code
 in `mint.json`.

-Changes will be deployed to [production](https://docs.hypermode.com)
-automatically after pushing to the `main` branch.
+Changes are deployed to [production](https://docs.hypermode.com) automatically
+after pushing to the `main` branch.

 ### Development Environment Setup

@@ -49,15 +49,15 @@ The following components are useful when developing locally:

 See live changes as you write and edit.

-```bash
+```sh
 npm i -g mintlify
 ```

 #### Trunk CLI

 Format and lint changes for easy merging.

-```bash
+```sh
 npm i -g @trunkio/launcher
 ```

@@ -70,7 +70,7 @@ to make it easier to build easy-to-consume documentation.
 To spin up a local server, run the following command at the root of the docs
 repo:

-```bash
+```sh
 mintlify dev
 ```

@@ -102,14 +102,14 @@ types. It is implemented within CI/CD, but also executable locally.
 Formatting should run automatically on save. To trigger a manual formatting of
 the repo, run:

-```bash
+```sh
 trunk fmt
 ```

 To run lint checks, run:

-```bash
-trunk check # appending --all will run checks beyond changes on the current branch
+```sh
+trunk check # appending --all runs checks beyond changes on the current branch
 ```

 Note that Trunk also has a
@@ -118,7 +118,7 @@ you can install.

 However, when installing it please be aware of the `trunk.autoInit` setting,
 which is `true` (enabled) by default This controls whether to auto-initialize
-trunk in non-trunk repositories - meaning _any_ folder you open with VS Code
-will get configured with a `.trunk` subfolder, and will start using Trunk. You
-should probably set this to `false` in your VS Code user settings, to not
-interfere with other projects you may be working on.
+trunk in non-trunk repositories - meaning _any_ folder you open with VS Code is
+configured with a `.trunk` subfolder, and starts using Trunk. You should
+probably set this to `false` in your VS Code user settings, to not interfere
+with other projects you may be working on.

badger/overview.mdx

Lines changed: 3 additions & 3 deletions
@@ -5,11 +5,11 @@ mode: "wide"
 "og:title": "Overview - Badger"
 ---

-## What is Badger? {/* <!-- vale Google.Contractions = NO --> */}
+## What is Badger? {/* vale Google.Contractions = NO */}

 BadgerDB is an embeddable, persistent, and fast key-value (KV) database written
-in pure Go. It's the underlying database for [Dgraph](https://dgraph.io), a
-fast, distributed graph database. It's meant to be an efficient alternative to
+in pure Go. It is the underlying database for [Dgraph](/dgraph), a fast,
+distributed graph database. It is meant to be an efficient alternative to
 non-Go-based key-value stores like RocksDB.

 ## Changelog

badger/quickstart.mdx

Lines changed: 38 additions & 37 deletions
@@ -49,7 +49,7 @@ import (

 func main() {
   // Open the Badger database located in the /tmp/badger directory.
-  // It will be created if it doesn't exist.
+  // It is created if it doesn't exist.
   db, err := badger.Open(badger.DefaultOptions("/tmp/badger"))
   if err != nil {
     log.Fatal(err)
@@ -66,7 +66,7 @@ func main() {
 By default, Badger ensures all data persists to disk. It also supports a pure
 in-memory mode. When Badger is running in this mode, all data remains in memory
 only. Reads and writes are much faster, but Badger loses all stored data in the
-case of a crash or close. To open badger in in-memory mode, set the `InMemory`
+case of a crash or close. To open Badger in in-memory mode, set the `InMemory`
 option.

 ```go
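
The in-memory example itself is collapsed in this diff. A minimal sketch of opening Badger with the `InMemory` option (assuming the v4 module path; the `WithInMemory` builder exists from v2 onward):

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	// With InMemory set, nothing touches disk, so no directory is needed.
	opt := badger.DefaultOptions("").WithInMemory(true)
	db, err := badger.Open(opt)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close() // all data is lost once the DB is closed
}
```
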
@@ -185,8 +185,8 @@ The first argument to `DB.NewTransaction()` is a boolean stating if the
 transaction should be writable.

 Badger allows an optional callback to the `Txn.Commit()` method. Normally, the
-callback can be set to `nil`, and the method will return after all the writes
-have succeeded. However, if this callback is provided, the `Txn.Commit()` method
+callback can be set to `nil`, and the method returns after all the writes have
+succeeded. However, if this callback is provided, the `Txn.Commit()` method
 returns as soon as it has checked for any conflicts. The actual writing to the
 disk happens asynchronously, and the callback is invoked once the writing has
 finished, or an error has occurred. This can improve the throughput of the app
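
A sketch of the callback-style commit described here; in Badger v2 and later the callback variant is the separate `CommitWith` method (`db` is assumed to be an open `*badger.DB`, with `log` imported):

```go
txn := db.NewTransaction(true) // true: writable transaction
defer txn.Discard()            // no-op if the commit succeeds

if err := txn.Set([]byte("answer"), []byte("42")); err != nil {
	log.Fatal(err)
}

// CommitWith returns once conflict checking is done; the callback runs
// later, after the write hits disk or fails.
txn.CommitWith(func(err error) {
	if err != nil {
		log.Printf("async commit failed: %v", err)
	}
})
```
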
@@ -288,8 +288,8 @@ for {

 Badger provides support for ordered merge operations. You can define a func of
 type `MergeFunc` which takes in an existing value, and a value to be _merged_
-with it. It returns a new value which is the result of the _merge_ operation.
-All values are specified in byte arrays. For e.g., here is a merge function
+with it. It returns a new value which is the result of the merge operation. All
+values are specified in byte arrays. For example, this is a merge function
 (`add`) which appends a `[]byte` value to an existing `[]byte` value.

 ```go
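
The body of the `add` function is collapsed in this diff. A sketch of such a `MergeFunc` wired through `DB.GetMergeOperator` (the key name and flush interval are illustrative; `db`, `log`, and `time` are assumed in scope):

```go
// MergeFunc signature: func(existingVal, newVal []byte) []byte
add := func(existing, newVal []byte) []byte {
	return append(existing, newVal...)
}

// The operator compacts pending merges in the background every 200ms.
m := db.GetMergeOperator([]byte("merge-key"), add, 200*time.Millisecond)
defer m.Stop() // stop the background goroutine when done

m.Add([]byte("A"))
m.Add([]byte("B"))
m.Add([]byte("C"))

res, err := m.Get() // res is []byte("ABC") once all merges are applied
if err != nil {
	log.Fatal(err)
}
```
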
@@ -354,7 +354,7 @@ m.Add(uint64ToBytes(3))
 res, _ := m.Get() // res should have value 6 encoded
 ```

-## Setting time to live (TTL) and user metadata on keys
+## Setting time to live and user metadata on keys

 Badger allows setting an optional Time to Live (TTL) value on keys. Once the TTL
 has elapsed, the key is no longer retrievable and is eligible for garbage
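
A sketch of setting a TTL, plus the user metadata byte covered by the same heading, via `badger.NewEntry` (key, value, and duration are illustrative):

```go
err := db.Update(func(txn *badger.Txn) error {
	e := badger.NewEntry([]byte("session:abc"), []byte("data")).
		WithTTL(time.Hour). // key becomes unreadable an hour from now
		WithMeta(0x01)      // a single user-defined metadata byte
	return txn.SetEntry(e)
})
if err != nil {
	log.Fatal(err)
}
```
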
@@ -458,16 +458,16 @@ db.View(func(txn *badger.Txn) error {
 Considering that iteration happens in **byte-wise lexicographical sorting**
 order, it's possible to create a sorting-sensitive key. For example, a simple
 blog post key might look like:`feed:userUuid:timestamp:postUuid`. Here, the
-`timestamp` part of the key is treated as an attribute, and items will be stored
-in the corresponding order:
+`timestamp` part of the key is treated as an attribute, and items are stored in
+the corresponding order:

-| Order ASC | Key                                                            |
-| :-------: | :------------------------------------------------------------ |
-|     1     | feed:tQpnEDVRoCxTFQDvyQEzdo:1733127889:tQpnEDVRoCxTFQDvyQEzdo |
-|     2     | feed:tQpnEDVRoCxTFQDvyQEzdo:1733127533:1Mryrou1xoekEaxzrFiHwL |
-|     3     | feed:tQpnEDVRoCxTFQDvyQEzdo:1733127486:pprRrNL2WP4yfVXsSNBSx6 |
+| Order Ascending | Key                                                            |
+| :-------------: | :------------------------------------------------------------ |
+|        1        | feed:tQpnEDVRoCxTFQDvyQEzdo:1733127889:tQpnEDVRoCxTFQDvyQEzdo |
+|        2        | feed:tQpnEDVRoCxTFQDvyQEzdo:1733127533:1Mryrou1xoekEaxzrFiHwL |
+|        3        | feed:tQpnEDVRoCxTFQDvyQEzdo:1733127486:pprRrNL2WP4yfVXsSNBSx6 |

-It's important to properly configure keys for lexicographical sorting to avoid
+It is important to properly configure keys for lexicographical sorting to avoid
 incorrect ordering.

 A **prefix scan** through the preceding keys can be achieved using the prefix
@@ -486,7 +486,7 @@ identify where to resume.

 ```go
 // startCursor may look like 'feed:tQpnEDVRoCxTFQDvyQEzdo:1733127486'.
-// A prefix scan with this cursor will locate the specific key where
+// A prefix scan with this cursor locates the specific key where
 // the previous iteration stopped.
 err = db.badger.View(func(txn *badger.Txn) error {
   it := txn.NewIterator(opts)
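
The rest of that iterator block is collapsed in this diff. A sketch of resuming a prefix scan from a cursor with `Seek` and `ValidForPrefix` (prefix and cursor values taken from the surrounding example; `db` here is a plain `*badger.DB`):

```go
prefix := []byte("feed:tQpnEDVRoCxTFQDvyQEzdo:")
startCursor := []byte("feed:tQpnEDVRoCxTFQDvyQEzdo:1733127486")

err := db.View(func(txn *badger.Txn) error {
	it := txn.NewIterator(badger.DefaultIteratorOptions)
	defer it.Close()
	// Seek jumps to the first key >= startCursor; ValidForPrefix ends the
	// scan once keys stop sharing the feed prefix.
	for it.Seek(startCursor); it.ValidForPrefix(prefix); it.Next() {
		item := it.Item()
		if err := item.Value(func(v []byte) error {
			fmt.Printf("key=%s value=%s\n", item.Key(), v)
			return nil
		}); err != nil {
			return err
		}
	}
	return nil
})
```
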
@@ -540,12 +540,13 @@ return nextCursor, err

 ### Key-only iteration

-Badger supports a unique mode of iteration called _key-only_ iteration. It's
+Badger supports a unique mode of iteration called _key-only_ iteration. It is
 several order of magnitudes faster than regular iteration, because it involves
-access to the LSM-tree only, which is usually resident entirely in RAM. To
-enable key-only iteration, you need to set the `IteratorOptions.PrefetchValues`
-field to `false`. This can also be used to do sparse reads for selected keys
-during an iteration, by calling `item.Value()` only when required.
+access to the Log-structured merge (LSM)-tree only, which is usually resident
+entirely in RAM. To enable key-only iteration, you need to set the
+`IteratorOptions.PrefetchValues` field to `false`. This can also be used to do
+sparse reads for selected keys during an iteration, by calling `item.Value()`
+only when required.

 ```go
 err := db.View(func(txn *badger.Txn) error {
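
The body of that `View` call is collapsed in this diff. A key-only iteration sketch, with `PrefetchValues` disabled so only the LSM tree is read:

```go
err := db.View(func(txn *badger.Txn) error {
	opts := badger.DefaultIteratorOptions
	opts.PrefetchValues = false // keys only: no value log access
	it := txn.NewIterator(opts)
	defer it.Close()
	for it.Rewind(); it.Valid(); it.Next() {
		fmt.Printf("key=%s\n", it.Item().Key())
	}
	return nil
})
```
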
@@ -570,16 +571,16 @@ serially to be sent over network, written to disk, or even written back to
 Badger. This is a lot faster way to iterate over Badger than using a single
 Iterator. Stream supports Badger in both managed and normal mode.

-Stream uses the natural boundaries created by SSTables within the LSM tree, to
-quickly generate key ranges. Each goroutine then picks a range and runs an
-iterator to iterate over it. Each iterator iterates over all versions of values
-and is created from the same transaction, thus working over a snapshot of the
-DB. Every time a new key is encountered, it calls `ChooseKey(item)`, followed by
-`KeyToList(key, itr)`. This allows a user to select or reject that key, and if
-selected, convert the value versions into custom key-values. The goroutine
-batches up 4 MB worth of key-values, before sending it over to a channel.
-Another goroutine further batches up data from this channel using _smart
-batching_ algorithm and calls `Send` serially.
+Stream uses the natural boundaries created by SSTables within the Log-structure
+merge (LSM)-tree, to quickly generate key ranges. Each goroutine then picks a
+range and runs an iterator to iterate over it. Each iterator iterates over all
+versions of values and is created from the same transaction, thus working over a
+snapshot of the DB. Every time a new key is encountered, it calls
+`ChooseKey(item)`, followed by `KeyToList(key, itr)`. This allows a user to
+select or reject that key, and if selected, convert the value versions into
+custom key-values. The goroutine batches up 4 MB worth of key-values, before
+sending it over to a channel. Another goroutine further batches up data from
+this channel using _smart batching_ algorithm and calls `Send` serially.

 This framework is designed for high throughput key-value iteration, spreading
 the work of iteration across many goroutines. `DB.Backup` uses this framework to
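
A sketch of the Stream framework just described, written against the Badger v2-era API where `Send` receives a `*pb.KVList` (newer releases hand `Send` a serialized buffer instead); field values are illustrative, and `context`, `fmt`, `log`, and the Badger `pb` package are assumed imported:

```go
stream := db.NewStream()
stream.NumGo = 16               // goroutines iterating in parallel
stream.Prefix = []byte("feed:") // restrict the stream to one key range
stream.LogPrefix = "Badger.Streaming"

// ChooseKey and KeyToList are left nil: by default every key is chosen
// and only the latest version of its value is emitted.

var count int
stream.Send = func(list *pb.KVList) error {
	count += len(list.Kv) // Send is called serially, so no locking needed
	return nil
}

if err := stream.Orchestrate(context.Background()); err != nil {
	log.Fatal(err)
}
fmt.Println("keys streamed:", count)
```
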
@@ -624,9 +625,9 @@ if err := stream.Orchestrate(context.Background()); err != nil {

 Badger values need to be garbage collected, because of two reasons:

-- Badger keeps values separately from the LSM tree. This means that the
-  compaction operations that clean up the LSM tree do not touch the values at
-  all. Values need to be cleaned up separately.
+- Badger keeps values separately from the Log-structure merge (LSM)-tree. This
+  means that the compaction operations that clean up the LSM tree do not touch
+  the values at all. Values need to be cleaned up separately.

 - Concurrent read/write transactions could leave behind multiple values for a
   single key, because they're stored with different versions. These could
@@ -639,9 +640,9 @@ appropriate time:

 - `DB.RunValueLogGC()`: This method is designed to do garbage collection while
   Badger is online. Along with randomly picking a file, it uses statistics
-  generated by the LSM-tree compactions to pick files that are likely to lead to
-  maximum space reclamation. It's recommended to be called during periods of low
-  activity in your system, or periodically. One call would only result in
+  generated by the LSM tree compactions to pick files that are likely to lead to
+  maximum space reclamation. It is recommended to be called during periods of
+  low activity in your system, or periodically. One call would only result in
   removal of at max one log file. As an optimization, you could also immediately
   re-run it whenever it returns nil error (indicating a successful value log
   GC), as shown below.
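
The "as shown below" snippet is collapsed in this diff. A sketch of the usual pattern, rerunning `RunValueLogGC` while it keeps reporting a successful rewrite (interval and discard ratio are illustrative):

```go
ticker := time.NewTicker(5 * time.Minute)
defer ticker.Stop()
for range ticker.C {
again:
	// 0.7: rewrite a value log file if at least 70% of it is stale.
	err := db.RunValueLogGC(0.7)
	if err == nil {
		goto again // a file was rewritten; immediately try for another
	}
	// Typically badger.ErrNoRewrite: nothing worth collecting right now.
}
```
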

badger/troubleshooting.mdx

Lines changed: 14 additions & 14 deletions
@@ -59,7 +59,7 @@ workloads, you should be using the `Transaction` API.

 If you're using Badger with `SyncWrites=false`, then your writes might not be
 written to value log and won't get synced to disk immediately. Writes to LSM
-tree are done inmemory first, before they get compacted to disk. The compaction
+tree are done in-memory first, before they get compacted to disk. The compaction
 would only happen once `BaseTableSize` has been reached. So, if you're doing a
 few writes and then checking, you might not see anything on disk. Once you
 `Close` the database, you'll see these writes on disk.
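
If you need write-through behavior instead, synchronous writes can be enabled when opening the DB; a sketch (option builder as in Badger v2+, path illustrative):

```go
opt := badger.DefaultOptions("/tmp/badger").WithSyncWrites(true)
db, err := badger.Open(opt) // every write now syncs to disk before returning
if err != nil {
	log.Fatal(err)
}
defer db.Close() // Close also flushes anything still buffered
```
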
@@ -87,10 +87,10 @@ panic: close of closed channel
 panic: send on closed channel
 ```

-If you're seeing panics like above, this would be because you're operating on a
-closed DB. This can happen, if you call `Close()` before sending a write, or
-multiple times. You should ensure that you only call `Close()` once, and all
-your read/write operations finish before closing.
+If you're seeing panics like this, it is because you're operating on a closed
+DB. This can happen, if you call `Close()` before sending a write, or multiple
+times. You should ensure that you only call `Close()` once, and all your
+read/write operations finish before closing.

 ## Are there any Go specific settings that I should use?
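
A sketch of the safe pattern: open once, run every read and write, and let a single deferred `Close()` run last:

```go
db, err := badger.Open(badger.DefaultOptions("/tmp/badger"))
if err != nil {
	log.Fatal(err)
}
defer db.Close() // runs exactly once, after every operation below returns

err = db.Update(func(txn *badger.Txn) error {
	return txn.Set([]byte("k"), []byte("v"))
})
if err != nil {
	log.Fatal(err)
}
```
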

@@ -116,33 +116,33 @@ migrate their data directory. Badger data can be migrated from version X of
 badger to version Y of badger by following the steps listed below. Assume you
 were on badger v1.6.0 and you wish to migrate to v2.0.0 version.

-1. Install badger version v1.6.0
+1. Install Badger version v1.6.0

    - `cd $GOPATH/src/github.com/dgraph-io/badger`
    - `git checkout v1.6.0`
    - `cd badger && go install`

-   This should install the old badger binary in your `$GOBIN`.
+   This should install the old Badger binary in your `$GOBIN`.

 2. Create Backup
    - `badger backup --dir path/to/badger/directory -f badger.backup`
-3. Install badger version v2.0.0
+3. Install Badger version v2.0.0

    - `cd $GOPATH/src/github.com/dgraph-io/badger`
    - `git checkout v2.0.0`
    - `cd badger && go install`

-   This should install the new badger binary in your `$GOBIN`.
+   This should install the new Badger binary in your `$GOBIN`.

 4. Restore data from backup

    - `badger restore --dir path/to/new/badger/directory -f badger.backup`

-   This creates a new directory on `path/to/new/badger/directory` and add
-   badger data in newer format to it.
+   This creates a new directory on `path/to/new/badger/directory` and adds
+   data in the new format to it.

 NOTE - The preceding steps shouldn't cause any data loss but please ensure the
-new data is valid before deleting the old badger directory.
+new data is valid before deleting the old Badger directory.

 ## Why do I need gcc to build badger? Does badger need Cgo?

@@ -162,6 +162,6 @@ required. The new library is
 <Note>
 Yes they're compatible both ways. The only exception is 0 bytes of input which
 gives 0 bytes output with the Go zstd. But you already have the
-zstd.WithZeroFrames(true) which will wrap 0 bytes in a header so it can be fed
-to DD zstd. This is only relevant when downgrading.
+zstd.WithZeroFrames(true) which wraps 0 bytes in a header so it can be fed to
+DD zstd. This is only relevant when downgrading.
 </Note>

create-project.mdx

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ creating your first Modus app, visit the [Modus quickstart](modus/quickstart).
 Next, initialize the app with Hypermode through the [Hyp CLI](/hyp-cli) and link
 your GitHub repo with your Modus app to Hypermode using:

-```bash
+```sh
 hyp link
 ```

deploy.mdx

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ in the [app manifest](/modus/app-manifest).
 After you push your Modus app to GitHub, you can link your Hypermode project to
 the repo through the Hyp CLI.

-```bash
+```sh
 hyp link
 ```
