@@ -49,7 +49,7 @@ import (

func main() {
// Open the Badger database located in the /tmp/badger directory.
- // It will be created if it doesn't exist.
+ // It is created if it doesn't exist.
db, err := badger.Open(badger.DefaultOptions("/tmp/badger"))
if err != nil {
log.Fatal(err)
@@ -66,7 +66,7 @@ func main() {
By default, Badger ensures all data persists to disk. It also supports a pure
in-memory mode. When Badger is running in this mode, all data remains in memory
only. Reads and writes are much faster, but Badger loses all stored data in the
- case of a crash or close. To open badger in in-memory mode, set the `InMemory`
+ case of a crash or close. To open Badger in in-memory mode, set the `InMemory`
option.

```go
@@ -185,8 +185,8 @@ The first argument to `DB.NewTransaction()` is a boolean stating if the
transaction should be writable.

Badger allows an optional callback to the `Txn.Commit()` method. Normally, the
- callback can be set to `nil`, and the method will return after all the writes
- have succeeded. However, if this callback is provided, the `Txn.Commit()` method
+ callback can be set to `nil`, and the method returns after all the writes have
+ succeeded. However, if this callback is provided, the `Txn.Commit()` method
returns as soon as it has checked for any conflicts. The actual writing to the
disk happens asynchronously, and the callback is invoked once the writing has
finished, or an error has occurred. This can improve the throughput of the app
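The commit flow described here can be sketched outside Badger as a generic pattern. The `commitWith` helper below and its signature are illustrative, not Badger's API: the call returns after a synchronous conflict check, and the callback fires once the asynchronous write completes.

```go
package main

import (
	"fmt"
	"sync"
)

// commitWith sketches the asynchronous-commit pattern: the conflict check
// would run synchronously here, while the actual write happens in a
// goroutine, with cb invoked once it finishes.
func commitWith(write func() error, cb func(error)) {
	// (Conflict check would happen here, before returning.)
	go func() { cb(write()) }()
}

func main() {
	var wg sync.WaitGroup
	wg.Add(1)
	commitWith(
		func() error { return nil }, // the deferred disk write
		func(err error) {
			fmt.Println("commit finished, err =", err)
			wg.Done()
		},
	)
	wg.Wait() // prints "commit finished, err = <nil>"
}
```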
@@ -288,8 +288,8 @@ for {

Badger provides support for ordered merge operations. You can define a func of
type `MergeFunc` which takes in an existing value, and a value to be _merged_
- with it. It returns a new value which is the result of the _merge_ operation.
- All values are specified in byte arrays. For e.g., here is a merge function
+ with it. It returns a new value which is the result of the merge operation. All
+ values are specified in byte arrays. For example, this is a merge function
(`add`) which appends a `[]byte` value to an existing `[]byte` value.

```go
@@ -354,7 +354,7 @@ m.Add(uint64ToBytes(3))
res, _ := m.Get() // res should have value 6 encoded
```

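Taken on its own, this integer-summing merge reduces to a pure function over encoded values. A minimal stdlib-only sketch, assuming big-endian `uint64ToBytes`/`bytesToUint64` helpers (the encoding the README's own helpers use isn't shown in this hunk):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// uint64ToBytes / bytesToUint64 mirror the helpers the example assumes;
// big-endian is an assumption here, not a Badger requirement.
func uint64ToBytes(i uint64) []byte {
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], i)
	return buf[:]
}

func bytesToUint64(b []byte) uint64 {
	return binary.BigEndian.Uint64(b)
}

// add is a MergeFunc-style function: it merges two encoded uint64s by
// summing them, so repeated merges yield a running total.
func add(existing, latest []byte) []byte {
	return uint64ToBytes(bytesToUint64(existing) + bytesToUint64(latest))
}

func main() {
	res := add(add(uint64ToBytes(1), uint64ToBytes(2)), uint64ToBytes(3))
	fmt.Println(bytesToUint64(res)) // prints 6
}
```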
- ## Setting time to live (TTL) and user metadata on keys
+ ## Setting time to live and user metadata on keys

Badger allows setting an optional Time to Live (TTL) value on keys. Once the TTL
has elapsed, the key is no longer retrievable and is eligible for garbage
@@ -458,16 +458,16 @@ db.View(func(txn *badger.Txn) error {
Considering that iteration happens in **byte-wise lexicographical sorting**
order, it's possible to create a sorting-sensitive key. For example, a simple
blog post key might look like: `feed:userUuid:timestamp:postUuid`. Here, the
- `timestamp` part of the key is treated as an attribute, and items will be stored
- in the corresponding order:
+ `timestamp` part of the key is treated as an attribute, and items are stored in
+ the corresponding order:

- | Order ASC | Key |
- | :-------: | :------------------------------------------------------------ |
- | 1 | feed:tQpnEDVRoCxTFQDvyQEzdo:1733127889:tQpnEDVRoCxTFQDvyQEzdo |
- | 2 | feed:tQpnEDVRoCxTFQDvyQEzdo:1733127533:1Mryrou1xoekEaxzrFiHwL |
- | 3 | feed:tQpnEDVRoCxTFQDvyQEzdo:1733127486:pprRrNL2WP4yfVXsSNBSx6 |
+ | Order Ascending | Key |
+ | :-------------: | :------------------------------------------------------------ |
+ | 1 | feed:tQpnEDVRoCxTFQDvyQEzdo:1733127889:tQpnEDVRoCxTFQDvyQEzdo |
+ | 2 | feed:tQpnEDVRoCxTFQDvyQEzdo:1733127533:1Mryrou1xoekEaxzrFiHwL |
+ | 3 | feed:tQpnEDVRoCxTFQDvyQEzdo:1733127486:pprRrNL2WP4yfVXsSNBSx6 |

- It's important to properly configure keys for lexicographical sorting to avoid
+ It is important to properly configure keys for lexicographical sorting to avoid
incorrect ordering.

A **prefix scan** through the preceding keys can be achieved using the prefix
@@ -486,7 +486,7 @@ identify where to resume.

```go
// startCursor may look like 'feed:tQpnEDVRoCxTFQDvyQEzdo:1733127486'.
- // A prefix scan with this cursor will locate the specific key where
+ // A prefix scan with this cursor locates the specific key where
// the previous iteration stopped.
err = db.badger.View(func(txn *badger.Txn) error {
it := txn.NewIterator(opts)
@@ -540,12 +540,13 @@ return nextCursor, err

### Key-only iteration

- Badger supports a unique mode of iteration called _key-only_ iteration. It's
+ Badger supports a unique mode of iteration called _key-only_ iteration. It is
several orders of magnitude faster than regular iteration, because it involves
- access to the LSM-tree only, which is usually resident entirely in RAM. To
- enable key-only iteration, you need to set the `IteratorOptions.PrefetchValues`
- field to `false`. This can also be used to do sparse reads for selected keys
- during an iteration, by calling `item.Value()` only when required.
+ access to the log-structured merge (LSM) tree only, which is usually resident
+ entirely in RAM. To enable key-only iteration, you need to set the
+ `IteratorOptions.PrefetchValues` field to `false`. This can also be used to do
+ sparse reads for selected keys during an iteration, by calling `item.Value()`
+ only when required.

```go
err := db.View(func(txn *badger.Txn) error {
@@ -570,16 +571,16 @@ serially to be sent over network, written to disk, or even written back to
Badger. This is a much faster way to iterate over Badger than using a single
Iterator. Stream supports Badger in both managed and normal mode.

- Stream uses the natural boundaries created by SSTables within the LSM tree, to
- quickly generate key ranges. Each goroutine then picks a range and runs an
- iterator to iterate over it. Each iterator iterates over all versions of values
- and is created from the same transaction, thus working over a snapshot of the
- DB. Every time a new key is encountered, it calls `ChooseKey(item)`, followed by
- `KeyToList(key, itr)`. This allows a user to select or reject that key, and if
- selected, convert the value versions into custom key-values. The goroutine
- batches up 4 MB worth of key-values, before sending it over to a channel.
- Another goroutine further batches up data from this channel using _smart
- batching_ algorithm and calls `Send` serially.
+ Stream uses the natural boundaries created by SSTables within the log-structured
+ merge (LSM) tree to quickly generate key ranges. Each goroutine then picks a
+ range and runs an iterator to iterate over it. Each iterator iterates over all
+ versions of values and is created from the same transaction, thus working over a
+ snapshot of the DB. Every time a new key is encountered, it calls
+ `ChooseKey(item)`, followed by `KeyToList(key, itr)`. This allows a user to
+ select or reject that key, and if selected, convert the value versions into
+ custom key-values. The goroutine batches up 4 MB worth of key-values before
+ sending them over a channel. Another goroutine further batches up data from
+ this channel using a _smart batching_ algorithm and calls `Send` serially.
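The size-budgeted batching step can be sketched in plain Go. This is a simplified illustration of grouping channel items by a byte budget, not Badger's actual smart-batching implementation; the `batch` function and its limits are hypothetical.

```go
package main

import "fmt"

// batch drains ch, grouping items into batches of at most maxBytes bytes
// each, roughly mimicking how a producer groups key-values before each
// Send call.
func batch(ch <-chan []byte, maxBytes int) [][][]byte {
	var batches [][][]byte
	var cur [][]byte
	size := 0
	for item := range ch {
		if size+len(item) > maxBytes && len(cur) > 0 {
			batches = append(batches, cur) // flush the full batch
			cur, size = nil, 0
		}
		cur = append(cur, item)
		size += len(item)
	}
	if len(cur) > 0 {
		batches = append(batches, cur) // flush the final partial batch
	}
	return batches
}

func main() {
	ch := make(chan []byte, 8)
	for i := 0; i < 5; i++ {
		ch <- make([]byte, 3) // five 3-byte items, 8-byte budget
	}
	close(ch)
	for i, b := range batch(ch, 8) {
		fmt.Printf("batch %d: %d items\n", i, len(b)) // 2, 2, 1 items
	}
}
```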

This framework is designed for high throughput key-value iteration, spreading
the work of iteration across many goroutines. `DB.Backup` uses this framework to
@@ -624,9 +625,9 @@ if err := stream.Orchestrate(context.Background()); err != nil {

Badger values need to be garbage collected, because of two reasons:

- - Badger keeps values separately from the LSM tree. This means that the
- compaction operations that clean up the LSM tree do not touch the values at
- all. Values need to be cleaned up separately.
+ - Badger keeps values separately from the log-structured merge (LSM) tree. This
+ means that the compaction operations that clean up the LSM tree do not touch
+ the values at all. Values need to be cleaned up separately.

- Concurrent read/write transactions could leave behind multiple values for a
single key, because they're stored with different versions. These could
@@ -639,9 +640,9 @@ appropriate time:

- `DB.RunValueLogGC()`: This method is designed to do garbage collection while
Badger is online. Along with randomly picking a file, it uses statistics
- generated by the LSM-tree compactions to pick files that are likely to lead to
- maximum space reclamation. It's recommended to be called during periods of low
- activity in your system, or periodically. One call would only result in
+ generated by the LSM tree compactions to pick files that are likely to lead to
+ maximum space reclamation. It is recommended to be called during periods of
+ low activity in your system, or periodically. One call would only result in
removal of at most one log file. As an optimization, you could also immediately
re-run it whenever it returns a nil error (indicating a successful value log
GC), as shown below.