Import, export, and back up your data in multiple formats. ALOS DB supports automatic scheduled backups, streaming imports, multi-format exports, and one-command restores.
ALOS DB provides a complete data management toolkit built directly into the database. No external tools, no third-party backup agents, no cron scripts. Everything is a single API call.
Safety first. Every restore operation automatically creates a safety snapshot of your current data before overwriting anything. If something goes wrong, you can always recover from the pre-restore snapshot.
ALOS DB supports four export formats. Each has different tradeoffs for size, compatibility, and readability.
| Format | Constant | Structure | Multi-Collection | Best For |
|---|---|---|---|---|
| ALOS Binary | `FormatALOS` | Header + JSONL records with collection tags | Yes | Full database backup & restore |
| JSONL | `FormatJSONL` | One JSON object per line | Yes | Streaming, large datasets, pipelines |
| JSON | `FormatJSON` | Single JSON array or object | Yes | Human-readable, small exports |
| MongoJL | `FormatMongoJL` | MongoDB-compatible JSONL | Yes | Migration from/to MongoDB |
The ALOS format is the recommended choice for backup and restore operations. It uses a header line with database metadata, followed by one JSONL record per document, each tagged with its collection name. This format preserves the full database structure, including all collections and their document IDs.
```jsonl
// Line 1: Header
{"version": 1, "database": "myapp", "created_at": "2026-01-15T10:30:00Z", "collections": ["users", "orders"]}

// Lines 2+: One document per line, tagged with collection
{"collection": "users", "doc": {"_id": "abc123", "name": "Alice", "email": "alice@example.com"}}
{"collection": "orders", "doc": {"_id": "ord456", "user_id": "abc123", "total": 99.99}}
```
JSONL (JSON Lines) writes one JSON object per line. For database-level exports, the first line is a metadata record with `__alos_export: true`, and each subsequent document includes a `__collection` tag. For single-collection exports, each line is a raw document with no metadata.
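For illustration, a database-level JSONL export could look like the sketch below. The `__alos_export` marker and per-document `__collection` tag come from the description above; the other metadata fields are assumptions:

```jsonl
{"__alos_export": true, "database": "myapp", "collections": ["users", "orders"]}
{"__collection": "users", "_id": "abc123", "name": "Alice"}
{"__collection": "orders", "_id": "ord456", "user_id": "abc123", "total": 99.99}
```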
Standard JSON with optional pretty-printing. For single collections, the output is a JSON array of documents. For database-level exports, it is a JSON object with database metadata and a collections map of collection name to document array. Best for small exports that need to be human-readable or consumed by tools that don't support streaming.
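As a sketch, a database-level JSON export with `Pretty` enabled might look like this. The metadata key is an assumption; the collections map of collection name to document array follows the description above:

```json
{
  "database": "myapp",
  "collections": {
    "users": [
      {"_id": "abc123", "name": "Alice"}
    ],
    "orders": [
      {"_id": "ord456", "user_id": "abc123", "total": 99.99}
    ]
  }
}
```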
MongoDB-compatible JSONL. Identical in wire format to JSONL, but uses the `FormatMongoJL` constant for semantic clarity when migrating data between ALOS DB and MongoDB. Documents are written one per line in standard JSON.
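For example, a MongoJL export can be handed straight to MongoDB's mongoimport tool. A minimal sketch (the export call is the documented API; the mongoimport invocation is an illustrative assumption):

```go
// Export the collection in MongoDB-compatible JSONL
f, _ := os.Create("users.mongojl")
defer f.Close()
if err := db.ExportCollection(f, "users", core.FormatMongoJL, false); err != nil {
    log.Fatal(err)
}
// Then, on the MongoDB side:
//   mongoimport --db=myapp --collection=users --file=users.mongojl
```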
Export a single collection to any supported format. The export streams documents one at a time — it never loads all documents into memory, so it works efficiently even for collections with millions of documents.
```go
// Export as JSONL (streaming, memory-efficient)
f, _ := os.Create("users_export.jsonl")
defer f.Close()
err := db.ExportCollection(f, "users", core.FormatJSONL, false)
if err != nil {
    log.Fatal(err)
}

// Export as pretty JSON (human-readable)
f2, _ := os.Create("users_export.json")
defer f2.Close()
err = db.ExportCollection(f2, "users", core.FormatJSON, true)

// Export as ALOS binary (for backup/restore)
f3, _ := os.Create("users_export.alos")
defer f3.Close()
err = db.ExportCollection(f3, "users", core.FormatALOS, false)
```
```go
func (db *Database) ExportCollection(
    w io.Writer,      // destination (file, HTTP response, buffer, etc.)
    coll string,      // collection name
    fmt ExportFormat, // FormatALOS | FormatJSONL | FormatJSON | FormatMongoJL
    pretty bool,      // indent JSON output (FormatJSON only)
) error
```
The writer can be anything that implements `io.Writer`: a file, an HTTP response body, a network connection, a buffer, or even `os.Stdout`. The export uses a 256KB buffered writer internally for optimal I/O throughput.
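Because the destination is a plain `io.Writer`, an export can be streamed straight into an HTTP response with no temp files. A minimal sketch, assuming `db` is in scope for the handler:

```go
// Stream a collection export directly to an HTTP client
http.HandleFunc("/export/users", func(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/x-ndjson")
    w.Header().Set("Content-Disposition", `attachment; filename="users.jsonl"`)
    if err := db.ExportCollection(w, "users", core.FormatJSONL, false); err != nil {
        // Headers may already be sent mid-stream, so just log the failure
        log.Printf("export failed: %v", err)
    }
})
```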
Export all collections in a database (or a specific subset) in a single call. The output includes metadata headers so the data can be imported back into ALOS DB with full structure preservation.
```go
// Export entire database as ALOS binary
f, _ := os.Create("full_backup.alos")
defer f.Close()
err := db.ExportDatabase(f, core.ExportOptions{
    Format: core.FormatALOS,
})

// Export only specific collections as JSONL
f2, _ := os.Create("partial_backup.jsonl")
defer f2.Close()
err = db.ExportDatabase(f2, core.ExportOptions{
    Format:      core.FormatJSONL,
    Collections: []string{"users", "orders"},
})

// Export as pretty JSON for inspection
f3, _ := os.Create("readable_backup.json")
defer f3.Close()
err = db.ExportDatabase(f3, core.ExportOptions{
    Format: core.FormatJSON,
    Pretty: true,
})
```
```go
type ExportOptions struct {
    Format      ExportFormat // output format (required)
    Collections []string     // specific collections (empty = all)
    Pretty      bool         // indent JSON output
}
```
When `Collections` is empty, all collections in the database are exported in alphabetical order. Each collection is streamed sequentially — the entire export uses constant memory regardless of database size.
Export a collection as a compressed ZIP archive. Each document is stored as an individual JSON file inside the ZIP, organized by document ID. This is useful for archival, S3 upload, or file-based processing pipelines.
```go
// Export collection as ZIP
f, _ := os.Create("users_archive.zip")
defer f.Close()
err := db.ExportCollectionZip(f, "users")
if err != nil {
    log.Fatal(err)
}

// Result: users_archive.zip containing:
//   users/abc123.json
//   users/def456.json
//   users/ghi789.json
//   ...
```
ZIP export uses `flate.BestSpeed` compression for fast throughput. Each document is pretty-printed with 2-space indentation for readability. The ZIP writer streams documents directly from disk — no full collection materialization in memory.
ALOS DB can import data from any of its supported formats. The import system uses streaming parsers that process documents one at a time, so you can import datasets that are larger than available memory.
```go
type ImportOptions struct {
    Format        ExportFormat // format hint (auto-detected if empty)
    Collection    string       // target collection (single-collection import)
    KeepIDs       bool         // preserve original _id values
    BatchSize     int          // documents per batch insert (default: 1000)
    FlushInterval int          // flush to disk every N documents (default: 50000)
}
```
- **KeepIDs** — when true, the original `_id` field from each document is preserved. When false (the default), ALOS DB generates new unique IDs for each imported document. Use true when restoring from a backup, false when importing data from external sources.
- **BatchSize** — documents are accumulated into batches and inserted via `InsertMany()` for throughput. Larger batches mean faster imports at the cost of more memory. Default: 1000 documents per batch.
- **FlushInterval** — controls how often the database flushes data to disk during import. Range: 5,000 to 600,000 documents. Default: 50,000. Lower values reduce memory usage at the cost of import speed (see the tuning sketch below).

**Auto-detection.** If you don't specify a `Format`, ALOS DB automatically detects the format by examining the first line of the input. It can distinguish between ALOS binary, JSONL, JSON arrays, and MongoDB exports without any configuration.
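For example, importing a very large export on a memory-constrained host trades speed for footprint. A sketch using `ImportCollectionFromReader` (documented below); the specific values are illustrative:

```go
// Low-memory import: small batches, frequent flushes, auto-detected format
f, _ := os.Open("huge_export.jsonl")
defer f.Close()
result, err := db.ImportCollectionFromReader(f, core.ImportOptions{
    Collection:    "events",
    KeepIDs:       false, // external data: let ALOS DB assign fresh IDs
    BatchSize:     500,   // smaller batches hold less in memory per insert
    FlushInterval: 5000,  // flush at the minimum allowed interval
})
if err != nil {
    log.Fatal(err)
}
fmt.Printf("Imported %d docs (%d errors)\n", result.Inserted, result.Errors)
```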
Import documents into a single collection from a reader (file, HTTP request body, etc.).
```go
// Import JSONL file into "users" collection
f, _ := os.Open("users_export.jsonl")
defer f.Close()
result, err := db.ImportCollectionFromReader(f, core.ImportOptions{
    Collection: "users",
    Format:     core.FormatJSONL,
    KeepIDs:    true,
    BatchSize:  5000,
})
if err != nil {
    log.Fatal(err)
}
fmt.Printf("Imported %d docs into %s (%d errors)\n",
    result.Inserted, result.Collection, result.Errors)
```
```go
type ImportCollectionResult struct {
    Collection string // target collection name
    Inserted   int64  // number of documents successfully inserted
    Errors     int64  // number of documents that failed to insert
}
```
Import an entire database from an ALOS binary, JSONL, or JSON export. The importer automatically routes documents to their correct collections based on collection tags in the export file.
```go
// Import full database from ALOS binary backup
f, _ := os.Open("full_backup.alos")
defer f.Close()
result, err := db.ImportDatabaseFromReader(f, core.ImportOptions{
    KeepIDs:   true,
    BatchSize: 10000,
})
if err != nil {
    log.Fatal(err)
}
fmt.Printf("Total imported: %d docs across %d collections\n",
    result.TotalInserted, len(result.Collections))

// Per-collection breakdown
for name, cr := range result.Collections {
    fmt.Printf("  %s: %d inserted, %d errors\n", name, cr.Inserted, cr.Errors)
}
```
```go
type ImportDatabaseResult struct {
    TotalInserted int64                              // total across all collections
    TotalErrors   int64                              // total errors across all collections
    Collections   map[string]*ImportCollectionResult // per-collection results
    Format        string                             // detected format name
}
```
The importer batches documents per collection and flushes them in bulk. This is significantly faster than inserting one document at a time. Collections are created automatically if they don't exist.
Import data from ZIP archives. ALOS DB supports both database-level ZIP import (multiple collections) and single-collection ZIP import.
```go
// Import entire database from ZIP
err := db.ImportDatabaseZip("full_backup.zip")

// Import a single collection from ZIP
err = db.ImportCollectionZip("users_archive.zip", "users")
```
The ZIP importer reads each `.json` file from the archive, parses the document, and inserts it into the target collection. Files are processed in batches for throughput. The ZIP file is read from disk (not streamed), so it must fit on the filesystem.
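Because the ZIP must exist on disk, importing from remote storage means downloading it first. A minimal sketch (the URL and temp-file handling are illustrative):

```go
// Download a backup ZIP to a temp file, then import it
resp, err := http.Get("https://backups.example.com/full_backup.zip")
if err != nil {
    log.Fatal(err)
}
defer resp.Body.Close()

tmp, err := os.CreateTemp("", "alos-import-*.zip")
if err != nil {
    log.Fatal(err)
}
defer os.Remove(tmp.Name())

if _, err := io.Copy(tmp, resp.Body); err != nil {
    log.Fatal(err)
}
tmp.Close()

if err := db.ImportDatabaseZip(tmp.Name()); err != nil {
    log.Fatal(err)
}
```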
ALOS DB has a built-in backup scheduler that runs inside the database process. No external cron jobs, no backup agents, no infrastructure. Configure it once and it runs forever.
```go
// Enable automatic backups every 60 minutes, keep last 24 copies
err := backupManager.Configure(core.BackupConfig{
    Enabled:         true,
    IntervalMinutes: 60, // backup every hour
    MaxCopies:       24, // keep last 24 backups (1 day rolling)
    BackupPath:      "", // use default path (dataPath/_backups/dbname)
})

// Or use a custom backup directory
err = backupManager.Configure(core.BackupConfig{
    Enabled:         true,
    IntervalMinutes: 30,
    MaxCopies:       48,
    BackupPath:      "/mnt/backup-drive/alosdb",
})
```
```go
type BackupConfig struct {
    Enabled         bool   // enable/disable auto-backup
    IntervalMinutes int    // minutes between backups (minimum: 1)
    MaxCopies       int    // max backups to retain (0 = unlimited)
    BackupPath      string // custom backup directory (empty = default)
}
```
During each backup:

- A `_manifest.json` file is written inside the backup with file count, total size, and file list for verification.
- If `MaxCopies` is set, the oldest backups are deleted to stay within the limit.

**Persistent configuration.** Backup configuration is saved to `_backup_config.json` inside the backup directory. On server restart, the configuration is automatically loaded and the backup scheduler resumes where it left off. No manual re-configuration needed.
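The exact on-disk shape of `_backup_config.json` isn't documented here; assuming it simply mirrors the `BackupConfig` fields, it might look roughly like:

```json
{
  "enabled": true,
  "interval_minutes": 60,
  "max_copies": 24,
  "backup_path": ""
}
```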
```go
// Check current backup status
status := backupManager.Status()
fmt.Printf("Enabled: %v\n", status.Enabled)
fmt.Printf("Interval: %d min\n", status.IntervalMinutes)
fmt.Printf("Max copies: %d\n", status.MaxCopies)
fmt.Printf("Backup path: %s\n", status.BackupPath)
fmt.Printf("Total backups: %d\n", status.TotalBackups)
fmt.Printf("Currently running: %v\n", status.Running)

if status.LastBackup != nil {
    fmt.Printf("Last: %s (%.2f MB)\n", status.LastBackup.Name,
        float64(status.LastBackup.SizeBytes)/(1024*1024))
}
if status.NextBackupAt != nil {
    fmt.Printf("Next backup at: %s\n", status.NextBackupAt.Format(time.RFC3339))
}
```
```go
type BackupStatus struct {
    Enabled         bool        // is auto-backup active
    IntervalMinutes int         // configured interval
    MaxCopies       int         // configured retention limit
    BackupPath      string      // resolved backup directory
    LastBackup      *BackupInfo // most recent backup (nil if none)
    NextBackupAt    *time.Time  // next scheduled backup time
    TotalBackups    int         // current backup count on disk
    Running         bool        // backup in progress right now
}
```
Trigger an instant backup at any time, independent of the auto-backup schedule. The backup is created immediately and follows the same process as automatic backups: flush, copy, manifest, rotate.
```go
// Create an instant backup
info, err := backupManager.BackupNow()
if err != nil {
    log.Fatal(err)
}
fmt.Printf("Backup created: %s\n", info.Name)
fmt.Printf("Path: %s\n", info.Path)
fmt.Printf("Size: %.2f MB\n", float64(info.SizeBytes)/(1024*1024))
fmt.Printf("Created at: %s\n", info.CreatedAt.Format(time.RFC3339))
```
```go
type BackupInfo struct {
    Name      string    // "backup_20260415_120000"
    Path      string    // full filesystem path
    CreatedAt time.Time // when the backup was created
    SizeBytes int64     // total size of all files
}
```
Backup names follow the pattern `backup_YYYYMMDD_HHMMSS` using UTC timestamps. The backup directory is a complete copy of the database directory, including all data shards, index snapshots, and metadata files. It can be used for restore or inspected manually.
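Putting the naming and contents together, a backup directory might look like the sketch below. The manifest and naming pattern are documented above; the individual data-file names are assumptions:

```text
/data/_backups/myapp/
├── backup_20260414_120000/
├── backup_20260415_120000/
│   ├── _manifest.json
│   ├── ... data shards
│   ├── ... index snapshots
│   └── ... metadata files
└── _pre_restore_20260415_130500/   (safety snapshot, excluded from rotation)
```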
Every backup includes a `_manifest.json` file that records the exact state of the backup:
```go
type BackupManifest struct {
    Version   int      // manifest format version
    DBName    string   // database name
    CreatedAt string   // RFC3339 timestamp
    FileCount int      // total files in backup
    TotalSize int64    // total bytes
    Files     []string // list of all backed-up files
}
```
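A manifest matching the struct above might look like this. The key casing and file names are assumptions, since no JSON tags or real file listings are shown here:

```json
{
  "version": 1,
  "db_name": "myapp",
  "created_at": "2026-04-15T12:00:00Z",
  "file_count": 3,
  "total_size": 1048576,
  "files": ["users.shard", "orders.shard", "_meta.json"]
}
```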
During restore, the manifest is verified to ensure the backup is complete and uncorrupted before any data is overwritten.
Restore a database from any previous backup. The restore process is designed to be safe and reversible.
```go
// List available backups
backups := backupManager.ListBackups()
for _, b := range backups {
    fmt.Printf("%s  %.2f MB  %s\n", b.Name,
        float64(b.SizeBytes)/(1024*1024),
        b.CreatedAt.Format(time.RFC3339))
}

// Restore from a specific backup
err := backupManager.RestoreFromBackup("backup_20260415_120000")
if err != nil {
    log.Fatal(err)
}
// Database is now restored and reloaded
```
Every restore follows this sequence to prevent data loss:

1. The backup's `_manifest.json` is verified to confirm the backup is complete and uncorrupted.
2. A safety snapshot of your current data is created as `_pre_restore_YYYYMMDD_HHMMSS` inside the backup directory. This is your escape hatch if the restore goes wrong.
3. The backup files are copied over the live database directory.
4. The database is reloaded from the restored data.

**Safety snapshots are never deleted automatically.** The `_pre_restore_*` directories are excluded from normal backup listing and rotation. They persist until you manually delete them, giving you an unlimited window to recover from a bad restore.
List all available backups, sorted by creation time (newest first). Pre-restore safety snapshots are excluded from this list.
```go
backups := backupManager.ListBackups()
// Returns []BackupInfo sorted by CreatedAt descending
```
Delete a specific backup by name. The backup name is validated to prevent path traversal attacks — names containing `..`, `/`, or `\` are rejected.
```go
err := backupManager.DeleteBackup("backup_20260415_120000")
```
Disable the automatic backup scheduler. Existing backups are not deleted.
```go
// Disable auto-backups
backupManager.Configure(core.BackupConfig{
    Enabled: false,
})

// Or stop the scheduler directly
backupManager.Stop()
```
When `MaxCopies` is set, ALOS DB automatically deletes the oldest backups after each new backup is created. For example, with `MaxCopies: 24` and `IntervalMinutes: 60`, you get a rolling 24-hour backup window. Old backups are deleted oldest-first.
Set `MaxCopies: 0` for unlimited retention — backups will accumulate until you manually delete them or run out of disk space.