
Storage

Stratal provides a StorageModule for file operations with S3-compatible storage providers. It supports multiple disk configurations, chunked uploads, presigned URLs, and path template variables. It works with any S3-compatible service, including Cloudflare R2, AWS S3, and MinIO.

The storage module uses the AWS SDK v3 for S3 operations. Install the required packages:

yarn add @aws-sdk/client-s3 @aws-sdk/lib-storage @aws-sdk/s3-request-presigner

Register the StorageModule in your root module using forRoot or forRootAsync:

import { Module } from 'stratal/module'
import { StorageModule } from 'stratal/storage'

@Module({
  imports: [
    StorageModule.forRoot({
      storage: [
        {
          disk: 'uploads',
          provider: 's3',
          endpoint: 'https://your-account-id.r2.cloudflarestorage.com',
          bucket: 'my-bucket',
          region: 'auto',
          accessKeyId: 'your-access-key',
          secretAccessKey: 'your-secret-key',
          root: 'uploads/{year}/{month}',
          visibility: 'private',
        },
      ],
      defaultStorageDisk: 'uploads',
      presignedUrl: {
        defaultExpiry: 3600,
        maxExpiry: 604800,
      },
    }),
  ],
})
export class AppModule {}
| Property | Type | Description |
| --- | --- | --- |
| `storage` | `StorageEntry[]` | Array of disk configurations |
| `defaultStorageDisk` | `string` | Disk name used when no disk is specified |
| `presignedUrl.defaultExpiry` | `number` | Default presigned URL expiry in seconds |
| `presignedUrl.maxExpiry` | `number` | Maximum allowed expiry (up to 604800 seconds / 7 days) |
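Registration can also be asynchronous via forRootAsync, for example to build the options from injected configuration. A sketch, assuming a useFactory/inject signature like comparable module systems; `CONFIG_TOKEN` and `AppConfig` are hypothetical placeholders, so verify the exact shape against the Stratal API:

```typescript
// Hypothetical async registration; CONFIG_TOKEN and AppConfig stand in
// for your own configuration provider. The factory signature is an
// assumption modeled on common module systems.
StorageModule.forRootAsync({
  inject: [CONFIG_TOKEN],
  useFactory: (config: AppConfig) => ({
    storage: [
      {
        disk: 'uploads',
        provider: 's3',
        endpoint: config.s3Endpoint,
        bucket: config.s3Bucket,
        region: 'auto',
        accessKeyId: config.s3AccessKeyId,
        secretAccessKey: config.s3SecretAccessKey,
        root: 'uploads/{year}/{month}',
        visibility: 'private',
      },
    ],
    defaultStorageDisk: 'uploads',
  }),
})
```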

Each entry in the storage array configures a disk:

| Property | Type | Description |
| --- | --- | --- |
| `disk` | `string` | Unique disk identifier |
| `provider` | `'s3'` | Storage provider (S3-compatible) |
| `endpoint` | `string` | S3 endpoint URL |
| `bucket` | `string` | Bucket name |
| `region` | `string` | AWS region, or `'auto'` for R2 |
| `accessKeyId` | `string` | Access key credential |
| `secretAccessKey` | `string` | Secret key credential |
| `root` | `string` | Root path prefix (supports template variables) |
| `visibility` | `'public' \| 'private'` | Default object visibility |

You can define multiple disks for different storage needs:

StorageModule.forRoot({
  storage: [
    {
      disk: 'documents',
      provider: 's3',
      endpoint: 'https://account.r2.cloudflarestorage.com',
      bucket: 'docs-bucket',
      region: 'auto',
      accessKeyId: '...',
      secretAccessKey: '...',
      root: 'documents/{year}',
      visibility: 'private',
    },
    {
      disk: 'public-assets',
      provider: 's3',
      endpoint: 'https://account.r2.cloudflarestorage.com',
      bucket: 'assets-bucket',
      region: 'auto',
      accessKeyId: '...',
      secretAccessKey: '...',
      root: 'assets',
      visibility: 'public',
    },
  ],
  defaultStorageDisk: 'documents',
  presignedUrl: { defaultExpiry: 3600, maxExpiry: 604800 },
})

When calling storage methods, you can specify a disk by name. If omitted, the defaultStorageDisk is used.
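The fallback behavior can be sketched as follows. This is illustrative logic only, not Stratal's actual implementation; `resolveDisk` and the config shape are assumptions:

```typescript
// Illustrative sketch of default-disk fallback; not Stratal's source code.
interface DiskConfig {
  defaultStorageDisk: string
  disks: Set<string>
}

function resolveDisk(config: DiskConfig, requested?: string): string {
  // Fall back to the configured default when no disk name is passed
  const disk = requested ?? config.defaultStorageDisk
  if (!config.disks.has(disk)) {
    // Mirrors the InvalidDiskError described later on this page
    throw new Error(`Invalid disk: ${disk}`)
  }
  return disk
}
```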

You can query the configured disk names at runtime:

const disks = this.storage.getAvailableDisks()
// ['documents', 'public-assets']

Inject StorageService and call upload with the file content, a relative path, and upload options:

import { Transient, inject } from 'stratal/di'
import { STORAGE_TOKENS, type StorageService } from 'stratal/storage'

@Transient()
export class DocumentService {
  constructor(
    @inject(STORAGE_TOKENS.StorageService) private readonly storage: StorageService,
  ) {}

  async uploadDocument(content: ArrayBuffer, filename: string) {
    const result = await this.storage.upload(content, `docs/${filename}`, {
      size: content.byteLength,
      mimeType: 'application/pdf',
      metadata: { uploadedBy: 'system' },
    })
    return result
    // { path, disk, fullPath, size, mimeType, uploadedAt }
  }
}

The upload method returns an UploadResult:

| Field | Type | Description |
| --- | --- | --- |
| `path` | `string` | Relative path within the disk |
| `disk` | `string` | Disk name used |
| `fullPath` | `string` | Full path including bucket |
| `size` | `number` | File size in bytes |
| `mimeType` | `string` | MIME type |
| `uploadedAt` | `Date` | Upload timestamp |

The third argument to upload accepts the following options:

| Option | Type | Required | Description |
| --- | --- | --- | --- |
| `size` | `number` | Yes | Content size in bytes |
| `mimeType` | `string` | No | MIME type |
| `metadata` | `Record<string, string>` | No | Custom S3 object metadata |
| `tagging` | `string` | No | S3 tagging for lifecycle policies |

For large files or streams where the content length is unknown, use chunkedUpload:

const result = await this.storage.chunkedUpload(stream, 'videos/clip.mp4', {
  mimeType: 'video/mp4',
})

Chunked upload uses S3 multipart upload under the hood. It automatically handles retries and cleans up partial uploads on failure. The size option is optional for chunked uploads.
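If you have in-memory chunks rather than an existing stream, a web ReadableStream can be built with the standard Streams API; nothing here is Stratal-specific:

```typescript
// Build a web ReadableStream from in-memory chunks (standard Streams API).
function streamFromChunks(chunks: Uint8Array[]): ReadableStream<Uint8Array> {
  return new ReadableStream<Uint8Array>({
    start(controller) {
      // Enqueue every chunk, then signal end-of-stream
      for (const chunk of chunks) controller.enqueue(chunk)
      controller.close()
    },
  })
}
```

The resulting stream can then be passed as the first argument to chunkedUpload.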

Download files with the download method:

const file = await this.storage.download('docs/report.pdf')
// Access the file as a stream
const stream = file.toStream()
// Or convert to a string
const text = await file.toString()
// Or convert to bytes
const bytes = await file.toArrayBuffer()
// Metadata is also available
console.log(file.contentType, file.size, file.metadata)

The download method returns a DownloadResult with the following properties:

| Property | Type | Description |
| --- | --- | --- |
| `toStream()` | `ReadableStream \| undefined` | File content as a web stream |
| `toString()` | `Promise<string> \| undefined` | File content as a string |
| `toArrayBuffer()` | `Promise<Uint8Array> \| undefined` | File content as bytes |
| `contentType` | `string` | MIME type |
| `size` | `number` | File size in bytes |
| `metadata` | `Record<string, string>` | S3 object metadata (optional) |

Delete files with the delete method:

await this.storage.delete('docs/report.pdf')

Delete is idempotent. No error is thrown if the file does not exist.

Check whether a file exists with the exists method:

const exists = await this.storage.exists('docs/report.pdf')

Uses an S3 HeadObject request, so it does not download the file content.
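One plausible way such a check maps onto HeadObject is sketched below. The `HeadLike` interface is a structural stand-in for the AWS SDK client, so this is an assumption about the shape of the logic, not Stratal's source:

```typescript
// Structural sketch of an exists() check built on a HeadObject-style call.
// HeadLike stands in for the AWS SDK S3 client; Stratal's internals may differ.
interface HeadLike {
  // Resolves if the object exists; rejects with name 'NotFound' if absent
  head(bucket: string, key: string): Promise<void>
}

async function headExists(client: HeadLike, bucket: string, key: string): Promise<boolean> {
  try {
    await client.head(bucket, key)
    return true
  } catch (error) {
    if ((error as Error).name === 'NotFound') return false
    throw error // network/auth failures propagate instead of being swallowed
  }
}
```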

Generate temporary URLs that grant time-limited access to private files:

// Download URL (GET)
const download = await this.storage.getPresignedDownloadUrl('docs/report.pdf', 3600)
// Upload URL (PUT)
const upload = await this.storage.getPresignedUploadUrl('docs/new-report.pdf', 3600)
// Delete URL (DELETE)
const del = await this.storage.getPresignedDeleteUrl('docs/old-report.pdf', 3600)

Each method returns a PresignedUrlResult:

| Field | Type | Description |
| --- | --- | --- |
| `url` | `string` | The presigned URL |
| `expiresIn` | `number` | Expiry duration in seconds |
| `expiresAt` | `Date` | Expiration timestamp |
| `method` | `'GET' \| 'PUT' \| 'DELETE'` | HTTP method the URL is valid for |

The expiresIn parameter is validated against the configured limits:

  • Minimum: 1 second
  • Maximum: the presignedUrl.maxExpiry value from your config (up to 604800 seconds / 7 days)
  • If omitted, presignedUrl.defaultExpiry is used

A PresignedUrlInvalidExpiryError is thrown if the value falls outside the allowed range.
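The validation rules above can be sketched as follows; the function name and constants are illustrative, not Stratal's actual implementation:

```typescript
// Illustrative expiry validation matching the rules above; names are assumptions.
const MIN_EXPIRY = 1
const HARD_MAX_EXPIRY = 604800 // 7 days, the S3 presigned-URL ceiling

function validateExpiry(
  requested: number | undefined,
  defaultExpiry: number,
  maxExpiry: number,
): number {
  // Fall back to the configured default when no expiry is passed
  const expiresIn = requested ?? defaultExpiry
  const max = Math.min(maxExpiry, HARD_MAX_EXPIRY)
  if (expiresIn < MIN_EXPIRY || expiresIn > max) {
    // Stands in for PresignedUrlInvalidExpiryError
    throw new Error(`Expiry must be between ${MIN_EXPIRY} and ${max} seconds`)
  }
  return expiresIn
}
```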

The root path in a disk configuration supports template variables that are resolved at runtime:

| Variable | Resolved to | Example |
| --- | --- | --- |
| `{date}` | Current date in `YYYY-MM-DD` format | `2026-02-26` |
| `{year}` | Current year | `2026` |
| `{month}` | Current month, zero-padded | `02` |

For example, a root of uploads/{year}/{month} resolves to uploads/2026/02. This keeps files organized by date automatically without any manual path construction.

Storage operations throw specific error classes that extend ApplicationError. Each error includes contextual data (file path, disk name, etc.) and is automatically translated based on the request locale:

| Error class | When thrown |
| --- | --- |
| `FileNotFoundError` | File does not exist |
| `InvalidDiskError` | Specified disk not found in configuration |
| `DiskNotConfiguredError` | Disk configuration is missing or incomplete |
| `StorageProviderNotSupportedError` | Provider type is not implemented |
| `PresignedUrlInvalidExpiryError` | Presigned URL expiry is outside the 1s–7d range |
| `InvalidFileTypeError` | File type is not allowed |
| `FileTooLargeError` | File exceeds the configured size limit |

import {
  FileNotFoundError,
  InvalidDiskError,
  PresignedUrlInvalidExpiryError,
} from 'stratal/storage'

try {
  const file = await this.storage.download('docs/missing.pdf')
} catch (error) {
  if (error instanceof FileNotFoundError) {
    // File does not exist — return 404
  }
  if (error instanceof InvalidDiskError) {
    // Disk name is invalid — check configuration
  }
}

Cloudflare R2 is S3-compatible, so it works with the storage module’s s3 provider. The key differences are:

  • Set region to 'auto'
  • Use your R2 endpoint: https://<account-id>.r2.cloudflarestorage.com
  • Generate R2 API tokens in the Cloudflare dashboard for accessKeyId and secretAccessKey

{
  disk: 'r2',
  provider: 's3',
  endpoint: 'https://your-account-id.r2.cloudflarestorage.com',
  bucket: 'my-r2-bucket',
  region: 'auto',
  accessKeyId: env.R2_ACCESS_KEY_ID,
  secretAccessKey: env.R2_SECRET_ACCESS_KEY,
  root: 'uploads/{year}/{month}',
  visibility: 'private',
}

Best practices
  • Use path templates. Leverage {year}, {month}, and {date} in your root configuration to organize files automatically.
  • Minimize disk switching. Design your disk layout so most operations use the default disk. Pass an explicit disk name only when you need a different bucket or visibility.
  • Use presigned URLs for client uploads. Generate presigned upload URLs instead of proxying file uploads through your API. This offloads bandwidth to the storage provider.
  • Keep expiry times short. Use the shortest practical presigned URL expiry. The default of 1 hour is reasonable for most use cases.
  • Handle errors gracefully. Always catch storage errors — a file might not exist, the disk might be misconfigured, or the network might be unreachable.
  • Separate environments. Use different buckets or disks for development, staging, and production to prevent accidental data overlap.
  • Use independent credentials. Each disk should have its own access credentials with the minimum required permissions.