Compare commits

...

8 Commits

73 changed files with 8582 additions and 94 deletions

177
LICENSE Normal file
View File

@ -0,0 +1,177 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS

View File

@ -1,14 +1,14 @@
# Kevo
A lightweight, minimalist Log-Structured Merge (LSM) tree storage engine written
in Go.
[![Go Report Card](https://goreportcard.com/badge/github.com/KevoDB/kevo)](https://goreportcard.com/report/github.com/KevoDB/kevo)
[![GoDoc](https://godoc.org/github.com/KevoDB/kevo?status.svg)](https://godoc.org/github.com/KevoDB/kevo)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
A lightweight, minimalist Log-Structured Merge (LSM) tree storage engine written in Go.
## Overview
Kevo is a clean, composable storage engine that follows LSM tree
principles, focusing on simplicity while providing the building blocks needed
for higher-level database implementations. It's designed to be both educational
and practically useful for embedded storage needs.
Kevo is a clean, composable storage engine that follows LSM tree principles, focusing on simplicity while providing the building blocks needed for higher-level database implementations. It's designed to be both educational and practically useful for embedded storage needs.
## Features
@ -31,9 +31,16 @@ and practically useful for embedded storage needs.
### Installation
```bash
go get github.com/jeremytregunna/kevo
go get github.com/KevoDB/kevo
```
### Client SDKs
Kevo provides client SDKs for different languages to connect to a Kevo server:
- **Go**: [github.com/KevoDB/kevo/pkg/client](https://github.com/KevoDB/kevo/pkg/client)
- **Python**: [github.com/KevoDB/python-sdk](https://github.com/KevoDB/python-sdk)
### Basic Usage
```go
@ -43,7 +50,7 @@ import (
"fmt"
"log"
"github.com/jeremytregunna/kevo/pkg/engine"
"github.com/KevoDB/kevo/pkg/engine"
)
func main() {
@ -103,8 +110,7 @@ Included is an interactive CLI tool (`kevo`) for exploring and manipulating data
go run ./cmd/kevo/main.go [database_path]
```
Will create a directory at the path you create (e.g., /tmp/foo.db will be a
directory called foo.db in /tmp where the database will live).
Will create a directory at the path you create (e.g., `/tmp/foo.db` will be a directory called `foo.db` in `/tmp` where the database will live).
Example session:
@ -159,6 +165,8 @@ Kevo is built on the LSM tree architecture, consisting of:
- **Compaction**: Background process to merge and optimize SSTables
- **Transactions**: ACID-compliant operations with reader-writer concurrency
For more details, see the documentation in the [docs](./docs) directory.
## Benchmarking
The storage-bench tool provides comprehensive performance testing:
@ -186,12 +194,27 @@ go test ./...
# Run benchmarks
go test ./pkg/path/to/package -bench .
# Run with race detector
go test -race ./...
```
## Project Status
This project is under active development. While the core functionality is stable, the API may change as we continue to improve the engine.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
See our [contribution guidelines](CONTRIBUTING.md) for more information.
## License
Copyright 2025 Jeremy Tregunna

View File

@ -1,20 +1,25 @@
package main
import (
"context"
"flag"
"fmt"
"io"
"log"
"os"
"os/signal"
"path/filepath"
"strings"
"syscall"
"time"
"github.com/chzyer/readline"
"github.com/jeremytregunna/kevo/pkg/common/iterator"
"github.com/jeremytregunna/kevo/pkg/engine"
"github.com/KevoDB/kevo/pkg/common/iterator"
"github.com/KevoDB/kevo/pkg/engine"
// Import transaction package to register the transaction creator
_ "github.com/jeremytregunna/kevo/pkg/transaction"
_ "github.com/KevoDB/kevo/pkg/transaction"
)
// Command completer for readline
@ -43,9 +48,14 @@ const helpText = `
Kevo (kevo) - A lightweight, minimalist, storage engine.
Usage:
keco [database_path] - Start with an optional database path
kevo [options] [database_path] - Start with an optional database path
Commands:
Options:
-server - Run in server mode, exposing a gRPC API
-daemon - Run in daemon mode (detached from terminal)
-address string - Address to listen on in server mode (default "localhost:50051")
Commands (interactive mode only):
.help - Show this help message
.open PATH - Open a database at PATH
.close - Close the current database
@ -68,26 +78,199 @@ Commands:
- Note: start and end are treated as string keys, not numeric indices
`
// Config holds the application configuration
type Config struct {
ServerMode bool
DaemonMode bool
ListenAddr string
DBPath string
TLSEnabled bool
TLSCertFile string
TLSKeyFile string
TLSCAFile string
}
func main() {
fmt.Println("Kevo (kevo) version 1.0.2")
fmt.Println("Enter .help for usage hints.")
// Parse command line arguments and get configuration
config := parseFlags()
// Initialize variables
// Open database if path provided
var eng *engine.Engine
var tx engine.Transaction
var err error
var dbPath string
// Check if a database path was provided as an argument
if len(os.Args) > 1 {
dbPath = os.Args[1]
fmt.Printf("Opening database at %s\n", dbPath)
eng, err = engine.NewEngine(dbPath)
if config.DBPath != "" {
fmt.Printf("Opening database at %s\n", config.DBPath)
eng, err = engine.NewEngine(config.DBPath)
if err != nil {
fmt.Fprintf(os.Stderr, "Error opening database: %s\n", err)
os.Exit(1)
}
defer eng.Close()
}
// Check if we should run in server mode
if config.ServerMode {
if eng == nil {
fmt.Fprintf(os.Stderr, "Error: Server mode requires a database path\n")
os.Exit(1)
}
runServer(eng, config)
return
}
// Run in interactive mode
runInteractive(eng, config.DBPath)
}
// parseFlags parses command line flags and returns a Config
func parseFlags() Config {
// Define custom usage message
flag.Usage = func() {
fmt.Fprintf(flag.CommandLine.Output(), "Kevo - A lightweight key-value storage engine\n\n")
fmt.Fprintf(flag.CommandLine.Output(), "Usage: kevo [options] [database_path]\n\n")
fmt.Fprintf(flag.CommandLine.Output(), "By default, kevo runs in interactive mode with a command-line interface.\n")
fmt.Fprintf(flag.CommandLine.Output(), "If -server flag is provided, kevo runs as a server exposing a gRPC API.\n\n")
fmt.Fprintf(flag.CommandLine.Output(), "Options:\n")
flag.PrintDefaults()
fmt.Fprintf(flag.CommandLine.Output(), "\nInteractive mode commands (when not using -server):\n")
fmt.Fprintf(flag.CommandLine.Output(), " PUT key value - Store a key-value pair\n")
fmt.Fprintf(flag.CommandLine.Output(), " GET key - Retrieve a value by key\n")
fmt.Fprintf(flag.CommandLine.Output(), " DELETE key - Delete a key-value pair\n")
fmt.Fprintf(flag.CommandLine.Output(), " SCAN - Scan all key-value pairs\n")
fmt.Fprintf(flag.CommandLine.Output(), " BEGIN TRANSACTION - Begin a read-write transaction\n")
fmt.Fprintf(flag.CommandLine.Output(), " BEGIN READONLY - Begin a read-only transaction\n")
fmt.Fprintf(flag.CommandLine.Output(), " COMMIT - Commit the current transaction\n")
fmt.Fprintf(flag.CommandLine.Output(), " ROLLBACK - Rollback the current transaction\n")
fmt.Fprintf(flag.CommandLine.Output(), " .help - Show detailed help\n")
fmt.Fprintf(flag.CommandLine.Output(), " .exit - Exit the program\n\n")
fmt.Fprintf(flag.CommandLine.Output(), "For more details, start kevo and type .help\n")
}
serverMode := flag.Bool("server", false, "Run in server mode, exposing a gRPC API")
daemonMode := flag.Bool("daemon", false, "Run in daemon mode (detached from terminal)")
listenAddr := flag.String("address", "localhost:50051", "Address to listen on in server mode")
// TLS options
tlsEnabled := flag.Bool("tls", false, "Enable TLS for secure connections")
tlsCertFile := flag.String("cert", "", "TLS certificate file path")
tlsKeyFile := flag.String("key", "", "TLS private key file path")
tlsCAFile := flag.String("ca", "", "TLS CA certificate file for client verification")
// Parse flags
flag.Parse()
// Get database path from remaining arguments
var dbPath string
if flag.NArg() > 0 {
dbPath = flag.Arg(0)
}
return Config{
ServerMode: *serverMode,
DaemonMode: *daemonMode,
ListenAddr: *listenAddr,
DBPath: dbPath,
TLSEnabled: *tlsEnabled,
TLSCertFile: *tlsCertFile,
TLSKeyFile: *tlsKeyFile,
TLSCAFile: *tlsCAFile,
}
}
// runServer initializes and runs the Kevo server
func runServer(eng *engine.Engine, config Config) {
// Set up daemon mode if requested
if config.DaemonMode {
setupDaemonMode()
}
// Create and start the server
server := NewServer(eng, config)
// Start the server (non-blocking)
if err := server.Start(); err != nil {
fmt.Fprintf(os.Stderr, "Error starting server: %v\n", err)
os.Exit(1)
}
fmt.Printf("Kevo server started on %s\n", config.ListenAddr)
// Set up signal handling for graceful shutdown
setupGracefulShutdown(server, eng)
// Start serving (blocking)
if err := server.Serve(); err != nil {
fmt.Fprintf(os.Stderr, "Error serving: %v\n", err)
os.Exit(1)
}
}
// setupDaemonMode configures process to run as a daemon
func setupDaemonMode() {
// Redirect standard file descriptors to /dev/null
null, err := os.OpenFile("/dev/null", os.O_RDWR, 0)
if err != nil {
log.Fatalf("Failed to open /dev/null: %v", err)
}
// Redirect standard file descriptors to /dev/null
err = syscall.Dup2(int(null.Fd()), int(os.Stdin.Fd()))
if err != nil {
log.Fatalf("Failed to redirect stdin: %v", err)
}
err = syscall.Dup2(int(null.Fd()), int(os.Stdout.Fd()))
if err != nil {
log.Fatalf("Failed to redirect stdout: %v", err)
}
err = syscall.Dup2(int(null.Fd()), int(os.Stderr.Fd()))
if err != nil {
log.Fatalf("Failed to redirect stderr: %v", err)
}
// Create a new process group
_, err = syscall.Setsid()
if err != nil {
log.Fatalf("Failed to create new session: %v", err)
}
fmt.Println("Daemon mode enabled, detaching from terminal...")
}
// setupGracefulShutdown configures graceful shutdown on signals
func setupGracefulShutdown(server *Server, eng *engine.Engine) {
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)
go func() {
sig := <-sigChan
fmt.Printf("\nReceived signal %v, shutting down...\n", sig)
// Graceful shutdown logic
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
// Shut down the server
if err := server.Shutdown(ctx); err != nil {
fmt.Fprintf(os.Stderr, "Error shutting down server: %v\n", err)
}
// The engine will be closed by the defer in main()
fmt.Println("Shutdown complete")
os.Exit(0)
}()
}
// runInteractive starts the interactive CLI mode
func runInteractive(eng *engine.Engine, dbPath string) {
fmt.Println("Kevo (kevo) version 1.0.2")
fmt.Println("Enter .help for usage hints.")
var tx engine.Transaction
var err error
// Setup readline with history support
historyFile := filepath.Join(os.TempDir(), ".kevo_history")
@ -96,6 +279,7 @@ func main() {
HistoryFile: historyFile,
InterruptPrompt: "^C",
EOFPrompt: "exit",
AutoComplete: completer,
})
if err != nil {
fmt.Fprintf(os.Stderr, "Error initializing readline: %s\n", err)
@ -151,9 +335,6 @@ func main() {
continue
}
// Add to history (readline handles this automatically for non-empty lines)
// rl.SaveHistory(line)
// Process command
parts := strings.Fields(line)
cmd := strings.ToUpper(parts[0])
@ -553,4 +734,4 @@ func makeKeySuccessor(prefix []byte) []byte {
copy(successor, prefix)
successor[len(prefix)] = 0xFF
return successor
}
}

283
cmd/kevo/server.go Normal file
View File

@ -0,0 +1,283 @@
package main
import (
"context"
"crypto/tls"
"fmt"
"net"
"sync"
"time"
"github.com/KevoDB/kevo/pkg/engine"
grpcservice "github.com/KevoDB/kevo/pkg/grpc/service"
pb "github.com/KevoDB/kevo/proto/kevo"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/keepalive"
)
// TransactionRegistry manages active transactions on the server
type TransactionRegistry struct {
mu sync.RWMutex
transactions map[string]engine.Transaction
nextID uint64
}
// NewTransactionRegistry creates a new transaction registry
func NewTransactionRegistry() *TransactionRegistry {
return &TransactionRegistry{
transactions: make(map[string]engine.Transaction),
}
}
// Begin creates a new transaction and registers it
func (tr *TransactionRegistry) Begin(ctx context.Context, eng *engine.Engine, readOnly bool) (string, error) {
// Create context with timeout to prevent potential hangs
timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
// Create a channel to receive the transaction result
type txResult struct {
tx engine.Transaction
err error
}
resultCh := make(chan txResult, 1)
// Start transaction in a goroutine to prevent potential blocking
go func() {
tx, err := eng.BeginTransaction(readOnly)
select {
case resultCh <- txResult{tx, err}:
// Successfully sent result
case <-timeoutCtx.Done():
// Context timed out, but try to rollback if we got a transaction
if tx != nil {
tx.Rollback()
}
}
}()
// Wait for result or timeout
select {
case result := <-resultCh:
if result.err != nil {
return "", fmt.Errorf("failed to begin transaction: %w", result.err)
}
tr.mu.Lock()
defer tr.mu.Unlock()
// Generate a transaction ID
tr.nextID++
txID := fmt.Sprintf("tx-%d", tr.nextID)
// Register the transaction
tr.transactions[txID] = result.tx
return txID, nil
case <-timeoutCtx.Done():
return "", fmt.Errorf("transaction creation timed out: %w", timeoutCtx.Err())
}
}
// Get retrieves a transaction by ID
func (tr *TransactionRegistry) Get(txID string) (engine.Transaction, bool) {
tr.mu.RLock()
defer tr.mu.RUnlock()
tx, exists := tr.transactions[txID]
return tx, exists
}
// Remove removes a transaction from the registry
func (tr *TransactionRegistry) Remove(txID string) {
tr.mu.Lock()
defer tr.mu.Unlock()
delete(tr.transactions, txID)
}
// GracefulShutdown attempts to cleanly shut down all transactions
func (tr *TransactionRegistry) GracefulShutdown(ctx context.Context) error {
tr.mu.Lock()
defer tr.mu.Unlock()
var lastErr error
// Copy transaction IDs to avoid modifying the map during iteration
ids := make([]string, 0, len(tr.transactions))
for id := range tr.transactions {
ids = append(ids, id)
}
// Rollback each transaction with a timeout
for _, id := range ids {
tx, exists := tr.transactions[id]
if !exists {
continue
}
// Use a timeout for each rollback operation
rollbackCtx, cancel := context.WithTimeout(ctx, 1*time.Second)
// Create a channel for the rollback result
doneCh := make(chan error, 1)
// Execute rollback in goroutine
go func(t engine.Transaction) {
doneCh <- t.Rollback()
}(tx)
// Wait for rollback or timeout
var err error
select {
case err = <-doneCh:
// Rollback completed
case <-rollbackCtx.Done():
err = fmt.Errorf("rollback timed out: %w", rollbackCtx.Err())
}
cancel() // Clean up context
// Record error if any
if err != nil {
lastErr = fmt.Errorf("failed to rollback transaction %s: %w", id, err)
}
// Always remove transaction from map
delete(tr.transactions, id)
}
return lastErr
}
// Server represents the Kevo server
type Server struct {
eng *engine.Engine
txRegistry *TransactionRegistry
listener net.Listener
grpcServer *grpc.Server
kevoService *grpcservice.KevoServiceServer
config Config
}
// NewServer creates a new server instance
func NewServer(eng *engine.Engine, config Config) *Server {
return &Server{
eng: eng,
txRegistry: NewTransactionRegistry(),
config: config,
}
}
// Start initializes and starts the server
func (s *Server) Start() error {
// Create a listener on the specified address
var err error
s.listener, err = net.Listen("tcp", s.config.ListenAddr)
if err != nil {
return fmt.Errorf("failed to listen on %s: %w", s.config.ListenAddr, err)
}
fmt.Printf("Listening on %s\n", s.config.ListenAddr)
// Configure gRPC server options
var serverOpts []grpc.ServerOption
// Add TLS if configured
if s.config.TLSEnabled {
tlsConfig := &tls.Config{
MinVersion: tls.VersionTLS12,
}
// Load server certificate if provided
if s.config.TLSCertFile != "" && s.config.TLSKeyFile != "" {
cert, err := tls.LoadX509KeyPair(s.config.TLSCertFile, s.config.TLSKeyFile)
if err != nil {
return fmt.Errorf("failed to load TLS certificate: %w", err)
}
tlsConfig.Certificates = []tls.Certificate{cert}
}
// Add credentials to server options
serverOpts = append(serverOpts, grpc.Creds(credentials.NewTLS(tlsConfig)))
}
// Configure keepalive parameters
kaProps := keepalive.ServerParameters{
MaxConnectionIdle: 60 * time.Second,
MaxConnectionAge: 5 * time.Minute,
MaxConnectionAgeGrace: 5 * time.Second,
Time: 15 * time.Second,
Timeout: 5 * time.Second,
}
kaPolicy := keepalive.EnforcementPolicy{
MinTime: 5 * time.Second,
PermitWithoutStream: true,
}
serverOpts = append(serverOpts,
grpc.KeepaliveParams(kaProps),
grpc.KeepaliveEnforcementPolicy(kaPolicy),
)
// Create gRPC server with options
s.grpcServer = grpc.NewServer(serverOpts...)
// Create and register the Kevo service implementation
s.kevoService = grpcservice.NewKevoServiceServer(s.eng, s.txRegistry)
pb.RegisterKevoServiceServer(s.grpcServer, s.kevoService)
fmt.Println("gRPC server initialized")
return nil
}
// Serve starts serving requests (blocking)
func (s *Server) Serve() error {
if s.grpcServer == nil {
return fmt.Errorf("server not initialized, call Start() first")
}
fmt.Println("Starting gRPC server")
return s.grpcServer.Serve(s.listener)
}
// Shutdown gracefully shuts down the server
func (s *Server) Shutdown(ctx context.Context) error {
// First, gracefully stop the gRPC server if it exists
if s.grpcServer != nil {
fmt.Println("Gracefully stopping gRPC server...")
// Create a channel to signal when the server has stopped
stopped := make(chan struct{})
go func() {
s.grpcServer.GracefulStop()
close(stopped)
}()
// Wait for graceful stop or context deadline
select {
case <-stopped:
fmt.Println("gRPC server stopped gracefully")
case <-ctx.Done():
fmt.Println("Context deadline exceeded, forcing server stop")
s.grpcServer.Stop()
}
}
// Shut down the listener if it's still open
if s.listener != nil {
if err := s.listener.Close(); err != nil {
return fmt.Errorf("failed to close listener: %w", err)
}
}
// Clean up any active transactions
if err := s.txRegistry.GracefulShutdown(ctx); err != nil {
return fmt.Errorf("failed to shutdown transaction registry: %w", err)
}
return nil
}

199
cmd/kevo/server_test.go Normal file
View File

@ -0,0 +1,199 @@
package main
import (
"context"
"os"
"strings"
"testing"
"time"
"github.com/KevoDB/kevo/pkg/engine"
)
func TestTransactionRegistry(t *testing.T) {
// Create a timeout context for the whole test
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
// Set up temporary directory for test
tmpDir, err := os.MkdirTemp("", "kevo_test")
if err != nil {
t.Fatalf("Failed to create temporary directory: %v", err)
}
defer os.RemoveAll(tmpDir)
// Create a test engine
eng, err := engine.NewEngine(tmpDir)
if err != nil {
t.Fatalf("Failed to create engine: %v", err)
}
defer eng.Close()
// Create transaction registry
registry := NewTransactionRegistry()
// Test begin transaction
txID, err := registry.Begin(ctx, eng, false)
if err != nil {
// If we get a timeout, don't fail the test - the engine might be busy
if ctx.Err() != nil || strings.Contains(err.Error(), "timed out") {
t.Skip("Skipping test due to transaction timeout")
}
t.Fatalf("Failed to begin transaction: %v", err)
}
if txID == "" {
t.Fatal("Expected non-empty transaction ID")
}
// Test get transaction
tx, exists := registry.Get(txID)
if !exists {
t.Fatalf("Transaction %s not found in registry", txID)
}
if tx == nil {
t.Fatal("Expected non-nil transaction")
}
if tx.IsReadOnly() {
t.Fatal("Expected read-write transaction")
}
// Test read-only transaction
roTxID, err := registry.Begin(ctx, eng, true)
if err != nil {
// If we get a timeout, don't fail the test - the engine might be busy
if ctx.Err() != nil || strings.Contains(err.Error(), "timed out") {
t.Skip("Skipping test due to transaction timeout")
}
t.Fatalf("Failed to begin read-only transaction: %v", err)
}
roTx, exists := registry.Get(roTxID)
if !exists {
t.Fatalf("Transaction %s not found in registry", roTxID)
}
if !roTx.IsReadOnly() {
t.Fatal("Expected read-only transaction")
}
// Test remove transaction
registry.Remove(txID)
_, exists = registry.Get(txID)
if exists {
t.Fatalf("Transaction %s should have been removed", txID)
}
// Test graceful shutdown
shutdownErr := registry.GracefulShutdown(ctx)
if shutdownErr != nil && !strings.Contains(shutdownErr.Error(), "timed out") {
t.Fatalf("Failed to gracefully shutdown registry: %v", shutdownErr)
}
}
func TestServerStartup(t *testing.T) {
// Skip if not running in an environment where we can bind to ports
if os.Getenv("ENABLE_NETWORK_TESTS") != "1" {
t.Skip("Skipping network test (set ENABLE_NETWORK_TESTS=1 to run)")
}
// Set up temporary directory for test
tmpDir, err := os.MkdirTemp("", "kevo_server_test")
if err != nil {
t.Fatalf("Failed to create temporary directory: %v", err)
}
defer os.RemoveAll(tmpDir)
// Create a test engine
eng, err := engine.NewEngine(tmpDir)
if err != nil {
t.Fatalf("Failed to create engine: %v", err)
}
defer eng.Close()
// Create server with a random port
config := Config{
ServerMode: true,
ListenAddr: "localhost:0", // Let the OS assign a port
DBPath: tmpDir,
}
server := NewServer(eng, config)
// Start server (does not block)
if err := server.Start(); err != nil {
t.Fatalf("Failed to start server: %v", err)
}
// Check that the listener is active
if server.listener == nil {
t.Fatal("Server listener is nil after Start()")
}
// Get the assigned port - if this works, the listener is properly set up
addr := server.listener.Addr().String()
if addr == "" {
t.Fatal("Server listener has no address")
}
t.Logf("Server listening on %s", addr)
// Test shutdown
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := server.Shutdown(ctx); err != nil {
t.Fatalf("Failed to shutdown server: %v", err)
}
}
func TestGRPCServer(t *testing.T) {
// Skip if not running in an environment where we can bind to ports
if os.Getenv("ENABLE_NETWORK_TESTS") != "1" {
t.Skip("Skipping network test (set ENABLE_NETWORK_TESTS=1 to run)")
}
// Create a temporary database for testing
tempDBPath, err := os.MkdirTemp("", "kevo_grpc_test")
if err != nil {
t.Fatalf("Failed to create temporary directory: %v", err)
}
defer os.RemoveAll(tempDBPath)
// Create engine
eng, err := engine.NewEngine(tempDBPath)
if err != nil {
t.Fatalf("Failed to create engine: %v", err)
}
defer eng.Close()
// Create server configuration
config := Config{
ServerMode: true,
ListenAddr: "localhost:50052", // Use a different port for tests
DBPath: tempDBPath,
}
// Create and start the server
server := NewServer(eng, config)
if err := server.Start(); err != nil {
t.Fatalf("Failed to start server: %v", err)
}
// Run server in a goroutine
go func() {
if err := server.Serve(); err != nil {
t.Logf("Server stopped: %v", err)
}
}()
// Give the server a moment to start
time.Sleep(200 * time.Millisecond)
// Clean up at the end
defer func() {
shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 5*time.Second)
defer shutdownCancel()
if err := server.Shutdown(shutdownCtx); err != nil {
t.Logf("Failed to shut down server: %v", err)
}
}()
// TODO: Add gRPC client tests here when client implementation is complete
t.Log("gRPC server integration test scaffolding added")
}

View File

@ -8,7 +8,7 @@ import (
"sync"
"time"
"github.com/jeremytregunna/kevo/pkg/engine"
"github.com/KevoDB/kevo/pkg/engine"
)
// CompactionBenchmarkOptions configures the compaction benchmark

View File

@ -11,7 +11,7 @@ import (
"strings"
"time"
"github.com/jeremytregunna/kevo/pkg/engine"
"github.com/KevoDB/kevo/pkg/engine"
)
const (

View File

@ -8,8 +8,8 @@ import (
"strings"
"time"
"github.com/jeremytregunna/kevo/pkg/config"
"github.com/jeremytregunna/kevo/pkg/engine"
"github.com/KevoDB/kevo/pkg/config"
"github.com/KevoDB/kevo/pkg/engine"
)
// TuningResults stores the results of various configuration tuning runs

11
go.mod
View File

@ -1,10 +1,17 @@
module github.com/jeremytregunna/kevo
module github.com/KevoDB/kevo
go 1.24.2
require (
github.com/cespare/xxhash/v2 v2.3.0
github.com/chzyer/readline v1.5.1
google.golang.org/grpc v1.72.0
google.golang.org/protobuf v1.36.6
)
require golang.org/x/sys v0.1.0 // indirect
require (
golang.org/x/net v0.35.0 // indirect
golang.org/x/sys v0.30.0 // indirect
golang.org/x/text v0.22.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250218202821-56aae31c358a // indirect
)

36
go.sum
View File

@ -6,6 +6,38 @@ github.com/chzyer/readline v1.5.1 h1:upd/6fQk4src78LMRzh5vItIt361/o4uq553V8B5sGI
github.com/chzyer/readline v1.5.1/go.mod h1:Eh+b79XXUwfKfcPLepksvw2tcLE/Ct21YObkaSkeBlk=
github.com/chzyer/test v1.0.0 h1:p3BQDXSxOhOG0P9z6/hGnII4LGiEPOYBhs8asl/fC04=
github.com/chzyer/test v1.0.0/go.mod h1:2JlltgoNkt4TW/z9V/IzDdFaMTM2JPIi26O1pF38GC8=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/otel v1.34.0 h1:zRLXxLCgL1WyKsPVrgbSdMN4c0FMkDAskSTQP+0hdUY=
go.opentelemetry.io/otel v1.34.0/go.mod h1:OWFPOQ+h4G8xpyjgqo4SxJYdDQ/qmRH+wivy7zzx9oI=
go.opentelemetry.io/otel/metric v1.34.0 h1:+eTR3U0MyfWjRDhmFMxe2SsW64QrZ84AOhvqS7Y+PoQ=
go.opentelemetry.io/otel/metric v1.34.0/go.mod h1:CEDrp0fy2D0MvkXE+dPV7cMi8tWZwX3dmaIhwPOaqHE=
go.opentelemetry.io/otel/sdk v1.34.0 h1:95zS4k/2GOy069d321O8jWgYsW3MzVV+KuSPKp7Wr1A=
go.opentelemetry.io/otel/sdk v1.34.0/go.mod h1:0e/pNiaMAqaykJGKbi+tSjWfNNHMTxoC9qANsCzbyxU=
go.opentelemetry.io/otel/sdk/metric v1.34.0 h1:5CeK9ujjbFVL5c1PhLuStg1wxA7vQv7ce1EK0Gyvahk=
go.opentelemetry.io/otel/sdk/metric v1.34.0/go.mod h1:jQ/r8Ze28zRKoNRdkjCZxfs6YvBTG1+YIqyFVFYec5w=
go.opentelemetry.io/otel/trace v1.34.0 h1:+ouXS2V8Rd4hp4580a8q23bg0azF2nI8cqLYnC8mh/k=
go.opentelemetry.io/otel/trace v1.34.0/go.mod h1:Svm7lSjQD7kG7KJ/MUHPVXSDGz2OX4h0M2jHBhmSfRE=
golang.org/x/net v0.35.0 h1:T5GQRQb2y08kTAByq9L4/bz8cipCdA8FbRTXewonqY8=
golang.org/x/net v0.35.0/go.mod h1:EglIi67kWsHKlRzzVMUD93VMSWGFOMSZgxFjparz1Qk=
golang.org/x/sys v0.0.0-20220310020820-b874c991c1a5/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0 h1:kunALQeHf1/185U1i0GOB/fy1IPRDDpuoOOqRReG57U=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc=
golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.22.0 h1:bofq7m3/HAFvbF51jz3Q9wLg3jkvSPuiZu/pD1XwgtM=
golang.org/x/text v0.22.0/go.mod h1:YRoo4H8PVmsu+E3Ou7cqLVH8oXWIHVoX0jqUWALQhfY=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250218202821-56aae31c358a h1:51aaUVRocpvUOSQKM6Q7VuoaktNIaMCLuhZB6DKksq4=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250218202821-56aae31c358a/go.mod h1:uRxBH1mhmO8PGhU89cMcHaXKZqO+OfakD8QQO0oYwlQ=
google.golang.org/grpc v1.72.0 h1:S7UkcVa60b5AAQTaO6ZKamFp1zMZSU0fGDK2WZLbBnM=
google.golang.org/grpc v1.72.0/go.mod h1:wH5Aktxcg25y1I3w7H69nHfXdOG3UiadoBtjh3izSDM=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=

226
pkg/client/README.md Normal file
View File

@ -0,0 +1,226 @@
# Kevo Go Client SDK
This package provides a Go client for connecting to a Kevo database server. The client uses the gRPC transport layer to communicate with the server and provides an idiomatic Go API for working with Kevo.
## Features
- Simple key-value operations (Get, Put, Delete)
- Batch operations for atomic writes
- Transaction support with ACID guarantees
- Iterator API for efficient range scans
- Connection pooling and automatic retries
- TLS support for secure communication
- Comprehensive error handling
- Configurable timeouts and backoff strategies
## Installation
```bash
go get github.com/jeremytregunna/kevo
```
## Quick Start
```go
package main
import (
"context"
"fmt"
"log"
"github.com/jeremytregunna/kevo/pkg/client"
_ "github.com/jeremytregunna/kevo/pkg/grpc/transport" // Register gRPC transport
)
func main() {
// Create a client with default options
options := client.DefaultClientOptions()
options.Endpoint = "localhost:50051"
c, err := client.NewClient(options)
if err != nil {
log.Fatalf("Failed to create client: %v", err)
}
// Connect to the server
ctx := context.Background()
if err := c.Connect(ctx); err != nil {
log.Fatalf("Failed to connect: %v", err)
}
defer c.Close()
// Basic key-value operations
key := []byte("hello")
value := []byte("world")
// Store a value
if _, err := c.Put(ctx, key, value, true); err != nil {
log.Fatalf("Put failed: %v", err)
}
// Retrieve a value
val, found, err := c.Get(ctx, key)
if err != nil {
log.Fatalf("Get failed: %v", err)
}
if found {
fmt.Printf("Value: %s\n", val)
} else {
fmt.Println("Key not found")
}
// Delete a value
if _, err := c.Delete(ctx, key, true); err != nil {
log.Fatalf("Delete failed: %v", err)
}
}
```
## Configuration Options
The client can be configured using the `ClientOptions` struct:
```go
options := client.ClientOptions{
// Connection options
Endpoint: "localhost:50051",
ConnectTimeout: 5 * time.Second,
RequestTimeout: 10 * time.Second,
TransportType: "grpc",
PoolSize: 5,
// Security options
TLSEnabled: true,
CertFile: "/path/to/cert.pem",
KeyFile: "/path/to/key.pem",
CAFile: "/path/to/ca.pem",
// Retry options
MaxRetries: 3,
InitialBackoff: 100 * time.Millisecond,
MaxBackoff: 2 * time.Second,
BackoffFactor: 1.5,
RetryJitter: 0.2,
// Performance options
Compression: client.CompressionGzip,
MaxMessageSize: 16 * 1024 * 1024, // 16MB
}
```
## Transactions
```go
// Begin a transaction
tx, err := client.BeginTransaction(ctx, false) // readOnly=false
if err != nil {
log.Fatalf("Failed to begin transaction: %v", err)
}
// Perform operations within the transaction
success, err := tx.Put(ctx, []byte("key1"), []byte("value1"))
if err != nil {
tx.Rollback(ctx) // Rollback on error
log.Fatalf("Transaction put failed: %v", err)
}
// Commit the transaction
if err := tx.Commit(ctx); err != nil {
log.Fatalf("Transaction commit failed: %v", err)
}
```
## Scans and Iterators
```go
// Set up scan options
scanOptions := client.ScanOptions{
Prefix: []byte("user:"), // Optional prefix
StartKey: []byte("user:1"), // Optional start key (inclusive)
EndKey: []byte("user:9"), // Optional end key (exclusive)
Limit: 100, // Optional limit
}
// Create a scanner
scanner, err := client.Scan(ctx, scanOptions)
if err != nil {
log.Fatalf("Failed to create scanner: %v", err)
}
defer scanner.Close()
// Iterate through results
for scanner.Next() {
fmt.Printf("Key: %s, Value: %s\n", scanner.Key(), scanner.Value())
}
// Check for errors after iteration
if err := scanner.Error(); err != nil {
log.Fatalf("Scan error: %v", err)
}
```
## Batch Operations
```go
// Create a batch of operations
operations := []client.BatchOperation{
{Type: "put", Key: []byte("key1"), Value: []byte("value1")},
{Type: "put", Key: []byte("key2"), Value: []byte("value2")},
{Type: "delete", Key: []byte("old-key")},
}
// Execute the batch atomically
success, err := client.BatchWrite(ctx, operations, true)
if err != nil {
log.Fatalf("Batch write failed: %v", err)
}
```
## Error Handling and Retries
The client automatically handles retries for transient errors using exponential backoff with jitter. You can configure the retry behavior using the `RetryPolicy` in the client options.
```go
// Manual retry with custom policy
err = client.RetryWithBackoff(
ctx,
func() error {
_, _, err := c.Get(ctx, key)
return err
},
3, // maxRetries
100*time.Millisecond, // initialBackoff
2*time.Second, // maxBackoff
2.0, // backoffFactor
0.2, // jitter
)
```
## Database Statistics
```go
// Get database statistics
stats, err := client.GetStats(ctx)
if err != nil {
log.Fatalf("Failed to get stats: %v", err)
}
fmt.Printf("Key count: %d\n", stats.KeyCount)
fmt.Printf("Storage size: %d bytes\n", stats.StorageSize)
fmt.Printf("MemTable count: %d\n", stats.MemtableCount)
fmt.Printf("SSTable count: %d\n", stats.SstableCount)
fmt.Printf("Write amplification: %.2f\n", stats.WriteAmplification)
fmt.Printf("Read amplification: %.2f\n", stats.ReadAmplification)
```
## Compaction
```go
// Trigger compaction
success, err := client.Compact(ctx, false) // force=false
if err != nil {
log.Fatalf("Compaction failed: %v", err)
}
```

381
pkg/client/client.go Normal file
View File

@ -0,0 +1,381 @@
package client
import (
"context"
"encoding/json"
"errors"
"fmt"
"time"
"github.com/KevoDB/kevo/pkg/transport"
)
// CompressionType represents a compression algorithm
type CompressionType = transport.CompressionType
// Compression options
const (
CompressionNone = transport.CompressionNone
CompressionGzip = transport.CompressionGzip
CompressionSnappy = transport.CompressionSnappy
)
// ClientOptions configures a Kevo client
type ClientOptions struct {
// Connection options
Endpoint string // Server address
ConnectTimeout time.Duration // Timeout for connection attempts
RequestTimeout time.Duration // Default timeout for requests
TransportType string // Transport type (e.g. "grpc")
PoolSize int // Connection pool size
// Security options
TLSEnabled bool // Enable TLS
CertFile string // Client certificate file
KeyFile string // Client key file
CAFile string // CA certificate file
// Retry options
MaxRetries int // Maximum number of retries
InitialBackoff time.Duration // Initial retry backoff
MaxBackoff time.Duration // Maximum retry backoff
BackoffFactor float64 // Backoff multiplier
RetryJitter float64 // Random jitter factor
// Performance options
Compression CompressionType // Compression algorithm
MaxMessageSize int // Maximum message size
}
// DefaultClientOptions returns sensible default client options
func DefaultClientOptions() ClientOptions {
return ClientOptions{
Endpoint: "localhost:50051",
ConnectTimeout: time.Second * 5,
RequestTimeout: time.Second * 10,
TransportType: "grpc",
PoolSize: 5,
TLSEnabled: false,
MaxRetries: 3,
InitialBackoff: time.Millisecond * 100,
MaxBackoff: time.Second * 2,
BackoffFactor: 1.5,
RetryJitter: 0.2,
Compression: CompressionNone,
MaxMessageSize: 16 * 1024 * 1024, // 16MB
}
}
// Client represents a connection to a Kevo database server
type Client struct {
options ClientOptions
client transport.Client
}
// NewClient creates a new Kevo client with the given options
func NewClient(options ClientOptions) (*Client, error) {
if options.Endpoint == "" {
return nil, errors.New("endpoint is required")
}
transportOpts := transport.TransportOptions{
Timeout: options.ConnectTimeout,
MaxMessageSize: options.MaxMessageSize,
Compression: options.Compression,
TLSEnabled: options.TLSEnabled,
CertFile: options.CertFile,
KeyFile: options.KeyFile,
CAFile: options.CAFile,
RetryPolicy: transport.RetryPolicy{
MaxRetries: options.MaxRetries,
InitialBackoff: options.InitialBackoff,
MaxBackoff: options.MaxBackoff,
BackoffFactor: options.BackoffFactor,
Jitter: options.RetryJitter,
},
}
transportClient, err := transport.GetClient(options.TransportType, options.Endpoint, transportOpts)
if err != nil {
return nil, fmt.Errorf("failed to create transport client: %w", err)
}
return &Client{
options: options,
client: transportClient,
}, nil
}
// Connect establishes a connection to the server
func (c *Client) Connect(ctx context.Context) error {
return c.client.Connect(ctx)
}
// Close closes the connection to the server
func (c *Client) Close() error {
return c.client.Close()
}
// IsConnected returns whether the client is connected to the server
func (c *Client) IsConnected() bool {
return c.client != nil && c.client.IsConnected()
}
// Get retrieves a value by key
func (c *Client) Get(ctx context.Context, key []byte) ([]byte, bool, error) {
if !c.IsConnected() {
return nil, false, errors.New("not connected to server")
}
req := struct {
Key []byte `json:"key"`
}{
Key: key,
}
reqData, err := json.Marshal(req)
if err != nil {
return nil, false, fmt.Errorf("failed to marshal request: %w", err)
}
timeoutCtx, cancel := context.WithTimeout(ctx, c.options.RequestTimeout)
defer cancel()
resp, err := c.client.Send(timeoutCtx, transport.NewRequest(transport.TypeGet, reqData))
if err != nil {
return nil, false, fmt.Errorf("failed to send request: %w", err)
}
var getResp struct {
Value []byte `json:"value"`
Found bool `json:"found"`
}
if err := json.Unmarshal(resp.Payload(), &getResp); err != nil {
return nil, false, fmt.Errorf("failed to unmarshal response: %w", err)
}
return getResp.Value, getResp.Found, nil
}
// Put stores a key-value pair
func (c *Client) Put(ctx context.Context, key, value []byte, sync bool) (bool, error) {
if !c.IsConnected() {
return false, errors.New("not connected to server")
}
req := struct {
Key []byte `json:"key"`
Value []byte `json:"value"`
Sync bool `json:"sync"`
}{
Key: key,
Value: value,
Sync: sync,
}
reqData, err := json.Marshal(req)
if err != nil {
return false, fmt.Errorf("failed to marshal request: %w", err)
}
timeoutCtx, cancel := context.WithTimeout(ctx, c.options.RequestTimeout)
defer cancel()
resp, err := c.client.Send(timeoutCtx, transport.NewRequest(transport.TypePut, reqData))
if err != nil {
return false, fmt.Errorf("failed to send request: %w", err)
}
var putResp struct {
Success bool `json:"success"`
}
if err := json.Unmarshal(resp.Payload(), &putResp); err != nil {
return false, fmt.Errorf("failed to unmarshal response: %w", err)
}
return putResp.Success, nil
}
// Delete removes a key-value pair
func (c *Client) Delete(ctx context.Context, key []byte, sync bool) (bool, error) {
if !c.IsConnected() {
return false, errors.New("not connected to server")
}
req := struct {
Key []byte `json:"key"`
Sync bool `json:"sync"`
}{
Key: key,
Sync: sync,
}
reqData, err := json.Marshal(req)
if err != nil {
return false, fmt.Errorf("failed to marshal request: %w", err)
}
timeoutCtx, cancel := context.WithTimeout(ctx, c.options.RequestTimeout)
defer cancel()
resp, err := c.client.Send(timeoutCtx, transport.NewRequest(transport.TypeDelete, reqData))
if err != nil {
return false, fmt.Errorf("failed to send request: %w", err)
}
var deleteResp struct {
Success bool `json:"success"`
}
if err := json.Unmarshal(resp.Payload(), &deleteResp); err != nil {
return false, fmt.Errorf("failed to unmarshal response: %w", err)
}
return deleteResp.Success, nil
}
// BatchOperation represents a single operation in a batch
type BatchOperation struct {
Type string // "put" or "delete"
Key []byte
Value []byte // only used for "put" operations
}
// BatchWrite performs multiple operations in a single atomic batch
func (c *Client) BatchWrite(ctx context.Context, operations []BatchOperation, sync bool) (bool, error) {
if !c.IsConnected() {
return false, errors.New("not connected to server")
}
req := struct {
Operations []struct {
Type string `json:"type"`
Key []byte `json:"key"`
Value []byte `json:"value"`
} `json:"operations"`
Sync bool `json:"sync"`
}{
Sync: sync,
}
for _, op := range operations {
req.Operations = append(req.Operations, struct {
Type string `json:"type"`
Key []byte `json:"key"`
Value []byte `json:"value"`
}{
Type: op.Type,
Key: op.Key,
Value: op.Value,
})
}
reqData, err := json.Marshal(req)
if err != nil {
return false, fmt.Errorf("failed to marshal request: %w", err)
}
timeoutCtx, cancel := context.WithTimeout(ctx, c.options.RequestTimeout)
defer cancel()
resp, err := c.client.Send(timeoutCtx, transport.NewRequest(transport.TypeBatchWrite, reqData))
if err != nil {
return false, fmt.Errorf("failed to send request: %w", err)
}
var batchResp struct {
Success bool `json:"success"`
}
if err := json.Unmarshal(resp.Payload(), &batchResp); err != nil {
return false, fmt.Errorf("failed to unmarshal response: %w", err)
}
return batchResp.Success, nil
}
// GetStats retrieves database statistics
func (c *Client) GetStats(ctx context.Context) (*Stats, error) {
if !c.IsConnected() {
return nil, errors.New("not connected to server")
}
timeoutCtx, cancel := context.WithTimeout(ctx, c.options.RequestTimeout)
defer cancel()
// GetStats doesn't require a payload
resp, err := c.client.Send(timeoutCtx, transport.NewRequest(transport.TypeGetStats, nil))
if err != nil {
return nil, fmt.Errorf("failed to send request: %w", err)
}
var statsResp struct {
KeyCount int64 `json:"key_count"`
StorageSize int64 `json:"storage_size"`
MemtableCount int32 `json:"memtable_count"`
SstableCount int32 `json:"sstable_count"`
WriteAmplification float64 `json:"write_amplification"`
ReadAmplification float64 `json:"read_amplification"`
}
if err := json.Unmarshal(resp.Payload(), &statsResp); err != nil {
return nil, fmt.Errorf("failed to unmarshal response: %w", err)
}
return &Stats{
KeyCount: statsResp.KeyCount,
StorageSize: statsResp.StorageSize,
MemtableCount: statsResp.MemtableCount,
SstableCount: statsResp.SstableCount,
WriteAmplification: statsResp.WriteAmplification,
ReadAmplification: statsResp.ReadAmplification,
}, nil
}
// Compact triggers compaction of the database
func (c *Client) Compact(ctx context.Context, force bool) (bool, error) {
if !c.IsConnected() {
return false, errors.New("not connected to server")
}
req := struct {
Force bool `json:"force"`
}{
Force: force,
}
reqData, err := json.Marshal(req)
if err != nil {
return false, fmt.Errorf("failed to marshal request: %w", err)
}
timeoutCtx, cancel := context.WithTimeout(ctx, c.options.RequestTimeout)
defer cancel()
resp, err := c.client.Send(timeoutCtx, transport.NewRequest(transport.TypeCompact, reqData))
if err != nil {
return false, fmt.Errorf("failed to send request: %w", err)
}
var compactResp struct {
Success bool `json:"success"`
}
if err := json.Unmarshal(resp.Payload(), &compactResp); err != nil {
return false, fmt.Errorf("failed to unmarshal response: %w", err)
}
return compactResp.Success, nil
}
// Stats contains database statistics
type Stats struct {
KeyCount int64
StorageSize int64
MemtableCount int32
SstableCount int32
WriteAmplification float64
ReadAmplification float64
}

483
pkg/client/client_test.go Normal file
View File

@ -0,0 +1,483 @@
package client
import (
"context"
"errors"
"os"
"testing"
"time"
"github.com/KevoDB/kevo/pkg/transport"
)
// mockClient implements the transport.Client interface for testing
type mockClient struct {
connected bool
responses map[string][]byte
errors map[string]error
}
func newMockClient() *mockClient {
return &mockClient{
connected: false,
responses: make(map[string][]byte),
errors: make(map[string]error),
}
}
func (m *mockClient) Connect(ctx context.Context) error {
if m.errors["connect"] != nil {
return m.errors["connect"]
}
m.connected = true
return nil
}
func (m *mockClient) Close() error {
if m.errors["close"] != nil {
return m.errors["close"]
}
m.connected = false
return nil
}
func (m *mockClient) IsConnected() bool {
return m.connected
}
func (m *mockClient) Status() transport.TransportStatus {
return transport.TransportStatus{
Connected: m.connected,
}
}
func (m *mockClient) Send(ctx context.Context, request transport.Request) (transport.Response, error) {
if !m.connected {
return nil, errors.New("not connected")
}
reqType := request.Type()
if m.errors[reqType] != nil {
return nil, m.errors[reqType]
}
if payload, ok := m.responses[reqType]; ok {
return transport.NewResponse(reqType, payload, nil), nil
}
return nil, errors.New("unexpected request type")
}
func (m *mockClient) Stream(ctx context.Context) (transport.Stream, error) {
if !m.connected {
return nil, errors.New("not connected")
}
if m.errors["stream"] != nil {
return nil, m.errors["stream"]
}
return nil, errors.New("stream not implemented in mock")
}
// Set up a mock response for a specific request type
func (m *mockClient) setResponse(reqType string, payload []byte) {
m.responses[reqType] = payload
}
// Set up a mock error for a specific request type
func (m *mockClient) setError(reqType string, err error) {
m.errors[reqType] = err
}
// TestMain is used to set up test environment
func TestMain(m *testing.M) {
// Register mock client with the transport registry for testing
transport.RegisterClientTransport("mock", func(endpoint string, options transport.TransportOptions) (transport.Client, error) {
return newMockClient(), nil
})
// Run tests
os.Exit(m.Run())
}
func TestClientConnect(t *testing.T) {
// Modify default options to use mock transport
options := DefaultClientOptions()
options.TransportType = "mock"
// Create a client with the mock transport
client, err := NewClient(options)
if err != nil {
t.Fatalf("Failed to create client: %v", err)
}
// Get the underlying mock client for test assertions
mock := client.client.(*mockClient)
ctx := context.Background()
// Test successful connection
err = client.Connect(ctx)
if err != nil {
t.Errorf("Expected successful connection, got error: %v", err)
}
if !client.IsConnected() {
t.Error("Expected client to be connected")
}
// Test connection error
mock.setError("connect", errors.New("connection refused"))
err = client.Connect(ctx)
if err == nil {
t.Error("Expected connection error, got nil")
}
}
func TestClientGet(t *testing.T) {
// Create a client with the mock transport
options := DefaultClientOptions()
options.TransportType = "mock"
client, err := NewClient(options)
if err != nil {
t.Fatalf("Failed to create client: %v", err)
}
// Get the underlying mock client for test assertions
mock := client.client.(*mockClient)
mock.connected = true
ctx := context.Background()
// Test successful get
mock.setResponse(transport.TypeGet, []byte(`{"value": "dGVzdHZhbHVl", "found": true}`))
val, found, err := client.Get(ctx, []byte("testkey"))
if err != nil {
t.Errorf("Expected successful get, got error: %v", err)
}
if !found {
t.Error("Expected found to be true")
}
if string(val) != "testvalue" {
t.Errorf("Expected value 'testvalue', got '%s'", val)
}
// Test key not found
mock.setResponse(transport.TypeGet, []byte(`{"value": null, "found": false}`))
_, found, err = client.Get(ctx, []byte("nonexistent"))
if err != nil {
t.Errorf("Expected successful get with not found, got error: %v", err)
}
if found {
t.Error("Expected found to be false")
}
// Test get error
mock.setError(transport.TypeGet, errors.New("get error"))
_, _, err = client.Get(ctx, []byte("testkey"))
if err == nil {
t.Error("Expected get error, got nil")
}
}
func TestClientPut(t *testing.T) {
// Create a client with the mock transport
options := DefaultClientOptions()
options.TransportType = "mock"
client, err := NewClient(options)
if err != nil {
t.Fatalf("Failed to create client: %v", err)
}
// Get the underlying mock client for test assertions
mock := client.client.(*mockClient)
mock.connected = true
ctx := context.Background()
// Test successful put
mock.setResponse(transport.TypePut, []byte(`{"success": true}`))
success, err := client.Put(ctx, []byte("testkey"), []byte("testvalue"), true)
if err != nil {
t.Errorf("Expected successful put, got error: %v", err)
}
if !success {
t.Error("Expected success to be true")
}
// Test put error
mock.setError(transport.TypePut, errors.New("put error"))
_, err = client.Put(ctx, []byte("testkey"), []byte("testvalue"), true)
if err == nil {
t.Error("Expected put error, got nil")
}
}
func TestClientDelete(t *testing.T) {
// Create a client with the mock transport
options := DefaultClientOptions()
options.TransportType = "mock"
client, err := NewClient(options)
if err != nil {
t.Fatalf("Failed to create client: %v", err)
}
// Get the underlying mock client for test assertions
mock := client.client.(*mockClient)
mock.connected = true
ctx := context.Background()
// Test successful delete
mock.setResponse(transport.TypeDelete, []byte(`{"success": true}`))
success, err := client.Delete(ctx, []byte("testkey"), true)
if err != nil {
t.Errorf("Expected successful delete, got error: %v", err)
}
if !success {
t.Error("Expected success to be true")
}
// Test delete error
mock.setError(transport.TypeDelete, errors.New("delete error"))
_, err = client.Delete(ctx, []byte("testkey"), true)
if err == nil {
t.Error("Expected delete error, got nil")
}
}
func TestClientBatchWrite(t *testing.T) {
// Create a client with the mock transport
options := DefaultClientOptions()
options.TransportType = "mock"
client, err := NewClient(options)
if err != nil {
t.Fatalf("Failed to create client: %v", err)
}
// Get the underlying mock client for test assertions
mock := client.client.(*mockClient)
mock.connected = true
ctx := context.Background()
// Create batch operations
operations := []BatchOperation{
{Type: "put", Key: []byte("key1"), Value: []byte("value1")},
{Type: "put", Key: []byte("key2"), Value: []byte("value2")},
{Type: "delete", Key: []byte("key3")},
}
// Test successful batch write
mock.setResponse(transport.TypeBatchWrite, []byte(`{"success": true}`))
success, err := client.BatchWrite(ctx, operations, true)
if err != nil {
t.Errorf("Expected successful batch write, got error: %v", err)
}
if !success {
t.Error("Expected success to be true")
}
// Test batch write error
mock.setError(transport.TypeBatchWrite, errors.New("batch write error"))
_, err = client.BatchWrite(ctx, operations, true)
if err == nil {
t.Error("Expected batch write error, got nil")
}
}
func TestClientGetStats(t *testing.T) {
// Create a client with the mock transport
options := DefaultClientOptions()
options.TransportType = "mock"
client, err := NewClient(options)
if err != nil {
t.Fatalf("Failed to create client: %v", err)
}
// Get the underlying mock client for test assertions
mock := client.client.(*mockClient)
mock.connected = true
ctx := context.Background()
// Test successful get stats
statsJSON := `{
"key_count": 1000,
"storage_size": 1048576,
"memtable_count": 1,
"sstable_count": 5,
"write_amplification": 1.5,
"read_amplification": 2.0
}`
mock.setResponse(transport.TypeGetStats, []byte(statsJSON))
stats, err := client.GetStats(ctx)
if err != nil {
t.Errorf("Expected successful get stats, got error: %v", err)
}
if stats.KeyCount != 1000 {
t.Errorf("Expected KeyCount 1000, got %d", stats.KeyCount)
}
if stats.StorageSize != 1048576 {
t.Errorf("Expected StorageSize 1048576, got %d", stats.StorageSize)
}
if stats.MemtableCount != 1 {
t.Errorf("Expected MemtableCount 1, got %d", stats.MemtableCount)
}
if stats.SstableCount != 5 {
t.Errorf("Expected SstableCount 5, got %d", stats.SstableCount)
}
if stats.WriteAmplification != 1.5 {
t.Errorf("Expected WriteAmplification 1.5, got %f", stats.WriteAmplification)
}
if stats.ReadAmplification != 2.0 {
t.Errorf("Expected ReadAmplification 2.0, got %f", stats.ReadAmplification)
}
// Test get stats error
mock.setError(transport.TypeGetStats, errors.New("get stats error"))
_, err = client.GetStats(ctx)
if err == nil {
t.Error("Expected get stats error, got nil")
}
}
func TestClientCompact(t *testing.T) {
// Create a client with the mock transport
options := DefaultClientOptions()
options.TransportType = "mock"
client, err := NewClient(options)
if err != nil {
t.Fatalf("Failed to create client: %v", err)
}
// Get the underlying mock client for test assertions
mock := client.client.(*mockClient)
mock.connected = true
ctx := context.Background()
// Test successful compact
mock.setResponse(transport.TypeCompact, []byte(`{"success": true}`))
success, err := client.Compact(ctx, true)
if err != nil {
t.Errorf("Expected successful compact, got error: %v", err)
}
if !success {
t.Error("Expected success to be true")
}
// Test compact error
mock.setError(transport.TypeCompact, errors.New("compact error"))
_, err = client.Compact(ctx, true)
if err == nil {
t.Error("Expected compact error, got nil")
}
}
func TestRetryWithBackoff(t *testing.T) {
ctx := context.Background()
// Test successful retry
attempts := 0
err := RetryWithBackoff(
ctx,
func() error {
attempts++
if attempts < 3 {
return ErrTimeout
}
return nil
},
5, // maxRetries
10*time.Millisecond, // initialBackoff
100*time.Millisecond, // maxBackoff
2.0, // backoffFactor
0.1, // jitter
)
if err != nil {
t.Errorf("Expected successful retry, got error: %v", err)
}
if attempts != 3 {
t.Errorf("Expected 3 attempts, got %d", attempts)
}
// Test max retries exceeded
attempts = 0
err = RetryWithBackoff(
ctx,
func() error {
attempts++
return ErrTimeout
},
3, // maxRetries
10*time.Millisecond, // initialBackoff
100*time.Millisecond, // maxBackoff
2.0, // backoffFactor
0.1, // jitter
)
if err == nil {
t.Error("Expected error after max retries, got nil")
}
if attempts != 4 { // Initial + 3 retries
t.Errorf("Expected 4 attempts, got %d", attempts)
}
// Test non-retryable error
attempts = 0
err = RetryWithBackoff(
ctx,
func() error {
attempts++
return errors.New("non-retryable error")
},
3, // maxRetries
10*time.Millisecond, // initialBackoff
100*time.Millisecond, // maxBackoff
2.0, // backoffFactor
0.1, // jitter
)
if err == nil {
t.Error("Expected non-retryable error to be returned, got nil")
}
if attempts != 1 {
t.Errorf("Expected 1 attempt for non-retryable error, got %d", attempts)
}
// Test context cancellation
attempts = 0
cancelCtx, cancel := context.WithCancel(ctx)
go func() {
time.Sleep(20 * time.Millisecond)
cancel()
}()
err = RetryWithBackoff(
cancelCtx,
func() error {
attempts++
return ErrTimeout
},
10, // maxRetries
50*time.Millisecond, // initialBackoff
500*time.Millisecond, // maxBackoff
2.0, // backoffFactor
0.1, // jitter
)
if !errors.Is(err, context.Canceled) {
t.Errorf("Expected context.Canceled error, got: %v", err)
}
}

307
pkg/client/iterator.go Normal file
View File

@ -0,0 +1,307 @@
package client
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"github.com/KevoDB/kevo/pkg/transport"
)
// ScanOptions configures a scan operation
type ScanOptions struct {
// Prefix limit the scan to keys with this prefix
Prefix []byte
// StartKey sets the starting point for the scan (inclusive)
StartKey []byte
// EndKey sets the ending point for the scan (exclusive)
EndKey []byte
// Limit sets the maximum number of key-value pairs to return
Limit int32
}
// KeyValue represents a key-value pair from a scan
type KeyValue struct {
Key []byte
Value []byte
}
// Scanner interface for iterating through keys and values
type Scanner interface {
// Next advances the scanner to the next key-value pair
Next() bool
// Key returns the current key
Key() []byte
// Value returns the current value
Value() []byte
// Error returns any error that occurred during iteration
Error() error
// Close releases resources associated with the scanner
Close() error
}
// scanIterator implements the Scanner interface for regular scans
type scanIterator struct {
client *Client
options ScanOptions
stream transport.Stream
current *KeyValue
err error
closed bool
ctx context.Context
cancelFunc context.CancelFunc
}
// Scan creates a scanner to iterate over keys in the database
func (c *Client) Scan(ctx context.Context, options ScanOptions) (Scanner, error) {
if !c.IsConnected() {
return nil, errors.New("not connected to server")
}
// Use the provided context directly for streaming operations
// Implement stream request
streamCtx, streamCancel := context.WithCancel(ctx)
stream, err := c.client.Stream(streamCtx)
if err != nil {
streamCancel()
return nil, fmt.Errorf("failed to create stream: %w", err)
}
// Create the scan request
req := struct {
Prefix []byte `json:"prefix"`
StartKey []byte `json:"start_key"`
EndKey []byte `json:"end_key"`
Limit int32 `json:"limit"`
}{
Prefix: options.Prefix,
StartKey: options.StartKey,
EndKey: options.EndKey,
Limit: options.Limit,
}
reqData, err := json.Marshal(req)
if err != nil {
streamCancel()
stream.Close()
return nil, fmt.Errorf("failed to marshal scan request: %w", err)
}
// Send the scan request
if err := stream.Send(transport.NewRequest(transport.TypeScan, reqData)); err != nil {
streamCancel()
stream.Close()
return nil, fmt.Errorf("failed to send scan request: %w", err)
}
// Create the iterator
iter := &scanIterator{
client: c,
options: options,
stream: stream,
ctx: streamCtx,
cancelFunc: streamCancel,
}
return iter, nil
}
// Next advances the iterator to the next key-value pair
func (s *scanIterator) Next() bool {
if s.closed || s.err != nil {
return false
}
resp, err := s.stream.Recv()
if err != nil {
if err != io.EOF {
s.err = fmt.Errorf("error receiving scan response: %w", err)
}
return false
}
// Parse the response
var scanResp struct {
Key []byte `json:"key"`
Value []byte `json:"value"`
}
if err := json.Unmarshal(resp.Payload(), &scanResp); err != nil {
s.err = fmt.Errorf("failed to unmarshal scan response: %w", err)
return false
}
s.current = &KeyValue{
Key: scanResp.Key,
Value: scanResp.Value,
}
return true
}
// Key returns the current key
func (s *scanIterator) Key() []byte {
if s.current == nil {
return nil
}
return s.current.Key
}
// Value returns the current value
func (s *scanIterator) Value() []byte {
if s.current == nil {
return nil
}
return s.current.Value
}
// Error returns any error that occurred during iteration
func (s *scanIterator) Error() error {
return s.err
}
// Close releases resources associated with the scanner
func (s *scanIterator) Close() error {
if s.closed {
return nil
}
s.closed = true
s.cancelFunc()
return s.stream.Close()
}
// transactionScanIterator implements the Scanner interface for transaction scans
type transactionScanIterator struct {
tx *Transaction
options ScanOptions
stream transport.Stream
current *KeyValue
err error
closed bool
ctx context.Context
cancelFunc context.CancelFunc
}
// Scan creates a scanner to iterate over keys in the transaction
func (tx *Transaction) Scan(ctx context.Context, options ScanOptions) (Scanner, error) {
if tx.closed {
return nil, ErrTransactionClosed
}
// Use the provided context directly for streaming operations
// Implement transaction stream request
streamCtx, streamCancel := context.WithCancel(ctx)
stream, err := tx.client.client.Stream(streamCtx)
if err != nil {
streamCancel()
return nil, fmt.Errorf("failed to create stream: %w", err)
}
// Create the transaction scan request
req := struct {
TransactionID string `json:"transaction_id"`
Prefix []byte `json:"prefix"`
StartKey []byte `json:"start_key"`
EndKey []byte `json:"end_key"`
Limit int32 `json:"limit"`
}{
TransactionID: tx.id,
Prefix: options.Prefix,
StartKey: options.StartKey,
EndKey: options.EndKey,
Limit: options.Limit,
}
reqData, err := json.Marshal(req)
if err != nil {
streamCancel()
stream.Close()
return nil, fmt.Errorf("failed to marshal transaction scan request: %w", err)
}
// Send the transaction scan request
if err := stream.Send(transport.NewRequest(transport.TypeTxScan, reqData)); err != nil {
streamCancel()
stream.Close()
return nil, fmt.Errorf("failed to send transaction scan request: %w", err)
}
// Create the iterator
iter := &transactionScanIterator{
tx: tx,
options: options,
stream: stream,
ctx: streamCtx,
cancelFunc: streamCancel,
}
return iter, nil
}
// Next advances the iterator to the next key-value pair
func (s *transactionScanIterator) Next() bool {
if s.closed || s.err != nil {
return false
}
resp, err := s.stream.Recv()
if err != nil {
if err != io.EOF {
s.err = fmt.Errorf("error receiving transaction scan response: %w", err)
}
return false
}
// Parse the response
var scanResp struct {
Key []byte `json:"key"`
Value []byte `json:"value"`
}
if err := json.Unmarshal(resp.Payload(), &scanResp); err != nil {
s.err = fmt.Errorf("failed to unmarshal transaction scan response: %w", err)
return false
}
s.current = &KeyValue{
Key: scanResp.Key,
Value: scanResp.Value,
}
return true
}
// Key returns the current key
func (s *transactionScanIterator) Key() []byte {
if s.current == nil {
return nil
}
return s.current.Key
}
// Value returns the current value
func (s *transactionScanIterator) Value() []byte {
if s.current == nil {
return nil
}
return s.current.Value
}
// Error returns any error that occurred during iteration
func (s *transactionScanIterator) Error() error {
return s.err
}
// Close releases resources associated with the scanner
func (s *transactionScanIterator) Close() error {
if s.closed {
return nil
}
s.closed = true
s.cancelFunc()
return s.stream.Close()
}

View File

@ -0,0 +1,39 @@
package client
import (
"testing"
"time"
)
func TestDefaultClientOptions(t *testing.T) {
options := DefaultClientOptions()
// Verify the default options have sensible values
if options.Endpoint != "localhost:50051" {
t.Errorf("Expected default endpoint to be localhost:50051, got %s", options.Endpoint)
}
if options.ConnectTimeout != 5*time.Second {
t.Errorf("Expected default connect timeout to be 5s, got %s", options.ConnectTimeout)
}
if options.RequestTimeout != 10*time.Second {
t.Errorf("Expected default request timeout to be 10s, got %s", options.RequestTimeout)
}
if options.TransportType != "grpc" {
t.Errorf("Expected default transport type to be grpc, got %s", options.TransportType)
}
if options.PoolSize != 5 {
t.Errorf("Expected default pool size to be 5, got %d", options.PoolSize)
}
if options.TLSEnabled != false {
t.Errorf("Expected default TLS enabled to be false")
}
if options.MaxRetries != 3 {
t.Errorf("Expected default max retries to be 3, got %d", options.MaxRetries)
}
}

35
pkg/client/simple_test.go Normal file
View File

@ -0,0 +1,35 @@
package client
import (
"testing"
"github.com/KevoDB/kevo/pkg/transport"
)
// mockTransport is a simple mock for testing
type mockTransport struct{}
// Create a simple mock client factory for testing
func mockClientFactory(endpoint string, options transport.TransportOptions) (transport.Client, error) {
return &mockClient{}, nil
}
func TestClientCreation(t *testing.T) {
// First, register our mock transport
transport.RegisterClientTransport("mock_test", mockClientFactory)
// Create client options using our mock transport
options := DefaultClientOptions()
options.TransportType = "mock_test"
// Create a client
client, err := NewClient(options)
if err != nil {
t.Fatalf("Failed to create client: %v", err)
}
// Verify the client was created
if client == nil {
t.Fatal("Client is nil")
}
}

288
pkg/client/transaction.go Normal file
View File

@ -0,0 +1,288 @@
package client
import (
"context"
"encoding/json"
"errors"
"fmt"
"sync"
"github.com/KevoDB/kevo/pkg/transport"
)
// Transaction represents a database transaction
type Transaction struct {
client *Client
id string
readOnly bool
closed bool
mu sync.RWMutex
}
// ErrTransactionClosed is returned when attempting to use a closed transaction
var ErrTransactionClosed = errors.New("transaction is closed")
// BeginTransaction starts a new transaction
func (c *Client) BeginTransaction(ctx context.Context, readOnly bool) (*Transaction, error) {
if !c.IsConnected() {
return nil, errors.New("not connected to server")
}
req := struct {
ReadOnly bool `json:"read_only"`
}{
ReadOnly: readOnly,
}
reqData, err := json.Marshal(req)
if err != nil {
return nil, fmt.Errorf("failed to marshal request: %w", err)
}
timeoutCtx, cancel := context.WithTimeout(ctx, c.options.RequestTimeout)
defer cancel()
resp, err := c.client.Send(timeoutCtx, transport.NewRequest(transport.TypeBeginTx, reqData))
if err != nil {
return nil, fmt.Errorf("failed to begin transaction: %w", err)
}
var txResp struct {
TransactionID string `json:"transaction_id"`
}
if err := json.Unmarshal(resp.Payload(), &txResp); err != nil {
return nil, fmt.Errorf("failed to unmarshal response: %w", err)
}
return &Transaction{
client: c,
id: txResp.TransactionID,
readOnly: readOnly,
closed: false,
}, nil
}
// Commit commits the transaction
func (tx *Transaction) Commit(ctx context.Context) error {
tx.mu.Lock()
defer tx.mu.Unlock()
if tx.closed {
return ErrTransactionClosed
}
req := struct {
TransactionID string `json:"transaction_id"`
}{
TransactionID: tx.id,
}
reqData, err := json.Marshal(req)
if err != nil {
return fmt.Errorf("failed to marshal request: %w", err)
}
timeoutCtx, cancel := context.WithTimeout(ctx, tx.client.options.RequestTimeout)
defer cancel()
resp, err := tx.client.client.Send(timeoutCtx, transport.NewRequest(transport.TypeCommitTx, reqData))
if err != nil {
return fmt.Errorf("failed to commit transaction: %w", err)
}
var commitResp struct {
Success bool `json:"success"`
}
if err := json.Unmarshal(resp.Payload(), &commitResp); err != nil {
return fmt.Errorf("failed to unmarshal response: %w", err)
}
tx.closed = true
if !commitResp.Success {
return errors.New("transaction commit failed")
}
return nil
}
// Rollback aborts the transaction
func (tx *Transaction) Rollback(ctx context.Context) error {
tx.mu.Lock()
defer tx.mu.Unlock()
if tx.closed {
return ErrTransactionClosed
}
req := struct {
TransactionID string `json:"transaction_id"`
}{
TransactionID: tx.id,
}
reqData, err := json.Marshal(req)
if err != nil {
return fmt.Errorf("failed to marshal request: %w", err)
}
timeoutCtx, cancel := context.WithTimeout(ctx, tx.client.options.RequestTimeout)
defer cancel()
resp, err := tx.client.client.Send(timeoutCtx, transport.NewRequest(transport.TypeRollbackTx, reqData))
if err != nil {
return fmt.Errorf("failed to rollback transaction: %w", err)
}
var rollbackResp struct {
Success bool `json:"success"`
}
if err := json.Unmarshal(resp.Payload(), &rollbackResp); err != nil {
return fmt.Errorf("failed to unmarshal response: %w", err)
}
tx.closed = true
if !rollbackResp.Success {
return errors.New("transaction rollback failed")
}
return nil
}
// Get retrieves a value by key within the transaction
func (tx *Transaction) Get(ctx context.Context, key []byte) ([]byte, bool, error) {
tx.mu.RLock()
defer tx.mu.RUnlock()
if tx.closed {
return nil, false, ErrTransactionClosed
}
req := struct {
TransactionID string `json:"transaction_id"`
Key []byte `json:"key"`
}{
TransactionID: tx.id,
Key: key,
}
reqData, err := json.Marshal(req)
if err != nil {
return nil, false, fmt.Errorf("failed to marshal request: %w", err)
}
timeoutCtx, cancel := context.WithTimeout(ctx, tx.client.options.RequestTimeout)
defer cancel()
resp, err := tx.client.client.Send(timeoutCtx, transport.NewRequest(transport.TypeTxGet, reqData))
if err != nil {
return nil, false, fmt.Errorf("failed to send request: %w", err)
}
var getResp struct {
Value []byte `json:"value"`
Found bool `json:"found"`
}
if err := json.Unmarshal(resp.Payload(), &getResp); err != nil {
return nil, false, fmt.Errorf("failed to unmarshal response: %w", err)
}
return getResp.Value, getResp.Found, nil
}
// Put stores a key-value pair within the transaction
func (tx *Transaction) Put(ctx context.Context, key, value []byte) (bool, error) {
tx.mu.RLock()
defer tx.mu.RUnlock()
if tx.closed {
return false, ErrTransactionClosed
}
if tx.readOnly {
return false, errors.New("cannot write to a read-only transaction")
}
req := struct {
TransactionID string `json:"transaction_id"`
Key []byte `json:"key"`
Value []byte `json:"value"`
}{
TransactionID: tx.id,
Key: key,
Value: value,
}
reqData, err := json.Marshal(req)
if err != nil {
return false, fmt.Errorf("failed to marshal request: %w", err)
}
timeoutCtx, cancel := context.WithTimeout(ctx, tx.client.options.RequestTimeout)
defer cancel()
resp, err := tx.client.client.Send(timeoutCtx, transport.NewRequest(transport.TypeTxPut, reqData))
if err != nil {
return false, fmt.Errorf("failed to send request: %w", err)
}
var putResp struct {
Success bool `json:"success"`
}
if err := json.Unmarshal(resp.Payload(), &putResp); err != nil {
return false, fmt.Errorf("failed to unmarshal response: %w", err)
}
return putResp.Success, nil
}
// Delete removes a key-value pair within the transaction
func (tx *Transaction) Delete(ctx context.Context, key []byte) (bool, error) {
tx.mu.RLock()
defer tx.mu.RUnlock()
if tx.closed {
return false, ErrTransactionClosed
}
if tx.readOnly {
return false, errors.New("cannot delete in a read-only transaction")
}
req := struct {
TransactionID string `json:"transaction_id"`
Key []byte `json:"key"`
}{
TransactionID: tx.id,
Key: key,
}
reqData, err := json.Marshal(req)
if err != nil {
return false, fmt.Errorf("failed to marshal request: %w", err)
}
timeoutCtx, cancel := context.WithTimeout(ctx, tx.client.options.RequestTimeout)
defer cancel()
resp, err := tx.client.client.Send(timeoutCtx, transport.NewRequest(transport.TypeTxDelete, reqData))
if err != nil {
return false, fmt.Errorf("failed to send request: %w", err)
}
var deleteResp struct {
Success bool `json:"success"`
}
if err := json.Unmarshal(resp.Payload(), &deleteResp); err != nil {
return false, fmt.Errorf("failed to unmarshal response: %w", err)
}
return deleteResp.Success, nil
}

120
pkg/client/utils.go Normal file
View File

@ -0,0 +1,120 @@
package client
import (
"context"
"errors"
"math"
"math/rand"
"time"
)
// RetryableFunc is a function that can be retried
type RetryableFunc func() error
// Errors that can occur during client operations
var (
// ErrNotConnected indicates the client is not connected to the server
ErrNotConnected = errors.New("not connected to server")
// ErrInvalidOptions indicates invalid client options
ErrInvalidOptions = errors.New("invalid client options")
// ErrTimeout indicates a request timed out
ErrTimeout = errors.New("request timed out")
// ErrKeyNotFound indicates a key was not found
ErrKeyNotFound = errors.New("key not found")
// ErrTransactionConflict indicates a transaction conflict occurred
ErrTransactionConflict = errors.New("transaction conflict detected")
)
// IsRetryableError returns true if the error is considered retryable
func IsRetryableError(err error) bool {
if err == nil {
return false
}
// These errors are considered transient and can be retried
if errors.Is(err, ErrTimeout) || errors.Is(err, context.DeadlineExceeded) {
return true
}
// Other errors are considered permanent
return false
}
// RetryWithBackoff executes a function with exponential backoff and jitter
func RetryWithBackoff(
ctx context.Context,
fn RetryableFunc,
maxRetries int,
initialBackoff time.Duration,
maxBackoff time.Duration,
backoffFactor float64,
jitter float64,
) error {
var err error
backoff := initialBackoff
for attempt := 0; attempt <= maxRetries; attempt++ {
// Execute the function
err = fn()
if err == nil {
return nil
}
// Check if the error is retryable
if !IsRetryableError(err) {
return err
}
// Check if we've reached the retry limit
if attempt >= maxRetries {
return err
}
// Calculate next backoff with jitter
jitterRange := float64(backoff) * jitter
jitterAmount := int64(rand.Float64() * jitterRange)
sleepTime := backoff + time.Duration(jitterAmount)
// Check context before sleeping
select {
case <-ctx.Done():
return ctx.Err()
case <-time.After(sleepTime):
// Continue with next attempt
}
// Increase backoff for next attempt
backoff = time.Duration(float64(backoff) * backoffFactor)
if backoff > maxBackoff {
backoff = maxBackoff
}
}
return err
}
// CalculateExponentialBackoff calculates the backoff time for a given attempt
func CalculateExponentialBackoff(
attempt int,
initialBackoff time.Duration,
maxBackoff time.Duration,
backoffFactor float64,
jitter float64,
) time.Duration {
backoff := initialBackoff * time.Duration(math.Pow(backoffFactor, float64(attempt)))
if backoff > maxBackoff {
backoff = maxBackoff
}
if jitter > 0 {
jitterRange := float64(backoff) * jitter
jitterAmount := int64(rand.Float64() * jitterRange)
backoff = backoff + time.Duration(jitterAmount)
}
return backoff
}

View File

@ -3,7 +3,7 @@ package bounded
import (
"bytes"
"github.com/jeremytregunna/kevo/pkg/common/iterator"
"github.com/KevoDB/kevo/pkg/common/iterator"
)
// BoundedIterator wraps an iterator and limits it to a specific key range

View File

@ -1,7 +1,7 @@
package composite
import (
"github.com/jeremytregunna/kevo/pkg/common/iterator"
"github.com/KevoDB/kevo/pkg/common/iterator"
)
// CompositeIterator is an interface for iterators that combine multiple source iterators

View File

@ -4,7 +4,7 @@ import (
"bytes"
"sync"
"github.com/jeremytregunna/kevo/pkg/common/iterator"
"github.com/KevoDB/kevo/pkg/common/iterator"
)
// HierarchicalIterator implements an iterator that follows the LSM-tree hierarchy

View File

@ -4,7 +4,7 @@ import (
"bytes"
"testing"
"github.com/jeremytregunna/kevo/pkg/common/iterator"
"github.com/KevoDB/kevo/pkg/common/iterator"
)
// mockIterator is a simple in-memory iterator for testing

View File

@ -7,8 +7,8 @@ import (
"sort"
"strings"
"github.com/jeremytregunna/kevo/pkg/config"
"github.com/jeremytregunna/kevo/pkg/sstable"
"github.com/KevoDB/kevo/pkg/config"
"github.com/KevoDB/kevo/pkg/sstable"
)
// BaseCompactionStrategy provides common functionality for compaction strategies

View File

@ -4,7 +4,7 @@ import (
"bytes"
"fmt"
"github.com/jeremytregunna/kevo/pkg/sstable"
"github.com/KevoDB/kevo/pkg/sstable"
)
// SSTableInfo represents metadata about an SSTable file

View File

@ -9,8 +9,8 @@ import (
"testing"
"time"
"github.com/jeremytregunna/kevo/pkg/config"
"github.com/jeremytregunna/kevo/pkg/sstable"
"github.com/KevoDB/kevo/pkg/config"
"github.com/KevoDB/kevo/pkg/sstable"
)
func createTestSSTable(t *testing.T, dir string, level, seq int, timestamp int64, keyValues map[string]string) string {

View File

@ -3,7 +3,7 @@ package compaction
import (
"time"
"github.com/jeremytregunna/kevo/pkg/config"
"github.com/KevoDB/kevo/pkg/config"
)
// NewCompactionManager creates a new compaction manager with the old API

View File

@ -5,7 +5,7 @@ import (
"sync"
"time"
"github.com/jeremytregunna/kevo/pkg/config"
"github.com/KevoDB/kevo/pkg/config"
)
// CompactionCoordinatorOptions holds configuration options for the coordinator

View File

@ -6,10 +6,10 @@ import (
"os"
"time"
"github.com/jeremytregunna/kevo/pkg/common/iterator"
"github.com/jeremytregunna/kevo/pkg/common/iterator/composite"
"github.com/jeremytregunna/kevo/pkg/config"
"github.com/jeremytregunna/kevo/pkg/sstable"
"github.com/KevoDB/kevo/pkg/common/iterator"
"github.com/KevoDB/kevo/pkg/common/iterator/composite"
"github.com/KevoDB/kevo/pkg/config"
"github.com/KevoDB/kevo/pkg/sstable"
)
// DefaultCompactionExecutor handles the actual compaction process

View File

@ -6,7 +6,7 @@ import (
"path/filepath"
"sort"
"github.com/jeremytregunna/kevo/pkg/config"
"github.com/KevoDB/kevo/pkg/config"
)
// TieredCompactionStrategy implements a tiered compaction strategy

View File

@ -5,8 +5,8 @@ import (
"os"
"path/filepath"
"github.com/jeremytregunna/kevo/pkg/compaction"
"github.com/jeremytregunna/kevo/pkg/sstable"
"github.com/KevoDB/kevo/pkg/compaction"
"github.com/KevoDB/kevo/pkg/sstable"
)
// setupCompaction initializes the compaction manager for the engine

View File

@ -10,12 +10,12 @@ import (
"sync/atomic"
"time"
"github.com/jeremytregunna/kevo/pkg/common/iterator"
"github.com/jeremytregunna/kevo/pkg/compaction"
"github.com/jeremytregunna/kevo/pkg/config"
"github.com/jeremytregunna/kevo/pkg/memtable"
"github.com/jeremytregunna/kevo/pkg/sstable"
"github.com/jeremytregunna/kevo/pkg/wal"
"github.com/KevoDB/kevo/pkg/common/iterator"
"github.com/KevoDB/kevo/pkg/compaction"
"github.com/KevoDB/kevo/pkg/config"
"github.com/KevoDB/kevo/pkg/memtable"
"github.com/KevoDB/kevo/pkg/sstable"
"github.com/KevoDB/kevo/pkg/wal"
)
const (

View File

@ -8,7 +8,7 @@ import (
"testing"
"time"
"github.com/jeremytregunna/kevo/pkg/sstable"
"github.com/KevoDB/kevo/pkg/sstable"
)
func setupTest(t *testing.T) (string, *Engine, func()) {

View File

@ -5,9 +5,9 @@ import (
"container/heap"
"sync"
"github.com/jeremytregunna/kevo/pkg/common/iterator"
"github.com/jeremytregunna/kevo/pkg/memtable"
"github.com/jeremytregunna/kevo/pkg/sstable"
"github.com/KevoDB/kevo/pkg/common/iterator"
"github.com/KevoDB/kevo/pkg/memtable"
"github.com/KevoDB/kevo/pkg/sstable"
)
// iterHeapItem represents an item in the priority queue of iterators

511
pkg/grpc/service/service.go Normal file
View File

@ -0,0 +1,511 @@
package service
import (
"context"
"fmt"
"sync"
"github.com/KevoDB/kevo/pkg/common/iterator"
"github.com/KevoDB/kevo/pkg/engine"
pb "github.com/KevoDB/kevo/proto/kevo"
)
// TxRegistry is the interface we need for the transaction registry
type TxRegistry interface {
Begin(ctx context.Context, eng *engine.Engine, readOnly bool) (string, error)
Get(txID string) (engine.Transaction, bool)
Remove(txID string)
}
// KevoServiceServer implements the gRPC KevoService interface
type KevoServiceServer struct {
pb.UnimplementedKevoServiceServer
engine *engine.Engine
txRegistry TxRegistry
activeTx sync.Map // map[string]engine.Transaction
txMu sync.Mutex
compactionSem chan struct{} // Semaphore for limiting concurrent compactions
maxKeySize int // Maximum allowed key size
maxValueSize int // Maximum allowed value size
maxBatchSize int // Maximum number of operations in a batch
maxTransactions int // Maximum number of concurrent transactions
transactionTTL int64 // Maximum time in seconds a transaction can be idle
activeTransCount int32 // Count of active transactions
}
// NewKevoServiceServer creates a new KevoServiceServer
func NewKevoServiceServer(engine *engine.Engine, txRegistry TxRegistry) *KevoServiceServer {
return &KevoServiceServer{
engine: engine,
txRegistry: txRegistry,
compactionSem: make(chan struct{}, 1), // Allow only one compaction at a time
maxKeySize: 4096, // 4KB
maxValueSize: 10 * 1024 * 1024, // 10MB
maxBatchSize: 1000,
maxTransactions: 1000,
transactionTTL: 300, // 5 minutes
}
}
// Get retrieves a value for a given key
func (s *KevoServiceServer) Get(ctx context.Context, req *pb.GetRequest) (*pb.GetResponse, error) {
if len(req.Key) == 0 || len(req.Key) > s.maxKeySize {
return nil, fmt.Errorf("invalid key size")
}
value, err := s.engine.Get(req.Key)
if err != nil {
return &pb.GetResponse{Found: false}, nil
}
return &pb.GetResponse{
Value: value,
Found: true,
}, nil
}
// Put stores a key-value pair
func (s *KevoServiceServer) Put(ctx context.Context, req *pb.PutRequest) (*pb.PutResponse, error) {
if len(req.Key) == 0 || len(req.Key) > s.maxKeySize {
return nil, fmt.Errorf("invalid key size")
}
if len(req.Value) > s.maxValueSize {
return nil, fmt.Errorf("value too large")
}
if err := s.engine.Put(req.Key, req.Value); err != nil {
return &pb.PutResponse{Success: false}, err
}
return &pb.PutResponse{Success: true}, nil
}
// Delete removes a key-value pair
func (s *KevoServiceServer) Delete(ctx context.Context, req *pb.DeleteRequest) (*pb.DeleteResponse, error) {
if len(req.Key) == 0 || len(req.Key) > s.maxKeySize {
return nil, fmt.Errorf("invalid key size")
}
if err := s.engine.Delete(req.Key); err != nil {
return &pb.DeleteResponse{Success: false}, err
}
return &pb.DeleteResponse{Success: true}, nil
}
// BatchWrite performs multiple operations in a batch
func (s *KevoServiceServer) BatchWrite(ctx context.Context, req *pb.BatchWriteRequest) (*pb.BatchWriteResponse, error) {
if len(req.Operations) == 0 {
return &pb.BatchWriteResponse{Success: true}, nil
}
if len(req.Operations) > s.maxBatchSize {
return nil, fmt.Errorf("batch size exceeds maximum allowed (%d)", s.maxBatchSize)
}
// Start a transaction for atomic batch operations
tx, err := s.engine.BeginTransaction(false) // Read-write transaction
if err != nil {
return &pb.BatchWriteResponse{Success: false}, fmt.Errorf("failed to start transaction: %w", err)
}
// Ensure we either commit or rollback
defer func() {
if err != nil {
tx.Rollback()
}
}()
// Process each operation
for _, op := range req.Operations {
if len(op.Key) == 0 || len(op.Key) > s.maxKeySize {
err = fmt.Errorf("invalid key size in batch operation")
return &pb.BatchWriteResponse{Success: false}, err
}
switch op.Type {
case pb.Operation_PUT:
if len(op.Value) > s.maxValueSize {
err = fmt.Errorf("value too large in batch operation")
return &pb.BatchWriteResponse{Success: false}, err
}
if err = tx.Put(op.Key, op.Value); err != nil {
return &pb.BatchWriteResponse{Success: false}, err
}
case pb.Operation_DELETE:
if err = tx.Delete(op.Key); err != nil {
return &pb.BatchWriteResponse{Success: false}, err
}
default:
err = fmt.Errorf("unknown operation type")
return &pb.BatchWriteResponse{Success: false}, err
}
}
// Commit the transaction
if err = tx.Commit(); err != nil {
return &pb.BatchWriteResponse{Success: false}, err
}
return &pb.BatchWriteResponse{Success: true}, nil
}
// Scan iterates over a range of keys
func (s *KevoServiceServer) Scan(req *pb.ScanRequest, stream pb.KevoService_ScanServer) error {
var limit int32 = 0
if req.Limit > 0 {
limit = req.Limit
}
// Create a read-only transaction for consistent snapshot
tx, err := s.engine.BeginTransaction(true)
if err != nil {
return fmt.Errorf("failed to begin transaction: %w", err)
}
defer tx.Rollback() // Always rollback read-only TX when done
// Create appropriate iterator based on request parameters
var iter iterator.Iterator
if len(req.Prefix) > 0 {
// Create a prefix iterator
prefixIter := tx.NewIterator()
iter = newPrefixIterator(prefixIter, req.Prefix)
} else if len(req.StartKey) > 0 || len(req.EndKey) > 0 {
// Create a range iterator
iter = tx.NewRangeIterator(req.StartKey, req.EndKey)
} else {
// Create a full scan iterator
iter = tx.NewIterator()
}
count := int32(0)
// Position iterator at the first entry
iter.SeekToFirst()
// Iterate through all valid entries
for iter.Valid() {
if limit > 0 && count >= limit {
break
}
// Skip tombstones (deletion markers)
if !iter.IsTombstone() {
if err := stream.Send(&pb.ScanResponse{
Key: iter.Key(),
Value: iter.Value(),
}); err != nil {
return err
}
count++
}
// Move to the next entry
iter.Next()
}
return nil
}
// prefixIterator wraps another iterator and filters for a prefix
type prefixIterator struct {
iter iterator.Iterator
prefix []byte
err error
}
func newPrefixIterator(iter iterator.Iterator, prefix []byte) *prefixIterator {
return &prefixIterator{
iter: iter,
prefix: prefix,
}
}
func (pi *prefixIterator) Next() bool {
for pi.iter.Next() {
// Check if current key has the prefix
key := pi.iter.Key()
if len(key) >= len(pi.prefix) &&
equalByteSlice(key[:len(pi.prefix)], pi.prefix) {
return true
}
}
return false
}
func (pi *prefixIterator) Key() []byte {
return pi.iter.Key()
}
func (pi *prefixIterator) Value() []byte {
return pi.iter.Value()
}
func (pi *prefixIterator) Valid() bool {
return pi.iter.Valid()
}
func (pi *prefixIterator) IsTombstone() bool {
return pi.iter.IsTombstone()
}
func (pi *prefixIterator) SeekToFirst() {
pi.iter.SeekToFirst()
}
func (pi *prefixIterator) SeekToLast() {
pi.iter.SeekToLast()
}
func (pi *prefixIterator) Seek(target []byte) bool {
return pi.iter.Seek(target)
}
// equalByteSlice compares two byte slices for equality
func equalByteSlice(a, b []byte) bool {
if len(a) != len(b) {
return false
}
for i := 0; i < len(a); i++ {
if a[i] != b[i] {
return false
}
}
return true
}
// BeginTransaction starts a new transaction
func (s *KevoServiceServer) BeginTransaction(ctx context.Context, req *pb.BeginTransactionRequest) (*pb.BeginTransactionResponse, error) {
txID, err := s.txRegistry.Begin(ctx, s.engine, req.ReadOnly)
if err != nil {
return nil, fmt.Errorf("failed to begin transaction: %w", err)
}
return &pb.BeginTransactionResponse{
TransactionId: txID,
}, nil
}
// CommitTransaction commits an ongoing transaction
func (s *KevoServiceServer) CommitTransaction(ctx context.Context, req *pb.CommitTransactionRequest) (*pb.CommitTransactionResponse, error) {
tx, exists := s.txRegistry.Get(req.TransactionId)
if !exists {
return nil, fmt.Errorf("transaction not found: %s", req.TransactionId)
}
if err := tx.Commit(); err != nil {
return &pb.CommitTransactionResponse{Success: false}, err
}
s.txRegistry.Remove(req.TransactionId)
return &pb.CommitTransactionResponse{Success: true}, nil
}
// RollbackTransaction aborts an ongoing transaction
func (s *KevoServiceServer) RollbackTransaction(ctx context.Context, req *pb.RollbackTransactionRequest) (*pb.RollbackTransactionResponse, error) {
tx, exists := s.txRegistry.Get(req.TransactionId)
if !exists {
return nil, fmt.Errorf("transaction not found: %s", req.TransactionId)
}
if err := tx.Rollback(); err != nil {
return &pb.RollbackTransactionResponse{Success: false}, err
}
s.txRegistry.Remove(req.TransactionId)
return &pb.RollbackTransactionResponse{Success: true}, nil
}
// TxGet retrieves a value for a given key within a transaction
func (s *KevoServiceServer) TxGet(ctx context.Context, req *pb.TxGetRequest) (*pb.TxGetResponse, error) {
tx, exists := s.txRegistry.Get(req.TransactionId)
if !exists {
return nil, fmt.Errorf("transaction not found: %s", req.TransactionId)
}
if len(req.Key) == 0 || len(req.Key) > s.maxKeySize {
return nil, fmt.Errorf("invalid key size")
}
value, err := tx.Get(req.Key)
if err != nil {
return &pb.TxGetResponse{Found: false}, nil
}
return &pb.TxGetResponse{
Value: value,
Found: true,
}, nil
}
// TxPut stores a key-value pair within a transaction
func (s *KevoServiceServer) TxPut(ctx context.Context, req *pb.TxPutRequest) (*pb.TxPutResponse, error) {
tx, exists := s.txRegistry.Get(req.TransactionId)
if !exists {
return nil, fmt.Errorf("transaction not found: %s", req.TransactionId)
}
if tx.IsReadOnly() {
return nil, fmt.Errorf("cannot write to read-only transaction")
}
if len(req.Key) == 0 || len(req.Key) > s.maxKeySize {
return nil, fmt.Errorf("invalid key size")
}
if len(req.Value) > s.maxValueSize {
return nil, fmt.Errorf("value too large")
}
if err := tx.Put(req.Key, req.Value); err != nil {
return &pb.TxPutResponse{Success: false}, err
}
return &pb.TxPutResponse{Success: true}, nil
}
// TxDelete removes a key-value pair within a transaction
func (s *KevoServiceServer) TxDelete(ctx context.Context, req *pb.TxDeleteRequest) (*pb.TxDeleteResponse, error) {
tx, exists := s.txRegistry.Get(req.TransactionId)
if !exists {
return nil, fmt.Errorf("transaction not found: %s", req.TransactionId)
}
if tx.IsReadOnly() {
return nil, fmt.Errorf("cannot delete in read-only transaction")
}
if len(req.Key) == 0 || len(req.Key) > s.maxKeySize {
return nil, fmt.Errorf("invalid key size")
}
if err := tx.Delete(req.Key); err != nil {
return &pb.TxDeleteResponse{Success: false}, err
}
return &pb.TxDeleteResponse{Success: true}, nil
}
// TxScan iterates over a range of keys within a transaction
func (s *KevoServiceServer) TxScan(req *pb.TxScanRequest, stream pb.KevoService_TxScanServer) error {
tx, exists := s.txRegistry.Get(req.TransactionId)
if !exists {
return fmt.Errorf("transaction not found: %s", req.TransactionId)
}
var limit int32 = 0
if req.Limit > 0 {
limit = req.Limit
}
// Create appropriate iterator based on request parameters
var iter iterator.Iterator
if len(req.Prefix) > 0 {
// Create a prefix iterator
rawIter := tx.NewIterator()
iter = newPrefixIterator(rawIter, req.Prefix)
} else if len(req.StartKey) > 0 || len(req.EndKey) > 0 {
// Create a range iterator
iter = tx.NewRangeIterator(req.StartKey, req.EndKey)
} else {
// Create a full scan iterator
iter = tx.NewIterator()
}
count := int32(0)
// Position iterator at the first entry
iter.SeekToFirst()
// Iterate through all valid entries
for iter.Valid() {
if limit > 0 && count >= limit {
break
}
// Skip tombstones (deletion markers)
if !iter.IsTombstone() {
if err := stream.Send(&pb.TxScanResponse{
Key: iter.Key(),
Value: iter.Value(),
}); err != nil {
return err
}
count++
}
// Move to the next entry
iter.Next()
}
return nil
}
// GetStats retrieves database statistics
func (s *KevoServiceServer) GetStats(ctx context.Context, req *pb.GetStatsRequest) (*pb.GetStatsResponse, error) {
// Collect basic stats that we know are available
keyCount := int64(0)
sstableCount := int32(0)
memtableCount := int32(1) // At least 1 active memtable
// Create a read-only transaction to count keys
tx, err := s.engine.BeginTransaction(true)
if err != nil {
return nil, fmt.Errorf("failed to begin transaction for stats: %w", err)
}
defer tx.Rollback()
// Use an iterator to count keys
iter := tx.NewIterator()
// Count keys and estimate size
var totalSize int64
for iter.Next() {
keyCount++
totalSize += int64(len(iter.Key()) + len(iter.Value()))
}
return &pb.GetStatsResponse{
KeyCount: keyCount,
StorageSize: totalSize,
MemtableCount: memtableCount,
SstableCount: sstableCount,
WriteAmplification: 1.0, // Placeholder
ReadAmplification: 1.0, // Placeholder
}, nil
}
// Compact triggers database compaction
func (s *KevoServiceServer) Compact(ctx context.Context, req *pb.CompactRequest) (*pb.CompactResponse, error) {
// Use a semaphore to prevent multiple concurrent compactions
select {
case s.compactionSem <- struct{}{}:
// We got the semaphore, proceed with compaction
defer func() { <-s.compactionSem }()
default:
// Semaphore is full, compaction is already running
return &pb.CompactResponse{Success: false}, fmt.Errorf("compaction is already in progress")
}
// For now, Compact just performs a memtable flush as we don't have a public
// Compact method on the engine yet
tx, err := s.engine.BeginTransaction(false)
if err != nil {
return &pb.CompactResponse{Success: false}, err
}
// Do a dummy write to force a flush
if req.Force {
err = tx.Put([]byte("__compact_marker__"), []byte("force"))
if err != nil {
tx.Rollback()
return &pb.CompactResponse{Success: false}, err
}
}
err = tx.Commit()
if err != nil {
return &pb.CompactResponse{Success: false}, err
}
return &pb.CompactResponse{Success: true}, nil
}

View File

@ -0,0 +1,59 @@
package transport
import (
"context"
"time"
)
// BenchmarkOptions defines the options for gRPC benchmarking
type BenchmarkOptions struct {
Address string
Connections int
Iterations int
KeySize int
ValueSize int
Parallelism int
UseTLS bool
TLSConfig *TLSConfig
}
// BenchmarkResult holds the results of a benchmark run
type BenchmarkResult struct {
Operation string
TotalTime time.Duration
RequestsPerSec float64
AvgLatency time.Duration
MinLatency time.Duration
MaxLatency time.Duration
P90Latency time.Duration
P99Latency time.Duration
TotalBytes int64
BytesPerSecond float64
ErrorRate float64
TotalOperations int
FailedOps int
}
// NOTE: This is a stub implementation
// A proper benchmark requires the full client implementation
// which will be completed in a later phase
func Benchmark(ctx context.Context, opts *BenchmarkOptions) (map[string]*BenchmarkResult, error) {
results := make(map[string]*BenchmarkResult)
results["put"] = &BenchmarkResult{
Operation: "Put",
TotalTime: time.Second,
RequestsPerSec: 1000.0,
}
return results, nil
}
// sortDurations sorts a slice of durations in ascending order
func sortDurations(durations []time.Duration) {
for i := 0; i < len(durations); i++ {
for j := i + 1; j < len(durations); j++ {
if durations[i] > durations[j] {
durations[i], durations[j] = durations[j], durations[i]
}
}
}
}

View File

@ -0,0 +1,675 @@
package transport
import (
"context"
"crypto/tls"
"encoding/json"
"fmt"
"io"
"sync"
"time"
pb "github.com/KevoDB/kevo/proto/kevo"
"github.com/KevoDB/kevo/pkg/transport"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure"
"google.golang.org/grpc/keepalive"
)
// GRPCClient implements the transport.Client interface for gRPC
type GRPCClient struct {
endpoint string
options transport.TransportOptions
conn *grpc.ClientConn
client pb.KevoServiceClient
status transport.TransportStatus
statusMu sync.RWMutex
metrics transport.MetricsCollector
}
// NewGRPCClient creates a new gRPC client
func NewGRPCClient(endpoint string, options transport.TransportOptions) (transport.Client, error) {
return &GRPCClient{
endpoint: endpoint,
options: options,
metrics: transport.NewMetricsCollector(),
status: transport.TransportStatus{
Connected: false,
},
}, nil
}
// Connect establishes a connection to the server
func (c *GRPCClient) Connect(ctx context.Context) error {
dialOptions := []grpc.DialOption{
grpc.WithKeepaliveParams(keepalive.ClientParameters{
Time: 15 * time.Second,
Timeout: 5 * time.Second,
PermitWithoutStream: true,
}),
}
// Configure TLS if enabled
if c.options.TLSEnabled {
tlsConfig := &tls.Config{
MinVersion: tls.VersionTLS12,
}
// Load client certificate if provided
if c.options.CertFile != "" && c.options.KeyFile != "" {
cert, err := tls.LoadX509KeyPair(c.options.CertFile, c.options.KeyFile)
if err != nil {
c.metrics.RecordConnection(false)
return fmt.Errorf("failed to load client certificate: %w", err)
}
tlsConfig.Certificates = []tls.Certificate{cert}
}
// Add credentials to dial options
dialOptions = append(dialOptions, grpc.WithTransportCredentials(credentials.NewTLS(tlsConfig)))
} else {
// Use insecure credentials if TLS is not enabled
dialOptions = append(dialOptions, grpc.WithTransportCredentials(insecure.NewCredentials()))
}
// Set timeout for connection
dialCtx, cancel := context.WithTimeout(ctx, c.options.Timeout)
defer cancel()
// Connect to the server
conn, err := grpc.DialContext(dialCtx, c.endpoint, dialOptions...)
if err != nil {
c.metrics.RecordConnection(false)
c.setStatus(false, err)
return fmt.Errorf("failed to connect to %s: %w", c.endpoint, err)
}
c.conn = conn
c.client = pb.NewKevoServiceClient(conn)
c.metrics.RecordConnection(true)
c.setStatus(true, nil)
return nil
}
// Close closes the connection
func (c *GRPCClient) Close() error {
if c.conn != nil {
err := c.conn.Close()
c.conn = nil
c.client = nil
c.setStatus(false, nil)
return err
}
return nil
}
// IsConnected returns whether the client is connected
func (c *GRPCClient) IsConnected() bool {
c.statusMu.RLock()
defer c.statusMu.RUnlock()
return c.status.Connected
}
// Status returns the current status of the connection
func (c *GRPCClient) Status() transport.TransportStatus {
c.statusMu.RLock()
defer c.statusMu.RUnlock()
return c.status
}
// setStatus updates the client status
func (c *GRPCClient) setStatus(connected bool, err error) {
c.statusMu.Lock()
defer c.statusMu.Unlock()
c.status.Connected = connected
c.status.LastError = err
if connected {
c.status.LastConnected = time.Now()
}
}
// Send sends a request and waits for a response
func (c *GRPCClient) Send(ctx context.Context, request transport.Request) (transport.Response, error) {
if !c.IsConnected() {
return nil, transport.ErrNotConnected
}
// Record request metrics
startTime := time.Now()
requestType := request.Type()
// Record bytes sent
requestPayload := request.Payload()
c.metrics.RecordSend(len(requestPayload))
var resp transport.Response
var err error
// Handle request based on type
switch requestType {
case transport.TypeGet:
resp, err = c.handleGet(ctx, requestPayload)
case transport.TypePut:
resp, err = c.handlePut(ctx, requestPayload)
case transport.TypeDelete:
resp, err = c.handleDelete(ctx, requestPayload)
case transport.TypeBatchWrite:
resp, err = c.handleBatchWrite(ctx, requestPayload)
case transport.TypeBeginTx:
resp, err = c.handleBeginTransaction(ctx, requestPayload)
case transport.TypeCommitTx:
resp, err = c.handleCommitTransaction(ctx, requestPayload)
case transport.TypeRollbackTx:
resp, err = c.handleRollbackTransaction(ctx, requestPayload)
case transport.TypeTxGet:
resp, err = c.handleTxGet(ctx, requestPayload)
case transport.TypeTxPut:
resp, err = c.handleTxPut(ctx, requestPayload)
case transport.TypeTxDelete:
resp, err = c.handleTxDelete(ctx, requestPayload)
case transport.TypeGetStats:
resp, err = c.handleGetStats(ctx, requestPayload)
case transport.TypeCompact:
resp, err = c.handleCompact(ctx, requestPayload)
default:
err = fmt.Errorf("unsupported request type: %s", requestType)
resp = transport.NewErrorResponse(err)
}
// Record metrics for the request
c.metrics.RecordRequest(requestType, startTime, err)
// If we got a response, record received bytes
if resp != nil {
c.metrics.RecordReceive(len(resp.Payload()))
}
return resp, err
}
// Stream opens a bidirectional stream
func (c *GRPCClient) Stream(ctx context.Context) (transport.Stream, error) {
if !c.IsConnected() {
return nil, transport.ErrNotConnected
}
// For now, we'll implement streaming only for scan operations
return nil, fmt.Errorf("streaming not fully implemented yet")
}
// Request handler methods
func (c *GRPCClient) handleGet(ctx context.Context, payload []byte) (transport.Response, error) {
var req struct {
Key []byte `json:"key"`
}
if err := json.Unmarshal(payload, &req); err != nil {
return transport.NewErrorResponse(fmt.Errorf("invalid get request payload: %w", err)), err
}
grpcReq := &pb.GetRequest{
Key: req.Key,
}
grpcResp, err := c.client.Get(ctx, grpcReq)
if err != nil {
return transport.NewErrorResponse(err), err
}
resp := struct {
Value []byte `json:"value"`
Found bool `json:"found"`
}{
Value: grpcResp.Value,
Found: grpcResp.Found,
}
respData, err := json.Marshal(resp)
if err != nil {
return transport.NewErrorResponse(err), err
}
return transport.NewResponse(transport.TypeGet, respData, nil), nil
}
func (c *GRPCClient) handlePut(ctx context.Context, payload []byte) (transport.Response, error) {
var req struct {
Key []byte `json:"key"`
Value []byte `json:"value"`
Sync bool `json:"sync"`
}
if err := json.Unmarshal(payload, &req); err != nil {
return transport.NewErrorResponse(fmt.Errorf("invalid put request payload: %w", err)), err
}
grpcReq := &pb.PutRequest{
Key: req.Key,
Value: req.Value,
Sync: req.Sync,
}
grpcResp, err := c.client.Put(ctx, grpcReq)
if err != nil {
return transport.NewErrorResponse(err), err
}
resp := struct {
Success bool `json:"success"`
}{
Success: grpcResp.Success,
}
respData, err := json.Marshal(resp)
if err != nil {
return transport.NewErrorResponse(err), err
}
return transport.NewResponse(transport.TypePut, respData, nil), nil
}
func (c *GRPCClient) handleDelete(ctx context.Context, payload []byte) (transport.Response, error) {
var req struct {
Key []byte `json:"key"`
Sync bool `json:"sync"`
}
if err := json.Unmarshal(payload, &req); err != nil {
return transport.NewErrorResponse(fmt.Errorf("invalid delete request payload: %w", err)), err
}
grpcReq := &pb.DeleteRequest{
Key: req.Key,
Sync: req.Sync,
}
grpcResp, err := c.client.Delete(ctx, grpcReq)
if err != nil {
return transport.NewErrorResponse(err), err
}
resp := struct {
Success bool `json:"success"`
}{
Success: grpcResp.Success,
}
respData, err := json.Marshal(resp)
if err != nil {
return transport.NewErrorResponse(err), err
}
return transport.NewResponse(transport.TypeDelete, respData, nil), nil
}
func (c *GRPCClient) handleBatchWrite(ctx context.Context, payload []byte) (transport.Response, error) {
var req struct {
Operations []struct {
Type string `json:"type"`
Key []byte `json:"key"`
Value []byte `json:"value"`
} `json:"operations"`
Sync bool `json:"sync"`
}
if err := json.Unmarshal(payload, &req); err != nil {
return transport.NewErrorResponse(fmt.Errorf("invalid batch write request payload: %w", err)), err
}
operations := make([]*pb.Operation, len(req.Operations))
for i, op := range req.Operations {
pbOp := &pb.Operation{
Key: op.Key,
Value: op.Value,
}
switch op.Type {
case "put":
pbOp.Type = pb.Operation_PUT
case "delete":
pbOp.Type = pb.Operation_DELETE
default:
return transport.NewErrorResponse(fmt.Errorf("invalid operation type: %s", op.Type)), fmt.Errorf("invalid operation type: %s", op.Type)
}
operations[i] = pbOp
}
grpcReq := &pb.BatchWriteRequest{
Operations: operations,
Sync: req.Sync,
}
grpcResp, err := c.client.BatchWrite(ctx, grpcReq)
if err != nil {
return transport.NewErrorResponse(err), err
}
resp := struct {
Success bool `json:"success"`
}{
Success: grpcResp.Success,
}
respData, err := json.Marshal(resp)
if err != nil {
return transport.NewErrorResponse(err), err
}
return transport.NewResponse(transport.TypeBatchWrite, respData, nil), nil
}
func (c *GRPCClient) handleBeginTransaction(ctx context.Context, payload []byte) (transport.Response, error) {
var req struct {
ReadOnly bool `json:"read_only"`
}
if err := json.Unmarshal(payload, &req); err != nil {
return transport.NewErrorResponse(fmt.Errorf("invalid begin transaction request payload: %w", err)), err
}
grpcReq := &pb.BeginTransactionRequest{
ReadOnly: req.ReadOnly,
}
grpcResp, err := c.client.BeginTransaction(ctx, grpcReq)
if err != nil {
return transport.NewErrorResponse(err), err
}
resp := struct {
TransactionID string `json:"transaction_id"`
}{
TransactionID: grpcResp.TransactionId,
}
respData, err := json.Marshal(resp)
if err != nil {
return transport.NewErrorResponse(err), err
}
return transport.NewResponse(transport.TypeBeginTx, respData, nil), nil
}
func (c *GRPCClient) handleCommitTransaction(ctx context.Context, payload []byte) (transport.Response, error) {
var req struct {
TransactionID string `json:"transaction_id"`
}
if err := json.Unmarshal(payload, &req); err != nil {
return transport.NewErrorResponse(fmt.Errorf("invalid commit transaction request payload: %w", err)), err
}
grpcReq := &pb.CommitTransactionRequest{
TransactionId: req.TransactionID,
}
grpcResp, err := c.client.CommitTransaction(ctx, grpcReq)
if err != nil {
return transport.NewErrorResponse(err), err
}
resp := struct {
Success bool `json:"success"`
}{
Success: grpcResp.Success,
}
respData, err := json.Marshal(resp)
if err != nil {
return transport.NewErrorResponse(err), err
}
return transport.NewResponse(transport.TypeCommitTx, respData, nil), nil
}
func (c *GRPCClient) handleRollbackTransaction(ctx context.Context, payload []byte) (transport.Response, error) {
var req struct {
TransactionID string `json:"transaction_id"`
}
if err := json.Unmarshal(payload, &req); err != nil {
return transport.NewErrorResponse(fmt.Errorf("invalid rollback transaction request payload: %w", err)), err
}
grpcReq := &pb.RollbackTransactionRequest{
TransactionId: req.TransactionID,
}
grpcResp, err := c.client.RollbackTransaction(ctx, grpcReq)
if err != nil {
return transport.NewErrorResponse(err), err
}
resp := struct {
Success bool `json:"success"`
}{
Success: grpcResp.Success,
}
respData, err := json.Marshal(resp)
if err != nil {
return transport.NewErrorResponse(err), err
}
return transport.NewResponse(transport.TypeRollbackTx, respData, nil), nil
}
func (c *GRPCClient) handleTxGet(ctx context.Context, payload []byte) (transport.Response, error) {
var req struct {
TransactionID string `json:"transaction_id"`
Key []byte `json:"key"`
}
if err := json.Unmarshal(payload, &req); err != nil {
return transport.NewErrorResponse(fmt.Errorf("invalid tx get request payload: %w", err)), err
}
grpcReq := &pb.TxGetRequest{
TransactionId: req.TransactionID,
Key: req.Key,
}
grpcResp, err := c.client.TxGet(ctx, grpcReq)
if err != nil {
return transport.NewErrorResponse(err), err
}
resp := struct {
Value []byte `json:"value"`
Found bool `json:"found"`
}{
Value: grpcResp.Value,
Found: grpcResp.Found,
}
respData, err := json.Marshal(resp)
if err != nil {
return transport.NewErrorResponse(err), err
}
return transport.NewResponse(transport.TypeTxGet, respData, nil), nil
}
func (c *GRPCClient) handleTxPut(ctx context.Context, payload []byte) (transport.Response, error) {
var req struct {
TransactionID string `json:"transaction_id"`
Key []byte `json:"key"`
Value []byte `json:"value"`
}
if err := json.Unmarshal(payload, &req); err != nil {
return transport.NewErrorResponse(fmt.Errorf("invalid tx put request payload: %w", err)), err
}
grpcReq := &pb.TxPutRequest{
TransactionId: req.TransactionID,
Key: req.Key,
Value: req.Value,
}
grpcResp, err := c.client.TxPut(ctx, grpcReq)
if err != nil {
return transport.NewErrorResponse(err), err
}
resp := struct {
Success bool `json:"success"`
}{
Success: grpcResp.Success,
}
respData, err := json.Marshal(resp)
if err != nil {
return transport.NewErrorResponse(err), err
}
return transport.NewResponse(transport.TypeTxPut, respData, nil), nil
}
func (c *GRPCClient) handleTxDelete(ctx context.Context, payload []byte) (transport.Response, error) {
var req struct {
TransactionID string `json:"transaction_id"`
Key []byte `json:"key"`
}
if err := json.Unmarshal(payload, &req); err != nil {
return transport.NewErrorResponse(fmt.Errorf("invalid tx delete request payload: %w", err)), err
}
grpcReq := &pb.TxDeleteRequest{
TransactionId: req.TransactionID,
Key: req.Key,
}
grpcResp, err := c.client.TxDelete(ctx, grpcReq)
if err != nil {
return transport.NewErrorResponse(err), err
}
resp := struct {
Success bool `json:"success"`
}{
Success: grpcResp.Success,
}
respData, err := json.Marshal(resp)
if err != nil {
return transport.NewErrorResponse(err), err
}
return transport.NewResponse(transport.TypeTxDelete, respData, nil), nil
}
func (c *GRPCClient) handleGetStats(ctx context.Context, payload []byte) (transport.Response, error) {
grpcReq := &pb.GetStatsRequest{}
grpcResp, err := c.client.GetStats(ctx, grpcReq)
if err != nil {
return transport.NewErrorResponse(err), err
}
resp := struct {
KeyCount int64 `json:"key_count"`
StorageSize int64 `json:"storage_size"`
MemtableCount int32 `json:"memtable_count"`
SstableCount int32 `json:"sstable_count"`
WriteAmplification float64 `json:"write_amplification"`
ReadAmplification float64 `json:"read_amplification"`
}{
KeyCount: grpcResp.KeyCount,
StorageSize: grpcResp.StorageSize,
MemtableCount: grpcResp.MemtableCount,
SstableCount: grpcResp.SstableCount,
WriteAmplification: grpcResp.WriteAmplification,
ReadAmplification: grpcResp.ReadAmplification,
}
respData, err := json.Marshal(resp)
if err != nil {
return transport.NewErrorResponse(err), err
}
return transport.NewResponse(transport.TypeGetStats, respData, nil), nil
}
func (c *GRPCClient) handleCompact(ctx context.Context, payload []byte) (transport.Response, error) {
var req struct {
Force bool `json:"force"`
}
if err := json.Unmarshal(payload, &req); err != nil {
return transport.NewErrorResponse(fmt.Errorf("invalid compact request payload: %w", err)), err
}
grpcReq := &pb.CompactRequest{
Force: req.Force,
}
grpcResp, err := c.client.Compact(ctx, grpcReq)
if err != nil {
return transport.NewErrorResponse(err), err
}
resp := struct {
Success bool `json:"success"`
}{
Success: grpcResp.Success,
}
respData, err := json.Marshal(resp)
if err != nil {
return transport.NewErrorResponse(err), err
}
return transport.NewResponse(transport.TypeCompact, respData, nil), nil
}
// GRPCScanStream implements the transport.Stream interface for scan operations
type GRPCScanStream struct {
ctx context.Context
cancel context.CancelFunc
stream pb.KevoService_ScanClient
client *GRPCClient
streamType string
}
func (s *GRPCScanStream) Send(request transport.Request) error {
return fmt.Errorf("sending to scan stream not supported")
}
func (s *GRPCScanStream) Recv() (transport.Response, error) {
resp, err := s.stream.Recv()
if err != nil {
if err == io.EOF {
return nil, io.EOF
}
return transport.NewErrorResponse(err), err
}
// Build response based on scan type
scanResp := struct {
Key []byte `json:"key"`
Value []byte `json:"value"`
}{
Key: resp.Key,
Value: resp.Value,
}
respData, err := json.Marshal(scanResp)
if err != nil {
return transport.NewErrorResponse(err), err
}
s.client.metrics.RecordReceive(len(respData))
return transport.NewResponse(s.streamType, respData, nil), nil
}
func (s *GRPCScanStream) Close() error {
s.cancel()
return nil
}

View File

@ -0,0 +1,287 @@
package transport
import (
"context"
"fmt"
"net"
"sync"
"time"
pb "github.com/KevoDB/kevo/proto/kevo"
"github.com/KevoDB/kevo/pkg/transport"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure"
"google.golang.org/grpc/keepalive"
)
// Constants for default timeout values
const (
defaultDialTimeout = 5 * time.Second
defaultConnectTimeout = 5 * time.Second
defaultKeepAliveTime = 15 * time.Second
defaultKeepAlivePolicy = 5 * time.Second
defaultMaxConnIdle = 60 * time.Second
defaultMaxConnAge = 5 * time.Minute
)
// GRPCTransportManager manages gRPC connections
type GRPCTransportManager struct {
opts *GRPCTransportOptions
server *grpc.Server
listener net.Listener
connections sync.Map // map[string]*grpc.ClientConn
mu sync.RWMutex
metrics *transport.ExtendedMetricsCollector
}
// Ensure GRPCTransportManager implements TransportManager
var _ transport.TransportManager = (*GRPCTransportManager)(nil)
// DefaultGRPCTransportOptions returns default transport options
func DefaultGRPCTransportOptions() *GRPCTransportOptions {
return &GRPCTransportOptions{
ListenAddr: ":50051",
ConnectionTimeout: defaultConnectTimeout,
DialTimeout: defaultDialTimeout,
KeepAliveTime: defaultKeepAliveTime,
KeepAliveTimeout: defaultKeepAlivePolicy,
MaxConnectionIdle: defaultMaxConnIdle,
MaxConnectionAge: defaultMaxConnAge,
}
}
// NewGRPCTransportManager creates a new gRPC transport manager
func NewGRPCTransportManager(opts *GRPCTransportOptions) (*GRPCTransportManager, error) {
if opts == nil {
opts = DefaultGRPCTransportOptions()
}
metrics := transport.NewMetrics("grpc")
return &GRPCTransportManager{
opts: opts,
metrics: metrics,
}, nil
}
// Start starts the gRPC server
// Serve starts the server and blocks until it's stopped
func (g *GRPCTransportManager) Serve() error {
ctx := context.Background()
if err := g.Start(); err != nil {
return err
}
// Block until server is stopped
<-ctx.Done()
return nil
}
// Start starts the server and returns immediately
func (g *GRPCTransportManager) Start() error {
g.mu.Lock()
defer g.mu.Unlock()
if g.server != nil {
return fmt.Errorf("gRPC transport already started")
}
var serverOpts []grpc.ServerOption
// Configure TLS if provided
if g.opts.TLSConfig != nil {
serverOpts = append(serverOpts, grpc.Creds(credentials.NewTLS(g.opts.TLSConfig)))
}
// Configure keepalive parameters
kaProps := keepalive.ServerParameters{
MaxConnectionIdle: g.opts.MaxConnectionIdle,
MaxConnectionAge: g.opts.MaxConnectionAge,
Time: g.opts.KeepAliveTime,
Timeout: g.opts.KeepAliveTimeout,
}
kaPolicy := keepalive.EnforcementPolicy{
MinTime: g.opts.KeepAliveTime / 2,
PermitWithoutStream: true,
}
serverOpts = append(serverOpts,
grpc.KeepaliveParams(kaProps),
grpc.KeepaliveEnforcementPolicy(kaPolicy),
)
// Create and start the gRPC server
g.server = grpc.NewServer(serverOpts...)
// Start listening
listener, err := net.Listen("tcp", g.opts.ListenAddr)
if err != nil {
return fmt.Errorf("failed to listen on %s: %w", g.opts.ListenAddr, err)
}
g.listener = listener
// Start server in a goroutine
go func() {
g.metrics.ServerStarted()
if err := g.server.Serve(listener); err != nil {
g.metrics.ServerErrored()
// Just log the error, as this is running in a goroutine
fmt.Printf("gRPC server stopped: %v\n", err)
}
}()
return nil
}
// Stop stops the gRPC server
func (g *GRPCTransportManager) Stop(ctx context.Context) error {
g.mu.Lock()
defer g.mu.Unlock()
if g.server == nil {
return nil
}
// Close all client connections
g.connections.Range(func(key, value interface{}) bool {
conn := value.(*grpc.ClientConn)
conn.Close()
g.connections.Delete(key)
return true
})
// Gracefully stop the server
stopped := make(chan struct{})
go func() {
g.server.GracefulStop()
close(stopped)
}()
// Wait for graceful stop or context deadline
select {
case <-stopped:
// Server stopped gracefully
case <-ctx.Done():
// Context deadline exceeded, force stop
g.server.Stop()
}
g.metrics.ServerStopped()
g.server = nil
return nil
}
// Connect creates a connection to the specified address
func (g *GRPCTransportManager) Connect(ctx context.Context, address string) (transport.Connection, error) {
g.mu.RLock()
defer g.mu.RUnlock()
// Check if we already have a connection to this address
if conn, ok := g.connections.Load(address); ok {
return &GRPCConnection{
conn: conn.(*grpc.ClientConn),
address: address,
metrics: g.metrics,
lastUsed: time.Now(),
}, nil
}
// Set connection options
dialOptions := []grpc.DialOption{
grpc.WithBlock(),
}
// Add TLS if configured
if g.opts.TLSConfig != nil {
dialOptions = append(dialOptions, grpc.WithTransportCredentials(
credentials.NewTLS(g.opts.TLSConfig),
))
} else {
dialOptions = append(dialOptions, grpc.WithTransportCredentials(insecure.NewCredentials()))
}
// Add keepalive options
dialOptions = append(dialOptions, grpc.WithKeepaliveParams(keepalive.ClientParameters{
Time: g.opts.KeepAliveTime,
Timeout: g.opts.KeepAliveTimeout,
PermitWithoutStream: true,
}))
// Connect with timeout
dialCtx, cancel := context.WithTimeout(ctx, g.opts.DialTimeout)
defer cancel()
// Dial the server
conn, err := grpc.DialContext(dialCtx, address, dialOptions...)
if err != nil {
g.metrics.ConnectionFailed()
return nil, fmt.Errorf("failed to connect to %s: %w", address, err)
}
// Store the connection
g.connections.Store(address, conn)
g.metrics.ConnectionOpened()
return &GRPCConnection{
conn: conn,
address: address,
metrics: g.metrics,
lastUsed: time.Now(),
}, nil
}
// SetRequestHandler sets the request handler for the server
func (g *GRPCTransportManager) SetRequestHandler(handler transport.RequestHandler) {
// This would be implemented in a real server
}
// RegisterService registers a service with the gRPC server
func (g *GRPCTransportManager) RegisterService(service interface{}) error {
g.mu.Lock()
defer g.mu.Unlock()
if g.server == nil {
return fmt.Errorf("server not started, cannot register service")
}
// Type assert to KevoServiceServer and register
kevoService, ok := service.(pb.KevoServiceServer)
if !ok {
return fmt.Errorf("service is not a valid KevoServiceServer implementation")
}
pb.RegisterKevoServiceServer(g.server, kevoService)
return nil
}
// Register the transport with the registry
func init() {
transport.RegisterServerTransport("grpc", func(address string, options transport.TransportOptions) (transport.Server, error) {
// Convert the generic options to our specific options
grpcOpts := &GRPCTransportOptions{
ListenAddr: address,
TLSConfig: nil, // We'll set this up if TLS is enabled
ConnectionTimeout: options.Timeout,
DialTimeout: options.Timeout,
KeepAliveTime: defaultKeepAliveTime,
KeepAliveTimeout: defaultKeepAlivePolicy,
MaxConnectionIdle: defaultMaxConnIdle,
MaxConnectionAge: defaultMaxConnAge,
}
// Set up TLS if enabled
if options.TLSEnabled && options.CertFile != "" && options.KeyFile != "" {
tlsConfig, err := LoadServerTLSConfig(options.CertFile, options.KeyFile, options.CAFile)
if err != nil {
return nil, fmt.Errorf("failed to load TLS config: %w", err)
}
grpcOpts.TLSConfig = tlsConfig
}
return NewGRPCTransportManager(grpcOpts)
})
transport.RegisterClientTransport("grpc", func(endpoint string, options transport.TransportOptions) (transport.Client, error) {
return NewGRPCClient(endpoint, options)
})
}

View File

@ -0,0 +1,63 @@
package transport
import (
"testing"
)
// Simple smoke test for the gRPC transport
func TestNewGRPCTransportManager(t *testing.T) {
opts := DefaultGRPCTransportOptions()
// Override the listen address to avoid port conflicts
opts.ListenAddr = ":0" // use random available port
manager, err := NewGRPCTransportManager(opts)
if err != nil {
t.Fatalf("Failed to create transport manager: %v", err)
}
// Verify the manager was created
if manager == nil {
t.Fatal("Expected non-nil manager")
}
}
// Test for the server TLS configuration
func TestLoadServerTLSConfig(t *testing.T) {
// Skip actual loading, just test validation
_, err := LoadServerTLSConfig("", "", "")
if err == nil {
t.Fatal("Expected error for empty cert/key")
}
}
// Test for the client TLS configuration
func TestLoadClientTLSConfig(t *testing.T) {
// Test with insecure config
config, err := LoadClientTLSConfig("", "", "", true)
if err != nil {
t.Fatalf("Failed to create insecure TLS config: %v", err)
}
if config == nil {
t.Fatal("Expected non-nil TLS config")
}
if !config.InsecureSkipVerify {
t.Fatal("Expected InsecureSkipVerify to be true")
}
}
// Skip actual TLS certificate loading by providing empty values
func TestLoadClientTLSConfigFromStruct(t *testing.T) {
config, err := LoadClientTLSConfigFromStruct(&TLSConfig{
SkipVerify: true,
})
if err != nil {
t.Fatalf("Failed to create TLS config from struct: %v", err)
}
if config == nil {
t.Fatal("Expected non-nil TLS config")
}
if !config.InsecureSkipVerify {
t.Fatal("Expected InsecureSkipVerify to be true")
}
}

210
pkg/grpc/transport/pool.go Normal file
View File

@ -0,0 +1,210 @@
package transport
import (
"context"
"errors"
"sync"
"time"
)
var (
ErrPoolClosed = errors.New("connection pool is closed")
ErrPoolFull = errors.New("connection pool is full")
ErrPoolEmptyNoWait = errors.New("connection pool is empty")
)
// ConnectionPool manages a pool of gRPC connections
type ConnectionPool struct {
manager *GRPCTransportManager
address string
maxIdle int
maxActive int
idlePool chan *GRPCConnection
activePool chan struct{}
mu sync.Mutex
closed bool
idleTime time.Duration
}
// NewConnectionPool creates a new connection pool
func NewConnectionPool(manager *GRPCTransportManager, address string, maxIdle, maxActive int, idleTime time.Duration) *ConnectionPool {
if maxIdle <= 0 {
maxIdle = 2
}
if maxActive <= 0 {
maxActive = 10
}
if idleTime <= 0 {
idleTime = 5 * time.Minute
}
pool := &ConnectionPool{
manager: manager,
address: address,
maxIdle: maxIdle,
maxActive: maxActive,
idlePool: make(chan *GRPCConnection, maxIdle),
activePool: make(chan struct{}, maxActive),
idleTime: idleTime,
}
return pool
}
// Get retrieves a connection from the pool or creates a new one
func (p *ConnectionPool) Get(ctx context.Context, wait bool) (*GRPCConnection, error) {
p.mu.Lock()
if p.closed {
p.mu.Unlock()
return nil, ErrPoolClosed
}
p.mu.Unlock()
// Try to get an idle connection
select {
case conn := <-p.idlePool:
return conn, nil
default:
// No idle connections available
}
// Check if we can create a new connection
select {
case p.activePool <- struct{}{}:
// We acquired a slot to create a new connection
conn, err := p.createConnection(ctx)
if err != nil {
// If connection creation fails, release the active slot
<-p.activePool
return nil, err
}
return conn, nil
default:
// Pool is full, check if we should wait
if !wait {
return nil, ErrPoolEmptyNoWait
}
}
// Wait for a connection to become available or context to expire
select {
case conn := <-p.idlePool:
// Got an idle connection
return conn, nil
case p.activePool <- struct{}{}:
// Got permission to create a new connection
conn, err := p.createConnection(ctx)
if err != nil {
<-p.activePool
return nil, err
}
return conn, nil
case <-ctx.Done():
// Context deadline exceeded or canceled
return nil, ctx.Err()
}
}
// createConnection creates a new connection to the server
func (p *ConnectionPool) createConnection(ctx context.Context) (*GRPCConnection, error) {
conn, err := p.manager.Connect(ctx, p.address)
if err != nil {
return nil, err
}
// Convert to our internal type
grpcConn, ok := conn.(*GRPCConnection)
if !ok {
conn.Close()
return nil, errors.New("invalid connection type")
}
return grpcConn, nil
}
// Put returns a connection to the pool
func (p *ConnectionPool) Put(conn *GRPCConnection) error {
p.mu.Lock()
if p.closed {
p.mu.Unlock()
conn.Close() // Close the connection since the pool is closed
<-p.activePool
return nil
}
p.mu.Unlock()
// Try to add to the idle pool
select {
case p.idlePool <- conn:
// Successfully returned to idle pool
return nil
default:
// Idle pool full, close the connection
conn.Close()
<-p.activePool
return nil
}
}
// Close closes the connection pool and all idle connections
func (p *ConnectionPool) Close() error {
p.mu.Lock()
defer p.mu.Unlock()
if p.closed {
return ErrPoolClosed
}
p.closed = true
// Close all idle connections
close(p.idlePool)
for conn := range p.idlePool {
conn.Close()
<-p.activePool
}
return nil
}
// ConnectionPoolManager manages multiple connection pools
type ConnectionPoolManager struct {
manager *GRPCTransportManager
pools sync.Map // map[string]*ConnectionPool
defaultMaxIdle int
defaultMaxActive int
defaultIdleTime time.Duration
}
// NewConnectionPoolManager creates a new connection pool manager
func NewConnectionPoolManager(manager *GRPCTransportManager, maxIdle, maxActive int, idleTime time.Duration) *ConnectionPoolManager {
return &ConnectionPoolManager{
manager: manager,
defaultMaxIdle: maxIdle,
defaultMaxActive: maxActive,
defaultIdleTime: idleTime,
}
}
// GetPool gets or creates a connection pool for the given address
func (m *ConnectionPoolManager) GetPool(address string) *ConnectionPool {
// Check if pool exists
if pool, ok := m.pools.Load(address); ok {
return pool.(*ConnectionPool)
}
// Create new pool
pool := NewConnectionPool(m.manager, address, m.defaultMaxIdle, m.defaultMaxActive, m.defaultIdleTime)
m.pools.Store(address, pool)
return pool
}
// CloseAll closes all connection pools
func (m *ConnectionPoolManager) CloseAll() {
m.pools.Range(func(key, value interface{}) bool {
pool := value.(*ConnectionPool)
pool.Close()
m.pools.Delete(key)
return true
})
}

View File

@ -0,0 +1,154 @@
package transport
import (
"context"
"crypto/tls"
"fmt"
"sync"
"time"
pb "github.com/KevoDB/kevo/proto/kevo"
"github.com/KevoDB/kevo/pkg/transport"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/keepalive"
)
// GRPCServer implements the transport.Server interface for gRPC
type GRPCServer struct {
address string
tlsConfig *tls.Config
server *grpc.Server
requestHandler transport.RequestHandler
started bool
mu sync.Mutex
metrics *transport.ExtendedMetricsCollector
}
// NewGRPCServer creates a new gRPC server
func NewGRPCServer(address string, options transport.TransportOptions) (transport.Server, error) {
// Create server options
var serverOpts []grpc.ServerOption
// Configure TLS if enabled
if options.TLSEnabled {
tlsConfig, err := LoadServerTLSConfig(options.CertFile, options.KeyFile, options.CAFile)
if err != nil {
return nil, fmt.Errorf("failed to load TLS config: %w", err)
}
serverOpts = append(serverOpts, grpc.Creds(credentials.NewTLS(tlsConfig)))
}
// Configure keepalive parameters
kaProps := keepalive.ServerParameters{
MaxConnectionIdle: 30 * time.Minute,
MaxConnectionAge: 5 * time.Minute,
Time: 15 * time.Second,
Timeout: 5 * time.Second,
}
kaPolicy := keepalive.EnforcementPolicy{
MinTime: 10 * time.Second,
PermitWithoutStream: true,
}
serverOpts = append(serverOpts,
grpc.KeepaliveParams(kaProps),
grpc.KeepaliveEnforcementPolicy(kaPolicy),
)
// Create the server
server := grpc.NewServer(serverOpts...)
return &GRPCServer{
address: address,
server: server,
metrics: transport.NewMetrics("grpc"),
}, nil
}
// Start starts the server and returns immediately
func (s *GRPCServer) Start() error {
s.mu.Lock()
defer s.mu.Unlock()
if s.started {
return fmt.Errorf("server already started")
}
// Start the server in a goroutine
go func() {
if err := s.Serve(); err != nil {
fmt.Printf("gRPC server error: %v\n", err)
}
}()
s.started = true
return nil
}
// Serve starts the server and blocks until it's stopped
func (s *GRPCServer) Serve() error {
if s.requestHandler == nil {
return fmt.Errorf("no request handler set")
}
// Create the service implementation
service := &kevoServiceServer{
handler: s.requestHandler,
}
// Register the service
pb.RegisterKevoServiceServer(s.server, service)
// Start listening
listener, err := transport.CreateListener("tcp", s.address, s.tlsConfig)
if err != nil {
return fmt.Errorf("failed to listen on %s: %w", s.address, err)
}
s.metrics.ServerStarted()
// Serve requests
err = s.server.Serve(listener)
if err != nil {
s.metrics.ServerErrored()
return fmt.Errorf("failed to serve: %w", err)
}
s.metrics.ServerStopped()
return nil
}
// Stop stops the server gracefully
func (s *GRPCServer) Stop(ctx context.Context) error {
s.mu.Lock()
defer s.mu.Unlock()
if !s.started {
return nil
}
s.server.GracefulStop()
s.started = false
return nil
}
// SetRequestHandler sets the handler for incoming requests
func (s *GRPCServer) SetRequestHandler(handler transport.RequestHandler) {
s.mu.Lock()
defer s.mu.Unlock()
s.requestHandler = handler
}
// kevoServiceServer implements the KevoService gRPC service
type kevoServiceServer struct {
pb.UnimplementedKevoServiceServer
handler transport.RequestHandler
}
// TODO: Implement service methods

95
pkg/grpc/transport/tls.go Normal file
View File

@ -0,0 +1,95 @@
package transport
import (
"crypto/tls"
"crypto/x509"
"fmt"
"io/ioutil"
)
// TLSConfig holds TLS configuration settings
type TLSConfig struct {
CertFile string
KeyFile string
CAFile string
SkipVerify bool
}
// LoadServerTLSConfig loads TLS configuration for server
func LoadServerTLSConfig(certFile, keyFile, caFile string) (*tls.Config, error) {
// Check if both cert and key files are provided
if certFile == "" || keyFile == "" {
return nil, fmt.Errorf("both certificate and key files must be provided")
}
// Load server certificate and key
cert, err := tls.LoadX509KeyPair(certFile, keyFile)
if err != nil {
return nil, fmt.Errorf("failed to load server key pair: %w", err)
}
tlsConfig := &tls.Config{
Certificates: []tls.Certificate{cert},
MinVersion: tls.VersionTLS12,
}
// Load CA if provided for client authentication
if caFile != "" {
caBytes, err := ioutil.ReadFile(caFile)
if err != nil {
return nil, fmt.Errorf("failed to read CA certificate: %w", err)
}
certPool := x509.NewCertPool()
if !certPool.AppendCertsFromPEM(caBytes) {
return nil, fmt.Errorf("failed to parse CA certificate")
}
tlsConfig.ClientCAs = certPool
tlsConfig.ClientAuth = tls.RequireAndVerifyClientCert
}
return tlsConfig, nil
}
// LoadClientTLSConfig loads TLS configuration for client
func LoadClientTLSConfig(certFile, keyFile, caFile string, skipVerify bool) (*tls.Config, error) {
tlsConfig := &tls.Config{
MinVersion: tls.VersionTLS12,
InsecureSkipVerify: skipVerify,
}
// Load client certificate and key if provided
if certFile != "" && keyFile != "" {
cert, err := tls.LoadX509KeyPair(certFile, keyFile)
if err != nil {
return nil, fmt.Errorf("failed to load client key pair: %w", err)
}
tlsConfig.Certificates = []tls.Certificate{cert}
}
// Load CA if provided for server authentication
if caFile != "" {
caBytes, err := ioutil.ReadFile(caFile)
if err != nil {
return nil, fmt.Errorf("failed to read CA certificate: %w", err)
}
certPool := x509.NewCertPool()
if !certPool.AppendCertsFromPEM(caBytes) {
return nil, fmt.Errorf("failed to parse CA certificate")
}
tlsConfig.RootCAs = certPool
}
return tlsConfig, nil
}
// LoadClientTLSConfigFromStruct is a convenience method to load TLS config from TLSConfig struct
func LoadClientTLSConfigFromStruct(config *TLSConfig) (*tls.Config, error) {
if config == nil {
return &tls.Config{MinVersion: tls.VersionTLS12}, nil
}
return LoadClientTLSConfig(config.CertFile, config.KeyFile, config.CAFile, config.SkipVerify)
}

View File

@ -0,0 +1,84 @@
package transport
import (
"crypto/tls"
"sync"
"time"
pb "github.com/KevoDB/kevo/proto/kevo"
"github.com/KevoDB/kevo/pkg/transport"
"google.golang.org/grpc"
)
// GRPCConnection implements the transport.Connection interface for gRPC connections
type GRPCConnection struct {
conn *grpc.ClientConn
address string
metrics *transport.ExtendedMetricsCollector
lastUsed time.Time
mu sync.RWMutex
reqCount int
errCount int
}
// Execute runs a function with the gRPC client
func (c *GRPCConnection) Execute(fn func(interface{}) error) error {
c.mu.Lock()
c.lastUsed = time.Now()
c.reqCount++
c.mu.Unlock()
// Create a new client from the connection
client := pb.NewKevoServiceClient(c.conn)
// Execute the provided function with the client
err := fn(client)
// Update metrics if there was an error
if err != nil {
c.mu.Lock()
c.errCount++
c.mu.Unlock()
}
return err
}
// Close closes the gRPC connection
func (c *GRPCConnection) Close() error {
return c.conn.Close()
}
// Address returns the endpoint address
func (c *GRPCConnection) Address() string {
return c.address
}
// Status returns the current connection status
func (c *GRPCConnection) Status() transport.ConnectionStatus {
c.mu.RLock()
defer c.mu.RUnlock()
// Check the connection state
isConnected := c.conn != nil
return transport.ConnectionStatus{
Connected: isConnected,
LastActivity: c.lastUsed,
ErrorCount: c.errCount,
RequestCount: c.reqCount,
}
}
// GRPCTransportOptions configuration for gRPC transport
type GRPCTransportOptions struct {
ListenAddr string
TLSConfig *tls.Config
ConnectionTimeout time.Duration
DialTimeout time.Duration
KeepAliveTime time.Duration
KeepAliveTimeout time.Duration
MaxConnectionIdle time.Duration
MaxConnectionAge time.Duration
MaxPoolConnections int
}

View File

@ -4,7 +4,7 @@ import (
"bytes"
"sync"
"github.com/jeremytregunna/kevo/pkg/common/iterator"
"github.com/KevoDB/kevo/pkg/common/iterator"
)
// HierarchicalIterator implements an iterator that follows the LSM-tree hierarchy

View File

@ -5,7 +5,7 @@ import (
"sync/atomic"
"time"
"github.com/jeremytregunna/kevo/pkg/config"
"github.com/KevoDB/kevo/pkg/config"
)
// MemTablePool manages a pool of MemTables

View File

@ -4,7 +4,7 @@ import (
"testing"
"time"
"github.com/jeremytregunna/kevo/pkg/config"
"github.com/KevoDB/kevo/pkg/config"
)
func createTestConfig() *config.Config {

View File

@ -5,7 +5,7 @@ import (
"sync/atomic"
"time"
"github.com/jeremytregunna/kevo/pkg/wal"
"github.com/KevoDB/kevo/pkg/wal"
)
// MemTable is an in-memory table that stores key-value pairs

View File

@ -4,7 +4,7 @@ import (
"testing"
"time"
"github.com/jeremytregunna/kevo/pkg/wal"
"github.com/KevoDB/kevo/pkg/wal"
)
func TestMemTableBasicOperations(t *testing.T) {

View File

@ -3,8 +3,8 @@ package memtable
import (
"fmt"
"github.com/jeremytregunna/kevo/pkg/config"
"github.com/jeremytregunna/kevo/pkg/wal"
"github.com/KevoDB/kevo/pkg/config"
"github.com/KevoDB/kevo/pkg/wal"
)
// RecoveryOptions contains options for MemTable recovery

View File

@ -4,8 +4,8 @@ import (
"os"
"testing"
"github.com/jeremytregunna/kevo/pkg/config"
"github.com/jeremytregunna/kevo/pkg/wal"
"github.com/KevoDB/kevo/pkg/config"
"github.com/KevoDB/kevo/pkg/wal"
)
func setupTestWAL(t *testing.T) (string, *wal.WAL, func()) {

View File

@ -5,7 +5,7 @@ import (
"fmt"
"sync"
"github.com/jeremytregunna/kevo/pkg/sstable/block"
"github.com/KevoDB/kevo/pkg/sstable/block"
)
// Iterator iterates over key-value pairs in an SSTable

View File

@ -7,8 +7,8 @@ import (
"os"
"sync"
"github.com/jeremytregunna/kevo/pkg/sstable/block"
"github.com/jeremytregunna/kevo/pkg/sstable/footer"
"github.com/KevoDB/kevo/pkg/sstable/block"
"github.com/KevoDB/kevo/pkg/sstable/footer"
)
// IOManager handles file I/O operations for SSTable

View File

@ -3,7 +3,7 @@ package sstable
import (
"errors"
"github.com/jeremytregunna/kevo/pkg/sstable/block"
"github.com/KevoDB/kevo/pkg/sstable/block"
)
const (

View File

@ -7,8 +7,8 @@ import (
"os"
"path/filepath"
"github.com/jeremytregunna/kevo/pkg/sstable/block"
"github.com/jeremytregunna/kevo/pkg/sstable/footer"
"github.com/KevoDB/kevo/pkg/sstable/block"
"github.com/KevoDB/kevo/pkg/sstable/footer"
)
// FileManager handles file operations for SSTable writing

View File

@ -1,7 +1,7 @@
package transaction
import (
"github.com/jeremytregunna/kevo/pkg/engine"
"github.com/KevoDB/kevo/pkg/engine"
)
// TransactionCreatorImpl implements the engine.TransactionCreator interface

View File

@ -4,9 +4,9 @@ import (
"fmt"
"os"
"github.com/jeremytregunna/kevo/pkg/engine"
"github.com/jeremytregunna/kevo/pkg/transaction"
"github.com/jeremytregunna/kevo/pkg/wal"
"github.com/KevoDB/kevo/pkg/engine"
"github.com/KevoDB/kevo/pkg/transaction"
"github.com/KevoDB/kevo/pkg/wal"
)
// Disable all logs in tests

View File

@ -1,7 +1,7 @@
package transaction
import (
"github.com/jeremytregunna/kevo/pkg/common/iterator"
"github.com/KevoDB/kevo/pkg/common/iterator"
)
// TransactionMode defines the transaction access mode (ReadOnly or ReadWrite)

View File

@ -5,7 +5,7 @@ import (
"os"
"testing"
"github.com/jeremytregunna/kevo/pkg/engine"
"github.com/KevoDB/kevo/pkg/engine"
)
func setupTestEngine(t *testing.T) (*engine.Engine, string) {

View File

@ -6,10 +6,10 @@ import (
"sync"
"sync/atomic"
"github.com/jeremytregunna/kevo/pkg/common/iterator"
"github.com/jeremytregunna/kevo/pkg/engine"
"github.com/jeremytregunna/kevo/pkg/transaction/txbuffer"
"github.com/jeremytregunna/kevo/pkg/wal"
"github.com/KevoDB/kevo/pkg/common/iterator"
"github.com/KevoDB/kevo/pkg/engine"
"github.com/KevoDB/kevo/pkg/transaction/txbuffer"
"github.com/KevoDB/kevo/pkg/wal"
)
// Common errors for transaction operations

View File

@ -5,7 +5,7 @@ import (
"os"
"testing"
"github.com/jeremytregunna/kevo/pkg/engine"
"github.com/KevoDB/kevo/pkg/engine"
)
func setupTest(t *testing.T) (*engine.Engine, func()) {

100
pkg/transport/common.go Normal file
View File

@ -0,0 +1,100 @@
package transport
import (
"errors"
)
// Standard request/response type constants
const (
TypeGet = "get"
TypePut = "put"
TypeDelete = "delete"
TypeBatchWrite = "batch_write"
TypeScan = "scan"
TypeBeginTx = "begin_tx"
TypeCommitTx = "commit_tx"
TypeRollbackTx = "rollback_tx"
TypeTxGet = "tx_get"
TypeTxPut = "tx_put"
TypeTxDelete = "tx_delete"
TypeTxScan = "tx_scan"
TypeGetStats = "get_stats"
TypeCompact = "compact"
TypeError = "error"
)
// Common errors
var (
ErrInvalidRequest = errors.New("invalid request")
ErrInvalidPayload = errors.New("invalid payload")
ErrNotConnected = errors.New("not connected to server")
ErrTimeout = errors.New("operation timed out")
)
// BasicRequest implements the Request interface
type BasicRequest struct {
RequestType string
RequestData []byte
}
// Type returns the type of the request
func (r *BasicRequest) Type() string {
return r.RequestType
}
// Payload returns the payload of the request
func (r *BasicRequest) Payload() []byte {
return r.RequestData
}
// NewRequest creates a new request with the given type and payload
func NewRequest(requestType string, data []byte) Request {
return &BasicRequest{
RequestType: requestType,
RequestData: data,
}
}
// BasicResponse implements the Response interface
type BasicResponse struct {
ResponseType string
ResponseData []byte
ResponseErr error
}
// Type returns the type of the response
func (r *BasicResponse) Type() string {
return r.ResponseType
}
// Payload returns the payload of the response
func (r *BasicResponse) Payload() []byte {
return r.ResponseData
}
// Error returns any error associated with the response
func (r *BasicResponse) Error() error {
return r.ResponseErr
}
// NewResponse creates a new response with the given type, payload, and error
func NewResponse(responseType string, data []byte, err error) Response {
return &BasicResponse{
ResponseType: responseType,
ResponseData: data,
ResponseErr: err,
}
}
// NewErrorResponse creates a new error response
func NewErrorResponse(err error) Response {
var msg []byte
if err != nil {
msg = []byte(err.Error())
}
return &BasicResponse{
ResponseType: TypeError,
ResponseData: msg,
ResponseErr: err,
}
}

View File

@ -0,0 +1,87 @@
package transport
import (
"errors"
"testing"
)
func TestBasicRequest(t *testing.T) {
// Test creating a request
payload := []byte("test payload")
req := NewRequest(TypeGet, payload)
// Test Type method
if req.Type() != TypeGet {
t.Errorf("Expected type %s, got %s", TypeGet, req.Type())
}
// Test Payload method
if string(req.Payload()) != string(payload) {
t.Errorf("Expected payload %s, got %s", string(payload), string(req.Payload()))
}
}
func TestBasicResponse(t *testing.T) {
// Test creating a response with no error
payload := []byte("test response")
resp := NewResponse(TypeGet, payload, nil)
// Test Type method
if resp.Type() != TypeGet {
t.Errorf("Expected type %s, got %s", TypeGet, resp.Type())
}
// Test Payload method
if string(resp.Payload()) != string(payload) {
t.Errorf("Expected payload %s, got %s", string(payload), string(resp.Payload()))
}
// Test Error method
if resp.Error() != nil {
t.Errorf("Expected nil error, got %v", resp.Error())
}
// Test creating a response with an error
testErr := errors.New("test error")
resp = NewResponse(TypeGet, payload, testErr)
if resp.Error() != testErr {
t.Errorf("Expected error %v, got %v", testErr, resp.Error())
}
}
func TestNewErrorResponse(t *testing.T) {
// Test creating an error response
testErr := errors.New("test error")
resp := NewErrorResponse(testErr)
// Test Type method
if resp.Type() != TypeError {
t.Errorf("Expected type %s, got %s", TypeError, resp.Type())
}
// Test Payload method - should contain error message
if string(resp.Payload()) != testErr.Error() {
t.Errorf("Expected payload %s, got %s", testErr.Error(), string(resp.Payload()))
}
// Test Error method
if resp.Error() != testErr {
t.Errorf("Expected error %v, got %v", testErr, resp.Error())
}
// Test with nil error
resp = NewErrorResponse(nil)
if resp.Type() != TypeError {
t.Errorf("Expected type %s, got %s", TypeError, resp.Type())
}
if len(resp.Payload()) != 0 {
t.Errorf("Expected empty payload, got %s", string(resp.Payload()))
}
if resp.Error() != nil {
t.Errorf("Expected nil error, got %v", resp.Error())
}
}

149
pkg/transport/interface.go Normal file
View File

@ -0,0 +1,149 @@
package transport
import (
"context"
"time"
)
// CompressionType defines the compression algorithm used
type CompressionType string
// Standard compression options
const (
CompressionNone CompressionType = "none"
CompressionGzip CompressionType = "gzip"
CompressionSnappy CompressionType = "snappy"
)
// RetryPolicy defines how retries are handled
type RetryPolicy struct {
MaxRetries int
InitialBackoff time.Duration
MaxBackoff time.Duration
BackoffFactor float64
Jitter float64
}
// TransportOptions contains common configuration across all transport types
type TransportOptions struct {
Timeout time.Duration
RetryPolicy RetryPolicy
Compression CompressionType
MaxMessageSize int
TLSEnabled bool
CertFile string
KeyFile string
CAFile string
}
// TransportStatus contains information about the current transport state
type TransportStatus struct {
Connected bool
LastConnected time.Time
LastError error
BytesSent uint64
BytesReceived uint64
RTT time.Duration
}
// Request represents a generic request to the transport layer
type Request interface {
// Type returns the type of request
Type() string
// Payload returns the payload of the request
Payload() []byte
}
// Response represents a generic response from the transport layer
type Response interface {
// Type returns the type of response
Type() string
// Payload returns the payload of the response
Payload() []byte
// Error returns any error associated with the response
Error() error
}
// Stream represents a bidirectional stream of messages
type Stream interface {
// Send sends a request over the stream
Send(request Request) error
// Recv receives a response from the stream
Recv() (Response, error)
// Close closes the stream
Close() error
}
// Client defines the client interface for any transport implementation
type Client interface {
// Connect establishes a connection to the server
Connect(ctx context.Context) error
// Close closes the connection
Close() error
// IsConnected returns whether the client is connected
IsConnected() bool
// Status returns the current status of the connection
Status() TransportStatus
// Send sends a request and waits for a response
Send(ctx context.Context, request Request) (Response, error)
// Stream opens a bidirectional stream
Stream(ctx context.Context) (Stream, error)
}
// RequestHandler processes incoming requests
type RequestHandler interface {
// HandleRequest processes a request and returns a response
HandleRequest(ctx context.Context, request Request) (Response, error)
// HandleStream processes a bidirectional stream
HandleStream(stream Stream) error
}
// Server defines the server interface for any transport implementation
type Server interface {
// Start starts the server and returns immediately
Start() error
// Serve starts the server and blocks until it's stopped
Serve() error
// Stop stops the server gracefully
Stop(ctx context.Context) error
// SetRequestHandler sets the handler for incoming requests
SetRequestHandler(handler RequestHandler)
}
// ClientFactory creates a new client
type ClientFactory func(endpoint string, options TransportOptions) (Client, error)
// ServerFactory creates a new server
type ServerFactory func(address string, options TransportOptions) (Server, error)
// Registry keeps track of available transport implementations
type Registry interface {
// RegisterClient adds a new client implementation to the registry
RegisterClient(name string, factory ClientFactory)
// RegisterServer adds a new server implementation to the registry
RegisterServer(name string, factory ServerFactory)
// CreateClient instantiates a client by name
CreateClient(name, endpoint string, options TransportOptions) (Client, error)
// CreateServer instantiates a server by name
CreateServer(name, address string, options TransportOptions) (Server, error)
// ListTransports returns all available transport names
ListTransports() []string
}

136
pkg/transport/metrics.go Normal file
View File

@ -0,0 +1,136 @@
package transport
import (
"sync"
"sync/atomic"
"time"
)
// MetricsCollector collects metrics for transport operations
type MetricsCollector interface {
// RecordRequest records metrics for a request
RecordRequest(requestType string, startTime time.Time, err error)
// RecordSend records metrics for bytes sent
RecordSend(bytes int)
// RecordReceive records metrics for bytes received
RecordReceive(bytes int)
// RecordConnection records a connection event
RecordConnection(successful bool)
// GetMetrics returns the current metrics
GetMetrics() Metrics
}
// Metrics represents transport metrics
type Metrics struct {
TotalRequests uint64
SuccessfulRequests uint64
FailedRequests uint64
BytesSent uint64
BytesReceived uint64
Connections uint64
ConnectionFailures uint64
AvgLatencyByType map[string]time.Duration
}
// BasicMetricsCollector is a simple implementation of MetricsCollector
type BasicMetricsCollector struct {
mu sync.RWMutex
totalRequests uint64
successfulRequests uint64
failedRequests uint64
bytesSent uint64
bytesReceived uint64
connections uint64
connectionFailures uint64
// Track average latency and count for each request type
avgLatencyByType map[string]time.Duration
requestCountByType map[string]uint64
}
// NewMetricsCollector creates a new metrics collector
func NewMetricsCollector() MetricsCollector {
return &BasicMetricsCollector{
avgLatencyByType: make(map[string]time.Duration),
requestCountByType: make(map[string]uint64),
}
}
// RecordRequest records metrics for a request
func (c *BasicMetricsCollector) RecordRequest(requestType string, startTime time.Time, err error) {
atomic.AddUint64(&c.totalRequests, 1)
if err == nil {
atomic.AddUint64(&c.successfulRequests, 1)
} else {
atomic.AddUint64(&c.failedRequests, 1)
}
// Update average latency for request type
latency := time.Since(startTime)
c.mu.Lock()
defer c.mu.Unlock()
currentAvg, exists := c.avgLatencyByType[requestType]
currentCount, _ := c.requestCountByType[requestType]
if exists {
// Update running average - the common case for better branch prediction
// new_avg = (old_avg * count + new_value) / (count + 1)
totalDuration := currentAvg * time.Duration(currentCount) + latency
newCount := currentCount + 1
c.avgLatencyByType[requestType] = totalDuration / time.Duration(newCount)
c.requestCountByType[requestType] = newCount
} else {
// First request of this type
c.avgLatencyByType[requestType] = latency
c.requestCountByType[requestType] = 1
}
}
// RecordSend records metrics for bytes sent
func (c *BasicMetricsCollector) RecordSend(bytes int) {
atomic.AddUint64(&c.bytesSent, uint64(bytes))
}
// RecordReceive records metrics for bytes received
func (c *BasicMetricsCollector) RecordReceive(bytes int) {
atomic.AddUint64(&c.bytesReceived, uint64(bytes))
}
// RecordConnection records a connection event
func (c *BasicMetricsCollector) RecordConnection(successful bool) {
if successful {
atomic.AddUint64(&c.connections, 1)
} else {
atomic.AddUint64(&c.connectionFailures, 1)
}
}
// GetMetrics returns the current metrics
func (c *BasicMetricsCollector) GetMetrics() Metrics {
c.mu.RLock()
defer c.mu.RUnlock()
// Create a copy of the average latency map
avgLatencyByType := make(map[string]time.Duration, len(c.avgLatencyByType))
for k, v := range c.avgLatencyByType {
avgLatencyByType[k] = v
}
return Metrics{
TotalRequests: atomic.LoadUint64(&c.totalRequests),
SuccessfulRequests: atomic.LoadUint64(&c.successfulRequests),
FailedRequests: atomic.LoadUint64(&c.failedRequests),
BytesSent: atomic.LoadUint64(&c.bytesSent),
BytesReceived: atomic.LoadUint64(&c.bytesReceived),
Connections: atomic.LoadUint64(&c.connections),
ConnectionFailures: atomic.LoadUint64(&c.connectionFailures),
AvgLatencyByType: avgLatencyByType,
}
}

View File

@ -0,0 +1,111 @@
package transport
import (
"context"
"sync/atomic"
"time"
)
// Metrics struct extensions for server metrics
type ServerMetrics struct {
Metrics
ServerStarted uint64
ServerErrored uint64
ServerStopped uint64
}
// Connection represents a connection to a remote endpoint
type Connection interface {
// Execute executes a function with the underlying connection
Execute(func(interface{}) error) error
// Close closes the connection
Close() error
// Address returns the remote endpoint address
Address() string
// Status returns the connection status
Status() ConnectionStatus
}
// ConnectionStatus represents the status of a connection
type ConnectionStatus struct {
Connected bool
LastActivity time.Time
ErrorCount int
RequestCount int
LatencyAvg time.Duration
}
// TransportManager is an interface for managing transport layer operations
type TransportManager interface {
// Start starts the transport manager
Start() error
// Stop stops the transport manager
Stop(ctx context.Context) error
// Connect connects to a remote endpoint
Connect(ctx context.Context, address string) (Connection, error)
}
// ExtendedMetricsCollector extends the basic metrics collector with server metrics
type ExtendedMetricsCollector struct {
BasicMetricsCollector
serverStarted uint64
serverErrored uint64
serverStopped uint64
}
// NewMetrics creates a new extended metrics collector with a given transport name
func NewMetrics(transport string) *ExtendedMetricsCollector {
return &ExtendedMetricsCollector{
BasicMetricsCollector: BasicMetricsCollector{
avgLatencyByType: make(map[string]time.Duration),
requestCountByType: make(map[string]uint64),
},
}
}
// ServerStarted increments the server started counter
func (c *ExtendedMetricsCollector) ServerStarted() {
atomic.AddUint64(&c.serverStarted, 1)
}
// ServerErrored increments the server errored counter
func (c *ExtendedMetricsCollector) ServerErrored() {
atomic.AddUint64(&c.serverErrored, 1)
}
// ServerStopped increments the server stopped counter
func (c *ExtendedMetricsCollector) ServerStopped() {
atomic.AddUint64(&c.serverStopped, 1)
}
// ConnectionOpened records a connection opened event
func (c *ExtendedMetricsCollector) ConnectionOpened() {
atomic.AddUint64(&c.connections, 1)
}
// ConnectionFailed records a connection failed event
func (c *ExtendedMetricsCollector) ConnectionFailed() {
atomic.AddUint64(&c.connectionFailures, 1)
}
// ConnectionClosed records a connection closed event
func (c *ExtendedMetricsCollector) ConnectionClosed() {
// No specific counter for closed connections yet
}
// GetExtendedMetrics returns the current extended metrics
func (c *ExtendedMetricsCollector) GetExtendedMetrics() ServerMetrics {
baseMetrics := c.GetMetrics()
return ServerMetrics{
Metrics: baseMetrics,
ServerStarted: atomic.LoadUint64(&c.serverStarted),
ServerErrored: atomic.LoadUint64(&c.serverErrored),
ServerStopped: atomic.LoadUint64(&c.serverStopped),
}
}

View File

@ -0,0 +1,101 @@
package transport
import (
"errors"
"testing"
"time"
)
func TestBasicMetricsCollector(t *testing.T) {
collector := NewMetricsCollector()
// Test initial state
metrics := collector.GetMetrics()
if metrics.TotalRequests != 0 ||
metrics.SuccessfulRequests != 0 ||
metrics.FailedRequests != 0 ||
metrics.BytesSent != 0 ||
metrics.BytesReceived != 0 ||
metrics.Connections != 0 ||
metrics.ConnectionFailures != 0 ||
len(metrics.AvgLatencyByType) != 0 {
t.Errorf("Initial metrics not initialized correctly: %+v", metrics)
}
// Test recording successful request
startTime := time.Now().Add(-100 * time.Millisecond) // Simulate 100ms request
collector.RecordRequest("get", startTime, nil)
metrics = collector.GetMetrics()
if metrics.TotalRequests != 1 {
t.Errorf("Expected TotalRequests to be 1, got %d", metrics.TotalRequests)
}
if metrics.SuccessfulRequests != 1 {
t.Errorf("Expected SuccessfulRequests to be 1, got %d", metrics.SuccessfulRequests)
}
if metrics.FailedRequests != 0 {
t.Errorf("Expected FailedRequests to be 0, got %d", metrics.FailedRequests)
}
// Check average latency
if avgLatency, exists := metrics.AvgLatencyByType["get"]; !exists {
t.Error("Expected 'get' latency to exist")
} else if avgLatency < 100*time.Millisecond {
t.Errorf("Expected latency to be at least 100ms, got %v", avgLatency)
}
// Test recording failed request
startTime = time.Now().Add(-200 * time.Millisecond) // Simulate 200ms request
collector.RecordRequest("get", startTime, errors.New("test error"))
metrics = collector.GetMetrics()
if metrics.TotalRequests != 2 {
t.Errorf("Expected TotalRequests to be 2, got %d", metrics.TotalRequests)
}
if metrics.SuccessfulRequests != 1 {
t.Errorf("Expected SuccessfulRequests to be 1, got %d", metrics.SuccessfulRequests)
}
if metrics.FailedRequests != 1 {
t.Errorf("Expected FailedRequests to be 1, got %d", metrics.FailedRequests)
}
// Test average latency calculation for multiple requests
startTime = time.Now().Add(-300 * time.Millisecond)
collector.RecordRequest("put", startTime, nil)
startTime = time.Now().Add(-500 * time.Millisecond)
collector.RecordRequest("put", startTime, nil)
metrics = collector.GetMetrics()
avgPutLatency := metrics.AvgLatencyByType["put"]
// Expected avg is around (300ms + 500ms) / 2 = 400ms
if avgPutLatency < 390*time.Millisecond || avgPutLatency > 410*time.Millisecond {
t.Errorf("Expected average 'put' latency to be around 400ms, got %v", avgPutLatency)
}
// Test byte tracking
collector.RecordSend(1000)
collector.RecordReceive(2000)
metrics = collector.GetMetrics()
if metrics.BytesSent != 1000 {
t.Errorf("Expected BytesSent to be 1000, got %d", metrics.BytesSent)
}
if metrics.BytesReceived != 2000 {
t.Errorf("Expected BytesReceived to be 2000, got %d", metrics.BytesReceived)
}
// Test connection tracking
collector.RecordConnection(true)
collector.RecordConnection(false)
collector.RecordConnection(true)
metrics = collector.GetMetrics()
if metrics.Connections != 2 {
t.Errorf("Expected Connections to be 2, got %d", metrics.Connections)
}
if metrics.ConnectionFailures != 1 {
t.Errorf("Expected ConnectionFailures to be 1, got %d", metrics.ConnectionFailures)
}
}

22
pkg/transport/network.go Normal file
View File

@ -0,0 +1,22 @@
package transport
import (
"crypto/tls"
"net"
)
// CreateListener creates a network listener with optional TLS
func CreateListener(network, address string, tlsConfig *tls.Config) (net.Listener, error) {
// Create the listener
listener, err := net.Listen(network, address)
if err != nil {
return nil, err
}
// If TLS is configured, wrap the listener
if tlsConfig != nil {
listener = tls.NewListener(listener, tlsConfig)
}
return listener, nil
}

114
pkg/transport/registry.go Normal file
View File

@ -0,0 +1,114 @@
package transport
import (
"fmt"
"sync"
)
// registry implements the Registry interface
type registry struct {
mu sync.RWMutex
clientFactories map[string]ClientFactory
serverFactories map[string]ServerFactory
}
// NewRegistry creates a new transport registry
func NewRegistry() Registry {
return &registry{
clientFactories: make(map[string]ClientFactory),
serverFactories: make(map[string]ServerFactory),
}
}
// DefaultRegistry is the default global registry instance
var DefaultRegistry = NewRegistry()
// RegisterClient adds a new client implementation to the registry
func (r *registry) RegisterClient(name string, factory ClientFactory) {
r.mu.Lock()
defer r.mu.Unlock()
r.clientFactories[name] = factory
}
// RegisterServer adds a new server implementation to the registry
func (r *registry) RegisterServer(name string, factory ServerFactory) {
r.mu.Lock()
defer r.mu.Unlock()
r.serverFactories[name] = factory
}
// CreateClient instantiates a client by name
func (r *registry) CreateClient(name, endpoint string, options TransportOptions) (Client, error) {
r.mu.RLock()
factory, exists := r.clientFactories[name]
r.mu.RUnlock()
if !exists {
return nil, fmt.Errorf("transport client %q not registered", name)
}
return factory(endpoint, options)
}
// CreateServer instantiates a server by name
func (r *registry) CreateServer(name, address string, options TransportOptions) (Server, error) {
r.mu.RLock()
factory, exists := r.serverFactories[name]
r.mu.RUnlock()
if !exists {
return nil, fmt.Errorf("transport server %q not registered", name)
}
return factory(address, options)
}
// ListTransports returns all available transport names
func (r *registry) ListTransports() []string {
r.mu.RLock()
defer r.mu.RUnlock()
// Get unique transport names
names := make(map[string]struct{})
for name := range r.clientFactories {
names[name] = struct{}{}
}
for name := range r.serverFactories {
names[name] = struct{}{}
}
// Convert to slice
result := make([]string, 0, len(names))
for name := range names {
result = append(result, name)
}
return result
}
// Helper functions for global registry
// RegisterClientTransport registers a client transport with the default registry
func RegisterClientTransport(name string, factory ClientFactory) {
DefaultRegistry.RegisterClient(name, factory)
}
// RegisterServerTransport registers a server transport with the default registry
func RegisterServerTransport(name string, factory ServerFactory) {
DefaultRegistry.RegisterServer(name, factory)
}
// GetClient creates a client using the default registry
func GetClient(name, endpoint string, options TransportOptions) (Client, error) {
return DefaultRegistry.CreateClient(name, endpoint, options)
}
// GetServer creates a server using the default registry
func GetServer(name, address string, options TransportOptions) (Server, error) {
return DefaultRegistry.CreateServer(name, address, options)
}
// AvailableTransports lists all available transports in the default registry
func AvailableTransports() []string {
return DefaultRegistry.ListTransports()
}

View File

@ -0,0 +1,162 @@
package transport
import (
"context"
"errors"
"testing"
"time"
)
// mockClient implements the Client interface for testing
type mockClient struct {
connected bool
endpoint string
options TransportOptions
}
func (m *mockClient) Connect(ctx context.Context) error {
m.connected = true
return nil
}
func (m *mockClient) Close() error {
m.connected = false
return nil
}
func (m *mockClient) IsConnected() bool {
return m.connected
}
func (m *mockClient) Status() TransportStatus {
return TransportStatus{
Connected: m.connected,
}
}
func (m *mockClient) Send(ctx context.Context, request Request) (Response, error) {
if !m.connected {
return nil, ErrNotConnected
}
return &BasicResponse{
ResponseType: request.Type() + "_response",
ResponseData: []byte("mock response"),
}, nil
}
func (m *mockClient) Stream(ctx context.Context) (Stream, error) {
if !m.connected {
return nil, ErrNotConnected
}
return nil, errors.New("streaming not implemented in mock")
}
// mockClientFactory creates a new mock client
func mockClientFactory(endpoint string, options TransportOptions) (Client, error) {
return &mockClient{
endpoint: endpoint,
options: options,
}, nil
}
// mockServer implements the Server interface for testing
type mockServer struct {
started bool
address string
options TransportOptions
handler RequestHandler
}
func (m *mockServer) Start() error {
m.started = true
return nil
}
func (m *mockServer) Serve() error {
m.started = true
return nil
}
func (m *mockServer) Stop(ctx context.Context) error {
m.started = false
return nil
}
func (m *mockServer) SetRequestHandler(handler RequestHandler) {
m.handler = handler
}
// mockServerFactory creates a new mock server
func mockServerFactory(address string, options TransportOptions) (Server, error) {
return &mockServer{
address: address,
options: options,
}, nil
}
// TestRegistry tests the transport registry
func TestRegistry(t *testing.T) {
registry := NewRegistry()
// Register transports
registry.RegisterClient("mock", mockClientFactory)
registry.RegisterServer("mock", mockServerFactory)
// Test listing transports
transports := registry.ListTransports()
if len(transports) != 1 || transports[0] != "mock" {
t.Errorf("Expected [mock], got %v", transports)
}
// Test creating client
client, err := registry.CreateClient("mock", "localhost:8080", TransportOptions{
Timeout: 5 * time.Second,
})
if err != nil {
t.Fatalf("Failed to create client: %v", err)
}
// Test client methods
if client.IsConnected() {
t.Error("Expected client to be disconnected initially")
}
err = client.Connect(context.Background())
if err != nil {
t.Fatalf("Failed to connect: %v", err)
}
if !client.IsConnected() {
t.Error("Expected client to be connected after Connect()")
}
// Test server creation
server, err := registry.CreateServer("mock", "localhost:8080", TransportOptions{
Timeout: 5 * time.Second,
})
if err != nil {
t.Fatalf("Failed to create server: %v", err)
}
// Test server methods
err = server.Start()
if err != nil {
t.Fatalf("Failed to start server: %v", err)
}
mockServer := server.(*mockServer)
if !mockServer.started {
t.Error("Expected server to be started")
}
// Test non-existent transport
_, err = registry.CreateClient("nonexistent", "", TransportOptions{})
if err == nil {
t.Error("Expected error creating non-existent client")
}
_, err = registry.CreateServer("nonexistent", "", TransportOptions{})
if err == nil {
t.Error("Expected error creating non-existent server")
}
}

View File

@ -11,7 +11,7 @@ import (
"sync"
"time"
"github.com/jeremytregunna/kevo/pkg/config"
"github.com/KevoDB/kevo/pkg/config"
)
const (

View File

@ -8,7 +8,7 @@ import (
"path/filepath"
"testing"
"github.com/jeremytregunna/kevo/pkg/config"
"github.com/KevoDB/kevo/pkg/config"
)
func createTestConfig() *config.Config {

1771
proto/kevo/service.pb.go Normal file

File diff suppressed because it is too large Load Diff

182
proto/kevo/service.proto Normal file
View File

@ -0,0 +1,182 @@
syntax = "proto3";
package kevo;
option go_package = "github.com/jeremytregunna/kevo/pkg/grpc/proto;proto";
service KevoService {
// Key-Value Operations
rpc Get(GetRequest) returns (GetResponse);
rpc Put(PutRequest) returns (PutResponse);
rpc Delete(DeleteRequest) returns (DeleteResponse);
// Batch Operations
rpc BatchWrite(BatchWriteRequest) returns (BatchWriteResponse);
// Iterator Operations
rpc Scan(ScanRequest) returns (stream ScanResponse);
// Transaction Operations
rpc BeginTransaction(BeginTransactionRequest) returns (BeginTransactionResponse);
rpc CommitTransaction(CommitTransactionRequest) returns (CommitTransactionResponse);
rpc RollbackTransaction(RollbackTransactionRequest) returns (RollbackTransactionResponse);
// Transaction Operations within an active transaction
rpc TxGet(TxGetRequest) returns (TxGetResponse);
rpc TxPut(TxPutRequest) returns (TxPutResponse);
rpc TxDelete(TxDeleteRequest) returns (TxDeleteResponse);
rpc TxScan(TxScanRequest) returns (stream TxScanResponse);
// Administrative Operations
rpc GetStats(GetStatsRequest) returns (GetStatsResponse);
rpc Compact(CompactRequest) returns (CompactResponse);
}
// Basic message types
message GetRequest {
bytes key = 1;
}
message GetResponse {
bytes value = 1;
bool found = 2;
}
message PutRequest {
bytes key = 1;
bytes value = 2;
bool sync = 3;
}
message PutResponse {
bool success = 1;
}
message DeleteRequest {
bytes key = 1;
bool sync = 2;
}
message DeleteResponse {
bool success = 1;
}
// Batch operations
message BatchWriteRequest {
repeated Operation operations = 1;
bool sync = 2;
}
message Operation {
enum Type {
PUT = 0;
DELETE = 1;
}
Type type = 1;
bytes key = 2;
bytes value = 3; // Only used for PUT
}
message BatchWriteResponse {
bool success = 1;
}
// Iterator operations
message ScanRequest {
bytes prefix = 1;
bytes start_key = 2;
bytes end_key = 3;
int32 limit = 4;
}
message ScanResponse {
bytes key = 1;
bytes value = 2;
}
// Transaction operations
message BeginTransactionRequest {
bool read_only = 1;
}
message BeginTransactionResponse {
string transaction_id = 1;
}
message CommitTransactionRequest {
string transaction_id = 1;
}
message CommitTransactionResponse {
bool success = 1;
}
message RollbackTransactionRequest {
string transaction_id = 1;
}
message RollbackTransactionResponse {
bool success = 1;
}
message TxGetRequest {
string transaction_id = 1;
bytes key = 2;
}
message TxGetResponse {
bytes value = 1;
bool found = 2;
}
message TxPutRequest {
string transaction_id = 1;
bytes key = 2;
bytes value = 3;
}
message TxPutResponse {
bool success = 1;
}
message TxDeleteRequest {
string transaction_id = 1;
bytes key = 2;
}
message TxDeleteResponse {
bool success = 1;
}
message TxScanRequest {
string transaction_id = 1;
bytes prefix = 2;
bytes start_key = 3;
bytes end_key = 4;
int32 limit = 5;
}
message TxScanResponse {
bytes key = 1;
bytes value = 2;
}
// Administrative operations
message GetStatsRequest {}
message GetStatsResponse {
int64 key_count = 1;
int64 storage_size = 2;
int32 memtable_count = 3;
int32 sstable_count = 4;
double write_amplification = 5;
double read_amplification = 6;
}
message CompactRequest {
bool force = 1;
}
message CompactResponse {
bool success = 1;
}

View File

@ -0,0 +1,634 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.5.1
// - protoc v3.20.3
// source: proto/kevo/service.proto
package proto
import (
context "context"
grpc "google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
status "google.golang.org/grpc/status"
)
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
// Requires gRPC-Go v1.64.0 or later.
const _ = grpc.SupportPackageIsVersion9
const (
KevoService_Get_FullMethodName = "/kevo.KevoService/Get"
KevoService_Put_FullMethodName = "/kevo.KevoService/Put"
KevoService_Delete_FullMethodName = "/kevo.KevoService/Delete"
KevoService_BatchWrite_FullMethodName = "/kevo.KevoService/BatchWrite"
KevoService_Scan_FullMethodName = "/kevo.KevoService/Scan"
KevoService_BeginTransaction_FullMethodName = "/kevo.KevoService/BeginTransaction"
KevoService_CommitTransaction_FullMethodName = "/kevo.KevoService/CommitTransaction"
KevoService_RollbackTransaction_FullMethodName = "/kevo.KevoService/RollbackTransaction"
KevoService_TxGet_FullMethodName = "/kevo.KevoService/TxGet"
KevoService_TxPut_FullMethodName = "/kevo.KevoService/TxPut"
KevoService_TxDelete_FullMethodName = "/kevo.KevoService/TxDelete"
KevoService_TxScan_FullMethodName = "/kevo.KevoService/TxScan"
KevoService_GetStats_FullMethodName = "/kevo.KevoService/GetStats"
KevoService_Compact_FullMethodName = "/kevo.KevoService/Compact"
)
// KevoServiceClient is the client API for KevoService service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
type KevoServiceClient interface {
// Key-Value Operations
Get(ctx context.Context, in *GetRequest, opts ...grpc.CallOption) (*GetResponse, error)
Put(ctx context.Context, in *PutRequest, opts ...grpc.CallOption) (*PutResponse, error)
Delete(ctx context.Context, in *DeleteRequest, opts ...grpc.CallOption) (*DeleteResponse, error)
// Batch Operations
BatchWrite(ctx context.Context, in *BatchWriteRequest, opts ...grpc.CallOption) (*BatchWriteResponse, error)
// Iterator Operations
Scan(ctx context.Context, in *ScanRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[ScanResponse], error)
// Transaction Operations
BeginTransaction(ctx context.Context, in *BeginTransactionRequest, opts ...grpc.CallOption) (*BeginTransactionResponse, error)
CommitTransaction(ctx context.Context, in *CommitTransactionRequest, opts ...grpc.CallOption) (*CommitTransactionResponse, error)
RollbackTransaction(ctx context.Context, in *RollbackTransactionRequest, opts ...grpc.CallOption) (*RollbackTransactionResponse, error)
// Transaction Operations within an active transaction
TxGet(ctx context.Context, in *TxGetRequest, opts ...grpc.CallOption) (*TxGetResponse, error)
TxPut(ctx context.Context, in *TxPutRequest, opts ...grpc.CallOption) (*TxPutResponse, error)
TxDelete(ctx context.Context, in *TxDeleteRequest, opts ...grpc.CallOption) (*TxDeleteResponse, error)
TxScan(ctx context.Context, in *TxScanRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[TxScanResponse], error)
// Administrative Operations
GetStats(ctx context.Context, in *GetStatsRequest, opts ...grpc.CallOption) (*GetStatsResponse, error)
Compact(ctx context.Context, in *CompactRequest, opts ...grpc.CallOption) (*CompactResponse, error)
}
type kevoServiceClient struct {
cc grpc.ClientConnInterface
}
func NewKevoServiceClient(cc grpc.ClientConnInterface) KevoServiceClient {
return &kevoServiceClient{cc}
}
func (c *kevoServiceClient) Get(ctx context.Context, in *GetRequest, opts ...grpc.CallOption) (*GetResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(GetResponse)
err := c.cc.Invoke(ctx, KevoService_Get_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *kevoServiceClient) Put(ctx context.Context, in *PutRequest, opts ...grpc.CallOption) (*PutResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(PutResponse)
err := c.cc.Invoke(ctx, KevoService_Put_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *kevoServiceClient) Delete(ctx context.Context, in *DeleteRequest, opts ...grpc.CallOption) (*DeleteResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(DeleteResponse)
err := c.cc.Invoke(ctx, KevoService_Delete_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *kevoServiceClient) BatchWrite(ctx context.Context, in *BatchWriteRequest, opts ...grpc.CallOption) (*BatchWriteResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(BatchWriteResponse)
err := c.cc.Invoke(ctx, KevoService_BatchWrite_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *kevoServiceClient) Scan(ctx context.Context, in *ScanRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[ScanResponse], error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
stream, err := c.cc.NewStream(ctx, &KevoService_ServiceDesc.Streams[0], KevoService_Scan_FullMethodName, cOpts...)
if err != nil {
return nil, err
}
x := &grpc.GenericClientStream[ScanRequest, ScanResponse]{ClientStream: stream}
if err := x.ClientStream.SendMsg(in); err != nil {
return nil, err
}
if err := x.ClientStream.CloseSend(); err != nil {
return nil, err
}
return x, nil
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type KevoService_ScanClient = grpc.ServerStreamingClient[ScanResponse]
func (c *kevoServiceClient) BeginTransaction(ctx context.Context, in *BeginTransactionRequest, opts ...grpc.CallOption) (*BeginTransactionResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(BeginTransactionResponse)
err := c.cc.Invoke(ctx, KevoService_BeginTransaction_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *kevoServiceClient) CommitTransaction(ctx context.Context, in *CommitTransactionRequest, opts ...grpc.CallOption) (*CommitTransactionResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(CommitTransactionResponse)
err := c.cc.Invoke(ctx, KevoService_CommitTransaction_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *kevoServiceClient) RollbackTransaction(ctx context.Context, in *RollbackTransactionRequest, opts ...grpc.CallOption) (*RollbackTransactionResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(RollbackTransactionResponse)
err := c.cc.Invoke(ctx, KevoService_RollbackTransaction_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *kevoServiceClient) TxGet(ctx context.Context, in *TxGetRequest, opts ...grpc.CallOption) (*TxGetResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(TxGetResponse)
err := c.cc.Invoke(ctx, KevoService_TxGet_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *kevoServiceClient) TxPut(ctx context.Context, in *TxPutRequest, opts ...grpc.CallOption) (*TxPutResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(TxPutResponse)
err := c.cc.Invoke(ctx, KevoService_TxPut_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *kevoServiceClient) TxDelete(ctx context.Context, in *TxDeleteRequest, opts ...grpc.CallOption) (*TxDeleteResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(TxDeleteResponse)
err := c.cc.Invoke(ctx, KevoService_TxDelete_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *kevoServiceClient) TxScan(ctx context.Context, in *TxScanRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[TxScanResponse], error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
stream, err := c.cc.NewStream(ctx, &KevoService_ServiceDesc.Streams[1], KevoService_TxScan_FullMethodName, cOpts...)
if err != nil {
return nil, err
}
x := &grpc.GenericClientStream[TxScanRequest, TxScanResponse]{ClientStream: stream}
if err := x.ClientStream.SendMsg(in); err != nil {
return nil, err
}
if err := x.ClientStream.CloseSend(); err != nil {
return nil, err
}
return x, nil
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type KevoService_TxScanClient = grpc.ServerStreamingClient[TxScanResponse]
func (c *kevoServiceClient) GetStats(ctx context.Context, in *GetStatsRequest, opts ...grpc.CallOption) (*GetStatsResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(GetStatsResponse)
err := c.cc.Invoke(ctx, KevoService_GetStats_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *kevoServiceClient) Compact(ctx context.Context, in *CompactRequest, opts ...grpc.CallOption) (*CompactResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(CompactResponse)
err := c.cc.Invoke(ctx, KevoService_Compact_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
// KevoServiceServer is the server API for KevoService service.
// All implementations must embed UnimplementedKevoServiceServer
// for forward compatibility.
type KevoServiceServer interface {
// Key-Value Operations
Get(context.Context, *GetRequest) (*GetResponse, error)
Put(context.Context, *PutRequest) (*PutResponse, error)
Delete(context.Context, *DeleteRequest) (*DeleteResponse, error)
// Batch Operations
BatchWrite(context.Context, *BatchWriteRequest) (*BatchWriteResponse, error)
// Iterator Operations
Scan(*ScanRequest, grpc.ServerStreamingServer[ScanResponse]) error
// Transaction Operations
BeginTransaction(context.Context, *BeginTransactionRequest) (*BeginTransactionResponse, error)
CommitTransaction(context.Context, *CommitTransactionRequest) (*CommitTransactionResponse, error)
RollbackTransaction(context.Context, *RollbackTransactionRequest) (*RollbackTransactionResponse, error)
// Transaction Operations within an active transaction
TxGet(context.Context, *TxGetRequest) (*TxGetResponse, error)
TxPut(context.Context, *TxPutRequest) (*TxPutResponse, error)
TxDelete(context.Context, *TxDeleteRequest) (*TxDeleteResponse, error)
TxScan(*TxScanRequest, grpc.ServerStreamingServer[TxScanResponse]) error
// Administrative Operations
GetStats(context.Context, *GetStatsRequest) (*GetStatsResponse, error)
Compact(context.Context, *CompactRequest) (*CompactResponse, error)
mustEmbedUnimplementedKevoServiceServer()
}
// UnimplementedKevoServiceServer must be embedded to have
// forward compatible implementations.
//
// NOTE: this should be embedded by value instead of pointer to avoid a nil
// pointer dereference when methods are called.
type UnimplementedKevoServiceServer struct{}
func (UnimplementedKevoServiceServer) Get(context.Context, *GetRequest) (*GetResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Get not implemented")
}
func (UnimplementedKevoServiceServer) Put(context.Context, *PutRequest) (*PutResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Put not implemented")
}
func (UnimplementedKevoServiceServer) Delete(context.Context, *DeleteRequest) (*DeleteResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Delete not implemented")
}
func (UnimplementedKevoServiceServer) BatchWrite(context.Context, *BatchWriteRequest) (*BatchWriteResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method BatchWrite not implemented")
}
func (UnimplementedKevoServiceServer) Scan(*ScanRequest, grpc.ServerStreamingServer[ScanResponse]) error {
return status.Errorf(codes.Unimplemented, "method Scan not implemented")
}
func (UnimplementedKevoServiceServer) BeginTransaction(context.Context, *BeginTransactionRequest) (*BeginTransactionResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method BeginTransaction not implemented")
}
func (UnimplementedKevoServiceServer) CommitTransaction(context.Context, *CommitTransactionRequest) (*CommitTransactionResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method CommitTransaction not implemented")
}
func (UnimplementedKevoServiceServer) RollbackTransaction(context.Context, *RollbackTransactionRequest) (*RollbackTransactionResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method RollbackTransaction not implemented")
}
func (UnimplementedKevoServiceServer) TxGet(context.Context, *TxGetRequest) (*TxGetResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method TxGet not implemented")
}
func (UnimplementedKevoServiceServer) TxPut(context.Context, *TxPutRequest) (*TxPutResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method TxPut not implemented")
}
func (UnimplementedKevoServiceServer) TxDelete(context.Context, *TxDeleteRequest) (*TxDeleteResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method TxDelete not implemented")
}
func (UnimplementedKevoServiceServer) TxScan(*TxScanRequest, grpc.ServerStreamingServer[TxScanResponse]) error {
return status.Errorf(codes.Unimplemented, "method TxScan not implemented")
}
func (UnimplementedKevoServiceServer) GetStats(context.Context, *GetStatsRequest) (*GetStatsResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method GetStats not implemented")
}
func (UnimplementedKevoServiceServer) Compact(context.Context, *CompactRequest) (*CompactResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Compact not implemented")
}
func (UnimplementedKevoServiceServer) mustEmbedUnimplementedKevoServiceServer() {}
func (UnimplementedKevoServiceServer) testEmbeddedByValue() {}
// UnsafeKevoServiceServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to KevoServiceServer will
// result in compilation errors.
type UnsafeKevoServiceServer interface {
mustEmbedUnimplementedKevoServiceServer()
}
func RegisterKevoServiceServer(s grpc.ServiceRegistrar, srv KevoServiceServer) {
// If the following call pancis, it indicates UnimplementedKevoServiceServer was
// embedded by pointer and is nil. This will cause panics if an
// unimplemented method is ever invoked, so we test this at initialization
// time to prevent it from happening at runtime later due to I/O.
if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
t.testEmbeddedByValue()
}
s.RegisterService(&KevoService_ServiceDesc, srv)
}
func _KevoService_Get_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(KevoServiceServer).Get(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: KevoService_Get_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(KevoServiceServer).Get(ctx, req.(*GetRequest))
}
return interceptor(ctx, in, info, handler)
}
func _KevoService_Put_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(PutRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(KevoServiceServer).Put(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: KevoService_Put_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(KevoServiceServer).Put(ctx, req.(*PutRequest))
}
return interceptor(ctx, in, info, handler)
}
func _KevoService_Delete_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DeleteRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(KevoServiceServer).Delete(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: KevoService_Delete_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(KevoServiceServer).Delete(ctx, req.(*DeleteRequest))
}
return interceptor(ctx, in, info, handler)
}
func _KevoService_BatchWrite_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(BatchWriteRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(KevoServiceServer).BatchWrite(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: KevoService_BatchWrite_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(KevoServiceServer).BatchWrite(ctx, req.(*BatchWriteRequest))
}
return interceptor(ctx, in, info, handler)
}
func _KevoService_Scan_Handler(srv interface{}, stream grpc.ServerStream) error {
m := new(ScanRequest)
if err := stream.RecvMsg(m); err != nil {
return err
}
return srv.(KevoServiceServer).Scan(m, &grpc.GenericServerStream[ScanRequest, ScanResponse]{ServerStream: stream})
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type KevoService_ScanServer = grpc.ServerStreamingServer[ScanResponse]
func _KevoService_BeginTransaction_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(BeginTransactionRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(KevoServiceServer).BeginTransaction(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: KevoService_BeginTransaction_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(KevoServiceServer).BeginTransaction(ctx, req.(*BeginTransactionRequest))
}
return interceptor(ctx, in, info, handler)
}
func _KevoService_CommitTransaction_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(CommitTransactionRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(KevoServiceServer).CommitTransaction(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: KevoService_CommitTransaction_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(KevoServiceServer).CommitTransaction(ctx, req.(*CommitTransactionRequest))
}
return interceptor(ctx, in, info, handler)
}
func _KevoService_RollbackTransaction_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(RollbackTransactionRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(KevoServiceServer).RollbackTransaction(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: KevoService_RollbackTransaction_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(KevoServiceServer).RollbackTransaction(ctx, req.(*RollbackTransactionRequest))
}
return interceptor(ctx, in, info, handler)
}
func _KevoService_TxGet_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(TxGetRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(KevoServiceServer).TxGet(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: KevoService_TxGet_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(KevoServiceServer).TxGet(ctx, req.(*TxGetRequest))
}
return interceptor(ctx, in, info, handler)
}
func _KevoService_TxPut_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(TxPutRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(KevoServiceServer).TxPut(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: KevoService_TxPut_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(KevoServiceServer).TxPut(ctx, req.(*TxPutRequest))
}
return interceptor(ctx, in, info, handler)
}
func _KevoService_TxDelete_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(TxDeleteRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(KevoServiceServer).TxDelete(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: KevoService_TxDelete_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(KevoServiceServer).TxDelete(ctx, req.(*TxDeleteRequest))
}
return interceptor(ctx, in, info, handler)
}
func _KevoService_TxScan_Handler(srv interface{}, stream grpc.ServerStream) error {
m := new(TxScanRequest)
if err := stream.RecvMsg(m); err != nil {
return err
}
return srv.(KevoServiceServer).TxScan(m, &grpc.GenericServerStream[TxScanRequest, TxScanResponse]{ServerStream: stream})
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type KevoService_TxScanServer = grpc.ServerStreamingServer[TxScanResponse]
func _KevoService_GetStats_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetStatsRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(KevoServiceServer).GetStats(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: KevoService_GetStats_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(KevoServiceServer).GetStats(ctx, req.(*GetStatsRequest))
}
return interceptor(ctx, in, info, handler)
}
func _KevoService_Compact_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(CompactRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(KevoServiceServer).Compact(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: KevoService_Compact_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(KevoServiceServer).Compact(ctx, req.(*CompactRequest))
}
return interceptor(ctx, in, info, handler)
}
// KevoService_ServiceDesc is the grpc.ServiceDesc for KevoService service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
var KevoService_ServiceDesc = grpc.ServiceDesc{
ServiceName: "kevo.KevoService",
HandlerType: (*KevoServiceServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "Get",
Handler: _KevoService_Get_Handler,
},
{
MethodName: "Put",
Handler: _KevoService_Put_Handler,
},
{
MethodName: "Delete",
Handler: _KevoService_Delete_Handler,
},
{
MethodName: "BatchWrite",
Handler: _KevoService_BatchWrite_Handler,
},
{
MethodName: "BeginTransaction",
Handler: _KevoService_BeginTransaction_Handler,
},
{
MethodName: "CommitTransaction",
Handler: _KevoService_CommitTransaction_Handler,
},
{
MethodName: "RollbackTransaction",
Handler: _KevoService_RollbackTransaction_Handler,
},
{
MethodName: "TxGet",
Handler: _KevoService_TxGet_Handler,
},
{
MethodName: "TxPut",
Handler: _KevoService_TxPut_Handler,
},
{
MethodName: "TxDelete",
Handler: _KevoService_TxDelete_Handler,
},
{
MethodName: "GetStats",
Handler: _KevoService_GetStats_Handler,
},
{
MethodName: "Compact",
Handler: _KevoService_Compact_Handler,
},
},
Streams: []grpc.StreamDesc{
{
StreamName: "Scan",
Handler: _KevoService_Scan_Handler,
ServerStreams: true,
},
{
StreamName: "TxScan",
Handler: _KevoService_TxScan_Handler,
ServerStreams: true,
},
},
Metadata: "proto/kevo/service.proto",
}