
Golang Architecture & Practices

Structured recap of the Golang reference (Uber Go Guide) in English.

1) Clean Architecture (Hexagonal / Onion)

Core Principles

  • Dependency Rule: outer layers depend inward (framework/DB → use case → entities). Entities depend on nothing external.
  • Goal: domain/core stays free of implementation details (DB, UI, libraries).

Layers

| Layer | Main Role | Dependencies | Golang Context |
| --- | --- | --- | --- |
| Entities / Domain Core | Enterprise business rules; most stable objects (Payment, User). | None | Plain Go structs & methods |
| Use Cases / Application | Application rules (CreatePaymentUseCase, LoginUseCase). | Entities + Interfaces (Ports) | Implements interfaces from Domain |
| Interface Adapters | Adapt external ↔ internal data. | Depends on Use Cases | Controllers (gRPC/REST), Repo Interfaces, Presenters |
| Frameworks & Drivers | Implementation details (DB, web server, UI, tools). | Depends on Interface Adapters | Gin/Echo, GORM/SQL driver, Logrus/Zap |

Ports & Adapters (Hexagonal)

  • Ports: defined in the Use Cases/Application layer. Example: type PaymentRepository interface { Save(p Payment) error }.
  • Adapters: implementations in Frameworks/Adapters. Example: type PostgreSQLAdapter struct { db *sql.DB } implements that port.
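
Both roles can be sketched in a few lines of Go. The PaymentRepository port matches the example above; the entity fields, CreatePaymentUseCase, and the in-memory adapter (a stand-in for a real PostgreSQLAdapter wrapping *sql.DB) are illustrative:

```go
package main

import "fmt"

// Entity (domain core): a plain struct with no external dependencies.
type Payment struct {
	ID     int
	Amount float64
}

// Port: declared in the application layer, where it is consumed.
type PaymentRepository interface {
	Save(p Payment) error
}

// Use case: depends only on the port, never on a concrete driver.
type CreatePaymentUseCase struct {
	Repo PaymentRepository
}

func (uc *CreatePaymentUseCase) Execute(amount float64) error {
	if amount <= 0 {
		return fmt.Errorf("invalid amount: %v", amount)
	}
	return uc.Repo.Save(Payment{ID: 1, Amount: amount})
}

// Adapter (frameworks layer): here in-memory, which also shows how the
// core is unit-testable without a database.
type InMemoryAdapter struct {
	Saved []Payment
}

func (a *InMemoryAdapter) Save(p Payment) error {
	a.Saved = append(a.Saved, p)
	return nil
}

func main() {
	repo := &InMemoryAdapter{}
	uc := &CreatePaymentUseCase{Repo: repo}
	if err := uc.Execute(99.50); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("saved payments:", len(repo.Saved))
}
```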

Why it fits Go

  1. Explicit Dependency Inversion via interfaces (DIP).
  2. Testability: core can be unit-tested without DB/HTTP/filesystem (mock interfaces).
  3. Performance: core stays lean, minimal framework overhead.

2) Concurrency in Go

Goroutine

  • Goroutines: lightweight, runtime-scheduled units of execution with small initial stacks (a few KB). Create one with go fn().
  • The runtime multiplexes goroutines onto a small number of OS threads (bounded by GOMAXPROCS).
package main

import (
	"fmt"
	"time"
)

func cetak(s string) {
	for i := 0; i < 3; i++ {
		time.Sleep(100 * time.Millisecond)
		fmt.Println(s)
	}
}

func main() {
	go cetak("Dunia")
	cetak("Halo")
	time.Sleep(1 * time.Second)
}

Channels

  • Safe communication between goroutines; built-in blocking semantics.
  • Unbuffered: send/receive wait for each other. Buffered: block when full/empty.
| Type | Description | When Blocking |
| --- | --- | --- |
| Unbuffered | No capacity | Sender waits for receiver; receiver waits for sender |
| Buffered | Fixed capacity (N) | Sender blocks if full; receiver blocks if empty |
pesan := make(chan string)

go func() {
	pesan <- "Selesai memproses"
}()

hasil := <-pesan
fmt.Println(hasil) // Output: Selesai memproses
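
For contrast with the unbuffered example above, a buffered channel accepts sends up to its capacity without a waiting receiver; only a send beyond capacity blocks:

```go
package main

import "fmt"

func main() {
	// Capacity 2: both sends below return immediately; a third send
	// would block until a receive frees a slot.
	ch := make(chan int, 2)
	ch <- 1
	ch <- 2
	close(ch)

	// Ranging over a closed channel drains the buffered values.
	for v := range ch {
		fmt.Println(v)
	}
}
```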

Worker Pool (Bounded Concurrency)

  • Problem: prevent resource exhaustion (RAM/connections) from too many parallel tasks.
  • Pattern: jobs channel → N worker goroutines → results channel.
graph TD
    A[Main Program] -->|Send Task| B(Jobs Channel)
    B --> W1[Worker 1]
    B --> W2[Worker 2]
    B --> W3[Worker 3]
    W1 -->|Send Result| C(Results Channel)
    W2 -->|Send Result| C
    W3 -->|Send Result| C
    C --> D[Main Program]
    subgraph Worker_Pool
        W1
        W2
        W3
    end

3) Golang Best Practices

context

  • Manage timeout, cancellation, cross-goroutine values.
  • ctx context.Context as first arg in request paths (controller → use case → repo).
func (s *UserService) GetUser(ctx context.Context, id int) (*User, error) {
	user, err := s.repo.FindByID(ctx, id)
	return user, err
}

Error Wrapping

  • Use %w to wrap; evaluate with errors.Is/As.
if err != nil {
	return fmt.Errorf("repository: failed to find user %d: %w", id, err)
}

Interface Segregation (ISP)

  • Small, consumer-owned interfaces. Avoid fat interfaces.
type UserCreator interface {
	Create(user User) error
}

type UserDeleter interface {
	Delete(id int) error
}
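
The point of the split is that each consumer declares only what it needs, while one concrete type can still satisfy both. A sketch with hypothetical RegistrationService and MemoryStore types:

```go
package main

import "fmt"

type User struct{ Name string }

type UserCreator interface {
	Create(user User) error
}

type UserDeleter interface {
	Delete(id int) error
}

// Registration never deletes, so it depends only on UserCreator.
type RegistrationService struct {
	Users UserCreator
}

func (s *RegistrationService) Register(name string) error {
	return s.Users.Create(User{Name: name})
}

// One concrete store implicitly satisfies both narrow interfaces.
type MemoryStore struct{ count int }

func (m *MemoryStore) Create(u User) error { m.count++; return nil }
func (m *MemoryStore) Delete(id int) error { m.count--; return nil }

func main() {
	store := &MemoryStore{}
	svc := &RegistrationService{Users: store}
	_ = svc.Register("alice")
	fmt.Println("users:", store.count)
}
```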

4) REST API (Interface Adapter)

Framework Role

  • Framework (Gin/Echo/Fiber) lives at the outer layer: routing, binding, middleware, response handling.
| Framework | Philosophy | Strengths |
| --- | --- | --- |
| Gin | Popular, high-perf, tree routing | Stable, many middlewares |
| Echo | Minimal, flexible | Clean API, modular, zero-alloc JSON |
| Fiber | Very fast (fasthttp) | High perf, Express-like syntax |

Handler ↔ Use Case

  • Handler: bind & validate → call use case with context → map errors → response. No business logic inside handler.
// Sentinel error so the handler can match it with errors.Is;
// comparing against a fresh errors.New value would never match.
var ErrAmountTooLow = errors.New("payment amount too low")

type PaymentService struct { /* deps */ }

func (s *PaymentService) ProcessPayment(ctx context.Context, amount float64) error {
	if amount < 1.0 {
		return ErrAmountTooLow
	}
	return nil
}

type PaymentHandler struct {
	service *PaymentService
}

func (h *PaymentHandler) HandleProcessPayment(c *gin.Context) {
	var req struct {
		Amount float64 `json:"amount"`
	}
	if err := c.BindJSON(&req); err != nil {
		c.JSON(400, gin.H{"error": "Invalid request"})
		return
	}
	ctx := c.Request.Context()
	err := h.service.ProcessPayment(ctx, req.Amount)
	if err != nil {
		if errors.Is(err, ErrAmountTooLow) {
			c.JSON(400, gin.H{"error": err.Error()})
			return
		}
		c.JSON(500, gin.H{"error": "Internal Server Error"})
		return
	}
	c.JSON(200, gin.H{"message": "Payment processed"})
}

Practical Middleware

  • Auth (JWT), logging/tracing (OTel), context propagation always passed to use case and repo.

5) Database Layer (Repository Pattern)

Role

  • Repository acts as adapter implementing ports from the application layer.
graph LR
    U[Use Case Layer] -->|Calls Port| P{PaymentRepository Interface}
    P -- Implemented By --> R[Repository/Adapter Layer]
    R --> D((PostgreSQL/ORM))

SQL vs ORM

  • database/sql or sqlx: full control, performance, raw SQL; more verbose.
  • ORM (GORM/Bun): fast for CRUD, less efficient for complex cases; N+1 risk.

Context & Error

  • All repo methods accept context; wrap driver errors and translate to domain (e.g., not found).

Transactions

func (r *PostgresRepo) Transfer(ctx context.Context, fromID, toID int, amount float64) error {
	tx, err := r.db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer func() {
		if p := recover(); p != nil { // use p, not r: don't shadow the receiver
			tx.Rollback()
			panic(p)
		} else if err != nil { // any early return above left err non-nil
			tx.Rollback()
		}
	}()
	_, err = tx.ExecContext(ctx, "UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, fromID)
	if err != nil {
		return err
	}
	_, err = tx.ExecContext(ctx, "UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, toID)
	if err != nil {
		return err
	}
	err = tx.Commit() // assign so the deferred check sees a commit failure
	return err
}

6) Caching with Redis

Why Redis

| Feature | Description | Benefit |
| --- | --- | --- |
| In-Memory | Data in RAM | Very low latency |
| Data Structures | String, Hash, List, Set, Sorted Set | Complex caching (leaderboard, etc.) |
| TTL | Per-key expiry | Auto refresh/remove stale data |
| Optional Persistence | RDB/AOF | Survives restart |

Core Patterns

  • Cache-Aside (Lazy Load): Read cache → miss → DB → set cache (TTL). Write: update DB then invalidate.
  • Write-Through: Write cache & DB synchronously; stronger freshness, higher write latency.
  • Write-Back (Write-Behind): Write cache, quick ack; async flush to DB. Risk if cache dies before flush.

Implementation (Cache-Aside)

type PaymentCacheRepository struct {
	DBClient  *sql.DB
	RDBClient *redis.Client
}

func (r *PaymentCacheRepository) GetPaymentByID(ctx context.Context, id int) (*Payment, error) {
	key := fmt.Sprintf("payment:%d", id)

	// 1. Try the cache first.
	paymentJSON, err := r.RDBClient.Get(ctx, key).Result()
	if err == nil {
		var p Payment
		if err := json.Unmarshal([]byte(paymentJSON), &p); err == nil {
			return &p, nil
		}
	}

	// 2. Cache miss (or decode failure): load from the database.
	var p Payment
	row := r.DBClient.QueryRowContext(ctx, "SELECT ...", id)
	if err := row.Scan( /* &p.Field, ... */ ); err != nil {
		return nil, err
	}

	// 3. Populate the cache with a TTL before returning.
	if data, err := json.Marshal(p); err == nil {
		r.RDBClient.Set(ctx, key, data, 5*time.Minute)
	}
	return &p, nil
}

Consistency & Herding

  • Stale data: mitigate with TTL + invalidation.
  • Thundering herd: per-key lock (singleflight), TTL jitter, stale-while-revalidate.

7) Message Queue (Broker)

Roles

ComponentRoleTools
ProducerSends events/messagesOrder/Payment Service
ConsumerProcesses messagesNotification/Inventory Service
Queue/TopicTemporary storageQueue in RabbitMQ, Topic in Kafka/NATS
BrokerManages queue/topic & deliveryRabbitMQ, Kafka, NATS

Producer

  • Record event (User Created, Payment Approved), send to broker, done after ACK; not concerned if consumer is online.
  • Example: Payment Service sends PAYMENT_APPROVED via sarama (Kafka).

Consumer

  • Subscribe, receive push/pull, process, ACK, broker deletes/marks offset.
  • Example: Notification Service listens to PAYMENT_APPROVED then sends email.
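
Real producers/consumers would go through a broker client such as sarama; the decoupling itself can be sketched in-memory, with a Go channel standing in for the topic:

```go
package main

import (
	"fmt"
	"sync"
)

// Event is the message payload; a real system would serialize this
// to JSON/Avro before handing it to the broker client.
type Event struct {
	Type string
	Data string
}

func main() {
	topic := make(chan Event, 16) // channel stands in for the broker topic
	var wg sync.WaitGroup

	// Consumer: e.g. a Notification Service subscribed to the topic.
	wg.Add(1)
	processed := 0
	go func() {
		defer wg.Done()
		for ev := range topic { // loop ends when the topic is closed
			if ev.Type == "PAYMENT_APPROVED" {
				processed++ // e.g. send the confirmation email here
			}
		}
	}()

	// Producer: the Payment Service emits events and moves on.
	for i := 1; i <= 3; i++ {
		topic <- Event{Type: "PAYMENT_APPROVED", Data: fmt.Sprintf("payment-%d", i)}
	}
	close(topic)

	wg.Wait()
	fmt.Println("processed:", processed) // 3
}
```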

Main Brokers

| Broker | Philosophy | Main Pattern | Durability & Scalability |
| --- | --- | --- | --- |
| RabbitMQ | Message queueing, FIFO, messages removed after ACK | Task queue | Good for HA & task queues |
| Apache Kafka | Event streaming, append-only log | Topic + partition, offset | Highly scalable, replay |
| NATS | High-perf pub/sub | Simple pub/sub | Realtime/telemetry; persist with JetStream |
  • Choose Kafka for event permanence & replay; RabbitMQ for fast task queues; NATS for lightweight pub/sub.
  • Resilience: if consumer dies, messages stay in broker until processed.

Reordered for structure; content mirrors the original reference, now in English.