Quick Notes on Go - Part 2

In this post, I share some notes on Go. Check out part 1.


- To create a new error:
errors.New("error message")

- Error is an interface:
type error interface {
    Error() string
}

- Custom error type: just implement the interface:
type MyError struct {
    Status  int
    Message string
}

func (e MyError) Error() string {
    return e.Message
}

- You can wrap an error with %w and fmt.Errorf, creating an error chain:
return fmt.Errorf("added message: %w", err)

You can Unwrap an error to get the inner error:
if innerErr := errors.Unwrap(err); innerErr != nil {
    // use innerErr
}

but we usually don't do that, and instead use errors.Is and errors.As.

- One option is to wrap errors in a deferred function. For that, we have to name the return value in the function signature to be able to access it in the deferred function.

func MyFunc() (_ int, err error) {
    defer func() {
        if err != nil {
            err = fmt.Errorf("additional message: %w", err)
        }
    }()
    // ...
}

- errors.Is: returns true if any of the errors in the error chain is equal (==) to a specific error.
if errors.Is(err, os.ErrNotExist) {
    // handle missing file
}

- errors.As: returns true if any of the errors in the error chain is of the given error type.
var myErr MyError
if errors.As(err, &myErr) {
    // use myErr
}

or you can use an interface:
type HasCode interface {
    GetErrorCode() int
}

var coder HasCode
if errors.As(err, &coder) {
    fmt.Println("Error code is:", coder.GetErrorCode())
}

So basically, we are saying: if there is an error in the error chain that has a GetErrorCode method, assign it to the coder variable, and then we use it to print the error code.

- Panic: You can panic with the panic("message") function.
When the program panics, the deferred functions are executed.

- Recover: You can recover from a panic with recover in a deferred function.
if p := recover(); p != nil {
    // handle p
}

p is the panic we recovered from.

- You can print the stack trace with %+v in fmt.Printf. 

- io.Reader errors:
  • io.EOF: reported as an error, but not really an error. It is a normal end of file.
  • io.ErrUnexpectedEOF: a real error


- Repository > Modules > Packages
It is recommended to have only one module in a repository for easier versioning. Each module needs a globally unique ID. We usually use the repository (e.g., GitHub) path as the module path.

- go.mod: To have a module, you must have a go.mod file in the root of the module path.
- You can specify a package with package keyword.
  • All files in a directory must have the same package name.
  • But the directory name can be different from the package name.
  • When you import, you import by the path of the directory; then you access package exports with packageName.resource. 
    • Example: Suppose in directory mydirectory I have myfile.go that has package mypackage that has type MyType.
    • I can import it with import "path/to/mydirectory"
    • Then use it like mypackage.MyType
- internal is a special package name. Anything in this package is only accessible to its parent package and its siblings. 

- Packages can have init functions:
  • No parameter, no return value.
  • Try to avoid that. 
  • It is called the first time another package refers to it. 
  • When you see code import a package with _ (called a blank import), it is because the code only cares about the init function and is not going to use any of the identifiers of the package.

- Go does not allow circular dependencies, i.e., if A imports B directly or indirectly, B cannot do the same.

Solutions if you have circular dependency:
  • Merge the two packages.
  • Move only the part of one package that the other needs into the other package.
  • Create a third package and move the shared part there.
- go.mod has the dependencies in its require section. 

- When you do go run/build/test/list, the following files are updated:
  • go.mod: it downloads required imports and puts them in the require section.
  • go.sum: it has a checksum of each imported module at its specified version.
- Check go.mod into the repository, so it is clear which versions of our imports must be used. 
- Check in go.sum, so you prevent using a module with different checksum (as a security protection against malicious code included in a hijacked module). Specifically, whenever you download a module, Go tools calculate the checksum of the downloaded module and compare it against a sum database (from Google for example) and if the calculated checksum is different from the one in the database, it does not install the module. 

- To see the list of available versions for a module:
go list -m -versions path/to/module

- To get a specific version
go get path/to/module@v1.2.3

- To get latest version 
go get -u path/to/module

- To get latest patch for the current minor version
go get -u=patch path/to/module

- Important: If you have multiple dependencies that all require the same dependency, Go picks the minimum version of the indirect dependency that satisfies all of them.
  • If a module does not respect semantic versioning, e.g., your code works with v1.1.1 of it but not with v1.2.3, you have to ask its authors to fix their bug.
- A major version bump requires a different module path. Versions 0 and 1 don't have /vN in their path, but any major version larger than 1 typically has /vN in its path.

  • As the user: when you want to use a new major version of a third-party module, you have to update your import path to the new path ending in vN where N is your desired major version.
    • After that, when you update go.mod, you will see the new version is added. If there is no reference to the older version anymore, go mod tidy will remove it.
  • As the provider: To bump the major version, you can create a new branch named vN, put the new code there, and tag that branch with vN.0.0.
    • Note that when you want to bump your minor or patch version, you can simply tag your current branch and don't need to do anything else.
- You can vendor all of your dependencies with
go mod vendor
It creates a folder called vendor at the top level of the module containing all of your dependencies.
  • Run it again after changing versions.
- pkg.go.dev indexes all public Go modules.
  • To publish, put it in a public repository and include a LICENSE file.
- When you download a module, Go tools get it from a proxy and not directly from the repo. If you have private repositories that are not available to the public proxy servers, you can list them with GOPRIVATE:

GOPRIVATE=myrepo.com

Now, any module from myrepo.com will be downloaded directly instead of from the proxy.

- init function: Try not to use it. Only use it for initializing package-level immutable variables.


- The concurrency model of Go is Communicating Sequential Processes (CSP), which, like distributed systems, does not use shared state; instead, processes talk to each other via channels. CSP is as powerful as state sharing, but it is simpler to reason about. 

- Goroutines:
  • You can think of them as pseudo-threads managed by the Go runtime instead of the OS. The Go runtime creates OS threads and schedules goroutines onto them.
  • Faster to create than OS threads, as they are managed entirely by the Go runtime.
  • More memory efficient.
  • Easier to switch goroutines.
  • Important: Goroutines integrate with Go's garbage collector, so they are more efficient.
  • Because of these advantages, you can run tens of thousands of goroutines, unlike OS threads. 
  • Since goroutines are not OS threads, we don't have thread-local variables in Go. Instead, we use context.
- Two types of channel:
  • Unbuffered channel: make(chan int):
    • Synchronous: There is no space in the channel. A write must hand the data to a reader.
      • Writes block until someone reads.
      • Reads block until someone writes.
    • len(ch) = 0 always
  • Buffered channel: make(chan int, 10)
    • Asynchronous:
      • A write blocks until channel has a space to write. 
      • A read blocks until channel has some data.
- Reading from (or writing to) a nil channel blocks indefinitely. 

- You can restrict channel direction:
  • func myFunc(ch <-chan int): you can only read from ch in myFunc. 
  • func myFunc(ch chan<- int): you can only write to the ch in myFunc.
- For-range over a channel:
for d := range myChan {
    // read d
}
It goes until:
  1. the channel is closed, and
  2. nothing is left in the channel (in the case of a buffered channel).
- Closed channel:
  • A write or close will panic.
  • A read returns the zero value of the type. You can use the comma-ok idiom to know whether the channel is closed when reading.
- select case: Use it when you are waiting to read from channels (usually inside a for loop):
select {
case d, ok := <-chan1:
    if !ok {
        chan1 = nil // to avoid executing this case again, as chan1 is closed!
    }
    // do something with d
case d := <-chan2:
    // do something with d
default:
    fmt.Println("No channel is ready")
}

Note that default is executed immediately if no channel is ready. If you don't have default and no channel is ready, select blocks until one is ready.

Note that reading from a closed channel always returns the zero value immediately. Thus, you can set a closed channel to nil so its case can never be selected again in a select statement!

- Go runtime detects deadlock and kills the program. 
- Done pattern: You can use a done channel to signal getting out of the for loop when selecting from channels.
done := make(chan struct{})

You never write to the done channel. Just close it, and that will trigger its case in the select.

for {
    select {
    // other cases
    case <-done:
        return
    }
}

- Rate limiting with buffered channels: Have a channel holding a number of tokens. Each time you execute, try to take a token; if no token is available, reject the request:

tokensChan := make(chan struct{}, maxTokens)
// put the desired number of struct{}{} tokens in tokensChan.

When processing a request:
select {
case <-tokensChan:
    // process the request, then return the token: tokensChan <- struct{}{}
default:
    // return an error, request rejected.
}

- sync.WaitGroup is like a semaphore. You set a value using Add, then workers decrement it with Done, and you Wait in the reducer. 

- sync.Once: You can call the Do function of a Once value and it will be executed only one time.
Example: This will print "execution" only one time.
var once sync.Once
func main() {
    once.Do(func() {
        fmt.Println("execution")
    })

    once.Do(func() {
        fmt.Println("execution")
    })
}

- Go has mutex and atomics as well. 


- Convention: Context is the first parameter of functions.
- You should create the root context with context.Background(), and then never do that again. Instead, pass context from the root.

- Cancelable context:
  • ctx, cancel := context.WithCancel(parentCtx)
  • Note that when the parent context cancels, then the child also cancels. 
  • You must call the cancel function. So do defer cancel() to make sure you will cancel it, otherwise you leak resources.
  • It is ok to cancel more than one time.
- Using ctx.Done() channel with select:
select {
case <-ctx.Done():
    fmt.Println("context is cancelled")
    return ctx.Err()
}

Note that if the context is not alive anymore, ctx.Err() returns either:
  • context.Canceled
  • context.DeadlineExceeded
Otherwise it returns nil.

- Auto-canceling Context:
  • ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
  • ctx, cancel := context.WithDeadline(ctx, time.Now().Add(2*time.Second))
- You can add key-values to the context, but try not to pass data in the context; have it in the function signature instead.
context.WithValue(ctx, key, user)
and then read it:
user := ctx.Value(key)
One example use is to put a GUID in HTTP requests.


- Test files are next to production files in the same package. Thus, they can access all package functions and variables. 
- Test files must end in _test.go.

- If you want to test only the public API:
  • Still keep your test file next to your production files
  • Change its package name to productionPackageName_test
- Sample test: use the testing package, start the function name with Test (or Test_ when testing unexported functions), and take testing.T as a parameter.
func Test_myFunc(t *testing.T) {
    // test
}

- Errors:
  • t.Error, t.Errorf: Test will continue.
  • t.Fatal, t.Fatalf: Test stops after reporting the error.
- Use TestMain to set up state shared between tests.
func TestMain(m *testing.M) {
    // setup state
    result := m.Run() // this runs the tests
    // cleanup
    os.Exit(result)
}

- You can also run a test function like this:
t.Run(name, func(t *testing.T) { /* write your test here */ })
The nice thing about this is that you can use a for loop over your test data and run a test for each entry instead of writing a separate test function manually. This is called a table test.
for i, d := range testData {
    t.Run(fmt.Sprintf("Test%d", i), func(t *testing.T) { /* write your test using d */ })
}
This way you can avoid repeating test code.

- To cleanup a single test:
t.Cleanup(func() {//cleanup})

- Put test resources for a test in a folder named testdata next to your test files. Access this folder in your test code with the "testdata/myFile.data" path.

- To avoid cached test results do go test -count=1

- To compare composite types (slice, map, struct with slice/map), use go-cmp:
if diff := cmp.Diff(expected, result); diff != "" {
    t.Error(diff)
}
- Use cmp.Comparer to have custom comparison:
c := cmp.Comparer(func(x, y MyType) bool {
    return ... // compare here
})
Then do cmp.Diff(expected, result, c).

- To get test coverage:
go test -cover -coverprofile=c.out

To see coverage:
go tool cover -html=c.out

- You can create a stub that embeds the interface of a dependency. Then you can implement only the functions of the dependency that your tests actually need. This technique is good when dealing with dependencies whose interfaces have many methods.

- You can use httptest to stub an http server.

- You add a tag to a test file by adding a comment like this to the first line of integration tests:
// +build integration

Then they will be executed only when you run tests like this:
go test -tags integration

- Run race checking with the -race flag:
go test -race

You can also build with the -race flag. This way, the binary prints races to the console. But your binary will be significantly slower.


- Put benchmark function in the _test.go files. 

- To run benchmarks:
go test -bench=. 

- To have memory allocation result add -benchmem flag.

- Results:
BenchmarkName-#CPUs  N  time ns/op  bytes_allocated B/op  heap_allocations allocs/op

- A benchmark function starts with Benchmark and must have this structure:
func BenchmarkMyFunc(b *testing.B) {
    for i := 0; i < b.N; i++ {
        // don't use i here!
    }
}

- When running benchmarks, go test increases N until the benchmark takes at least benchtime. The default is 1 second, but you can set it with -benchtime (e.g., -benchtime=5s).

