Go concurrency is elegant and simple

Recently I wanted to speed up some data retrieval with Go. Its concurrency model is elegant and simple, and everything you need is built in.

Let’s say there are some articles that need to be fetched from an API. I have the IDs of all the articles, and I can fetch them one by one. A single request can take up to a second, so I added a 1-second sleep to simulate it.

package main

import (
        "fmt"
        "log"
        "time"
)

type Article struct {
        ID    uint
        Title string
}

// GetArticle simulates a slow API call with a one-second sleep.
func GetArticle(id uint) Article {
        time.Sleep(time.Second)
        return Article{ID: id, Title: fmt.Sprintf("Title %d", id)}
}

The classic way of doing this is to make a request for each article, wait for it to finish, and store the data.

func main() {
        var articles []Article

        // Fetch the articles one by one; each call blocks for a second.
        for id := uint(1); id <= 10; id++ {
                log.Printf("Fetching article %d...", id)
                articles = append(articles, GetArticle(id))
        }

        log.Println(articles)
}

With a 1-second response time, that takes 10 seconds. Now imagine 100 articles or more.
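This is where goroutines and channels come in. Here is a minimal sketch of a concurrent version, using the same GetArticle as above: each fetch runs in its own goroutine and sends its result on a channel, so the ten requests overlap and the whole batch finishes in roughly one second. Note that the results now arrive in whatever order the requests complete.

// A concurrent replacement for the body of main above.
results := make(chan Article)

for id := uint(1); id <= 10; id++ {
        // Pass id as an argument so each goroutine gets its own copy.
        go func(id uint) {
                results <- GetArticle(id)
        }(id)
}

var articles []Article
for i := 0; i < 10; i++ {
        articles = append(articles, <-results)
}

log.Println(articles)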

Pipelines and workers in Go

I have some lists of users that I receive and, for each user, I need to apply some rules (text formatting, maximum length, and who knows what other business rules will come up in the future), then send the user on to another service. If I get the same user again, I have to ignore them for the entire process. If one of the rules says the user is not eligible, I have to stop the whole process; there is no need to run the remaining rules.

If you read the previous paragraph again, you can see some if statements, ones that you should avoid in the technical implementation but, of course, not in the business rules:

  • If the user was already processed, continue
  • If the maximum length is exceeded, truncate
  • If a rule says the user is not OK, stop

I look at the rules as workers in a pipeline. Every worker does its job and sends its output to the next worker. Here’s how I’ve handled this.
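As a minimal sketch of that shape (the names User, rule, and stage are mine, and the deduplication step is left out for brevity): each worker reads users from an input channel, applies its rule, and forwards the ones that pass to the next stage. Dropping a user from the channel is how a stage says "stop processing this one".

package main

import (
        "fmt"
        "strings"
)

type User struct {
        Name string
}

// A rule transforms a user and reports whether they are still eligible.
type rule func(User) (User, bool)

// stage runs one rule as a worker: it reads users from in, applies the
// rule, and sends the ones that pass to the returned channel.
func stage(in <-chan User, r rule) <-chan User {
        out := make(chan User)
        go func() {
                defer close(out)
                for u := range in {
                        if v, ok := r(u); ok {
                                out <- v
                        }
                }
        }()
        return out
}

func main() {
        // Hypothetical rules: trim the name, enforce a maximum length.
        format := func(u User) (User, bool) {
                u.Name = strings.TrimSpace(u.Name)
                return u, u.Name != "" // an empty name means not eligible
        }
        truncate := func(u User) (User, bool) {
                if len(u.Name) > 10 {
                        u.Name = u.Name[:10]
                }
                return u, true
        }

        in := make(chan User)
        out := stage(stage(in, format), truncate)

        go func() {
                defer close(in)
                for _, name := range []string{" Jane ", "", "A very long name"} {
                        in <- User{Name: name}
                }
        }()

        for u := range out {
                fmt.Println(u.Name)
        }
}

Closing the input channel shuts the whole chain down, because each stage closes its own output once its input runs dry, which makes it easy to add or reorder rules later.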