Concurrent HTTP Requests in Golang: Best Practices and Techniques

Rafet Topcu · Published in Insider Engineering · 10 min read · Jan 3

In the realm of Golang, sending HTTP requests concurrently is a vital skill for optimizing web applications. This article explores various methods to achieve this, from basic goroutines to advanced techniques involving channels and sync.WaitGroup. We'll delve into best practices for performance and error handling in concurrent environments, equipping you with strategies to enhance the speed and reliability of your Go applications. Let's dive into the world of concurrent HTTP requests in Golang!

Basic Approach Using Goroutines

When it comes to implementing concurrency in Golang, the most straightforward approach is using goroutines. These are the building blocks of concurrency in Go, offering a simple yet powerful way to execute functions concurrently.

Getting Started with Goroutines

To start a goroutine, simply prefix a function call with the go keyword. This launches the function as a goroutine, allowing the main program to continue running independently. It's like starting a task and moving on without waiting for it to finish.

For instance, consider a scenario where you're sending an HTTP request. Normally, you'd call a function like sendRequest(), and your program would wait until this function is complete. With goroutines, you can do this concurrently:

```go
go sendRequest("http://example.com")
```
Handling Multiple Requests

Imagine you have a list of URLs, and you need to send an HTTP request to each. Without goroutines, your program would send these requests one after the other, which is time-consuming. With goroutines, you can send them all at almost the same time:

```go
urls := []string{"http://example.com", "http://another.com", ...}

for _, url := range urls {
    go sendRequest(url)
}
```

This loop starts a new goroutine for each URL, drastically reducing the time your program takes to send all the requests.

Approaches to Concurrent HTTP Requests

In this section, we'll delve into various methods to handle HTTP requests concurrently in Go. Each approach has its unique characteristics, and understanding these can help you choose the right method for your specific needs. I'm going to use our insrequester package, our open-source requester, to handle the HTTP requests mentioned in this article; you can check it out.

Basic Goroutines

The simplest way to send HTTP requests concurrently in Go is by using goroutines. Goroutines are lightweight threads managed by the Go runtime. Here's a basic example:

```go
requester := insrequester.NewRequester().Load()
urls := []string{"http://example.com", "http://example.org", "http://example.net"}

for _, url := range urls {
    go requester.Get(insrequester.RequestEntity{Endpoint: url})
}

time.Sleep(2 * time.Second) // Wait for goroutines to finish
```

This method is straightforward but lacks control over goroutines once they are launched. You can't get the return value of the Get method this way.
You also have to guess how long to sleep to let all the goroutines finish, and even after sleeping you still can't be sure that they have.

WaitGroups

To improve upon basic goroutines, sync.WaitGroup can be used for better synchronization. It waits for a collection of goroutines to finish executing; each goroutine must call Done, or wg.Wait() will block forever:

```go
requester := insrequester.NewRequester().Load()
wg := sync.WaitGroup{}
urls := []string{"http://example.com", "http://example.org", "http://example.net"}

wg.Add(len(urls))
for _, url := range urls {
    go func(url string) {
        defer wg.Done()
        requester.Get(insrequester.RequestEntity{Endpoint: url})
    }(url)
}

wg.Wait() // Wait for all goroutines to complete
```

This ensures that the main function waits for all the HTTP requests to complete.

Channels

Channels are a powerful feature in Go for communication between goroutines. They can be used to collect data from multiple HTTP requests:

```go
requester := insrequester.NewRequester().Load()
urls := []string{"http://example.com", "http://example.org", "http://example.net"}

ch := make(chan string, len(urls))

for _, url := range urls {
    go func(url string) { // pass url so each goroutine gets its own copy
        res, _ := requester.Get(insrequester.RequestEntity{Endpoint: url})
        ch <- fmt.Sprintf("%s: %d", url, res.StatusCode)
    }(url)
}

for range urls {
    response := <-ch
    fmt.Println(response)
}
```

Channels not only synchronize goroutines but also facilitate the passing of data between them.

Worker Pools

A worker pool is a pattern where a fixed number of workers (goroutines) are created to handle a variable number of tasks. This helps in limiting the number of concurrent HTTP requests, thereby preventing resource exhaustion.
Here’s how you can implement a worker pool in Go:

```go
type Job struct {
	URL string
}

func worker(requester *insrequester.Request, jobs <-chan Job, results chan<- *http.Response, wg *sync.WaitGroup) {
	for job := range jobs {
		res, _ := requester.Get(insrequester.RequestEntity{Endpoint: job.URL})
		results <- res
		wg.Done()
	}
}

func main() {
	requester := insrequester.NewRequester().Load()
	urls := []string{"http://example.com", "http://example.org", "http://example.net"}

	numWorkers := 2 // Define the number of workers in the pool
	jobs := make(chan Job, len(urls))
	results := make(chan *http.Response, len(urls))
	var wg sync.WaitGroup

	// Start workers
	for w := 0; w < numWorkers; w++ {
		go worker(requester, jobs, results, &wg)
	}

	// Send jobs to the worker pool
	wg.Add(len(urls))
	for _, url := range urls {
		jobs <- Job{URL: url}
	}
	close(jobs)

	wg.Wait()

	// Collect results
	for i := 0; i < len(urls); i++ {
		fmt.Println(<-results)
	}
}
```

Using a worker pool allows you to manage a large number of concurrent HTTP requests efficiently. It’s a scalable solution that can be adjusted based on the workload and system capacity, thereby optimizing resource utilization and improving overall performance.

Limiting Goroutines with Channels

This method uses channels to create a semaphore-like mechanism to limit the number of concurrent goroutines. It’s effective in scenarios where you need to throttle HTTP requests to avoid overwhelming the server or hitting rate limits.
Here’s how you can implement it:

```go
requester := insrequester.NewRequester().Load()
urls := []string{"http://example.com", "http://example.org", "http://example.net"}

maxConcurrency := 2 // Limit the number of concurrent requests
limiter := make(chan struct{}, maxConcurrency)

for _, url := range urls {
	limiter <- struct{}{} // Acquire a token; blocks here until a running goroutine releases one
	go func(url string) {
		defer func() { <-limiter }() // Release the token
		requester.Post(insrequester.RequestEntity{Endpoint: url})
	}(url)
}

// Wait for all goroutines to complete
for i := 0; i < cap(limiter); i++ {
	limiter <- struct{}{}
}
```

The use of defer here is crucial. If you place the <-limiter statement after the Post method and the Post method panics or fails in a similar way, the <-limiter line will never execute. This can lead to infinite waits, as the semaphore token is never released, ultimately resulting in timeout issues.

Limiting Goroutines with Semaphore

The golang.org/x/sync/semaphore package offers a clean and efficient way to limit the number of goroutines running concurrently. This approach is particularly useful when you want to manage resource allocation more systematically.
```go
requester := insrequester.NewRequester().Load()
urls := []string{"http://example.com", "http://example.org", "http://example.net"}

maxConcurrency := int64(2) // Set the maximum number of concurrent requests
sem := semaphore.NewWeighted(maxConcurrency)
ctx := context.Background()

for _, url := range urls {
	// Acquire a semaphore weight before starting a goroutine
	if err := sem.Acquire(ctx, 1); err != nil {
		fmt.Printf("Failed to acquire semaphore: %v\n", err)
		continue
	}

	go func(url string) {
		defer sem.Release(1) // Release the semaphore weight on completion
		res, _ := requester.Get(insrequester.RequestEntity{Endpoint: url})
		fmt.Printf("%s: %d\n", url, res.StatusCode)
	}(url)
}

// Wait for all goroutines to release their semaphore weights
if err := sem.Acquire(ctx, maxConcurrency); err != nil {
	fmt.Printf("Failed to acquire semaphore while waiting: %v\n", err)
}
```

This approach, using the semaphore package, offers a more structured and readable way of handling concurrency compared to manually managing channels. It’s particularly beneficial when dealing with complex synchronization requirements or when you need more granular control over the concurrency levels.

So, What is the Best Way?

After exploring various approaches to handling concurrent HTTP requests in Go, the question arises: What is the best way to do it? The answer, as is often the case in software engineering, depends on the specific requirements and constraints of your application. Let’s consider the key factors to determine the most suitable approach:

Assessing Your Needs

- Scale of Requests: If you’re dealing with a high volume of requests, a worker pool or semaphore-based approach provides better control over resource usage.
- Error Handling: If robust error handling is crucial, using channels or the semaphore package can offer more structured error management.
- Rate Limiting: For applications that need to respect rate limits, limiting goroutines with channels or the semaphore package can be effective.
- Complexity and Maintainability: Consider the complexity of each approach. While channels offer more control, they also add complexity. The semaphore package, on the other hand, provides a more straightforward solution.

Error Handling

Error handling in goroutines is a tricky topic due to the nature of concurrent execution in Go. Since goroutines run independently, managing and propagating errors can be challenging but is crucial for building robust applications. Below are some strategies to effectively handle errors in concurrent Go programs:

Centralized Error Channel

One common approach is to use a centralized error channel through which all goroutines can send their errors. The main goroutine can then listen to this channel and take appropriate action.

```go
func worker(errChan chan<- error) {
	// Perform task
	if err := doTask(); err != nil {
		errChan <- err // Send any errors to the error channel
	}
}

func main() {
	errChan := make(chan error, 1) // Buffered channel for errors

	go worker(errChan)

	if err := <-errChan; err != nil {
		log.Printf("Error occurred: %v", err) // Handle error
	}
}
```

Or you can listen to errChan in a separate goroutine:

```go
func worker(errChan chan<- error, job Job) {
	// Perform task
	if err := doTask(job); err != nil {
		errChan <- err // Send any errors to the error channel
	}
}

func listenErrors(done chan struct{}, errChan <-chan error) {
	for {
		select {
		case err := <-errChan:
			log.Printf("Error occurred: %v", err) // Handle error
		case <-done:
			return
		}
	}
}

func main() {
	errChan := make(chan error, 1000) // Channel for errors
	done := make(chan struct{})       // Channel to signal the listener to stop

	go listenErrors(done, errChan)

	for _, job := range jobs {
		go worker(errChan, job)
	}

	// wait for all goroutines to complete somehow

	done <- struct{}{} // Signal the listener to stop
}
```

Error Group

The golang.org/x/sync/errgroup package provides a convenient way to group multiple goroutines and handle any errors they produce. An errgroup.Group cancels its shared context as soon as any goroutine returns an error.

```go
import "golang.org/x/sync/errgroup"

func main() {
	g, ctx := errgroup.WithContext(context.Background())
	urls := []string{"http://example.com", "http://example.org"}

	for _, url := range urls {
		url := url // capture the loop variable (required before Go 1.22)
		// Launch a goroutine for each URL
		g.Go(func() error {
			// Replace with actual HTTP request logic
			_, err := fetchURL(ctx, url)
			return err
		})
	}

	// Wait for all requests to complete
	if err := g.Wait(); err != nil {
		log.Printf("Error occurred: %v", err)
	}
}
```

This approach simplifies error handling, especially when dealing with a large number of goroutines.

Wrapping Goroutines

Another strategy is to wrap each goroutine in a function that handles its errors. This encapsulation can include recovery from panics or other error management logic.

```go
func work() error {
	// Do some work; return a non-nil error on failure
	return nil
}

func main() {
	go func() {
		if err := work(); err != nil {
			log.Printf("Error occurred: %v", err) // Handle error
		}
	}()

	// Wait for the work to be done somehow
}
```
In summary, the choice of error-handling strategy in Go’s concurrent programming depends on the specific requirements and context of your application. Whether it’s through centralized error channels, dedicated error-handling goroutines, the use of error groups, or wrapping goroutines in error-managing functions, each method offers its own set of benefits and trade-offs.

Conclusion

In conclusion, this article has explored various approaches to sending HTTP requests concurrently in Golang, a crucial skill for optimizing web applications. We’ve discussed basic goroutines, sync.WaitGroup, channels, worker pools, and methods for limiting goroutines. Each approach has its unique characteristics and can be chosen based on specific application requirements.

Furthermore, the article has highlighted the importance of error handling in concurrent Go programs. Managing errors in a concurrent environment can be challenging but is essential for building robust applications. Strategies such as using centralized error channels, the errgroup package, or wrapping goroutines with error handling logic have been discussed to help developers effectively handle errors.

Ultimately, the choice of the best approach for handling concurrent HTTP requests in Go depends on factors like the scale of requests, error handling requirements, rate limiting, and overall complexity and maintainability of the code. Developers should carefully consider these factors when implementing concurrent features in their applications.

I hope you enjoyed this article. If you have any questions, please feel free to contact me on LinkedIn or comment below. Follow us on the Insider Engineering Blog to read more about our AWS solutions at scale and engineering stories. Here are more stories you may enjoy.
- Designing Cost-Efficient Change Data Capture on AWS Serverless
- How We Migrated Our Data Lake to Apache Iceberg
- How to reduce 40% cost in AWS Lambda without writing a line of code!