Auto-entrepreneur: remove your address, stop commercial canvassing


Are you an auto-entrepreneur? Are you being canvassed by phone or mail, and you can’t take it anymore? Did you know that anyone can find the declared address of your business (often your home) through a search engine, on sites like societe.com?

Having been an auto-entrepreneur for several years, I suffered from these practices and I was not exactly happy that my personal address was so easily accessible through search engines. Then I discovered that the law is on our side. More precisely, Article A123-96 of the French Commercial Code (code de commerce), whose role is to prevent commercial canvassing and the use of your data by unauthorized third parties.

Icing on the cake: no need to contact each third party (that would be horrible)! You just have to contact INSEE, which will update the record for your business.

Preventing unauthorized third parties from exploiting your data

The procedure is as follows: send an e-mail to [email protected] with a document proving your identity attached. Here is a possible message body:

Hello,

I would like my information to be used only by authorized organizations, in accordance with Article A123-96 of the French Commercial Code.

My auto-entreprise is Alice Dupont, with the SIREN 800 424 242.

Best regards,

Then you just have to wait a few days for the various third parties to propagate the information in their systems, and you will be left alone again.

Golang: instant first tick for a ticker


Do you know about tickers? They’re used when you want to do something repeatedly at regular intervals. They shouldn’t be confused with timers, which are used when you want to do something once in the future.
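
To make the distinction concrete, here is a minimal sketch using a timer from the standard library: it fires a single time and then it is done.

package main

import (
    "fmt"
    "time"
)

func main() {
    // A timer fires exactly once, after the given duration.
    timer := time.NewTimer(500 * time.Millisecond)
    <-timer.C
    fmt.Println("Timer fired once, exiting")
}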

Here is how a ticker is used. In this example, the ticker will tick every 500ms and the program will exit after 1600ms, after 3 ticks.

package main

import "time"
import "fmt"

func main() {
    ticker := time.NewTicker(500 * time.Millisecond)
    go func() {
        for t := range ticker.C {
            fmt.Println("Tick at", t)
        }
    }()
    time.Sleep(1600 * time.Millisecond)
    ticker.Stop()
    fmt.Println("Ticker stopped")
}

You can run the code in the Go Playground.

But what if you wanted your first tick to happen instantly, when your program starts? This can come in handy if your ticker ticks less often, say every hour, and you don’t want to wait that long.

In that case, if the logic you need to run at a specific interval is in a function, you can call your function once before starting the ticker (a sketch of this option follows the sample output below), or you can adopt this kind of construction.

package main

import "time"
import "fmt"

func main() {
	ticker := time.NewTicker(1 * time.Second)
	fmt.Println("Started at", time.Now())
	defer ticker.Stop()
	go func() {
	for ; true; <-ticker.C {
			fmt.Println("Tick at", time.Now())
		}
	}()
	time.Sleep(10 * time.Second)
	fmt.Println("Stopped at", time.Now())
}

You can run the code in the Go Playground. Here is a sample output:

Started at 2009-11-10 23:00:00 +0000 UTC m=+0.000000001
Tick at 2009-11-10 23:00:00 +0000 UTC m=+0.000000001
Tick at 2009-11-10 23:00:01 +0000 UTC m=+1.000000001
Tick at 2009-11-10 23:00:02 +0000 UTC m=+2.000000001
Tick at 2009-11-10 23:00:03 +0000 UTC m=+3.000000001
Tick at 2009-11-10 23:00:04 +0000 UTC m=+4.000000001
Tick at 2009-11-10 23:00:05 +0000 UTC m=+5.000000001
Tick at 2009-11-10 23:00:06 +0000 UTC m=+6.000000001
Tick at 2009-11-10 23:00:07 +0000 UTC m=+7.000000001
Tick at 2009-11-10 23:00:08 +0000 UTC m=+8.000000001
Tick at 2009-11-10 23:00:09 +0000 UTC m=+9.000000001
Stopped at 2009-11-10 23:00:10 +0000 UTC m=+10.000000001
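
If you go for the first option instead, here is a minimal sketch, assuming your periodic logic lives in a hypothetical doWork function:

package main

import (
	"fmt"
	"time"
)

// doWork is a stand-in for the logic you need to run at each tick.
func doWork() {
	fmt.Println("Tick at", time.Now())
}

func main() {
	doWork() // run once immediately, before the first tick
	ticker := time.NewTicker(1 * time.Second)
	defer ticker.Stop()
	go func() {
		for range ticker.C {
			doWork()
		}
	}()
	time.Sleep(10 * time.Second)
}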

Tips for testing Airflow DAGs


During my job at Drivy as a Data Engineer, I had the chance to write close to 100 Airflow DAGs. In this quick blog post, I’ll share what I think is worth testing.

Custom operators

If you’re using the same operator several times in different DAGs with a similar construction, I would recommend either:

  • creating a custom Airflow operator thanks to the plugin mechanism
  • creating a Python class that will act as a factory to create the underlying Airflow operator with the common arguments you’re using

Python logic

If you’re running non-trivial logic from a PythonOperator, I would recommend extracting this logic into a Python module named after the DAG ID. This way, you keep your Python logic away from Airflow internals and it’s much easier to test. Your DAG’s PythonOperator then only needs to perform a single function call.

Smoke test

Finally, the last test I would recommend writing is a smoke test that will target all DAGs. This test will make sure that:

  • each DAG can be loaded by the Airflow scheduler without any failure. It’ll show in your CI environment if some DAGs expect a specific state to be loadable (a CSV file to be somewhere, a network connection to be open) or if you need to define environment / Airflow variables, for example
  • a single file defining multiple DAGs loads fast enough
  • Airflow email alerts are properly defined on all DAGs

Here is an example test file. It relies heavily on the code provided by WePay in this blog post.

# -*- coding: utf-8 -*-
import unittest

from airflow.models import DagBag


class TestDags(unittest.TestCase):
    """
    Generic tests that all DAGs in the repository should be able to pass.
    """
    AIRFLOW_ALERT_EMAIL = '[email protected]'
    LOAD_SECOND_THRESHOLD = 2

    def setUp(self):
        self.dagbag = DagBag()

    def test_dagbag_import(self):
        """
        Verify that Airflow will be able to import all DAGs in the repository.
        """
        self.assertFalse(
            len(self.dagbag.import_errors),
            'There should be no DAG failures. Got: {}'.format(
                self.dagbag.import_errors
            )
        )

    def test_dagbag_import_time(self):
        """
        Verify that files describing DAGs load fast enough
        """
        stats = self.dagbag.dagbag_stats
        slow_files = list(filter(lambda d: d.duration > self.LOAD_SECOND_THRESHOLD, stats))
        res = ', '.join(map(lambda d: d.file[1:], slow_files))

        self.assertEqual(
            0,
            len(slow_files),
            'The following files take more than {threshold}s to load: {res}'.format(
                threshold=self.LOAD_SECOND_THRESHOLD,
                res=res
            )
        )

    def test_dagbag_emails(self):
        """
        Verify that every DAG registers alerts to the appropriate email address
        """
        for dag_id, dag in self.dagbag.dags.items():
            email_list = dag.default_args.get('email', [])
            msg = 'Alerts are not sent for DAG {id}'.format(id=dag_id)
            self.assertIn(self.AIRFLOW_ALERT_EMAIL, email_list, msg)

The DAG logic

I would say that it’s not worth testing the end-to-end logic of a DAG because:

  • it’ll often be very hard to do, as you’ll likely need various components (databases, external systems, files), and it can make your test suite slow
  • you should embrace the power of Airflow to define DAGs with Python code and treat DAGs as just wiring together pieces you’ve tested individually. DAGs are not the main piece of the logic.

That said, the logic of the DAG should be tested in your dev / staging environment before running it in production if you want to avoid bad surprises.

Tests in production

Your DAGs are running happily in production without throwing error emails. Fine? Not so sure. You can sleep peacefully if you have:

  • set DAG timeouts and SLA targets to be alerted if your DAGs run too slowly
  • general monitoring and alerting on the Airflow servers (webserver, scheduler and workers) to make sure that they are fine
  • data quality checkers that will make sure that the data you have in production respects some predicates

Data quality checkers


As a Data Engineer at Drivy, one of my main challenges has been to import data from various data sources into our data warehouse. Working with various data sources is often very hard, because they are inherently different in terms of connection method, freshness, trust, maturity and stability.

I’ve been writing on our company blog about the need for data quality checkers: tools that check and enforce a high level of quality and consistency for data. If you are interested in data quality, data warehousing, testing and alerting, this should be an interesting read.

You can read the full blog post on Drivy’s engineering blog: data quality checkers.

Experimenting with distributed queries to servers in Golang


One day, I came across the Go Concurrency Patterns talk given by Rob Pike (one of the creators of Golang) and I found it fascinating. After this talk, I wanted to explore the Google Search example given at the end of the talk a bit further.

The goal is to mimic the behaviour a search engine could adopt to handle a search query. We have 3 services (web, images and videos – no ads, ahah!) and we want to perform a search on each service according to the query, responding as fast as possible.

Architecture

We have multiple instances of each service. We send the search query in parallel to the available instances of the web, images and videos servers. For each service, we take the first search result returned by one of its replicas, to meet our goal of responding as fast as possible.

Hyperparameters

We assume that each server answers a query in a time that follows a normal distribution (the mean is explicitly given and is referred to as the latency; the standard deviation is inferred from the latency). A search also has a timeout, which represents the number of milliseconds we are willing to wait for search results before exiting (search results from some services may not have arrived by then). This is referred to as the timeout parameter.

Finally, we can control how many instances of each service we have available. This is referred to as the replicas parameter.
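
To make the pattern concrete before looking at sample runs, here is a minimal, self-contained sketch in the spirit of the code from the talk. The Result and Search types, the fakeSearch helper and the hard-coded values are illustrative stand-ins: the real program drives them with the flags and the normal distribution described above.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

type Result string
type Search func(query string) Result

// fakeSearch builds a replica that answers after a random latency.
func fakeSearch(server string) Search {
	return func(query string) Result {
		time.Sleep(time.Duration(rand.Intn(30)) * time.Millisecond)
		return Result(fmt.Sprintf("res for: %s from: %s", query, server))
	}
}

// First sends the query to all the replicas of a service and
// returns the first result that comes back.
func First(query string, replicas ...Search) Result {
	c := make(chan Result, len(replicas))
	for _, replica := range replicas {
		go func(search Search) { c <- search(query) }(replica)
	}
	return <-c
}

func main() {
	c := make(chan Result, 3)
	go func() { c <- First("test query", fakeSearch("web1"), fakeSearch("web2")) }()
	go func() { c <- First("test query", fakeSearch("image1"), fakeSearch("image2")) }()
	go func() { c <- First("test query", fakeSearch("video1"), fakeSearch("video2")) }()

	// Collect at most 3 results, giving up when the timeout fires.
	timeout := time.After(20 * time.Millisecond)
	for i := 0; i < 3; i++ {
		select {
		case result := <-c:
			fmt.Println(result)
		case <-timeout:
			fmt.Println("timed out")
			return
		}
	}
}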

Execution samples

To see how varying the different parameters influences the number of results and when they are returned, here are some sample executions and their results:

# High latency but large number of replicas
./fake-google -timeout 20 -replicas 200 -latency 28
[
  {Search result: `res for: test query` from: `image95` in 18.695281ms}
  {Search result: `res for: test query` from: `web129` in 17.11128ms}
  {Search result: `res for: test query` from: `video13` in 19.058285ms}
]

# High latency but normal number of replicas
./fake-google -timeout 20 -replicas 100 -latency 28
[
  {Search result: `res for: test query` from: `web90` in 19.499019ms}
]

# High latency, very low number of replicas
./fake-google -timeout 20 -replicas 10 -latency 25
[]

# Latency is the same as the timeout and we've got enough replicas
./fake-google -timeout 20 -replicas 100 -latency 20
[
  {Search result: `res for: test query` from: `web90` in 12.735776ms}
  {Search result: `res for: test query` from: `image63` in 12.727817ms}
  {Search result: `res for: test query` from: `video26` in 13.02499ms}
]

Nothing unexpected in these results: this can all be verified by computing probabilities over multiple independent normal distributions.

Possible improvements

The existing code is super simple and is definitely not ready for a real-life scenario. We could, for instance, improve the following points:

  • I assume that all replicas are always available. The notion of an available replica is hard to define: we don’t want to send requests to replicas that are unhealthy, down or already overwhelmed
  • I assume that the number of replicas is the same for each service
  • I assume that the response time of every replica follows a normal distribution and is query-independent

And countless other things I didn’t think of in a 2-minute window.

Code

Putting aside all the improvements I just listed, I find the existing code interesting because it shows how to use advanced concurrency patterns in Go. The code is available on GitHub, and the main logic resides in the file core/core.go.

Go client for Updown


What is Updown?

Over the weekend, I’ve been working on a Go client for updown.io. Updown lets you monitor websites and online services for an affordable price. Checks can be performed over HTTP, HTTPS, ICMP or a custom TCP connection, down to every 30s, from 4 locations around the globe. They also offer status pages, like the one I use for Teen Quotes. I find the design of the application and the status pages really slick. For all these reasons, I use Updown for personal and freelance projects.

A Go REST client

I think it’s the first time I have written a REST API client in Go, and I’m pretty happy with the result. My inspiration for the package came from Godo, the Go library for DigitalOcean. It helped me get started and structure my files, structs and functions.

The source code is available on GitHub, under the MIT license. Here is a quick glance at what you can do with it.

package main

import (
    "fmt"

    "github.com/antoineaugusti/updown"
)

func main() {
    // Your API key can be retrieved at https://updown.io/settings/edit
    client := updown.NewClient("your-api-key", nil)

    // List all checks
    checks, HTTPResponse, err := client.Check.List()
    fmt.Println(checks, HTTPResponse, err)

    // Finding a token by an alias
    token, err := client.Check.TokenForAlias("Google")
    fmt.Println(token, err)

    // Downtimes for a check
    page := 1 // 100 results per page
    downs, HTTPResponse, err := client.Downtime.List(token, page)
    fmt.Println(downs, HTTPResponse, err)
}

Enjoying working with Go again

I particularly enjoyed working with Go again, after a few months without touching it. I really like the integration with Sublime Text, the fast compilation, the static typing, golint (a linter for Go code that even takes variable names and comments into account) and the go fmt command (automatic code formatting). I knew, and experienced once again, that developing in Go is fast and enjoyable. You rapidly end up with code that is nice to read, tested and documented.

Feedback

As always, feedback, pull requests or kudos are welcome! I did not achieve 100% coverage as I was quite lazy and opted for integration tests, meaning the tests actually hit the real Updown API when they run.

Multiple deploy keys on the same machine – GitHub: key already in use


GitHub does not let you use the same SSH key as a deploy key for several projects. Knowing this, you’ve got 2 choices: edit the configuration of your first project and say that this SSH key is no longer a deploy key, or find another solution.

Deleting the deploy key of the existing project

To find out which project is associated with your deploy key, you can run the command ssh -T -ai ~/.ssh/id_rsa git@github.com (adjust the path to your SSH key if necessary). GitHub will then greet you with something like:

Hi AntoineAugusti/foo-project! You've successfully authenticated, but GitHub does not provide shell access.

From this point, solving your problem is just a matter of going to the settings of this repository and removing the deploy key.

The alternative: generating other SSH keys

We are going to generate an SSH key for each repository; you’ll see, it’s not too much trouble.

  • First, generate a new SSH key with a descriptive name with the command ssh-keygen -t rsa -f ~/.ssh/id_vendor_foo-project -C https://github.com/vendor/foo-project (replace vendor and foo-project).
  • Edit your ~/.ssh/config file to map a fake subdomain to the appropriate SSH key. You will need to add the following content:
    Host vendor_foo-project.github.com
        Hostname github.com
        IdentityFile ~/.ssh/id_vendor_foo-project
    

    This maps a fake GitHub subdomain to the real domain and says that, when connecting to the fake subdomain, we should automatically use the previously created SSH key.

  • Add the newly created SSH public key as a deploy key to the repository of your choice
  • Clone your Git repository with the fake subdomain: instead of using the URL given by GitHub (git clone git@github.com:vendor/foo-project.git), you will use git clone git@vendor_foo-project.github.com:vendor/foo-project.git
  • From now on, running git pull will connect to GitHub with the appropriate SSH key and GitHub will not complain 🙂

If you’ve already cloned the Git repository before, you can always change the remote URL of the Git server by editing the .git/config file of your project, or by running git remote set-url origin git@vendor_foo-project.github.com:vendor/foo-project.git.

Happy deploys!

My experience as a mentor for students


Mentor what?

For the last 3 months, I have been a mentor for a few students on OpenClassrooms. OpenClassrooms is a French MOOC platform visited by 2.5M people each month, currently offering more than 1000 courses. For now, they focus on technology courses: web development, mobile development, networking and databases, for example. A course can be composed of textual explanations, videos, quizzes, practical sessions…

Courses are free, but you can pay a monthly fee to become a “Premium Plus” student; thanks to this, you get a weekly 45-minute to 1-hour session with someone experienced (a student, a professional, a teacher…) to help you achieve your goals: getting certifications, finding an internship or starting your career in web development, for instance. As a mentor, your primary goal is not to teach a course. Instead, you’re there to support students: you can help them understand a difficult part of a course, give them additional exercises, share valuable resources with them, look at their code and do a basic code review.

Mathieu Nebra (co-founder of OpenClassrooms) in a mentoring session

About “my students”

As an engineering student in a well-recognised school in France, I’m used to being surrounded by lucky people: they are intelligent, they have good grades and one day they will get an engineering degree. This means that they will have a job nearly no matter what, and a well-paid one. At OpenClassrooms, this is very different: a fair amount of students have had difficulties (left school early, were not interested in their first years at university, did some small jobs here and there to pay the rent…) and now they are working hard to improve their life. Web development is a fantastic opportunity: you can learn it from home, you only need a computer (and a cheap one is perfectly okay) and you can find a lot of learning resources for free on the Internet. The job market is not too crowded, and there is a good chance that you can find a job in a local web agency if you know HTML5, CSS3, a PHP framework and some basic jQuery. No need to work long hours, to wake up during the night, or to fight to find a part-time job to pay your rent; you can make a living by typing text in a text editor.

It has been a very valuable experience for me to listen to people who went through bad times and troubles in their life and are now dedicated to getting better and learning; they just need advice to achieve what they want.

I am a mentor, but I learn

I’m helping my students mostly with web technologies. This means that I’m supposed to know a lot about HTML5 (canvas, you know it?), CSS3 (flexbox anyone?), naked PHP (good ol’ PDO API) and JavaScript. Clearly, this is not the case. I don’t even do web development on a monthly basis. At first, I was a bit worried: am I going to be able to remember how I did things a few years ago? How do you build this feature without a framework? Can I still read a mix of HTML / CSS / PHP, all in the same file? I was surprised, but the answer was yes, and it was very interesting to witness how my brain can actually remember things I did years ago, and how fast I can retrieve this information (just by thinking or by doing the right Google query).

I was also surprised by how broad my role is. Sure, students have some difficulties understanding every aspect of object-oriented principles, and I have to go over some concepts multiple times, but who doesn’t? What they really need is not a simple technical advisor. They need to hear from someone experienced that it is perfectly fine not to understand OOP in just 2 weeks, that it is fine to forget method names or to mix up language syntaxes when you write HTML, CSS, JavaScript and PHP for the first time during the same day.

They need to hear from someone that they are doing great, and to remember what they have learned during the last month or so. I found that it helps them a lot to keep a simple schedule somewhere: “for next week, I want to have done these sections from this course, and I need to start looking at this also”. When they look back, they are happy to see that they have indeed successfully finished quizzes and activities for multiple courses recently. It is a tremendous achievement for students to know that they have learned something, that they are actually getting somewhere and that their knowledge is growing.

What next?

So far, it has been an incredible experience; I think I have learned a lot, and I do hope that students have learned valuable things thanks to me. I feel good because I see that I can help people, give back to the community and share my passion with people who are interested and deeply motivated.

Sounds like something you want to do? Visit this page.

Testing an os.Exit scenario in Golang


Today, I ran into an issue. I wanted to test that a function logged a fatal error when something bad happened. The problem with a fatal log message is that it calls os.Exit(1) after logging the message. As a result, if you try to test this by calling your function with the required arguments to make it fail, your test suite is just going to exit.

Suppose you want to test something like this:

package foo

import (
  "log"
)

func Crashes(i int) {
  if i == 42 {
    log.Fatal("It crashes because you gave the answer")
  }
}

As explained before, this is not so easy. It turns out that the solution is to start a subprocess to test that the function crashes: the subprocess will exit, but not the main test suite. This is explained in a talk about testing techniques given in 2014 by Andrew Gerrand. If you want to check that the fatal message is something specific, you can inspect the standard error using the os/exec package. Finally, the code to test the crashing part of the previous function would be the following:

package foo

import (
  "io/ioutil"
  "os"
  "os/exec"
  "strings"
  "testing"
)

func TestCrashes(t *testing.T) {
  // Only run the failing part when a specific env variable is set
  if os.Getenv("BE_CRASHER") == "1" {
    Crashes(42)
    return
  }

  // Start the actual test in a different subprocess
  cmd := exec.Command(os.Args[0], "-test.run=TestCrashes")
  cmd.Env = append(os.Environ(), "BE_CRASHER=1")
  stderr, _ := cmd.StderrPipe()
  if err := cmd.Start(); err != nil {
    t.Fatal(err)
  }

  // Check that the log fatal message is what we expected
  gotBytes, _ := ioutil.ReadAll(stderr)
  got := string(gotBytes)
  expected := "It crashes because you gave the answer"
  if !strings.HasSuffix(got[:len(got)-1], expected) {
    t.Fatalf("Unexpected log message. Got %s but it should end with %s", got[:len(got)-1], expected)
  }

  // Check that the program exited
  err := cmd.Wait()
  if e, ok := err.(*exec.ExitError); !ok || e.Success() {
    t.Fatalf("Process ran with err %v, want exit status 1", err)
  }
}

Not so readable, definitely feels like a hack, but it does the job.

Limit the number of goroutines running at the same time


Recently, I was working on a package that performed network requests inside goroutines and I encountered an issue: the program finished really fast, but the results were awful. This was because the number of goroutines running at the same time was too high. As a result, the network was congested, too many sockets were open on my laptop and the final performance was degraded: requests were slow or failing.

In order to keep the network healthy while maintaining some concurrency, I wanted to limit the number of goroutines making requests at the same time. Here is a sample main file to illustrate how you can control the maximum number of goroutines that are allowed to run concurrently.

package main

import (
	"flag"
	"fmt"
	"time"
)

// Fake a long and difficult work.
func DoWork() {
	time.Sleep(500 * time.Millisecond)
}

func main() {
	maxNbConcurrentGoroutines := flag.Int("maxNbConcurrentGoroutines", 5, "the number of goroutines that are allowed to run concurrently")
	nbJobs := flag.Int("nbJobs", 100, "the number of jobs that we need to do")
	flag.Parse()

	// Dummy channel to coordinate the number of concurrent goroutines.
	// This channel should be buffered otherwise we will be immediately blocked
	// when trying to fill it.
	concurrentGoroutines := make(chan struct{}, *maxNbConcurrentGoroutines)
	// Fill the dummy channel with maxNbConcurrentGoroutines empty structs.
	for i := 0; i < *maxNbConcurrentGoroutines; i++ {
		concurrentGoroutines <- struct{}{}
	}

	// The done channel indicates when a single goroutine has
	// finished its job.
	done := make(chan bool)
	// The waitForAllJobs channel allows the main program
	// to wait until we have indeed done all the jobs.
	waitForAllJobs := make(chan bool)

	// Collect all the jobs; as each one finishes, we can
	// release another spot for a goroutine.
	go func() {
		for i := 0; i < *nbJobs; i++ {
			<-done
			// Say that another goroutine can now start.
			concurrentGoroutines <- struct{}{}
		}
		// We have collected all the jobs, the program
		// can now terminate
		waitForAllJobs <- true
	}()

	// Try to start nbJobs jobs
	for i := 1; i <= *nbJobs; i++ {
		fmt.Printf("ID: %v: waiting to launch!\n", i)
		// Try to receive from the concurrentGoroutines channel. When we have something,
		// it means we can start a new goroutine because another one finished.
		// Otherwise, it will block the execution until an execution
		// spot is available.
		<-concurrentGoroutines
		fmt.Printf("ID: %v: it's my turn!\n", i)
		go func(id int) {
			DoWork()
			fmt.Printf("ID: %v: all done!\n", id)
			done <- true
		}(i)
	}

	// Wait for all jobs to finish
	<-waitForAllJobs
}

This file is available as a gist on GitHub if you find it more convenient.
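
As an aside, the same throttling can be expressed a bit more compactly with a buffered channel used as a semaphore and a sync.WaitGroup. This is only a sketch of that variant, with the flags replaced by hard-coded values; it is not the code from the gist.

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	maxNbConcurrentGoroutines := 5
	nbJobs := 100

	// Buffered channel used as a semaphore: sending blocks once
	// maxNbConcurrentGoroutines goroutines hold a slot.
	sem := make(chan struct{}, maxNbConcurrentGoroutines)
	var wg sync.WaitGroup

	for i := 1; i <= nbJobs; i++ {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot before launching the goroutine
		go func(id int) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot when done
			time.Sleep(500 * time.Millisecond) // fake some long and difficult work
			fmt.Printf("ID: %v: all done!\n", id)
		}(i)
	}

	// Wait for every job to finish.
	wg.Wait()
}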

Sample runs

For the command time go run concurrent.go -nbJobs 25 -maxNbConcurrentGoroutines 10:

ID: 1: waiting to launch!
ID: 1: it's my turn!
ID: 2: waiting to launch!
ID: 2: it's my turn!
ID: 3: waiting to launch!
ID: 3: it's my turn!
ID: 4: waiting to launch!
ID: 4: it's my turn!
ID: 5: waiting to launch!
ID: 5: it's my turn!
ID: 6: waiting to launch!
ID: 6: it's my turn!
ID: 7: waiting to launch!
ID: 7: it's my turn!
ID: 8: waiting to launch!
ID: 8: it's my turn!
ID: 9: waiting to launch!
ID: 9: it's my turn!
ID: 10: waiting to launch!
ID: 10: it's my turn!
ID: 11: waiting to launch!
ID: 1: all done!
ID: 9: all done!
ID: 11: it's my turn!
ID: 12: waiting to launch!
ID: 12: it's my turn!
ID: 7: all done!
ID: 13: waiting to launch!
ID: 5: all done!
ID: 13: it's my turn!
ID: 14: waiting to launch!
ID: 4: all done!
ID: 14: it's my turn!
ID: 8: all done!
ID: 15: waiting to launch!
ID: 15: it's my turn!
ID: 16: waiting to launch!
ID: 16: it's my turn!
ID: 10: all done!
ID: 17: waiting to launch!
ID: 2: all done!
ID: 17: it's my turn!
ID: 18: waiting to launch!
ID: 18: it's my turn!
ID: 3: all done!
ID: 19: waiting to launch!
ID: 6: all done!
ID: 19: it's my turn!
ID: 20: waiting to launch!
ID: 20: it's my turn!
ID: 21: waiting to launch!
ID: 20: all done!
ID: 16: all done!
ID: 17: all done!
ID: 12: all done!
ID: 21: it's my turn!
ID: 19: all done!
ID: 11: all done!
ID: 14: all done!
ID: 18: all done!
ID: 15: all done!
ID: 13: all done!
ID: 22: waiting to launch!
ID: 22: it's my turn!
ID: 23: waiting to launch!
ID: 23: it's my turn!
ID: 24: waiting to launch!
ID: 24: it's my turn!
ID: 25: waiting to launch!
ID: 25: it's my turn!
ID: 24: all done!
ID: 21: all done!
ID: 22: all done!
ID: 25: all done!
ID: 23: all done!
0,28s user 0,05s system 18% cpu 1,762 total

For the command time go run concurrent.go -nbJobs 10 -maxNbConcurrentGoroutines 1:

ID: 1: waiting to launch!
ID: 1: it's my turn!
ID: 2: waiting to launch!
ID: 1: all done!
ID: 2: it's my turn!
ID: 3: waiting to launch!
ID: 2: all done!
ID: 3: it's my turn!
ID: 4: waiting to launch!
ID: 3: all done!
ID: 4: it's my turn!
ID: 5: waiting to launch!
ID: 4: all done!
ID: 5: it's my turn!
ID: 6: waiting to launch!
ID: 5: all done!
ID: 6: it's my turn!
ID: 7: waiting to launch!
ID: 6: all done!
ID: 7: it's my turn!
ID: 8: waiting to launch!
ID: 7: all done!
ID: 8: it's my turn!
ID: 9: waiting to launch!
ID: 8: all done!
ID: 9: it's my turn!
ID: 10: waiting to launch!
ID: 9: all done!
ID: 10: it's my turn!
ID: 10: all done!
0,32s user 0,03s system 6% cpu 5,274 total

Questions? Feedback? Hit me on Twitter @AntoineAugusti