The value of your network


How do you manage your professional network? I have an account at both Viadeo (which I rarely use) and LinkedIn (which I use much more than Facebook). I always wondered how some people managed to get to over 500 connections so quickly. I personally reached that number less than a year ago, and I have been in the workforce for over 13 years.

I soon realized that many people will accept anyone into their LinkedIn network, or even use adding people to their network as a means to establish potential contacts. Yet most of the time they have never exchanged a single email. I realized this when I was searching for a job abroad. I used LinkedIn to find second-degree contacts to whom I could get introduced. I quickly realized that many of the people returned in search results were not real contacts. I couldn’t really be introduced to them because my LinkedIn contact didn’t actually know that person.

I believe that having people in your network that you don’t know reduces the value of your network. Not only can you make your real contacts lose time, as happened to me; you also introduce noise for yourself: people you don’t know changing jobs, etc. If you want to see what an unknown person is up to, you can use the “follow” functionality in LinkedIn. That person is not a contact.

I’ll go even further. How many times have you been endorsed for a skill that you don’t have? Or how many times have you been endorsed for a skill you do have, but by a person who couldn’t really assess whether you have it? If my mother or my high school friend endorses me for Scrum, that says nothing about my knowledge of Scrum. Whereas if Jeff Sutherland himself endorsed me… now that’s something! Actually, if Jeff had endorsed only a dozen people for Scrum, it would be much better for me than if Jeff had endorsed ten thousand. Endorsements also suffer from inflation.

I can’t wait until LinkedIn implements a proper reputation algorithm, similar to the famous PageRank. There are so many interesting things that could be done with LinkedIn’s data: for example, scoring someone’s reputation on a specific subject; estimating the value of someone’s network; establishing whether someone is a hyper-connector; etc.
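As a toy illustration of what such a reputation algorithm might look like (purely my own sketch, not anything LinkedIn actually does), here is a PageRank-style power iteration over a tiny, made-up endorsement graph, where an endorsement from a well-regarded person counts for more:

```python
# Toy PageRank-style reputation over an endorsement graph.
# All names and weights are invented for illustration only.
endorsements = {          # who endorses whom
    "alice": ["bob", "carol"],
    "bob": ["carol"],
    "carol": ["alice"],
    "dave": ["carol"],    # dave endorses but receives no endorsements
}
people = sorted(endorsements)
damping = 0.85
rank = {p: 1.0 / len(people) for p in people}

for _ in range(50):  # power iteration until the scores settle
    new = {p: (1 - damping) / len(people) for p in people}
    for p, endorsed in endorsements.items():
        # a person's reputation is split among everyone they endorse
        share = damping * rank[p] / len(endorsed)
        for e in endorsed:
            new[e] += share
    rank = new

# carol is endorsed by three people, including well-ranked ones,
# so she ends up with the highest reputation score
print(max(rank, key=rank.get))  # → carol
```

An endorsement from carol (high reputation) is worth more to alice than dave’s endorsement is worth to carol, which is exactly the “a dozen endorsements from Jeff beats ten thousand” intuition.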

My question is: have you been too sloppy with your network? How valuable are you as a contact? How valuable are your contacts? Networking is not about collecting electronic business cards on LinkedIn or the like. It’s about establishing meaningful relationships with colleagues. That’s what brings value. To get a sense of how real networking is done, I recommend reading “Never Eat Alone” by Keith Ferrazzi.

Letter of intent


Acme Inc.
Av. Frederick W. Taylor 42
Gantt Building
1000 Gotham City
Neverland

Dear IT Manager,

Before you hire your next Agile coach to either kickstart or breathe some life into your Agile change initiative, take a step back and think about it. You might be surprised to hear this from me, but maybe that budget could be better spent elsewhere.

I’m not saying this out of some suicidal desire to kill the very market where I earn my living. Quite the opposite. I say this because I desperately want to improve it, and make sure it’s both a market that motivates and challenges me, as well as one whose existence is based on actually improving organizations.

So before you hire your (next) Agile coach, think about the journey you are embarking on. Agile is not a quick fix for your delivery problems. These problems are a symptom of a much larger dysfunction in your organization. Any (real) Agile coach you hire will only be as effective as the breadth of the change initiative. If this initiative is coming solely from the IT department, and it has no support from your other delivery partners such as product management, sales, customer service, operations, or the project management office, then chances are the initiative will yield poor or limited results (when compared to its real potential).

So if you are going to hire an Agile coach, you should be ready to support them when they inevitably start to reach out to these other departments. This support should be strong yet honest, since there will likely be some resistance to the change, especially on the political side of things. Cross-departmental collaboration means ignoring the siloed hierarchy that got so many people their fancy job titles in the first place.

Also, the very fact that you are considering introducing Agile in your organization is most likely because you have experienced the pains caused by an organization driven by predictive planning approaches. Embarking on an Agile change initiative means going in the opposite direction of predictive planning in almost every sense. Here is where the resistance from the organization will really show its teeth, especially when Agile starts shining a bright light on all the waste clogging the delivery process.

Any Agile change initiative will eventually try to change the culture of the organization. It must. Unless it succeeds in doing this, it will ultimately fail. And changing organizational culture is by far the toughest thing to do in the business world. So if you want to hire an Agile coach, you must be open to change and eager to drive it internally. You should also be ready for some tough discussions.

You’ll have to embrace failure (as long as it happens quickly) since it’s the best opportunity to learn and a necessary by-product of exploration. Because ultimately, it is about delivering value by allowing your knowledge workers the freedom to focus on collaboratively identifying, prioritizing and solving your organization’s toughest challenges.

Now, if what I described above sounds too ambitious, too frightening or just plain too difficult, then I think you should re-consider your Agile plans. You’re not going to find the quick fixes you’re looking for. Quick fixes are a specialty of the predictive planning guys, so you’re better off spending your money on them.

Why am I telling you this?

Because if we’re honest from the start about what an Agile change initiative entails, then I won’t need to hear about yet another Agile coach stuck trying to help a company that desperately wants to put an Agile face on its waterfall heart. Trying to jam the square peg into the round hole. These cases are later recounted as “Agile failures”, which is a disservice to the coaching market and an insult to the word “failure”. Failure would be a valuable learning opportunity. But in order to fail, you first need to actually try to achieve something.

If, on the other hand, you think all this sounds like a liberating experience of discovery and challenging work, if you can see the real and wonderful benefits that result from it, then you’re ready to drive this important change ahead. And in this case, indeed yes, please find yourself an experienced Agile coach to support you in… actually, forget about that. Just contact me directly instead. You sound exactly like the kind of person I would love to work with.

Let’s get to work?

Story points vs Man days – The misguided debate



Relative estimation and story points are among the topics I find people most often struggling to grasp, whether in trainings or at client sites. The main issue seems to be the belief that, eventually, Story Points (SPs) need to be translated into Man Days (MDs) if you want to be able to do things like capacity planning, estimation and portfolio management. Because of this, some people have a hard time understanding the real reasons for using relative estimation. Even worse, their focus on the abstract concept of MDs prevents them from seeing the bigger picture and what really matters: value.

When I discuss this topic with clients, I always try to highlight that there are two distinct issues at play here:

  • how relative estimation can be plugged into any existing MDs driven process
  • how the focus on MDs clouds managers from seeing the real issue

Using relative estimations in an MD-driven organization

The MD currency is a constraint that Agile coaches typically cannot avoid when working with clients. Often, the organization’s entire planning & budgeting structure is based around MDs, and changing that structure is not in the scope of the Agile change initiative (for now). Rather, managers want to know how they can use relative estimations within this MD structure.

After going through this exercise with most clients I’ve worked with, I’ve found the easiest way to explain how to do this is by writing 3 simple equations on the whiteboard, like this:

estimation × factor = effort
MD estimates × overhead factor = MDs
Story Points × velocity factor = MDs

I explain that the first line (estimation x factor = effort) is the formula that everybody uses to obtain MDs, regardless of methodology or discipline.

The second line is how that formula works in the Waterfall world, where a team estimates the requirements in MDs and the manager applies some conversion factor to account for the non-productive time and overhead of the team, to finally obtain an MD figure they can use for budgeting and planning.

The third line is how you can obtain your MDs when using relative estimation. In that equation, the velocity factor = MDs per Sprint / Velocity.
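As a quick numeric sketch of that third equation (all the numbers here are invented for illustration, not taken from any specific team):

```python
# Hypothetical team: every number below is made up for illustration.
team_size = 5
sprint_days = 10                          # working days in a two-week Sprint
md_per_sprint = team_size * sprint_days   # 50 MDs available per Sprint
velocity = 30                             # SPs this team delivers per Sprint

# velocity factor = MDs per Sprint / velocity
velocity_factor = md_per_sprint / velocity   # ≈ 1.67 MDs per SP

# estimation x factor = effort, applied to a 120 SP backlog
backlog_sp = 120
effort_md = backlog_sp * velocity_factor
print(round(effort_md))  # → 200 MDs, i.e. 4 Sprints for this team
```

Note how the whole conversion hinges on the velocity factor staying reasonably stable, which is exactly the assumption challenged below.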

Fine, no issues up to here; it’s very straightforward and managers get the equation. But this is where the questions start. In a recent case, a team manager was challenging this model, and his criticism focused on two basic points:

  1. The “velocity factor” (MDs per SP) is essentially an exchange rate for MDs. So if the team does not have a consistent velocity (which was the case for his Scrum teams), this exchange rate fluctuates too much, meaning your estimates will likely be incorrect.
  2. Story Points are too abstract, and since the two formulas are essentially the same thing, why not just estimate in MDs?

Both criticisms he raised helped me understand the root cause of the disconnect.

I explained that on issue #1 (fluctuating velocity), he was absolutely correct. If your team’s velocity fluctuates a lot, your MD estimation using the conversion formula will indeed likely be incorrect. Never forget that Scrum does not solve your problems; it just makes them painfully visible. And that is exactly what was happening here. You still have to do the hard work of fixing them.

He should be talking to his team about the reasons their velocity fluctuates so much and listening attentively to their feedback. Very likely these issues have already been raised in their retrospectives (which was the case here). The manager’s focus should be on removing these impediments and helping the team deliver more consistently, instead of trying to somehow magically improve their ability to estimate.

And this was the perfect lead-in to address the second issue – “story points are too abstract, why not just use MDs?” I went back to the whiteboard and drew two red circles:


Yes, the formulas are very similar (they are both equally simplistic), but the difference lies in where the focus is placed. In the waterfall version of the formula, there is no consideration for productivity. Its focus is on getting the estimate correct, since the overhead factor is easy to calculate and doesn’t fluctuate much.

On the other hand, the Agile version of the formula flips that focus away from the estimate. Relative estimation is not difficult and can be learned in 1 hour. Besides trying out different relative estimation techniques (team estimation, planning poker, …), there isn’t much to improve there. Rather, it is the velocity factor that we focus on. That is a measure of our productivity, and if that factor is changing wildly or trending in the wrong direction, Agilists want to find out why. We’re trying to unearth the actual problems that are keeping us from a sustainable, productive pace.

(Note: velocity is an imperfect measure of productivity. Using it as a barometer for the Team’s delivery capacity and as a data set for retrospectives is very helpful, but trying to use it as a performance goal for a Team is missing the point completely. As with any imperfect measurement, velocity can be gamed, so don’t lose time trying to use it as a performance metric for the Team.)

If your team’s velocity is too unpredictable, then no formula in the world is going to change that fact. You need to get your hands dirty and find out why that’s happening. Oftentimes it will be related to the fights managers are trying to avoid for political reasons (dependencies on other teams, inconsistent test data at the corporate level, bad product management, bad technical practices, …). If you want to improve your delivery capabilities, start tackling those impediments.
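To make “fluctuating a lot” concrete, one hedged way to quantify it is the coefficient of variation of recent velocities. The two velocity histories and the 25% threshold below are my own illustrative choices, not a standard:

```python
from statistics import mean, pstdev

# Invented velocity histories for two hypothetical teams.
stable_team = [28, 30, 29, 31, 30]
erratic_team = [12, 35, 18, 40, 15]

def velocity_cv(velocities):
    """Coefficient of variation: standard deviation relative to the mean."""
    return pstdev(velocities) / mean(velocities)

for name, history in [("stable", stable_team), ("erratic", erratic_team)]:
    cv = velocity_cv(history)
    # 0.25 is an arbitrary illustrative threshold, not a rule
    flag = "investigate in retrospective" if cv > 0.25 else "ok"
    print(f"{name}: cv={cv:.2f} -> {flag}")
```

A number like this is only a conversation starter for the retrospective; it says nothing about *why* the velocity swings, which is the part that requires the hard work.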

Focusing on MDs is missing the point

The reason MDs are so prevalent in the minds of managers is that they are the accepted currency of IT organizations. In fact, they are so commonplace that managers often forget they are not the end goal, but rather an abstraction layer used to represent the hard-to-measure-yet-always-mentioned Business Value.

Side note: while in the corporate world business value will usually mean profit, value does not have to equal money. It varies depending on the purpose of your organization (customer happiness, lives saved, quality, …).

Instead of trying to think about how to measure Business Value, managers feel comfortable in the MD abstraction layer. This leads to misguided success metrics for projects, such as “deviation from initial estimate”. These only perpetuate the focus on getting that damn MD estimation correct and cement the erroneous belief that eventually, everything must be translated into MDs.

I say this debate is misguided because I’ve seen many managers lose sight of what they should be focusing on. I’ve heard many IT managers tell me that their number one priority was making sure they delivered projects on budget (MDs). Not improving the throughput of their teams, not reducing technical debt or time-to-market. No, their priority was nailing the estimates.

Essentially, they’re saying “I don’t care if we’re delivering cr*p, I just want us to deliver cr*p in a predictable manner”.

This is a problem. Until managers are willing to move beyond this focus on being predictable over being productive, they essentially become an impediment to the improvement of their teams. It’s only natural that their teams will sense the focus on predictability over value creation, and they will make it their priority too.

The mindset shift that must happen is the realization that the focus should be on delivering value. And to achieve this, even SPs are not sufficient; they only measure the amount of work a Team is able to deliver. Organizations looking to improve their ability to deliver value must first figure out how to measure it.

In fact, one can easily imagine that it is precisely the inability to measure value that makes managers, instead, focus on the cost side of the equation.  It’s hard to say how valuable a story (or even a project) is, but calculating how much a project deviated from its original estimates involves little more than 6th grade math.

The #NoEstimates movement has been making a lot of noise about this recently. Vasco Duarte wrote a good, short overview of it, for those interested.

I don’t disagree (my favourite double negative) with anything they say, except that I don’t like the name (everybody estimates, even in the scenario they propose), and I don’t think they are describing a breakthrough, but rather an advanced state for organizations applying Lean Thinking.

I prefer to think of estimation as Waste. Better estimates are not what will make an organization successful or help you deliver your new, super-cool product to the market. But even though estimation doesn’t add any real value, it’s inherent to product management and software development. Minimizing this waste is a gradual and never-ending journey.

The noodle challenge


I recently visited Criteo’s offshore team in Vietnam. The purpose of the visit was to train the team in Agile techniques. Since I’m a fan of using games to teach, I had an assortment of games prepared for them. One of them is the well-known Marshmallow Challenge, although I usually call it the Spaghetti Challenge so they can’t find it on the internet and ruin the exercise.

Since I was traveling from Paris and I like to travel light, I asked them to buy most of the material needed for the exercises. One of the things I asked them to buy was the spaghetti. Imagine my surprise when I arrived and saw that they had bought Chinese noodles, which are much thicker and sturdier.

I had to improvise, since the noodles would easily hold the weight of the marshmallow, spoiling the original idea of the exercise: teaching the advantages of continuous testing and evolutionary design. The first thing I did was replace the marshmallow with something heavier and less tasty: a piece of French baguette. Even so, all the teams managed to build a working prototype by the end of the 25 minutes, no matter how bad their design was.
I had to improvise a bit more. Having told them that the exercise was designed to teach evolutionary design, in the second round I asked them, instead of starting from scratch, to keep working on the same tower to make it taller. For this I gave them exactly the same amount of material as in the first round. Some of the towers started to become mountains of patches, but even so they all held the weight of the bread.
The final improvisation was adding a third round in which I asked for the same thing as in the previous one. However, this time I gave them unlimited materials. Most of the groups managed to make their tower a bit taller, although this time it was through patches and huge amounts of sticky tape.

How to obtain the code resulting from merging a pull request in GitHub



If you use GitHub in your development workflow, you probably use pull requests. At some point, you might need to get the code resulting from merging a pull request, or even the code in that pull request before it is merged. If that’s the case, you have to edit your .git/config file, which might look like this:

[core]
repositoryformatversion = 0
filemode = true
bare = false
logallrefupdates = true
ignorecase = true
precomposeunicode = false
[remote "origin"]
url =
fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
remote = origin
merge = refs/heads/master
[branch "test"]
remote = origin
merge = refs/heads/test


and in the remote “origin”, you have to add the following lines:

[remote "origin"]
url =
fetch = +refs/heads/*:refs/remotes/origin/*
fetch = +refs/pull/*/head:refs/remotes/origin/pr/*
fetch = +refs/pull/*/merge:refs/gh-merge/remotes/origin/*


With this, you are letting your repository know that you also want to download the remote pull requests and their merges. Now you should execute:

MacBook-Pro-de-Fernando:test2 fernando$ git fetch origin
remote: Counting objects: 4, done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 2 (delta 1), reused 0 (delta 0)
Unpacking objects: 100% (2/2), done.
* [new ref] refs/pull/1/head -> origin/pr/1
* [new ref] refs/pull/1/merge -> refs/gh-merge/remotes/origin/1


From this moment, you can obtain the code in pull request number 1 with the following simple command:

MacBook-Pro-de-Fernando:test2 fernando$ git fetch origin pull/1/head:pr_1
* [new ref] refs/pull/1/head -> pr_1

Now, with git checkout pr_1 you can browse the code contained in that pull request. However, if you want the actual final code resulting from the merge, which you could use to place your code on a staging server, launch your acceptance tests, and perform your User Acceptance Tests, you should execute:

MacBook-Pro-de-Fernando:test2 fernando$ git fetch origin pull/1/merge:merge_1
* [new ref] refs/pull/1/merge -> merge_1

And you are almost there, now:

MacBook-Pro-de-Fernando-Palomo-Garcia:test2 fernando$ git checkout merge_1
Switched to branch 'merge_1'

Yes! You have the merged code before executing the actual merge.

I hope this is as useful for you as it was for me when preparing a continuous delivery workflow at one of our customers.


Tribal Recruitment


What made you part of your tribe of friends?

You might say “proximity, we grew up in the same neighbourhood”. OK, if that’s the case your tribe is made up of the same friends you have had since childhood, and that’s a fair point. However, I am sure there were many other people in the same area who were not part of your group. So what made you a group was not only living nearby, but something else.

No matter what level your tribe is, you became part of it because you chose to, and the tribe accepted you as part of it. We have all experienced the situation of someone new coming to our group of friends, with no sponsor other than the support of one member. What if the new member is not accepted by the rest of the tribe? This person simply didn’t match the general mood or interests of the group. In real life, this new member moves out smoothly, naturally. It happens; not many people can say they have never seen this situation.

I truly believe in organizations creating an environment where members can feel at home, striving for mastery, autonomy and purpose; organizations where there is a clear group vision and the company purpose is completely shared across all teams. When everybody understands where we want to go, everybody is able to realize and understand what we need at each moment. I don’t think the ability to create teams belongs only to the organizational leaders.

Why not create a recruitment process where the last word is in the hands of the organization’s members? Mainly those members who are going to collaborate with the newcomer. What if, for example, you defined a recruitment process like this for your organization:

1.- Attract talent. Show how good you are in your community. Inspire others with your achievements, your process, your tools, your environment. Not only by creating a nice and beautiful website with the classic pictures, but by having your team members share knowledge and open the doors to your inner workings.

2.- Write great, inspiring and motivating job vacancy posts. Make people feel curious and willing to go there, to dream of a better life.

3.- Create an inspiring technical test, hard enough that the candidate feels it’s not simple to get in. An average valid candidate should not be able to score 100% on your test, as you still want them to think there are things to learn. The test has to be prepared by all the current members, not only one of them. You can have some members assigned to evaluate the tests, but it should not always be the same person, and there should always be at least 2 or 3 evaluators.

4.- Have your whole team interview the new candidates and choose who they want to have next to them. For each vacancy, interview 2-3 candidates. That will give everyone visibility of all the possible candidates, the differences each one brings, positive points and negative points. Make everyone aware of the organization’s values, and remind them to probe each candidate for a match with those values.

The key is choosing: having several options lets the team analyze and feel they are choosing their environment. If a current member raises a red flag about a candidate, just reject that candidate. Your group will never reach high success if they have to collaborate with someone they don’t want. Avoid more than 2-3 interviews per candidate: in my opinion, it becomes too complex for the candidate and loses the sense of agility, of tribe. It becomes politics. We want them here the moment they fall in love with us, not 6 months later.

5.- Offer the new team member whatever salary he/she thinks he/she deserves, always inside your salary ranges, which are public in your organization and in your job vacancy posts. Make them aware of why you have these ranges and what they mean. If you want a “senior developer” salary, you are expected to write the best code in your team, train your colleagues on a regular basis, lead technical decisions… and that will be checked rigorously. Your salary will be reviewed periodically, every year, setting it according to your value in the organization.

I am sure there are many variants of this approach, but I will always suggest respecting the final word of the tribe members, offering them the possibility to decide on and block candidates. Do you think it’s feasible to bring this approach to your organization somehow?

A coach is not part of the team


Teams are cool. Being part of a cool team is even cooler. By working closely with a team it is easy to feel part of it, but that’s not the job of a coach. As a coach, your role is to take the team through the tortuous path to high performance. Chances are you are not going to be able to be their friend through the process.

Furthermore, every member of the team is committed to achieving the team’s goal. This goal is to develop a product increment. They should care about this product enough to want to make it grow healthily. The coach doesn’t need to care about the product the team is building: the coach’s product is the team itself. Their goal is to make the team an amazing one, one that is well coordinated and capable of stepping up to challenges together. To achieve that you will have to ask them to do things that they might not understand, disagree with, and even dislike you for. Tough luck! Hopefully you will be right, they will end up understanding, and they will eventually thank you for it.

It feels good to be appreciated by the team. You appreciate them. But beware of being driven by this need for appreciation: you might make decisions that the team will like you for but that end up hurting them in the long run.

Not everything is a User Story


We use user stories to capture requirements and to be able to plan (among other things). They mark what we do in a sprint of the work on a product and, once estimated, they give us a measure of the team’s velocity in Scrum. They show us how the team is advancing in building the product.

And that is what we want to obtain: our speed of product progress. It’s what we will reflect in our project burndown charts. The problem comes when we mix other metrics with this one.

And so, in some teams, user stories are seen as the mechanism for reflecting the work that gets done. Hence the question arises of how to specify tasks or jobs such as: setting up the continuous integration system (“as a developer I want…”), producing documentation for the marketing department (“as marketing I want…”), etc. When we have stories of this kind, it can mean several things:

  • The purpose of user stories has been lost: to decompose a product into smaller parts that each add value.
  • We have confused the purpose of a tool like user stories with our need to have our work reflected somewhere.

The fact that some of a team’s work is not directly reflected in user stories, and therefore doesn’t add to the real velocity of progress, doesn’t mean it adds no value or isn’t necessary. It’s simply something else. Not everything is a user story.


Asynchronous Programming in Dart: Futures and Streams



I’ve decided to write this very technical article now because I believe that, although it doesn’t follow a logical order within the upcoming series of articles on Dart, it is very important for those getting started with this language.

Since my previous article I’ve been exploring possible technology stacks for implementing a web application in Dart. What I’ve found is that, although the server side is still very green, it’s easy to put together a web application that generates template-based content following the MVC model (in the style of Struts).

For now I’m using MongoDB as the database, with the raw driver (quite simple, since Mongo uses JSON, which maps easily to/from Dart objects), and I’m going to start looking at an ORM (Objectory on top of MongoDB).

The libraries I’m relying on are:

  • Rikulo Stream Server
  • mongo_dart

I’m postponing the client side for now because the language is much more stable on the client side and therefore raises fewer doubts about its viability. In fact, there is already a client framework (which ships with Dart) called web_ui. Of course it’s not mandatory to use it, and there are other client frameworks, but I suppose the fact that it comes with the language will considerably favour its adoption.

From all the development I’ve done, and from my forays into the Google Dart discussion list -which, by the way, I recommend: it’s very interesting to watch the language being shaped by the whole community- I’ve been able to draw several conclusions. One of them is that what may be hardest for someone coming from the Java world (especially if they’ve only programmed on the server) is asynchronous programming.

Asynchronous programming

In Dart, all the I/O APIs are asynchronous because the Dart language is single-threaded. This means each Dart program has only one thread, so if we block it to perform a read or a write on, say, a socket, we stop the whole program. In a web server this would obviously mean we’d stop serving other clients, which would be a disaster. To solve this “problem”, Dart’s APIs are asynchronous (in the style of Node.js). This way, when we read from a socket, the call returns immediately and Dart calls us back when data is available.

The asynchronous model has both followers and detractors. For those used to synchronous, single-threaded programming it is hell. But when you program synchronously with several cooperating threads, it’s no longer so clear which is more complex.

Multi-threaded synchronization is hard to understand and, once you have understood it, always leaves you wondering whether you’ve left a deadlock or a bottleneck behind. In asynchronous programming there is no need to synchronize, but execution forks countless times. Conclusion: synchronous code is easier to write, but harder to get right. Conversely, writing asynchronous code is more tedious, but it’s also harder to mess up (at least where concurrency is concerned).

This sounds nice in theory, but it starts to get complicated when you have to read from a socket, and at the same time from a file and, on top of that, access the database. In that scenario you have three asynchronous sources, and the headache can be considerable, because the program flow can turn into a very complex graph.

Fortunately, so many years of asynchronous programming (for example, in JavaScript) have borne fruit and we have a fairly powerful pattern at our disposal: promises. A promise represents a computation not yet performed that will give us a certain result (or an error) in the future.

As you might expect, Dart has internalized this pattern in the language through two classes: Future and Stream. A Future is the same as a promise, and a Stream is like a recurring Future. For example, a Future<String> is an object representing that, at some point, an asynchronous computation will return a String, while a Stream<String> is an object representing that an asynchronous computation will generate one or more Strings recurrently in the future.
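For readers more at home in Python than Dart, here is a rough analogy (my own mapping, not an official one): a Dart Future<String> behaves like an awaitable that eventually yields one value, while a Stream<String> behaves like an async generator that yields several values over time:

```python
import asyncio

async def fetch_name() -> str:
    # analogous to a Future<String>: one value, delivered later
    await asyncio.sleep(0.01)
    return "dart"

async def fetch_names():
    # analogous to a Stream<String>: several values over time
    for name in ["future", "stream", "isolate"]:
        await asyncio.sleep(0.01)
        yield name

async def main():
    one = await fetch_name()                 # await the single result
    many = [n async for n in fetch_names()]  # consume the whole stream
    return one, many

print(asyncio.run(main()))  # → ('dart', ['future', 'stream', 'isolate'])
```

The `async for` here plays roughly the role of subscribing to a Dart Stream: values are consumed as they are produced, without ever blocking the single thread.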

Hasta aquí todo bien. Se entiende la teoría. Pero luego usarlo en la practica no es tan fácil porque los Futures se pueden encadenar entre si de muchas maneras y no es nada evidente como hacerlo. Este es el punto en el que creo que bastante gente que venga de la programación síncrona en servidor puede tirar la toalla.

So, for your delight and enjoyment, I am going to show you an example of how asynchronous calls can be chained so that, when you take your first steps in Dart, you don't have to swear in Aramaic (Dart is quite enough without having to learn dead languages too).

The example is taken directly from the code I am writing:

var db = new Db("mongodb://");
var entry = new Entry.empty();
parsePostBody(connect.request).then( (params) {
  return ObjectUtil.inject( entry, params );
}).then( (entry) {
  return db.collection("Entries").insert( entry.toMap() );
}).then( (_) {
  sendRedirect(connect, "/");
});
This code opens a connection to the database, creates a business-model object called Entry and then calls an asynchronous method that decodes the parameters sent in an HTTP POST request. This method is asynchronous because it has to read from the socket connecting our server to the browser (client), so it returns a Future<Map<String,String>>. On that Future we call the then() method, which receives a function created on the fly (also known as a lambda or closure) with a single parameter (a map containing the POST parameters). This closure is invoked when the parameters are ready, because they have been read in full from the socket, and it immediately calls ObjectUtil.inject(), which is also asynchronous and returns a Future<Entry>.

Note, however, that instead of calling then() again on the result of ObjectUtil.inject(), what we do is return the Future<Entry> as the return value of the initial closure. This is called future chaining, and it lets us place the next then() right after the closure, giving us code that, although asynchronous, looks very much like (or at least reads like) synchronous code.
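
JavaScript promises behave the same way: when the callback passed to then() returns another promise, the outer then() waits for it instead of nesting. A small sketch (the step functions are made up):

```javascript
// Each step is asynchronous and returns a promise.
const readParams = () => Promise.resolve({ title: "hi" });
const injectInto = (entry, params) => Promise.resolve({ ...entry, ...params });
const saveToDb = (entry) => Promise.resolve("saved:" + entry.title);

// Returning a promise from a then() callback "flattens" it:
// the next then() in the chain receives its resolved value.
function handleRequest() {
  return readParams()
    .then((params) => injectInto({}, params)) // Promise<entry>
    .then((entry) => saveToDb(entry))         // Promise<string>
    .then((status) => status);                // "saved:hi"
}
```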

This way we can chain as many asynchronous actions as we want without going crazy. Imagine what a nightmare this code would be without future chaining. We would end up with something like this:

var db = new Db("mongodb://");
var entry = new Entry.empty();
parsePostBody(connect.request).then( (params) {
  ObjectUtil.inject( entry, params ).then( (entry) {
    db.collection("Entries").insert( entry.toMap() ).then( (_) {
      sendRedirect(connect, "/");
    });
  });
});

I think it is fairly obvious which of the two versions is the uglier, and I hope we can all appreciate the virtues of future chaining.

I should point out that in this code I have deliberately omitted error handling so as not to complicate things further; suffice it to say that there is a method parallel to then() called catchError() that lets us write things like:

parsePostBody(connect.request).then( (params) {
  print( "Habemus params: ${params}" );
}).catchError( (err) {
  print( "Bad news (AKA error): ${err}" );
});

Notice the funny part here: catchError() is being applied not to the Future returned by parsePostBody(), but to the Future returned by then() (you couldn't really expect then() to return anything else 😉 ). The nice thing is that catchError() catches errors both from parsePostBody() and from the code of the closure that processes its result. Confusing, isn't it?
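
This is exactly how .catch() works with JavaScript promises: attached after a then(), it sees failures from both the original operation and the then() callback. A hypothetical sketch:

```javascript
// An asynchronous operation that may fail.
const fetchParams = (shouldFail) =>
  shouldFail ? Promise.reject(new Error("socket closed"))
             : Promise.resolve({ id: 1 });

// .catch() placed after .then() catches errors from BOTH stages:
// the original promise and the callback passed to then().
function process(shouldFail) {
  return fetchParams(shouldFail)
    .then((params) => {
      if (params.id !== 1) throw new Error("bad params"); // callback error
      return "ok";
    })
    .catch((err) => "caught: " + err.message);
}
```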

But really, the good thing about all this is that, even though understanding the whole flow and the plumbing Futures do internally is complicated, programming with them turns out to be quite intuitive and everything works as expected (at least with mundane algorithms I have had no problems).

Let's look at this example from the implementation of the parsePostBody method:

Future<Map<String,String>> parsePostBody( HttpRequest request ) {
  var contentType = request.headers["content-type"][0];
  switch( contentType ) {
    case "application/x-www-form-urlencoded":
      return IOUtil.readAsString( request ).then( (body) {
        return _parseUrlEncodedBody(body);
      });
    default:
      return new Future.immediateError("Unsupported content: ${contentType}");
  }
}

Map<String,String> _parseUrlEncodedBody(String body) {
  . . .
}

If we look closely, parsePostBody() returns the Future returned by the then() of another Future, the one returned by IOUtil.readAsString(). Put that way, nobody could follow it, but… if we put return _parseUrlEncodedBody(body); inside the then() closure, what does parsePostBody() end up delivering? Exactly: the map returned by the synchronous _parseUrlEncodedBody() method (wrapped in the Future). Cool, isn't it?
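
The same trick works with JavaScript promises: returning a plain (synchronous) value from a then() callback makes it the resolved value of the promise the caller gets back. A minimal sketch (readAsString and the parser here are made-up stand-ins):

```javascript
// Stand-in for IOUtil.readAsString(): asynchronously yields the raw body.
const readAsString = () => Promise.resolve("title=hi&tag=dart");

// Plain synchronous parser, like _parseUrlEncodedBody().
function parseUrlEncodedBody(body) {
  const map = {};
  for (const pair of body.split("&")) {
    const [k, v] = pair.split("=");
    map[k] = v;
  }
  return map;
}

// Returning the synchronous result inside then() means callers of
// parsePostBody() receive a promise that resolves to the parsed map.
function parsePostBody() {
  return readAsString().then((body) => parseUrlEncodedBody(body));
}
```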


Personally, this is the first time I have used promises. I knew they existed in JavaScript, but I had stopped at callbacks (mostly because I usually use GWT rather than raw JavaScript, and GWT has no closures). That is why I would like to get feedback from those of you who have used them in JavaScript, and I would ask those of you who have spent your whole life programming synchronously on the server not to give up at the first hurdle 😉 : asynchronous programming is not pleasant, but it is not the end of the world either. It is just one more concurrency model.

I encourage you to leave your comments on the subject below. Thanks.

Upcoming articles

In the next article we will cover the problems someone coming from JavaScript may run into when starting with Dart. I can already anticipate that the main one will be the use of data types, so the article will be about how Dart's optional type system works. We will also take a look at the parts of the Dart programming style guide related to this topic.

Agile Game Development: Planning, what is a User Story?



Recently, Jose Ramón and I have been working with the guys from a studio in Barcelona. One of our learnings there is that in game development we tend to think that a game feature is a User Story. This belief leads us to think that developing user stories takes far longer than a reasonable time for a sprint, because we have to develop the concept, design, art, and also the code. But what about iterative, incremental product development? Since your features take longer than a sprint to deliver, and there is a dependency between art, design and coding, they can be translated into incremental user stories, delivered in different iterations.

Example game: Counter Strike by Valve.

Epic Title: Dogs.

Epic Description: We want to add a new character, a dog. The dogs can walk through the game, and interact with people.

We could split this feature the following way:

Sprint 1: Defining the dog's behaviour, which is actually a requirement for our artists and developers, because it will generate the actual user stories. Some of the generated user stories might be these, ordered by business priority:

  • US1: The dog appears in the game, not moving, just like any other motionless element.
  • US2: The dogs move through the scenario, without any particular destination.
  • US3: The dogs are able to bite people.
  • US4: The dogs have natural behaviour; some of them might be more aggressive, some others might be calmer…
  • US5: The dogs, as animals in nature do, do their business somewhere in the scenario, usually in the corners, at traffic lights…
  • US6: The dogs can sit down.
  • US7: The dogs obey human orders.

Sprint 2

(US1) The artists develop the dog character and, as it is motionless, we can have it deployed to our game.

(prepare the requirements for US2) The game designer starts defining what movements the dogs will have: the dog will only run at a certain speed, it will open and close its mouth while running…

Sprint 3

(US2) Artists, animators and developers collaborate to finish this US.

(preparing requirements for US3) The game designer defines what injuries the bite will cause to the player, and defines the dog's movement and the player's behaviour when being bitten.


Sprint 4

(US3) Artists, animators and developers collaborate to deliver this new US.

(you can imagine how it goes on and on … )



The concept here is to have a continuous flow of “ready” User Stories, so there are always some stories ready to be started, but with a limit to avoid falling into big upfront analysis. As a general rule, dear P.O., let me suggest you come to sprint planning with enough user stories ready for around 1.5–2 sprints, based on the team's average velocity. That will give you enough slack in case the team can commit to more than you expected for this sprint, or in case the sprint runs really smoothly and halfway through the team can start sprint+1 user stories.