The golden path

Revisited

Recently a few of us met to discuss introducing industry-wide peer review into our development workflow. If you want a more thorough grasp of why that might be necessary, you can read my previous article on front-end review.

To summarise for those short on time: we have a fragmented development platform. In the interests of time saving, code re-use and standardisation, it would benefit us all to see which paths are more trodden.

Peer review of code would let us see which solutions are gaining traction, something our current metrics don't necessarily provide. GitHub stats can only get you so far.

Ideally we want to “encourage the behaviour of congregation” around components developed with best practices in mind.

Eventually I’d like to see this available for our whole development platform, but in practical terms it makes more sense to constrain this experiment to a contained platform, and web components are a great place to start.

The team at webcomponents.org had already highlighted this issue as part of their work and are keen to implement review as part of the component ecosystem. As a test bed, an opt-in peer review will be introduced to webcomponents.org as part of their library of components.

It’s important to stress that this isn’t a prerequisite for creating web components, releasing them into the wild, or submitting them to the webcomponents.org library. It is merely an opt-in choice on the site for those interested in being reviewed.

Automated vs. Manual

As a basic step, it is possible to partly automate review by writing software that acts as a linter, run after submission.
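As a rough illustration only, a post-submission check might look something like the sketch below. The submission fields and rules here are invented for the example; webcomponents.org's real data model and checks are still to be defined.

```typescript
// A hypothetical submission record; treat these fields as assumptions,
// not the real webcomponents.org data model.
interface Submission {
  elementName: string;   // e.g. "my-button"
  hasReadme: boolean;
  hasDemo: boolean;
  hasTests: boolean;
}

interface LintResult {
  rule: string;
  passed: boolean;
  message: string;
}

// Run a few objective, automatable checks after submission.
function lintSubmission(s: Submission): LintResult[] {
  return [
    {
      rule: "valid-element-name",
      // Custom element names must contain a hyphen per the spec.
      passed: s.elementName.includes("-"),
      message: "Custom element names must contain a hyphen.",
    },
    {
      rule: "has-readme",
      passed: s.hasReadme,
      message: "A README helps others adopt the component.",
    },
    {
      rule: "has-demo",
      passed: s.hasDemo,
      message: "A live demo makes review far quicker.",
    },
    {
      rule: "has-tests",
      passed: s.hasTests,
      message: "Tests signal the component is maintained.",
    },
  ];
}

// Example: flag any failures for a submitted component.
const results = lintSubmission({
  elementName: "my-button",
  hasReadme: true,
  hasDemo: false,
  hasTests: true,
});
results
  .filter(r => !r.passed)
  .forEach(r => console.log(`${r.rule}: ${r.message}`));
```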

However, we felt the majority of metrics can only be gathered as part of a manual review, either because automation would be prohibitively complex or because the metric itself could be subjective.

A two-tiered approach to manual review was proposed: a high-level star rating, like an Amazon review, for those who are down with the whole brevity thing, and a more granular form of review based around predefined success criteria.

A questionnaire format in the style of JSManners (produced by Andrew Betts) was suggested, allowing participants to rate components by answering questions about their behaviour.
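To make that concrete, a JSManners-style questionnaire could be modelled as a list of yes/no questions whose answers roll up into a score. This is a sketch only; the questions and weights below are invented for illustration and aren't taken from JSManners itself.

```typescript
// A hypothetical yes/no question, loosely in the spirit of JSManners.
interface Question {
  text: string;
  weight: number; // how much a "yes" contributes to the score
}

const questions: Question[] = [
  { text: "Does the component avoid polluting the global scope?", weight: 2 },
  { text: "Does it clean up listeners when detached?", weight: 2 },
  { text: "Is it usable without JavaScript enhancements?", weight: 1 },
  { text: "Is its public API documented?", weight: 1 },
];

// Roll a reviewer's answers up into a percentage score.
function score(answers: boolean[]): number {
  const max = questions.reduce((sum, q) => sum + q.weight, 0);
  const got = questions.reduce(
    (sum, q, i) => sum + (answers[i] ? q.weight : 0), 0);
  return Math.round((got / max) * 100);
}

console.log(score([true, true, false, true])); // 83
```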

The exact mechanic is still to be ironed out, but a detailed review could solve a number of problems:

Scalability vs. engagement

My primary concern with an industry-wide initiative is lack of engagement: whether it’s possible to get a critical mass of developers to adopt something as standard behaviour.

The webcomponents.org team feel they have a greater issue with scaling their review solution, particularly where the review process could cause a bottleneck. They are anticipating a proliferation of submissions.

Creating a review process that kicks off automated checks as a baseline metric, and where contributors can review each other’s work on an ad hoc basis, removes the bottleneck to a certain extent.
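As a sketch of how those two sources might combine (the field names and weighting are invented for illustration, not a settled design), a component’s displayed rating could blend the automated baseline with whatever ad hoc reviews have accrued, so nothing sits blocked waiting for a human reviewer.

```typescript
// Hypothetical review state for one component: an automated lint
// score is always present; human star ratings accrue ad hoc.
interface ReviewState {
  lintScore: number; // 0-100, from the automated checks
  stars: number[];   // 1-5 ratings left by contributors
}

// Blend the two: lean on the lint score alone until human
// reviews exist, then weight them in as they accumulate.
function displayRating(r: ReviewState): number {
  const auto = r.lintScore / 20; // normalise 0-100 onto 0-5
  if (r.stars.length === 0) return auto;
  const human = r.stars.reduce((a, b) => a + b, 0) / r.stars.length;
  // Simple equal weighting; a real system would tune this.
  return (auto + human) / 2;
}

console.log(displayRating({ lintScore: 80, stars: [] }));     // 4
console.log(displayRating({ lintScore: 80, stars: [5, 4] })); // 4.25
```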

Diving deeper

Other facets of the mechanic were discussed.

Community concerns

Several people contacted me to voice their concerns that peer review might be used to enforce conformity, actively policing code or limiting creativity.

I don’t think that’s anyone’s motivation. Ideally peer review would encourage best practice in the industry without preventing proliferation or experimentation.

Another concern was raised about the possibility of one true path or canonical solution. I don't think that exists. I think what we need is a way to easily adopt commonly used solutions and to see where there are patterns we want to repeat. It's more about seeing which paths are well travelled than forcing everyone down the same path.

Review is just another way to sort the wheat from the chaff, not a standards body monitoring output.

Conclusion

The webcomponents.org team will be implementing this as part of their platform. If it’s successful, it would be good to see it widened to Bower and maybe even NPM, as a way to augment their search and give developers an easy way to compare like-for-like solutions and find the best fit for their project.

The web platform needs you

Final thought?

I’d like to leave you with something I found in the exercism.io documentation (with thanks to Katrina Owen):

“A rising tide lifts all the boats.” - Unknown

Many thanks to Sebastien, Addy, Andrew, Ryan, Daniel & Oliver for coming to talk.

You can reach me at @tiny_m if you want to talk some more.