Recently a few of us met to discuss introducing industry-wide peer review into our development workflow. If you want a more thorough grasp of why that might be necessary, you can read my previous article on front-end review.
To summarise, for those short on time: we have a fragmented development platform. In the interests of time-saving, code re-use and standardisation, it would benefit us all to see which paths are more trodden.
Peer review of code would allow us to see which solutions were gaining traction, something our current metrics don't necessarily provide. GitHub stats can only get you so far.
Ideally we want to “encourage the behaviour of congregation” around components developed with best practices in mind.
Eventually I’d like to see this available for our whole development platform, but in practical terms it makes more sense to constrain this experiment to a contained platform, and web components are a great place to start.
The team at webcomponents.org had already highlighted this issue as part of their work and are keen to implement review as part of the component ecosystem. As a test bed, opt-in peer review will be introduced to webcomponents.org as part of their library of components.
It’s important to stress that this isn’t a prerequisite for creating web components, releasing them into the wild, or submitting them to the webcomponents.org library. It is merely an opt-in choice, on the site, for those interested in being reviewed.
As a basic step, it is possible to partly automate review with software that acts as a linter, running after submission.
However, we felt the majority of metrics can only be gathered as part of a manual review, either because automation would be prohibitively complex or because the metric itself could be subjective.
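To illustrate the automatable end of the spectrum, a post-submission check could encode the purely mechanical rules. This is only a sketch (the `lintTagName` helper is hypothetical, not anything webcomponents.org has built); the hyphen rule itself does come from the Custom Elements spec:

```typescript
// A sketch of an automated post-submission check. The one real rule
// encoded here is that custom element tag names must be lowercase
// and contain a hyphen (required by the Custom Elements spec).
function lintTagName(tagName: string): string[] {
  const problems: string[] = [];
  if (tagName !== tagName.toLowerCase()) {
    problems.push(`"${tagName}" should be lowercase`);
  }
  if (!tagName.includes("-")) {
    problems.push(`"${tagName}" must contain a hyphen`);
  }
  return problems;
}
```

Checks like this can gate a submission cheaply; everything subjective still falls to a human reviewer.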
A two-tiered approach to manual review was proposed: a high-level star rating, like an Amazon review, for those who are down with the whole brevity thing, and a more granular form of review based around predefined success criteria.
A questionnaire format in the style of JSManners (produced by Andrew Betts) was suggested, allowing participants to rate components by answering questions about their behaviour.
The exact mechanic is still to be ironed out but a detailed review could solve a number of problems:
It could allow people to cherry-pick which questions to answer, ensuring people only judge code in areas where they feel themselves to be expert.
If you’re submitting code, you could also invite industry experts to review it within their specialism, improving it in areas where you might personally be weak.
Allowing partial completion would obviously be faster than forcing everyone to answer the entire questionnaire.
Aggregating scores and giving an average allows for crowd sourcing a rating.
Allowing users to see how many people had rated a component on a given criterion would also give developers confidence in how accurate that rating might be.
The number of reviews is also a strong metric for how many people are using the component and wish to support its usage.
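As a sketch of how those last few ideas might fit together (the criterion names and the `aggregate` helper are my own invention, not a proposed API), partial reviews could be folded into per-criterion averages with a visible reviewer count:

```typescript
// Hypothetical shape of a partially completed questionnaire:
// reviewers answer only the criteria they feel qualified to judge.
type Review = { [criterion: string]: number };

// Aggregate scores per criterion, tracking how many reviewers
// answered each one so readers can judge a rating's reliability.
function aggregate(reviews: Review[]): { [criterion: string]: { average: number; count: number } } {
  const result: { [criterion: string]: { average: number; count: number } } = {};
  for (const review of reviews) {
    for (const [criterion, score] of Object.entries(review)) {
      const entry = result[criterion] ?? { average: 0, count: 0 };
      // Running mean: fold the new score into the existing average.
      entry.average = (entry.average * entry.count + score) / (entry.count + 1);
      entry.count += 1;
      result[criterion] = entry;
    }
  }
  return result;
}
```

The point of keeping the count alongside the average is exactly the confidence signal described above: a 4.5 from two reviewers reads very differently from a 4.5 from two hundred.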
Review criteria can act as a checklist for people creating components: if you know what you will be reviewed against, you know what to aim for, and it raises the profile of some of the less sexy pillars of development.
It’s also a great way to introduce a new development paradigm, a launchpad to propel beginners in the right direction.
Reviewing other people’s code is the best way to learn. You can check out exercism.io if you want to start doing that right now.
My primary concern with an industry-wide initiative is lack of engagement, and whether it’s possible to get a critical mass of developers to adopt something as a standard behaviour.
The webcomponents.org team feel they have a greater issue with scaling their review solution, particularly where the review process could cause a bottleneck; they are anticipating a proliferation of submissions.
Creating a review process where automated checks provide a baseline metric, and where contributors can review the work of others on an ad hoc basis, removes the bottleneck to a certain extent.
Other facets of the mechanic were discussed.
An appeal use case, where you could either challenge or debate the result of a review, which in turn might prompt an independent review.
An approved moderator status. Perhaps contributors with high-scoring components could be promoted within the system into “expert” reviewers, or act as moderators in the case of an appeal.
We agreed that reviews and resulting scores had to be tied to release versioning, and that the ratings of previous versions of a component would remain visible, so that releasing a new version doesn’t destroy your score.
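One possible shape for that versioned data (field names here are illustrative, not webcomponents.org's actual schema): ratings live in a per-version list, so history survives each new release.

```typescript
// Hypothetical data model: each review score attaches to a specific
// released version, so older ratings stay visible when a new one ships.
interface VersionedRating {
  version: string;      // the reviewed release, e.g. a semver string
  average: number;      // mean score across reviewers of that version
  reviewCount: number;  // how many reviewers rated that version
}

interface ComponentRecord {
  name: string;
  ratings: VersionedRating[]; // one entry per released version, oldest first
}

// The headline score is simply the newest version's rating; releasing
// a new version appends an entry rather than overwriting history.
function latestRating(record: ComponentRecord): VersionedRating | undefined {
  return record.ratings[record.ratings.length - 1];
}
```

The design choice is append rather than overwrite: a rocky 2.0 launch doesn't erase the track record that 1.0 earned.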
Relevant metadata and tagging, to allow for filtering and sorting, would need to be supplied by the component author.
Several people contacted me to voice their concerns about how peer review might act to constrain people’s behaviour, by actively policing code or limiting creativity.
I don’t think that’s anyone’s motivation. Ideally, peer review would encourage best practice in the industry without preventing proliferation or experimentation.
Another concern was raised about the possibility of one true path or canonical solution. I don't think that exists. I think what we need is a way to easily adopt commonly used solutions and to see where there are patterns we want to repeat. It's more about seeing which paths are well travelled than forcing everyone down the same path.
Review is just another way to sort the wheat from the chaff, not a standards body monitoring output.
The webcomponents.org team will be implementing this as part of their platform, but if it’s successful it would be good to see it widened to Bower and maybe even npm, as a way to augment their search and allow developers an easy way to compare like-for-like solutions and find the best fit for their project.
The web platform needs you
Refactor some of your existing code into a component to get a feel for it.
Take a look at the guiding principles on webcomponents.org to see if you're covering all the bases.
When the review platform is ready bring your knowledge and expertise so you can help with the peer review.
I’d like to leave you with something I found in the exercism.io documentation (with thanks to Katrina Owen):
“A rising tide lifts all the boats.” - Unknown
Many thanks to Sebastien, Addy, Andrew, Ryan, Daniel & Oliver for coming to talk.
You can reach me at @tiny_m if you want to talk some more.