Hi,
I would like to know your opinions regarding a code metrics service for Ruby applications - https://codeclimate.com .
This service is run by Bryan Helmkamp ( https://twitter.com/brynary ), and they recently started offering free use for OSS projects - http://blog.codeclimate.com/blog/2012/07/10/code-climate-is-free-for-open-so...
Their metrics cover complexity (based on Flog), code duplication (based on Flay), and code smells like long methods or duplicated blocks of code.
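To give a feel for what these tools flag, here is a minimal illustration of my own (not actual Flog output; method names and the example domain are made up). Flog penalizes branching, nesting, and assignments, so a method with many conditions in one place scores high, while extracting the rules into small methods brings the score down:

```ruby
# A contrived method with several branches in one place - the kind of
# shape that scores high on Flog and gets flagged as a "long method" smell.
def shipping_cost(order)
  cost = 0
  if order[:weight] > 10
    cost += 20
  else
    cost += 5
  end
  cost += 15 if order[:express]
  cost = 0 if order[:total] > 100 # free shipping over 100
  cost
end

# After extracting the rules, each method stays small and simple,
# and each one would score low individually.
def base_cost(order)
  order[:weight] > 10 ? 20 : 5
end

def shipping_cost_refactored(order)
  return 0 if order[:total] > 100
  base_cost(order) + (order[:express] ? 15 : 0)
end
```

Both versions behave the same; only the second reads well class by class, which is roughly what the letter grades reward.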
You can see it in action on some Ruby projects (Rails included) via the links in the blog post above, or just try, for example, Paperclip - https://codeclimate.com/github/thoughtbot/paperclip
Setting up Code Climate for your OSS Ruby project is very easy (see the blog post above) - just provide the repo name on GitHub and an email address to receive a notification when the metrics are ready (from my testing, a matter of minutes).
Code Climate then checks out the repo every 3 hours, runs the metrics tools, and provides the results via a dashboard with changes over time and a "class explorer" to browse class by class.
They interpret the combined code metrics results as grades (A best, F worst), so there is no need to figure out what the different numbers from different tools mean. I have used some code metrics tools in the past (for my pet projects) and was always very confused about how to interpret the numbers. So I welcome Code Climate as an easy tool to help me improve my coding habits a little.
What I want to ask you is whether you can see a benefit of using it for the Aeolus Project (not only for Conductor), or if you perceive this service as only a little toy.
Thanks for your opinions and suggestions for similar services / tools.
On Thu, Jul 26, 2012 at 05:29:24PM +0200, Petr Blaho wrote:
> Hi,
> I would like to know your opinions regarding a code metrics service for Ruby applications - https://codeclimate.com .
> <snip>
> What I want to ask you is whether you can see a benefit of using it for the Aeolus Project (not only for Conductor), or if you perceive this service as only a little toy.
I think this looks pretty nifty, and I don't see any downside to trying it out, though I'm not crazy about the idea of it sending emails to the list. (Just because there's already a ton of noise.)
I do think it's important that we take automated metrics as vague rules of thumb, versus hard scientific facts. If complexity goes way up, or readability goes way down, we should investigate. But if you extend a basic file to have it do some more complex tasks, and the complexity goes up a little, I don't think we should be alarmed.
I went to a talk Chad Fowler gave a while back, and he talked about a lot of this stuff. One of the more interesting projects to me was his own turbulence gem: https://github.com/chad/turbulence
The notion is to graph churn vs. complexity. I had a hard time getting it to work on Conductor and ended up forgetting about it. But he pointed out that there are instances of complex, unmaintainable code that "just work" and go years without needing anyone to touch them. What's really dangerous is high-complexity code that gets modified all the time.
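The churn-vs-complexity idea can be sketched in a few lines of Ruby. This is my own illustration, not the turbulence gem's actual API; the file names, thresholds, and scores are made up. Given per-file commit counts (churn) and complexity scores, it flags only files that are high on both axes:

```ruby
# Flag files that are both frequently changed and complex - the quadrant
# Fowler singles out as really dangerous. Thresholds are arbitrary here.
def risky_files(churn, complexity, churn_threshold: 10, complexity_threshold: 50)
  churn.keys.select do |file|
    churn[file] >= churn_threshold &&
      complexity.fetch(file, 0) >= complexity_threshold
  end
end

churn      = { "app/models/order.rb" => 42, "lib/legacy/report.rb" => 1 }
complexity = { "app/models/order.rb" => 120, "lib/legacy/report.rb" => 300 }

# report.rb is very complex but almost never touched, so it is not
# flagged; only the complex file that changes all the time is.
risky_files(churn, complexity) # => ["app/models/order.rb"]
```

In a real setup the churn numbers would come from something like `git log --name-only`, and the complexity scores from Flog.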
-- Matt
On Thursday, July 26, 2012 01:10:50 PM Matt Wagner wrote:
> On Thu, Jul 26, 2012 at 05:29:24PM +0200, Petr Blaho wrote:
> > Hi,
> > I would like to know your opinions regarding a code metrics service for Ruby applications - https://codeclimate.com .
> > <snip>
> > What I want to ask you is whether you can see a benefit of using it for the Aeolus Project (not only for Conductor), or if you perceive this service as only a little toy.
> I think this looks pretty nifty, and I don't see any downside to trying it out, though I'm not crazy about the idea of it sending emails to the list. (Just because there's already a ton of noise.)
I just tested this: after a change in the code, no email is sent. (The change already shows up on Code Climate.)
I can subscribe to an Atom feed if I want.
> I do think it's important that we take automated metrics as vague rules of thumb, versus hard scientific facts. If complexity goes way up, or readability goes way down, we should investigate. But if you extend a basic file to have it do some more complex tasks, and the complexity goes up a little, I don't think we should be alarmed.
You are right. I think this tool can only provide hints about what might be a problem.
> I went to a talk Chad Fowler gave a while back, and he talked about a lot of this stuff. One of the more interesting projects to me was his own turbulence gem: https://github.com/chad/turbulence
> The notion is to graph churn vs. complexity. I had a hard time getting it to work on Conductor and ended up forgetting about it. But he pointed out that there are instances of complex, unmaintainable code that "just work" and go years without needing anyone to touch them. What's really dangerous is high-complexity code that gets modified all the time.
I agree that frequently changed code (a class or file) that also has high complexity is probably not well designed or coded. I liked churn, and I will look at turbulence.
> -- Matt
Thanks for your input...
aeolus-devel@lists.fedorahosted.org