I just had a run through and I hit 45 minutes exactly.
So, I'm just going to get started,
so that you can all
have a nice coffee break after this.
So, chances are that you've heard of refinements,
but never used them.
The refinements feature has existed as part of Ruby,
for around five years, first as a patch,
and then, subsequently, as an official part of Ruby,
since Ruby 2.0, and yet, to most of us,
it exists only in the background,
surrounded by a haze of opinions about how they work,
how to use them, and indeed whether or not
using them at all is a good idea.
I’d like to spend
a little time looking at what refinements are,
how to use them, and what they can do.
But don't get me wrong,
this is not a sales pitch for refinements.
I'm not going to try and convince you
that you should be using refinements
and that they’re going to solve all your problems.
The title of this presentation is
Why is Nobody Using Refinements?,
and that's a genuine question.
I don't have all the answers.
My only goal is that by the end of this session,
both you and I will have a better understanding
of what they actually are,
what they can actually do,
and, why they might be useful,
when they might be useful,
and why they’ve lingered
in the background for so long.
So, let's go.
Refinements are a mechanism
to change the behavior of an object
in a limited and controlled way.
And, by change, I mean add new methods
or redefine existing methods.
And, by limited and controlled,
I mean that by adding or changing those methods
it does not have an impact
on other parts of our software
which might interact with the same object.
So let's look at a very simple example.
Refinements are defined inside a module,
using the refine method.
This method accepts a class,
String in this case,
and a block which contains
all the methods to add to that class
when the refinement is used.
You can refine as many classes as you want
within a module,
and you can define
as many methods as you want,
within each block.
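As a rough sketch of what that looks like (the module name Shouting and the shout method are my invention, standing in for the slide code):

```ruby
# A module containing a refinement of String. The shout method
# only exists on strings in scopes that opt in with `using Shouting`.
module Shouting
  refine String do
    def shout
      upcase + "!"
    end
  end
end
```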
To use a refinement,
we call the using method,
with the name of the enclosing module.
And, when we do that,
all instances of that class,
which is String in this case,
within the same scope as our using call,
will have the refined methods available.
Another way of saying this is that,
the refinement has been activated
within that scope,
however, any strings outside the scope
are left unaffected.
Refinements can also change methods
that already exist.
When the refinement is active,
it is used instead of the existing method,
although the original is still available
via the super keyword,
which can be very useful.
And anywhere the refinement isn’t active,
the original method gets called,
exactly as before.
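A sketch of redefining an existing method this way, again with invented names; `super` reaches the original implementation:

```ruby
module LoudUpcase
  refine String do
    # Replaces String#upcase only where this refinement is
    # activated; `super` calls the original implementation.
    def upcase
      super + "!!!"
    end
  end
end

# Before any call to `using`, the original behavior applies:
plain = "hi".upcase  # => "HI"
```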
And that’s really all there is to refinements.
Two new methods, refine and using,
however there are some quirks,
and if we want to properly understand refinements
we need to explore them a little bit.
And the best way of approaching this
is by considering a few more examples.
So, now we know
that we can call the refine method
within a module to create refinements,
and that is actually
all relatively straightforward,
but it turns out that when and where
you call the using method
can have a profound effect
on how the refinement behaves with our code.
We’ve seen that invoking using
inside a class definition works.
We activate the refinement,
and we can call refined methods
on String instances in this case.
But, we can also move the call to using
to somewhere outside the class,
and still use the refined method as before.
In the examples so far,
we’ve been calling the refined method directly,
but we can also use it
within methods defined in the class.
And, again this also works,
even if the call to using is outside of the class.
But, this doesn’t work.
We cannot call our shout method
on the string returned by our method
even though that string object was created
within a class where
the refinement was activated.
And, here’s another broken example.
We’ve activated the refinement inside our class
but when we re-open the class,
and try to use the refinement,
we get NoMethodError.
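Here’s a sketch of that broken reopening, with invented names:

```ruby
module Shouting
  refine String do
    def shout
      upcase + "!"
    end
  end
end

class Greeter
  using Shouting  # activated for this lexical scope
  def greet
    "hello".shout
  end
end

class Greeter  # reopening creates a NEW lexical scope...
  def greet_again
    "hello".shout  # ...so this raises NoMethodError
  end
end
```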
If we nest a class within another
where the refinement is active,
it seems to work,
but it doesn't work in subclasses
unless they're also nested classes.
And, even though nested classes seem to work,
if you try and define them,
using the double colon, or the compact form,
the refinements will have disappeared again.
And even blocks seem to act a little bit strangely.
Our class uses the refinements,
but when we pass a block
to the method in that class,
suddenly it breaks.
It’s as if the refinement has disappeared.
So, what's going on here?
For many of us, especially those
relatively new to Ruby,
this is going to be quite counterintuitive.
After all, we’re used to being able
to reopen classes or share behavior
between super and subclasses,
but it seems like
that only works intermittently with refinements.
It turns out that the key to understanding how
and when refinements are available
relies on another aspect of how Ruby works,
which you may have already heard of,
and possibly even encountered directly.
The key to understanding refinements
is understanding about lexical scope.
And, to understand about lexical scope
we need to learn about some of the things
that happens when Ruby parses our program.
So let's look at that first example again.
As Ruby parses the program,
it is constantly tracking a handful of things
to understand what the meaning of the program is.
And exploring all of these in detail
would take far more time than I actually have
but for the moment,
the one that we’re interested in,
is called the current lexical scope.
So, let’s pretend to be Ruby,
as we walk through the code,
and see what happens.
When Ruby starts parsing the file,
it creates a new structure in memory,
a new lexical scope,
which holds various bits of information
that Ruby uses to track what’s happening at that point.
When we start processing,
we create this initial one,
and we call that the top-level lexical scope.
And when we encounter a class definition
or a module definition
as well as creating the class
and everything that that involves,
Ruby also creates a new lexical scope
nested inside the current one.
And we can call this lexical scope A,
just to give it an easy label.
It doesn’t actually have a name.
But, visually, it makes sense to show them as nested,
but behind the scenes, the relationship is modeled
by each scope linking to its parent.
So, A’s parent is the top-level scope,
and the top-level scope has no parent.
As Ruby processes all the code
within this class definition,
the current scope is now lexical scope A.
When we call using,
Ruby stores a reference to the refinement
within the current lexical scope.
We can also say that within lexical scope A,
the refinement has been activated.
You can now see there are no activated refinements
in the top-level scope,
but our shouting refinement
is activated for lexical scope A.
So, next we can see there’s a call
to the method shout on a string instance.
Jamie Gavern, who’s sat over there,
is going to talk a lot more
about what method dispatch does,
but one of the things that happens at this point
is that Ruby checks to see
if there are any activated refinements
in the current lexical scope
that might affect this method.
In this case, there is an activated refinement
for the shout method on strings,
which is exactly what we're calling.
So, Ruby then looks up the correct method body
within the refinement, rather than the class,
and invokes that instead of any existing method.
And there, we can see that our refinement
is working as we hope.
So, what about when we try to call the method later?
Well, once we leave the class definition
the current lexical scope becomes the top-level scope again
and then we find our second string instance,
with a method being called on it.
And, once again, when Ruby dispatches for the shout method,
it checks the current lexical scope
for the presence of any refinements
and in this case there are none.
So, Ruby behaves as normal
which is to invoke method_missing,
which raises an exception
and that's why we get our NoMethodError.
Now, if we had called using Shouting
outside the class at the top of the file,
or something like that,
we can see our refined method
works both inside and outside the class perfectly.
And this is because we're activating the refinement
for the top-level lexical scope.
And once a refinement is activated,
it’s activated for all nested lexical scopes.
So, a call to using at the top of a file
means that it will work everywhere in that file.
And so, our call to the refined method in the class
works, just as it does at the top of the file.
So this is our first principle
about how refinements work.
When we activate a refinement with a using method,
that refinement is active in the current
and any nested lexical scopes,
however, once we leave that scope,
the refinement is no longer activated
and Ruby behaves just like it did before.
So let's look at another example from earlier.
Here, we define a class and activate the refinement,
and then later, either somewhere in the same file,
or a different file, we reopen the class and try to use it.
Now, we’ve already seen this doesn't work,
but the question is why?
Watching Ruby build its lexical scopes,
again will reveal why this is the case.
So once again we have our top-level lexical scope,
and when we encounter the first class definition,
Ruby gives us a new nested lexical scope
that we'll call A again.
And, it’s within this scope
that we activate the refinements.
Once we reach the end of the class definition,
we return to the top-level lexical scope.
But, when we re-open the class,
Ruby creates a nested lexical scope,
just as it did before,
but it’s distinct from the previous one,
and we’ll call it B, just to make that clear.
While the refinement is activated
in the first lexical scope,
when we reopen the class,
we’re in a new lexical scope,
one where the refinement
is no longer active.
So the second principle is this:
just because the class is the same,
doesn’t mean you’re back in the same lexical scope.
And this also is the reason
why our example for the subclasses
didn’t behave as we might have expected.
So, we don’t have to pretend to be Ruby anymore,
we can just draw these scopes.
It should be clear now,
that the fact that we’re in a subclass,
actually has no bearing
on whether or not the refinement is active.
It’s entirely determined by lexical scope.
Any time Ruby encounters a class
or module definition,
via the class or module keywords,
it creates a new, fresh lexical scope,
even if that class or module
has already been defined somewhere else.
And, this is also the reason why,
even when activated
at the top level of a file,
refinements only stay activated
until the end of that file,
because each file is also processed
using a new top-level lexical scope.
So, we now have another two principles
about how lexical scope and refinements work.
Just as reopened classes have a different scope,
so do subclasses.
In fact, the class hierarchy has nothing to do
with the lexical scope hierarchy.
And we also now know that every file
is processed with a new top-level scope,
so refinements activated in one file
are not activated in any other files,
unless those other files
also explicitly activate our refinement.
Let’s look at one more example.
Here, we’re activating a refinement within a class
and then, defining a method in that class,
which uses the refinement
and then later
we create an instance of the class
and then call that method.
So, we can see that
even though the method gets invoked
from our top-level lexical scope,
which is where that call to greet is,
and where the refinement is not activated,
the refinement still somehow works,
and the behavior is what we hoped for.
So, what’s going on here?
Well, when Ruby processes
a method definition,
it stores with that method,
a reference to the current lexical scope,
at the point the method was defined.
So, when Ruby processes
the greet method definition,
it stores a reference to lexical scope A
along with it.
And, then when we call the greet method,
even in a different file,
Ruby evaluates it using the lexical scope
that it has associated with it.
So, when Ruby tries to evaluate
hello.shout inside our greet method
and tries to dispatch to the shout method,
it’s checking for activated refinements,
in lexical scope A,
even though the method was called
from an entirely different lexical scope.
We already know that our refinement
is active in scope A,
and so Ruby can use
the method body for shout
from the refinement
and it works exactly like we'd hope.
So, our fourth principle is this:
methods are evaluated
using the lexical scope
at their definition,
no matter where those methods
are actually called from.
Okay, one more example.
I promise, just one.
A very similar process
explains why blocks didn’t work.
So, here’s that example again,
a method defined in a class
where the refinement is activated,
yields to a block,
and, when we call that method
with a block that uses the refinement,
we get our error.
So, we can quickly see
which lexical scopes Ruby has created
as it's processed this code,
and as before, we have a nested lexical scope
that we’ll call A,
and the method defined in our class
is associated with it.
And, A has the refinement active.
However, just as methods
are associated with the current lexical scope,
so are blocks, and procs,
and lambdas, and everything like that.
When we define the block,
the current lexical scope,
is the top-level one.
So, when the run method yields to the block,
Ruby evaluates that block,
using the top-level lexical scope,
and so Ruby’s method dispatch algorithm
finds no active refinements
and therefore no shout method.
Our final principle:
Blocks, and procs, and lambdas, and so on,
are evaluated using the lexical scope
at their definition too.
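A sketch of that block behavior (names invented): the block is written at the top level, so it carries the top-level lexical scope with it, even when yielded to from inside the class.

```ruby
module Shouting
  refine String do
    def shout
      upcase + "!"
    end
  end
end

class Runner
  using Shouting  # active inside this class body
  def run
    yield "hello"
  end
end

# The block is defined at the top level, where the refinement is
# not active, so the refined method is missing inside it.
result = Runner.new.run do |greeting|
  begin
    greeting.shout
  rescue NoMethodError
    "no shout in this scope"
  end
end
```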
And, with a bit of experimentation,
we can also demonstrate to ourselves
that even blocks evaluated using tricks like
instance_eval, or class_eval,
or anything like that
retain this link to their original lexical scope
even if the value of self might change
depending on how you’re passing them around.
And this link for methods and blocks
to a specific lexical scope
might seem strange, or even confusing right now,
but we’ll soon see that it’s precisely because of this
that refinements are so safe to use,
but I’ll get to that in a minute.
For now, let’s just recap what we know.
Refinements are controlled
entirely using lexical scope structures
already present in Ruby.
You get a new lexical scope
any time you do any of the following:
enter a different file,
open a class or module definition,
or run code from a string using eval,
although I haven’t shown any examples of that.
But, as I said earlier,
you might find the principle of lexical scopes
surprising if you’ve never really
thought about it before,
but it’s actually a very useful property
for a language to have,
because without it,
lots of the things that we take for granted
about Ruby would be much harder, if not impossible.
Lexical scope is actually part of how Ruby
determines what constant you mean,
and it’s fundamental to using
blocks, and procs as closures, for example.
We also now have our five basic principles
that enable us to explain how and why refinements
behave the way they do.
Once you call using,
refinements are activated for the current
and any nested lexical scopes.
The nested scope hierarchy is entirely distinct
from any class hierarchy in your code.
Superclasses and subclasses
have no impact on refinements at all,
only nested lexical scopes do.
Different files get different top-level scopes,
so, even if we call using at the very top of a file,
and activate a refinement for all code within that file,
the meaning of code in all other files is unchanged.
Methods are evaluated using the current lexical scope
at the point of their definition,
so we can call methods that make use of refinements
internally
from anywhere in the rest of our code base.
And finally, blocks are also evaluated
using the lexical scope at their definition.
And, so it’s impossible for refinements
activated elsewhere in our code,
to change the behavior of blocks,
or indeed other methods,
or any other code written
where that refinement wasn’t activated.
Right, so now you basically know everything
there is to know about refinements.
But, what are they good for?
Let’s try and find out.
Now again, another disclaimer.
These are just some ideas,
and some are more controversial than others,
but hopefully they will help frame
what refinements might be good for,
what they might make more elegant, or more robust.
The first one is probably not going to be a surprise,
but I think it’s worth discussing anyway.
Monkey patching is the act
of modifying a class or object that we don’t own,
that we didn’t write, basically.
And, because Ruby has open classes,
it’s trivial to redefine any method on any class
with new or different behavior.
The danger that monkey patching brings,
is that those changes are global.
They affect every part of the system as it runs.
And, as a result, it can be very hard
to tell which parts of our software will be affected.
If we change the behavior
of an existing method to suit one use,
there’s a good chance that some distant part of the
code base, hidden somewhere in Rails,
or something like that,
is going to call that method
expecting the original behavior.
Or, even worse, its own monkey-patched behavior,
and things are going to get messy.
So, say I’m writing some code in a gem,
and part of that I want to be able to turn
an underscored string into a camelized version.
I might decide, the easiest thing to do
would be to reopen the string class,
and just add this method.
It’s innocent looking and it looks like it works.
That’s very simple
and quite understandable thing to want to do,
but unfortunately, as soon as anyone
tries to use my gem,
even myself in a Rails application,
the test suite is going to go from passing,
not to failing, but to exploding entirely.
You can see the error at the top there.
Something to do with constant names
or something like that,
but looking at the backtrace,
I don’t see anything about camelize,
so it isn’t very obvious
why what I did seems to have broken this.
I really doubt, if I was using code from someone else,
that I would have any idea.
And, it would probably take me a long time
to trace through and figure out what had gone wrong.
And this is exactly the problem that Yehuda Katz
identified with monkey-patching in his blog post
about refinements almost exactly five years ago.
So, monkey-patching has two fundamental issues.
The first is breaking API expectations.
We can see that Rails has some expectation,
for example, about the behavior of the camelize method,
on String, which we obviously broke,
when we added our own monkey-patch.
The second is that monkey-patching
can make it far harder to understand
what might be causing unexpected
or strange behavior in our software.
Refinements in Ruby
address both of these issues.
If we change the behavior of a class
using refinement we know that it cannot
affect parts of the software we don’t control,
because refinements are restricted by lexical scope.
We’ve already seen that refinements activated in one file
are not activated in any other file
even when reopening the same classes.
If I wanted to use a version of camelize in my gem,
I could define and use it via a refinement,
but anywhere that refinement
isn’t specifically activated,
which won’t be anywhere in Rails, for example,
the original behavior remains.
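A hedged sketch of that gem-private camelize (the module names are invented, and this naive implementation ignores the edge cases that ActiveSupport's camelize handles):

```ruby
module MyGem
  module StringRefinements
    refine String do
      # A simple underscored-to-CamelCase conversion, visible
      # only where this refinement is activated.
      def camelize
        split("_").map(&:capitalize).join
      end
    end
  end
end

using MyGem::StringRefinements
camelized = "foo_bar".camelize  # => "FooBar"
```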
It's actually impossible to break existing software
like Rails using refinements.
There’s no way to influence the lexical scope
associated with code without editing that code itself.
And so the only way we can poke
refinement behavior into a gem
is by literally finding the source code of the gem
and typing into it.
This is exactly what I meant by limited and controlled
at the start of this presentation.
Refinements also make it easier to understand
where unexpected behavior
may be coming from,
because they require an explicit call to using
somewhere in the same file
as the code that uses that behavior.
If there's no call to using in a file,
then we can be confident,
assuming no one else has monkey-patched anything,
that there are no refinements active,
and that Ruby should behave
the way that we would expect.
Now, this is not to say that it’s impossible
to create convoluted code,
which is tricky to trace or debug.
In Ruby, that will always be possible.
But, if we use refinements,
there will always be a visual clue
that a refinement is activated,
and so might be involved.
On to my second example.
Sometimes, stuff that we depend on
changes its behavior over time,
as new versions are released.
APIs can change in newer versions of libraries,
and in some cases
even the language itself can change.
For example, in Ruby 2.0,
the behavior of the chars method on strings
changed from returning an Enumerator,
to returning an array of single-character strings.
Imagine we’re migrating an application
from Ruby 1.9 to Ruby 2.0 or later,
and we discover that some part of our application
is relying on this earlier behavior of chars.
If some parts of our software rely on it,
we can use refinements to preserve the original API,
without impacting any other code
that might have already been adapted to the new API.
Now, here’s a simple refinement,
which we could activate only for the code
which depends on that 1.9 behavior,
while the rest of the system remains unaffected,
and even any dependencies that we bring in
now or in the future,
will be able to use the Ruby 2.0 behavior as they might expect.
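That compatibility refinement might look something like this (the module name is my invention):

```ruby
# Restores the Ruby 1.9 behavior of String#chars, returning an
# Enumerator, but only where this refinement is activated.
module LegacyChars
  refine String do
    def chars
      each_char  # with no block, each_char returns an Enumerator
    end
  end
end

using LegacyChars
enum = "abc".chars  # an Enumerator, not ["a", "b", "c"]
```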
My third example,
will hopefully be familiar to most people.
One of the major strengths of Ruby,
is that its flexibility can be used
to help us write very expressive code.
That’s the main reason why
I was drawn to Ruby in the first place.
In particular, it supports the creation of DSLs,
or Domain-Specific Languages,
and these are just collections of objects and methods
that have been designed to express concepts
as closely as possible to the terminology
that non-developers might use.
And they're often designed to read more like
human language than code.
Adding methods to core classes can often
help make DSLs more readable and expressive.
So, refinements are a natural candidate
for doing this in a way that doesn’t leak
those methods into other parts of an application.
RSpec is a great example of a DSL,
in this case, for testing.
Until recently, this would have been a typical
example of RSpec usage.
One hallmark is the emphasis
on writing code that reads fluidly
and we can see that demonstrated
in the line “developer.should be_happy"
which is valid Ruby
but reads more like English than code.
And, to enable this, RSpec used monkey-patching
to add a should method to all objects.
Now, recently RSpec moved away from this DSL,
and while I cannot speak
for the developers that maintain RSpec,
I'm quite confident that part of the reason
was that they wanted to avoid monkey-patching
the Object class.
However, refinements offer a compromise
that balances readability of the original API
with the integrity of our objects.
It’s easy to add a should method
to all objects in your spec files using a refinement.
But, this method doesn't leak out
into the rest of the code base.
Now, the compromise is that
you must write using RSpec at the top,
or somewhere in every file,
which I don't think is a large price to pay,
but you might disagree,
and we’ll get to that shortly.
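Purely as an illustration, and not RSpec's actual implementation, a refinement on Object could provide a should-style method that only exists in files that opt in:

```ruby
module SpecSyntax
  refine Object do
    # A toy expectation method; refining Object makes it
    # available on every object where the refinement is active.
    def should(predicate)
      raise "expectation failed for #{inspect}" unless predicate.call(self)
      true
    end
  end
end

using SpecSyntax

be_positive = ->(n) { n > 0 }
42.should(be_positive)  # passes, without touching Object globally
```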
RSpec isn’t the only DSL that’s commonly used,
and you might not even have thought of it as a DSL.
After all, it is just Ruby.
You could also view the routes file of a Rails application
as a DSL of sorts,
or even the query methods of Active Record.
And, in fact, the Sequel gem
actually does, optionally, provide a mechanism
to let you write queries more fluently
by adding methods to String, and Symbol,
and a few other classes using refinements
so that you don't affect the rest of your code base.
DSLs are everywhere
and refinements can help make them more expressive,
without resorting to monkey-patching
or other brutal techniques.
So, on to my last example.
Refinements might not just be useful for
monkey-patching, or as I mentioned, DSLs.
We might also be able to harness refinements
as a kind of design pattern and use them to ensure
that certain methods are only callable
from specific, potentially restricted,
parts of our code base.
For example, consider a Rails application
with a model that has some sort of dangerous
or expensive method on it.
By using a refinement,
the only places we can call this method
are where we’ve explicitly activated that refinement.
From everywhere else,
all other normal controllers,
views or other classes,
even though they might be handling the same object,
even the same instance of that object,
the dangerous or expensive method
is guaranteed not to be available there.
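A sketch of that pattern, with names invented and a plain Ruby class standing in for the Rails model:

```ruby
class Account
  # Note: no dangerous method is defined on the class itself.
end

module DangerousOperations
  refine Account do
    def wipe_all_data!
      "data wiped"
    end
  end
end

# Only code that explicitly opts in can call the dangerous method:
module AdminTasks
  using DangerousOperations

  def self.purge(account)
    account.wipe_all_data!
  end
end
```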
I think this is a really interesting use for refinements,
as a design pattern, rather than just monkey-patching.
And, while I know there could be some obvious
objections to that suggestion,
and I even have some objections, myself,
I’m certainly curious to explore a bit more,
and decide whether or not it’s worthwhile.
So, those are some examples of things
that we might be able to do with refinements.
I think they’re all potentially very interesting,
and all potentially very useful.
And, so finally,
to the question I’m curious about.
If refinements can do all these things
in such an elegant, safe way,
why aren’t we seeing more use of them?
It’s been five years since they appeared,
and three years since they became an official part of Ruby,
and yet, when I search GitHub,
almost none of the results are actual uses of refinements.
In fact, some of the top hits are actually gems
that try to remove refinements from Ruby.
You can see in the description,
“No one knows what problem they solve,
“or how they work.”
Over the last 25 minutes,
we might have tried to address some of that.
Now, I asked another one of the speakers
from this conference who will remain nameless,
what they thought the answer to my question might be,
and they said, “Because they’re just bad”,
as if it was a fact.
Now, my initial reaction to this kind of answer
is somewhat emotionally charged--
But my actual answer is more like,
“Why do you think that?”
So, I don’t find this answer very satisfying.
Why are they bad?
I asked them, “Why do you think that?”
And they replied, “Because they’re just
“some other form of monkey-patching, right?”
Well, yes sort of,
but also, not really.
And, just because they might be related
in some way to monkey-patching,
does that automatically make them bad?
Or, not worth understanding?
I can’t shake the feeling
that this is the same mode of thinking
that leads us to ideas like,
metaprogramming is too much magic,
or using single or double-quoted strings consistently
is a very important thing,
or that anything you type
into a text editor can be described as awesome,
when that word should be reserved exclusively
for moments in your life
like seeing the Grand Canyon for the first time,
and not when you install the latest gem
or anything like that.
I am deeply suspicious of awesome,
and so I’m also suspicious of bad.
I asked another friend if they had any ideas about
why people weren’t using refinements,
and they said, “Because they’re slow”,
again, as if it was a fact.
And, if that were true,
that would be totally legitimate,
but it’s not.
If you look at this blog post,
which is actually from, I think, a few weeks ago,
someone’s done some nice benchmarking,
and refinements make almost no difference to the amount of time
it takes to dispatch Ruby method calls.
So, why aren’t people using refinements?
Why do people have these ideas like they’re just slow,
or plain bad?
Is there any actual solid basis for those opinions?
As I told you right at the start,
I don’t have any neatly packaged answer,
and maybe nobody does.
When I proposed this talk,
it really was a genuine question.
I didn’t know that much about refinements,
and I wanted to know if there was an answer.
So, here are my best guesses,
based on tangible evidence,
and the understanding that we now have
about how refinements actually work.
Well, refinements have been around for five years.
The refinements you see now
are not the same as those
that were introduced half a decade ago.
Originally, they weren't strictly lexically scoped.
And while this provides some opportunity
for more elegant code,
think not having to write using RSpec
at the top of every file,
it also breaks the guarantee that refinements
cannot affect distant parts of the code base.
It’s also probably true that lexical scope
is not a familiar concept
for many Ruby developers.
I'm not ashamed to say
I've been using Ruby for over 13 years now,
and it's only recently that I’ve really understood
what lexical scope actually meant.
I think you could probably make quite a lot of money
writing Rails applications,
without really caring about lexical scope at all,
and yet, without understanding it,
refinements will always seem like
confusing and uncontrollable magic.
The evolution of refinements hasn’t been smooth.
And, I think that’s maybe why some people feel
like nobody knows how they work
or what problem they solve.
It doesn’t help, for example,
that a lot of the blog posts you find,
if you search for refinements in Ruby now,
are no longer accurate.
And, even the official Ruby documentation
is actually wrong.
This hasn’t been true since Ruby 2.1,
but this is what the documentation says right now.
As a nudge to any Ruby core team members,
issue 11681 might fix that
if you have a look at it.
I think some of this history
can explain a little about why refinements have stayed
in the background.
There were genuine and valid questions
about early implementation,
and design choices,
and I think it’s fair to say that some of those questions,
maybe took a little bit of the steam
out of the new feature as it was, kind of, being
unveiled to the world.
But even with all the outdated blog posts,
I don’t think this entirely explains
why no one seems to be using them.
So, perhaps it’s the current implementation
that people don’t like.
Maybe the idea of having to write using everywhere
goes against the mantra of DRY,
Don’t Repeat Yourself,
that we’ve generally adopted as a community.
After all, who wants to remember to have to write
“using RSpec”, or “using Sequel”,
or “using ActiveSupport”
at the top of literally every file?
Doesn't sound fun.
And this points to another potential reason.
A huge number of Ruby developers
spend most, if not all, of their time using Rails.
And, so Rails has a huge amount of influence,
over which language features
are promoted and adopted by the community.
Rails contains, perhaps,
the biggest collection of monkey-patches ever,
in the form of Active Support,
but because it doesn’t use refinements,
no signal was sent to us as developers,
that we should, or even could be using them.
Now, you might be starting to form the impression
that I don’t like Rails,
but I’m actually very hesitant to single it out.
To be clear, I love Rails.
Rails feeds and clothes me,
and enables me to fly to Texas
and meet all y’all wonderful people.
The developers who contribute to Rails,
are also wonderful human beings
who deserve a lot of thanks.
I also think it's entirely possible,
and perhaps even likely,
that there’s just no way
for Rails to use refinements
for something of the scale of Active Support.
It’s possible, but even more than this,
nothing in the Ruby standard library
even uses refinements.
There’s no call to refine
anywhere in the Ruby standard library.
Many new language features,
like keyword arguments or refinements,
won’t see widespread adoption
until Rails and the Ruby standard library
start to promote them.
Now Rails 5 has adopted keyword arguments,
and so I think we can expect to see them spread
to other libraries as a result,
but without compelling examples of refinements,
from the libraries and framework
that we use everyday,
there's nothing nudging us,
towards really understanding
when they are appropriate or not.
I said there were a number of quirks with refinements,
or unexpected gotchas, and it could be that
that is the reason why no one is using them.
For example, even when a refinement is activated,
and you can call it,
you cannot use methods like send or respond_to?
to call them dynamically,
or to check whether they’re there.
And you also can’t use them in convenient forms
like Symbol#to_proc.
You can also get into some really weird situations,
if you try to include a module into a refinement,
where methods from that module
cannot call other methods
defined in the same module.
But, these don’t necessarily mean
that refinements are broken.
All of these are either by design,
or direct consequences of lexical scoping.
Even so, they’re unintuitive,
and it could be that aspects like these
are a factor in limiting the ability
to use refinements on the scale
of something like Active Support.
But, as easy as it is for me to stand up here,
and make logical and rational arguments
why monkey-patching is bad, and wrong,
and breaks things, it’s impossible to deny,
that even since the start of this presentation,
software written using libraries
that rely heavily on monkey-patching,
has made, literally, millions of dollars.
So, maybe refinements solve a problem,
that nobody actually has.
Maybe for all the potential problems
that monkey-patching might bring,
the solutions that we have for managing those
are good enough; things like test suites.
And, even if you disagree with that,
which I wouldn’t blame you for doing,
perhaps it points to another reason
that’s more compelling.
Maybe refinements aren’t the right solution
for the problem of monkey-patching.
Maybe the right solution is something
like Object-Oriented Design.
I think it’s fair to say,
that over the last two or three years,
the Ruby community has become
much more interested in object-oriented design,
and, you can trace that in the presentations,
that Sandi Metz, for example,
has given, or in her book,
or in the discussion of patterns,
like hexagonal architecture,
or interactors, or presenters,
and all the gems that have recently appeared,
that help us use those patterns.
The benefits that object-oriented design brings,
or tries to bring, to software,
are important and valuable.
Smaller objects with cleaner responsibilities,
they’re easier and faster to test, and change.
All of this helps us do our jobs more effectively,
and anything that does that must be good.
And, from our perspective today,
there’s nothing that you can do with refinements
that you cannot do by introducing a new object,
or new method, that encapsulates the new,
or changed behavior.
For example, rather than adding a shout method
to all strings, we could introduce a new class,
that only knows about shouting,
and wrap any strings that we want shouted
in instances of this new class.
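As a sketch of that alternative, here is one hypothetical shape for such a wrapper, mirroring the shout example just mentioned. The `Shout` class name is invented for illustration.

```ruby
# Instead of refining String, wrap the strings that need shouting
# in a small class whose only responsibility is shouting.
class Shout
  def initialize(string)
    @string = string
  end

  def to_s
    @string.upcase + "!"
  end
end

Shout.new("hello").to_s  # => "HELLO!"
```

No `using` is required anywhere, at the cost of callers having to know about, and wrap with, the new class.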
Now, I don’t want to discuss whether or not
this is actually better than the refinement version,
partly because it’s obviously trivial,
and, so it wouldn’t be a realistic discussion,
but mostly because I think there’s a more interesting point.
While good object-oriented design
brings a lot of tangible benefits
to software development,
the cost of proper OO design is verbosity.
Just as a DSL tries to hide the act of programming
behind language that appears natural,
the introduction of many objects
can, sometimes, make it harder to quickly grasp
what the overall intention of code is.
And, the right balance of explicitness and expressiveness
will be different for different teams
and for different projects.
And, not everyone who interacts with software
is even a developer,
let alone somebody trained in software design.
And so, not everyone can be expected
to easily adopt sophisticated principles with ease.
Software is for its users,
and sometimes the cost of making them deal with
extra objects or methods might not be worth the benefit
in terms of design purity.
It is, like so many things, subjective.
Now, to be clear, just like with Rails,
I’m not arguing in any way
that OO design is not good.
I’m simply wondering whether or not
it being good necessarily means
that other approaches should not be considered
in some situations.
And, so these are the six plausible reasons
that I could come up with for why
nobody seems to be using refinements.
Which is the right answer?
I don’t know.
There’s probably no way to know.
I think all of these are potentially good reasons
why we might have decided collectively
to ignore refinements.
Or, why we might make a case
to remove refinements from Ruby, entirely.
However, I’m not really sure that any of them
are really the answer
that most accurately reflects reality.
I think the answer is probably
more likely to be closer to the one that
we encountered at the very start of our journey.
It’s because other people
have told us that they are bad.
So, let me make a confession.
When I said, “This is not a sales pitch for refinements,"
I really meant it.
I'm fully open to the possibility
that it might never be a good idea to use them.
I think it's unlikely, but it is possible.
And, to be honest, I don't even really care.
I just want to make my software.
But, what I do care about, though,
is that we might start to accept and adopt opinions,
like, that feature is bad, or this sucks,
without ever pausing to question them
or explore the feature for ourselves.
Now, nobody has the time to research everything.
Not only would that be unrealistic,
but one of the benefits of being in a community,
is that we can benefit from each other’s experiences.
We can use our collective experience to learn and improve,
and, that’s definitely a good thing.
But, if we just accept opinions as facts,
without ever even asking why,
I think this is a bit more dangerous.
If nobody ever questioned opinions presented as facts,
then we’d still think that the world was flat.
It’s only by questioning opinions
that we make new discoveries,
and that we learn for ourselves,
and that, together, we make progress,
as a community.
The sucks/awesome binary,
can be easy, comforting, and even fun to use,
but it’s an illusion.
Nothing is ever that clear cut.
There’s a great quote by a British journalist and doctor
called Ben Goldacre that he uses
anytime somebody tries to present something
as being starkly good or bad, when he says,
“I think you’ll find
it’s a little bit more complicated than that.”
And, this is how I feel anytime anyone tells me
that something sucks, or is awesome.
It might suck for you,
but unless you can explain to me why it sucks,
then how can I decide how your experience
might apply to mine?
One person’s suck
can easily be another person’s awesome,
and they’re not mutually exclusive.
It’s up to us to listen and read critically,
and then explore for ourselves what we think.
I think this is particularly true
when it comes to software development.
If we hand most, if not all, responsibility
for that exploration
to the relatively small number of people
who talk at conferences,
or have popular blogs,
or who tweet a lot,
or who maintain these very popular projects or frameworks,
that’s only a very limited perspective,
compared to the enormous size
of the actual Ruby community.
I think we have a responsibility,
not only to ourselves, but also to each other,
to our community, not to use Ruby
only in the ways that are either implicitly,
or explicitly promoted to us,
but to explore the fringes,
to wrestle with new and experimental ideas,
and features, and techniques,
so that as many different perspectives as possible
inform the question of whether or not this is good.
And, if you'll forgive the pun,
there are no constants in programming.
The opinions that Rails enshrines,
even for great benefit, will change.
And, even the principles of OO design,
are only principles.
They’re not laws that we have to follow blindly
for the rest of time.
There will be other ways of doing things.
Change is inevitable.
So, we’re at the end now.
I might not have been able to tell you precisely why
so few people seem to be using refinements,
but I do have one small request.
Please, make a little time to explore Ruby.
Maybe you’ll discover something simple.
Maybe you’ll discover something wonderful.
And if you do, please share it with everybody.
Thanks very much.
Does anybody have any questions?
I’m not really sure what the question was, but--
I have the question,
“What was the history of refinements?”
So, they were inspired
by a concept called class boxing,
from a different language,
which effectively does a similar thing,
and originally proposed as a patch,
actually at RubyConf 2010,
so, literally five years ago,
if RubyConf happens at the same time of year.
The inspiration was to solve the problems of monkey-patching.
I imagine that was because
Rails was really becoming popular at that point,
and people were encountering some issues
like the camelize example,
which actually does come from a real experience
that Yehuda Katz blogged about.
So, that is the inspiration.
The history is quite interesting,
but it will take you a long time if you want to read it.
There are 278 comments on the Ruby issue tracker,
on the issue that introduced them,
over a period of two years, so...
The question was about refinements in the context of refactoring.
This might be a bit confusing,
but I think they’re quite separate, because
you’re not really extracting a method from anywhere.
It’s not like an object is losing a responsibility,
or something like that.
Okay, I think that’s the time.
Thank you very much for your time.