Wednesday, October 05, 2016

Changing the Unchangeable: The Hows and Whys of Immutable Data Structures by Brad Urani

How's everyone's morning?
Good? Cool. (audience cheering)
So my name is Brad Urani.
This is a talk about immutable data structures.
Immutable data structures are an idea
from functional programming
and I gave this talk because I got interested in this
because I took a course on Coursera called
The Principles of Functional Programming in Scala.
It's taught by Martin Odersky, who's the inventor of Scala.
A very, very, very great course,
highly, highly recommend you take it.
It got me thinking about how we can do some types
of functional programming in Ruby.
And what do I mean by functional programming?
Functional programming is the techniques that are used
to isolate state change, basically,
and it's what you get when you combine
a number of techniques and also use some discipline.
It's what you do and also what you don't do.
Ruby has a lot of the stuff that you need
for functional programming built-in.
Immutable data structures have that yellow squiggle there
because we get that with a third-party gem.
Higher order functions would be like passing lambdas around.
Partial function application, that's like currying.
We don't do that a lot in Ruby.
We do have lazy sequences as of Ruby 2.0.
Recursion, you can always recurse in Ruby.
We don't have lazy evaluation and pattern matching.
So, if you combine all this stuff,
you kind of get functional programming, but the truth is
most of functional programming is a poor fit for Ruby.
Ruby's an object-oriented language
and I have tried
to do pure functional programming in Ruby and you can do it,
but I don't really recommend it, at least not in production.
But these immutable data structures
I thought were really, really interesting
because they are a piece that we can take
that I think do have some really,
really interesting use cases.
And also just a really, really neat topic
with some really cool ah ha moments in them.
So we do have this immutability built into Ruby.
We have this dot freeze method.
So, we take this array, we call dot freeze,
and then what happens is if you try to change this array,
if you try to update, you know, add an element
or delete an element,
this will raise an error.
The reason we have this is
to prevent certain types of bugs.
So, if you have this array
and you're passing it to some function
that the success is predicated
on this not changing, you can do this to avoid certain bugs.
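A minimal sketch of what that looks like:

```ruby
a = [1, 2, 3].freeze

begin
  a << 4        # any attempt to mutate a frozen array raises
rescue => e
  error = e     # FrozenError on Ruby 2.5+, RuntimeError before that
end
```

The array itself is untouched afterward; the mutation simply never happens.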
If you jump into Rails source code,
you'll often see string dot freeze, right?
That's a performance enhancement.
It's a memory-saving device.
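In miniature, the pattern looks like this (the method name here is just an illustration, not actual Rails code): without `.freeze`, the method allocates a fresh String on every call; with it, MRI (since 2.1) reuses a single deduplicated frozen instance.

```ruby
def content_type
  "application/json".freeze   # MRI returns the same frozen String each call
end

first  = content_type
second = content_type
```

Both calls hand back the very same object, so there's no per-call allocation.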
Freezing an array does not have
the same performance enhancement as far as I know.
It's for, you know,
not running into certain bugs.
I guess that's useful.
It's not really that interesting
and it's not what this talk is about.
That's because this data structure is immutable,
but it is not persistent.
These are two different things.
Persistent is a data structure with some special qualities
and that's what I'm gonna be explaining today.
We get persistent data structures with a third-party gem,
it's called Hamster.
It's a really, really awesome gem.
Here I've made a list just containing the items
one, two, and three.
Hamster is really, really awesome,
it's this great gem, it's got this whole collection
of persistent, immutable data structures.
Is anyone here a maintainer or has anyone worked on it?
Maybe not, well, if you're watching my video later,
shout out to you, this is a great gem.
Great work.
So we get this list, one, two, and three,
and what actually is the difference here?
Let me show you really fast.
So, make sure this is big enough.
All right, so if I make an array in Ruby,
one, two, three.
So there's l.
l is one, two, and three.
So if I do l2
equals l dot unshift, zero,
and unshift, if you don't remember,
is actually the list operation that pushes something
onto the front of an array,
I get zero, one, two, three.
What is l right now?
What happens if I print out l?
Is it one, two, three?
Is it zero, one, two three?
It's zero, one, two, three, isn't it?
Why is that?
That's because both of these references, l and l2,
have a reference to the same array.
There's only one array in memory
and they both point to the same one,
so editing it in one place,
changes it for all references that have it.
Hamster is a little different, hamster list.
So if I do l equals Hamster,
List, one, two, three.
There we go.
All right, sorry about that.
So I have l, which is one, two, three,
and if I say l2 equals l dot cons,
and cons is the same as unshift,
it pushes something onto the front.
Why is it called cons?
It's tradition from functional programming,
dating back 40 years, but same thing.
I cons something onto the front,
I get zero, one, two, three.
What is l now?
l is still one, two, three.
So that's the difference here.
That's the difference between just a normal array
and a hamster list, is that when I edit Hamster list,
anything with the reference to the original stays the same.
It remains unchanged.
That's why it's immutable.
So a persistent data structure is immutable,
but immutable data structures
are not necessarily persistent.
Persistent data structures have these special qualities.
So how do they implement that?
All right, let's think about this for a second.
Say you wanted to make that Hamster list class yourself,
how are you gonna go about and do that?
Hm, well,
the first thing we can think of is, all right,
when I update the list,
I could just clone it, couldn't I?
I could just make a whole dang copy right in my memory
and then I would achieve what I wanted, right?
No problem, that makes sense.
Interesting example from everyone's favorite language,
PHP, right?
PHP actually does this.
You take this array, which is set to two, three.
You can actually swap references like that,
like just set array two equals array one,
then you can add four onto array two,
and actually array one is still unchanged.
This is called copy on write.
The PHP runtime is kind of smart.
Oh my god, did I just use smart talking about PHP?
Oh, what's wrong with me?
But yeah, it actually clones things under the hood,
keeps track of these references and clones things
under the hood for you.
That's kind of interesting,
but it's not very efficient, is it?
That's not very fast.
You wouldn't expect that to be very performant.
So, how would we implement that immutable array,
that persistent array in a way that's performant,
efficient, and fast?
It turns out, a functional persistent list
is actually a linked list.
Did any of you ever take an algorithms class or something,
where you learn stacks and queues and trees
and linked lists and you're like, wow,
why would I ever use a linked list?
Well, it turns out that they do actually have a use case
and that's in this.
So we have this list.
So if I do this,
I take one, two, and three, and then I use cons
to push something onto the front,
you can see that it's persistent.
I have one, two, three, and I have the new list.
What I've actually done is this.
So I have a single linked list and I have two references
to the same list, to two different nodes in the same list.
That's actually awfully clever, isn't it?
Because I didn't have to clone anything,
the one, two, three part is shared by the two lists,
but to the references, list and new_list,
it looks like two entirely different lists.
It looks immutable, doesn't it?
That's actually really clever because it's fast.
It's performant, it's fast,
it's a neat implementation detail.
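Here's that trick in a minimal pure-Ruby sketch — the class and method names are my own, not Hamster's internals, but the cons-cell idea is the same: "adding" allocates exactly one new cell that points at the existing, shared tail.

```ruby
class PList
  attr_reader :head, :tail

  def initialize(head, tail = nil)
    @head, @tail = head, tail
    freeze                                          # every cell is immutable
  end

  # Build a list from items: PList[1, 2, 3]
  def self.[](*items)
    items.reverse.reduce(nil) { |acc, item| PList.new(item, acc) }
  end

  # "Push onto the front" by allocating ONE new cell; the tail is shared.
  def cons(value)
    PList.new(value, self)
  end

  def to_a
    node, out = self, []
    while node
      out << node.head
      node = node.tail
    end
    out
  end
end

list     = PList[1, 2, 3]
new_list = list.cons(0)
```

new_list sees 0, 1, 2, 3 while list still sees 1, 2, 3 — and new_list's tail is literally the same object as list, so nothing was cloned.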
Okay, so straight from Wikipedia,
"a persistent data structure is a data structure
"that always preserves the previous version of itself
"when it is modified.
"Such data structures are effectively immutable,
"as their operations do not visibly
"update the structure in-place,
"but instead always yield a new updated structure."
Okay, cool.
So a few limitations of this list as you might expect,
first of all, it's singly-linked.
Why can't I make this a doubly-linked list?
Well, think about that for a second.
If I have this and I tack zero onto the front,
if I wanted to make it doubly linked,
I would have to add a reference from one to zero,
wouldn't I?
If I do that, I'm changing it.
Is that still immutable?
No, so I've violated immutability,
so it has to be a singly-linked list.
I can't go backwards either.
'Cause it's singly-linked,
I can only traverse this list in one direction.
Also, I can only add to the front, can't I?
So that's a limitation of this list.
And I can't do random access,
so a Hamster list, you literally cannot say,
all right, give me element three.
Why not?
Why can't I get element three?
Well, what's element three in this list?
Is that relative to the first reference?
Is that relative to the second reference?
That's kind of a relative thing,
what element three in this list is, isn't it?
So it's a relative question that we can't really answer.
It doesn't really make sense.
Also, even if you did implement it,
you'd have to recurse, and that's big O of n,
it wouldn't be very fast.
All right, so another example here
is my rock band linked list.
So I've got this rock band, vh_1980.
Consists of David, Eddie, Alex, and Mike.
Then I do this, I take vh_1980,
this is a reference to Van Halen, the rock band,
I don't know if you all got my joke,
but they swapped out lead singers in 1984.
Okay, so I take vh_1980 dot tail.
Dot tail gives me everything after the head,
which is Eddie, Alex, Mike,
and I cons Sammy onto the front.
So here I swapped David with Sammy, haven't I?
So now I have this in memory
because I've taken David off and added Sammy.
I've got two references,
I'm starting to sort of build up a tree here.
To each of these references it looks like a list
because you can only traverse one direction,
but I'm basically building up a linked tree.
And we can even take this further.
I can take this a, I add foo and bar.
To foo I add brisket and ribs.
To bar I add chicken and sausage.
We can end up entirely with a linked tree
and we can have a reference to each node in the tree.
And just so you know, my inspiration for this
is that I'm from St. Louis,
we have awesome, amazing barbecue.
I moved out to California three months ago,
they don't even know what barbecue means.
It's this like grilled tri-tip, it's not the real thing.
I'm really excited to be here in Texas
because I got to eat barbecue ribs, the real kind.
But anyways, so I've got this tree now,
but the arrows only point up
because I can only traverse one direction,
which means to the references and the nodes,
it looks like a list.
Could we make a persistent graph?
So a graph does not have the limitations of a tree
and a graph, everything can basically link to anything else.
It's like the free-for-all of data structures, isn't it?
Well, let's think about this for a second.
If we started connecting nodes in a graph
and you allowed traversing both ways,
we'd have to doubly link, that violates immutability,
but we probably could do it
if the arrows all went one direction, couldn't we?
Yeah, so if it was a directed graph.
And it would kind of ruin our directed graph
if we made cycles, wouldn't it?
Because at some point we'd have to modify something,
so if we had a directed acyclic graph,
we could make this a persistent data structure
because we'd only add in things on the end.
Does this remind you of a tool you all use,
maybe something you know and love
and use every, single day?
Someone yell it out.
- [Voiceover] Git. - [Brad] Woo, yes!
Git is basically a persistent data structure
on disk, isn't it?
It follows these principles, right?
One-way traversal, singly-linked list,
perfect history, yes.
You all use this everyday,
git is a persistent data structure.
So let's solve something a little bit more interesting
or do something a little bit more interesting, rather.
(audience laughs)
Ha ha ha ha ha.
That part's coming later.
Okay, maybe, no, I'm gonna do it now.
All right.
So, I'm gonna pull up a little game here.
Come on.
Oh yeah.
So this is my
isomorphic, functionally,
functional reactive JavaScript app.
Just kidding, it's a flash game.
(audience laughs) No, not now.
This is called Bloxorz.
I first ran into this problem when I took that
Principles of Functional Programming in Scala class.
I ran into this problem, it's a really, really cool one.
Bloxorz is this video game, right?
I am that block right there
and the deal is I have to kind of move around
and try to get that block to land in that hole right there.
This is the whole gist of the game.
All right.
Very, very cool.
I did not make this game.
What I made is a solver for this game.
So this level has these little bridges.
I did not implement the bridges in my simulation,
but we have this little game here
and it turns out this game can teach us a lot
about persistent data structures.
Let's see if I can do this.
This is always the hardest part of my talk.
All right, maybe this.
Oh, and it, like, freezes. This is Flash!
Oh my god.
All right, let's try this again.
All right, all right, all right.
Here we go.
This is the part where the nerves get me.
Oops, no, I can do this.
All right, all right, cool.
(audience claps) Woo!
That's the hardest part of my talk.
All right.
So memorize this board for a second.
Take a look at this board, kind of memorize the shape.
You got it?
You got it stuck in your brains,
you know what it looks like.
All right.
Here's my ASCII representation of that board.
The S is the starting point.
The G is the goal point.
Let me make that a little bigger for ya.
The g is the goal point, the zeros are the board, right?
So I made a little ASCII representation for this.
And my program is a little solver.
cd ah,
go to my ruby,
ruby level two.
My program, what it does is it spits out the,
oh yeah,
it spits out the quickest solution to this problem,
so it's gonna give me,
it's a cheating game,
it's gonna give me the fastest route
to the goal there.
Just to prove it to you that it works,
let me make that a little wider.
This is the second hardest part of my talk
is making this all line up.
All right, ready? Here we go.
Wait a minute, that's not right.
So right, oops.
Right, up, right, right, right,
up, left, down,
right, up, up,
right, right, right,
down, down, down,
right, oh, it froze again.
This is why we got rid of Flash.
Yeah, can you not see it?
There we go.
All right, wow, it really froze on me big time.
Anyway, right,
oh, there we go.
Yeah, all right!
Right, up.
All right, so I made a little solver.
It finds the quickest answer to this problem, right?
It's a video game solver.
So how on earth did I do that?
What does that have to do with persistent data structures?
First of all,
if we think about this problem,
we can model the moves of this game as a tree.
I'm up at the start, I have four possible moves.
I can either move left, right, up or down.
So that's that second tier in the tree.
That shows me what my possible first moves are.
So if my first move is right,
then I have four other possible moves.
I can move left, right, up, or down, right?
If my first move is up, I can move left, right, up, or down.
That third row, that bottom row in that tree,
represents all my possible second moves.
So I can model my game using a tree.
And just a little digression here, the algorithm,
I do this as a functional version of this,
which is kind of cool.
It's a breadth-first search.
I want to find the most efficient solution,
so I want to evaluate all the possible first moves
before I evaluate all the possible second moves.
So it's a breadth-first recursive tree search.
And the way that I do that is the functional version.
I take my first node and I flat_map it to four directions,
which gives me a list of four items,
four items I can flat_map to left, right, up, and down,
which gives me a list of 16 items that makes my second row.
And just a little reminder, I'm talking about flat_map here.
So imagine one, two, three, and four.
These represent the nodes in a level of my tree
and I map them to four directions,
I get 16 more directions.
So I'm flat mapping between levels.
I take the previous level,
I concatenate it to the one before it
and I end up with an algorithm
that basically flattens this tree.
It's a breadth-first tree flattening algorithm
that allows me to treat this tree
as it's one long, continuous list
so that I can search through the list and find the answer.
Find the one where the block goes in the hole.
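Here's a toy version of that search — the real Bloxorz board logic is much more involved, and all the names here are my own: a breadth-first search that expands whole levels with flat_map, carries each path's move history along, and returns the shortest route to the goal on a one-dimensional track.

```ruby
MOVES = { left: -1, right: 1 }

def solve(start, goal, track = (0..5))
  frontier = [[start, []]]                # pairs of [position, move history]
  visited  = [start]
  until frontier.empty?
    winner = frontier.find { |pos, _| pos == goal }
    return winner[1] if winner
    # Expand the whole level at once: every node maps to its legal moves.
    frontier = frontier.flat_map do |pos, history|
      MOVES.map { |move, delta|
        new_pos = pos + delta
        if track.cover?(new_pos) && !visited.include?(new_pos)
          [new_pos, history + [move]]     # each branch gets its own history
        end
      }.compact
    end
    frontier.each { |pos, _| visited << pos }
  end
  nil                                     # no route to the goal
end
```

Because the search goes level by level, the first winning path it finds is guaranteed to be a shortest one.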
If I lost you there, don't worry.
This is where the persistent data structures come in.
When I get to the end of this, I find the answer, right?
I have to be able to print out the history.
I have to be able to print out the answer
of how I got there.
So I need to keep a history,
otherwise I can't print out the answer at the end.
Additionally, I also have to keep a history
because I have to check for blocks' positions
that I've already visited so I don't get stuck
in an infinite loop.
So how do I keep that history
in a way that's efficient and fast?
Well, this looks kind of like a linked list, right?
What's happening is, say I get
to this green node right here,
the one that's highlighted that says right,
I'm gonna map this to four directions.
There are four possible moves I can take from that.
Each of those four moves needs a different history list,
doesn't it?
I've already got a history list consisting of two elements.
Now I need four new ones.
Well, this kind of looks like what I had before,
just turned on its side, doesn't it?
Basically what I do is I cons a new history cell
onto the front of the list and I get four new history lists,
but because it's a linked list,
I don't have to clone anything.
So here I'm using a linked list to keep the history
in my algorithm so that I can print out the answer
at the end of my Bloxorz game.
Interesting and novel uses of these
persistent data structures.
Interestingly, persistent data structures
are also thread-safe.
So you may imagine that if I want to solve this problem
absolutely as fast as possible,
I may want to spawn off threads,
maybe at the first level where I've got four moves.
My MacBook has four cores,
I'd probably spawn off four threads
if I want to do that real fast.
With a persistent data structure,
you can pass a reference to the same list
to four threads and not worry about it.
If you do that in a Ruby array and all the threads
started mutating the array, you would have utter chaos.
It'd be pandemonium.
Persistent lists are thread-safe.
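A tiny sketch of that, with a plain frozen array standing in for a persistent list: four threads read the same structure with no locks and no fear, because nobody can mutate it.

```ruby
shared = [1, 2, 3].freeze

# Each thread reads the shared, immutable structure concurrently.
sums = 4.times.map do
  Thread.new { shared.sum }
end.map(&:value)
```

Every thread computes the same answer from the same untouched data.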
Interestingly, they're also lazy.
We have laziness in Ruby, so take a look at this algorithm.
Say I wanna find the first 10,000 primes.
Here's some simple Ruby that finds the first 10,000 primes.
Does anyone see a problem with this?
Yeah, I'm finding all the primes
between one and a billion, aren't I?
That's a little efficient, I don't need all those.
I'm not sure that that would ever end,
it might take a very, very long time
just so I can take 10,000 at the end.
So what can I do?
I could use the old procedural version,
where I get an empty array and do a loop
and keep a counter of the number of things in the loop.
That's not very elegant.
Ruby gave us this dot lazy keyword,
which is brilliant.
The only difference here is that lazy right up there,
but look at what I got out:
instead of a bunch of prime numbers,
the answer I got there is this stacked up,
lazy data structure, waiting to be executed.
So you can actually do this by doing a lazy select,
then you do dot take.
You have a lazy calculation waiting to be fired off,
so when you like puts it or call dot to_a on it,
it actually fires off and gives you the answer.
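A sketch of that lazy pipeline, with a hand-rolled primality check so it's self-contained (the talk's version takes 10,000; ten keeps the sketch quick):

```ruby
prime = ->(n) { n > 1 && (2..Math.sqrt(n)).none? { |d| (n % d).zero? } }

# .lazy stacks the select up instead of running it; nothing executes
# until first forces it, so we never actually scan to a billion.
lazy_primes = (1..1_000_000_000).lazy.select(&prime)
first_ten   = lazy_primes.first(10)
```

The select over a billion numbers is never run to completion; `first(10)` pulls exactly as many elements through the pipeline as it needs.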
Hamster data structures also do this.
Hamster structures work the same way.
They are lazy by default.
So in addition to being immutable, persistent,
they're also lazy.
And that's kind of clever because the way
my algorithm works, basically I'm allowed to
stack up this recursive tree search algorithm,
a flat map, a concat,
and it only fires off at the very end.
In fact, what I actually do is I have this lazy sequence
and I actually call dot select,
basically where's a winner?
And then it still hasn't fired off,
then I basically call dot first
and it actually traverses through and finds it.
It's very, very cool.
So Hamster lists are lazy, they're awesome.
And we get a few more really neat things
in the Hamster library.
We have the List class, which I showed you.
There's a hash, which acts like a Ruby hash, but persistent.
We get a set, a sorted set.
A deque is a stack or a queue,
but we actually get this vector
and the vector's very interesting.
I mentioned there are a lot of limitations
in the list class, right?
We can't do random access,
we can only traverse one direction.
Those are some limitations, right?
And sometimes we need that.
Sometimes we need random access.
We need to say what's the third element in this list?
Hamster gives us this.
Here I'm setting,
v1 is zero through eight
and v2, here I'm setting the element five to beef.
So I'm getting back zero, one, two, three, four, beef,
six, seven, eight.
And you could see that v1 is untouched.
So this was persistent.
How did they do this?
Think about this for a sec, and that's actually
a pretty tough thing to implement.
This one I scratched my head about for a while
and I was very, very pleased
when I saw the answer to this because it's quite clever.
This is this fancy data structure called a
persistent bit-partitioned trie.
That's a mouthful, I know that.
But notice a trie, first of all,
a trie is a tree where all the values are in the leaves.
So you can see that all the numbers that I actually
care about are on the leaf nodes, that's a trie.
All the middle nodes are actually empty.
They're just placeholders.
And the 10 at the top,
that is just a cache of the length of the list.
So, what happens when I replace five with beef in this case?
It actually changes things like this.
So the blue here is the new vector that I've created.
The orange represents the old one.
You can see it builds up the tree,
but the blue one actually shares some of the orange nodes
with the original.
That's pretty slick.
So it has to do absolutely the bare minimum.
We have to make a few of those middle tree nodes,
we have to make a new node at the bottom containing
four and beef, but the rest of the stuff,
the six, seven, the one, two, three, four, five,
those are all shared in memory.
It shares the same instance, so that's how I'm able
to change something right in the middle of the vector
and also not have to clone the whole dang thing.
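Here's the path-copying trick in miniature — this is not Hamster's actual trie code, just a toy with two levels and three-wide nodes that shows the idea: setting element five rebuilds only the nodes on the path down to that leaf and shares everything else with the original.

```ruby
# "Set" an index in a tiny two-level tree of frozen 3-element nodes.
def vset(root, index, value)
  leaf_index, slot = index.divmod(3)
  new_leaf = root[leaf_index].dup      # copy only the leaf on the path...
  new_leaf[slot] = value
  new_root = root.dup                  # ...and the root above it
  new_root[leaf_index] = new_leaf.freeze
  new_root.freeze
end

v1 = [[0, 1, 2].freeze, [3, 4, 5].freeze, [6, 7, 8].freeze].freeze
v2 = vset(v1, 5, "beef")
```

v2 reads 0, 1, 2, 3, 4, beef, 6, 7, 8, v1 is untouched, and v2's first and last leaves are the very same objects as v1's — shared in memory, not cloned.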
Appending has a similar kind of thing.
So I have zero, one, two, three, four,
I tack a five on the end.
I only have to rebuild just a tiny little bit
part of the tree, so it's nice and fast.
Popping, so here I have one, two, three, four,
five, six, seven, eight, a, b, and c.
I remove c
and you can see that the blue part, the blue vector,
shares most of the memory with the original one.
Very, very clever algorithm.
Very, very clever implementation of a persistent vector.
I thought that was real neat.
Shout out to Jean Niklas L'Orange.
I took these graphics from his excellent, excellent article,
which, if you want to read it, I highly recommend it.
It actually explains Clojure's implementation of this,
but the Hamster version is the same.
That's pretty, pretty neat.
But what do we do with it?
So what?
Why is this useful?
Turns out there's some really, really cool applications
for this.
So what I did is I made a little demo application.
This is Connect Four, do y'all remember this game?
When I was a kid, actually, the colors were different.
It was a yellow board with red and black pieces, I think.
I don't know at what point Mattel decided to change
the colors of this, but,
so what I did is I represented my Connect Four board
as a matrix.
Let me show you Connect Four real fast.
It's a pretty simple program
with a pretty profound difference
than what you might be used to.
So, let's see.
Ruby, Connect Four.
Connect Four, Ruby.
Ruby, boom.
All right, so I'm gonna make this nice and big.
Also to the right a little bit, yup.
Holy cow, okay.
So, I actually made this.
This is a pretty simple little demonstration.
I built it with emojis
just representing a Connect Four board.
So I'm gonna start adding pieces to this board.
So column one, of course I'm a programmer,
so I use zero-based indexes, right?
But I start adding pieces to this board
and I'm playing my little Connect Four game.
All right, pretty simple, nothing real special there.
Why is this interesting?
I represent my board with a Hamster matrix.
Hamster matrix is a gem I made, brand new gem
just for RubyConf, it's a 2D matrix
where if I change something,
an original reference is untouched.
It's a persistent matrix.
It's pretty simple, actually.
It's just built on top of nested Hamster vectors,
but I made my little Hamster matrix gem.
My board is a matrix.
Oh, and not cloning because we don't want clones
in the matrix, right?
No, that'd be bad.
Right, I know, wah, wah, wah.
That was too easy, sorry.
But this command-line app actually follows
a pretty interesting architecture.
Basically, I have a state manager.
The state manager holds my state, which is my matrix,
and when I press a key, I take that state
and I send it through my game logic.
The game logic does things like check:
is this a legal move?
You know, I drop it in this column,
what row does it actually land on?
So where does that slide in?
Did one of my players win my game?
That's my game logic and I get back a new state.
Because it's persistent, we know for a fact
that the old state, the original state,
is still in memory somewhere, don't we?
That's kind of neat because now I can store
all of the game states that I've ever been through
here in memory, have a reference to them all.
So I have my entire history of game states
here in memory in my state manager.
Why is that cool?
(imitates wooshing)
Because I can travel through time.
Very, very, very cool.
And my state manager, that's really neat
because at any point of this I can go back.
Wonk, wonk, donk,
donk, donk, donk.
So I can rewind my program.
There are a lot of ways to implement this,
but can you think of any one
that's more clever, that's more elegant?
Because I pretty much got that for free, didn't I?
By using a persistent data structure,
the whole memory mapping, keeping all that state in memory,
it's pretty much done for me.
All I have to do is keep a list of references.
That's real clever.
I don't have to clone anything,
I don't have any inefficient cloning, do I?
It's a real, real easy way to achieve that
and it's also real efficient.
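The whole loop fits in a few lines. This is a sketch with my own names, not the demo's actual code: the game logic is a pure state-to-state function, and the manager just remembers a reference to every state it has ever produced.

```ruby
class StateManager
  attr_reader :history

  def initialize(initial_state)
    @history = [initial_state]          # every state ever seen; no clones
  end

  def current
    @history.last
  end

  # Run a pure (state in, state out) function and remember the result.
  def dispatch
    @history << yield(current)
    current
  end

  # Time travel: step back to an earlier state.
  def rewind(steps = 1)
    @history.pop(steps)
    current
  end
end

game = StateManager.new([].freeze)      # toy "board": a list of moves played
game.dispatch { |state| (state + [:red]).freeze }
game.dispatch { |state| (state + [:yellow]).freeze }
```

Rewinding is just moving the "current" pointer back along the history — no snapshots, no cloning, only references.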
Of course because my persistent data structure
is in my state manager, all I'm really doing
is storing diffs here, right?
Since I'm not cloning anything,
I'm storing the absolute bare minimum
to represent the difference between one state and the next.
So it's crazy efficient, it hardly takes up any room,
it's very, very, very fast.
There is a little caveat to this to make this work.
My game logic cannot have side effects.
It has to be made of pure functions.
What is a side effect?
All right, so here's a little Ruby class with a person.
I've got this method, steal_cookies,
and you can see that I'm incrementing pent_up_aggression.
I'm changing something, pent_up_aggression is being mutated,
being changed.
That's a side effect.
If I do that in my game logic
and if I have a bunch of objects around
and I'm changing instance variables of my object,
those are not reflected in state, are they?
I didn't capture them in my state manager.
When I go rewind, they won't be rewound.
So there's no way to rewind if I have random mutations
and state variables in different places,
which means I have to basically write my game logic
in sort of a functional style, which is data in, data out.
I pass arguments in to a function,
I operate on that, I get the answer back,
and in my virtuous circle there, I send the answer
all the way back up to my state manager.
That allows me to time travel.
It's very, very clever.
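For contrast, here's the steal_cookies idea rewritten data-in, data-out — a hypothetical reworking, not code from the talk: instead of incrementing an instance variable in place, the pure version returns a brand-new Person and leaves the original alone.

```ruby
class Person
  attr_reader :pent_up_aggression

  def initialize(pent_up_aggression = 0)
    @pent_up_aggression = pent_up_aggression
  end

  # Pure: no mutation, no side effect -- a new value comes back out.
  def steal_cookies
    Person.new(pent_up_aggression + 1)
  end
end

before = Person.new
after  = before.steal_cookies
```

Because the original is never touched, a state manager holding a reference to `before` can always rewind to it.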
So where is this all going?
Turns out a lot of the industry's moving that way,
so this is more than just a novelty, right?
Connect Four game?
Okay, that's kind of novel, it's kind of neat,
but this is a lot more than just a demo.
For one thing,
our front-end stack, where I work at Procore
is entirely based on these principles.
In JavaScript, of course, this is our front-end stack.
We're using ReactJS, which is very popular,
along with a library called Redux,
which is an implementation of Flux.
And we're following this virtuous circle,
so a key press hits what's called a store,
what's called a redux store,
which has the data stored in immutable vectors,
using Facebook's excellent Immutable.js library.
I pass the state into a redux reducer,
which gives us a new state
and then we send it down to React
and we send it down to the DOM.
We can actually rewind and fast forward our front-end
at Procore, which is really, really neat,
really, really clever.
That's nothing to joke about because
we also have hot loading,
which means that we can actually pull up a form,
start pressing buttons, doing JavaScript-y things
to change the page, go back, rewind it,
go back and make a code change, get hot reload,
and then actually start clicking around things again.
That's a pretty powerful debugging tool.
Interestingly, this stuff is moving to the server side too.
So this is Rich Hickey, I think he's the most interesting
computer scientist living today.
He invented the Clojure programming language,
he invented that persistent data structure
that I showed you, that vector.
He invented that, which is really, really cool.
He also invented a database called Datomic.
Datomic is basically a persistent data structure on disk.
It's basically a database that never forgets,
that uses these exact principles.
Basically a vector or a linked list written to disk.
And he said some very interesting things.
He's got a talk, it's called The Value of Values.
It's from 2012, it's a modern classic.
I highly recommend you watch it.
"There's a reason we do place-oriented programming"
and here's what he means by that,
place-oriented programming.
Think of a database row when you update a field,
last logged in at, right?
You update that.
When you go back and retrieve that,
you're looking for a place.
It's a place in memory, it's a place
and you're changing something.
I always know the last updated date
is at this certain place, in this certain row in memory.
In an object, I've got an instance variable.
That is a place and it's something that I change,
but if I need to know the value that I'm looking for,
I go back and I find it in its place.
Place, in other words, we have places
any time we mutate something, any time we change something.
He says, basically, the only reason we ever do that
is because in the early days of programming,
memory was expensive, disks were expensive,
RAM was expensive.
We had to update, we had to delete
because we couldn't afford to have enough memory
for everything we needed to do.
Those limitations are gone.
RAM is cheap, disk is cheap,
which basically means, why are we still doing that?
Why are we still changing anything?
Why don't we keep the history
of everything we've ever done?
Okay, that has some ominous, like, 1984 overtones to it.
That's not really what I mean.
I mean, you know, the useful type of this.
But that's a good question
and his database, Datomic, actually,
it follows those principles.
Can you imagine, for a second, if you built a Rails app
on top of Datomic?
So that you could actually pull it up,
start doing things, filling out forms,
doing CRUD operations?
Then you could actually rewind it to a fixed point in time,
then fast forward again.
If you could give your users the power of actually like,
all right, here's where you are now,
here's where your budget or your accounts are
or your email is now.
With very simple implementation details,
you could give the power to your users to rewind,
go back in time, say, here's where you were,
this is what it used to look like.
You have all of your history of everything you've done
on disk.
That would be pretty, pretty amazing.
Could you imagine how great of a debugging tool that is,
if you can pull up a Rails app, do CRUD operations?
Oh man, I messed something up, there's a bug.
Rewind it, fix the bug, play it back forward again?
Pretty slick stuff.
And the thing I'm most excited about
is this new language, it's called Elm.
Has anyone heard of this?
This is good stuff, isn't it?
Elm is a purely functional language
and it takes away a lot of stuff.
It has no classes.
It has no nil.
It has no loops.
It has no variables.
And it's strongly typed; if it compiles,
you're guaranteed to not have runtime errors.
That sounds crazy.
It's literally like a quarter of Ruby.
It has like a quarter as many language features.
It's the most beautiful example I've ever seen
of making something great by taking stuff away.
And I'm gonna show you my little demo here.
That was awfully fast.
It is also very, very fast.
So this is my Connect Four game in Elm.
You'll notice this is running in Chrome.
It compiles to JavaScript,
so this is a language for front-end games.
It enforces that pattern, that pattern of state managers
going in a loop with pure functions.
It enforces that.
It is impossible to write a program in Elm
that does not follow that architecture.
Because it's impossible,
every single program you write in Elm
is out-of-the-box time travellable,
which is so nifty that they built it
right into the language.
I can actually right here go, yoink,
add a query string, debug.
The time travel debugger that's built into Elm
pops up on the right side.
I could start to play Connect Four.
Well, I didn't like that move.
I can pause the program.
I could start to,
sometimes there's a lag.
I could start to go back in time,
I could start to go back in history.
So because the language enforces it,
every single program in Elm
is automatically time travellable.
That's pretty neat.
All right.
So, just a few resources before I hand this off.
All my code that I made is online.
The Bloxorz solver, my new Hamster matrix gem,
my Ruby version of Connect Four,
my Elm version of Connect Four.
I highly recommend this course,
Principles of Functional Programming in Scala.
Some great reading if you want to read more
about persistent vectors.
The article that I took those graphics from
is right there.
If you want to read about git being a
purely persistent data structure,
that's in there.
The Value of Values by Rich Hickey, modern classic.
Highly suggest that if you wanna learn more
about this really, really cool stuff.
I am Brad Urani, I'm addicted to twitter.
Follow me, I'll follow you back.
Connect with me on LinkedIn if you want.
I've got a blog.
I don't update it very much
and usually when I do, it's me complaining
about what I don't like about Active Record.
(all laugh)
So, I work at Procore in Santa Barbara, California,
AKA paradise on Earth.
We make construction management software,
we're one of the diamond sponsors of this conference.
We've got a booth at the end of the hallway
staffed with great people.
All the people we brought are engineers, not recruiters.
Highly recommend you check it out if you're interested,
it's a great place to work.
We're growing fast, hiring like crazy.
We're also throwing a party tonight.
It's at 8 o'clock at Howl at the Moon on the River Walk.
It's gonna have a dueling piano bar,
a band, free drinks for two hours,
and we're raffling off some cool stuff.
I think an iPad Pro and things like that.
And that's my talk.
I will tweet this and make sure it gets out in the internet.
Follow me on twitter and I will tweet this
and you can find my resources.
I will tweet the slides and the demo.
All this stuff's on my GitHub, by the way.
Thank you very much.