This post was adapted from a talk called “String Theory”, which I co-presented with James Edward Gray II at Elixir & Phoenix Conf 2016. My posts on Elixir and Unicode were also part of that talk.
It’s been said that “the key to making programs fast is to make them do practically nothing.”
Imagine you’re going to write some data to a file, or send a web response to a browser.
What’s the minimum amount of work you can possibly do?
It’s this: copy each byte of data to the file or socket.
Now, if you’re going to get microsecond response times like Elixir’s Phoenix framework does, you can’t do much more than that.
And Phoenix doesn’t, thanks to a very interesting data structure called an “IO list”.
You can use IO lists to make your own code more efficient. To understand how, let’s first consider something that developers do all the time: string concatenation.
Here’s a concatenation example in Elixir:
name = "James"
IO.puts "Hi, " <> name # => "Hi, James"
Interpolation is just a prettier way of writing the same thing.
name = "James"
IO.puts "Hi, #{name}" # => "Hi, James"
In order to execute this code, the BEAM has to allocate a new string large enough to hold both "Hi, " and the name, then copy each of them into it.
Copying the string data is work.
Not only that, but the more strings we allocate, the more memory we use, and the more work we create for the garbage collector.
We can do this more efficiently in Elixir by using an IO list.
An IO list just means “a list of things suitable for input/output operations”, like strings or individual bytes.
Functions like IO.puts/1 and File.write/2 accept “IO data”, which can be either a simple string or an IO list.
name = "James"
IO.puts ["Hi, ", name] # => "Hi, James"
IO.puts ["Hi, ", 74, 97, 109, 101, 115] # => "Hi, James"
IO lists can be nested, and they’re still handled by IO functions as if they were flat lists.
IO.puts ["Hi, ", [74, [97, [109, [101, [115]]]]]] # => "Hi, James"
At first glance, this may seem like no big deal; the output is the same.
But the ability to structure our output data like this provides some real performance benefits.
First, because we’re using lists, we can efficiently repeat items.
users = [%{name: "Amy"}, %{name: "Joe"}]
response = Enum.map(users, fn user ->
  ["<li>", user.name, "</li>"]
end)
IO.puts response
In this example, the "<li>" string is created only once; every time it appears in the list, it’s just a pointer to the original.
That means we can represent our final output with a lot fewer string allocations.
The more repetitive the content, the more allocations we save.
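If you do eventually need the flat binary (say, for an API that won’t accept IO data), you can collapse the structure at the last possible moment. Here’s a minimal sketch using Elixir’s built-in IO.iodata_length/1 and IO.iodata_to_binary/1:
li = "<li>"
response = [[li, "Amy", "</li>"], [li, "Joe", "</li>"]]
IO.iodata_length(response)    # => 24 -- total size in bytes, computed without building the binary
IO.iodata_to_binary(response) # => "<li>Amy</li><li>Joe</li>"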
Also, the fact that we can nest these IO lists makes a huge difference in how quickly we can build them.
Normally, appending to a linked list is an O(N) operation: we have to walk through every item in the list, find the last one, and point its tail to a new item.
With immutable data, it’s even worse: we can’t modify the last item, so we have to copy it, which means we have to copy the previous item, and the previous one, all the way back to the start of the list.
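For contrast, here’s a rough sketch of that slow path:
names = ["A", "B"]
names = names ++ ["C"] # => ["A", "B", "C"], but the BEAM rebuilds a list cell for every existing element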
However, with nesting, we can append to a list simply by wrapping it in a new list.
names = ["A", "B"] # => ["A", "B"]
names = [names, "C"] # => [["A", "B"], "C"]
This operation is O(1) and doesn’t require copying anything.
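That makes IO lists ideal for building output incrementally. As a sketch, here’s an accumulator that grows by wrapping, never by copying (the names are placeholders):
output =
  Enum.reduce(["Amy", "Joe", "Sue"], [], fn name, acc ->
    # Wrap the existing accumulator in a new list: O(1) per iteration
    [acc, "<li>", name, "</li>"]
  end)
IO.puts output # => "<li>Amy</li><li>Joe</li><li>Sue</li>"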
IO lists also have big implications for system calls.
Most programs can’t work directly with things like files and sockets.
Instead, they have to use “system calls” to ask the operating system to do things on their behalf.
It’s up to the OS to enforce file permissions and handle the details of working with specific hardware.
Consider the following Elixir script:
# Here I'm calling an Erlang function. I'll explain why later.
{:ok, file} = :file.open("/tmp/tmp.txt", [:write, :raw])
foo = "foo"
bar = "bar"
output = [foo, bar, foo]
output = Enum.join(output)
# Another Erlang function call
:file.write(file, output)
It’s fairly simple: we open a file, create some strings, concatenate them, and write the result to the file.
When executing the last line, the BEAM makes a system call to write the file. Using Evan Miller’s dtrace script (linked from his excellent article), we can see what that looks like:
write:return Write data (9 bytes): 0x00000000146007e2
Here the BEAM uses the system call write and says “please write 9 bytes from memory address 0x00000000146007e2”. That 9-byte string contains 3 bytes for “foo”, 3 for “bar”, and 3 for “foo” again.
Now watch what happens if we comment out the line where we join the strings:
{:ok, file} = :file.open("/tmp/tmp.txt", [:write, :raw])
foo = "foo"
bar = "bar"
output = [foo, bar, foo]
# output = Enum.join(output)
:file.write(file, output)
This time, we’re passing an IO list to :file.write/2.
It’s a small change, but look at the system call:
writev:return Writev data 1/3: (3 bytes): 0x0000000014600430
writev:return Writev data 2/3: (3 bytes): 0x0000000014600120
writev:return Writev data 3/3: (3 bytes): 0x0000000014600430
What we’re seeing is actually one call to writev asking to write 3 chunks of data: “foo” from one address, “bar” from another, then “foo” from the same address as the first time.
This is very exciting.
Nowhere in our program is the final string, “foobarfoo”, produced.
The only place where the concatenation happens is in the file itself.
When we concatenated in our own code, we took two strings in BEAM memory, copied their contents into a third string, and asked the OS to write that new string to a file.
When we used the IO list, we skipped the work of concatenation, saved ourselves from allocating the full 9-byte string, and removed the need to garbage collect that final string.
All the BEAM had to do was to ask the OS to copy each byte of data to the file.
Caveats about writev
Now, I said that the BEAM doesn’t concatenate the strings in an IO list when performing I/O.
And if you run this in IEx and trace the system calls, you’ll see each item in the IO list as a separate argument to writev:
{:ok, file} = :file.open("/tmp/tmp.txt", [:write, :raw])
:file.write(file, some_iolist_of_your_own)
However, I made two important choices in that snippet to ensure that writev was used.
First, I specified that the file be opened in raw mode. This tells the BEAM not to create a separate process to handle the file, as it normally would.
You can read about this in the Erlang docs for file:open/2 and in the “Performance” section at the bottom of that page.
Second, I used two Erlang function calls, not the Elixir code you’d expect:
File.write("/tmp/tmp.txt", some_iodata, [:raw])
That Elixir function delegates to a similar Erlang function, file:write_file/3, which you could call like this:
:file.write_file("/tmp/tmp.txt", some_iodata, [:raw])
This function takes care of opening and closing the file handle for you.
In the current release of Erlang/OTP (19.1.2), :file.write_file/3 has a bug: it always combines the IO data into a single string, even if told to use raw mode.
I recently fixed that, so once the fix gets released, you’ll be able to just use the Elixir code shown above.
But for now, if you want to use writev when writing files from Elixir, you’ll have to open and write the file yourself with Erlang’s :file.open/2 and :file.write/2, as shown above.
Another caveat about writev: you may see strings combined if they are less than 64 bytes long.
This has to do with the way the BEAM tracks strings in memory and copies data sent between processes.
If your IO list contains larger, “refc” strings (longer than 64 bytes), you will see them show up in the writev vector as their own entries.
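If you want to see that distinction for yourself, here’s a rough sketch (the exact writev output depends on your BEAM version and tracing setup):
{:ok, file} = :file.open("/tmp/tmp.txt", [:write, :raw])
small = "foo"                      # under 64 bytes: may be combined with its neighbors
large = String.duplicate("x", 100) # over 64 bytes: a "refc" binary, shows up as its own writev entry
:file.write(file, [small, large, small])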
Caveats aside, here’s some simple advice: if you’re going to build up some output and then write it to a file, write it to a socket, or return it from a plug, don’t concatenate strings.
Use an IO list.
It will help you build your output quickly and with minimal memory usage.
And by doing this, you allow the BEAM to use writev to reduce copying.
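For example, in a plug you can hand an IO list straight to send_resp/3, which accepts IO data. Here’s a minimal sketch (the module and markup are hypothetical):
defmodule NamesPlug do
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    # Build the body as an IO list; no string concatenation needed
    names = ["Amy", "Joe"]
    body = ["<ul>", Enum.map(names, fn name -> ["<li>", name, "</li>"] end), "</ul>"]

    conn
    |> put_resp_content_type("text/html")
    |> send_resp(200, body)
  end
end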
As I mentioned, the Phoenix framework takes advantage of this technique.
In my next post, I’ll show you how.