
5 Pry Features Every Ruby Developer Should Know

By John Backus on May 20, 2017

Pry Features

Pry is a great tool for Ruby. You have probably used it by setting binding.pry in the middle of your code like so:

From: lib/dry/types/hash/schema.rb @ line 58 Dry::Types::Hash::Schema#try:

    40: def try(hash, &block)
    41:   success = true
    42:   output  = {}
    44:   begin
    45:     result = try_coerce(hash) do |key, member_result|
    46:       success &&= member_result.success?
    47:       output[key] = member_result.input
    49:       member_result
    50:     end
    51:   rescue ConstraintError, UnknownKeysError, SchemaError => e
    52:     success = false
    53:     result = e
    54:   end
    56:   binding.pry
 => 58:   if success
    59:     success(output)
    60:   else
    61:     failure = failure(output, result)
    62:     block ? yield(failure) : failure
    63:   end
    64: end

> (#<Dry::Types::Hash::Weak>)

Pry is much more than a tool for setting a breakpoint though. It is a great tool for exploring code interactively.

Discovering available methods

Pry provides a command called ls that lists the methods and variables available in the current scope. In the code snippet above, ls would print every method reachable from the current object, grouped by the class or module that owns it, followed by the instance variables and local variables in scope. This is a very powerful tool for quickly understanding the role and responsibility of the code you are debugging.
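Outside of a live session you can approximate part of what ls shows with plain Ruby reflection. The sketch below is illustrative only and is not how Pry implements ls:

```ruby
# Group an object's public methods by the class or module that owns them,
# similar to the breakdown `ls` prints.
def grouped_methods(object)
  object.methods.group_by { |name| object.method(name).owner }
end

grouped_methods([]).key?(Array)  # => true
grouped_methods([]).key?(Kernel) # => true
```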

The ls command also lets you drill down into different parts of the current scope. We can use ls --locals (or ls -l for short) to view the names of local variables alongside their current values:

> (#<Dry::Types::Hash::Weak>) ls -l
result = {
  :name=> #<Dry::Types::Result::Failure
hash = {:name=>nil}
output = {:name=>nil}
success = false
block = nil
e = nil
failure = nil

Learning without documentation

Pry makes it easy to search for methods under a namespace. For example, if we want to find methods for handling XPaths with Nokogiri, we can use find-method:

> find-method xpath Nokogiri


We learn some interesting features from this list:

  1. We can convert CSS selectors into XPaths
  2. We can search XML documents with #xpath and #at_xpath

If we want to learn precisely how to use one of these methods, we can use the stat command:

> stat Nokogiri::CSS.xpath_for
Method Information:
Name: xpath_for
Alias: None.
Owner: #<Class:Nokogiri::CSS>
Visibility: public
Type: Bound
Arity: -2
Method Signature: xpath_for(selector, options=?)
Source Location: /dev/gems/ruby/2.4.1/gems/nokogiri-1.7.2/lib/nokogiri/css.rb:22

If we want to see how the method is implemented, we can use show-source:

> show-source Nokogiri::CSS.xpath_for

From: /dev/gems/ruby/2.4.1/gems/nokogiri-1.7.2/lib/nokogiri/css.rb @ line 22:
Owner: #<Class:Nokogiri::CSS>
Visibility: public
Number of lines: 3

def xpath_for(selector, options={})
  Parser.new(options[:ns] || {}).xpath_for selector, options
end

We can also read a method's documentation, complete with syntax-highlighted code examples, using show-doc.


This handful of commands is a great daily resource for debugging and exploring new gems. Give them a try!

Say Hello to Cognito

By Alain Meier on May 3, 2017

Hello Cognito

We founded BlockScore in 2014 with the goal of making verifying your users as easy as Stripe made billing your users. Over the past 3 years, we’ve learned a tremendous amount about the identity data industry while helping our customers onboard millions of users and wanted to make a change to reflect that.

Starting today, we are renaming from BlockScore to Cognito. Along with this name change, we are also announcing our completely re-imagined identity verification product. The most common feedback we get with our traditional product is that it requires too much intrusive information to verify a user and that knowledge-based authentication is too high friction while not providing enough security benefit. Our new product directly addresses these two concerns:

Cognito is dramatically lower friction

All Cognito needs to verify a user is a phone number. Using this input, we return your user’s real-world identity including their name, date of birth, address, and SSN. If a user can’t be verified using just a phone number, you can send us another request using their name, date of birth, address, SSN or any combination of the above as inputs and we will attempt to verify them again.

Our gradual approach allows you to give the majority of your users the best signup experience possible while maximizing verification match rates. This means that our built-in fallback is still a better user experience than our competitors’ best case scenario. Cognito adapts to your signup flow rather than defining it.

Cognito improves user authentication

Because we are able to link a phone number with a real-world identity, all you have to do is confirm that a user is in possession of her phone using a one-time passcode and you have a much stronger level of identity assurance that she is who she claims to be. Not only is this a lower friction experience than KBA, but it is also a significantly more secure solution. Cognito ends buying black market data to bypass questions about address history or car loans.

What happens to our current products?

To our current customers, all of our traditional products will remain fully supported and maintained. Some of our customers, big and small, will not want to switch to Cognito and we won’t force you to. We will, however, offer current customers special deals if you would like to switch over.

The team has worked incredibly hard to bring this product to you and we look forward to hearing what you think.

Alain Meier - CEO, Cognito

Get Started with Cognito: Frictionless, modern identity verification.

Thinking of Using Social Data?

By Chris Morton on April 28, 2017

Social data

Using a social network for verification became popular to reduce signup friction and tie your user’s real world identity to their online identity. Social networks will return limited, user-reported data. However, for many applications such as insurance, lending, sharing economy, and banking, more trusted sources are required and those sources of data cannot be self reported. Regulated sources include credit bureaus, government agencies, and bank records.

Here are some characteristics of the data types and uses.

Social data

  • Data is reported by a user over time and stored in various databases
  • Data may contain errors or false information
  • Creating multiple online personas is simple and common
  • Data can be changed at will by the user
  • Fast way for users to sign up on a desktop without having to fill out long forms

Regulated data

  • Data is collected from authoritative, regulated sources and maintained in controlled repositories such as financial institutions
  • Data is required to be kept up-to-date
  • Creating a fake identity is illegal and requires significant effort and takes years of fraudulent activity to look authentic
  • Data can only be used for anti-fraud and KYC use cases

When is social data a fit?

Social network verification is a great way to reduce signup friction when the user is on a desktop and the impact of fraud is low. For services that require a higher level of trust, traditional identity verification uses name, date of birth, address, and social security number. With mobile signup flows becoming so common, supporting mobile is now a requirement instead of an advantage. 66% of companies that saw a decrease in customer loyalty over the past year do not have a mobile app. However, it’s not enough to simply offer an app and mobile support; you must optimize the entire customer journey for the unique needs of a mobile user without asking for too much personal information.

As mobile devices have become more common, having the user enter their social network username and password is quite cumbersome on a phone screen. Additionally, for trusted services, asking for personal information and social security number dissuades users. A new alternative that offers both convenience and a high level of user trust is Cognito. Using a name and phone number, you can reduce signup friction and ensure the highest level of trust.


Why Startups Get Millennials and You Don’t

By Brock Gettemeier on April 24, 2017

You have a new killer website or app where your users can get right down to business after completing a standard signup form. Sounds great, right? Then why are mobile abandonment rates so high? The number of users who view your site on their mobile phones is enormous and growing. If you’re tired of leaving money on the table when frustrated users ditch your lengthy mobile signup flow, it’s time to take action. Cognito can help.

It’s no secret that the attention span of millennials can be short, and that’s ok. Who wouldn’t get bored with a ten-field signup form that needs to be verified by 12 different forms of government ID and all three of your neighbors? That might be a slight exaggeration, but the point is, the more fields there are to fill out in your signup, the higher your abandonment rate is going to be.

We’re all avid phone users; we love using our mobile devices for so many things because it’s easy. Want to order a pizza? No problem. Need to do some shopping? Two clicks and done. Want to apply for a loan? 12 forms of ID and your right arm, please.

What can you do about it?

Make your verification process easy. All you’ll need is your customer’s name and phone number. No wasting time sending scanned pictures of IDs or making unnecessary visits to the bank just to show you’re really who you say you are. Cognito has developed an automated way to reliably pull back rich, regulated KYC data including name, date of birth, SSN, past addresses, and more with just a customer’s phone number. Our goal is to help you decrease customer abandonment and increase your profits.

Using a powerful ID verification method is a proven measure that leads to a higher conversion rate. Don’t leave money on the table by distracting your valued leads with a lengthy signup flow. In a world where attention comes at a premium, you can’t afford to lose customers simply because the verification process is too long. Cognito is the solution for improving your mobile signup conversion rate.

If you want to dramatically decrease your signup abandonment, see Cognito.


How to Fix Your Verification Conversion Rate

By Brock Gettemeier on April 1, 2017

Improving Verification Conversion

When you go to sign up for an account, do you get excited to enter your personal information into the looming wall of blank white boxes? No, and neither do your customers. You need to start thinking about the onboarding process from the customer’s perspective.

Businesses are finally realizing that a lengthy signup process results in abandonment and dissatisfaction for the customers who do get through it. This experience is not the first impression your customers deserve. Studies show that conversions increase 120% by reducing the number of form fields from eleven to four.

So what can you do?

Stop collecting unnecessary data! Accelerate your signup flow and reduce the required keystrokes as much as possible. Balancing data collection with a low friction signup flow can be a challenge, but it is one that will have a direct impact on your bottom line. In industries such as banking, lending, gaming, and P2P where a signup requires verifying the identity for KYC compliance reasons, it is crucial to make the first stage of the customer journey as seamless as possible.

Groundbreaking technologies, such as BlockScore’s Cognito verification, easily balance identity verification requirements with a true user-centric signup. To optimize for mobile and online signups, Cognito requires only a phone number and name. With those inputs, the API returns your customer’s full name, address, past addresses, date of birth, and full SSN from regulated sources such as credit bureaus and government records. This significantly reduces the amount of information your customer needs to provide while still satisfying business needs for complete information.

In addition to reducing abandonment, Cognito also does more to mitigate fraud as seen in our comparison blog post. This all translates to the most streamlined customer experience, happier customers, and more revenue to your bottom line.

Cognito vs. Traditional ID Verification

By Chris Morton on February 7, 2017


Traditional methods of electronic identity verification use a two phase approach. The first phase asks the user to show “this is who I am” using name, date of birth, address, phone number, and social security number. The second phase challenges the user to prove who they are by assembling questions from that person’s past, often called Knowledge-based Authentication (KBA).

KBA has been an industry standard for over a decade. Unfortunately fraud-mitigation techniques become stale as fraudulent actors find ways around them. While KBA can be tuned to increase efficacy, such as limiting the time permitted to respond to questions and limiting the number of attempts a user has to retry failed question sets, fraudulent actors have augmented black markets to include information that can be used to pass KBA questions.

Even as early as 2010, Gartner warned clients that “criminals can get their hands on anyone’s KBA or identity information through the black market exchanges.” To make matters worse for KBA, Gartner notes that businesses experience KBA failure rates up to 30% depending on the population. For every KBA failure, authentic users may be turned away or required to pass a costly manual process if they are willing to go through the hassle.

How Cognito differs

With a traditional verification, only a person’s information needs to be compromised. With Cognito, possession of that person’s phone, or compromising the phone network itself, is required to pass user authentication.

Cognito exceeds traditional ID verification and KBA in the following ways.

Near ubiquity. Nearly everyone in the US has a phone number

Provable possession. By sending a text message or placing an automated call, you can prove the person is in control of the phone number associated with their identity records

Low friction. It is much easier to verify and authenticate possession of a phone than to answer intrusive financial questions that the user often does not remember

Secure. Compromising a phone adds a layer of complexity outside of just purchasing KBA information through the same black market that sells identity information

Fraudsters use the path of least resistance to commit fraud and test exploits en masse. Millions of identities are trafficked through black markets and used to create fake accounts. While KBA does add a layer of difficulty, a very small hit rate can yield a lucrative return on stolen identity information. Because Cognito requires possession of a device in addition to compromised identity information, black market identity information isn’t sufficient to pass a Cognito verification.
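As a rough illustration of why possession matters, here is a generic one-time-passcode sketch. This is not Cognito's implementation; the helper names are made up, and a real system would also need code expiry and rate limiting:

```ruby
require 'securerandom'

# Generate a zero-padded six-digit one-time passcode.
def generate_otp
  format('%06d', SecureRandom.random_number(10**6))
end

# Compare codes in constant time so an attacker can't learn how many
# leading digits matched from response timing.
def otp_match?(expected, provided)
  return false unless expected.bytesize == provided.bytesize

  expected.bytes.zip(provided.bytes).reduce(0) { |acc, (a, b)| acc | (a ^ b) }.zero?
end

code = generate_otp       # e.g. "042913"
otp_match?(code, code)    # => true
otp_match?(code, 'wrong') # => false
```

Only someone who received the code on the registered phone can echo it back, which is the possession factor the post describes.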

New Features in Ruby 2.4

By John Backus on July 20, 2016

Ruby 2.4

Faster regular expressions with Regexp#match?

Ruby 2.4 adds a new #match? method for regular expressions which is three times faster than any Regexp method in Ruby 2.3:

Regexp#match?:  2630002.5 i/s
  Regexp#===:   872217.5 i/s - 3.02x slower
   Regexp#=~:   859713.0 i/s - 3.06x slower
Regexp#match:   539361.3 i/s - 4.88x slower

When you call Regexp#===, Regexp#=~, or Regexp#match, Ruby sets the $~ global variable with the resulting MatchData:

/^foo (\w+)$/ =~ 'foo bar'      # => 0
$~                              # => #<MatchData "foo bar" 1:"bar">

/^foo (\w+)$/.match('foo baz')  # => #<MatchData "foo baz" 1:"baz">
$~                              # => #<MatchData "foo baz" 1:"baz">

/^foo (\w+)$/ === 'foo qux'     # => true
$~                              # => #<MatchData "foo qux" 1:"qux">

Regexp#match? returns a boolean and avoids building a MatchData object or updating global state:

/^foo (\w+)$/.match?('foo wow') # => true
$~                              # => nil

By skipping the global variable Ruby is able to avoid work allocating memory for the MatchData.

New #sum method for Enumerable

You can now call #sum on any Enumerable object:

[1, 1, 2, 3, 5, 8, 13, 21].sum # => 54

The #sum method has an optional parameter which defaults to 0. This value is the starting value of a summation meaning that [].sum is 0.

If you are calling #sum on an array of non-integers then you need to provide your own initial value:

class ShoppingList
  attr_reader :items

  def initialize(*items)
    @items = items
  end

  def +(other)
    self.class.new(*items, *other.items)
  end
end

eggs   = ShoppingList.new('eggs')   # => #<ShoppingList:0x007f952282e7b8 @items=["eggs"]>
milk   = ShoppingList.new('milks')  # => #<ShoppingList:0x007f952282ce68 @items=["milks"]>
cheese = ShoppingList.new('cheese') # => #<ShoppingList:0x007f95228271e8 @items=["cheese"]>

eggs + milk + cheese                       # => #<ShoppingList:0x007f95228261d0 @items=["eggs", "milks", "cheese"]>
[eggs, milk, cheese].sum                   # => #<TypeError: ShoppingList can't be coerced into Integer>
[eggs, milk, cheese].sum(ShoppingList.new) # => #<ShoppingList:0x007f9522824cb8 @items=["eggs", "milks", "cheese"]>

On the last line an empty shopping list (ShoppingList.new) is supplied as the initial value.
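The initial value can be any object that responds to #+, so the same trick works for built-in types like String:

```ruby
%w[foo bar baz].sum('') # => "foobarbaz"

# Without the initial value this raises, just like the ShoppingList example:
# %w[foo bar baz].sum   # => TypeError: String can't be coerced into Integer
```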

New methods for testing if directories or files are empty

In Ruby 2.4 you can test whether directories and files are empty using the File and Dir modules:

Dir.empty?('empty_directory')      # => true
Dir.empty?('directory_with_files') # => false

File.empty?('contains_text.txt')   # => false
File.empty?('empty.txt')           # => true

The File.empty? method is equivalent to, which is already available in all supported Ruby versions:'contains_text.txt')  # => false'empty.txt')          # => true

Unfortunately these methods are not available for Pathname yet.

Extract named captures from Regexp match results

In Ruby 2.4 you can call #named_captures on a Regexp match result and get a hash containing your named capture groups and the data they extracted:

pattern  = /(?<first_name>John) (?<last_name>\w+)/
pattern.match('John Backus').named_captures # => { "first_name" => "John", "last_name" => "Backus" }

Ruby 2.4 also adds a #values_at method for extracting just the named captures which you care about:

pattern = /(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})/
pattern.match('2016-02-01').values_at(:year, :month) # => ["2016", "02"]

The #values_at method also works for positional capture groups:

pattern = /(\d{4})-(\d{2})-(\d{2})$/
pattern.match('2016-07-18').values_at(1, 3) # => ["2016", "18"]

New Integer#digits method

If you want to access a digit in a certain position within an integer (from right to left) then you can use Integer#digits:

123.digits                  # => [3, 2, 1]
123.digits[0]               # => 3

# Equivalent behavior in Ruby 2.3:
123.to_s.chars.map(&:to_i).reverse # => [3, 2, 1]

If you want to know positional digit information given a non-decimal base, you can pass in a different radix. For example, to lookup positional digit information for a hexadecimal integer you can pass in 16:

0x7b.digits(16)                                # => [11, 7]
0x7b.digits(16).map { |digit| digit.to_s(16) } # => ["b", "7"]

Improvements to the Logger interface

The Logger library in Ruby 2.3 can be a bit cumbersome to set up:

logger1 =
logger1.level    = :info
logger1.progname = 'LOG1'

logger1.debug('This is ignored')'This is logged')

# >> I, [2016-07-17T23:45:30.571508 #19837]  INFO -- LOG1: This is logged

Ruby 2.4 moves this configuration to Logger’s constructor:

logger2 =, level: :info, progname: 'LOG2')

logger2.debug('This is ignored')'This is logged')

# >> I, [2016-07-17T23:45:30.571556 #19837]  INFO -- LOG2: This is logged

Parse CLI options into a Hash

Parsing command line flags with OptionParser often involves a lot of boilerplate in order to compile the options down into a hash:

require 'optparse'
require 'optparse/date'
require 'optparse/uri'

config = {}

cli = do |options|
    options.define('--from=DATE', Date) do |from|
      config[:from] = from
    end

    options.define('--url=ENDPOINT', URI) do |url|
      config[:url] = url
    end

    options.define('--names=LIST', Array) do |names|
      config[:names] = names
    end
end
Now you can provide a hash via the :into keyword argument when parsing arguments:

require 'optparse'
require 'optparse/date'
require 'optparse/uri'

cli = do |options|
    options.define '--from=DATE',    Date
    options.define '--url=ENDPOINT', URI
    options.define '--names=LIST',   Array
end

config = {}

args = %w[
  --from  2016-02-03
  --url
  --names John,Daniel,Delmer
]

cli.parse(args, into: config)

config.keys    # => [:from, :url, :names]
config[:from]  # => #<Date: 2016-02-03 ((2457422j,0s,0n),+0s,2299161j)>
config[:url]   # => #<URI::HTTPS>
config[:names] # => ["John", "Daniel", "Delmer"]

Faster Array#min and Array#max

In Ruby 2.4 the Array class defines its own #min and #max instance methods. This change dramatically speeds up the #min and #max methods on Array:

     Array#min:       35.1 i/s
Enumerable#min:       21.8 i/s - 1.61x slower

Simplified integers

Until Ruby 2.4 you had to manage many numeric types:

# Find classes which subclass the base "Numeric" class:
numerics = ObjectSpace.each_object(Module).select { |mod| mod < Numeric }

# In Ruby 2.3:
numerics # => [Complex, Rational, Bignum, Float, Fixnum, Integer, BigDecimal]

# In Ruby 2.4:
numerics # => [Complex, Rational, Float, Integer, BigDecimal]

Now Fixnum and Bignum are implementation details that Ruby manages for you. This should help avoid subtle bugs like this:

def categorize_number(num)
  case num
  when Fixnum then 'fixed number!'
  when Float  then 'floating point!'
  end
end

# In Ruby 2.3:
categorize_number(2)        # => "fixed number!"
categorize_number(2.0)      # => "floating point!"
categorize_number(2 ** 500) # => nil

# In Ruby 2.4:
categorize_number(2)        # => "fixed number!"
categorize_number(2.0)      # => "floating point!"
categorize_number(2 ** 500) # => "fixed number!"

If you have Bignum or Fixnum hardcoded in your source code that is fine. These constants now point to Integer:

Fixnum  # => Integer
Bignum  # => Integer
Integer # => Integer
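A quick sanity check you can run in a 2.4 console (added here for illustration; it is not from the original benchmark):

```ruby
2.class                   # => Integer
(2 ** 500).class          # => Integer

2.is_a?(Integer)          # => true
(2 ** 500).is_a?(Integer) # => true
```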

New arguments supported for float modifiers

#round, #ceil, #floor, and #truncate now accept a precision argument:

4.55.ceil(1)     # => 4.6
4.55.floor(1)    # => 4.5
4.55.truncate(1) # => 4.5
4.55.round(1)    # => 4.6

These methods all work the same on Integer as well:

4.ceil(1)        # => 4.0
4.floor(1)       # => 4.0
4.truncate(1)    # => 4.0
4.round(1)       # => 4.0

Case sensitivity for unicode characters

Consider the following sentence:

My name is JOHN. That is spelled J-Ο-H-N

Calling #downcase on this string in Ruby 2.3 produces this output:

my name is john. that is spelled J-Ο-H-N

This is because “J-Ο-H-N” in the string above is written with unicode characters.

Ruby’s letter casing methods now handle unicode properly:

sentence =  "\uff2a-\u039f-\uff28-\uff2e"
sentence                              # => "J-Ο-H-N"
sentence.downcase                     # => "j-ο-h-n"
sentence.downcase.capitalize          # => "J-ο-h-n"
sentence.downcase.capitalize.swapcase # => "j-Ο-H-N"

New option to specify size of a new string

When creating a string you can now define a :capacity option which will tell Ruby how much memory it should allocate for your string. This can help performance as Ruby can avoid reallocations as you increase the size of the string in question:

   With capacity:    37225.1 i/s
Without capacity:    16031.3 i/s - 2.32x slower

Fixed matching behavior for symbols

Ruby 2.3’s Symbol#match returned the match position even though String#match returns MatchData. This inconsistency is fixed in Ruby 2.4:

# Ruby 2.3 behavior:

'foo bar'.match(/^foo (\w+)$/)  # => #<MatchData "foo bar" 1:"bar">
:'foo bar'.match(/^foo (\w+)$/) # => 0

# Ruby 2.4 behavior:

'foo bar'.match(/^foo (\w+)$/)  # => #<MatchData "foo bar" 1:"bar">
:'foo bar'.match(/^foo (\w+)$/) # => #<MatchData "foo bar" 1:"bar">
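With the fix, capture groups extracted from a Symbol behave exactly like those from a String:

```ruby
string_match = 'foo bar'.match(/^foo (\w+)$/)
symbol_match = :'foo bar'.match(/^foo (\w+)$/)

string_match[1] # => "bar"
symbol_match[1] # => "bar"
```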

Multiple assignment inside of conditionals

You can now assign multiple variables within a conditional:

branch1 =
  if (foo, bar = %w[foo bar])
    'truthy'
  else
    'falsey'
  end

branch2 =
  if (foo, bar = nil)
    'truthy'
  else
    'falsey'
  end

branch1 # => "truthy"
branch2 # => "falsey"

You probably shouldn’t do that though.

Exception reporting improvements for threading

If you encounter an exception within a thread then Ruby defaults to silently swallowing up that error:

puts 'Starting some parallel work'

thread = do
    sleep 1

    fail 'something very bad happened!'
end

sleep 2

puts 'Done!'

$ ruby parallel-work.rb
Starting some parallel work
Done!
If you want to fail the entire process when an exception happens within a thread then you can use Thread.abort_on_exception = true. Adding this to the parallel-work.rb script above would change the output to:

$ ruby parallel-work.rb
Starting some parallel work
parallel-work.rb:9:in 'block in <main>': something very bad happened! (RuntimeError)

In Ruby 2.4 you now have a middle ground between errors being silently ignored and aborting your entire program. Instead of abort_on_exception you can set Thread.report_on_exception = true:

$ ruby parallel-work.rb
Starting some parallel work
#<Thread:0x007ffa628a62b8@parallel-work.rb:6 run> terminated with exception:
parallel-work.rb:9:in 'block in <main>': something very bad happened! (RuntimeError)

How Ruby Hides Complexity

By John Backus on January 6, 2016

Ruby makes it easy to write concise code. This is a benefit of the language and the ecosystem. Matz focuses on “making programs succinct” and Rails boasts that it lets you build “in a matter of days” what used to take months.

Concise code can have a dark side. Convenient interfaces can tuck away complexity and side effects that might surprise you later. Brevity in software comes at the cost of diligence both from developers and reviewers. It is especially important to understand how your abstractions work and the business rules they implicitly handle.

Moving Fast

Imagine you are adding a new feature to your Ruby on Rails web application. This feature breaks down into three small tasks:

  • Integrate with an internal API which provides information about the current user
  • Use information about the current user in order to add a welcome message to the header of each page
  • Display a flag alongside the message corresponding to the user’s country field

The current user JSON looks like this:

{
  "status": "success",
  "data": {
    "name": {
      "first": "Edmond",
      "last": "O'Connell"
    },
    "address": {
      "street1": "53236 Camilla Light",
      "street2": null,
      "city": "Pierceville",
      "state": "NJ",
      "country": "United States"
    }
  }
}

To integrate with the API you create three simple classes with ActiveModel::Model:

class User
  include ActiveModel::Model

  attr_accessor :address, :name
end

class Name
  include ActiveModel::Model

  attr_accessor :first, :last
end

class Address
  include ActiveModel::Model

  attr_accessor :street1, :street2, :city, :state, :country
end

To extract the user data you use the new #dig method introduced in Ruby 2.3:

user = User.new(
  name:    Name.new(response.dig('data', 'name')),
  address: Address.new(response.dig('data', 'address'))
)

Finally, you add a current_country view helper method and create a new view partial:

module UserHelper
  def current_country
    return 'Unknown' unless current_user
  end
end

<div id="user-welcome">
  <% if current_user %>
    <span>Welcome back <%= %>!</span>
  <% end %>

  <div id="user-welcome-flag">
    <%= image_tag("/imgs/flags/#{current_country}.png") %>
  </div>
</div>

Breaking Things

A few weeks pass and you find out that some pages rendered the message “Welcome back !” and a broken image in place of the flag. The internal API encountered its own error and returned:

{
  "status": "error",
  "message": "Internal server error"
}

Oddly enough this did not break your code:

response = { 'status' => 'error', 'message' => 'Internal server error' }

name    = response.dig('data', 'name')    # => nil
address = response.dig('data', 'address') # => nil

user = User.new(name: Name.new(name), address: Address.new(address))              # => #<Name:0x0011910412163>
user.address         # => #<Address:0x0011910412163>      # => nil # => nil

Feeling a bit embarrassed by the bug you reflect on how you could prevent similar issues in the future:

What if the internal API renames the country field to country_code? That would also silently break the view. Can I only avoid these cryptic bugs by being vigilant about every external dependency?


The features in Ruby and Rails which let you write concise code can also let you cut corners. Consider our Name class and how the corresponding response data was originally extracted:

class Name
  include ActiveModel::Model

  attr_accessor :first, :last
end

module ResponseHandler
  def self.extract_name(response)
    Name.new(response.dig('data', 'name'))
  end
end

Let’s rewrite Name without ActiveModel or attr_accessor:

class Name
  # Inlined from Active Model source
  def initialize(params={})
    params.each do |attr, value|
      self.public_send("#{attr}=", value)
    end if params
  end

  def first
    @first
  end

  def first=(first)
    @first = first
  end

  def last
    @last
  end

  def last=(last)
    @last = last
  end
end

Imagining our code like this is instructive. Three questions are now immediately obvious:

  • Should the initializer invoke setter methods for any key passed to the initializer?
  • Will ever be invoked without arguments?
  • Are these public setter methods necessary or is Name a value object?

Let’s throw out #dig and instead handle each edge case manually.

module ResponseHandler
  def self.extract_name(response)
    return unless response.key?('data')
    return if     response['data'].empty?

    Name.new(response['data']['name'])
  end
end

Expanding this method highlights three distinct outcomes which are each important to consider. The original code properly handled a valid user object but overlooked two important edge cases:

1. API error handling when response['data'] is nil

return unless response.key?('data')

This happened when the internal API encountered an error. This condition should instead result in our application notifying the end user of an error.

2. Alternate behavior when a user is not returned

return if response['data'].empty?

This corresponds to the following JSON:

{
  "status": "success",
  "data": {}
}

This might mean that the current user has not yet logged in. It could also be a buggy response.

Depending on how robust you expect the internal API to be you might want to handle this case independently as well. If this is invalid state then the response handler should raise an error. If it is valid state and you want to handle cases where the user is not logged in then there should be a separate Guest class independent of the User class.

Both of these options are better than implicitly assuming this condition never happens. Once the code embedding your assumption is deployed it is too easy to forget and unknowingly introduce a silent regression in the future.


Ruby certainly makes it easy to write concise code. The question then is how do you reap these benefits without cutting corners accidentally? At BlockScore we have a few practices which help us write better Ruby.

1. Strict and simple dependencies

Active Model’s initializer is permissive and this led to surprising behavior. Consider the benefit of a strict alternative like anima:

# Test cases
valid_arguments  = { first: 'John', last: 'Doe'                  }
missing_argument = { first: 'John'                               }
extra_argument   = { first: 'John', last: 'Doe', nickname: 'Jim' }

# With Active Model
class Name
  include ActiveModel::Model

  attr_accessor :first, :last
end

Name.new(valid_arguments)  # => #<Name:0x0011910412163 @first="John", @last="Doe">
Name.new(missing_argument) # => #<Name:0x0011910412163 @first="John">
Name.new(extra_argument)   # => NoMethodError: undefined method `nickname=`
Name.new(nil)              # => #<Name:0x0011910412163>
Name.new                   # => #<Name:0x0011910412163>

# With Anima
class Name
  include Anima.new(:first, :last)
end

Name.new(valid_arguments)  # => #<Name first="John" last="Doe">
Name.new(missing_argument) # => Anima::Error: Name attributes missing: [:last]
Name.new(extra_argument)   # => Anima::Error: Name attributes missing: [], unknown: [:nickname]
Name.new(nil)              # => NoMethodError: undefined method `keys'
Name.new                   # => ArgumentError: wrong number of arguments (given 0, expected 1)

2. Meticulous code review

An inconspicuous line of code like

Name.new(response.dig('data', 'name'))

can encode multiple important code paths. With Ruby it is especially important to visualize the equivalent “expanded” code.
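For instance, a rough hand expansion of response.dig('data', 'name') looks like this (the helper name is mine):

```ruby
# Roughly what Hash#dig does for two keys: stop and
# return nil at the first missing level.
def expanded_extract(response)
  data = response['data']
  data && data['name']
end

expanded_extract('data' => { 'name' => 'John' }) # => "John"
expanded_extract({})                             # => nil
expanded_extract('data' => {})                   # => nil
```

Seeing the expansion makes it obvious that two of the three code paths return nil for very different reasons.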

3. Static analysis

Tools like reek and rubocop are great for learning how to write better code. Reek might point out a design issue before you notice it. Rubocop now goes way beyond style: the next release will include eight new cops to help you catch poorly performing code.

4. Mutation testing

Mutation testing helps me write better Ruby. It sniffs out dead code, helps me find missing tests, and generally helps me think about the assumptions I’ve made.

New Features in Ruby 2.3

By John Backus on November 13, 2015

Ruby 2.3

Yesterday ruby 2.3-preview1 was released. This update brings several new additions to core classes in ruby as well as some new syntax. Here are a few of the new additions coming in ruby 2.3:

Extract values with Array#dig and Hash#dig

The new #dig instance methods provide concise syntax for accessing deeply nested data. For example:

user = {
  user: {
    address: {
      street1: '123 Main street'
    }
  }
}

user.dig(:user, :address, :street1) # => '123 Main street'

results = [[[1, 2, 3]]]

results.dig(0, 0, 0) # => 1

Both of these methods will return nil if any access attempt in the deeply nested structure returns nil:

user.dig(:user, :adddresss, :street1) # => nil
user.dig(:user, :address, :street2) # => nil
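One caveat worth knowing: #dig only guards against missing keys. If an intermediate value exists but does not itself respond to #dig, you get a TypeError:

```ruby
user = { user: { name: 'John' } }

user.dig(:user, :name)           # => "John"

begin
  user.dig(:user, :name, :first) # "John" is a String, which has no #dig
rescue TypeError
  :type_error                    # => :type_error
end
```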

Grep out the inverse of a pattern with Enumerable#grep_v

This method is the inverse of the Enumerable#grep method. The grep method and its inverse provide several powerful ways to filter enumerables:

Filtering by regular expression

friends = %w[John Alain Jim Delmer]

j_friends = friends.grep(/^J/)   # => ["John", "Jim"]
others    = friends.grep_v(/^J/) # => ["Alain", "Delmer"]

Filtering by types

items = [1, 1.0, '1', nil]

nums   = items.grep(Numeric)   # => [1, 1.0]
others = items.grep_v(Numeric) # => ['1', nil]
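Since grep matches elements using the case equality operator (===), anything case-matchable works as a pattern, including ranges. For example:

```ruby
scores = [12, 48, 75, 91]

passing = scores.grep(60..100)   # => [75, 91]
failing = scores.grep_v(60..100) # => [12, 48]
```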

Fetching multiple values with Hash#fetch_values

Sometimes Hash#fetch is a better choice than Hash#[] when you want to write more strict code. You can also access multiple values from a hash using Hash#values_at, but there wasn’t a strict equivalent to values_at until ruby 2.3:

values = {
  foo: 1,
  bar: 2,
  baz: 3,
  qux: 4
}
values.values_at(:foo, :bar)    # => [1, 2]
values.fetch_values(:foo, :bar) # => [1, 2]

values.values_at(:foo, :bar, :invalid)    # => [1, 2, nil]
values.fetch_values(:foo, :bar, :invalid) # => KeyError: key not found: :invalid
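Like Hash#fetch, fetch_values also accepts a block, which is called with each missing key instead of raising:

```ruby
values = { foo: 1, bar: 2 }

values.fetch_values(:foo, :missing) { |key| "no #{key}" }
# => [1, "no missing"]
```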

Positive and negative predicates for Numeric#positive? and Numeric#negative?

Numeric values now have predicate methods that check if the subject is positive or negative. This can be useful if you want to filter an enumerable:

numbers = (-5..5).to_a

numbers.select(&:positive?) # => [1, 2, 3, 4, 5]
numbers.select(&:negative?) # => [-5, -4, -3, -2, -1]
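The new predicates also pair nicely with Enumerable#partition for splitting a list in one pass. Note that zero is neither positive nor negative:

```ruby
positives, rest = (-3..3).partition(&:positive?)

positives # => [1, 2, 3]
rest      # => [-3, -2, -1, 0]
```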

Hash superset and subset operators Hash#<=, Hash#<, Hash#>=, and Hash#>

These methods let you compare hashes to see if one is a subset or proper subset of another. For example:

small     = { a: 1                }
medium    = { a: 1, b: 2          }
large     = { a: 1, b: 2, c: 3    }
different = { totally: :different }

{ a: 1, b: 2 } > { a: 1 }             # => true
{ a: 1 } > { a: 1 }                   # => false
{ b: 1 } > { a: 1 }                   # => false
{ a: 1, b: 2 } < { a: 1, b: 2, c: 3 } # => true
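One practical use for these operators is checking that a hash contains a required set of key/value pairs:

```ruby
config = { host: 'localhost', port: 5432, ssl: true }

config >= { host: 'localhost', port: 5432 } # => true
config >= { host: 'example.com' }           # => false, value differs
```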

Convert a hash to a proc with Hash#to_proc

Now you can use a hash to iterate over an enumerable object:

hash = { a: 1, b: 2, c: 3 }
keys = %i[a c d]

keys.map(&hash) # => [1, 3, nil]

Honestly, I can’t think of a use case for this yet.

Safely navigate nil with the &. operator

Ruby 2.3 will introduce new syntax for accessing deeply nested objects safely without accidentally triggering a dreaded NoMethodError on nil. The syntax looks like this:

require 'ostruct'

user = OpenStruct.new(address: OpenStruct.new(street1: '123 Main street'))

user&.address&.street1 # => "123 Main street"
user&.billing&.street1 # => nil
where each instance of &. is similar to ActiveSupport’s Object#try method. Basically, if a nil value is encountered, then each method call will not be attempted and instead the nil value will be returned immediately.
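Unlike try, however, &. only guards against nil. Calling a misspelled method on a real object still raises, which keeps typos from hiding:

```ruby
name = 'John'

name&.upcase   # => "JOHN"
nil&.upcase    # => nil

begin
  name&.upcaes # typo: still a NoMethodError on a non-nil receiver
rescue NoMethodError
  :raised      # => :raised
end
```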

Experimental frozen string pragma

You’ve probably heard that strings will be frozen by default in ruby 3. Ruby 2.3 lets you specify a pragma which enables this by default:

$ ruby -v
ruby 2.3.0preview1 (2015-11-11 trunk 52539) [x86_64-darwin14]
$ cat default.rb
# frozen_string_literal: false

puts "Hello world".reverse!
$ ruby default.rb
dlrow olleH
$ cat enabled.rb
# frozen_string_literal: true

puts "Hello world".reverse!
$ ruby enabled.rb
enabled.rb:3:in `reverse!': can't modify frozen String (RuntimeError)
  from enabled.rb:3:in `<main>'

Alternatively, you can enable and disable this behavior from the command line with the --enable=frozen-string-literal and --disable=frozen-string-literal arguments.

Further reading

You can read about other new features, performance improvements, compatibility issues, and more here. Remember that this is still a preview of ruby 2.3 and some things might be subject to change.

How to Write Better Code Using Mutation Testing

By John Backus on October 29, 2015

Abstract syntax tree

When developers talk about “test coverage” they are typically talking about how many lines of code are executed by their test suite. This is a simple calculation: what percentage of our code was run by our tests? We don’t want to accidentally break our code later so having strong test coverage is important.

Mutation testing is not an alternative to line coverage. While line coverage asks “what percentage of our code is run by our tests,” mutation testing asks “what code can I change without breaking your tests?” Mutation testing tools answer this question by applying and testing small modifications to your application.

This post explores how asking “what changes don’t break my tests?” can benefit more than just test coverage. Using a ruby mutation testing tool called mutest, I’ll introduce and reflect on two separate code examples to demonstrate how mutation testing helps you improve both your tests and your code itself.

Mutest keeps your tests honest

Consider this script for looking up users who tweeted ‘“I really enjoy #pizza”’:

require 'twitter'

class Tweeters
  def recent
    query.first(3).map do |tweet|
      "@#{tweet.user.screen_name}"
    end
  end

  private

  def query
    api_client.search('"I really enjoy #pizza"')
  end

  def api_client
    Twitter::REST::Client.new do |config|
      config.consumer_key        = ENV['TWITTER_CONSUMER_KEY']
      config.consumer_secret     = ENV['TWITTER_CONSUMER_SECRET']
      config.access_token        = ENV['TWITTER_ACCESS_TOKEN']
      config.access_token_secret = ENV['TWITTER_ACCESS_TOKEN_SECRET']
    end
  end
end

puts Tweeters.new.recent if __FILE__ == $0

To illustrate the difference between “line coverage” and “mutation coverage” consider this intentionally bad test:

require 'simplecov'
SimpleCov.start

require 'tweeters'
require 'rspec'

RSpec.describe Tweeters do
  it 'returns results' do
    expect(Tweeters.new.recent).not_to be(nil)
  end
end

Now if I run this test:

$ rspec -I. pizza_spec.rb

Finished in 0.94429 seconds (files took 1.38 seconds to load)
1 example, 0 failures

Coverage report generated for RSpec to /dev/coverage. 15 / 15 LOC (100.0%) covered.

My test passed with 100% coverage.

If I run this test again with mutest and instruct it to only mutate the recent instance method then I see the following summary:

Mutations:       36
Kills:           19
Coverage:        52.78%

This tells me that my recent method actually has 52.78% mutation coverage! This means that mutest found 36 ways it could change my method and only 19 of those changes resulted in my test failing.

Mutest shows me what my tests missed. For example, here are three of the seventeen mutations my tests did not catch:

 def recent
-  query.first(3).map do |tweet|
-    "@#{tweet.user.screen_name}"
-  end
+  self

 def recent
   query.first(3).map do |tweet|
-    "@#{tweet.user.screen_name}"
+    nil

 def recent
   query.first(3).map do |tweet|
-    "@#{tweet.user.screen_name}"
+    "@#{tweet.user}"

Again, the test for this script was intentionally bad, but the difference in results is important. All my test did was assert that my recent method did not return nil. That assertion did technically exercise 100% of the code, so the line coverage tool reports 100% coverage. Mutest quickly showed me that it could make my method return self, [nil, nil, nil], and ['@#<Twitter::User:0x1>', '@#<Twitter::User:0x2>', '@#<Twitter::User:0x3>'] without breaking my tests.

The takeaway here is not that line coverage is bad. You can write good tests without mutest. Instead, think of mutation testing as an x-ray for your tests. Running mutest on new code can help you double check that your tests are covering everything you care about. Mutest can also be a powerful tool when conducting a code review. It is easy to see roughly which methods are tested, but it can be hard to spot what that original author might have overlooked.

Mutest helps you write more robust code

Imagine you are tasked with creating an endpoint in your company’s internal API which does two tasks:

  • Looking up users by their unique id
  • Returning a list of users which signed up after a certain date

A few hours later you write the following code

class UsersController < ApplicationController
  # Looks up GET param `user_id` and returns user
  # @return [User]
  # @api public
  def show
    render json: UserFinder.find(params[:user_id].to_i)
  rescue UserFinder::RecordNotFound => error
    render json: { error: error.to_s }
  end

  # Finds users created after date specified in GET param `after`
  # @return [Array<User>] list of users
  # @api public
  def created_after
    after = Date.parse(params[:after])
    render json: UserFinder.created_after(after)
  end
end

Along with this code you write some unit tests for the different edge cases you expect your controller to handle:

$ rspec --format documentation users_controller_spec.rb

  returns a user when given a valid id
  renders JSON error when given an invalid id

  returns multiple users given an early date
  excludes users created before date and includes users after
  renders empty array when date is in the future

Finished in 0.00433 seconds (files took 0.23881 seconds to load)
5 examples, 0 failures

You deploy your new features and move on to your next task. Later, you find out that the front end team reported a bug in your API. Apparently every request they make returns

{
  "error": "Could not find User with 'id'=0"
}

That same day you find out that the marketing team thinks your “new users” endpoint doesn’t work either. Apparently they sometimes get empty results when they shouldn’t. You end up spending the day debugging for your co-workers and eventually figure out what they were doing wrong.

To the front end developer you explain

The API expects the parameter user_id but you specified id. My code ends up getting nil when it tries to read the user_id parameter, which is coerced to 0, which explains why you always got that error.

Moving on to the marketing team, you explain

You need to write your dates in the format "YYYY-MM-DD". The problem was when you were searching things like “last December” which ruby parses as December of this year.

What if we ran mutest on this code before shipping it? Running mutest on UsersController we see the following alive mutations:

 def created_after
-  after = Date.parse(params[:after])
+  after = Date.iso8601(params[:after])

 def created_after
-  after = Date.parse(params[:after])
+  after = Date.parse(params.fetch(:after))

 def show
-  render(json: UserFinder.find(params[:user_id].to_i))
+  render(json: UserFinder.find(Integer(params[:user_id])))
 rescue UserFinder::RecordNotFound => error
   render(json: { error: error.to_s })

 def show
-  render(json: UserFinder.find(params[:user_id].to_i))
+  render(json: UserFinder.find(params.fetch(:user_id).to_i))
 rescue UserFinder::RecordNotFound => error
   render(json: { error: error.to_s })

Mutest is helping me reduce the side effects that my application will permit. These four mutations eliminate subtle bugs which produce misleading errors and incorrect output.

1. Requiring parameters with Hash#fetch


In both actions before we used Hash#[] which implicitly returns nil if the specified key is not present. Hash#fetch on the other hand will raise an error if the specified key is not present. As a result, mutest makes me think about the use case where an implementer of the API does not provide an expected parameter.
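The difference in a nutshell:

```ruby
params = { 'user_id' => '42' }

params['missing']              # => nil, silently
params.fetch('user_id')        # => "42"
params.fetch('missing', 'n/a') # => "n/a", an explicit default

begin
  params.fetch('missing')      # no default: fail loudly
rescue KeyError
  :key_error                   # => :key_error
end
```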

2. Better type coercion with Kernel#Integer


In UsersController#show we called #to_i on our user_id parameter. This ended up coercing nil into 0 which made our final error message more confusing. #to_i will do its best to coerce any input, but this is often not what we want:

nil.to_i     # => 0
'hello'.to_i # => 0

Mutest replaces this with Kernel#Integer which is more strict:

Integer(nil)     # => TypeError: can't convert nil into Integer
Integer('hello') # => ArgumentError: invalid value for Integer(): "hello"
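When you do want a forgiving call site, wrapping Kernel#Integer keeps the strictness in one place. A sketch (strict_id is my name, not part of any library):

```ruby
# Coerce strictly, but translate bad input into nil rather than 0.
def strict_id(raw)
  Integer(raw)
rescue TypeError, ArgumentError
  nil
end

strict_id('42')    # => 42
strict_id('hello') # => nil
strict_id(nil)     # => nil
```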

3. Rejecting invalid dates with Date.iso8601


In UsersController#created_after we called Date#parse which tries to parse any string it thinks could be a date. This sounds handy, but in practice it often can be a subtle source of bugs since all it really needs to see are two adjacent numbers or three letters which could be a month abbreviation:

# Seems useful!
Date.parse('May 1st 2015')      # => #<Date: 2015-05-01>
Date.parse('2015-05-01')        # => #<Date: 2015-05-01>

# Never mind
Date.parse('Maybe not a date')  # => #<Date: 2015-05-01>
Date.parse('I am 10 years old') # => #<Date: 2015-10-10>

Ruby has many more specific date parsing methods. In this case mutest found that Date.iso8601 still works with the test cases we specified:

# Actually useful!
Date.iso8601('2015-05-01')        # => #<Date: 2015-05-01>
Date.iso8601('May 1st 2015')      # => invalid date (ArgumentError)
Date.iso8601('Maybe not a date')  # => invalid date (ArgumentError)
Date.iso8601('I am 10 years old') # => invalid date (ArgumentError)

Each mutation was a better fit for the use case in question. The replacement methods were more likely to raise errors when given unexpected input. Knowing this during the development cycle pushes me to handle these edge cases, since I don't want an exception to go uncaught and produce an application error. Even if I do forget to cover one of these cases, the alternative is still preferable: an exception is thrown in production instead of weird behavior silently degrading my app's quality for months. I learn about the error the first time a user triggers it instead of the first time a user complains.

Add mutest to your workflow

Mutest is a powerful tool for improving your code. At BlockScore we try to reach 100% mutation coverage before code is shipped to production. You don’t have to aim for 100% coverage though to start benefiting from tools like mutest. Simply running mutest against your codebase and seeing what it can change should help you better understand what tests you are missing and what code could be improved.