Ensuring the Consistency of Accounting Information in Moneybird

written by Ivo van Hurne

Moneybird allows you as an entrepreneur to focus on sending invoices to your customers. No need to occupy yourself with accounting, as Moneybird will take care of this by itself. How do we make sure your books are always in order?

Accounting 101

The record of all things that are happening in your Moneybird account is called the journal. The journal contains transactions that consist of two or more entries. These transactions move money between ledger accounts. A ledger account can be anything from an actual real-world bank account to a virtual account like Accounts Receivable.

When moving money between ledger accounts in a transaction, the sum of all entries must be zero. If it is not, money is lost during the transaction.

Example

  1. Suppose you send an invoice to one of your customers. This invoice will have to be paid by the customer, so Moneybird moves the total amount of the invoice to Accounts Receivable.
  2. Your customer pays the full amount by bank transfer. The total amount of the invoice will be subtracted from Accounts Receivable and added to your bank account.
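
To make the sum-to-zero rule concrete, here is a minimal Ruby sketch of the journal entries behind step 1 (the amounts, account names and 21% VAT split are illustrative, not Moneybird's actual schema):

# Hypothetical journal transaction for a 121 euro invoice (100 net + 21 VAT).
# Positive amounts are debits, negative amounts are credits.
entries = [
  { ledger_account: "Accounts Receivable", amount:  121.00 },
  { ledger_account: "Revenue",             amount: -100.00 },
  { ledger_account: "VAT Payable",         amount:  -21.00 },
]

# The transaction is only valid when no money is lost along the way.
raise "Unbalanced transaction" unless entries.sum { |e| e[:amount] }.zero?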

The Journal in Moneybird

In Moneybird the journal is represented as a view in our database (PostgreSQL). The view collects information from many different places in your account and converts it into journal entries. This makes it very easy to check that everything is in balance and to generate reports.
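
The view definition itself is not shown in this post, but conceptually it is a union of the specialized je_X_to_Y views introduced below. A minimal sketch in a Rails migration (the view name is made up; the je_X_to_Y names are taken from the refresh script later in this post):

class CreateJournalView < ActiveRecord::Migration
  def up
    # Conceptual sketch only: union the specialized views into one journal view.
    execute <<-SQL
      CREATE VIEW journal AS
        SELECT * FROM je_invoices_to_accounts_receivable
        UNION ALL
        SELECT * FROM je_invoices_to_taxes
        UNION ALL
        SELECT * FROM je_details_to_ledger_accounts;
    SQL
  end

  def down
    execute "DROP VIEW journal;"
  end
end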

Unfortunately, querying this view takes a long time because of its complexity. Therefore we use a materialized view that is updated when needed.

Materialized Views in PostgreSQL

Support for materialized views in PostgreSQL is not that great: only a basic materialized view is available. It is regenerated completely from scratch on every refresh, and the view is unavailable for reading while it is being updated.

We implemented our own solution that updates just the journal entries for entities that have been changed and keeps the view available for reading during updates.

Updating journal entries for an invoice

-- Delete old journal entries
DELETE FROM journal_entries WHERE document_type='Invoice' AND document_id=1234;

-- Insert new journal entries from other views
INSERT INTO journal_entries SELECT * FROM je_invoices_to_accounts_receivable WHERE document_type='Invoice' AND document_id=1234;
INSERT INTO journal_entries SELECT * FROM je_invoices_to_taxes WHERE document_type='Invoice' AND document_id=1234;
INSERT INTO journal_entries SELECT * FROM je_details_to_ledger_accounts WHERE document_type='Invoice' AND document_id=1234;

The je_X_to_Y views that provide the input for our materialized journal_entries view are more specialized views that extract journal entry information from certain entities in a Moneybird account.

Concurrent Updates

This may not seem very complicated, but it gets interesting when multiple threads try to update journal entries for the same entity at the same time. This can happen because multiple users can access the same Moneybird account concurrently.

Additionally, external events can automatically update entities in a Moneybird account, for example when a customer pays your invoice online via PayPal or iDEAL, or when you are using our API.

Example

  1. A user registers a partial payment for an invoice in Moneybird. The invoice is saved and old journal entries are deleted. New journal entries are starting to be inserted.
  2. At the same time, another user registers another partial payment for the same invoice. Journal entries are updated.
  3. The journal entry update for the first partial payment finishes, selecting part of the information from the new situation created in step 2. Our materialized view is now in an inconsistent state.

Transaction Isolation

Fortunately PostgreSQL allows us to isolate transactions from changes that are happening concurrently. It provides a REPEATABLE READ transaction isolation level that ensures all statements in a transaction see the same snapshot of the database.

Furthermore, when committing the transaction, the database checks whether another transaction has committed changes to the same rows we were trying to update. In that case, our current transaction is aborted automatically.
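
In Rails 4 and later, this isolation level can be requested per transaction. A minimal sketch of how the refresh from the previous section might be wrapped (the helper methods are hypothetical, not Moneybird's actual code):

def update_journal_entries(document_type, document_id)
  # REPEATABLE READ: every statement in this block sees the same snapshot.
  # If a concurrent transaction commits changes to the same rows first,
  # PostgreSQL aborts this transaction and the block raises an error.
  ActiveRecord::Base.transaction(isolation: :repeatable_read) do
    delete_journal_entries(document_type, document_id)  # hypothetical helper
    insert_journal_entries(document_type, document_id)  # hypothetical helper
  end
end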

This solves our problem of inconsistent view states, but introduces a new problem: how can we be sure that the view always contains the most recent state if some transactions are aborted?

Delayed Update

Keeping the view up to date in real time is quite hard. Therefore we decided to split the problem into two smaller problems, each with its own solution:

  1. First, we want to make sure that no user action will lead to unbalanced journal entries. We check this immediately using entity-specific database constraints, for example when registering a payment.

  2. Second, we want to make sure that accounting reports are always in a consistent state and eventually reach the most recent state. For each journal entry we record the time it was last updated and compare this periodically to the last updates of invoices, payments, et cetera. Should outdated journal entries be found, we update them to the latest version (see the sketch below).
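
A sketch of what the staleness check in step 2 could look like for invoices (table and column names are assumptions based on this post):

# Find invoices whose journal entries are missing or older than the
# invoice itself; these are refreshed with the update routine shown earlier.
stale_invoice_ids = ActiveRecord::Base.connection.select_values(<<-SQL)
  SELECT invoices.id
  FROM invoices
  LEFT JOIN journal_entries
    ON journal_entries.document_type = 'Invoice'
   AND journal_entries.document_id   = invoices.id
  GROUP BY invoices.id
  HAVING MAX(journal_entries.updated_at) IS NULL
      OR MAX(journal_entries.updated_at) < invoices.updated_at
SQL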

Conclusions

Caching complicated views that are accessed concurrently can be a challenge. By using PostgreSQL’s transaction isolation and delaying certain updates, we ensure that they are kept in a consistent state.

We are hiring!

Want to work on the future of accounting using great techniques like Docker, Ruby mutations and advanced PostgreSQL usage for versioning? We are hiring software engineers! Drop your resume at info@moneybird.com.

Using Buildkite for Docker builds

Each project in the Moneybird stack has its own Dockerfile in the root of the project. This allows engineers to change a project’s dependencies while they work on it. The Dockerfile also makes dependency changes traceable, making upgrades much easier than before.

When an engineer pushes new code to our Git repository, we want to test this code and make it ready for deployment. There are a lot of great tools for automating builds and testing. We are using Buildkite because it delivers the best combination of easy configuration and flexibility.

Choosing a CI tool

There are many CI tools available, and many of them are capable of executing an arbitrary shell script to build things like Docker images. When we started using Docker, we were running Jenkins and tried to incorporate all our requirements:

  1. We want to start builds when commits are pushed to GitHub

  2. We want the tool to report the status back to GitHub using the commit status API

  3. We want the tool to report to our internal chat application: HipChat at the time, Slack today

  4. We want the tool to report by email when a team member prefers to receive email

Installing plugins and maintaining Jenkins takes a lot of time. We were able to build Docker images from the Dockerfiles and test them using Jenkins.

Due to the hassles with Jenkins, we decided to look for a hosted solution. We don’t need to run CI internally, and there are great hosted solutions like Travis CI, Codeship and Wercker.

Docker and hosted CI tools

Many hosted CI tools run your test suite on a clean virtual machine. This machine has a predefined set of dependencies installed, allowing your test suite to start quickly. Docker itself has these dependencies defined inside the image. It depends on cached layers to speed up the process of building images.

At the time of our research (early 2014), none of the hosted CI tools had a solution for running Docker. Building a Docker image was possible but slow: in a clean virtual machine the Docker build would take ages to complete because nothing was cached.

Buildkite provides a combination of a SaaS frontend and agents running on any server you like. The frontend fulfills all our requirements regarding GitHub integration and notifications about builds (and no maintenance hassles).

We run 10 Buildkite agents on a bare metal server located in our office. The agents take on jobs from the Buildkite servers and build our projects. Biggest advantage: all agents run on a single server, so building images is blazing fast thanks to caching!

Setting up Buildkite

Buildkite is very easy to set up: in their online tool, you define how you want to build your project. Currently, we have just one build step, calling a bin/build_container script in the project. Buildkite has already checked out the right commit that needs to be built.

# Build image
docker build -t registry.acme.org/image_name:$BUILDKITE_COMMIT .

# Push to registry
docker push registry.acme.org/image_name:$BUILDKITE_COMMIT

# Start dependencies: make sure all dependencies are running
docker run -d --name=redis_$BUILDKITE_COMMIT registry.acme.org/redis:2.8-1
REDIS_IP=$(docker inspect --format='{{ .NetworkSettings.IPAddress }}' redis_$BUILDKITE_COMMIT)

# Run container with specs
docker run --rm=true --name=image_name_test_$BUILDKITE_COMMIT -e REDIS_IP="$REDIS_IP" registry.acme.org/image_name:$BUILDKITE_COMMIT bin/cibuild

# Tag image with current branch name and push when specs are green
docker tag -f registry.acme.org/image_name:$BUILDKITE_COMMIT registry.acme.org/image_name:$BUILDKITE_BRANCH
docker push registry.acme.org/image_name:$BUILDKITE_BRANCH

The bin/cibuild script contains the test suite to be run. All output of the script is handled by the Buildkite agent and pushed back to their online tool.

More configuration in Buildkite

At this moment, we only have one step in our builds. Buildkite provides options to create a more complex workflow. This allows you to run jobs in parallel for faster results. Another option is to incorporate a step that deploys only when the job runs on the master branch. We are experimenting with these features and will report back on this blog with the results!
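
For example, a deploy step could simply bail out on other branches. A hypothetical sketch in Ruby (the script and deploy command are made up; Buildkite exposes the branch being built as an environment variable):

#!/usr/bin/env ruby
# bin/deploy_if_master -- hypothetical conditional deploy step
branch = ENV.fetch("BUILDKITE_BRANCH", "")

if branch == "master"
  system("bin/deploy") || abort("Deploy failed")  # replace with your deploy command
else
  puts "Skipping deploy for branch #{branch}"
end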

We are hiring!

Want to work on the future of accounting using great techniques like Docker, Ruby mutations and advanced PostgreSQL usage for versioning? We are hiring software engineers! Drop your resume at info@moneybird.com.

Sharing Rails Assets using Git Submodules

Recently we needed two independent Ruby on Rails apps to share some parts of their assets. Both applications use the same CSS and JavaScript files for the main layout. We wanted to maintain those files in one place, without copying them over and over again.

Using a Rails engine (that contains the CSS and JavaScript files) seems to be the most obvious way to achieve this. However, we found it not to be the most practical solution during development:

  1. A Rails engine is a separate project in your text editor, and switching between these projects can be annoying.

  2. You need a lot of discipline to commit changes to the Rails engine, as your normal commit flow does not force you to do so.

  3. For just sharing some assets, a Rails engine seems like overkill.

We looked further and found that Git submodules solve many of these issues. Basically, you just create a Git repository with javascripts and stylesheets folders and put your shared files in place, like this:

shared-assets/javascripts/menu-interactions.js.coffee
shared-assets/stylesheets/layout.css.scss
shared-assets/stylesheets/menu.css.scss

The next step is to add your shared assets repository as a git submodule to both of your Rails projects.

git submodule add git@github.com:yourname-here/shared-assets.git app/assets/shared

Rails needs to know that it should look in the shared assets folder. You can arrange this by extending the asset paths in the application.rb file.

config.assets.paths << "#{Rails.root}/app/assets/shared/stylesheets"
config.assets.paths << "#{Rails.root}/app/assets/shared/javascripts"

Working with the shared assets

Your shared assets are now in the app/assets/shared directory of your Rails application. Changes within the submodule are tracked by your favorite Git client (although we had some issues with GitHub’s Mac client).

Sourcetree initializes submodules automatically and pulls the submodule while you pull your project. You can also do it manually with the following Git commands:

git submodule update --init
git submodule foreach git pull origin master

Conclusion

Git submodules prove to be handy while developing Moneybird. Style changes for multiple applications are now applied within minutes, while keeping a comfortable workflow.

Using Events for a Modular JavaScript Architecture

Engineering an architecture for front end code is often the last thing a software engineer thinks about. On the back end, frameworks like Rails can guide you towards a good architecture. On the front end, many JavaScript frameworks are on offer, but not all of them tell you anything about good architecture.

At Moneybird we decided to use a very small layer of JavaScript in our application. Our views are plain HTML, enhanced with JavaScript behavior. This behavior is often described in a jQuery widget. Both custom widgets for our projects and open source widgets co-exist in our codebase. The behavior we describe in JavaScript is purely used to enhance the experience of the end user; even without JavaScript our application would be quite usable.

In this post I want to explain how we keep our JavaScript modular by using events. We write all our JavaScript in CoffeeScript, so the examples will be in this language. Although we use jQuery and jQuery UI widgets, the techniques described can be applied to vanilla JavaScript and other libraries.

Writing JavaScript widgets

Usually, the behavior for a view starts in a simple CoffeeScript file. Once the project grows, some behavior is repeated and we decide to create a widget. For us, a widget is a building block that can be used on any page in our application, as long as the required HTML structure is available. An example of such a widget is a drop-down menu.

jQuery.widget "moneybird.dropdown",
  _create: ->
    @element.on "click", =>
      if !@element.hasClass("active")
        @open()
      else
        @close()

  open: ->
    @element.addClass("active")
    @element.next().show()

  close: ->
    @element.removeClass("active")
    @element.next().hide()

This widget can be applied to any HTML element on the page that is followed by an element containing the drop-down. When clicking the element, the class is changed and the drop-down is shown.

<a href="javascript:;" data-behavior="dropdown">Options</a>
<div class="dropdown">
  <ul>
    <li><a href="...">Option 1</a></li>
    <li><a href="...">Option 2</a></li>
  </ul>
</div>

Instead of using an ID or CSS class, we use the data-behavior attribute in HTML to apply behavior to an element. This increases the separation of style and behavior, allowing a front end engineer to change the class names without affecting the behavior.

$('[data-behavior~=dropdown]').dropdown()

Adding widgets and requirements

Chances are the widget described above will not be the only widget on a page. For example, the drop-down can be used on a page with a widget for toggling content:

<a href="javascript:;" data-behavior="toggle" data-toggleable="some-content">Toggle</a>
<div data-toggle="some-content">...</div>

jQuery.widget "moneybird.toggleContent",
  _create: ->
    @element.on "click", =>
      @toggle()

  toggle: ->
    $("[data-toggle='#{@element.data("toggleable")}'").toggle()

$('[data-behavior~=toggle]').toggleContent()

The two widgets can operate independently, but sometimes communication between the widgets is required. Such a requirement could be:

“When the drop-down is active, it should not be possible to toggle the content”

The easiest way to satisfy this requirement is to add a check to the toggle widget:

...
toggle: ->
  if !$('[data-behavior~=dropdown]').hasClass("active")
    $("[data-toggle='#{@element.data("toggleable")}']").toggle()
...

This check violates our modular architecture, because it reaches beyond the limits of the toggle widget. Suddenly the toggle widget queries something about the page which is not required to be present. Furthermore, the toggle widget knows an implementation detail about the drop-down widget: maybe the developer of the drop-down widget changes the class active to open, breaking the behavior of the toggle widget.

Using events for communication

JavaScript has a great event handling system. It can be used for events from the browser or end-user, but also for custom events. The interaction between the drop-down and toggle widget should be defined on a higher level and not in the widgets themselves. The first step is to make it possible to disable the toggle widget temporarily:

jQuery.widget "moneybird.toggleContent",
  _create: ->
    @enable()
    @element.on "click", =>
      @toggle()

  toggle: ->
    if !@disabled
      $("[data-toggle='#{@element.data("toggleable")}'").toggle()

  disable: ->
    @disabled = true

  enable: ->
    @disabled = false

The jQuery widget factory allows us to call these methods on elements that have the widget initialized:

$('[data-behavior~=toggle]').toggleContent("disable")
$('[data-behavior~=toggle]').toggleContent("enable")

The next step is to determine when to disable and enable the widget. Therefore we need to know when the drop-down is opened and closed. We do this by triggering an event from the drop-down widget.

jQuery.widget "moneybird.dropdown",
  _create: ->
    @element.on "click", =>
      if !@element.hasClass("active")
        @open()
      else
        @close()

  open: ->
    @element.addClass("active")
    @element.next().show()
    @element.trigger("dropdown:open")

  close: ->
    @element.removeClass("active")
    @element.next().hide()
    @element.trigger("dropdown:close")

At this point, we can start listening to the events from the drop-down widget. In a CoffeeScript file that is loaded on the page with both widgets, we can listen to the events and change the state of the toggle widget.

$('[data-behavior~=dropdown]').on "dropdown:open", ->
  $('[data-behavior~=toggle]').toggleContent("disable")

$('[data-behavior~=dropdown]').on "dropdown:close", ->
  $('[data-behavior~=toggle]').toggleContent("enable")

Conclusion

We use custom events in JavaScript to keep our widgets isolated from the page they are used on. Communication between widgets is always implemented via methods and events. This allows us to create many widgets and use them independently from each other. More information about event handling in jQuery can be found in the API; read about trigger() and on(). More about writing your own jQuery UI widgets can be found in the guide How To Use the Widget Factory.

Preserving Model History with ActiveRecord and PostgreSQL

When you create an invoice in Moneybird, you select one of your contacts as the recipient. While the invoice is still a draft, you expect any changes to the contact’s address information to be reflected on the invoice as well. However, this shouldn’t happen for invoices that have been sent already, as that would damage the integrity of your administration. Additionally, we want to be able to revert a contact to an earlier state or restore a contact that was deleted by accident.

Versions with a view

We could solve this using any of the myriad versioning solutions available for Ruby on Rails. Instead, we chose a much simpler solution using PostgreSQL.

In this solution our contacts table is not a normal table any more; it’s a view on the contacts_versions table. All versions of a contact are stored as rows in this table, and the view always shows the most recent version.
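
The view definition could, for example, use PostgreSQL’s DISTINCT ON to pick the newest row per contact. A sketch in a Rails migration (the column names follow the rule shown below; the exact definition is an assumption):

class CreateContactsView < ActiveRecord::Migration
  def up
    # DISTINCT ON keeps one row per contact_id; ordering by id DESC makes
    # that row the most recent version (sketch, not the actual definition).
    execute <<-SQL
      CREATE VIEW contacts AS
        SELECT DISTINCT ON (contact_id)
          contact_id AS id,
          id AS contact_version_id,
          firstname, lastname, created_at, updated_at, deleted
        FROM contacts_versions
        ORDER BY contact_id, id DESC;
    SQL
  end

  def down
    execute "DROP VIEW contacts;"
  end
end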

To create or update a contact, we now have to add a new row to the contacts_versions table containing the updated contact. This would be a bit annoying, not to mention incompatible with ActiveRecord. That’s where PostgreSQL rules come in.

Using rules we can tell PostgreSQL to execute an alternative statement when we INSERT, UPDATE or DELETE a row in a view. This allows us to use the view in ActiveRecord just like we would use a normal table. For example, for inserting new contacts in the contacts view we define the following rule:

CREATE RULE insert_contacts AS
    ON INSERT TO contacts DO INSTEAD
      INSERT INTO contacts_versions (
        firstname, lastname, created_at, updated_at, contact_id)
      VALUES (new.firstname, new.lastname,
              new.created_at, new.updated_at,
              nextval('contacts_id_seq'::regclass))
      RETURNING contacts_versions.contact_id AS id,
        contacts_versions.id AS contact_version_id,
        contacts_versions.firstname,
        contacts_versions.lastname,
        contacts_versions.created_at,
        contacts_versions.updated_at,
        contacts_versions.deleted;

Inserting a record

Now we can insert a contact the usual way using ActiveRecord:

irb(main):001:0> Contact.create(firstname: 'John', lastname: 'Doe')
=> INSERT INTO contacts (firstname, lastname) VALUES ('John', 'Doe')

ActiveRecord will execute an INSERT statement on the contacts view. PostgreSQL knows how to handle this, because of the rule we defined. Afterwards, the contacts view contains the new contact:

id | firstname | lastname | contact_version_id
---+-----------+----------+-------------------
 1 | Jane      | Doe      | 2
 2 | John      | Doe      | 3

The contacts_versions table now looks like this:

id | contact_id | firstname | lastname | deleted
---+------------+-----------+----------+--------
 1 | 1          | Jessica   | Doe      | false
 2 | 1          | Jane      | Doe      | false
 3 | 2          | John      | Doe      | false

Pinning the version

Once a particular invoice is sent, we save the current version ID of its contact as contact_version_id in the invoices table. We wrote a simple Concern for ActiveRecord that overrides the invoice’s association to a contact when the contact_version_id field is set.

module Concerns
  module VersionedRelationConcern
    extend ActiveSupport::Concern

    module ClassMethods
      def versioned_relation(name)
        define_method "#{name}_with_versioning" do
          if self["#{name}_version_id"].blank?
            self.send("#{name}_without_versioning")
          else
            "#{self.class.reflect_on_association(name).klass}Version".constantize.find(self["#{name}_version_id"])
          end
        end

        alias_method_chain(name, :versioning)
      end
    end
  end
end
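
Wiring this into a model is then a one-liner per association. A hypothetical Invoice model using the concern (the association setup and method name are assumed for illustration):

class Invoice < ActiveRecord::Base
  include Concerns::VersionedRelationConcern

  belongs_to :contact
  versioned_relation :contact

  # Hypothetical: called when the invoice is sent, pinning the contact
  # to the version that is current at that moment.
  def pin_contact_version!
    update!(contact_version_id: contact.contact_version_id)
  end
end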

The final part of the system is the ContactVersion class. It’s basically the same as a normal Contact class with the table name changed to contacts_versions. Additionally, we mark it read-only to prevent accidental changes to old versions.

require 'activerecord-be_readonly'

class ContactVersion < Contact
  self.table_name += '_versions'
  be_readonly
end

Conclusions

Preserving the history of your ActiveRecord models doesn’t have to be complicated. Use PostgreSQL to keep track of the different versions and access them using a simple override in ActiveRecord.

Introduction to Mutations

About a year ago we found ourselves with an overly complicated codebase. The amount of business logic was growing constantly. We followed the ‘fat model’ approach, meaning most of this business logic was contained in our models. Actions on our models could have many obscure side effects caused by ActiveRecord callbacks.

The list of business requirements grows continuously, so the number of actions on models is growing at a constant rate too. It is hard to keep a good overview of all callbacks that are executed when performing an action, which makes the codebase error-prone. We needed a way to structure our code to ensure its long-term maintainability.

The solution came in the form of a Ruby gem named Mutations. Mutations offers a way to organize our business logic into separate “commands”. For example:

require "mutations"

module Contacts
  class CreateContact < Mutations::Command
    required do
      string :first_name
      string :last_name
    end

    optional do
      boolean :receive_newsletter
    end

    def execute
      instance = Contact.new(first_name: first_name, last_name: last_name)

      if instance.save and receive_newsletter
        NewsLetters::SendWelcome.run!(contact: instance.id)
      end

      instance
    end
  end
end

The example above allows us to create a contact using the following call:

Contacts::CreateContact.run!(first_name: "Money", last_name: "Bird")

As you can see in the first example, we’ve specified “required” and “optional” blocks. Mutations uses these blocks to automatically validate the input and to throw away anything that isn’t listed.

In doing so we’re effectively programming by contract: we’ve exposed a contract with which the caller must comply. If the input requirements are met, the logic is executed. If they aren’t, execution halts, usually resulting in an error. Errors thus surface with a clear message just after their inception, which wouldn’t be the case without Mutations.
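
Besides run!, which raises on invalid input, Mutations offers a non-raising run that returns an outcome object; this is how a caller would typically consume the command:

outcome = Contacts::CreateContact.run(
  first_name: "Money",
  last_name: "Bird",
  receive_newsletter: true
)

if outcome.success?
  contact = outcome.result  # the Contact returned by execute
else
  outcome.errors.symbolic   # e.g. { first_name: :required } on missing input
end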

Conclusion

Mutations helped us restructure our codebase by neatly organizing our business logic into commands. It rid our models of unwanted hooks and made our controllers a lot leaner.

3 Improvements for a Rails I18n Workflow

Localizing a web application can be a real pain, especially when you don’t use the right tools. I18n in Rails works great and offers a ton of flexibility to optimize your workflow. A lot of tools focus on the translation process for the language files but overlook the development workflow. At Moneybird we developed our own I18n workflow to make working with translations in development faster.

1. Using an I18n exception handler

We defined our own I18n exception handler to handle missing translations. Each missing translation is added to config/missing_translations.yml. When an untranslated key is used in the source code, the key is automatically added to this YAML file, including the right scope.

unless Rails.env.production?
  I18n.exception_handler = I18n::Workflow::ExceptionHandler.new
end

I18n.t(:foobar)
I18n.t(:foobar, scope: [:first, :second])
# Automatic Rails view scoping
t(".foobar")

This results in the following missing translations YAML.

---
en:
  first:
    second:
      foobar: ""
  foobar: ""
  some_controllers:
    action:
      foobar: ""

This file always contains a list of keys you need to translate. After translating the keys in config/missing_translations.yml, a simple Ruby script merges the translations with the main locale file in config/locales.

bundle exec merge_missing_translations

Advantages: Easy overview of keys to translate, scoping is always optimal and we can change it whenever necessary. The main locale files are always nicely sorted.

2. Cascading

In previous workflows we always stored many keys with the same translation. Moneybird is centered around invoices, so the key invoice: Invoice would exist multiple times in a translation file. I18n has a great feature called cascade which allows the lookup to cascade to higher levels in the YAML file.

en:
  invoice: Invoice
  download: Download
  invoices:
    show:
      download: Download PDF

To activate cascading, you need to include it into the backend:

I18n.backend.class.send(:include, I18n::Backend::Cascade)

# Inside view invoices/show.html.erb
t(".invoice", cascade: true)   # => "Invoice"
t(".download", cascade: true)  # => "Download PDF"
t("download", cascade: true)   # => "Download"

# To prevent typing cascade: true, we created
# a backend extension which makes cascade true by default.
I18n.backend.class.send(:include, I18n::Workflow::AlwaysCascade)

Using the Rails translation helper we get good controller and action scoping for free. Specific translations can be included in a lower level of the YAML file, while global translations are included in the higher levels and no longer translated multiple times.

Advantages: Reduced number of duplicate keys, and therefore fewer keys to maintain and to translate at a later stage.

3. Scoping

Using cascading by default can cause conflicts between keys and scopes. When calling I18n.t("invoices") with the previously mentioned YAML file, we get a Hash with all translations in the invoices scope, while we probably want just the translation for the key invoices.

These issues are resolved by explicitly appending _scope to each scope in I18n. Inside our projects, the previous YAML file would look like:

en:
  invoice: Invoice
  download: Download
  invoices: Invoices
  invoices_scope:
    show_scope:
      download: Download PDF

By modifying the backend we were able to extend the behaviour of I18n to always append _scope to a requested scope.

I18n.backend.class.send(:include, I18n::Workflow::ExplicitScopeKey)

The exception handler also appends the full scope name to config/missing_translations.yml, so we hardly have to think about appending it during development.

Advantages: No conflicts between keys and scope, making cascading easier and less error-prone.

(Small) disadvantage: Some Rails helpers expect I18n to return a hash with options that can be used in the helper. For some specific cases the _scope should therefore be omitted.

4. Bonus: Continuous Integration

Our workflow has a really nice bonus: automation during CI. When a developer creates a new UI and uses translations that are not present in the locale file, these keys should be added to make the UI complete. When config/missing_translations.yml is filled after running our test suite, the tests fail and the developer is notified. This mechanism prevents untranslated keys from being present in our final product.
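
A minimal sketch of such a check as an RSpec suite hook (the hook and file handling are assumptions; any test framework’s after-suite callback would work):

# spec/support/missing_translations_check.rb -- hypothetical hook
RSpec.configure do |config|
  config.after(:suite) do
    path = Rails.root.join("config", "missing_translations.yml")
    translations = File.exist?(path) ? YAML.load_file(path) : nil
    raise "Missing translations found, see #{path}" if translations.present?
  end
end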

Conclusion

For the development of Moneybird our new I18n workflow proved to be valuable. Due to the great architecture of I18n we were able to improve our workflow without any monkey patching.

We have open sourced different parts of our I18n workflow. GitHub repo: https://github.com/moneybird/i18n-workflow