Lessons Learnt From a 5-month React Native Project

23.12.2019 — 14 Min Read — In React Native


Our journey to build a video maker Android app started more than 5 months ago. There were more potholes in the road to making it a successful one than any of us had initially anticipated. The project started with me, Jarrett and Nazmi for the first phase, then just Nazmi and me for the second. In this article, I will share some of the lessons we learnt along the way in the hope that it saves someone from repeating those mistakes. Doing things the "right" way in Phase Two, while time-consuming initially, eventually gave us great productivity and allowed us to deliver a strong, performant product in the end.

Tracer bullet development

From the outset, we identified two main challenges that we were unfamiliar with: rendering video using FFmpeg on Android devices, and state management for the "workspace" where most of the user actions would take place (e.g. adding, updating and removing text, changing videos and photos, adding and removing scenes, etc.). We hit a fork in the road. We could aim for the low hanging fruit of crafting the UI, thus having some quick results to show the client. However, that would leave the FFmpeg rendering as a big unknown that we might get stuck on later. It seemed that the best way would be to split the work - I would tackle the biggest risk by creating an FFmpeg module (what The Pragmatic Programmer calls tracer bullet development), while the rest would get started on the more visible features.

If you haven't heard of tracer bullet development, it is the process of looking for important requirements where you have the biggest doubts and sense the highest risks, then prioritising your development so that these are the first areas you code.

The costs and benefits of tracer bullets, and the need for good communication

When you're exploring an unfamiliar area of programming, it's easy to fall into rabbit holes. FFmpeg is a powerful but complex multimedia framework, and during the exploration, we wanted to see how far we could push it. I set off trying to re-create complex text animations, easing functions, fades, and various transitions between scenes using green and white MP4 files as masks. FFmpeg being an esoteric field, it was difficult to find many examples - not even on Stack Overflow. Much time was spent in trial and error. By the time we had gained a good understanding of what FFmpeg was capable of, we had created hundreds of files in the process of testing out FFmpeg commands on macOS and Android.

Unfortunately, the more complex effects turned out to require too much time to render on Android devices. A thirty-second video could take up to two minutes to render on a fast device like the Samsung Galaxy S9, and we needed to cater to users of slower devices as well.

Although there was a variety of delightful effects we were now able to produce, the slow processing speed diminished the impressiveness of our achievement. After a lengthy discussion with the client, we realised that we had spent too much time perfecting complex animations that were far beyond the goals of the current phase. In hindsight, we should have resisted the itch to see how far we could push the framework, and instead shown our early results to the client to find out whether the simple effects were good enough. This was a microcosm of the waterfall process we should not have been using - in wanting to impress the client with a big bang, we hid our early results, only to realise later that we had gone off track. It was a wake-up call to increase the frequency of communication with the client for the rest of the project. Nonetheless, sending our tracer bullet farther than the target had the benefit of letting us know which kinds of effects and transitions were more performant than others. This helped us in the second phase when there was a request to improve the render durations.

On the bright side, because we needed the FFmpeg module to plug into the rest of the system with minimal setup, the module we created had a minimal interface: it took in a nested JavaScript object and output commands that could be piped directly into the react-native-ffmpeg library, which performed the rendering on the device. This served us well when it came time for the rewrite in Phase Two. More on this later.
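To make that interface concrete, here is a hedged sketch of what such a Template-to-command function could look like. The field names and the single concat filter are simplified illustrations, not our actual implementation:

```javascript
// Sketch: nested Template object in, FFmpeg argument string out,
// ready to hand to react-native-ffmpeg. Heavily simplified - a real
// command would need per-stream labels, scaling, overlays, etc.
function buildFfmpegCommand(template) {
  const inputs = template.scenes
    .map(scene => `-i ${scene.source}`)
    .join(' ');
  // Concatenate all scene inputs into one output video.
  const filter = `concat=n=${template.scenes.length}:v=1:a=0`;
  return `${inputs} -filter_complex "${filter}" ${template.output}`;
}

const command = buildFfmpegCommand({
  scenes: [{ source: 'scene1.mp4' }, { source: 'scene2.mp4' }],
  output: 'out.mp4',
});
console.log(command);
```

The payoff of keeping the interface this narrow is that the rest of the app only ever deals in plain objects; the command-building details stay swappable.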

Putting too much in React Context

To render a video, we needed to pass a JavaScript object to the FFmpeg module. It made sense that we should start this JS object (let's call this the "Template" from now on) with default values whenever we initialised the Workspace. The user would interact with each scene through the Workspace, and each action would change the Template, which would result in live visual updates in the Workspace. Each Template was made up of 3 to 6 scenes, and an audio track that played across all scenes. Each scene would have a photo or video background and zero to several text objects overlaid.
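As an illustration, a freshly initialised Template might have looked something like this - the field names here are hypothetical, not our real schema:

```javascript
// A default Template: one audio track across all scenes, each scene
// with a photo or video background and zero or more text overlays.
const defaultTemplate = {
  audio: { source: 'track.mp3', volume: 1.0 },
  scenes: [
    {
      background: { type: 'photo', source: 'beach.jpg' },
      texts: [{ value: 'Summer Sale', x: 0.5, y: 0.2, fontSize: 24 }],
    },
    { background: { type: 'video', source: 'intro.mp4' }, texts: [] },
    { background: { type: 'photo', source: 'logo.png' }, texts: [] },
  ],
};
console.log(defaultTemplate.scenes.length); // 3
```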

The workspace was complex. Many components needed to share state with other components that could be far away in the hierarchy. To avoid prop drilling, we turned to the React Context API. The top-level component in the Workspace (named ProjectEdit) would initialise the EditProjectStore. A dispatch function was passed through the EditProjectStore to any component that needed to make changes, and the changes themselves were handled by a useReducer hook. This all seemed good in theory.

However, we soon realised that the app's performance was poor. It turned out that many components were re-rendering when they should not have been. This was because many components received objects from the EditProjectStore, and the reducer functions were creating new object references each time they were called, so consumers re-rendered even when the values they cared about had not changed.
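To illustrate the reference problem, here is a minimal hand-rolled sketch (names hypothetical) of how a reducer that rebuilds nested objects defeats reference-equality checks:

```javascript
// Reducers that rebuild nested objects hand every consumer a new
// reference, so React.memo / context consumers re-render even when
// the underlying values are unchanged.
function reducer(state, action) {
  switch (action.type) {
    case 'SET_TITLE':
      // Spreading recreates `scenes` even though only `title` changed.
      return {
        ...state,
        scenes: state.scenes.map(s => ({ ...s })),
        title: action.title,
      };
    default:
      return state;
  }
}

const before = { title: 'Draft', scenes: [{ id: 1, text: 'Hi' }] };
const after = reducer(before, { type: 'SET_TITLE', title: 'Final' });

// The scene data is deep-equal, but the reference changed, so any
// component selecting `state.scenes` re-renders anyway.
console.log(after.scenes[0].text === before.scenes[0].text); // true
console.log(after.scenes === before.scenes);                 // false
```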

In addition, the EditProjectStore soon became cluttered. It had become a magnet for the code of any Template-related feature - which meant most features. It was 1,000 lines long at one stage. The EditProjectStore had become a dreaded God object that was difficult to comprehend and even harder to refactor.

The lesson here: don't use React Context for anything as complex as the deeply nested object we required as our Template. Ideal use cases are truly global data that justify re-rendering the entire hierarchy of components, such as the current authenticated user, theme, or locale for i18n. For our use case, it was better to use a solution like Redux or MobX to prevent unnecessary re-renders and to achieve better code organisation.

From the Redux documentation:

In general, use Redux when you have reasonable amounts of data changing over time, you need a single source of truth, and you find that approaches like keeping everything in a top-level React component's state are no longer sufficient.

From the MobX getting started guide:

State is the heart of each application and there is no quicker way to create buggy, unmanageable applications than by producing an inconsistent state or state that is out-of-sync with local variables that linger around. Hence many state management solutions try to restrict the ways in which you can modify state, for example by making state immutable. But this introduces new problems; data needs to be normalized, referential integrity can no longer be guaranteed and it becomes next to impossible to use powerful concepts like prototypes.

MobX makes state management simple again by addressing the root issue: it makes it impossible to produce an inconsistent state. The strategy to achieve that is simple: Make sure that everything that can be derived from the application state, will be derived. Automatically.

To be clear, it might have been possible to use React Context without the unnecessary re-renders by using useState instead of useReducer, but that would give rise to another set of problems in reconstituting all the disparate states into a usable JS object to be passed into the FFmpeg module.

In short, it's probably not a good idea to use React Context API for threading complex props into your components hierarchy.

Phase Two

At the end of the first phase (3 months), we had gotten to the point where we could not fix complaints about the slow performance of various components without a lengthy refactoring. We decided to take a step back and rethink. Using Hands-On Design Patterns with React Native by Mateusz Grzesiukiewicz as our guide, we started from scratch. It would be quicker and neater to start from a clean slate. Starting over is never ideal, but given the technical debt, and the fact that there were some modules we could pluck out and reuse, it was better to face the music now than to allow the debt to pile up any further.

Keeping our Tracker accurate

In our rush to complete features in the first phase, we had not paid enough attention to assigning points and keeping all feature requests documented in our Pivotal Tracker project. The consequence of this oversight was that it was difficult to answer questions about whether this or that could be added to the scope. In this new phase, as the anchor, I meticulously assigned points to the stories, and ensured that the stories were detailed and followed the format "As a [User], when I am on [Screen], I should be able to [do something], so that [reason]". Subsequently, it was easier to avoid scope creep in our progress meetings:

"Can you add feature Y?"

"Let's have a look at our tracker. This feature, it would probably be a four-pointer. Let's see how we can reprioritise." (Start typing the new story in Tracker, then dragging it ahead of another story) "Should we put it here? This will push feature X back by a week."

"Oh, it's not that important. Let's put it in the Ice Box for now."

Monitoring performance early

One of the major points of feedback we received at the end of Phase One was that the app felt slow. To ensure good performance this time round, we used the React Native Perf Monitor to keep an eye on frame rates from the moment we had a barebones app. It's not a replacement for profiling, but it gives a quicker feedback loop when developing new features.

MST (mobx-state-tree)

mobx-state-tree is a state container that combines the simplicity and ease of mutable data with the traceability of immutable data and the reactiveness and performance of observable data.

from the MST site

mobx-state-tree (I'll refer to it as MST from here onwards) is based on MobX, but is a more opinionated version of it. The central concept is that of a living tree. The tree consists of mutable, but strictly protected objects enriched with type information. This enables MST to generate snapshots automatically. Snapshots, which are basically plain JS objects made up of nested objects and primitives, were immensely useful to our app - it meant we could load a Template from JSON, and serialise an edited Template back to JSON. The strongly-typed data with its predefined shape gave us confidence that all the data required was present for the FFmpeg module to produce coherent commands for the react-native-ffmpeg library.
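MST's types API enforces that predefined shape for you. As a rough hand-rolled illustration of the guarantee it gave us - the field names below are hypothetical, and MST does this declaratively rather than with a manual check:

```javascript
// Sketch of the shape guarantee: before building FFmpeg commands,
// we know every required field is present. MST enforces this via
// typed models; this manual validator only illustrates the idea.
function isValidTemplateSnapshot(snapshot) {
  return (
    typeof snapshot.audio === 'string' &&
    Array.isArray(snapshot.scenes) &&
    snapshot.scenes.length >= 1 &&
    snapshot.scenes.every(
      scene =>
        typeof scene.background === 'string' && Array.isArray(scene.texts)
    )
  );
}

// Snapshots are plain JS objects, so they round-trip through JSON:
const snapshot = {
  audio: 'track.mp3',
  scenes: [{ background: 'beach.jpg', texts: [] }],
};
const restored = JSON.parse(JSON.stringify(snapshot));
console.log(isValidTemplateSnapshot(restored)); // true
```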

Code organisation was much improved. Business logic was moved to the MST models, leaving our components to be concerned only with presentation. An article by Semaphore does a great job of putting it into words:

There’s no shortage of ways to build applications with React, but one thing is for sure — React shines the brightest when it is used as a reactive view layer sitting on top of the state of your application. If you use it for more than that, e.g. the UI is responsible for determining when and how to load data or the UI stores certain aspects of state, this can quickly lead to code smell.

In order to keep our React projects from growing messy, we need to store the application state completely outside of the UI. This will not only make our application more stable, but it will also make testing extremely simple, as we can test the UI and the state separately.

Undo and Redo

Undo and redo can be tricky to implement if you're doing it from scratch. Since MST can generate a new snapshot every time any data changes in our Template, a naive implementation would be to save every snapshot into an array. An undo action would move the "frame" from the last snapshot backwards by one. This is often called time travelling - you can see an example of this simple approach here.
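A minimal sketch of that naive snapshot-array approach (hand-rolled for illustration, not MST's implementation):

```javascript
// Naive time travelling: every change pushes a full snapshot,
// and undo/redo simply move the "frame" through the array.
class NaiveTimeTraveller {
  constructor(initial) {
    this.history = [initial];
    this.frame = 0;
  }
  record(snapshot) {
    // Recording after an undo discards any redo states beyond the frame.
    this.history = this.history.slice(0, this.frame + 1);
    this.history.push(snapshot);
    this.frame = this.history.length - 1;
  }
  undo() {
    if (this.frame > 0) this.frame -= 1;
    return this.history[this.frame];
  }
  redo() {
    if (this.frame < this.history.length - 1) this.frame += 1;
    return this.history[this.frame];
  }
}

const tt = new NaiveTimeTraveller({ title: 'v1' });
tt.record({ title: 'v2' });
tt.record({ title: 'v3' });
console.log(tt.undo().title); // 'v2'
console.log(tt.redo().title); // 'v3'
```

Simple, but it stores a full copy of the Template per change, and it records every state change indiscriminately - which is exactly where the scenarios below bite.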

Now consider a scenario where you need to perform some set-up before a Template is ready for user interaction - downloading assets, positioning text items on the screen, etc. You need to ensure these automated actions that change state are not added to the history of undo-able states.

Or, you have a slider for adjusting text size. Between the original state and the desired state, there will be multiple intermediate states added to the history that you need to remove somehow. You can see how things can start to get hairy.

MST ships with a number of useful prebuilt middlewares: action-logger, atomic, TimeTraveller, and UndoManager. The UndoManager is a more advanced version of the TimeTraveller. It is more memory efficient than the TimeTraveller because it uses MST patches (fine-grained changes tracked automatically by MST). More importantly, its API allows for simple solutions to the above scenarios and more.

For actions you don't want to include in the history, you can use withoutUndo. For the slider example, you can use startGroup and stopGroup to collapse all the patches recorded between them into a single history entry. There are also canUndo and canRedo, which help with enabling and disabling the undo and redo buttons respectively. We had an advanced feature where the Workspace would automatically scroll to the scene where the undo action had occurred (e.g. the user had deleted text on scene 1 but had since scrolled to scene 5; the undo action would trigger a scroll back to scene 1). For such cases, we could introspect the history - the array of patches - to find the relevant scene index.
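To illustrate the semantics of withoutUndo and grouping, here is a hand-rolled sketch. MST's UndoManager does this with patches and a richer API; the class and names below are only illustrative:

```javascript
// Sketch of UndoManager-style controls on a simple history.
class HistorySketch {
  constructor() {
    this.entries = []; // each entry = one undo step
    this.muted = false;
    this.group = null;
  }
  record(patch) {
    if (this.muted) return;                 // withoutUndo: skip recording
    if (this.group) { this.group.push(patch); return; } // buffer into group
    this.entries.push([patch]);             // one patch = one undo step
  }
  withoutUndo(fn) {
    this.muted = true;
    try { fn(); } finally { this.muted = false; }
  }
  startGroup() { this.group = []; }
  stopGroup() {
    // The whole buffered group becomes a single undo step.
    if (this.group && this.group.length) this.entries.push(this.group);
    this.group = null;
  }
  get canUndo() { return this.entries.length > 0; }
}

const h = new HistorySketch();
h.withoutUndo(() => h.record({ op: 'setup' }));  // automated set-up: not undo-able
h.startGroup();                                  // slider drag begins
[12, 14, 16].forEach(size => h.record({ op: 'textSize', size }));
h.stopGroup();                                   // drag ends
console.log(h.entries.length); // 1 - the whole drag is one history entry
console.log(h.canUndo);        // true
```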

XState for state machines

One of the code smells that we wanted to eliminate was having too many boolean states that controlled the visibility of actionsheets and modals. Here's the main culprit in our Phase One code, where we used the useState React hook:

// EditContextProvider.js
// this is a code smell!
// ...
const [logoModalVisible, setLogoModalVisible] = useState(false);
const [logoSelected, setLogoSelected] = useState(false);
const [advancedModalVisible, setAdvancedModalVisible] = useState(false);
const [bottomNavVisible, setBottomNavVisible] = useState(true);
const [colorsModalVisible, setColorsModalVisible] = useState(false);
const [finalPreview, setFinalPreview] = useState(null);
const [loading, setLoading] = useState(true);
const [musicModalVisible, setMusicModalVisible] = useState(false);
const [previewModalVisible, setPreviewModalVisible] = useState(false);
const [previewVideo, setPreviewVideo] = useState(null);
const [processingPreview, setProcessingPreview] = useState(false);
const [replaceModalVisible, setReplaceModalVisible] = useState(false);
const [textModalVisible, setTextModalVisible] = useState(false);
const [thumbnailUrl, setThumbnailUrl] = useState(null);
const [titleModalVisible, setTitleModalVisible] = useState(false);
const [musicTrimModalVisible, setMusicTrimModalVisible] = useState(false);
const [videoTrimModalVisible, setVideoTrimModalVisible] = useState(false);
const [scenesModalVisible, setScenesModalVisible] = useState(false);
const [adBoostModalVisible, setAdBoostModalVisible] = useState(false);
const [adCampaignModalVisible, setAdCampaignModalVisible] = useState(false);
// ...

If you are using many states to express visibility, loading, or selection, you should consider using XState. Not only did the code organisation improve vastly, but we also avoided many bugs that arose from the many permutations we had missed. For instance, it was easy to miss that logoSelected and textModalVisible should not both be true at the same time. State machines force you to be explicit about what states and sub-states are acceptable, and what the allowable transitions are. By doing so, clashes between unrelated states became a thing of the past, and our code was now much more maintainable.
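To show the idea without pulling in the library, here is a hand-rolled machine in the spirit of XState's config format - the states and events are hypothetical stand-ins for our Workspace, and XState's actual API (createMachine, interpret) is richer than this:

```javascript
// Because the machine can only be in ONE state at a time, the
// logoSelected/textModalVisible clash is impossible by construction.
const workspaceMachine = {
  initial: 'idle',
  states: {
    idle:         { on: { OPEN_TEXT: 'textModal', SELECT_LOGO: 'logoSelected' } },
    textModal:    { on: { CLOSE: 'idle' } },
    logoSelected: { on: { OPEN_LOGO_MODAL: 'logoModal', DESELECT: 'idle' } },
    logoModal:    { on: { CLOSE: 'logoSelected' } },
  },
};

function transition(machine, state, event) {
  const next = machine.states[state].on[event];
  return next || state; // events not allowed in this state are ignored
}

let state = workspaceMachine.initial;
state = transition(workspaceMachine, state, 'OPEN_TEXT');   // -> 'textModal'
// SELECT_LOGO is not a valid event while the text modal is open:
state = transition(workspaceMachine, state, 'SELECT_LOGO'); // stays 'textModal'
console.log(state); // 'textModal'
```

Compare this with the wall of booleans above: instead of auditing every permutation of twenty flags, you audit a handful of named states and their transitions.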


I could go on about the other improvements we implemented in Phase Two - Atomic design structure, naming conventions, versioning and commit messages, and testing, to mention a few - but those detailed above are the ones that made the greatest impact.

Moving forward, I think it's important to constantly gauge whether things are going right. Here are some quick gut-checks that could be useful as a barometer:

  • Velocity: Slowing velocity over the lifetime of a project is a sign that too much technical debt has built up, or the overall architecture is too coupled. Allocate time for refactoring.
  • Obviousness: There should be an obvious place to put new code (especially logic). Too much ambiguity could be an indication that more thought needs to be put into the folder structure or state management tool. A lack of obviousness can tempt a newer developer to follow the path of least resistance and add logic to the components themselves.
  • Coupling: When you're adding a new feature or fixing a bug, are you touching code in many places? That's a red flag indicating the code is probably overly coupled.
  • Predictability: Is it clear which state controls the visibility of a component? Are you spending a lot of time tracing visibility-related bugs? If so, consider using a state machine.

One last piece of advice: measure performance from day one for any React Native project, preferably using a slower device from time to time. Performance is a tricky thing to fix when you've already written too much code before realising the problem.

Credit goes to Jarrett for helping with Phase One reflections, and Nazmi with reflections for the entire project.

Zek Chak