
My First Production Isomorphic React GraphQL Project

The story

During the past few weeks, I’ve been given the opportunity to rebuild the front-end of a project with a “modern approach”, replacing an existing CoffeeScript, jQuery, and Bower based app running on Ruby on Rails.

After about two sprints of work (two weeks per sprint), we shipped our first version to production last week.

Before I start to share my experience, I’d like to give an overview of the project’s architecture.

The current stack

  • View: React
  • State management: send-actions (like Redux, but simpler)
  • Data fetching: GraphQL, Relay
  • Routing: React-Router
  • Asset serving: Webpack
  • JS precompilation: Babel
  • Server: Node.js (for server-side rendering of React)

Why the current stack?

I’ve worked on lots of different projects with different stacks before, and whenever I start a new project I try not to use any boilerplate. Boilerplates are usually built by and for people whose requirements differ from yours, and none of them are identical to the project you are trying to build. So I usually just keep a list of well-maintained boilerplate projects and use them only as a reference when my own stack gets into trouble.

The new project has a few requirements:

  • Server-side rendering for a progressively enhanced experience, so the page still works for users without JavaScript
  • SEO: we are mainly an e-commerce website, so SEO is the number one priority
  • The app needs to talk to a couple of micro-services, and tokens are stored on the server for security reasons
  • UI state should be persisted in the URL, not only for SEO but also for a better user experience
  • Fast iteration time, so we can move fast and deliver a better user experience
  • Better performance: the faster we deliver the page to the user, the longer we can keep them on the website

There are also other requirements that are not business-driven; most of them are for a better developer experience.

  • Babel, so we can use handy syntax today that is only available in future browsers
  • Webpack, for compiling assets, hot code reloading, and minification
  • Modern JavaScript libraries that follow the community’s best practices

With the above requirements, I started from the simplest hello-world Express server and deployed it to Heroku. The next day I started to build the static parts of the page, and since the code needs to render on the server side, I installed React, built a few components such as a Header and a Footer, and rendered them on the server with React.renderToString.
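In case it helps, here is a minimal sketch of that first step; it assumes the react-dom/server form of renderToString (React 0.14+) plus Express, and the components and markup are simplified stand-ins, not the real ones.

// server.js — the simplest server-rendered page, as a sketch
const express = require('express')
const React = require('react')
const { renderToString } = require('react-dom/server')

const Header = () => React.createElement('header', null, 'Site header')
const Footer = () => React.createElement('footer', null, 'Site footer')

const app = express()

app.get('/', (req, res) => {
  // render the static components to an HTML string on the server
  const html = renderToString(
    React.createElement('div', null,
      React.createElement(Header),
      React.createElement(Footer)
    )
  )
  res.send('<!doctype html><html><body><div id="app">' + html + '</div></body></html>')
})

app.listen(process.env.PORT || 3000)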

Since I also need other pages such as 404 and 500 pages, I added React-Router for routing support. It worked super well and I loved what I had built so far.
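Server rendering with React-Router means matching the route before calling renderToString. Here is a minimal sketch, assuming the React Router 1.x/2.x-era match and RouterContext API; the route config and components are hypothetical.

// server-side route matching — a sketch, not the real route config
const express = require('express')
const React = require('react')
const { renderToString } = require('react-dom/server')
const { match, RouterContext } = require('react-router')

// a hypothetical plain-route config; the real app also has 404 and 500 pages
const Home = () => React.createElement('h1', null, 'Home')
const routes = { path: '/', component: Home }

const app = express()

app.get('*', (req, res) => {
  match({ routes, location: req.url }, (error, redirectLocation, renderProps) => {
    if (error) {
      res.status(500).send(error.message)
    } else if (redirectLocation) {
      res.redirect(302, redirectLocation.pathname + redirectLocation.search)
    } else if (renderProps) {
      const html = renderToString(React.createElement(RouterContext, renderProps))
      res.status(200).send('<!doctype html><div id="app">' + html + '</div>')
    } else {
      res.status(404).send('Not found')
    }
  })
})

app.listen(process.env.PORT || 3000)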

But I think I forgot to mention the setup needed to make those three libraries work in both the browser and on the server. Here’s the dependency list:

  • babel-cli
  • babel-core
  • babel-loader
  • babel-plugin-transform-runtime
  • babel-preset-es2015
  • babel-preset-react
  • babel-preset-stage-0
  • babel-register
  • react
  • react-dom
  • react-router
  • react-hot-loader
  • webpack
  • webpack-dev-server

And along with them I need a couple of Webpack configs:

  • webpack.client.config.js
  • webpack.server.config.js

And a .babelrc that includes the Babel presets and plugins.
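Given the presets and plugins listed above, the .babelrc looks roughly like this (a sketch, not the exact production file):

{
  "presets": ["es2015", "react", "stage-0"],
  "plugins": ["transform-runtime"]
}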

I’ve used Webpack in several projects, but it still feels hard to get right every time I do it again. I’m using lots of plugins for different purposes in my configs, but to be honest I can’t say I know exactly what each of those individual plugins does in the project. Some of them carry over from Babel 5 and I may know what they do, but others are entirely new in Babel 6, and I don’t have the time to go through each of them.

It was actually an OK experience so far: I managed to get a nice working demo, and I could show my manager what I had achieved in a rather short time.

The next step was to load dynamic content through our API gateway. Since we only want to keep the token for internal use and don’t want to expose it in the browser, we had the idea of building a simple “proxy” server that takes the request from the browser, attaches the token stored on the server, and forwards it to the API gateway.

In addition, if a page needs to load multiple results, we can combine them into a single custom API endpoint. For example, to show the total number of users and the total number of items on a page, we would normally need two separate requests, but with the “proxy API” we can expose a /api/stats endpoint so the browser only needs to make one request. This helps users on mobile devices, since network requests made from the server side are comparatively reliable and fast. A minimal sketch of such an endpoint follows.
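In this sketch the upstream URLs, the environment variable name, and the response shape are all hypothetical, and node-fetch is assumed as the HTTP client; it just illustrates the fan-out-and-combine idea.

// /api/stats — a sketch of the "proxy API" aggregation idea
const express = require('express')
const fetch = require('node-fetch')

const app = express()
// the token never leaves the server
const headers = { Authorization: 'Bearer ' + process.env.API_GATEWAY_TOKEN }

app.get('/api/stats', (req, res, next) => {
  // fan out to the API gateway on the server, then combine into one response
  Promise.all([
    fetch('https://api-gateway.example.com/users/count', { headers }).then(r => r.json()),
    fetch('https://api-gateway.example.com/items/count', { headers }).then(r => r.json())
  ])
    .then(([users, items]) => res.json({ totalUsers: users.count, totalItems: items.count }))
    .catch(next)
})

app.listen(process.env.PORT || 3000)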

When we got to this stage, we came up with the idea of trying out GraphQL, because the “proxy” server is similar to what GraphQL aims to achieve, and since we will have more complex logic in the future, the benefits of smaller payloads and a flexible query language could help us in the long term. Given that this is an experimental project, we decided to give it a go.

To get started, we needed to install the following packages, which still kind of made sense at this point (a minimal wiring sketch follows the list):

  • express-graphql
  • graphql
  • react-relay
  • graphql-relay
  • babel-relay-plugin
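Here is roughly how the GraphQL endpoint gets mounted on the Express server, as a sketch assuming the express-graphql middleware of that era; the totalUsers field and its resolver are hypothetical, and our real schema was of course larger.

// graphql wiring — a sketch, not our production schema
const express = require('express')
const graphqlHTTP = require('express-graphql')
const { GraphQLSchema, GraphQLObjectType, GraphQLInt } = require('graphql')

const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'Query',
    fields: {
      totalUsers: {
        type: GraphQLInt,
        // in the real app this resolver would call the API gateway
        // with the token that is kept on the server
        resolve: () => 42
      }
    }
  })
})

const app = express()
app.use('/graphql', graphqlHTTP({ schema, graphiql: true }))
app.listen(process.env.PORT || 3000)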

Oh, wait! Since we are using react-router, we also need a tool that fetches data based on the current route, so we can fetch everything before rendering the page:

  • react-router-relay

And, last but not least, we are building an isomorphic/universal app, so we still need to do something to make all of this work on the server side. Let’s install a few more packages:

  • isomorphic-relay
  • isomorphic

Okay, I think we are almost there, we’ve got almost every tool we need. We’re going to ship the product to production.

With this setup, the problems I ran into didn’t belong to any single framework; they were about making the frameworks work together. I’ve heard lots of success stories about isomorphic applications, and I’ve heard people talk about how awesome GraphQL and Relay are for data fetching. But I could hardly find any live example using all of them together.

Here are a few pain points I hit while hooking them together:

  • process.env management for an application running in different environments (dev, CI, staging, production); most boilerplate projects do not cover that
  • process.env management for isomorphic applications; using it with Webpack is tricky because Webpack is mostly designed for client-side code, and you have to turn the variables into strings and then back into variables to make them work in both environments (see the sketch after this list)
  • debugging compiled code with source map support in each environment
  • a .babelrc file that varies between environments
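For the second point, here is a sketch of the Webpack side of it, using DefinePlugin to inline the values into the client bundle; the API_BASE_URL variable is just an example. On the server the same values are read from process.env directly.

// webpack.client.config.js (excerpt) — inlining env vars for the browser bundle
var webpack = require('webpack')

module.exports = {
  // ...entry, output and loaders as usual...
  plugins: [
    new webpack.DefinePlugin({
      'process.env.NODE_ENV': JSON.stringify(process.env.NODE_ENV || 'development'),
      'process.env.API_BASE_URL': JSON.stringify(process.env.API_BASE_URL)
    })
  ]
}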

Luckily, with enough time spent on those problems and wonderful resources on the internet, I managed to get the whole stack working, and we shipped it to production last Friday.

After shipping it to production

Yes, please hold on and stop telling me you should never ship to production on a Friday. This is something we all know as developers. There were a few assumptions we made when we shipped the code:

  • We have run the code on our staging and production for 1-2 weeks
  • We have complete monitoring services that show the metrics of the app
  • It’s a shiny new stack and we can’t wait until next week to ship it

And those assumptions were wrong.

All the metrics we had were not from real users: the app sat behind a domain hosted on Heroku, and we were the only users who knew about it. Once we pointed the DNS at the new domain (which took about an hour to take effect), we started to get new metrics from our tracking services. It was all good at the beginning.

But a few hours later, we found that the memory usage of the server kept going up. Even though I was almost sure there was a memory leak somewhere in our code, with this shiny new stack I had no idea what could be going wrong, as there were so many possibilities. Normally you could just revert your code to a previous working commit and redeploy, but we didn’t even have one.

I ended up sitting in front of my laptop the whole Friday night, watching the memory usage go up, restarting the server, waiting for it to climb again, and restarting it once more.

The next day, with help from some of my Node.js developer friends, I added a process management tool called pm2, which restarts the server when memory reaches a max_memory_restart limit, so I could get some rest and have time to figure out what was going on.
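It boils down to something like this (server.js and the 300M threshold are just examples, not the actual values we used):

# restart the process automatically once it passes the memory limit
$ pm2 start server.js --max-memory-restart 300M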

But pm2 couldn’t fix the memory leak itself, so for the rest of the week I started looking at Node.js profiling solutions and learned a few techniques for finding potential memory leaks. That’s not the topic of this post, but all I can tell you is that it’s hard, especially with a setup like this: an isomorphic, Babel-compiled code base.

Have you found the memory leak yet?

The answer is no. After discussing it with the team, we decided to remove all the GraphQL and Relay related code. And here’s the result:

(chart: memory usage)

We all knew it wasn’t strictly necessary for the app; we had been keeping it to prepare the code for the future.


So here are a few lessons I learned from this project:

  • Don’t ship code to production on Friday, for whatever reason.
  • You can plan big, but always start small.
  • Nothing should stop you from trying new libraries, but choose the right timing.

Last but not least, I’m not a fan of hearing people complain about JavaScript fatigue, and this post isn’t a complaint about these tools at all. It was entirely my own fault for using a library at the wrong time (or too many libraries at the same time). That’s also why I explained, step by step, why I brought each of those libraries into my project.

Whenever I have a problem, those tools, built by lovely open source developers who spend their spare time helping us solve our problems for free, are the ones making our lives easier. If a tool happens to solve your problem, appreciate the hard work behind it; if it doesn’t, build your own.

Setup Electron on Ubuntu

Start from a fresh Ubuntu server.

$ docker-machine start dev
$ eval "$(docker-machine env dev)"

Running docker in interactive mode

# run docker in interactive mode
$ docker run -i -t ubuntu:14.04.3 /bin/bash

Install nodejs on Ubuntu

# install Node.js via the NodeSource setup script
$ apt-get install -y curl
$ curl -sL | sudo -E bash -
$ sudo apt-get install -y nodejs

Install tape-run.

# install tape-run
npm i -g tape-run

Checking what’s missing

# checking missing dependencies
root@<container-id>:/# /usr/lib/node_modules/tape-run/node_modules/browser-run/node_modules/electron-stream/node_modules/electron-prebuilt/dist/electron --help
/usr/lib/node_modules/tape-run/node_modules/browser-run/node_modules/electron-stream/node_modules/electron-prebuilt/dist/electron: error while loading shared libraries: cannot open shared object file: No such file or directory

Install missing dependencies

# install missing dependencies
apt-get install -y libgtk2.0-0 libnotify-bin libgconf-2-4 libnss3 xvfb

Start xvfb server

# start xvfb server
export DISPLAY=':99.0'
Xvfb :99 -screen 0 1024x768x24 > /dev/null 2>&1 &

Start tape-run

# start tape-run
root@<container-id>:/# echo "console.log('yo'); window.close()" | tape-run

2015 Review





The most important things I did this year were 深JS (JSConf China 2015) and the Shanghai JavaScript Meetup, and doing these things also changed me along the way. I’ve shared some of my thoughts in earlier posts:

Data Canvas Sense Your City

The first programming competition I took part in. I had never joined a hackathon or any event like that before, because I already spend much of my spare time programming and don’t need to stay up for one or two nights in the same place to finish some coding challenge. The Data Canvas project was a bit different: organized by an international organization, swissnex, it picked major cities in seven countries around the world, installed DIY open-source hardware in each of them, collected air-quality data, and ran a competition to turn that data into visualizations.


A few reasons I decided to take part:

  • Open-source hardware was something I wanted to try at the time
  • My company happened to be working on data visualization projects as well
  • I could learn and practice some technologies (hardware programming with Node.js, data visualization with React + D3)
  • It didn’t have to be finished within one or two days of concentrated effort
  • Shanghai was one of the chosen cities, and collecting and visualizing air-quality data from my own surroundings was the most interesting part
  • I could compete and exchange ideas with open-data researchers in other parts of the world




For a long time I hadn’t felt much sense of achievement, because the company’s work was outsourcing projects (get paid, do the work, move on). From a technical point of view, though, this project not only got me to learn and master building desktop apps with Electron, it also broadened my professional horizons (including mobile development with React Native). From a non-technical point of view, the work I did genuinely helped and influenced the fate of a country, and I can’t help feeling proud of that.

Leaving Wiredcraft, joining Envato

TL;DR: I left Shanghai, where I had worked for more than three years, quit my job at Wiredcraft, and now work for a company in Melbourne called Envato.


First of all, leaving Wiredcraft and moving somewhere else was completely unplanned. It started with a trip to Melbourne during the May Day holiday this year. I attended a Melbourne JavaScript meetup here and heard about the company Envato. I already knew some of its products (Themeforest, Tutsplus) but didn’t know they belonged to the same company. After searching for information online it looked like a pretty good company, so I put together a résumé and applied, just to give it a try. Then came several rounds of interviews, preparing for the IELTS exam, and the visa process; altogether it took about half a year before I finally got the work visa.


  • Open source and open source community
  • Startup culture and management
  • Multi country culture and diversity
  • Work and life balance

Wiredcraft is probably one of the most suitable workplaces I could have found in China so far: flexible working hours, an open engineering culture, and a get sh*t done spirit, a sharp contrast to the overtime, overtime, overtime at big companies in China. I also got to see some of the problems the company ran into as it grew over the past few years, and how the boss and my colleagues went about solving them. The experience a startup gains from discovering and solving these problems is a precious asset. Most important of all, there was a group of lovely colleagues. Yuki was right: complaining about your new company together with former colleagues is the happiest thing.










One change I’m not sure whether to make is socializing. Back in China I never liked crowded places, and I didn’t like approaching people to talk when there were a lot of them around. There’s actually a word that describes me pretty well: introvert. Urban Dictionary defines it like this:

Opposite of extrovert. A person who is energized by spending time alone. Often found in their homes, libraries, quiet parks that not many people know about, or other secluded places, introverts like to think and be alone.