Our entire JS is 800kb, not each page, so we are splitting it on a per-page basis: everything, including issues, MRs, and file browsing. There was 800kb of JS in the entire app before I joined as the first FE engineer, between jQuery, other libraries, and our own code. That was the reason we wanted to remove Turbolinks: so we don't load all the JS for the entire app at once, and can load things on demand instead.
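
For context, the on-demand loading can be done with dynamic import(), which bundlers such as webpack split into separate per-page chunks. A minimal sketch of the idea; the page names, module paths, and data attribute here are illustrative, not GitLab's actual code:

    // Hypothetical map from a page identifier to its lazily loaded bundle.
    const pages = {
      issues: () => import('./pages/issues.js'),
      merge_requests: () => import('./pages/merge_requests.js'),
      tree: () => import('./pages/file_browser.js'),
    };

    // Assume the server stamps the page name onto <body data-page="...">.
    const page = document.body.dataset.page;
    if (pages[page]) {
      // Only this page's chunk is fetched, parsed, and executed.
      pages[page]().then((mod) => mod.default());
    }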

Funny thing: I once built a full-fledged production 2D/3D application in JS. It was less complex than what GitLab has to do, and it was 670kb fully minified and obfuscated. At GitLab we are constantly looking for ways to make our file size smaller and our JS faster; it is something we are actively putting time towards.




Why do you need Turbolinks when you can just do a normal page load and set the correct cache headers on your JavaScript bundles to stop the browser re-downloading the same script?
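
For that approach to work well, the bundle filenames need to be fingerprinted (e.g. app-d41d8c.js) so a changed file gets a new URL. A minimal sketch with Node's built-in http module; the /assets/ path and port are assumptions, and there is no path sanitisation here:

    const http = require('http');
    const fs = require('fs');
    const path = require('path');

    http.createServer((req, res) => {
      if (req.url.startsWith('/assets/')) {
        // Fingerprinted bundles never change in place, so the browser
        // may cache them for a year and skip the re-download entirely.
        res.setHeader('Cache-Control', 'public, max-age=31536000, immutable');
        res.setHeader('Content-Type', 'application/javascript');
        fs.createReadStream(path.join(__dirname, req.url)).pipe(res);
        return;
      }
      res.end('...normal page rendering...');
    }).listen(3000);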


This isn't specific to GitLab or GitHub, but Turbolinks, pjax, and friends are really useful when you have a lot of setup that can be shared between pages, e.g. initiating a websocket connection, rendering a header with notifications and an avatar in it, or decompressing and parsing your JS and CSS, and you don't want to implement a single-page app.
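
The core trick behind those libraries is roughly: intercept same-origin link clicks, fetch the new page, and swap the document body in place so the long-lived setup survives. A rough sketch of the idea, not the actual Turbolinks implementation:

    // Swap the body instead of doing a full page load. The websocket,
    // parsed JS/CSS, etc. survive because the document is never torn down.
    document.addEventListener('click', (event) => {
      const link = event.target.closest('a');
      if (!link || link.origin !== location.origin) return;
      event.preventDefault();

      fetch(link.href)
        .then((response) => response.text())
        .then((html) => {
          const doc = new DOMParser().parseFromString(html, 'text/html');
          document.title = doc.title;
          document.body.replaceWith(doc.body);
          history.pushState({}, '', link.href);
        });
    });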

When I refresh the GitLab sign-in page in Chromium, the initial page load takes 40ms to parse CSS, 250ms to parse and execute JS, and then another 70ms after DOMContentLoaded, all with a warm cache. It's not unreasonable to think that Turbolinks might save about 300ms per page load, which is a respectable performance boost.
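
Those numbers are from DevTools, but you can pull comparable figures from the Navigation Timing API if you want to measure it in code; a small sketch:

    // Log coarse load phases, in ms from navigation start.
    window.addEventListener('load', () => {
      // Defer one tick so loadEventEnd has been recorded.
      setTimeout(() => {
        const [nav] = performance.getEntriesByType('navigation');
        console.log('DOMContentLoaded at', nav.domContentLoadedEventEnd, 'ms');
        console.log('load finished at', nav.loadEventEnd, 'ms');
      }, 0);
    });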


The browser will still have to parse and execute it. That's where Turbolinks can improve performance.

It comes with its own drawbacks, though.


Actually we just removed Turbolinks the other day, so this is a non-problem. :)



