Cameron Hotchkies

Categories

  • Coding

Tags

  • browser-extension
  • development
  • javascript

One of the first features requested for Shadowmute was a browser extension to simplify the workflow of generating mailboxes. It was also clear very early on that usage was split fairly evenly between Chrome and Firefox users. The documentation for both Firefox and Chrome extension development is extremely well written and detailed, so I’m not going to delve too deep into that. What I would like to write about is how to develop for both at the same time.

Compatibility

When I first started, I aimed for one platform (Chrome) and figured I would use that as a base for the second (Firefox). This created two codebases in the same repo that were almost identical. The biggest difference between the two was the manifest; the second biggest was the location of the runtime (chrome.runtime vs browser.runtime).

It’s worth mentioning that Firefox provides a compatibility layer by aliasing chrome, but I wanted to be explicit in my extensions.
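
If you do want a single namespace rather than the build-time substitution described later, a common pattern (a minimal sketch, not what this post ends up using) is a small feature-detection shim:

// Use Firefox's `browser` namespace when it exists,
// otherwise fall back to Chrome's `chrome` namespace.
const runtimeApi = typeof browser !== 'undefined' ? browser : chrome;

runtimeApi.runtime.onInstalled.addListener(() => {
  console.log('Extension installed');
});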

After the initial version was built in Chrome, it took only minutes before it was fully functional in Firefox. The compatibility between the two for a basic extension is extremely impressive.

Packaging

For the most part, packaging is simply zipping up the files representing the extension. Each extension’s code referenced a constants.js file in the background scripts section. The settings in that file were replaced with production values during packaging via a Python script. Similarly, updates to permissions in the manifest could be applied at the same time.

After the first couple of builds, it became apparent that I was reinventing the wheel, simply due to my lack of experience in the Node.js ecosystem. Most of the JavaScript projects I have worked on were either inherited or started with a larger framework, such as Create React App. This left a bit of a vacuum around the best way to automate this process in a more JavaScript-y way.

The JavaScript Build Universe

Basic setup

If you are unfamiliar with setting up a JavaScript project, start by picking a package manager. I chose Yarn, so the examples will lean that way.

yarn init

This will create your package.json, which will function as the overarching index of whatever build system you ultimately end up using.

Linting

Linting isn’t absolutely required at this point, but honestly it should be set up before you build the extension in the first place.

yarn add --dev eslint eslint-config-airbnb-base eslint-plugin-import

This will give you the Airbnb base style linting rules for your project. No matter when I finally get around to setting up linting, I always regret not doing it earlier. Once you have the packages added, update package.json to include a “scripts” section:

{
  "name": "semisafe-blog-post",
  "version": "1.0.0",
  "main": "index.js",
  "license": "MIT",
  "devDependencies": {
    "eslint": "^6.4.0",
    "eslint-config-airbnb-base": "^14.0.0",
    "eslint-plugin-import": "^2.18.2"
  },
  "scripts": {
    "lint": "eslint src"
  }
}

In the above example, running yarn run lint will scan the src subdirectory, where you can put your actual extension code. This has the added benefit of allowing IDEs like VS Code to apply linting with minimal setup.
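
ESLint also needs a configuration file at the project root telling it to use the Airbnb rules. A minimal sketch of an .eslintrc.js (the env entries are assumptions for a WebExtension project) could look like:

module.exports = {
  extends: 'airbnb-base',
  env: {
    // Extension code runs in the browser and uses the WebExtension APIs
    browser: true,
    webextensions: true,
  },
};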

A detour with Webpack

The first avenue I attempted was webpack. It was what I had used in the past for React-based development. As the name implies, webpack is more targeted at combining several resources into one processed and minified artifact. This is the opposite of what I was aiming to do.

After about an hour of messing around, I decided this was probably not the right tool for my needs.

Building with Gulp

Gulp is another build system for JavaScript projects. The main process is built around taking a set of source files and piping the results into processing functions. These functions perform operations like concatenation, substitution, and injecting fragments.

yarn add --dev gulp

At the simplest level, the gulp tasks and methods are stored in gulpfile.js in the root directory of your project.

Building the Manifest

The simplest example is building a manifest. Start by identifying the common aspects of the manifest.json required by both extensions in manifests/common.json.

{
  "name": "Semisafe Example Extension",
  "description": "Learning is Fun!",

  "background": {
    "scripts": [
      // In a normal manifest, multiple background scripts could be
      // listed here, but when packaging it can be easier to
      // concatenate them into one.
      "background.js"
    ]
  },

  "browser_action": {
    "default_title": "Semisafe Title",
    "default_popup": "popup.html",
    "default_icon": {
      "16": "images/icon16.png",
      "32": "images/icon32.png",
      "48": "images/icon48.png",
      "128": "images/icon128.png"
    }
  },

  "manifest_version": 2
}

From there, the portions of the manifest specific to each browser can be added to manifests/chrome.json (or manifests/firefox.json).

{
  "background": {
    // Chrome requires this explicitly
    "persistent": false
  },

  "permissions": [
    // This is here for a reason,
    // we will substitute it later
    "*://localhost/*",
    "activeTab",
    "declarativeContent",
    "storage",
    "identity",
    "clipboardWrite"
  ]
}

Once these are done, start adding logic to gulpfile.js to assemble a complete manifest. Import some basic required values.

const { src, dest } = require('gulp');
const mergeJson = require('gulp-merge-json');

From there, import the build-time configurable values. These are sourced from the package.json for the entire project as well as supporting environment variables.

// This is useful for pulling the extension version in from the
// package.json at the root level
const packageDef = require('./package.json');

// Also allow for environment variables that
// don't fall back to a default in release mode
const envSetting = (varName, defaultValue) => {
  if (process.env[varName]) {
    return process.env[varName];
  }

  if (process.env.ENV !== 'prod') {
    return defaultValue;
  }

  throw new Error(`${varName} is required in prod mode`);
};

Set up the API information based on the runtime values.

// Pull the API server value from an Environment variable
const getApiServer = () => envSetting('API_SERVER', 'http://localhost:9001');

// Define the API server as a manifest permission
const apiServerPermission = () => `${getApiServer()}/*`;

Create a method that accepts browser and destination as parameters. For example, browser would be "chrome" and destination could be "build/chrome".

const buildManifest = (browser, destination) => {
  // Set up manifest values created in code
  const mergeOverride = {
    version: packageDef.version,
    permissions: [
      apiServerPermission(),
    ],
  };

  const fileName = 'manifest.json';

  src([
    './src/manifests/common.json',
    `./src/manifests/${browser}.json`,
  ])
    .pipe(
      mergeJson({
        fileName,
        mergeOverride,
      })
    )
    .pipe(dest(`${destination}/`));
};

The first value defined is mergeOverride, a dictionary of fields to overwrite.

The src command loads both manifest components explicitly as a stream of files. The stream of file objects is then piped into mergeJson.

The mergeJson operation combines all of the file objects in the stream and squashes them into one file (named manifest.json). The second parameter allows a programmatic dictionary to be merged in as well, which is helpful for inserting the package version. In the case of an array overwrite, such as in permissions, it will replace the first elements and leave the remaining elements as they existed in the files. That is why we made sure the first permission was the external host the extensions would be communicating with.
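
As a concrete illustration (assuming API_SERVER is set to https://example.com/api, as in the command shown later), the permissions array in the merged manifest.json would come out roughly as:

"permissions": [
  "https://example.com/api/*",
  "activeTab",
  "declarativeContent",
  "storage",
  "identity",
  "clipboardWrite"
]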

The mergeJson operation is provided by the gulp-merge-json package, which will need to be installed separately by your package manager.

While you may be tempted to implement simple transformations directly inside the pipe operations, pipe does not take an ordinary function call. The operation you’re trying to implement probably already exists as a plugin.

Combining the JavaScript sources

A similar mechanism can be used to combine source scripts. To start, we will import a few more gulp plugins that have been added via yarn.

const concat = require('gulp-concat');
const inject = require('gulp-inject-string');
const replace = require('gulp-replace');

Create a function that first selects the appropriate runtime parent based on the browser.

const combineSources = (browser, destination) => {
  // This function is just a switch on the browser arg
  const runtime = getRuntime(browser);

  // ... snip
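
The getRuntime helper isn’t shown in these excerpts. A minimal sketch, assuming it simply maps the browser name to the global namespace used in the substitution later on, could be:

const getRuntime = (browser) => {
  // Firefox exposes the WebExtension APIs under `browser`,
  // Chrome exposes them under `chrome`
  if (browser === 'firefox') {
    return 'browser';
  }
  return 'chrome';
};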

Next, create a stream of all files that live in the src/background directory, excluding the one named index.js. We hold off on that one to ensure it is the very last file added in the chain.

  // ... snip
  const res = src([
      './src/background/*.js',
      '!**/index.js',
    ])
  // ... snip

Inject the contents of the constants dictionary we want inserted into the code. We prepend it to the stream to ensure all subsequent logic has access to those constants.

    // ... snip
    .pipe(inject.prepend(
      `constants = ${JSON.stringify(scriptConstants)};\n\n`,
    ))
    // ... snip
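
The scriptConstants dictionary isn’t defined in these excerpts either. A plausible sketch, reusing the envSetting and getApiServer helpers from earlier (the key names are illustrative, not from the original extension), might be:

// Build-time values baked into the top of the combined background.js
// (hypothetical keys; adjust to whatever your scripts expect)
const scriptConstants = {
  apiServer: getApiServer(),
  environment: envSetting('ENV', 'dev'),
};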

Finally, add the src/background/index.js file that we omitted earlier back into the file stream.

    // ... snip
    .pipe(src('./src/background/index.js'))
    // ... snip

Transform the stream of multiple file objects into a single element named background.js. The background file is still in the stream, just squashed.

    // ... snip
    .pipe(concat('background.js'))
    // ... snip

Include every other script in our source package for substitutions. We exclude the original background scripts, since their combined output already exists in this stream.

    // ... snip
    .pipe(src([
      './src/**/*.js',
      '!./src/background/**'
    ]))
    // ... snip

Change all of the extension’s code to use the browser-specific runtime location. I had used __RUNTIME__ in the extension’s code instead of either chrome or browser as it was less likely to collide. Note that this will ruin auto-complete for your IDE.

    // ... snip
    .pipe(replace('__RUNTIME__', runtime))
    // ... snip
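
For context, a hypothetical line of extension source using the placeholder would look like this before and after the substitution:

// In src/, before packaging:
__RUNTIME__.runtime.onInstalled.addListener(() => {
  console.log('Extension installed');
});

// After packaging for Chrome, __RUNTIME__ has been replaced with `chrome`:
// chrome.runtime.onInstalled.addListener(...);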

With the transformations complete, it is time to gather all the remaining assets in the extension including markup, images and fonts.

    // ... snip
    .pipe(src([
      './src/**/*.html',
      './src/**/*.png',
      './src/**/*.ttf'
    ]))
    // ... snip

The final step is to take the stream of file objects and place them into the correct output directory.

    // ... snip
    .pipe(dest(`${destination}/`));

  return res;
};

Expose the functions

We now have methods to create our manifest and output the augmented sources. We need to expose these as gulp tasks so that the package manager scripts can call them.

const outputDir = 'dist';

const packageExtension = (browser) => {
  const destination = `${outputDir}/${browser}`;
  buildManifest(browser, destination);
  combineSources(browser, destination);
};

function chromeTask(cb) {
  packageExtension('chrome');

  cb();
}

function firefoxTask(cb) {
  packageExtension('firefox');

  cb();
}

exports.chrome = chromeTask;
exports.firefox = firefoxTask;

These exports create two gulp tasks that can be referenced from package.json.

"scripts": {
  "firefox": "gulp firefox",
  "chrome": "gulp chrome",
  // ...
}

We can now run the following in the command line:

$ API_SERVER="https://example.com/api" yarn run firefox

This will assemble a Firefox build package with a custom API server.
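
If you also want gulp to produce the final zip archive mentioned in the Packaging section, the gulp-zip plugin (not used in this post, so treat this as an assumption) can be chained onto the output directory. A rough sketch, reusing the outputDir defined above:

const zip = require('gulp-zip');

// Zip the packaged extension directory into dist/<browser>.zip
const zipExtension = (browser) =>
  src(`${outputDir}/${browser}/**`)
    .pipe(zip(`${browser}.zip`))
    .pipe(dest(outputDir));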

A larger example

This is a general overview of how a cross-browser source package can create an output for each individual browser. As mentioned at the beginning, this was done for the Shadowmute extensions, which are available on GitHub and illustrate a slightly more complex example.