Tag Archives: Chakra

Node-ChakraCore Update: N-API, Node.js on iOS and more

Today, we are happy to announce a new preview release of ChakraCore, based on Node.js 8, available for you to try on Windows, macOS, and Linux.

We started our Node-ChakraCore journey with a focus on extending the reach of Node.js to a new platform, Windows 10 IoT Core. From the beginning, it’s been clear that in addition to growing the reach of the Node.js ecosystem, there’s a need to address real problems facing developers and the Node.js ecosystem through innovation, openness, and community collaboration.

As we continue our journey to bring fresh new ideas and enable the community to imagine new scenarios, we want to take a moment to reflect on some key milestones we’ve achieved in the last year.

Full cross-platform support

While ChakraCore was born on Windows, we’ve always aspired to make it cross-platform. At NodeSummit 2016, we announced experimental support for the Node-ChakraCore interpreter and runtime on Linux and macOS.

In the year since that announcement, we’ve brought support for full JIT compilation and concurrent and partial GC on x64 to both macOS and Ubuntu Linux 14.04 and higher. This has been a massive undertaking that brings Node-ChakraCore features to parity across all major desktop operating systems. We are actively working on cross-platform internationalization to complete this support.

Support for Node.js API (N-API)

This year, our team was part of the community effort to design and develop the next-generation Node.js API (N-API) in Node.js 8 which is fully supported in ChakraCore. N-API is a stable Node API layer for native modules, which provides ABI compatibility guarantees across different Node versions & flavors. This allows N-API-enabled native modules to just work across different versions and flavors of Node.js, without recompilations.

According to some estimates, 30% of the module ecosystem gets impacted every time there is a new Node.js release, due to lack of ABI stability. This causes friction in Node.js upgrades in production deployments and adds cost for native module maintainers, who have to maintain several supported versions of their modules.

Node.js on iOS

We are always delighted to see the community build and extend Node-ChakraCore in novel and interesting ways. Janea Systems recently announced their experimental port of Node.js to run on iOS, powered by Node-ChakraCore. This takes Node.js to iOS for the first time, expanding the reach of the Node.js ecosystem to an entire new category of devices.

Node.js on iOS would not be possible without Node-ChakraCore. Because of the JITing restrictions on iOS, stock Node.js cannot run. However, Node-ChakraCore can be built to use the interpreter only, with the JIT completely turned off.

This is particularly useful for scenarios like offline-first mobile apps designed with the expectation of unreliable connectivity or limited bandwidth. These apps primarily rely on a local cache on the device, and use store-and-forward techniques to opportunistically use data connectivity when available. These kinds of apps are common in scenarios like large factory floors, remote oil rigs, disaster zones, and more.

Time-Travel Debugging

This year also brought the debut of Time-Travel debugging with Node-ChakraCore on all the supported platforms, as originally demoed using VSCode at NodeSummit 2016. This innovation directly helps with the biggest pain-point developers have with Node.js – debugging! With this release, Time-Travel Debugging has improved in stability and functionality since its introduction, and is also available with Node-ChakraCore on Linux and macOS.

And much more…

These milestones are just the start – our team has also made major investments in infrastructure automation, which have resulted in faster turnaround of Node-ChakraCore updates following Node.js 8 releases. Both stable Node-ChakraCore builds and nightlies are now available from the Node.js Foundation build system.

We recently started measuring module compatibility using CITGM modules, and have improved compatibility with a wide variety of modules. Popular Node modules like node-sass, express, and body-parser are considering using Node-ChakraCore in their CI systems to ensure ongoing compatibility. Node-ChakraCore has also improved ACMEAir performance on Linux by 15% in the last two months, and we’ve identified areas for further improvements in the near future.

With our initial priority of full cross-platform support behind us, we are moving our focus to new priorities, including performance and module compatibility. These are our primary focus areas for the immediate future, and we look forward to sharing progress with the community as it happens!

Get involved

As with any open source project, community participation is key to the health of Node-ChakraCore. We could not have come this far in our journey without everyone who is active on our GitHub repo and in the broader Node community, and we are grateful for their reviews and guidance. We are humbled by your enthusiasm and wish to thank you for everything you do. We will be counting on your continued support as we make progress in our journey together.

For those who are looking to get involved outside of directly contributing code, there are several ways to get involved and advance the Node-ChakraCore project. If you are a …

  1. Node.js Developer – Try testing Node-ChakraCore in your project, and use Time-Travel debugging with VSCode and let us know how it goes.
  2. Node.js module maintainer – Try testing your module with Node-ChakraCore. Use these instructions to add Node-ChakraCore to your own CI to ensure ongoing compatibility. If you run into issues, please let us know on our repo or our Gitter channel.
  3. Native module maintainer – Consider porting your module to N-API. This will help insulate your module from breakage due to new Node releases and will also work with Node-ChakraCore.

As always, we are eager to hear your feedback, so please keep it coming. Find us on Twitter @ChakraCore or our Gitter channel, or open an issue on our GitHub repo to start a conversation.

Arunesh Chandra, Senior Program Manager, Chakra

Register now for Microsoft Edge Web Summit 2017

Registration is now open for Microsoft Edge Web Summit 2017. Join the Microsoft Edge team in Seattle on September 13th for a jam-packed day of energetic technical sessions looking at what’s new, and what’s next, for the web on Windows. Space is limited and reservations are on a first-come, first-served basis, so book your seat today!

Duotone photo of Seattle with superimposed text reading "Microsoft Edge Web Summit 2017, September 13th, 2017, Seattle, WA"

Microsoft Edge Web Summit is a free conference presented by the engineers building Microsoft Edge. The main track consists of 14 jam-packed technical sessions, covering everything from performance, accessibility, and test guidance, to brand-new tools and APIs for building Progressive Web Apps on Windows, adding payments and biometric authentication to your sites, and building modern layouts with new CSS features like CSS Grid.

This year, we’re introducing a new Hallway Track, where you can meet with engineers from across Microsoft to solve real problems today, and build invaluable connections for the future. Looking to track down a troublesome performance issue? Struggling with best practices for accessibility? Eager to get started with WebVR? Curious about Bash on Windows? We’ve got you covered. The Hallway Track connects you one-to-one with Microsoft engineers throughout the day for tangible results you can take back to your site or app.

We’re excited to meet developers around the world face to face, and look forward to seeing you here in Seattle, WA! Space is limited and reservations are on a first-come, first-served basis, so book your seat today. Can’t make it? Don’t worry – though there’s no substitute for attending in person, we’ll be streaming live on Channel 9 all day (no registration required), and recorded sessions will be available after the fact.

If you have any questions about the event, you can reach the event team on Twitter @MSEdgeDev or by email at edgesummit@microsoft.com. See you there!

Microsoft Edge Web Summit logo (two line-art alpine summits, with stylized angle brackets superimposed above them)

Kyle Pflug, Senior Program Manager, Microsoft Edge

Improved JavaScript performance, WebAssembly, and Shared Memory in Microsoft Edge

JavaScript performance is an evergreen topic on the web. With each new release of Microsoft Edge, we look at user feedback and telemetry data to identify opportunities to improve the Chakra JavaScript engine and enable better performance on real sites.

In this post, we’ll walk you through some new features coming to Chakra with the Windows 10 Creators Update that improve the day-to-day browsing experience in Microsoft Edge, as well as some new experimental features for developers: WebAssembly, and Shared Memory and Atomics.

Under the hood: JavaScript performance improvements

Saving memory by re-deferring functions

Back in the days of Internet Explorer, Chakra introduced the ability to defer-parse functions, and more recently extended the capability to defer-parse event-handlers. For eligible functions, Chakra performs a lightweight pre-parsing phase where the engine checks for syntax errors at startup time, and delays the full parsing and bytecode generation until functions are called for the first time. While the obvious benefit is to improve page load time and avoid wasting time on redundant functions, defer-parsing also prevents memory from being allocated to store metadata such as ASTs or bytecode for those redundant functions. In the Creators Update, Microsoft Edge and Chakra further utilize the defer-parsing mechanism and improve memory usage by allowing functions to be re-deferred.

The idea of re-deferring is deceptively simple – for every function that Chakra deems unlikely to be executed again, the engine frees the bulk of the memory the function holds to store metadata generated after pre-parsing, effectively leaving the function in a deferred state as if it had just been pre-parsed. Imagine a function foo which gets deferred upon startup, called at some point, and re-deferred later.

Illustration showing a function foo which gets deferred upon startup, called at some point, and re-deferred later.

The tricky part about re-deferring is that Chakra cannot perform such actions too aggressively, or it risks frequently re-paying the cost of full parsing, bytecode generation, etc. Chakra checks its record of function call counts every N GC cycles, and re-defers functions that are not called over that period of time. The value of N is based on heuristics, and as a special case a smaller value is used at startup time, when memory usage is more likely to peak. It is hard to generalize the exact saving from re-deferring, as it depends heavily on the content served, but in our experiment with a small sample of sites, re-deferring typically reduces the memory allocated by Chakra by 6-12%.

Bar chart showing memory allocated by Chakra for various popular sites, with and without re-deferral. With re-deferral, the memory allocated is reduced by 6-12%.
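
As a conceptual illustration of the re-deferral policy described above, here is a minimal JavaScript sketch. The names (REDEFER_INTERVAL, lastCalledGcCycle, and so on) are hypothetical and do not reflect Chakra’s C++ internals; the real heuristics live inside the engine.

// Conceptual sketch only: "re-defer functions not called in the last N GC cycles".
const REDEFER_INTERVAL = 4; // stands in for "N"; Chakra picks N heuristically

function reDeferUnusedFunctions(functionRecords, currentGcCycle) {
  for (const record of functionRecords) {
    const idleCycles = currentGcCycle - record.lastCalledGcCycle;
    if (!record.isDeferred && idleCycles >= REDEFER_INTERVAL) {
      record.metadata = null;   // drop the AST/bytecode-like metadata
      record.isDeferred = true; // the next call would trigger a fresh full parse
    }
  }
}

// Example: a function last called at GC cycle 1 is re-deferred by cycle 6.
const records = [{ name: 'foo', lastCalledGcCycle: 1, isDeferred: false, metadata: {} }];
reDeferUnusedFunctions(records, 6);
console.log(records[0].isDeferred); // true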

Post Creators Update, we are working on addressing an existing limitation so that re-deferring can also handle arrow functions, getters, setters, and functions that capture lexically-scoped variables. We expect to see further memory savings from the re-deferring feature.

Optimizing away heap arguments

Usage of the arguments object is fairly common on the web. Whenever a function uses the arguments object, Chakra creates, if necessary, a “heap arguments” object so that both the formals and the arguments object refer to the same memory location. Allocating such an object can be expensive, so the Chakra JIT optimizes away the creation of heap arguments when functions have no formal parameters.

In the Creators Update, the JIT optimization is extended to avoid the creation of heap arguments in the presence of formals, as long as there are no writes to the formals. It is no longer necessary to allocate heap arguments objects for code such as the example below.

// no writes to formals (a & b) therefore heap args can be optimized away
function plus(a, b) {
  if (arguments.length == 2) {
      return a + b; 
  }
}
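
Conversely (an illustrative sketch, not code from Chakra itself), a function that writes to one of its formal parameters still needs the heap arguments object, because in sloppy mode the formal and the corresponding arguments entry must stay aliased:

// A write to formal `a` means `a` and `arguments[0]` must stay in sync,
// so the heap arguments object cannot be optimized away here.
function clampedPlus(a, b) {
  if (a < 0) {
    a = 0; // write to a formal parameter
  }
  return arguments[0] + b; // observes the updated value of `a`
}
console.log(clampedPlus(-5, 3)); // 3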

To measure the impact of the optimization, we used our web crawler, which estimates that the optimization benefits about 95% of websites. It also allows the React sub-test in the Speedometer benchmark, which runs a simple todoMVC implemented with React, to speed up by about 30% in Microsoft Edge.

Better performance for minified code

Using a minifier before deploying scripts has been a common practice for web developers to reduce the download burden on the client side. However, minifiers can sometimes pose performance issues, as they introduce code patterns that developers typically would not write by hand and that therefore might not be optimized.

Previously, we’ve made optimizations in Chakra for code patterns observed in UglifyJS, one of the most heavily-used minifiers, and improved performance for some code patterns by 20-50%. For the Creators Update, we investigated the emit pattern of the Closure compiler, another widely used minifier in the JavaScript ecosystem, and added a series of inlining heuristics, fast paths and other optimizations according to our findings.

The changes lead to a visible speedup for code minified by Closure or other minifiers that follow the same patterns. As an experiment to measure the impact consistently in a well-defined and constrained environment, we minified some popular JavaScript benchmarks using Closure and noticed a 5-15% improvement on sub-tests with patterns we’ve optimized for.

WebAssembly

WebAssembly is an emerging portable, size- and load-time-efficient binary format for the web. It aims to achieve near-native performance and provides a viable solution for performance-critical workloads such as games, multimedia encoding/decoding, etc. As part of the WebAssembly Community Group (CG), we have been collaborating closely with Mozilla, Google, Apple and others in the CG to push the design forward.

Following the recent conclusion of WebAssembly browser preview and the consensus over the minimum viable product (MVP) format among browser vendors, we’re excited to share that Microsoft Edge now supports WebAssembly MVP behind the experimental flag in the Creators Update. Users can navigate to about:flags and check the “Enable experimental JavaScript features” box to turn on WebAssembly and other experimental features such as SharedArrayBuffer.

Under the hood, Chakra defers parsing WebAssembly functions until they are called, unlike other engines that parse and JIT functions at startup time. In our experience with existing WebAssembly and asm.js workloads, startup time is a major headache for large web apps, while runtime performance is rarely the issue. As a result, a WebAssembly app often loads noticeably faster in Microsoft Edge. Try out the Tanks! demo in Microsoft Edge to see it for yourself – be sure to enable the “Experimental JavaScript Features” flag in about:flags!
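
For example, with the flag enabled, a module can be fetched and instantiated from JavaScript roughly as follows. This is a minimal sketch: add.wasm and its exported add function are hypothetical placeholders.

// Fetch a .wasm binary and instantiate it with the WebAssembly JavaScript API.
fetch('add.wasm')                                  // hypothetical module
  .then(response => response.arrayBuffer())
  .then(bytes => WebAssembly.instantiate(bytes))   // compiles and instantiates
  .then(({ instance }) => {
    console.log(instance.exports.add(2, 3));       // assumes the module exports add(a, b)
  });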

Beyond the Creators Update, we are tuning WebAssembly performance as well as working on the last remaining MVP features, such as response APIs and structured cloning, before we turn WebAssembly on by default in Microsoft Edge. Critical post-MVP features such as threads are being considered as well.

Shared Memory & Atomics

JavaScript as we know it operates in a run-to-completion single-threaded model. But with the growing complexity of web apps, there is an increasing need to fully exploit the underlying hardware and utilize multi-core parallelism to achieve better performance.

The creation of Web Workers unlocked the possibility of parallelism on the web and of executing JavaScript without blocking the UI thread. Communication between the UI thread and workers was initially done via cloning data and postMessage. Transferable objects were later added as a welcome change to allow transferring data to another thread without the runtime and memory overhead of cloning; the original owner forfeits its right to access the data, to avoid synchronization problems.
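
For instance, an ArrayBuffer can be handed off to a worker by listing it in the transfer list of postMessage – a minimal sketch, where worker.js is a placeholder script:

// Transfer an ArrayBuffer to a worker instead of cloning it.
const worker = new Worker('worker.js');  // hypothetical worker script
const buffer = new ArrayBuffer(1024);

worker.postMessage(buffer, [buffer]);    // second argument is the transfer list
console.log(buffer.byteLength);          // 0 – the sender no longer owns the data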

Soon to be ratified in ES2017, Shared Memory & Atomics is the new addition to the picture to further improve parallel programming on the web. With the release of Creators Update, we are excited to preview the feature behind the experimental JavaScript features flag in Microsoft Edge.

In Shared Memory and Atomics, SharedArrayBuffer is essentially an ArrayBuffer shareable between threads, removing the chore of transferring data back and forth. It enables workers to work on effectively the same block of memory, guaranteeing that a change made on one thread to a SharedArrayBuffer will eventually be observed (at some unknown point in time) by other threads holding the same buffer. As long as workers operate on different parts of the same SharedArrayBuffer, all operations are thread-safe.

The addition of Atomics gives developers the necessary tools to safely and predictably operate on the same memory location by adding atomics operations and the ability to wait and wake in JavaScript. The new feature allows developers to build more powerful web applications. As a simple illustration of the feature, here’s the producer-consumer problem implemented with shared memory:

// UI thread
var sab = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT * 1000);
var i32 = new Int32Array(sab);
producer.postMessage(i32);
consumer.postMessage(i32);

// producer.js – a worker that keeps producing non-zero data
onmessage = ev => {
  let i32 = ev.data;
  let i = 0;
  while (true) {
    let curr = Atomics.load(i32, i);             // load i32[i]
    if (curr != 0) Atomics.wait(i32, i, curr);   // wait till i32[i] != curr
    Atomics.store(i32, i, produceNonZero());     // store in i32[i]
    Atomics.wake(i32, i, 1);                     // wake 1 thread waiting on i32[i]
    i = (i + 1) % i32.length;
  }
}

// consumer.js – a worker that keeps consuming and replacing data with 0
onmessage = ev => {
  let i32 = ev.data;
  let i = 0;
  while (true) {
    Atomics.wait(i32, i, 0);                     // wait till i32[i] != 0
    consumeNonZero(Atomics.exchange(i32, i, 0)); // exchange value of i32[i] with 0
    Atomics.wake(i32, i, 1);                     // wake 1 thread waiting on i32[i]
    i = (i + 1) % i32.length;
  }
}

Shared memory will also play a key role in the upcoming WebAssembly threads.

Built with the community

We hope you enjoy the JavaScript performance update in Microsoft Edge and are as excited as we are to see the progress on WebAssembly and shared memory pushing the performance boundary of the web. We love to hear user feedback and are always on the lookout for opportunities to improve JavaScript performance on the real-world web. Help us improve and share your thoughts with us via @MSEdgeDev and @ChakraCore, or on the ChakraCore repo on GitHub.

― Limin Zhu, Program Manager, Chakra Team

Notes from the Node.js VM Summit

The Chakra team was delighted to host the third Node.js VM Summit recently here at Microsoft. These meetings bring together JavaScript VM vendors, Node collaborators, and CTC members including participants from Google, IBM, Intel, Microsoft, nearForm, and NodeSource. This group is currently focused on addressing ABI compatibility problems facing the Node native module ecosystem. Today, we’d like to share a quick recap of the meeting and some of the next steps.

Image showing VM Summit attendees in a conference room at Microsoft

Node Native Modules

Node native modules are written in C/C++ and directly depend on V8 and/or NAN APIs. These modules may need to be updated or re-compiled for every major Node.js release because of ABI/API changes. This adds to the maintenance burden for native module authors, and it presents a barrier to upgrading Node versions in production deployments for module consumers. The first half of the session included presentations from two complementary projects intended to help with this: Fast FFI (foreign function interface) and N-API (Node.js API).

Fast FFI

Fast FFI aims to automatically project native APIs into JavaScript which will allow module authors to expose existing native code to Node.js without having to write additional C/C++ code. The FFI presentation discussed its architecture and use cases. Currently, this project is in its early stages of prototyping and more work is required for it to be considered as a Node.js feature. We will update the community as we make progress and keep evaluating its readiness.

N-API

N-API provides an ABI-stable abstraction layer for native APIs in JavaScript VMs. This helps native module authors compile their module once per platform and architecture and run it on any N-API compliant version of Node.js. The N-API presentation included a demonstration of some ported modules and performance comparisons with their V8/NAN equivalents. N-API has made great progress over the last year and is now ready for broader usage and feedback. As reflected in the graph below, which shows the native API usage pattern among the top 30 most-depended-on native modules, we have 100% N-API coverage for V8 APIs used in 5 or more modules. The core team has successfully ported Node-Sass, Canvas, LevelDown, Nanomsg and IoTivity to use N-API.

Chart showing N-API coverage in the top 30 most-depended-on native modules. 195 total V8 APIs are used. Of these, 140 have an equivalent N-API.

N-API coverage for popular native APIs

VM Summit attendees evaluated the progress and readiness of both projects, and agreed to consider a pull request to include N-API in Node 8.0 as an experimental feature, making it easier for native module authors to try out N-API and enabling the team to test it broadly and gather more feedback. The Fast FFI project will continue to evolve and may be included in a future Node.js release.

Key next steps for N-API project:

  • Submit N-API Pull Request to Node.js master to be included in Node.js 8.0
  • After stabilization, port it to Node.js v6.9 LTS and Node-ChakraCore
  • Broader community engagement to identify API gaps
  • Performance fine tuning

Other topics:

In the second half of the session the discussion focused on other VM compatibility topics, including compatibility of the Inspector Protocol for debugging and baseline conformance requirements for Node.js VMs. The attendees also agreed to meet again around NodeSummit later this year to evaluate progress on Fast FFI, N-API adoption, feedback, and next steps.

It was a very successful and productive VM Summit, and we look forward to feedback on the forthcoming N-API PR. You can review the full recording of the session here, review the raw notes from the meeting, or take an early look at the API here. Thanks to all the attendees for making the VM Summit awesome!

Arunesh Chandra, Senior Program Manager, Chakra

Node-ChakraCore and VM Neutrality in Node.js

Back when Node.js was launched, the device landscape was simpler, and using a single JavaScript VM helped provide Node.js the focus to grow rapidly. Today, there is a proliferation in the variety of device types, each with differing resource constraints. In this device context, we believe that enabling VM neutrality in Node.js and providing choice to developers across various device types and constraints are key steps to help the Node.js ecosystem continue to grow.

Node-ChakraCore started on this path last year by bringing Node.js to a new platform, Windows 10 IoT Core. As we chart the course for future of Node.js and promote the mission of Node.js everywhere, we’re excited to announce recent developments towards VM Neutrality, starting with Node.js API prototypes and progress towards cross-platform support in Node-ChakraCore.

The case for VM Neutrality

This growing trend of projects trying to port Node.js onto other VMs started a community discussion over whether Node.js should be VM neutral in the future, which led to this year’s VM Summit as an attempt to better understand the technical issues involved in achieving VM Neutrality.

VM Neutrality envisions Node.js as a ubiquitous application framework, highly optimized for any platform, device, or workload.  It describes a state of Node.js where Node Core is neutral to the JS Engine that is powering it via open standards & APIs, enabling a more formal interface between Node.js and the VM that powers it.

In a VM-neutral world, the Node.js ecosystem, especially native modules, will continue to seamlessly work on different JS VMs optimized for a variety of different devices and workloads. The benefits of VM Neutrality are many-fold:

  1. Reach/Ubiquity: Allows Node.js to target new devices and workloads with high optimization.
  2. Developer Productivity: Beneficial for developers as it provides cost savings by extending the reach of Node ecosystem and its ability to reuse code to target more devices and workloads.
  3. Standardized effort: Standardizes multi-VM efforts from already existing forks of Node.js

Node.js API (NAPI)

Node.js API (NAPI) is a community project being driven by the API working group, along with ChakraCore and others, with the goal of providing a stable Node API for native module developers. NAPI aims to provide ABI compatibility guarantees across different Node versions and also across different Node VMs – allowing NAPI-enabled native modules to just work across different versions and flavors of Node.js without recompilation.

NAPI is a stepping stone towards VM-Neutrality. After NAPI is officially available for native modules, it can be used inside Node core to achieve VM Neutrality and enable Node to seamlessly support multiple JavaScript engines. You can check out the first demo of a working NAPI prototype today at NodeInteractive Austin 2016!

Node-ChakraCore

In addition to supporting the NAPI efforts, Node-ChakraCore has been making progress on its cross-platform support and diagnostics innovation.

Update on Cross-platform support (Linux and macOS)

Bringing cross-platform support to Node-ChakraCore has been a key goal on our roadmap from the beginning. At NodeSummit we announced experimental support for Node-ChakraCore on Linux, and today that same experimental support is available on macOS as well.

Screen capture showing an http-server sample with Node-ChakraCore displaying "Hello World" on macOS.

Running http-server sample on macOS with Node-ChakraCore

These early experimental builds have been validated against Node.js unit tests as well as some synthetic tests. We are looking for help hardening Node-ChakraCore further, so if you run into bugs while trying out these builds, please let us know on our issues page. Performance is a work in progress, with ongoing investments in the underlying ChakraCore engine. Please stay tuned for updates in this regard from the @ChakraCore team.

Time-Travel Debugging with Reverse Continue

One of the key internal guiding principles for us on the Node-ChakraCore project is to bring innovation that helps advance the Node.js ecosystem. In that vein, we demoed a preview of Time-Travel debugging using VSCode earlier this year. This feature allows developers to capture a trace of a running Node process, and then visualize the code execution inside of VSCode by stepping back in time. The “Step Back” functionality not only allows developers to understand the code execution path but also lets them inspect the runtime context using the typical debugger UI affordances.

Today, we are happy to announce that Time-Travel Debugging in VSCode on Windows has reached the Beta milestone, with better reliability, performance and a new feature called “Reverse Continue.”  This new feature is designed to work exactly like the “Debug Continue”, except that it goes backwards and is only available while debugging a TTD trace.

Animated gif of VSCode showing Time-Travel debugging with Step Back and Reverse Continue

VSCode showing Time-Travel debugging with Step Back and Reverse Continue

In addition, we are also making available a preview of Time-Travel debugging on our experimental Linux and macOS support.

Nightly Builds

The Node.js build system has now started producing nightly builds of Node-ChakraCore. Stable milestone builds will still be published on the Node-ChakraCore release page on GitHub.

Head over to the Node.js nightly build page and try it out.

Screen capture of the download page on nodejs.org showing chakracore-nightly build

Download page on nodejs.org showing chakracore-nightly build

What’s Next?

Our main focus in the short term is to continue working with the community to advance the NAPI project and VM-Neutrality goals. We are also working on performance improvements for cross-platform scenarios. Follow @ChakraCore on Twitter to stay tuned for updates on that work.

Get involved

We always love to hear from people trying out Node-ChakraCore for their own projects. Today’s update makes it easier than ever before to get hands-on with Node-ChakraCore on the platform of your choice. Take a look at the instructions to try out Node-ChakraCore and Time-Travel debugging with your existing apps, and let us know if you run into problems on our issues page.

If you want to get more involved with the Node.js API efforts, here are some ways to get involved:

  • Convert a native module to use ABI stable APIs and report issues on conversion and performance;
  • Port ABI stable APIs to your fork of Node and let us know if there are gaps;
  • Review the roadmap and see how you can help accelerate this project.

We’re excited to keep driving towards a VM-neutral future for the Node.js platform, and can’t wait to hear what you think!

― Arunesh Chandra, Senior Program Manager, ChakraCore

A peek into the WebAssembly Browser Preview

Following the introduction of asm.js, we have been working with other browser vendors, including Mozilla, Google, and Apple, along with the rest of the WebAssembly Community Group, to push the performance boundary of the web with WebAssembly. WebAssembly is a new, portable, size- and load-time-efficient binary compiler target, which promises near-native performance on the web.

As the community group comes close to consensus over the final design of the MVP (minimum viable product) release, we are pleased to share that the WebAssembly standard is in browser preview and invite the community to provide feedback on the WebAssembly design. We’re continuing to make progress towards a public preview implementation in Microsoft Edge, and today we’re excited to demonstrate WebAssembly in our internal builds.

The Browser Preview

The WebAssembly browser preview is effectively a release candidate for MVP, and includes the latest:

  • Binary format, generalized from previous AST formats to a more efficient stack machine format;  a more compact binary format generally means better loading time.
  • Equivalent human-readable text format for the purpose of reading, debugging, and occasionally handwriting WebAssembly.
  • Built-in JavaScript APIs to integrate WebAssembly modules to the web platform.
  • Up-to-date tools to produce WebAssembly modules, such as the Emscripten/Binaryen toolchain to convert C++ source to asm.js to WebAssembly, and WABT to convert between text and binary format.

To give you a taste of what WebAssembly looks like now, here is an example C++ recursive factorial function with its corresponding WebAssembly:

C++ factorial:

int factorial(int n)
{
  if (n == 0)
    return 1;
  else
    return n * factorial(n - 1);
}

WebAssembly factorial function body (binary | text):

20 00    | get_local 0
42 00    | i64.const 0
51       | i64.eq
04 7e    | if i64
42 01    |   i64.const 1
05       | else
20 00    |   get_local 0
20 00    |   get_local 0
42 01    |   i64.const 1
7d       |   i64.sub
10 00    |   call 0
7e       |   i64.mul
0b       | end

The WebAssembly factorial function is extracted from the WebAssembly spec test.

We are eager to hear feedback from the community on WebAssembly. App authors should still expect changes and recompilation of apps for the MVP release, but any feedback from developing an app during the preview will help us make a better standard.

Implementation Progress in Microsoft Edge

We’ve been hard at work developing support for WebAssembly in Microsoft Edge at the open-source ChakraCore project repo. Microsoft Edge and ChakraCore are close to shipping the browser preview, which we expect to come when the full JavaScript APIs are implemented.

To demo the current capability of ChakraCore, we are also excited to showcase the AngryBots demo (with an updated WebAssembly binary) running in an internal build of Microsoft Edge. The demo loads faster than earlier versions compiled to asm.js or older WebAssembly formats, due to a more compact binary and ChakraCore’s new ability to defer parsing WebAssembly functions.

Over the next couple of months, our team will be focused on bringing the browser preview to Microsoft Edge. We look forward to continuing to contribute to the standardization of WebAssembly with the other browser vendors and the community, and would love to hear your thoughts about WebAssembly via @MSEdgeDev and @ChakraCore, or on the ChakraCore repo.

Limin Zhu, Program Manager, Chakra

Bringing ChakraCore to Linux and OS X

In January, we open-sourced ChakraCore, the core of the Chakra JavaScript engine that powers Microsoft Edge and Universal Windows Platform apps. We expressed our ambition to bring our best-in-class, but Windows-only, JavaScript engine to other platforms, with Linux as the prioritized target. Today at NodeSummit, we are delighted to share our progress – the first experimental implementation of the ChakraCore interpreter and runtime on x64 Linux and OS X 10.9+, along with experimental Node.js with ChakraCore (Node-ChakraCore) on x64 Linux. Our development and testing on Linux happens primarily on Ubuntu 16.04 LTS, but the support should easily translate to other modern Linux distributions.

Screen captures showing ChakraCore running inside terminal windows on Ubuntu 16.04 and OS X

ChakraCore on Linux and OS X

ChakraCore, and by extension Node-ChakraCore, on other platforms has the same support for the broad set of JavaScript features as its Windows counterpart, as measured by the official ECMAScript conformance suite, test262 (with the exception of Intl features, which are in progress). The current cross-platform implementation doesn’t yet support JIT compilation or concurrent and partial GC, features which we will enable as development progresses further.

Building cross-platform applications with ChakraCore

Bringing ChakraCore to Linux and OS X is all about giving developers the ability to build cross-platform applications with the engine. The JavaScript Runtime (JSRT) APIs to host ChakraCore were originally designed for Windows, so they inevitably had a few Windows dependencies – for example, the Win32 usage of UTF16-LE encoding for strings, where other platforms might use UTF8-encoded strings. As part of enabling cross-platform support, some of the JSRT APIs have been refactored and redesigned to allow developers to write platform-agnostic code to embed ChakraCore. Maintaining backwards compatibility is a core principle that we follow – applications written with the previous set of JSRT APIs on Windows will continue to work as-is. You can build the engine and write a Hello-world app to get started with ChakraCore on Windows, Linux, or OS X.

Node-ChakraCore on Linux

It has been a little over a year since we started working on Node-ChakraCore, with the intention of growing the reach of the Node.js ecosystem. One of the fundamental goals of this project from the beginning has been to ensure that the existing ecosystem continues to just work, in an open and cross-platform way, exactly like Node.js.

Earlier this year, shortly after open-sourcing ChakraCore, we submitted a pull request to Node.js mainline to enable Node.js to work with ChakraCore. Today, we are taking another major step in the Node-ChakraCore journey. As part of enabling Linux support for ChakraCore, we are also sharing the first preview of Node-ChakraCore on Linux at our repo. This is a very early step toward full Linux support, but we are excited to share the progress.

Screen capture showing an http-server sample running with ChakraCore on Ubuntu

Running http-server sample on Node-ChakraCore

Coming up next

This is just the beginning of our cross-platform efforts, and we will keep enhancing our cross-platform support. We will continue to update the ChakraCore roadmap as we make progress. We’re currently working on Intl support, so that ChakraCore has feature parity across platforms. Also high on our list of priorities is ensuring that non-Windows ChakraCore users experience the same top-tier JavaScript performance available on Windows today. To enable that, we’ll bring the fully-capable ChakraCore JIT compiler and concurrent and partial GC from Windows to other platforms. These features will bring improved performance to Node.js and other applications hosting ChakraCore as well.

To the community

Our cross-platform journey has been made possible by great support from the community. We are grateful for the advice and feedback we received on ChakraCore issues and Gitter discussions, as well as the many high-quality pull requests that we have accepted to date. We look forward to seeing more developers contributing to the projects, and encourage developers to try out our experimental Linux and OS X support, and even build upon what we have and submit PRs to port it to platforms of their choice. As always, we are eager to hear your feedback – you can always reach us by opening issues on the ChakraCore or Node-ChakraCore repos or by sending us a tweet @ChakraCore.

― Limin Zhu, Program Manager, Chakra
― Arunesh Chandra, Sr. Program Manager, Chakra

JavaScript performance updates in Microsoft Edge and Chakra

Since we first started work on the Chakra JavaScript engine, it has been our mission and priority to make JavaScript faster for the real world web, and to continuously improve the experience of browsing in Microsoft Edge (and previously Internet Explorer). We’ve had a very busy year, with highlights like taking Chakra open-source, delivering ES2015 & beyond, experimenting with WebAssembly and more!

Through all of this, performance remains a principal theme – our team regularly looks at customer feedback and telemetry data to find potential patterns that slow down user experience, and tunes Chakra to provide substantial boosts to the web. Today, we’d like to share a few recent Chakra improvements coming up in the Windows 10 Anniversary Update, which you can preview today on recent Windows 10 Insider Preview builds.

Memory optimizations in functions

One common code pattern on the web is an abundance of small functions in scripts. This isn’t particularly surprising, as it is common developer practice to break down complex coding logic into many smaller pieces. The practice reduces repetitiveness and makes reading, debugging and testing the code much easier. Even better, it can have a performance advantage, as smaller functions are generally easier to inline and the profiler can target the ‘hottest’ ones to produce JIT’ed code.

To optimize for this pattern, especially in terms of memory consumption, Chakra has refactored the metadata format used for each function (internally referred to as FunctionBody). Based on data, pointers in FunctionBody that point to rarely used information have been moved to a dynamic auxiliary structure and are not instantiated, and do not consume memory, unless necessary. A good example is the asm.js-related data, which is not applicable for most functions. Most of the 32-bit counters in FunctionBody were also observed to rarely hold values over 256, such as the variable count or object literal count within a function. These counters have therefore been replaced by a compact structure that uses a single byte for each counter and can be promoted to full 32-bit width if needed. Combined across a good number of functions, these seemingly subtle optimizations can make a big difference in reducing memory overhead (we’ll share our experiments later).

Diagram showing pointers and counters in FunctionBody moved to memory-saving structures

Many pointers and counters in FunctionBody are moved to memory-saving structures
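
As a conceptual JavaScript sketch of the compact-counter idea (Chakra’s actual structure is implemented in C++ inside the engine; the class below is purely illustrative), counters can start out as single bytes and be promoted to full 32-bit storage only when a value overflows:

// Conceptual sketch: byte-sized counters that promote to 32-bit on overflow.
class CompactCounters {
  constructor(count) {
    this.values = new Uint8Array(count);           // one byte per counter to start
  }
  increment(index) {
    if (this.values instanceof Uint8Array && this.values[index] === 255) {
      this.values = Uint32Array.from(this.values); // promote all counters to 32-bit
    }
    this.values[index]++;
  }
}

const counters = new CompactCounters(4);
for (let i = 0; i < 300; i++) counters.increment(0);
console.log(counters.values[0]); // 300, now stored in the promoted Uint32Array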

Deferred parsing for event-handlers

The modern web is a very interactive place, and inside almost every page there lies an event system with plenty of event-handlers defining the behavior of button-clicks, mouse-overs and many other events. However, unless the associated events are triggered, event-handlers are basically dead code, and in fact more often than not many of them end up unused during a browsing session. Just think about how many buttons or textboxes you left untouched the last time you visited your Facebook home page, and imagine these controls and others all have event-handlers associated with them. Taking advantage of the previously introduced deferred-parsing feature, Microsoft Edge and Chakra now delay the full parsing and bytecode generation of event-handlers until they are first called. Chakra uses a smaller representation for partially-parsed handlers, so the optimization not only improves start-up time but also saves memory from any unused handlers.

The combination of deferred parsing for event-handlers and the memory optimizations in FunctionBody can together shrink the memory footprint of each page by a fair amount. Though the actual saving depends highly on the pages being loaded and is thus quite hard to generalize, our experiment on a small sample of top visited sites shows that these optimizations, along with other smaller tweaks, typically reduce memory usage by about 4 to 10% per page opened in Microsoft Edge, with cases where the saving reaches over 20%.

Synthetic JavaScript benchmarks

All of our performance efforts are driven by data from the real world web and aim to enhance the actual end user experience. However, we are still frequently asked about JavaScript benchmark performance, and while it doesn’t always correspond directly to real-world performance, it can be useful at a high level and to illustrate improvement over time. Let’s look at where Microsoft Edge stands at the moment, as compared to other major browsers on two established JavaScript benchmarks maintained by Google and Apple respectively. The results below show Microsoft Edge continuing to lead both benchmarks.

Chart comparing browser performance on Google Octane and Apple Jetstream benchmarks. Octane scores: Edge 31187, Chrome 25910, Firefox 24836; Jetstream score: Edge 233.7, Chrome 168.3, Firefox 146.6.

All measures collected on 64-bit browsers running 64-bit Windows 10 Insider Preview
System Info: Dell Optiplex 7010 Intel(R) Core(TM) i5-3475S CPU @ 2.90GHz (4 cores) 4GB RAM

We are very excited to share our performance effort to reduce memory footprint and start-up time with you. The road to better performance never ends, and we’re as committed to making JavaScript faster as we always have been. There will be more improvements in the summer months, so stay tuned for additional updates! Until then, we love feedback and rely on it to help us improve. You can always get in touch on Twitter via @MSEdgeDev and @ChakraCore, or the ChakraCore repo on GitHub.

Limin Zhu, Program Manager, Chakra Team

 

Previewing ES6 Modules and more from ES2015, ES2016 and beyond

Chakra, the recently open-sourced JavaScript engine that powers Microsoft Edge and Universal Windows applications, has been pushing the leading edge of ECMAScript language support.  Most of ES2015 (aka ES6) language support is already available in Edge, and last week’s Windows Insider Preview build 14342 brings more ES6 capabilities including modules, default parameters, and destructuring. We’re not stopping there – Edge also supports all ES2016 (aka ES7) proposals – the exponentiation operator and Array.prototype.includes – as well as future ECMAScript proposals such as Async Functions and utility methods like Object.values/entries and String.prototype.padStart/padEnd.

ES6 Modules

Modules let you write componentized and sharable code. Without complete support from any browser, developers have already started to get a taste of ES6 modules by transpiling with tools such as TypeScript or Babel. Edge now has early support for ES6 modules behind an experimental flag.

Modules aren’t exactly new to JavaScript, with formats such as Asynchronous Module Definition (AMD) and CommonJS predating ES6. The key value of having modules built in to the language is that JavaScript developers can easily consume or publish modular code under one unified module ecosystem that spans both client and server. ES6 modules feature a straightforward and imperative-style syntax, and also have a static structure that paves the way for performance optimizations and other benefits such as circular dependency support. There are times when modules need to be loaded dynamically – for example, some modules might only be useful, and thus loaded, if certain conditions are met at execution time. For such cases, there will be a module loader, which is still under discussion in the standardization committee and will be specified/ratified in the future.

ES6 Modules in Microsoft Edge

To light up ES6 modules and other experimental JavaScript features in Edge, you can navigate to about:flags and select the “Enable experimental JavaScript features” flag.

Screen capture showing the "Experimental JavaScript features" flag at about:flags

The “Experimental JavaScript features” flag at about:flags

As a first step, Edge and Chakra now support all declarative import/export syntax defined in ES6 with the exception of namespace import/export (import * as name from "module-name" and export * from "module-name"). To load modules in a page, you can use the <script type="module"> tag. Here is an example of using a math module:

/* index.html */
...
<script type='module' src='./app.js'>
...

/* app.js */
import { sum } from './math.js';
console.log(sum(1, 2));

/* math.js */
export function sum(a, b) { return a + b; }
export function mult(a, b) { return a * b; }

Implementation: static modules = faster lookup

One of the best aspects of ES6’s modules design is that all import and export declarations are static. The spec syntactically restricts all declarations to the global scope in the module body (no imports/exports in if-statement, nested function, eval, etc.), so all module imports and exports can be determined during parsing and will not change in execution.
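
For example, an import declaration is only valid at the top level of a module; placing one inside a conditional is a syntax error:

// Valid: import declarations live at the top level of the module.
import { sum } from './math.js';

// Invalid: a conditional import is a syntax error in ES6 modules.
// if (needsMath) {
//   import { sum } from './math.js'; // SyntaxError
// }

console.log(sum(2, 3)); // 5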

From an implementation perspective, Chakra takes advantage of this static nature in several ways, but the key benefit is optimizing the lookup of import and export binding identifiers. In Microsoft Edge, after parsing the modules, Chakra knows exactly how many exports a module declares and what they are all named, so it can allocate the physical storage for the exports before execution. For an import, Chakra resolves the import name and creates a link back to the export it refers to. Because Chakra knows the exact physical storage location for imports and exports, it can store the location in bytecode and skip dynamically looking up the export name during execution. Chakra can bypass much of the normal property lookup and cache mechanisms and instead directly fetch the stored location to access imports or exports. The end result is that working with an imported object is faster than looking up properties in ordinary objects.

Going forward with modules

We have taken an exciting first step to support ES6 modules. As the experimental status suggests, the work towards turning ES6 modules on by default in Edge is not fully done yet. Module debugging in the F12 Developer Tools is currently supported in internal builds and will be available for public preview soon. Our team is also working on namespace import/export to fully support the declarative syntax, and will look into the module loader once its specification stabilizes. We also plan to add new JSRT APIs to support modules for Chakra embedders outside of Edge.

More ES6 Language Features

Microsoft Edge has led the way on a number of ES6 features. Chakra has most of ES6 implemented and on by default including the new syntax like let and const, classes, arrow functions, destructuring, rest and spread parameters, template literals, and generators as well as all the new built-in types including Map, Set, Promise, Symbol, and the various typed arrays.

In current preview builds, there are two areas of ES6 that are not enabled by default – well-known symbols and Proper Tail Calls. Well-known symbols need to be performance optimized before Chakra can enable them – we look forward to delivering this support in an upcoming Insider flight later this year.

The future of Proper Tail Calls, on the other hand, is somewhat in doubt – PTC requires a complex implementation which may result in performance and standards regressions in other areas. We’re continuing to evaluate this specification based on our own implementation work and ongoing discussions at TC39.

ES2016 & Beyond with the New TC39 Process

TC39, the standards body that works on the ECMAScript language, has a new GitHub-driven process and yearly release cadence. This new process has been an amazing improvement so far for a number of reasons, but the biggest is that it makes it easier for implementers to begin work early on pieces of a specification that are stable, and the specifications themselves are much smaller. As such, Chakra got an early start on ES2016 and is now shipping a complete ES2016 implementation. ES2016 was finalized (though not ratified) recently and includes two new features: the exponentiation operator and Array.prototype.includes.

The exponentiation operator is a nice syntax for doing `Math.pow` and it uses the familiar `**` syntax as used in a number of other languages. For example, a polynomial can be written like:

let result = 5 * x ** 2 - 2 * x + 5;

Certainly a nice improvement over Math.pow, especially when more terms are present.
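
For comparison, the same polynomial without the operator reads as follows (x is assumed to be defined elsewhere, as in the example above):

let equivalent = 5 * Math.pow(x, 2) - 2 * x + 5;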

Chakra also supports Array.prototype.includes, which is a nice ergonomic improvement over the existing Array.prototype.indexOf method. Includes returns `true` or `false` rather than an index, which makes usage in Boolean contexts much easier. `includes` also handles NaN properly, finally allowing an easy way to detect whether NaN is present in an array.
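
For example:

[1, 2, NaN].includes(NaN); // true
[1, 2, NaN].indexOf(NaN);  // -1, because indexOf uses strict equality and NaN !== NaN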

Meanwhile, the ES2017 specification is already shaping up, and Chakra has a good start on some of those features as well, namely Object.values and entries, String.prototype.padStart and padEnd, and SIMD. We’ve blogged about SIMD in the past, though recently Chakra has made progress in making SIMD available outside of asm.js. With the exception of Object.values and entries, these features are only available with the experimental JavaScript features flag enabled.

Object.values & Object.entries are very handy partners to Object.keys. Object.keys gets you an array of keys of an object, while Object.values gives you the values and Object.entries gives you the key-value pairs. The following should illustrate the differences nicely:

let obj = { a: 1, b: 2 };
Object.keys(obj);
// [ 'a', 'b' ]
Object.values(obj);
// [1, 2]
Object.entries(obj);
// [ ['a', 1], ['b', 2] ]

String.prototype.padStart and String.prototype.padEnd are two simple string methods to pad out the left or right side of a string with spaces or other characters. Not having these built-in has had some rather dire consequences recently as a module implementing a similar capability was removed from NPM. Having these simple string padding APIs available in the standard library and avoiding the additional dependency (or bugs) will be very helpful to a lot of developers.
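
For example:

'5'.padStart(3, '0'); // '005'
'5'.padEnd(3, '0');   // '500'
'abc'.padStart(6);    // '   abc' – pads with spaces by default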

Chakra is also previewing SIMD, with the entire API surface area in the stage 3 SIMD proposal completed, as well as a fully-optimized SIMD implementation integrated with asm.js. We are eager to hear feedback from you on how useful these APIs are for your programs.
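
As a minimal sketch of the proposed SIMD API (experimental and subject to change, so treat the exact method names as they stand in the current stage 3 proposal):

// Lane-wise addition of two 4-lane single-precision vectors.
var a = SIMD.Float32x4(1, 2, 3, 4);
var b = SIMD.Float32x4(5, 6, 7, 8);
var c = SIMD.Float32x4.add(a, b);    // Float32x4(6, 8, 10, 12)
SIMD.Float32x4.extractLane(c, 0);    // 6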

You can try ES2015 modules and other new ES2015, ES2016, and beyond features in Microsoft Edge, starting with the latest Windows Insider Preview and share your thoughts and feedback with us on Twitter at @MSEdgeDev and @ChakraCore or on Connect. We look forward to hearing your input!

― Taylor Woll, Software Engineer
― Limin Zhu, Program Manager
― Brian Terlson, Program Manager
