Table of Contents Preface JavaScript Introduction to JavaScript ECMAScript ES6 ES2016 ES2017 ES2018 Coding style Lexical Structure Variables Types Expressions Prototypal inheritance Classes Exceptions Semicolons Quotes Template Literals Functions Arrow Functions Closures Arrays Loops Events The Event Loop Asynchronous programming and callbacks Promises Async and Await Loops and Scope
Timers this Strict Mode Immediately-invoked Function Expressions (IIFE) Math operators The Math object ES Modules CommonJS Glossary CSS Introduction to CSS CSS Grid Flexbox CSS Custom Properties PostCSS How to center things in modern CSS The CSS margin property CSS System Fonts Style CSS for print CSS Transitions CSS Animations Web Platform The DOM Progressive Web Apps Service Workers XHR Fetch API Channel Messaging API Cache API Push API Notifications API IndexedDB Selectors API
Web Storage API Cookies History API Efficiently load JavaScript with defer and async The WebP Image Format SVG Data URLs CORS Web Workers requestAnimationFrame Console API WebSockets The Speech Synthesis API The DOCTYPE v8 The Canvas API Frontend Dev Tools Webpack Parcel Babel Yarn Jest ESLint Prettier Browser DevTools Emmet How to use Visual Studio Code React and Redux React JSX React Router Styled Components Redux
Redux Saga Setup an Electron app with React Next.js Vue.js Introduction to Vue Vue First App The Vue CLI DevTools Configuring VS Code for Vue Development Components Single File Components Templates Styling components using CSS Directives Events Methods Watchers Computed Properties Methods vs Watchers vs Computed Properties Props Slots Filters Communication among components Vuex Vue Router Node.js Introduction to Node A brief history of Node How to install Node How much JavaScript do you need to know to use Node? Differences between Node and the Browser Run Node.js scripts from the command line How to exit from a Node.js program
How to read environment variables Node hosting options Use the Node REPL Pass arguments from the command line Output to the command line Accept input from the command line Expose functionality from a Node file using exports npm Where does npm install the packages How to use or execute a package installed using npm The package.json file The package-lock.json file Find the installed version of an npm package How to install an older version of an npm package How to update all the Node dependencies to their latest version Semantic versioning rules Uninstalling npm packages Global or local packages npm dependencies and devDependencies npx The event loop nextTick setImmediate The Node Event Emitter Build an HTTP server Making HTTP requests Axios Websockets HTTPS, secure connections File descriptors File stats File paths Reading files
Writing files Working with folders The fs module The path module The os module The events module The http module Streams Working with MySQL Difference between development and production Express.js Express overview Request parameters Sending a response Sending a JSON response Manage Cookies Work with HTTP headers Redirects Routing CORS Templating The Pug Guide Middleware Serving static files Send files Sessions Validating input Sanitizing input Handling forms File uploads in forms An Express HTTPS server with a self-signed certificate Setup Let's Encrypt for Express JavaScript Libraries
Axios The Beginner's Guide to Meteor Moment.js GraphQL GraphQL Apollo Git and GitHub Git GitHub A Git cheat sheet Deployment, APIs and Services Netlify Firebase Hosting How to authenticate to any Google API Interact with the Google Analytics API using Node.js Glitch, a great Platform for Developers Airtable API for Developers Electron Networking The HTTP protocol The HTTPS protocol HTTP vs HTTPS Caching in HTTP The HTTP Status Codes List The curl guide to HTTP requests What is an RFC? The HTTP Response Headers List The HTTP Request Headers List How HTTP requests work HOW-TOs How to append an item to an array in JavaScript How to check if a JavaScript object property is undefined How to deep clone a JavaScript object
How to convert a string to a number in JavaScript How to format a number as a currency value in JavaScript How to get the current timestamp in JavaScript How to redirect to another web page using JavaScript How to remove an item from an Array in JavaScript How to remove a property from a JavaScript object How to check if a string contains a substring in JavaScript How to uppercase the first letter of a string in JavaScript How to replace all occurrences of a string in JavaScript How to trim the leading zero in a number in JavaScript How to inspect a JavaScript object How to generate random and unique strings in JavaScript How to make your JavaScript functions sleep How to check if a file exists in Node.js How to validate an email address in JavaScript How to get the unique properties of a set of object in a JavaScript array How to check if a string starts with another in JavaScript How to create a multiline string in JavaScript How to get the current URL in JavaScript How to initialize a new array with values in JavaScript How to create an empty file in Node.js How to remove a file with Node.js How to wait for the DOM ready event in plain JavaScript How to add a class to a DOM element How to loop over DOM elements from querySelectorAll How to generate a random number between two numbers in JavaScript How to remove a class from a DOM element How to check if a DOM element has a class How to change a DOM node value How to add a click event to a list of DOM elements returned from querySelectorAll How to get the index of an iteration in a for-of loop in JavaScript
Preface Welcome! Thank you for getting this ebook. I hope its content will help you achieve what you want. Flavio You can reach me via email at [email protected], on Twitter @flaviocopes. My website is flaviocopes.com.
JavaScript
Introduction to JavaScript JavaScript is one of the most popular programming languages in the world, and now widely used also outside of the browser. The rise of Node.js in the last few years unlocked backend development, once the domain of Java, Ruby, Python, PHP, and more traditional server-side languages. Learn all about it! Introduction A basic definition of JavaScript JavaScript versions
Introduction JavaScript is one of the most popular programming languages in the world. Created 20 years ago, it has come a very long way since its humble beginnings. Being the first - and the only - scripting language supported natively by web browsers, it simply stuck. In the beginning, it was not nearly as powerful as it is today, and it was mainly used for fancy animations and the marvel known at the time as DHTML. As the needs of the web platform grew, JavaScript had to grow as well, to accommodate the needs of one of the most widely used ecosystems in the world. Many things were introduced in the platform, with browser APIs, but the language grew quite a lot as well. JavaScript is now widely used outside of the browser, too. The rise of Node.js in the last few years unlocked backend development, once the domain of Java, Ruby, Python, PHP and more traditional server-side languages. JavaScript is now also the language powering databases and many more applications, and it's even possible to develop embedded applications, mobile apps, TV apps and much more. What started as a tiny language inside the browser is now the most popular language in the world.
A basic definition of JavaScript
JavaScript is a programming language that is:
high level: it provides abstractions that allow you to ignore the details of the machine it's running on. It manages memory automatically with a garbage collector, so you can focus on the code instead of managing memory locations, and provides many constructs which allow you to deal with highly powerful variables and objects.
dynamic: as opposed to static programming languages, a dynamic language executes at runtime many of the things that a static language does at compile time. This has pros and cons, and it gives us powerful features like dynamic typing, late binding, reflection, functional programming, object runtime alteration, closures and much more.
dynamically typed: a variable does not enforce a type. You can reassign any type to a variable, for example assigning an integer to a variable that holds a string.
weakly typed: as opposed to strong typing, weakly (or loosely) typed languages do not enforce the type of an object, allowing more flexibility but denying us type safety and type checking (something that TypeScript and Flow aim to improve). See the small sketch after this list.
interpreted: it's commonly known as an interpreted language, which means that it does not need a compilation stage before a program can run, as opposed to C, Java or Go for example. In practice, browsers do compile JavaScript before executing it, for performance reasons, but this is transparent to you: there is no additional step involved.
multi-paradigm: the language does not enforce any particular programming paradigm, unlike Java for example, which forces the use of object-oriented programming, or C, which forces imperative programming. You can write JavaScript using an object-oriented paradigm, using prototypes and the new (as of ES6) class syntax. You can write JavaScript in a functional programming style, with its first-class functions, or even in an imperative style (C-like).
In case you're wondering, JavaScript has nothing to do with Java; it's a poor name choice, but we have to live with it.
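As a small sketch of the "dynamically typed" and "weakly typed" points above (the values are made up for illustration):

let value = 'a string'
value = 42 // no declared type, so reassigning a different type is allowed
console.log(typeof value) // 'number'

// weak typing: operands are coerced automatically
console.log('2' + 2) // '22' - the number is coerced to a string and concatenated
console.log('2' * 2) // 4 - the string is coerced to a number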
JavaScript versions Let me introduce the term ECMAScript here. We have a complete guide dedicated to ECMAScript where you can dive into it more, but to start with, you just need to know that ECMAScript (also called ES) is the name of the JavaScript standard. JavaScript is an implementation of that standard. That's why you'll hear about ES6, ES2015, ES2016, ES2017, ES2018 and so on. For a very long time, the version of JavaScript that all browsers ran was ECMAScript 3. Version 4 was canceled due to feature creep (they were trying to add too many things at once), while ES5 was a huge version for JS.
ES2015, also called ES6, was huge as well. Since then, the ones in charge decided to release one version per year, to avoid having too much time idle between releases, and have a faster feedback loop. Currently, the latest approved JavaScript version is ES2017.
ECMAScript ECMAScript is the standard upon which JavaScript is based, and it's often abbreviated to ES. Discover everything about ECMAScript, and the last features added in ES6, 7, 8
Current ECMAScript version When is the next version coming out? What is TC39 ES Versions ES Next Whenever you read about JavaScript you'll inevitably see one of these terms: ES3 ES5 ES6 ES7
ES8 ES2015 ES2016 ES2017 ECMAScript 2017 ECMAScript 2016 ECMAScript 2015 What do they mean? They are all referring to a standard, called ECMAScript. ECMAScript is the standard upon which JavaScript is based, and it's often abbreviated to ES. Besides JavaScript, other languages implement(ed) ECMAScript, including: ActionScript (the Flash scripting language), which is losing popularity since Flash will be officially discontinued in 2020 JScript (the Microsoft scripting dialect), since at the time JavaScript was supported only by Netscape and the browser wars were at their peak, Microsoft had to build its own version for Internet Explorer but of course JavaScript is the most popular and widely used implementation of ES. Why this weird name? Ecma International is a Swiss standards association that is in charge of defining international standards. When JavaScript was created, it was presented by Netscape and Sun Microsystems to Ecma, and they gave it the name ECMA-262, also known as ECMAScript. This press release by Netscape and Sun Microsystems (the maker of Java) might help figure out the name choice, which might include legal and branding issues by Microsoft, which was in the committee, according to Wikipedia. After IE9, Microsoft stopped branding its ES support in browsers as JScript and started calling it JavaScript (at least, I could not find references to it any more). So as of 201x, the only popular language supporting the ECMAScript spec is JavaScript.
Current ECMAScript version The current ECMAScript version is ES2017, also known as ES8. It was released in June 2017.
When is the next version coming out? Historically JavaScript editions have been standardized during the summer, so we can expect ECMAScript 2019 (named ES2019 or ES10) to be released in summer 2019, but this is just speculation.
What is TC39 TC39 is the committee that evolves JavaScript. The members of TC39 are companies involved in JavaScript and browser vendors, including Mozilla, Google, Facebook, Apple, Microsoft, Intel, PayPal, SalesForce and others. Every standard version proposal must go through various stages, which are explained here.
ES Versions It can be puzzling that an ES version is sometimes referenced by edition number and sometimes by year, and the year happens to be offset by one from the edition number, which adds to the general confusion around JS/ES. Before ES2015, ECMAScript specifications were commonly called by their edition. So ES5 is the official name for the ECMAScript specification update published in 2009. Why does this happen? During the process that led to ES2015, the name was changed from ES6 to ES2015, but since this was done late, people still referenced it as ES6, and the community has not left the edition naming behind - the world is still calling ES releases by edition number. This table should clear things up a bit:

Edition    Official name    Date published
ES9        ES2018           June 2018
ES8        ES2017           June 2017
ES7        ES2016           June 2016
ES6        ES2015           June 2015
ES5.1      ES5.1            June 2011
ES5        ES5              December 2009
ES4        ES4              Abandoned
ES3        ES3              December 1999
ES2        ES2              June 1998
ES1        ES1              June 1997
ES Next ES.Next is a name that always indicates the next version of JavaScript. So at the time of writing, ES9 has been released, and ES.Next is ES10
ES6 ECMAScript is the standard upon which JavaScript is based, and it's often abbreviated to ES. Discover everything about ECMAScript, and the last features added in ES6, aka ES2015 Arrow Functions A new this scope Promises Generators let and const
Classes Constructor Super Getters and setters Modules Importing modules Exporting modules Template Literals Default parameters The spread operator Destructuring assignments Enhanced Object Literals Simpler syntax to include variables Prototype super() Dynamic properties For-of loop Map and Set ECMAScript 2015, also known as ES6, is a fundamental version of the ECMAScript standard. Published 4 years after the previous standard revision, ECMAScript 5.1, it also marked the switch from edition number to year number. So strictly speaking it should not be called ES6 (although everyone calls it that) but ES2015. ES5 was 10 years in the making, from 1999 to 2009, and as such it was also a fundamental and very important revision of the language, but so much time has passed that it's not worth discussing how pre-ES5 code worked.
Since so much time passed between ES5.1 and ES6, the release is full of important new features and major changes in suggested best practices for developing JavaScript programs. To understand how fundamental ES2015 is, just keep in mind that with this version, the specification document went from 250 pages to ~600. The most important changes in ES2015 include: Arrow functions Promises Generators let and const
Classes Modules Multiline strings Template literals Default parameters The spread operator Destructuring assignments Enhanced object literals The for..of loop Map and Set Each of them has a dedicated section in this article.
Arrow Functions Arrow functions since their introduction changed how most JavaScript code looks (and works). Visually, it's a simple and welcome change, from:

const foo = function foo() {
  //...
}

to

const foo = () => {
  //...
}
And if the function body is a one-liner, just: const foo = () => doSomething()
Also, if you have a single parameter, you could write: const foo = param => doSomething(param)
This is not a breaking change, regular function s will continue to work just as before.
A new this scope The this scope with arrow functions is inherited from the context. With regular function s this always refers to the nearest function, while with arrow functions this problem is removed, and you won't need to write var that = this ever again.
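As a small sketch of what this means in practice (the Counter name is made up for illustration):

function Counter() {
  this.count = 0
  setInterval(() => {
    // `this` is inherited from the enclosing Counter context,
    // so there is no need for `var that = this`
    this.count++
    console.log(this.count)
  }, 1000)
}
new Counter()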
Promises Promises (check the full guide to promises) allow us to eliminate the famous "callback hell", although they introduce a bit more complexity (which has been solved in ES2017 with async , a higher level construct). Promises have been used by JavaScript developers well before ES2015, with many different libraries implementations (e.g. jQuery, q, deferred.js, vow...), and the standard put a common ground across differences. By using promises you can rewrite this code setTimeout(function() { console.log('I promised to run after 1s') setTimeout(function() { console.log('I promised to run after 2s') }, 1000) }, 1000)
as const wait = () => new Promise((resolve, reject) => { setTimeout(resolve, 1000) }) wait().then(() => { console.log('I promised to run after 1s') return wait() }) .then(() => console.log('I promised to run after 2s'))
Generators Generators are a special kind of function with the ability to pause itself, and resume later, allowing other code to run in the meantime. The code decides that it has to wait, so it lets other code "in the queue" run, and keeps the right to resume its operations "when the thing it's waiting for" is done. All this is done with a single, simple keyword: yield . When a generator contains that keyword, the execution is halted. A generator can contain many yield keywords, thus halting itself multiple times, and it's identified by the function * syntax (the asterisk is not to be confused with the pointer dereference operator used in lower level programming languages such as C, C++ or Go). Generators enable whole new paradigms of programming in JavaScript, allowing: 2-way communication while a generator is running long-lived while loops which do not freeze your program Here is an example of a generator which explains how it all works.

function *calculator(input) {
  var doubleThat = 2 * (yield (input / 2))
  var another = yield (doubleThat)
  return (input * doubleThat * another)
}
We initialize it with const calc = calculator(10)
Then we start the iterator on our generator: calc.next()
This first iteration starts the iterator. The code returns this object:

{
  done: false,
  value: 5
}
What happens is: the code runs the function, with input = 10 as it was passed in the generator constructor. It runs until it reaches the yield , and returns the content of yield : input / 2 = 5 . So we got a value of 5, and the indication that the iteration is not done (the
function is just paused). In the second iteration we pass the value 7 : calc.next(7)
and what we got back is:

{
  done: false,
  value: 14
}
The value 7 was passed back as the result of the yield expression used to compute doubleThat . Important: you might think input / 2 was the argument, but that's just the return value of the first iteration. We now skip that, and use the new input value, 7 , and multiply it by 2. We then reach the second yield, and that returns doubleThat , so the returned value is 14 . In the next, and last, iteration, we pass in 100:

calc.next(100)
and in return we got:

{
  done: true,
  value: 14000
}
As the iteration is done (no more yield keywords found) and we just return (input * doubleThat * another) which amounts to 10 * 14 * 100 .
let and const var is traditionally function scoped. let is a new variable declaration which is block scoped.
This means that declaring let variables in a for loop, inside an if or in a plain block is not going to let that variable "escape" the block, while var s are hoisted up to the function definition. const is just like let , but the binding cannot be reassigned (the value it points to is not itself made immutable).
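A minimal sketch of the scoping difference (the variable names are made up):

function compare() {
  if (true) {
    var withVar = 'visible in the whole function'
    let withLet = 'visible only inside this block'
    console.log(withLet) // works here
  }
  console.log(withVar) // 'visible in the whole function'
  console.log(withLet) // ReferenceError: withLet is not defined
}
compare()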
In JavaScript moving forward, you'll see little to no var declarations any more, just let and const . const in particular, maybe surprisingly, is very widely used nowadays with immutability
being very popular.
Classes Traditionally JavaScript has been quite unique among mainstream languages in using prototype-based inheritance. Programmers switching to JS from class-based languages found it puzzling, but ES2015 introduced classes, which are just syntactic sugar over the inner workings, yet changed a lot how we build JavaScript programs. Now inheritance is very easy and resembles other object-oriented programming languages:

class Person {
  constructor(name) {
    this.name = name
  }

  hello() {
    return 'Hello, I am ' + this.name + '.'
  }
}

class Actor extends Person {
  hello() {
    return super.hello() + ' I am an actor.'
  }
}

var tomCruise = new Actor('Tom Cruise')
tomCruise.hello()
(the above program prints "Hello, I am Tom Cruise. I am an actor.") Classes do not have explicit class variable declarations, but you must initialize any variable in the constructor.
Constructor
Classes have a special method called constructor which is called when a class is initialized via new .
Super The parent class can be referenced using super() .
Getters and setters A getter for a property can be declared as class Person { get fullName() { return `${this.firstName} ${this.lastName}` } }
Setters are written in the same way: class Person { set age(years) { this.theAge = years } }
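A small sketch of how these are used, combining a getter and a setter in one class (the property names are made up):

class Person {
  constructor(firstName, lastName) {
    this.firstName = firstName
    this.lastName = lastName
  }
  get fullName() {
    return `${this.firstName} ${this.lastName}`
  }
  set age(years) {
    this.theAge = years
  }
}

const tom = new Person('Tom', 'Cruise')
tom.fullName // 'Tom Cruise' - the getter runs, no parentheses needed
tom.age = 54 // the setter runs and stores the value
tom.theAge   // 54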
Modules Before ES2015, there were at least 3 major modules competing standards, which fragmented the community: AMD RequireJS CommonJS ES2015 standardized these into a common format.
Importing modules Importing is done via the import ... from ... construct:

import * as myModule from 'mymodule'
import React from 'react'
import { Component } from 'react'
import { Component as MyComponent } from 'react'
Exporting modules You can write modules and export anything to other modules using the export keyword: export var foo = 2 export function bar() { /* ... */ }
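As a rough sketch of how the two sides fit together (the file names here are hypothetical):

// uppercase.js - a hypothetical module
export default str => str.toUpperCase()
export const version = '1.0.0'

// app.js - a hypothetical consumer
import toUpperCase, { version } from './uppercase.js'
toUpperCase('test') // 'TEST'
version // '1.0.0'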
Template Literals Template literals are a new syntax to create strings: const aString = `A string`
They provide a way to embed expressions into strings, effectively interpolating the values, by using the ${a_variable} syntax:

const aVariable = 'test'
const string = `something ${aVariable}` //something test
You can perform more complex expressions as well: const string = `something ${1 + 2 + 3}` const string2 = `something ${foo() ? 'x' : 'y' }`
and strings can span over multiple lines:

const string3 = `Hey
this string
is awesome!`

Compare how we used to do multiline strings pre-ES2015:

var str = 'One\n' +
  'Two\n' +
  'Three'
See this post for an in-depth guide on template literals
Default parameters
Functions now support default parameters: const foo = function(index = 0, testing = true) { /* ... */ } foo()
The spread operator You can expand an array, an object or a string using the spread operator ... . Let's start with an array example. Given const a = [1, 2, 3]
you can create a new array using const b = [...a, 4, 5, 6]
You can also create a copy of an array using const c = [...a]
This works for objects as well. Clone an object with: const newObj = { ...oldObj }
Using strings, the spread operator creates an array with each char in the string: const hey = 'hey' const arrayized = [...hey] // ['h', 'e', 'y']
This operator has some pretty useful applications. The most important one is the ability to use an array as function argument in a very simple way: const f = (foo, bar) => {} const a = [1, 2] f(...a)
(in the past you could do this using f.apply(null, a) but that's not as nice and readable)
Destructuring assignments
Given an object, you can extract just some values and put them into named variables: const person = { firstName: 'Tom', lastName: 'Cruise', actor: true, age: 54, //made up } const {firstName: name, age} = person
name and age contain the desired values.
The syntax also works on arrays:

const a = [1, 2, 3, 4, 5]
const [first, second, , , fifth] = a
Enhanced Object Literals In ES2015 Object Literals gained superpowers.
Simpler syntax to include variables Instead of doing const something = 'y' const x = { something: something }
you can do const something = 'y' const x = { something }
Prototype A prototype can be specified with:

const anObject = { y: 'y' }
const x = {
  __proto__: anObject
}
For-of loop ES5 back in 2009 introduced forEach() loops. While nice, they offered no way to break, like for loops always did.
ES2015 introduced the for-of loop, which combines the conciseness of forEach with the ability to break:

//iterate over the value
for (const v of ['a', 'b', 'c']) {
  console.log(v);
}

//get the index as well, using `entries()`
for (const [i, v] of ['a', 'b', 'c'].entries()) {
  console.log(i, v);
}
Map and Set Map and Set (and their respective garbage collected WeakMap and WeakSet) are the official implementations of two very popular data structures.
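A quick sketch of the basic API (the keys and values are made up):

const ages = new Map()
ages.set('Flavio', 37)
ages.set('Roger', 9)
ages.get('Flavio') // 37
ages.has('Syd')    // false
ages.size          // 2

const tags = new Set(['js', 'css', 'js'])
tags.size       // 2 - duplicates are ignored
tags.has('css') // true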
ES2016 ECMAScript is the standard upon which JavaScript is based, and it's often abbreviated to ES. Discover everything about ECMAScript, and the last features added in ES2016, aka ES7 Array.prototype.includes() Exponentiation Operator ES7, officially known as ECMAScript 2016, was finalized in June 2016. Compared to ES6, ES7 is a tiny release for JavaScript, containing just two features: Array.prototype.includes Exponentiation Operator
Array.prototype.includes() This feature introduces a more readable syntax for checking if an array contains an element. With ES6 and lower, to check if an array contained an element you had to use indexOf , which checks the index in the array, and returns -1 if the element is not there. Since -1 is evaluated as a true value, you could not do for example if (![1,2].indexOf(3)) { console.log('Not found') }
With this feature introduced in ES7 we can do if (![1,2].includes(3)) { console.log('Not found') }
Exponentiation Operator The exponentiation operator ** is the equivalent of Math.pow() , but brought into the language instead of being a library function. Math.pow(4, 2) == 4 ** 2
This feature is a nice addition for math intensive JS applications. The ** operator is standardized across many languages including Python, Ruby, MATLAB, Lua, Perl and many others.
ES2017 ECMAScript is the standard upon which JavaScript is based, and it's often abbreviated to ES. Discover everything about ECMAScript, and the last features added in ES2017, aka ES8 String padding Object.values() Object.entries() getOwnPropertyDescriptors() In what way is this useful? Trailing commas Async functions Why they are useful A quick example Multiple async functions in series Shared Memory and Atomics ECMAScript 2017, edition 8 of the ECMA-262 Standard (also commonly called ES2017 or ES8), was finalized in June 2017. Compared to ES6, ES8 is a tiny release for JavaScript, but still it introduces very useful features: String padding Object.values Object.entries Object.getOwnPropertyDescriptors() Trailing commas in function parameter lists and calls Async functions Shared memory and atomics
String padding The purpose of string padding is to add characters to a string, so it reaches a specific length. ES2017 introduces two String methods: padStart() and padEnd() . padStart(targetLength [, padString]) padEnd(targetLength [, padString])
Sample usage:

padStart()

'test'.padStart(4)          // 'test'
'test'.padStart(5)          // ' test'
'test'.padStart(8)          // '    test'
'test'.padStart(8, 'abcd')  // 'abcdtest'

padEnd()

'test'.padEnd(4)            // 'test'
'test'.padEnd(5)            // 'test '
'test'.padEnd(8)            // 'test    '
'test'.padEnd(8, 'abcd')    // 'testabcd'
Object.values() This method returns an array containing all the object own property values. Usage: const person = { name: 'Fred', age: 87 } Object.values(person) // ['Fred', 87]
Object.values() also works with arrays:
const people = ['Fred', 'Tony'] Object.values(people) // ['Fred', 'Tony']
Object.entries() This method returns an array containing all the object own properties, as an array of [key, value] pairs.
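For example, reusing the person object from the previous section:

const person = { name: 'Fred', age: 87 }
Object.entries(person) // [['name', 'Fred'], ['age', 87]]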
getOwnPropertyDescriptors() This method returns all own (non-inherited) property descriptors of an object. Any object in JavaScript has a set of properties, and each of these properties has a descriptor. A descriptor is a set of attributes of a property, composed of a subset of the following:
value: the value of the property
writable: true if the property can be changed
get: a getter function for the property, called when the property is read
set: a setter function for the property, called when the property is set to a value
configurable: if false, the property cannot be removed nor can any attribute be changed, except its value
enumerable: true if the property is enumerable
Object.getOwnPropertyDescriptors(obj) accepts an object, and returns an object with the set of
descriptors.
In what way is this useful? ES6 gave us Object.assign() , which copies all enumerable own properties from one or more objects, and returns a new object. However there is a problem with that: it does not correctly copy properties with non-default attributes. If an object, for example, has just a setter, it's not correctly copied to a new object using Object.assign() .
For example with const person1 = { set name(newName) { console.log(newName) } }
This won't work: const person2 = {}
Object.assign(person2, person1)
But this will work: const person3 = {} Object.defineProperties(person3, Object.getOwnPropertyDescriptors(person1))
As you can see with a simple console test:

person1.name = 'x'
"x"
person2.name = 'x'
person3.name = 'x'
"x"
person2 misses the setter, it was not copied over.
The same limitation goes for shallow cloning objects with Object.create().
Trailing commas This feature allows to have trailing commas in function declarations, and in functions calls: const doSomething = (var1, var2,) => { //... } doSomething('test2', 'test2',)
This change will encourage developers to stop the ugly "comma at the start of the line" habit.
Async functions Check the dedicated post about async/await ES2017 introduced the concept of async functions, and it's the most important change introduced in this ECMAScript edition. Async functions are a combination of promises and generators to reduce the boilerplate around promises, and the "don't break the chain" limitation of chaining promises.
Why they are useful It's a higher level abstraction over promises. When Promises were introduced in ES6, they were meant to solve a problem with asynchronous code, and they did, but over the 2 years that separated ES6 and ES2017, it was clear that promises could not be the final solution. Promises were introduced to solve the famous callback hell problem, but they introduced complexity on their own, and syntax complexity. They were good primitives around which a better syntax could be exposed to the developers: enter async functions.
A quick example Code making use of asynchronous functions can be written as function doSomethingAsync() { return new Promise((resolve) => { setTimeout(() => resolve('I did something'), 3000) }) } async function doSomething() { console.log(await doSomethingAsync()) } console.log('Before') doSomething() console.log('After')
The above code will print the following to the browser console: Before After I did something //after 3s
Multiple async functions in series Async functions can be chained very easily, and the syntax is much more readable than with plain promises: function promiseToDoSomething() { return new Promise((resolve)=>{ setTimeout(() => resolve('I did something'), 10000) }) } async function watchOverSomeoneDoingSomething() {
const something = await promiseToDoSomething() return something + ' and I watched' } async function watchOverSomeoneWatchingSomeoneDoingSomething() { const something = await watchOverSomeoneDoingSomething() return something + ' and I watched as well' } watchOverSomeoneWatchingSomeoneDoingSomething().then((res) => { console.log(res) })
Shared Memory and Atomics WebWorkers are used to create multithreaded programs in the browser. They offer a messaging protocol via events. Since ES2017, you can create a shared memory array between web workers and their creator, using a SharedArrayBuffer . Since it's unknown how much time writing to a shared memory portion takes to propagate, Atomics are a way to enforce that when reading a value, any kind of writing operation is completed. Any more detail on this can be found in the spec proposal, which has since been implemented.
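As a minimal, single-threaded sketch of the API surface (a real use case would post the buffer to a worker, and availability in browsers may require cross-origin isolation):

const sab = new SharedArrayBuffer(4) // 4 bytes of shared memory
const shared = new Int32Array(sab)   // view it as one 32-bit integer

Atomics.store(shared, 0, 123)        // atomic write
Atomics.add(shared, 0, 1)            // atomic read-modify-write
Atomics.load(shared, 0)              // 124 - atomic read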
ES2018 ECMAScript is the standard upon which JavaScript is based, and it's often abbreviated to ES. Discover everything about ECMAScript, and the last features added in ES2018, aka ES9 Rest/Spread Properties Asynchronous iteration Promise.prototype.finally() Regular Expression improvements RegExp lookbehind assertions: match a string depending on what precedes it Unicode property escapes \p{…} and \P{…} Named capturing groups The s flag for regular expressions ES2018 is the latest version of the ECMAScript standard. What are the new things introduced in it?
Rest/Spread Properties ES6 introduced the concept of a rest element when working with array destructuring:

const numbers = [1, 2, 3, 4, 5]
const [first, second, ...others] = numbers

and spread elements:

const numbers = [1, 2, 3, 4, 5]
const sum = (a, b, c, d, e) => a + b + c + d + e
const total = sum(...numbers)
ES2018 introduces the same but for objects. Rest properties: const { first, second, ...others } = { first: 1, second: 2, third: 3, fourth: 4, fifth: 5 } first // 1 second // 2 others // { third: 3, fourth: 4, fifth: 5 }
Spread properties allow to create a new object by combining the properties of the object passed after the spread operator: const items = { first, second, ...others } items //{ first: 1, second: 2, third: 3, fourth: 4, fifth: 5 }
Asynchronous iteration The new construct for-await-of allows you to use an async iterable object as the loop iteration: for await (const line of readLines(filePath)) { console.log(line) }
Since this uses await , you can use it only inside async functions, like a normal await (see async/await)
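A minimal sketch with an async generator standing in for readLines (the names here are made up):

async function* generateLines() {
  yield 'first line'
  yield 'second line'
}

async function main() {
  for await (const line of generateLines()) {
    console.log(line)
  }
}
main()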
Promise.prototype.finally() When a promise is fulfilled, it calls the then() callbacks, one after another. If something fails along the way, the remaining then() callbacks are skipped and the catch() callback is executed. finally() allows you to run some code regardless of whether the promise was fulfilled or rejected:

fetch('file.json')
  .then(data => data.json())
  .catch(error => console.error(error))
  .finally(() => console.log('finished'))
Regular Expression improvements RegExp lookbehind assertions: match a string depending on what precedes it This is a lookahead: you use ?= to match a string that's followed by a specific substring: /Roger(?=Waters)/
/Roger(?= Waters)/.test('Roger is my dog') //false /Roger(?= Waters)/.test('Roger is my dog and Roger Waters is a famous musician') //true
?! performs the inverse operation, matching if a string is not followed by a specific substring:
/Roger(?!Waters)/ /Roger(?! Waters)/.test('Roger is my dog') //true /Roger(?! Waters)/.test('Roger Waters is a famous musician') //false
Lookaheads use the ?= symbol and were already available. Lookbehinds, a new feature, use the ?<= symbol.
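A sketch of the lookbehind syntax, mirroring the lookahead examples above:

/(?<=Roger) Waters/.test('Pink Waters is not a famous musician')  //false
/(?<=Roger) Waters/.test('Roger Waters is a famous musician')     //true

A negative lookbehind uses ?<! and matches if a string is not preceded by a specific substring:

/(?<!Roger) Waters/.test('Pink Waters is not a famous musician')  //true
/(?<!Roger) Waters/.test('Roger Waters is a famous musician')     //false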
Such piece of code: (1 + 2).toString()
prints "3" . const a = 1 const b = 2 const c = a + b (a + b).toString()
instead raises a TypeError: b is not a function exception, because JavaScript tries to interpret it as const a = 1 const b = 2 const c = a + b(a + b).toString()
Another example based on rule 4:

(() => {
  return
  {
    color: 'white'
  }
})()
You'd expect the return value of this immediately-invoked function to be an object that contains the color property, but it's not. Instead, it's undefined , because JavaScript inserts a semicolon after return . Instead you should put the opening bracket right after return :

(() => {
  return {
    color: 'white'
  }
})()
You'd think this code shows '0' in an alert:

1 + 1
-1 + 1 === 0 ? alert(0) : alert(2)
but it shows 2 instead, because JavaScript per rule 1 interprets it as: 1 + 1 -1 + 1 === 0 ? alert(0) : alert(2)
Conclusion Be careful. Some people are very opinionated about semicolons. Honestly, I don't care much: the language gives us the option not to use them, so we can avoid semicolons. I'm not suggesting anything, other than that you pick your own convention. We just need to pay a bit of attention, even if most of the time those basic scenarios never show up in your code. Pick some rules:
be careful with return statements. If you return something, add it on the same line as the return (same for break , throw , continue )
never start a line with parentheses, as it might be concatenated with the previous line to form a function call, or an array element reference
And ultimately, always test your code to make sure it does what you want.
Quotes An overview of the quotes allowed in JavaScript and their unique features JavaScript allows you to use 3 types of quotes: single quotes double quotes backticks The first 2 are essentially the same: const test = 'test' const bike = "bike"
There's little to no difference in using one or the other. The only difference lies in having to escape the quote character you use to delimit the string: const test = 'test' const test = 'te\'st' const test = 'te"st' const test = "te\"st" const test = "te'st"
There are various style guides that recommend always using one style vs the other. I personally prefer single quotes all the time, and use double quotes only in HTML. Backticks are a recent addition to JavaScript, since they were introduced with ES6 in 2015. They have a unique feature: they allow multiline strings. Multiline strings are also possible using regular strings, using escape characters: const multilineString = 'A string\non multiple lines'
Using backticks, you can avoid using an escape character: const multilineString = `A string on multiple lines`
Not just that. You can interpolate variables using the ${} syntax: const multilineString = `A string
on ${1+1} lines`
I cover backticks-powered strings (called template literals) in a separate article, that dives more into the nitty-gritty details.
Template Literals Introduced in ES2015, aka ES6, Template Literals offer a new way to declare strings, but also some new interesting constructs which are already widely popular. Introduction to Template Literals Multiline strings Interpolation Template tags
Introduction to Template Literals Template Literals are a new ES2015 / ES6 feature that allow you to work with strings in a novel way compared to ES5 and below. The syntax at a first glance is very simple, just use backticks instead of single or double quotes: const a_string = `something`
They are unique because they provide a lot of features that normal strings built with quotes do not. In particular:
they offer a great syntax to define multiline strings
they provide an easy way to interpolate variables and expressions in strings
they allow you to create DSLs with template tags
Let's dive into each of these in detail.
Multiline strings Pre-ES6, to create a string spanning two lines you had to use the \ character at the end of a line:

const string = 'first part \
second part'
This allows to create a string on 2 lines, but it's rendered on just one line: first part second part
To render the string on multiple lines as well, you explicitly need to add \n at the end of each line, like this:

const string = 'first line\n \
second line'
or const string = 'first line\n' + 'second line'
Template literals make multiline strings much simpler. Once a template literal is opened with the backtick, you just press enter to create a new line, with no special characters, and it's rendered as-is:

const string = `Hey
this string
is awesome!`
Keep in mind that space is meaningful, so doing this:

const string = `First
                Second`

is going to create a string like this:

First
                Second

An easy way to fix this problem is by having an empty first line, and appending the trim() method right after the closing backtick, which will eliminate any space before the first character:

const string = `
First
Second`.trim()
Interpolation
Template literals provide an easy way to interpolate variables and expressions into strings. You do so by using the ${...} syntax:

const aVariable = 'test'
const string = `something ${aVariable}` //something test
inside the ${} you can add anything, even expressions: const string = `something ${1 + 2 + 3}` const string2 = `something ${foo() ? 'x' : 'y' }`
Template tags Tagged templates are a feature that might sound less useful at first, but they're actually used by lots of popular libraries, like Styled Components or Apollo, the GraphQL client/server lib, so it's essential to understand how they work. In Styled Components template tags are used to define CSS strings:

const Button = styled.button`
  font-size: 1.5em;
  background-color: black;
  color: white;
`;
In Apollo template tags are used to define a GraphQL query schema: const query = gql` query { ... } `
The styled.button and gql template tags highlighted in those examples are just functions: function gql(literals, ...expressions) { }
this function returns a string, which can be the result of any kind of computation. literals is an array containing the template literal content tokenized by the expressions
interpolations.
expressions contains all the interpolations.
If we take an example above: const string = `something ${1 + 2 + 3}`
literals is an array with two items. The first is something , the string up to the first interpolation, and the second is an empty string, the space between the end of the first interpolation (we only have one) and the end of the string. expressions in this case is an array with a single item, 6 .
A more complex example is: const string = `something another ${'x'} new line ${1 + 2 + 3} test`
in this case literals is an array where the first item is: `something another `
the second is: ` new line `
and the third is: ` test`
expressions in this case is an array with two items, x and 6 .
The function that is passed those values can do anything with them, and this is the power of this kind of feature. The simplest example is replicating what string interpolation does, by simply joining literals and expressions :
const interpolated = interpolate`I paid ${10}€`
and this is how interpolate works:

function interpolate(literals, ...expressions) {
  let string = ``
  for (const [i, val] of expressions.entries()) {
    string += literals[i] + val
  }
  string += literals[literals.length - 1]
  return string
}
Functions Learn all about functions, from the general overview to the tiny details that will improve how you use them
Introduction Syntax Parameters Return values Nested functions Object Methods this in Arrow Functions
IIFE, Immediately Invoked Function Expressions Function Hoisting
Introduction Everything in JavaScript happens in functions. A function is a self-contained block of code that can be defined once and run as many times as you want. A function can optionally accept parameters, and returns a value.
Functions in JavaScript are objects, a special kind of object: function objects. Their superpower lies in the fact that they can be invoked. In addition, functions are said to be first class functions because they can be assigned to a variable, passed as arguments and used as a return value.
Syntax Let's start with the "old", pre-ES6/ES2015 syntax. Here's a function declaration: function dosomething(foo) { // do something }
(now, in post ES6/ES2015 world, referred as a regular function) Functions can be assigned to variables (this is called a function expression): const dosomething = function(foo) { // do something }
Named function expressions are similar, but play nicer with the stack call trace, which is useful when an error occurs - it holds the name of the function: const dosomething = function dosomething(foo) { // do something }
ES6/ES2015 introduced arrow functions, which are especially nice to use when working with inline functions, as parameters or callbacks: const dosomething = foo => { //do something }
Arrow functions have an important difference from the other function definitions above, we'll see which one later as it's an advanced topic.
Parameters A function can have one or more parameters.
Starting with ES6/ES2015, functions can have default values for the parameters: const dosomething = (foo = 1, bar = 'hey') => { //do something }
This allows you to call a function without filling all the parameters: dosomething(3) dosomething()
ES2017 introduced trailing commas for parameters, a feature that helps reduce bugs due to missing commas when moving parameters around (e.g. moving the last one into the middle):

const dosomething = (foo = 1, bar = 'hey',) => {
  //do something
}
dosomething(2, 'ho!',)
You can wrap all your arguments in an array, and use the spread operator when calling the function: const dosomething = (foo = 1, bar = 'hey') => { //do something } const args = [2, 'ho!'] dosomething(...args)
With many parameters, remembering the order can be difficult. Using objects, destructuring allows you to keep the parameter names:

const dosomething = ({ foo = 1, bar = 'hey' }) => {
  //do something
}
Return values Every function returns a value, which by default is undefined .
Any function is terminated when its lines of code end, or when the execution flow finds a return keyword.
When JavaScript encounters this keyword it exits the function execution and gives control back to its caller. If you pass a value, that value is returned as the result of the function: const dosomething = () => { return 'test' } const result = dosomething() // result === 'test'
You can only return one value. To simulate returning multiple values, you can return an object literal, or an array, and use a destructuring assignment when calling the function. Using arrays:
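A minimal sketch of what that could look like (the values are made up for illustration):

const dosomething = () => {
  return ['Flavio', 37]
}

const [name, age] = dosomething()
name // 'Flavio'
age  // 37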
Using objects:
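Again as a small sketch with made-up values:

const dosomething = () => {
  return { name: 'Flavio', age: 37 }
}

const { name, age } = dosomething()
name // 'Flavio'
age  // 37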
Nested functions Functions can be defined inside other functions: const dosomething = () => { const dosomethingelse = () => {} dosomethingelse() return 'test' }
The nested function is scoped to the outside function, and cannot be called from the outside.
Object Methods When used as object properties, functions are called methods: const car = { brand: 'Ford', model: 'Fiesta', start: function() { console.log(`Started`) } } car.start()
this in Arrow Functions There's an important behavior of Arrow Functions vs regular Functions when used as object methods. Consider this example: const car = { brand: 'Ford', model: 'Fiesta', start: function() { console.log(`Started ${this.brand} ${this.model}`) }, stop: () => { console.log(`Stopped ${this.brand} ${this.model}`) } }
The stop() method does not work as you would expect.
This is because the handling of this is different in the two function declaration styles. In the arrow function, this refers to the enclosing execution context, which in this case is the window object, so this.brand and this.model are undefined. In the regular function() , this refers to the host object, the car, so start() works as expected.
This implies that arrow functions are not suitable to be used for object methods and constructors (arrow function constructors will actually raise a TypeError when called).
IIFE, Immediately Invoked Function Expressions An IIFE is a function that's executed immediately after its declaration:

;(function dosomething() {
  console.log('executed')
})()
You can assign the result to a variable: const something = (function dosomething() { return 'something' })()
They are very handy, as you don't need to separately call the function after its definition.
Function Hoisting Before executing your code, JavaScript reorders it according to some rules. Function declarations in particular are moved to the top of their scope. This is why it's legal to write:

dosomething()
function dosomething() {
  console.log('did something')
}
Internally, JavaScript moves the function before its call, along with all the other functions found in the same scope:
function dosomething() { console.log('did something') } dosomething()
Now, if you use function expressions, since you're assigning to a variable, something different happens. The variable declaration is hoisted, but not the value, so not the function.

dosomething()
const dosomething = function dosomething() {
  console.log('did something')
}
Not going to work:
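In a modern engine you would typically see something like this (the exact message varies by engine):

ReferenceError: Cannot access 'dosomething' before initialization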
This is because what happens internally is: const dosomething dosomething() dosomething = function dosomething() { console.log('did something') }
The same happens for let declarations. var declarations do not work either, but with a different error:
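In that case the error is typically something like (again, exact wording varies by engine):

TypeError: dosomething is not a function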
This is because var declarations are hoisted and initialized with undefined as a value, while const and let are hoisted but not initialized.
Arrow Functions Arrow Functions are one of the most impactful changes in ES6/ES2015, and they are widely used nowadays. They slightly differ from regular functions. Find out how. Arrow functions were introduced in ES6 / ECMAScript 2015, and since their introduction they changed forever how JavaScript code looks (and works). In my opinion this change was so welcome that you now rarely see the function keyword in modern codebases. Visually, it's a simple and welcome change, which allows you to write functions with a shorter syntax, from:

const myFunction = function foo() {
  //...
}

to

const myFunction = () => {
  //...
}
If the function body contains just a single statement, you can omit the curly brackets and write everything on a single line:

const myFunction = () => doSomething()
Parameters are passed in the parentheses: const myFunction = (param1, param2) => doSomething(param1, param2)
If you have one (and just one) parameter, you could omit the parentheses completely: const myFunction = param => doSomething(param)
Thanks to this short syntax, arrow functions encourage the use of small functions.
Implicit return
Arrow functions allow you to have an implicit return: values are returned without having to use the return keyword. It works when there is a one-line statement in the function body:

const myFunction = () => 'test'
myFunction() //'test'
Another example, returning an object (remember to wrap the curly brackets in parentheses to avoid it being considered the wrapping function body brackets): const myFunction = () => ({value: 'test'}) myFunction() //{value: 'test'}
How this works in arrow functions this is a concept that can be complicated to grasp, as it varies a lot depending on the
context and also varies depending on the mode of JavaScript (strict mode or not). It's important to clarify this concept because arrow functions behave very differently compared to regular functions. When defined as a method of an object, in a regular function this refers to the object, so you can do: const car = { model: 'Fiesta', manufacturer: 'Ford', fullName: function() { return `${this.manufacturer} ${this.model}` } }
calling car.fullName() will return "Ford Fiesta" . The this scope with arrow functions is inherited from the execution context. An arrow function does not bind this at all, so its value will be looked up in the call stack, so in this code car.fullName() will not work, and will return the string "undefined undefined" : const car = { model: 'Fiesta', manufacturer: 'Ford', fullName: () => {
return `${this.manufacturer} ${this.model}` } }
Due to this, arrow functions are not suited as object methods. Arrow functions cannot be used as constructors either: instantiating an object with one raises a TypeError . This is where regular functions should be used instead, whenever a dynamic context is needed. This is also a problem when handling events. DOM Event listeners set this to be the target element, and if you rely on this in an event handler, a regular function is necessary:
const link = document.querySelector('#link') link.addEventListener('click', function() { // this === link })
Closures A gentle introduction to the topic of closures, key to understanding how JavaScript functions work
If you've ever written a function in JavaScript, you already made use of closures. It's a key topic to understand, which has implications on the things you can do. When a function is run, it's executed with the scope that was in place when it was defined, and not with the state that's in place when it is executed. The scope basically is the set of variables which are visible. A function remembers its Lexical Scope, and it's able to access variables that were defined in the parent scope. In short, a function has an entire baggage of variables it can access. Let me immediately give an example to clarify this. const bark = dog => { const say = `${dog} barked!` ;(() => console.log(say))() } bark(`Roger`)
This logs to the console Roger barked! , as expected. What if you want to return the action instead: const prepareBark = dog => { const say = `${dog} barked!` return () => console.log(say) } const bark = prepareBark(`Roger`) bark()
This snippet also logs to the console Roger barked! . Let's make one last example, which reuses prepareBark for two different dogs: const prepareBark = dog => { const say = `${dog} barked!` return () => { console.log(say) } } const rogerBark = prepareBark(`Roger`) const sydBark = prepareBark(`Syd`) rogerBark() sydBark()
This prints Roger barked! Syd barked!
As you can see, the state of the variable say is linked to the function that's returned from prepareBark() .
Also notice that we redefine a new say variable the second time we call prepareBark() , but that does not affect the state of the first prepareBark() scope. This is how a closure works: the function that's returned keeps the original state in its scope.
Arrays JavaScript arrays over time got more and more features, sometimes it's tricky to know when to use some construct vs another. This post aims to explain what you should use, as of 2018
Initialize array Get length of the array Iterating the array Every Some Iterate the array and return a new one with the returned result of a function Filter an array Reduce forEach for..of for @@iterator Adding to an array Add at the end
Add at the beginning Removing an item from an array From the end From the beginning At a random position Remove and insert in place Join multiple arrays Lookup the array for a specific element ES5 ES6 ES7 Get a portion of an array Sort the array Get a string representation of an array Copy an existing array by value Copy just some values from an existing array Copy portions of an array into the array itself, in other positions JavaScript arrays over time got more and more features, sometimes it's tricky to know when to use some construct vs another. This post aims to explain what you should use in 2018.
Initialize array const a = [] const a = [1, 2, 3] const a = Array.of(1, 2, 3) const a = Array(6).fill(1) //init an array of 6 items of value 1
Don't use the old syntax (just use it for typed arrays) const a = new Array() //never use const a = new Array(1, 2, 3) //never use
Get length of the array const l = a.length
Iterating the array
Every a.every(f)
Iterates a until f() returns false
Some a.some(f)
Iterates a until f() returns true
Iterate the array and return a new one with the returned result of a function const b = a.map(f)
Iterates a and builds a new array with the result of executing f() on each a element
Filter an array const b = a.filter(f)
Iterates a and builds a new array with elements of a that returned true when executing f() on each a element
Reduce
reduce() executes a callback function on all the items of the array and allows you to progressively compute a result. If initialValue is specified, accumulator in the first iteration will equal that value. Example:

;[1, 2, 3, 4].reduce((accumulator, currentValue, currentIndex, array) => {
  return accumulator * currentValue
}, 1) // 24
forEach
a.forEach(f)
Iterates f on a without a way to stop. Example:

a.forEach(v => {
  console.log(v)
})
for..of ES6 for (let v of a) { console.log(v) }
for for (let i = 0; i < a.length; i += 1) { //a[i] }
Iterates a , can be stopped using return or break and an iteration can be skipped using continue
@@iterator ES6
Getting the iterator from an array returns an iterator of values const a = [1, 2, 3] let it = a[Symbol.iterator]() console.log(it.next().value) //1 console.log(it.next().value) //2 console.log(it.next().value) //3
.entries() returns an iterator of key/value pairs
let it = a.entries() console.log(it.next().value) //[0, 1] console.log(it.next().value) //[1, 2] console.log(it.next().value) //[2, 3]
.keys() allows to iterate on the keys:
let it = a.keys() console.log(it.next().value) //0 console.log(it.next().value) //1 console.log(it.next().value) //2
.next() returns undefined when the array ends. You can also detect if the iteration ended by
looking at it.next() which returns a value, done pair. done is always false until the last element, which returns true .
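A small sketch of how the iterator signals the end (the array content is just illustrative):
const a = [1, 2]
const it = a[Symbol.iterator]()
it.next() //{ value: 1, done: false }
it.next() //{ value: 2, done: false }
it.next() //{ value: undefined, done: true }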
Adding to an array Add at the end a.push(4)
Add at the beginning a.unshift(0) a.unshift(-2, -1)
Removing an item from an array
From the end a.pop()
From the beginning a.shift()
At a random position a.splice(0, 2) // removes the first 2 items a.splice(3, 2) // removes the 2 items starting from index 3
Do not use delete to remove an item from an array: it leaves behind undefined holes instead of reindexing the remaining elements.
Remove and insert in place a.splice(2, 3, 2, 'a', 'b') //removes 3 items starting from //index 2, and adds 2 items, // still starting from index 2
Join multiple arrays const a = [1, 2] const b = [3, 4] a.concat(b) // 1, 2, 3, 4
Lookup the array for a specific element
ES5
a.indexOf(value)
Returns the index of the first matching item found, or -1 if not found
a.lastIndexOf(value)
Returns the index of the last matching item found, or -1 if not found
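For example (the array and the value looked up are just illustrative):
const a = ['a', 'b', 'c', 'b']
a.indexOf('b') //1
a.lastIndexOf('b') //3
a.indexOf('z') //-1, not found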
ES6
a.find((element, index, array) => { /* return true or false */ })
Returns the first item that returns true. Returns undefined if not found. A commonly used syntax is: a.find(x => x.id === my_id)
The above line will return the first element in the array that has id === my_id . findIndex returns the index of the first item that returns true, and if not found, it returns -1 :
a.findIndex((element, index, array) => { //return true or false })
ES7 a.includes(value)
Returns true if a contains value . a.includes(value, i)
Returns true if a contains value after the position i .
Get a portion of an array a.slice()
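Called with no arguments, slice() copies the whole array; pass a start index and an optional end index (not included) to get just a portion:
const a = ['a', 'b', 'c', 'd', 'e']
a.slice(1, 3) //['b', 'c']
a.slice(2) //['c', 'd', 'e']
a.slice() //['a', 'b', 'c', 'd', 'e'] (a shallow copy)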
Sort the array
Sort alphabetically (by ASCII value - 0-9A-Za-z )
const a = [1, 2, 3, 10, 11]
a.sort() //1, 10, 11, 2, 3
const b = [1, 'a', 'Z', 3, 2, 11]
b.sort() //1, 11, 2, 3, Z, a
Sort by a custom function const a = [1, 10, 3, 2, 11] a.sort((a, b) => a - b) //1, 2, 3, 10, 11
Reverse the order of an array a.reverse()
Get a string representation of an array a.toString()
Returns a string representation of an array a.join()
Returns a string concatenation of the array elements. Pass a parameter to add a custom separator: a.join(', ')
Copy an existing array by value const b = Array.from(a) const b = Array.of(...a)
Copy just some values from an existing array const b = a.filter(x => x % 2 === 0) //keeps only the even values
Copy portions of an array into the array itself, in other positions const a = [1, 2, 3, 4] a.copyWithin(0, 2) // [3, 4, 3, 4] const b = [1, 2, 3, 4, 5] b.copyWithin(0, 2) // [3, 4, 5, 4, 5] //0 is where to start copying into, // 2 is where to start copying from const c = [1, 2, 3, 4, 5] c.copyWithin(0, 2, 4) // [3, 4, 3, 4, 5] //4 is an end index
Loops
JavaScript provides many ways to iterate through loops. This tutorial explains all the various loop possibilities in modern JavaScript.
Introduction
for
forEach
do...while
while
for...in
for...of
for...in vs for...of
Introduction JavaScript provides many ways to iterate through loops. This tutorial explains each one with a small example and the main properties.
for
const list = ['a', 'b', 'c'] for (let i = 0; i < list.length; i++) { console.log(list[i]) //value console.log(i) //index }
You can interrupt a for loop using break You can fast forward to the next iteration of a for loop using continue
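For example (the list and the conditions are just illustrative):
const list = ['a', 'b', 'c', 'd']
for (let i = 0; i < list.length; i++) {
  if (list[i] === 'b') continue //skip this iteration
  if (list[i] === 'd') break //stop the loop entirely
  console.log(list[i]) //'a', 'c'
}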
forEach Introduced in ES5. Given an array, you can iterate over its properties using list.forEach() : const list = ['a', 'b', 'c'] list.forEach((item, index) => { console.log(item) //value console.log(index) //index }) //index is optional list.forEach(item => console.log(item))
Unfortunately you cannot break out of this loop.
do...while const list = ['a', 'b', 'c'] let i = 0 do { console.log(list[i]) //value console.log(i) //index i = i + 1 } while (i < list.length)
You can interrupt a while loop using break : do { if (something) break } while (true)
and you can jump to the next iteration using continue : do { if (something) continue
//do something else } while (true)
while const list = ['a', 'b', 'c'] let i = 0 while (i < list.length) { console.log(list[i]) //value console.log(i) //index i = i + 1 }
You can interrupt a while loop using break : while (true) { if (something) break }
and you can jump to the next iteration using continue : while (true) { if (something) continue //do something else }
The difference with do...while is that do...while always executes its cycle at least once.
for...in Iterates all the enumerable properties of an object, giving the property names. for (let property in object) { console.log(property) //property name console.log(object[property]) //property value }
for...of ES6 introduced the for...of loop, which combines the conciseness of forEach with the ability to break:
//iterate over the value for (const value of ['a', 'b', 'c']) { console.log(value) //value } //get the index as well, using `entries()` for (const [index, value] of ['a', 'b', 'c'].entries()) { console.log(index) //index console.log(value) //value }
Notice the use of const . This loop creates a new scope in every iteration, so we can safely use that instead of let .
for...in vs for...of The difference with for...in is: for...of iterates over the property values for...in iterates the property names
Events JavaScript in the browser uses an event-driven programming model. Everything starts by following an event. This is an introduction to JavaScript events and how event handling works
Introduction
Event handlers
Inline event handlers
DOM on-event handlers
Using addEventListener()
Listening on different elements
The Event object
Event bubbling and event capturing
Stopping the propagation
Popular events
Load
Mouse events
Keyboard events
Scroll
Throttling
Introduction
JavaScript in the browser uses an event-driven programming model. Everything starts by following an event. The event could be the DOM being loaded, an asynchronous request that finishes fetching, a user clicking an element or scrolling the page, or the user typing on the keyboard. There are a lot of different kinds of events.
Event handlers
You can respond to any event using an Event Handler, which is just a function that's called when an event occurs. You can register multiple handlers for the same event, and they will all be called when that event happens. JavaScript offers three ways to register an event handler:
Inline event handlers
This style of event handlers is very rarely used today, due to its constraints, but it was the only way in the early days of JavaScript: an event handler attribute written directly on the HTML element.
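Something along these lines (the href and the alert are just placeholders):
<a href="#" onclick="alert('link clicked')">A link</a>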
DOM on-event handlers This is common when an object has at most one event handler, as there is no way to add multiple handlers in this case: window.onload = () => { //window loaded }
It's most commonly used when handling XHR requests: const xhr = new XMLHttpRequest() xhr.onreadystatechange = () => { //.. do something }
You can check if a handler is already assigned to a property using if ('onsomething' in window) {} .
Using addEventListener() This is the modern way. This method allows to register as many handlers as we need, and it's the most popular you will find: window.addEventListener('load', () => { //window loaded })
Note that IE8 and below did not support addEventListener() , and instead used their own attachEvent() API. Keep it in mind if you need to support older browsers.
Listening on different elements You can listen on window to intercept "global" events, like the usage of the keyboard, and you can listen on specific elements to check events happening on them, like a mouse click on a button. This is why addEventListener is sometimes called on window , sometimes on a DOM element.
The Event object An event handler gets an Event object as the first parameter: const link = document.getElementById('my-link') link.addEventListener('click', event => { // link clicked })
This object contains a lot of useful properties and methods, like:
target , the DOM element that originated the event
type , the type of event
stopPropagation() , called to stop propagating the event in the DOM
(see the full list).
Other properties are provided by specific kinds of events, as Event is an interface for different specific events: MouseEvent KeyboardEvent DragEvent FetchEvent ... and others Each of those has an MDN page linked, so you can inspect all their properties. For example when a KeyboardEvent happens, you can check which key was pressed, in a readable format ( Escape , Enter and so on) by checking the key property: window.addEventListener('keydown', event => { // key pressed console.log(event.key) })
On a mouse event we can check which mouse button was pressed: const link = document.getElementById('my-link') link.addEventListener('mousedown', event => { // mouse button pressed console.log(event.button) //0=left, 2=right })
Event bubbling and event capturing
Bubbling and capturing are the 2 models that events use to propagate. Suppose your DOM structure is a #container element wrapping a button that says "Click me".
You want to track when users click on the button, and you have 2 event listeners, one on button , and one on #container . Remember, a click on a child element will always propagate
to its parents, unless you stop the propagation (see later). Those event listeners will be called in order, and this order is determined by the event bubbling/capturing model used.
Bubbling means that the event propagates from the item that was clicked (the child) up through all its parent tree, starting from the nearest one. In our example, the handler on button will fire before the #container handler. Capturing is the opposite: the outer event handlers are fired before the more specific handler, the one on button . By default all events bubble. You can choose to adopt event capturing by passing a third argument to addEventListener , setting it to true :
document.getElementById('container').addEventListener(
  'click',
  () => {
    //container clicked (capturing phase)
  },
  true
)
Note that first all capturing event handlers are run. Then all the bubbling event handlers. The order follows this principle: the DOM goes through all elements starting from the Window object, and goes to find the item that was clicked. While doing so, it calls any event handler associated to the event (capturing phase). Once it reaches the target, it then repeats the journey up to the parents tree until the Window object, calling again the event handlers (bubbling phase).
Stopping the propagation An event on a DOM element will be propagated to all its parent elements tree, unless it's stopped.
A click event on an a element will propagate to its enclosing section and then to body . You can stop the propagation by calling the stopPropagation() method of an Event, usually at the end of the event handler:
const link = document.getElementById('my-link') link.addEventListener('mousedown', event => { // process the event // ... event.stopPropagation() })
Popular events Here's a list of the most common events you will likely handle.
Load load is fired on window and the body element when the page has finished loading.
Mouse events click fires when a mouse button is clicked. dblclick when the mouse is clicked two times.
Of course in this case click is fired just before this event. mousedown , mousemove and mouseup can be used in combination to track drag-and-drop events. Be careful with mousemove , as it fires many times during the mouse movement (see throttling later)
Keyboard events keydown fires when a keyboard button is pressed (and any time the key repeats while the
button stays pressed). keyup is fired when the key is released.
Scroll The scroll event is fired on window every time you scroll the page. Inside the event handler you can check the current scrolling position by checking window.scrollY . Keep in mind that this event is not a one-time thing. It fires a lot of times during scrolling, not just at the end or beginning of the scrolling, so don't do any heavy computation or manipulation in the handler - use throttling instead.
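A minimal sketch of such a handler (the console.log is just a placeholder for your own logic):
window.addEventListener('scroll', () => {
  console.log(window.scrollY) //current vertical scroll position, in pixels
})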
Throttling
As we mentioned above, mousemove and scroll are two events that are not fired one-time per event, but rather they continuously call their event handler function during all the duration of the action. This is because they provide coordinates so you can track what's happening. If you perform a complex operation in the event handler, you will affect the performance and cause a sluggish experience to your site users. Libraries that provide throttling like Lodash implement it in 100+ lines of code, to handle every possible use case. A simple and easy to understand implementation is this, which uses setTimeout to cache the scroll event every 100ms: let cached = null window.addEventListener('scroll', event => { if (!cached) { setTimeout(() => { //you can access the original event at `cached` cached = null }, 100) } cached = event })
The Event Loop
The Event Loop is one of the most important aspects to understand about JavaScript. This post explains it in simple terms.
Introduction
Blocking the event loop
The call stack
A simple event loop explanation
Queuing function execution
The Message Queue
ES6 Job Queue
Introduction The Event Loop is one of the most important aspects to understand about JavaScript. I've programmed for years with JavaScript, yet I've never fully understood how things work under the hood. It's completely fine to not know this concept in detail, but as usual, it's helpful to know how it works, and also you might just be a little curious at this point. This post aims to explain the inner details of how JavaScript works with a single thread, and how it handles asynchronous functions. Your JavaScript code runs single threaded. There is just one thing happening at a time. This is a limitation that's actually very helpful, as it simplifies a lot how you program without worrying about concurrency issues. You just need to pay attention to how you write your code and avoid anything that could block the thread, like synchronous network calls or infinite loops. In general, in most browsers there is an event loop for every browser tab, to make every process isolated and avoid a web page with infinite loops or heavy processing from blocking your entire browser. The environment manages multiple concurrent event loops, to handle API calls for example. Web Workers run in their own event loop as well. You mainly need to be concerned that your code will run on a single event loop, and write code with this in mind to avoid blocking it.
Blocking the event loop Any JavaScript code that takes too long to return back control to the event loop will block the execution of any JavaScript code in the page, even block the UI thread, and the user cannot click around, scroll the page, and so on. Almost all the I/O primitives in JavaScript are non-blocking. Network requests, Node.js filesystem operations, and so on. Being blocking is the exception, and this is why JavaScript is based so much on callbacks, and more recently on promises and async/await.
The call stack
The call stack is a LIFO (Last In, First Out) stack. The event loop continuously checks the call stack to see if there's any function that needs to run. While doing so, it adds any function call it finds to the call stack and executes each one in order. You know the error stack trace you might be familiar with, in the debugger or in the browser console? The browser looks up the function names in the call stack to inform you which function originates the current call:
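Here is a minimal example, consistent with the description that follows, using the same foo() , bar() and baz() names:
const bar = () => console.log('bar')
const baz = () => console.log('baz')
const foo = () => {
  console.log('foo')
  bar()
  baz()
}
foo() //prints foo, bar, baz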
This prints foo , bar and baz to the console, as expected. When this code runs, first foo() is called. Inside foo() we first call bar() , then we call baz() .
At this point the call stack looks like this:
The event loop on every iteration looks if there's something in the call stack, and executes it:
until the call stack is empty.
Queuing function execution The above example looks normal, there's nothing special about it: JavaScript finds things to execute, runs them in order.
Let's see how to defer a function until the stack is clear. The use case of setTimeout(() => {}, 0) is to call a function, but execute it once every other function in the code has executed. Take this example:
const bar = () => console.log('bar')
const baz = () => console.log('baz')
const foo = () => {
  console.log('foo')
  setTimeout(bar, 0)
  baz()
}
foo()
This code prints, maybe surprisingly: foo baz bar
When this code runs, first foo() is called. Inside foo() we first call setTimeout, passing bar as an argument, and we instruct it to run immediately as fast as it can, passing 0 as the timer. Then we call baz(). At this point the call stack looks like this:
Here is the execution order for all the functions in our program:
Why is this happening?
The Message Queue
When setTimeout() is called, the Browser or Node.js starts the timer. Once the timer expires, in this case immediately as we put 0 as the timeout, the callback function is put in the Message Queue. The Message Queue is also where user-initiated events like click or keyboard events, or fetch responses are queued before your code has the opportunity to react to them. Or also DOM events like onLoad . The loop gives priority to the call stack, and it first processes everything it finds in the call stack, and once there's nothing in there, it goes to pick up things in the message queue. We don't have to wait for functions like setTimeout , fetch or other things to do their own work, because they are provided by the browser, and they live on their own threads. For example, if you set the setTimeout timeout to 2 seconds, you don't have to wait 2 seconds - the wait happens elsewhere.
ES6 Job Queue ECMAScript 2015 introduced the concept of the Job Queue, which is used by Promises (also introduced in ES6/ES2015). It's a way to execute the result of an async function as soon as possible, rather than being put at the end of the call stack. Promises that resolve before the current function ends will be executed right after the current function. I find nice the analogy of a rollercoaster ride at an amusement park: the message queue puts you at the back of the queue, behind all the other people, where you will have to wait for your turn, while the job queue is the fastpass ticket that lets you take another ride right after you finished the previous one. Example: const bar = () => console.log('bar') const baz = () => console.log('baz') const foo = () => { console.log('foo') setTimeout(bar, 0) new Promise((resolve, reject) => resolve('should be right after baz, before bar') ).then(resolve => console.log(resolve)) baz() }
foo()
This prints foo baz should be right after baz, before bar bar
That's a big difference between Promises (and Async/await, which is built on promises) and plain old asynchronous functions through setTimeout() or other platform APIs.
Asynchronous programming and callbacks
JavaScript is synchronous by default, and is single threaded. This means that code cannot create new threads and run in parallel. Find out what asynchronous code means and what it looks like.
Asynchronicity in Programming Languages
JavaScript
Callbacks
Handling errors in callbacks
The problem with callbacks
Alternatives to callbacks
Asynchronicity in Programming Languages Computers are asynchronous by design. Asynchronous means that things can happen independently of the main program flow.
On current consumer computers, every program runs for a specific time slot, and then it stops its execution to let another program continue its execution. This cycle runs so fast that it's impossible to notice, and we think our computers run many programs simultaneously, but this is an illusion (except on multiprocessor machines). Programs internally use interrupts, a signal that's emitted to the processor to gain the attention of the system. I won't go into the internals of this, but just keep in mind that it's normal for programs to be asynchronous, and halt their execution until they need attention, letting the computer execute other things in the meantime. When a program is waiting for a response from the network, it should not keep the processor busy until the request finishes. Normally, programming languages are synchronous, and some provide a way to manage asynchronicity, in the language or through libraries. C, Java, C#, PHP, Go, Ruby, Swift and Python are all synchronous by default. Some of them handle async by using threads, spawning a new process.
JavaScript JavaScript is synchronous by default and is single threaded. This means that code cannot create new threads and run in parallel. Lines of code are executed in series, one after another, for example: const a = 1 const b = 2 const c = a * b console.log(c) doSomething()
But JavaScript was born inside the browser, its main job, in the beginning, was to respond to user actions, like onClick , onMouseOver , onChange , onSubmit and so on. How could it do this with a synchronous programming model? The answer was in its environment. The browser provides a way to do it by providing a set of APIs that can handle this kind of functionality. More recently, Node.js introduced a non-blocking I/O environment to extend this concept to file access, network calls and so on.
Callbacks
You can't know when a user is going to click a button, so what you do is, you define an event handler for the click event. This event handler accepts a function, which will be called when the event is triggered: document.getElementById('button').addEventListener('click', () => { //item clicked })
This is the so-called callback. A callback is a simple function that's passed as a value to another function, and will only be executed when the event happens. We can do this because JavaScript has first-class functions, which can be assigned to variables and passed around to other functions (called higher-order functions) It's common to wrap all your client code in a load event listener on the window object, which runs the callback function only when the page is ready: window.addEventListener('load', () => { //window loaded //do what you want })
Callbacks are used everywhere, not just in DOM events. One common example is by using timers: setTimeout(() => { // runs after 2 seconds }, 2000)
XHR requests also accept a callback, in this example by assigning a function to a property that will be called when a particular event occurs (in this case, the state of the request changes): const xhr = new XMLHttpRequest() xhr.onreadystatechange = () => { if (xhr.readyState === 4) { xhr.status === 200 ? console.log(xhr.responseText) : console.error('error') } } xhr.open('GET', 'https://yoursite.com') xhr.send()
Handling errors in callbacks
How do you handle errors with callbacks? One very common strategy is to use what Node.js adopted: the first parameter in any callback function is the error object: error-first callbacks If there is no error, the object is null . If there is an error, it contains some description of the error and other information. fs.readFile('/file.json', (err, data) => { if (err !== null) { //handle error console.log(err) return } //no errors, process data console.log(data) })
The problem with callbacks Callbacks are great for simple cases! However every callback adds a level of nesting, and when you have lots of callbacks, the code starts to be complicated very quickly: window.addEventListener('load', () => { document.getElementById('button').addEventListener('click', () => { setTimeout(() => { items.forEach(item => { //your code here }) }, 2000) }) })
This is just simple code with 4 levels of nesting, but I've seen much deeper nesting, and it's not fun. How do we solve this?
Alternatives to callbacks Starting with ES6, JavaScript introduced several features that help us with asynchronous code that do not involve using callbacks: Promises (ES6) Async/Await (ES8)
Promises
Promises are one way to deal with asynchronous code in JavaScript, without writing too many callbacks in your code.
Introduction to promises
How promises work, in brief
Which JS APIs use promises?
Creating a promise
Consuming a promise
Chaining promises
Example of chaining promises
Handling errors
Cascading errors
Orchestrating promises
Promise.all()
Promise.race()
Common errors
Uncaught TypeError: undefined is not a promise
Introduction to promises A promise is commonly defined as a proxy for a value that will eventually become available. Promises are one way to deal with asynchronous code, without writing too many callbacks in your code. Although they have been around for years, they were standardized and introduced in ES6, and now, since ES2017, they have been superseded by async functions. Async functions use the promises API as their building block, so understanding promises is fundamental even if in newer code you'll likely use async functions instead of raw promises.
How promises work, in brief Once a promise has been called, it will start in pending state. This means that the caller function continues the execution, while it waits for the promise to do its own processing, and give the caller function some feedback.
At this point, the caller function waits for it to either return the promise in a resolved state, or in a rejected state, but as you know JavaScript is asynchronous, so the function continues its execution while the promise does its work.
Which JS APIs use promises?
In addition to your own code and library code, promises are used by standard modern Web APIs such as:
the Battery API
the Fetch API
Service Workers
It's unlikely that in modern JavaScript you'll find yourself not using promises, so let's start diving right into them.
Creating a promise The Promise API exposes a Promise constructor, which you initialize using new Promise() : let done = true const isItDoneYet = new Promise( (resolve, reject) => { if (done) { const workDone = 'Here is the thing I built' resolve(workDone) } else { const why = 'Still working on something else' reject(why) } } )
As you can see, the promise checks the done global variable, and if that's true, the promise is resolved, otherwise it's rejected. Using resolve and reject we can communicate back a value; in the above case we just return a string, but it could be an object as well.
Consuming a promise
In the last section, we introduced how a promise is created. Now let's see how the promise can be consumed or used. const isItDoneYet = new Promise( //... ) const checkIfItsDone = () => { isItDoneYet .then((ok) => { console.log(ok) }) .catch((err) => { console.error(err) }) }
Running checkIfItsDone() will run the isItDoneYet promise and wait for it to resolve, using the then callback, and if there is an error, it will handle it in the catch callback.
Chaining promises
A promise can return another promise, creating a chain of promises. A great example of chaining promises is given by the Fetch API, a layer on top of the XMLHttpRequest API, which we can use to get a resource and queue a chain of promises to execute when the resource is fetched. The Fetch API is a promise-based mechanism, and calling fetch() is equivalent to defining our own promise using new Promise() .
Example of chaining promises
const status = (response) => {
  if (response.status >= 200 && response.status < 300) {
    return Promise.resolve(response)
  }
  return Promise.reject(new Error(response.statusText))
}
const json = (response) => response.json()
fetch('/todos.json')
  .then(status)
  .then(json)
  .then((data) => {
    console.log('Request succeeded with JSON response', data)
  })
  .catch((error) => {
    console.log('Request failed', error)
  })
In this example, we call fetch() to get a list of TODO items from the todos.json file found in the domain root, and we create a chain of promises. Running fetch() returns a response, which has many properties, and within those we reference: status , a numeric value representing the HTTP status code statusText , a status message, which is OK if the request succeeded response also has a json() method, which returns a promise that will resolve with the
content of the body processed and transformed into JSON. So given those premises, this is what happens: the first promise in the chain is a function that we defined, called status() , that checks the response status and if it's not a success response (between 200 and 299), it rejects the promise. This operation will cause the promise chain to skip all the chained promises listed and will skip directly to the catch() statement at the bottom, logging the Request failed text along with the error message. If that succeeds instead, it calls the json() function we defined. Since the previous promise, when successful, returned the response object, we get it as an input to the second promise. In this case, we return the data JSON processed, so the third promise receives the JSON directly: .then((data) => { console.log('Request succeeded with JSON response', data) })
and we simply log it to the console.
Handling errors In the example, in the previous section, we had a catch that was appended to the chain of promises. When anything in the chain of promises fails and raises an error or rejects the promise, the control goes to the nearest catch() statement down the chain.
new Promise((resolve, reject) => { throw new Error('Error') }) .catch((err) => { console.error(err) }) // or new Promise((resolve, reject) => { reject('Error') }) .catch((err) => { console.error(err) })
Cascading errors If inside the catch() you raise an error, you can append a second catch() to handle it, and so on. new Promise((resolve, reject) => { throw new Error('Error') }) .catch((err) => { throw new Error('Error') }) .catch((err) => { console.error(err) })
Orchestrating promises Promise.all() If you need to synchronize different promises, Promise.all() helps you define a list of promises, and execute something when they are all resolved. Example: const f1 = fetch('/something.json') const f2 = fetch('/something2.json') Promise.all([f1, f2]).then((res) => { console.log('Array of results', res) }) .catch((err) => { console.error(err) })
The ES6 destructuring assignment syntax allows you to also do Promise.all([f1, f2]).then(([res1, res2]) => {
console.log('Results', res1, res2) })
You are not limited to using fetch of course, any promise is good to go.
Promise.race() Promise.race() runs when the first of the promises you pass to it resolves, and it runs the
attached callback just once, with the result of the first promise resolved. Example: const first = new Promise((resolve, reject) => { setTimeout(resolve, 500, 'first') }) const second = new Promise((resolve, reject) => { setTimeout(resolve, 100, 'second') }) Promise.race([first, second]).then((result) => { console.log(result) // second })
Common errors Uncaught TypeError: undefined is not a promise If you get the Uncaught TypeError: undefined is not a promise error in the console, make sure you use new Promise() instead of just Promise()
Async and Await
Discover the modern approach to asynchronous functions in JavaScript. JavaScript evolved in a very short time from callbacks to Promises, and since ES2017 asynchronous JavaScript is even simpler with the async/await syntax.
Introduction
Why were async/await introduced?
How it works
A quick example
Promise all the things
The code is much simpler to read
Multiple async functions in series
Easier debugging
Introduction JavaScript evolved in a very short time from callbacks to promises (ES6), and since ES2017 asynchronous JavaScript is even simpler with the async/await syntax. Async functions are a combination of promises and generators, and basically, they are a higher level abstraction over promises. Let me repeat: async/await is built on promises.
Why were async/await introduced? They reduce the boilerplate around promises, and the "don't break the chain" limitation of chaining promises. When Promises were introduced in ES6, they were meant to solve a problem with asynchronous code, and they did, but over the 2 years that separated ES6 and ES2017, it was clear that promises could not be the final solution. Promises were introduced to solve the famous callback hell problem, but they introduced complexity on their own, and syntax complexity. They were good primitives around which a better syntax could be exposed to the developers, so when the time was right we got async functions. They make the code look like it's synchronous, but it's asynchronous and non-blocking behind the scenes.
How it works An async function returns a promise, like in this example: const doSomethingAsync = () => { return new Promise((resolve) => { setTimeout(() => resolve('I did something'), 3000) }) }
When you want to call this function you prepend await , and the calling code will stop until the promise is resolved or rejected. One caveat: the client function must be defined as async . Here's an example:
A quick example This is a simple example of async/await used to run a function asynchronously: const doSomethingAsync = () => { return new Promise((resolve) => { setTimeout(() => resolve('I did something'), 3000) }) } const doSomething = async () => { console.log(await doSomethingAsync()) } console.log('Before') doSomething() console.log('After')
The above code will print the following to the browser console: Before After I did something //after 3s
Promise all the things
Prepending the async keyword to any function means that the function will return a promise. Even if it's not doing so explicitly, it will internally make it return a promise. This is why this code is valid: const aFunction = async () => { return 'test' } aFunction().then(alert) // This will alert 'test'
and it's the same as: const aFunction = async () => { return Promise.resolve('test') } aFunction().then(alert) // This will alert 'test'
The code is much simpler to read
As you can see in the example above, our code looks very simple. Compare it to code using plain promises, with chaining and callback functions. And this is a very simple example, the major benefits will arise when the code is much more complex. For example here's how you would get a JSON resource, and parse it, using promises:
const getFirstUserData = () => {
  return fetch('/users.json') // get users list
    .then(response => response.json()) // parse JSON
    .then(users => users[0]) // pick first user
    .then(user => fetch(`/users/${user.name}`)) // get user data
    .then(userResponse => userResponse.json()) // parse JSON
}
getFirstUserData()
And here is the same functionality provided using await/async:
const getFirstUserData = async () => {
  const response = await fetch('/users.json') // get users list
  const users = await response.json() // parse JSON
  const user = users[0] // pick first user
  const userResponse = await fetch(`/users/${user.name}`) // get user data
  const userData = await userResponse.json() // parse JSON
  return userData
}
getFirstUserData()
Multiple async functions in series Async functions can be chained very easily, and the syntax is much more readable than with plain promises: const promiseToDoSomething = () => { return new Promise(resolve => { setTimeout(() => resolve('I did something'), 10000) }) } const watchOverSomeoneDoingSomething = async () => { const something = await promiseToDoSomething() return something + ' and I watched' } const watchOverSomeoneWatchingSomeoneDoingSomething = async () => { const something = await watchOverSomeoneDoingSomething() return something + ' and I watched as well' } watchOverSomeoneWatchingSomeoneDoingSomething().then((res) => { console.log(res) })
Will print: I did something and I watched and I watched as well
Easier debugging Debugging promises is hard because the debugger will not step over asynchronous code. Async/await makes this very easy because to the compiler it's just like synchronous code.
Loops and Scope There is one feature of JavaScript that might cause a few headaches to developers, related to loops and scoping. Learn some tricks about loops and scoping with var and let There is one feature of JavaScript that might cause a few headaches to developers, related to loops and scoping. Take this example: const operations = [] for (var i = 0; i < 5; i++) { operations.push(() => { console.log(i) }) } for (const operation of operations) { operation() }
It iterates 5 times, adding a function to an array called operations. This function simply logs the loop index variable i to the console. Later it runs these functions. The expected result here should be: 0 1 2 3 4
but actually what happens is this: 5 5 5 5 5
Why is this the case? Because of the use of var . Since var declarations are hoisted, the above code is equivalent to
var i; const operations = [] for (i = 0; i < 5; i++) { operations.push(() => { console.log(i) }) } for (const operation of operations) { operation() }
so, in the for-of loop, i is still visible, it's equal to 5 and every reference to i in the function is going to use this value. So what should we do to make things work as we want? The simplest solution is to use let declarations. Introduced in ES6, they are a great help in avoiding some of the weird things about var declarations. Simply changing var to let in the loop variable is going to work fine: const operations = [] for (let i = 0; i < 5; i++) { operations.push(() => { console.log(i) }) } for (const operation of operations) { operation() }
Here's the output: 0 1 2 3 4
How is this possible? This works because on every loop iteration i is created as a new variable each time, and every function added to the operations array gets its own copy of i . Keep in mind you cannot use const in this case, because there would be an error as for tries to assign a new value in the second iteration.
Another way to solve this problem was very common in pre-ES6 code, and it is called Immediately Invoked Function Expression (IIFE). In this case you can wrap the entire function and bind i to it. Since in this way you're creating a function that immediately executes, you return a new function from it, so we can execute it later: const operations = [] for (var i = 0; i < 5; i++) { operations.push(((j) => { return () => console.log(j) })(i)) } for (const operation of operations) { operation() }
Timers When writing JavaScript code, you might want to delay the execution of a function. Learn how to use setTimeout and setInterval to schedule functions in the future
setTimeout()
Zero delay
setInterval()
Recursive setTimeout
setTimeout() When writing JavaScript code, you might want to delay the execution of a function. This is the job of setTimeout . You specify a callback function to execute later, and a value expressing how later you want it to run, in milliseconds: setTimeout(() => { // runs after 2 seconds }, 2000) setTimeout(() => { // runs after 50 milliseconds }, 50)
This syntax defines a new function. You can call whatever other function you want in there, or you can pass an existing function name, and a set of parameters: const myFunction = (firstParam, secondParam) => { // do something }
// runs after 2 seconds setTimeout(myFunction, 2000, firstParam, secondParam)
setTimeout returns the timer id. This is generally not used, but you can store this id, and clear
it if you want to delete this scheduled function execution: const id = setTimeout(() => { // should run after 2 seconds }, 2000) // I changed my mind clearTimeout(id)
Zero delay If you specify the timeout delay to 0 , the callback function will be executed as soon as possible, but after the current function execution: setTimeout(() => { console.log('after ') }, 0) console.log(' before ')
will print before after . This is especially useful to avoid blocking the CPU on intensive tasks and let other functions be executed while performing a heavy calculation, by queuing functions in the scheduler. Some browsers (IE and Edge) implement a setImmediate() method that does this same exact functionality, but it's not standard and unavailable on other browsers. But it's a standard function in Node.js.
setInterval() setInterval is a function similar to setTimeout , with a difference: instead of running the
callback function once, it will run it forever, at the specific time interval you specify (in milliseconds): setInterval(() => { // runs every 2 seconds }, 2000)
The function above runs every 2 seconds unless you tell it to stop, using clearInterval , passing it the interval id that setInterval returned: const id = setInterval(() => { // runs every 2 seconds }, 2000) clearInterval(id)
It's common to call clearInterval inside the setInterval callback function, to let it auto-determine whether it should run again or stop. For example this code runs something until App.somethingIWait has the value arrived : const interval = setInterval(() => { if (App.somethingIWait === 'arrived') { clearInterval(interval) return } // otherwise do things }, 100)
Recursive setTimeout setInterval starts a function every n milliseconds, without any consideration about when a
function finishes its execution. If a function always takes the same amount of time, it's all fine:
Maybe the function takes different execution times, depending on network conditions for example:
And maybe one long execution overlaps the next one:
To avoid this, you can schedule a recursive setTimeout to be called when the callback function finishes:
const myFunction = () => {
  // do something
  setTimeout(myFunction, 1000)
}
setTimeout(myFunction, 1000)
to achieve this scenario:
setTimeout and setInterval are available in Node.js, through the Timers module.
Node.js also provides setImmediate() , which is equivalent to using setTimeout(() => {}, 0) , mostly used to work with the Node.js Event Loop.
this `this` is a value that has different values depending on where it's used. Not knowing this tiny detail of JavaScript can cause a lot of headaches, so it's worth taking 5 minutes to learn all the tricks
this is a value that has different values depending on where it's used.
Not knowing this tiny detail of JavaScript can cause a lot of headaches, so it's worth taking 5 minutes to learn all the tricks.
this in strict mode
Outside of any object, this in strict mode is always undefined .
Notice I mentioned strict mode. If strict mode is disabled (the default if you don't explicitly add 'use strict' at the top of your file), you are in the so-called sloppy mode, and this , unless in some of the specific cases mentioned below, has the value of the global object. Which means window in a browser context.
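A minimal sketch of the difference (the function names are just illustrative):
function whatIsThis() {
  return this
}
whatIsThis() //the global object (window in the browser) in sloppy mode

function whatIsThisStrict() {
  'use strict'
  return this
}
whatIsThisStrict() //undefined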
this in methods A method is a function attached to an object. You can see it in various forms.
Here's one: const car = { maker: 'Ford', model: 'Fiesta', drive() { console.log(`Driving a ${this.maker} ${this.model} car!`) } } car.drive() //Driving a Ford Fiesta car!
In this case, using a regular function, this is automatically bound to the object. Note: the above method declaration is the same as drive: function() { ..., but shorter: const car = { maker: 'Ford', model: 'Fiesta', drive: function() { console.log(`Driving a ${this.maker} ${this.model} car!`) } }
The same works in this example: const car = { maker: 'Ford', model: 'Fiesta' } car.drive = function() { console.log(`Driving a ${this.maker} ${this.model} car!`) } car.drive() //Driving a Ford Fiesta car!
An arrow function does not work in the same way, as it's lexically bound: const car = { maker: 'Ford', model: 'Fiesta', drive: () => { console.log(`Driving a ${this.maker} ${this.model} car!`) } }
car.drive() //Driving a undefined undefined car!
Binding arrow functions You cannot bind a value to an arrow function, like you do with normal functions. It's simply not possible due to the way they work. this is lexically bound, which means its value is derived from the context where they are defined.
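For example, assuming a car object like the ones used below, bind() silently has no effect on the arrow function's this :
const car = { maker: 'Ford', model: 'Fiesta' }
const drive = () => {
  console.log(`Driving a ${this.maker} ${this.model} car!`)
}
drive.bind(car)() //Driving a undefined undefined car! (in a browser script)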
Explicitly pass an object to be used as this JavaScript offers a few ways to map this to any object you want. Using bind() , at the function declaration step: const car = { maker: 'Ford', model: 'Fiesta' } const drive = function() { console.log(`Driving a ${this.maker} ${this.model} car!`) }.bind(car) drive() //Driving a Ford Fiesta car!
You could also bind an existing object method to remap its this value: const car = { maker: 'Ford', model: 'Fiesta', drive() { console.log(`Driving a ${this.maker} ${this.model} car!`) } } const anotherCar = { maker: 'Audi', model: 'A4' } car.drive.bind(anotherCar)() //Driving a Audi A4 car!
Using call() or apply() , at the function invocation step: const car = { maker: 'Ford', model: 'Fiesta' } const drive = function(kmh) { console.log(`Driving a ${this.maker} ${this.model} car at ${kmh} km/h!`) } drive.call(car, 100) //Driving a Ford Fiesta car at 100 km/h! drive.apply(car, [100]) //Driving a Ford Fiesta car at 100 km/h!
The first parameter you pass to call() or apply() is always bound to this . The difference between call() and apply() is just that the second one wants an array as the arguments list, while the first accepts a variable number of parameters, which it passes as the function arguments.
The special case of browser event handlers
In event handler callbacks, this refers to the HTML element that received the event:
document.querySelector('#button').addEventListener('click', function(e) {
  console.log(this) //HTMLElement
})
You can bind it using document.querySelector('#button').addEventListener( 'click', function(e) { console.log(this) //Window if global, or your context }.bind(this) )
Strict Mode
Strict Mode is an ES5 feature, and it's a way to make JavaScript behave in a better way. And in a different way, as enabling Strict Mode changes the semantics of the JavaScript language. It's really important to know the main differences between JavaScript code in strict mode, and "normal" JavaScript, which is often referred to as sloppy mode. Strict Mode mostly removes functionality that was possible in ES3, and deprecated since ES5 (but not removed because of backwards compatibility requirements).
How to enable Strict Mode
Strict mode is optional. As with every breaking change in JavaScript, we can't simply change how the language behaves by default, because that would break gazillions of JavaScript programs out there, and JavaScript puts a lot of effort into making sure 1996 JavaScript code still works today. It's a key part of its success. So we have the 'use strict' directive we need to use to enable Strict Mode. You can put it at the beginning of a file, to apply it to all the code contained in the file: 'use strict' const name = 'Flavio' const hello = () => 'hey' //...
You can also enable Strict Mode for an individual function, by putting 'use strict' at the beginning of the function body: function hello() { 'use strict' return 'hey' }
This is useful when operating on legacy code, where you don't have the time to test or the confidence to enable strict mode on the whole file.
What changes in Strict Mode
Accidental global variables
If you assign a value to an undeclared variable, JavaScript by default creates that variable on the global object:
;(function() {
  variable = 'hey'
})()
;(() => {
  name = 'Flavio'
})()
variable //'hey'
name //'Flavio'
Turning on Strict Mode, an error is raised if you try to do what we did above:
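For example, something like this:
;(() => {
  'use strict'
  variable = 'hey' //ReferenceError: variable is not defined
})()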
Assignment errors
JavaScript silently fails some conversion errors. In Strict Mode, those silent errors now raise issues:
undefined = 1 //silently fails in sloppy mode
;(() => {
  'use strict'
  undefined = 1 //TypeError: Cannot assign to read only property 'undefined'
})()
The same applies to Infinity, NaN, eval , arguments and more. In JavaScript you can define a property of an object to be not writable, by using const car = {} Object.defineProperty(car, 'color', { value: 'blue', writable: false })
In sloppy mode, trying to override this value silently fails; in Strict Mode, it raises a TypeError:
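A minimal sketch of what that looks like, reusing the car object defined just above:
car.color = 'red' //silently fails in sloppy mode, car.color is still 'blue'
;(() => {
  'use strict'
  car.color = 'red' //TypeError: Cannot assign to read only property 'color'
})()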
The same works for getters:
const car = {
  get color() {
    return 'blue'
  }
}
car.color = 'red' //silently fails in sloppy mode
;(() => {
  'use strict'
  car.color = 'yellow' //TypeError: Cannot set property color of #<Object> which has only a getter
})()
In sloppy mode, trying to extend a non-extensible object silently fails; in Strict Mode, it raises a TypeError:
const car = { color: 'blue' }
Object.preventExtensions(car)
car.model = 'Fiesta' //silently fails in sloppy mode
;(() => {
  'use strict'
  car.owner = 'Flavio' //TypeError: Cannot add property owner, object is not extensible
})()
Also, sloppy mode allows setting properties on primitive values, without failing, but also without doing anything at all:
true.false = '' //''
;(1).name = 'xxx' //'xxx'
var test = 'test'
test.testing = true //true
test.testing //undefined
Strict mode fails in all those cases:
;(() => {
  'use strict'
  true.false = '' //TypeError: Cannot create property 'false' on boolean 'true'
  ;(1).name = 'xxx' //TypeError: Cannot create property 'name' on number '1'
  'test'.testing = true //TypeError: Cannot create property 'testing' on string 'test'
})()
Deletion errors
In sloppy mode, if you try to delete a property that you cannot delete, JavaScript simply returns false, while in Strict Mode, it raises a TypeError:
delete Object.prototype //false
;(() => {
  'use strict'
  delete Object.prototype //TypeError: Cannot delete property 'prototype' of function Object() { [native code] }
})()
Function arguments with the same name In normal functions, you can have duplicate parameter names: (function(a, a, b) { console.log(a, b) })(1, 2, 3) //2 3
(function(a, a, b) { 'use strict' console.log(a, b) })(1, 2, 3) //Uncaught SyntaxError: Duplicate parameter name not allowed in this context
Note that arrow functions always raise a SyntaxError in this case: ((a, a, b) => { console.log(a, b) })(1, 2, 3) //Uncaught SyntaxError: Duplicate parameter name not allowed in this context
Octal syntax Octal syntax in Strict Mode is disabled. By default, prepending a 0 to a number compatible with the octal numeric format makes it (sometimes confusingly) interpreted as an octal number: (() => { console.log(010) })() //8 (() => { 'use strict' console.log(010) })() //Uncaught SyntaxError: Octal literals are not allowed in strict mode.
You can still enable octal numbers in Strict Mode using the 0oXX syntax: ;(() => { 'use strict' console.log(0o10) })() //8
Removed with
Strict Mode disables the with keyword, to remove some edge cases and allow more optimization at the compiler level.
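A quick sketch of what that means in practice (using Math purely as an example object):
;(() => {
  'use strict'
  with (Math) { //SyntaxError: Strict mode code may not include a with statement
    console.log(PI)
  }
})()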
Immediately-invoked Function Expressions (IIFE) An Immediately-invoked Function Expression is a way to execute functions immediately, as soon as they are created. IIFEs are very useful because they don't pollute the global object, and they are a simple way to isolate variables declarations
An Immediately-invoked Function Expression (IIFE for friends) is a way to execute functions immediately, as soon as they are created. IIFEs are very useful because they don't pollute the global object, and they are a simple way to isolate variables declarations. This is the syntax that defines an IIFE: ;(function() { /* */ })()
IIFEs can be defined with arrow functions as well:
;(() => { /* */ })()
We basically have a function defined inside parentheses, and then we append () to execute that function: (/* function */)() . Those wrapping parentheses are actually what make our function, internally, be considered an expression. Otherwise, the function declaration would be invalid, because we didn't specify any name:
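Without the wrapping parentheses it would be parsed as a function declaration and fail, roughly like this:
function() { //SyntaxError: function declarations require a name
  /* */
}()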
Function declarations want a name, while function expressions do not require it. You could also put the invoking parentheses inside the expression parentheses, there is no difference, just a styling preference: (function() { /* */ }()) (() => { /* */ }())
Alternative syntax using unary operators There is some weirder syntax that you can use to create an IIFE, but it's very rarely used in the real world, and it relies on using any unary operator: ;-(function() { /* */ })() + (function() { /* */ })()
Named IIFE An IIFE can also use a named regular function (not an arrow function). This does not change the fact that the function does not "leak" to the global scope, and it cannot be invoked again after its execution: ;(function doSomething() { /* */ })()
IIFEs starting with a semicolon You might see this in the wild: ;(function() { /* */ })()
This prevents issues when blindly concatenating two JavaScript files. Since JavaScript does not require semicolons, you might concatenate with a file with some statements in its last line that causes a syntax error. This problem is essentially solved with "smart" code bundlers like webpack.
Math operators
Performing math operations and calculus is a very common thing to do with any programming language. JavaScript offers several operators to help us work with numbers.
Operators Arithmetic operators Addition (+) const three = 1 + 2 const four = three + 1
The + operator also serves as string concatenation if you use strings, so pay attention: const three = 1 + 2 three + 1 // 4 'three' + 1 // three1
Subtraction (-) const two = 4 - 2
Division (/)
Returns the quotient of the first operand divided by the second:
const result = 20 / 5 //result === 4
const result = 20 / 7 //result === 2.857142857142857
If you divide by zero, JavaScript does not raise any error but returns the Infinity value (or -Infinity if the value is negative).
1 / 0 //Infinity -1 / 0 //-Infinity
Remainder (%) The remainder is a very useful calculation in many use cases: const result = 20 % 5 //result === 0 const result = 20 % 7 //result === 6
A remainder by zero is always NaN , a special value that means "Not a Number": 1 % 0 //NaN -1 % 0 //NaN
Multiplication (*) 1 * 2 //2 -1 * 2 //-2
Exponentiation (**) Raises the first operand to the power of the second operand 1 ** 2 //1 2 ** 1 //2 2 ** 2 //4 2 ** 8 //256 8 ** 2 //64
Unary operators Increment (++) Increment a number. This is a unary operator, and if put before the number, it returns the value incremented. If put after the number, it returns the original value, then increments it. let x = 0 x++ //0 x //1 ++x //2
Decrement (--) Works like the increment operator, except it decrements the value. let x = 0 x-- //0 x //-1 --x //-2
Unary negation (-) Return the negation of the operand let x = 2 -x //-2 x //2
Unary plus (+) If the operand is not a number, it tries to convert it. Otherwise if the operand is already a number, it does nothing. let x = 2 +x //2 x = '2' +x //2 x = '2a' +x //NaN
Assignment shortcuts
The regular assignment operator, = , has several shortcuts for all the arithmetic operators which let you combine assignment and operation, assigning to the first operand the result of the operation with the second operand. They are:
+= : addition assignment
-= : subtraction assignment
*= : multiplication assignment
/= : division assignment
%= : remainder assignment
**= : exponentiation assignment
Examples:
let a = 0
a += 5 //a === 5
a -= 2 //a === 3
a *= 2 //a === 6
a /= 2 //a === 3
a %= 2 //a === 1
Precedence rules Every complex statement will introduce precedence problems. Take this: const a = 1 * 2 + 5 / 2 % 2
The result is 2.5, but why? What operations are executed first, and which need to wait? Some operations have higher precedence than others. The precedence rules, from highest to lowest, are:
- + ++ -- : unary operators, increment and decrement
* / % : multiplication, division, remainder
+ - : addition, subtraction
= += -= *= /= %= **= : assignments
Operations on the same level (like + and - ) are executed in the order they are found, from left to right. Following this table, we can solve this calculation:
const a = 1 * 2 + 5 / 2 % 2
const a = 2 + 2.5 % 2
const a = 2 + 0.5
const a = 2.5
The Math object
The Math object contains lots of math-related utilities: constants and functions. This tutorial describes them all.
Constants
Math.E : the constant e, base of the natural logarithm (~2.71828)
Math.LN10 : the base e (natural) logarithm of 10
Math.LN2 : the base e (natural) logarithm of 2
Math.LOG10E : the base 10 logarithm of e
Math.LOG2E : the base 2 logarithm of e
Math.PI : the π constant (~3.14159)
Math.SQRT1_2 : the square root of 1/2 (the reciprocal of the square root of 2)
Math.SQRT2 : the square root of 2
Functions All those functions are static. Math cannot be instantiated.
Math.abs() Returns the absolute value of a number Math.abs(2.5) //2.5 Math.abs(-2.5) //2.5
Math.acos() Returns the arccosine of the operand The operand must be between -1 and 1
Math.acos(0.8) //0.6435011087932843
Math.asin() Returns the arcsine of the operand The operand must be between -1 and 1 Math.asin(0.8) //0.9272952180016123
Math.atan() Returns the arctangent of the operand Math.atan(30) //1.5374753309166493
Math.atan2() Returns the arctangent of the quotient of its arguments. Math.atan2(30, 20) //0.982793723247329
Math.ceil() Rounds a number up Math.ceil(2.5) //3 Math.ceil(2) //2 Math.ceil(2.1) //3 Math.ceil(2.99999) //3
Math.cos() Return the cosine of an angle expressed in radians Math.cos(0) //1 Math.cos(Math.PI) //-1
Math.exp() Returns the value of e (Math.E) raised to the power of the exponent that's passed as argument
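For example (output values are approximate):
Math.exp(0) //1
Math.exp(1) //2.718281828459045
Math.exp(2) //7.38905609893065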
Math.floor() Rounds a number down Math.floor(2.5) //2 Math.floor(2) //2 Math.floor(2.1) //2 Math.floor(2.99999) //2
Math.log() Return the base e (natural) logarithm of a number Math.log(10) //2.302585092994046 Math.log(Math.E) //1
Math.max() Return the highest number in the set of numbers passed Math.max(1,2,3,4,5) //5 Math.max(1) //1
Math.min() Return the smallest number in the set of numbers passed Math.min(1,2,3,4,5) //1 Math.min(1) //1
Math.pow() Return the first argument raised to the second argument Math.pow(1, 2) //1 Math.pow(2, 1) //2 Math.pow(2, 2) //4 Math.pow(2, 4) //16
Math.random() Returns a pseudorandom number between 0.0 (inclusive) and 1.0 (exclusive) Math.random() //0.9318168241227056 Math.random() //0.35268950194094395
Math.round() Rounds a number to the nearest integer Math.round(1.2) //1 Math.round(1.6) //2
Math.sin() Calculates the sine of an angle expressed in radians Math.sin(0) //0 Math.sin(Math.PI) //1.2246467991473532e-16
Math.sqrt() Return the square root of the argument Math.sqrt(4) //2 Math.sqrt(16) //4 Math.sqrt(5) //2.23606797749979
Math.tan() Calculates the tangent of an angle expressed in radians Math.tan(0) //0 Math.tan(Math.PI) //-1.2246467991473532e-16
ES Modules
ES Modules ES Modules is the ECMAScript standard for working with modules. While Node.js has been using the CommonJS standard for years, the browser never had a module system, as every major decision such as a module system must first be standardized by ECMAScript and then implemented by the browser.
Introduction to ES Modules The ES Modules Syntax Other import/export options CORS What about browsers that do not support modules? Conclusion
Introduction to ES Modules ES Modules is the ECMAScript standard for working with modules.
While Node.js has been using the CommonJS standard for years, the browser never had a module system, as every major decision such as a module system must first be standardized by ECMAScript and then implemented by the browser. This standardization process was completed with ES6, and browsers started implementing this standard, trying to keep everything well aligned and working the same way. ES Modules are now supported in Chrome, Safari, Edge and Firefox (since version 60). Modules are very cool, because they let you encapsulate all sorts of functionality, and expose this functionality to other JavaScript files, as libraries.
The ES Modules Syntax The syntax to import a module is: import package from 'module-name'
while CommonJS uses const package = require('module-name')
A module is a JavaScript file that exports one or more value (objects, functions or variables), using the export keyword. For example, this module exports a function that returns a string uppercase: uppercase.js export default str => str.toUpperCase()
In this example, the module defines a single, default export, so it can be an anonymous function. Otherwise it would need a name to distinguish it from other exports. Now, any other JavaScript module can import the functionality offered by uppercase.js by importing it. An HTML page can add a module by using a script tag with the special type="module" attribute:
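A minimal sketch of such a script tag (the filename is just an example):

<script type="module" src="index.js"></script>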
Note: this module import behaves like a defer script load. See efficiently load JavaScript with defer and async. It's important to note that any script loaded with type="module" is loaded in strict mode. In this example, the uppercase.js module defines a default export, so when we import it, we can assign it a name we prefer: import toUpperCase from './uppercase.js'
and we can use it: toUpperCase('test') //'TEST'
You can also use an absolute path for the module import, to reference modules defined on another domain: import toUpperCase from 'https://flavio-es-modules-example.glitch.me/uppercase.js'
This is also valid import syntax: import { foo } from '/uppercase.js' import { foo } from '../uppercase.js'
This is not: import { foo } from 'uppercase.js' import { foo } from 'utils/uppercase.js'
It's either absolute, or has a ./ or / before the name.
Other import/export options We saw this example above: export default str => str.toUpperCase()
This creates one default export. In a file however you can export more than one thing, by using this syntax: const a = 1 const b = 2 const c = 3 export { a, b, c }
Another module can import all those exports using import * as module from 'module'
You can import just a few of those exports, using the destructuring assignment: import { a } from 'module' import { a, b } from 'module'
You can rename any import, for convenience, using as : import { a, b as two } from 'module'
You can import the default export, and any non-default export by name, like in this common React import: import React, { Component } from 'react'
You can check an ES Modules example on https://glitch.com/edit/#!/flavio-es-modules-example?path=index.html
CORS Modules are fetched using CORS. This means that if you reference scripts from other domains, they must have a valid CORS header that allows cross-site loading (like Access-Control-Allow-Origin: * )
What about browsers that do not support modules? Use a combination of type="module" and nomodule :
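A sketch of the pattern (filenames are placeholders): browsers that understand ES Modules load the type="module" script and ignore the nomodule one, while older browsers skip the module script and load the fallback bundle:

<script type="module" src="app.js"></script>
<script nomodule src="app-legacy.js"></script>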
Conclusion ES Modules are one of the biggest features introduced in modern browsers. They are part of ES6 but the road to implement them has been long. We can now use them! But we must also remember that having more than a few modules is going to have a performance hit on our pages, as it's one more step that the browser must perform at runtime. Webpack is probably going to still be a huge player even if ES Modules land in the browser, but having such a feature directly built in the language is huge for a unification of how modules work in the client-side and on Node.js as well.
CommonJS
CommonJS The CommonJS module specification is the standard used in Node.js for working with modules. Modules are very cool, because they let you encapsulate all sorts of functionality, and expose this functionality to other JavaScript files, as libraries
The CommonJS module specification is the standard used in Node.js for working with modules. Client-side JavaScript that runs in the browser uses another standard, called ES Modules. Modules are very cool, because they let you encapsulate all sorts of functionality, and expose this functionality to other JavaScript files, as libraries. They let you create clearly separate and reusable snippets of functionality, each testable on its own. The huge npm ecosystem is built upon this CommonJS format. The syntax to import a module is: const package = require('module-name')
In CommonJS, modules are loaded synchronously, and processed in the order the JavaScript runtime finds them. This system was born with server-side JavaScript in mind, and is not suitable for the client-side (this is why ES Modules were introduced). A JavaScript file is a module when it exports one or more of the symbols it defines, be they variables, functions or objects: uppercase.js exports.uppercase = str => str.toUpperCase()
Any JavaScript file can import and use this module: const uppercaseModule = require('./uppercase.js') uppercaseModule.uppercase('test')
A simple example can be found in this Glitch. You can export more than one value: exports.a = 1 exports.b = 2 exports.c = 3
and import them individually using the destructuring assignment: const { a, b, c } = require('./uppercase.js')
or just export one value using: //file.js module.exports = value
and import it using const value = require('./file.js')
Glossary
Glossary A guide to a few terms used in frontend development that might be alien to you Asynchronous Block Block Scoping Callback Declarative Fallback Function Scoping Immutability Lexical Scoping Polyfill Pure function Reassignment Scope Scoping Shim Side effect State Stateful Stateless Strict mode Tree Shaking
Asynchronous Code is asynchronous when you initiate something, forget about it, and when the result is ready you get it back without having to wait for it. The typical example is an AJAX call, which might take even seconds and in the meantime you complete other stuff, and when the response is ready, the callback function gets called. Promises and async/await are the modern way to handle async.
Block
In JavaScript a block is delimited by curly braces ( {} ). An if statement contains a block, a for loop contains a block.
Block Scoping With Block Scoping, any variable defined in a block is visible and accessible from inside the whole block, but not outside of it.
Callback A callback is a function that's invoked when something happens. A click event associated to an element has a callback function that's invoked when the user clicks the element. A fetch request has a callback that's called when the resource is downloaded.
Declarative A declarative approach is when you tell the machine what you need to do, and you let it figure out the details. React is considered declarative, as you reason about abstractions rather than editing the DOM directly. Every high level programming language is more declarative than a low level programming language like Assembler. JavaScript is more declarative than C. HTML is declarative.
Fallback A fallback is used to provide a good experience when a user hasn't access to a particular functionality. For example a user that browses with JavaScript disabled should be able to have a fallback to a plain HTML version of the page. Or for a browser that has not implemented an API, you should have a fallback to avoid completely breaking the experience of the user.
Function Scoping With Function Scoping, any variable defined in a function is visible and accessible from inside the whole function.
Immutability
A variable is immutable when its value cannot change after it's created. A mutable variable can be changed. The same applies to objects and arrays.
Lexical Scoping Lexical Scoping is a particular kind of scoping where variables of a parent function are made available to inner functions as well. The scope of an inner function also includes the scope of a parent function.
Polyfill A polyfill is a way to provide new functionality available in modern JavaScript or a modern browser API to older browsers. A polyfill is a particular kind of shim.
Pure function A function that has no side effects (does not modify external resources), and its output is only determined by the arguments. You could call this function 1M times, and given the same set of arguments, the output will always be the same.
Reassignment JavaScript with var and let declarations allows you to reassign a variable indefinitely. With const declarations you effectively declare an immutable value for strings, numbers and booleans, and an object that cannot be reassigned (but you can still modify it through its methods).
Scope Scope is the set of variables that's visible to a part of the program.
Scoping Scoping is the set of rules defined in a programming language to determine where a variable is visible and accessible.
Shim
A shim is a little wrapper around a functionality, or API. It's generally used to abstract something, pre-fill parameters or add a polyfill for browsers that do not support some functionality. You can consider it like a compatibility layer.
Side effect A side effect is when a function interacts with some other function or object outside it. Interaction with the network or the file system, or with the UI, are all side effects.
State State usually comes into play when talking about Components. A component can be stateful if it manages its own data, or stateless if it doesn't.
Stateful A stateful component, function or class manages its own state (data). It could store an array, a counter or anything else.
Stateless A stateless component, function or class is also called dumb because it's incapable of having its own data to make decisions, so its output or presentation is entirely based on its arguments. This implies that pure functions are stateless.
Strict mode Strict mode is an ECMAScript 5.1 feature which causes the JavaScript runtime to catch more errors. It helps you improve your JavaScript code by disallowing undeclared variables and other things that might cause overlooked issues, like duplicated object properties and other subtle bugs. Hint: use it. The alternative is "sloppy mode", which is not a good thing, even looking at the name we gave it.
Tree Shaking
Tree shaking means removing "dead code" from the bundle you ship to your users. If you import some code that you never use, that code is not going to be sent to the users of your app, reducing file size and loading time.
CSS
Introduction to CSS
Introduction to CSS CSS is the language that defines the visual appearance of an HTML page in the browser. It's evolving quickly, and thanks to the newest features, CSS has never been easier to use
What is CSS How does CSS look like Semicolons Formatting and indentation How do you load CSS in a Web Page Style sheets in the head tag External CSS file Inline styles Error handling The "Cascading" part explained Specificity Importance Order in file
CSS Inheritance Normalizing CSS Pseudo classes Pseudo elements
What is CSS CSS (an abbreviation of Cascading Style Sheets) is the language that we use to style an HTML file, and tell the browser how it should render the elements on the page. It grew out of the need to style web pages. Before CSS was introduced, people wanted a way to style their web pages, which all looked very similar and "academic" back in the day. You couldn't do much in terms of personalization. HTML 3.2 introduced the option of defining colors inline as HTML element attributes, and presentational tags like center and font , but that escalated quickly into a far from ideal situation.
CSS let us move everything presentation-related from the HTML to the CSS, so that HTML could get back being the format that defines the structure of the document, rather than how things should look in the browser. CSS is continuously evolving, and CSS you used 5 years ago might just be outdated, as new idiomatic CSS techniques emerged and browsers changed.
How does CSS look like A CSS rule set has one part called the selector, and another part called the declaration block. The declaration block contains various declarations, each composed of a property and a value. In this example, p is the selector, and the rule sets the font-size property to the value 20px :
p { font-size: 20px; }
Multiple rules are stacked one after the other: p { font-size: 20px; }
a { color: blue; }
A selector can target one or more items: p, a { font-size: 20px; }
and it can target HTML tags, like above, or HTML elements that contain a certain class attribute with .my-class , or HTML elements that have a specific id attribute with #my-id . More advanced selectors allow you to choose items whose attribute matches a specific value, or also items which respond to pseudo-classes (more on that later)
Semicolons Every CSS declaration terminates with a semicolon. Semicolons are not optional, except after the last declaration in a block, but I suggest always using them for consistency and to avoid errors if you add another property and forget to add the semicolon on the previous line.
Formatting and indentation There is no fixed rule for formatting. This CSS is valid: p { font-size: 20px ; } a{color:blue;}
but a pain to see. Stick to some conventions, like the ones you see in the examples above: stick selectors and the closing brackets to the left, indent 2 spaces for each rule, have the opening bracket on the same line of the selector, separated by one space. Correct and consistent use of spacing and indentation is a visual aid in understanding your code.
How do you load CSS in a Web Page
CSS can be loaded in a page in 3 ways: with a style tag in the page head , with an external CSS file, and inline in tags.
Style sheets in the head tag p { font-size: 20px; }
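For example, a style block placed in the document head might look like this (a minimal sketch):

<head>
  <style>
    p {
      font-size: 20px;
    }
  </style>
</head>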
External CSS file
style.css p { font-size: 20px; }
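The page then loads the file with a link tag in the head, for example (the filename is an assumption):

<link rel="stylesheet" type="text/css" href="style.css">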
Inline styles Inline styles allow you to set some CSS directly inside an HTML element, using the style HTML attribute:
<p style="font-size: 20px">Test</p>
They are sometimes useful for quick tests, but should be generally avoided.
Error handling CSS is resilient. When it finds an error, it does not act like JavaScript which packs up all its things and goes away altogether, terminating all the script execution after the error is found. CSS tries very hard to do what you want. If a line has an error, it skips it and jumps to the next line without any error. If you forget the semicolon on one line: p { font-size: 20px color: black; border: 1px solid black; }
the line with the error AND the next one will not be applied, but the third rule will be successfully applied on the page. Basically, it scans all until it finds a semicolon, but when it reaches it, the rule is now font-size: 20px color: black; , which is invalid, so it skips it. Sometimes it's tricky to realize there is an error somewhere, and where that error is, because the browser won't tell us. This is why tools like CSS Lint exist:
The "Cascading" part explained CSS means Cascading Style Sheets, so what does cascading mean? Two or more competing rules for the same property applied to the same element need to be elaborated according to some specific rules, to determine which one needs to be applied. Those rules involve 3 things: specificity importance order in the file
Specificity The more specific is a selector, the more priority the rule has over the others. Specificity is measured with this set of rules: 1 point: element selectors 10 points: class selectors, attribute selectors, pseudo class selectors 100 points: id selectors 1000 points: rules defined inline, in a style attribute. Those points are summed, and the selector which is more specific gets the prize, and shows up on the page.
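As a sketch of how those points add up (the selectors are made-up examples):

/* 1 point: one element selector */
p { color: red; }

/* 11 points: element + class */
p.article-intro { color: green; }

/* 111 points: element + class + id */
p#intro.article-intro { color: blue; }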
Importance
Specificity does not matter if a rule ends with !important : p { font-size: 20px!important; }
That rule will take precedence over any rule with more specificity
Order in file If a rule is defined after one with same specificity, the second rule wins: p { color: black; } p { color: white; }
The rule that wins is color: white . This applies to all rules, including the ones with !important .
CSS Inheritance Some CSS rules applied to an element are inherited by its children. Not all rules, just some. It depends on the property. The commonly used rules which apply inheritance are: border-spacing color cursor font-family font-size font-style font-variant font-weight font letter-spacing line-height list-style-image
In a child element, you can set any property to special values, to explicitly control the inheritance behavior:
inherit : use the value of the parent element
initial : use the initial value of the property, as defined in the CSS specification
unset : if the property inherits by default, behave like inherit . Otherwise behave like initial
revert : roll back to the value the default browser stylesheet would apply
Normalizing CSS The default browser stylesheet mentioned above is the set of rules that browsers have to apply some minimum style to elements. Since every browser has its own set, there are inconsistencies you need to deal with. Rather than removing all defaults, like a CSS reset does, normalizing removes browser inconsistencies, while keeping a basic set of rules you can rely on. Normalize.css is the most commonly used solution for this problem.
Pseudo classes Pseudo classes are used to specify a specific state of an element, or to target a specific child. They start with a single colon : . They can be used as part of a selector, and they are very useful to style active or visited links for example, change the style on hover, focus, or target the first child, or odd rows. Very handy in many cases. These are the most popular pseudo classes you will likely use:
:active : an element being activated by the user (e.g. clicked). Mostly used on links or buttons
:checked : a checkbox, option or radio input that is checked
:default : the default in a set of choices (like an option in a select, or radio buttons)
:disabled : an element that's disabled
:empty : an element with no children
:enabled : an element that's enabled (opposite of :disabled )
:first-child : the first child of a group of siblings
:focus : the element with focus
:hover : an element hovered with the mouse
:last-child : the last child of a group of siblings
:link : a link that's not been visited
:not() : any element not matching the selector passed, e.g. :not(span)
:nth-child() : an element matching the specified position
:nth-last-child() : an element matching the specified position, starting from the end
:only-child : an element without any siblings
:required : a form element with the required attribute set
:root : represents the html element. It's like targeting html , but it's more specific. Useful in CSS Variables
:target : the element matching the current URL fragment (for inner navigation in the page)
:valid : form elements that validated client-side successfully
:visited : a link that's been visited
:nth-child() and :nth-last-child() are quite complex and deserve a special mention. They
can be used to target odd or even children with :nth-child(odd) and :nth-child(even) . They can target the first 3 children with :nth-child(-n+3) . They can style 1 in every 5 elements with :nth-child(5n) . More details on MDN.
Some pseudo classes are used for printing, like :first , :left , :right . More on using CSS for printing in the CSS printing tutorial. Find the list with links to all the pseudo links on MDN.
Pseudo elements Pseudo elements are used to style a specific part of an element. They start with a double colon :: (for backwards compatibility you can use a single colon, but you should not, to distinguish them from pseudo classes). ::before and ::after are probably the most used pseudo elements. They are used to add
content before or after an element, like icons for example.
::after : creates a pseudo element after the content of the element
::before : creates a pseudo element before the content of the element
::first-letter : can be used to style the first letter of a block of text
::first-line : can be used to style the first line of a block of text
::selection : targets the text selected by the user
CSS Grid
CSS Grid CSS Grid is the new kid in the CSS town, and while not yet fully supported by all browsers, it's going to be the future system for layouts
The Grid. A digital frontier. I tried to picture clusters of information as they moved through the computer. What did they look like? Ships? Motorcycles? Were the circuits like freeways? I kept dreaming of a world I thought I'd never see. And then one day.. I got in. --- Tron: Legacy Introduction to CSS Grid The basics grid-template-columns and grid-template-rows Automatic dimensions Different columns and rows dimensions Adding space between the cells Spawning items on multiple columns and/or rows Shorthand syntax More grid configuration Using fractions Using percentages and rem Using repeat()
Specify a minimum width for a row Positioning elements using grid-template-areas Adding empty cells in template areas Fill a page with a grid An example: header, sidebar, content and footer Wrapping up
Introduction to CSS Grid CSS Grid is a fundamentally new approach to building layouts using CSS. Keep an eye on the CSS Grid Layout page on caniuse.com to find out which browsers currently support it. At the time of writing, Feb 2018, all major browsers (except IE, which will never have support for it) are already supporting this technology, covering 78% of all users. CSS Grid is not a competitor to Flexbox. They interoperate and collaborate on complex layouts, because CSS Grid works on 2 dimensions (rows AND columns) while Flexbox works on a single dimension (rows OR columns). Building layouts for the web has traditionally been a complicated topic. I won't dig into the reasons for this complexity, which is a complex topic on its own, but you can think yourself as a very lucky human because nowadays you have 2 very powerful and well supported tools at your disposal: CSS Flexbox CSS Grid These 2 are the tools to build the Web layouts of the future. Unless you need to support old browsers like IE8 and IE9, there is no reason to be messing with things like: Table layouts Floats clearfix hacks display: table hacks
In this guide there's all you need to know about going from a zero knowledge of CSS Grid to being a proficient user.
The basics
The CSS Grid layout is activated on a container element (which can be a div or any other tag) by setting display: grid . As with flexbox, you can define some properties on the container, and some properties on each individual item in the grid. These properties combined will determine the final look of the grid. The most basic container properties are grid-template-columns and grid-template-rows .
grid-template-columns and grid-template-rows Those properties define the number of columns and rows in the grid, and they also set the width of each column/row. The following snippet defines a grid with 4 columns each 200px wide, and 2 rows with a 300px height each. .container { display: grid; grid-template-columns: 200px 200px 200px 200px; grid-template-rows: 300px 300px; }
Here's another example of a grid with 2 columns and 2 rows: .container { display: grid; grid-template-columns: 200px 200px; grid-template-rows: 100px 100px; }
Automatic dimensions Many times you might have a fixed header size, a fixed footer size, and the main content that is flexible in height, depending on its length. In this case you can use the auto keyword: .container { display: grid; grid-template-rows: 100px auto 100px; }
Different columns and rows dimensions In the above examples we made regular grids by using the same values for rows and the same values for columns. You can specify any value for each row/column, to create a lot of different designs: .container { display: grid; grid-template-columns: 100px 200px; grid-template-rows: 100px 50px; }
Adding space between the cells Unless specified, there is no space between the cells. You can add spacing by using those properties: grid-column-gap grid-row-gap
or the shorthand syntax grid-gap . Example: .container { display: grid; grid-template-columns: 100px 200px; grid-template-rows: 100px 50px;
grid-column-gap: 25px; grid-row-gap: 25px; }
The same layout using the shorthand: .container { display: grid; grid-template-columns: 100px 200px; grid-template-rows: 100px 50px; grid-gap: 25px; }
Spanning items across multiple columns and/or rows Every cell item has the option to occupy more than just one box in the row, and expand horizontally or vertically to get more space, while respecting the grid proportions set in the container. Those are the properties we'll use for that: grid-column-start grid-column-end grid-row-start grid-row-end
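A minimal sketch of an item spanning two columns and two rows (the class name and grid lines are assumptions):

.featured-item {
  /* occupy columns 1 and 2 (from grid line 1 to grid line 3) */
  grid-column-start: 1;
  grid-column-end: 3;
  /* occupy rows 1 and 2 */
  grid-row-start: 1;
  grid-row-end: 3;
}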
More grid configuration Using fractions Specifying the exact width of each column or row is not ideal in every case. A fraction is a unit of space. The following example divides a grid into 3 columns with the same width, 1/3 of the available space each. .container { grid-template-columns: 1fr 1fr 1fr; }
Using percentages and rem You can also use percentages, and mix and match fractions, pixels, rem and percentages: .container { grid-template-columns: 3rem 15% 1fr 2fr }
Using repeat()
repeat() is a special function that takes a number that indicates the number of times a
row/column will be repeated, and the length of each one. If every column has the same width you can specify the layout using this syntax: .container { grid-template-columns: repeat(4, 100px); }
This creates 4 columns with the same width. Or using fractions: .container { grid-template-columns: repeat(4, 1fr); }
Specify a minimum width for a row Common use case: Have a sidebar that never collapses more than a certain amount of pixels when you resize the window. Here's an example where the sidebar takes 1/4 of the screen and never takes less than 200px: .container { grid-template-columns: minmax(200px, 3fr) 9fr; }
You can also set just a maximum value using the auto keyword: .container { grid-template-columns: minmax(auto, 50%) 9fr; }
or just a minimum value: .container { grid-template-columns: minmax(100px, auto) 9fr; }
Positioning elements using grid-template-areas By default elements are positioned in the grid using their order in the HTML structure.
Using grid-template-areas You can define template areas to move items around in the grid, and also to span an item over multiple rows / columns instead of using grid-column . The header/sidebar/content/footer example below shows how.
Fill a page with a grid You can make a grid extend to fill the page using fr : .container { display: grid; height: 100vh; grid-template-columns: 1fr 1fr 1fr 1fr; grid-template-rows: 1fr 1fr; }
An example: header, sidebar, content and footer Here is a simple example of using CSS Grid to create a site layout that provides a header on top, a main part with sidebar on the left and content on the right, and a footer afterwards.
I added some colors to make it prettier, but basically it assigns to every different tag a grid-area name, which is used in the grid-template-areas property in .wrapper .
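A minimal sketch of what that wrapper CSS might look like (selectors and sizes are assumptions, not the original snippet):

.wrapper {
  display: grid;
  grid-gap: 10px;
  grid-template-columns: 1fr 3fr;
  grid-template-areas:
    "header  header"
    "sidebar content"
    "footer  footer";
}
header  { grid-area: header; }
aside   { grid-area: sidebar; }
article { grid-area: content; }
footer  { grid-area: footer; }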
When the layout is smaller we can put the sidebar below the content using a media query: @media (max-width: 500px) { .wrapper { grid-template-columns: 4fr; grid-template-areas: "header" "content" "sidebar" "footer"; } }
See on CodePen
Wrapping up These are the basics of CSS Grid. There are many things I didn't include in this introduction but I wanted to make it very simple, to start using this new layout system without making it feel overwhelming.
Flexbox
Flexbox Flexbox, also called Flexible Box Module, is one of the two modern layouts systems, along with CSS Grid
Introduction Browser support Enable Flexbox Container properties Align rows or columns Vertical and horizontal alignment Change the horizontal alignment Change the vertical alignment A note on baseline Wrap Properties that apply to each single item Moving items before / after another one using order Vertical alignment using align-self Grow or shrink an item if necessary flex-grow flex-shrink flex-basis
flex
Introduction Flexbox, also called Flexible Box Module, is one of the two modern layouts systems, along with CSS Grid. Compared to CSS Grid (which is bi-dimensional), flexbox is a one-dimensional layout model. It will control the layout based on a row or on a column, but not together at the same time. The main goal of flexbox is to allow items to fill the whole space offered by their container, depending on some rules you set. Unless you need to support old browsers like IE8 and IE9, Flexbox is the tool that lets you forget about using Table layouts Floats clearfix hacks display: table hacks
Let's dive into flexbox and become a master of it in a very short time.
Browser support At the time of writing (Feb 2018), it's supported by 97.66% of the users, and all the most important browsers have implemented it for years, so even older browsers (including IE10+) are covered:
While we must wait a few years for users to catch up on CSS Grid, Flexbox is an older technology and can be used right now.
Enable Flexbox A flexbox layout is applied to a container, by setting display: flex;
or display: inline-flex;
the content inside the container will be aligned using flexbox.
Container properties
Some flexbox properties apply to the container, which sets the general rules for its items. They are flex-direction justify-content align-items flex-wrap flex-flow
Align rows or columns The first property we see, flex-direction , determines if the container should align its items as rows, or as columns:
flex-direction: row places items as a row, in the text direction (left-to-right for western countries)
flex-direction: row-reverse places items just like row but in the opposite direction
flex-direction: column places items in a column, ordering top to bottom
flex-direction: column-reverse places items in a column, just like column but in the opposite direction
Vertical and horizontal alignment By default items start from the left if flex-direction is row, and from the top if flex-direction is column.
You can change this behavior using justify-content to change the horizontal alignment, and align-items to change the vertical alignment.
Change the horizontal alignment justify-content has 5 possible values:
flex-start : align to the left side of the container. flex-end : align to the right side of the container. center : align at the center of the container. space-between : display with equal spacing between them. space-around : display with equal spacing around them
Change the vertical alignment align-items has 5 possible values:
flex-start : align to the top of the container. flex-end : align to the bottom of the container. center : align at the vertical center of the container. baseline : display at the baseline of the container. stretch : items are stretched to fit the container.
A note on baseline baseline looks similar to flex-start in this example, due to my boxes being too simple.
Check out this Codepen to have a more useful example, which I forked from a Pen originally created by Martin Michálek. As you can see there, items dimensions are aligned.
Wrap By default items in a flexbox container are kept on a single line, shrinking them to fit in the container. To force the items to spread across multiple lines, use flex-wrap: wrap . This will distribute the items according to the order set in flex-direction . Use flex-wrap: wrap-reverse to reverse this order. A shorthand property called flex-flow allows you to specify flex-direction and flex-wrap in a single line, by adding the flex-direction value first, followed by flex-wrap value, for example: flex-flow: row wrap .
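A minimal sketch of a wrapping container (the class name is an assumption):

.container {
  display: flex;
  /* shorthand for flex-direction: row; flex-wrap: wrap; */
  flex-flow: row wrap;
}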
Properties that apply to each single item Since now, we've seen the properties you can apply to the container. Single items can have a certain amount of independence and flexibility, and you can alter their appearance using those properties: order align-self flex-grow flex-shrink flex-basis flex
Let's see them in detail.
Moving items before / after another one using order Items are positioned based on the order value they are assigned. By default every item has order 0, and the order of appearance in the HTML determines the final order.
You can override this property using order on each separate item. This is a property you set on the item, not the container. You can make an item appear before all the others by setting a negative value.
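A minimal sketch (the class names are assumptions): the .first item is rendered before its siblings even if it comes later in the HTML:

.item  { order: 0; }  /* the default */
.first { order: -1; } /* negative value: appears before all the others */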
Vertical alignment using align-self An item can choose to override the container align-items setting, using align-self , which has the same 5 possible values of align-items : flex-start : align to the top of the container. flex-end : align to the bottom of the container. center : align at the vertical center of the container. baseline : display at the baseline of the container. stretch : items are stretched to fit the container.
Grow or shrink an item if necessary flex-grow The default for any item is 0. If all items are defined as 1 and one is defined as 2, the bigger element will take the space of two "1" items. flex-shrink The default for any item is 1. If all items are defined as 1 and one is defined as 3, the bigger element will shrink 3x the other ones. When less space is available, it will take 3x less space. flex-basis If set to auto , it sizes an item according to its width or height, and adds extra space based on the flex-grow property. If set to 0, it does not add any extra space for the item when calculating the layout. If you specify a pixel number value, it will use that as the length value (width or height depends if it's a row or a column item) flex
This property combines the above 3 properties: flex-grow flex-shrink flex-basis
and provides a shorthand syntax: flex: 0 1 auto
CSS Custom Properties
CSS Custom Properties Discover CSS Custom Properties, also called CSS Variables, a powerful new feature of modern browsers that help you write better CSS
Introduction The basics of using variables Create variables inside any element Variables scope Interacting with a CSS Variable value using JavaScript Handling invalid values Browser support CSS Variables are case sensitive Math in CSS Variables Media queries with CSS Variables Setting a fallback value for var()
Introduction
In the last few years CSS preprocessors had a lot of success. It was very common for greenfield projects to start with Less or Sass. And it's still a very popular technology. The main benefits of those technologies are, in my opinion: they allow nesting selectors, they provide easy imports functionality, and they give you variables. Modern CSS has a new powerful feature called CSS Custom Properties, also commonly known as CSS Variables. CSS is not a programming language like JavaScript, Python, PHP, Ruby or Go where variables are key to doing something useful. CSS is very limited in what it can do, and it's mainly a declarative syntax to tell browsers how they should display an HTML page. But a variable is a variable: a name that refers to a value, and variables in CSS help reduce repetition and inconsistencies in your CSS, by centralizing the values definition. And they introduce a unique feature that CSS preprocessors will never have: you can access and change the value of a CSS Variable programmatically using JavaScript.
The basics of using variables A CSS Variable is defined with a special syntax, prepending two dashes to a name ( --variable-name ), then a colon and a value. Like this:
:root { --primary-color: yellow; }
(more on :root later) You can access the variable value using var() : p { color: var(--primary-color) }
The variable value can be any valid CSS value, for example: :root { --default-padding: 30px 30px 20px 20px; --default-color: red; --default-background: #fff;
}
Create variables inside any element CSS Variables can be defined inside any element. Some examples: :root { --default-color: red; } body { --default-color: red; } main { --default-color: red; } p { --default-color: red; } span { --default-color: red; } a:hover { --default-color: red; }
What changes in those different examples is the scope.
Variables scope Adding variables to a selector makes them available to all the children of it. In the example above you saw the use of :root when defining a CSS variable: :root { --primary-color: yellow; }
:root is a CSS pseudo-class that identifies the document, so adding a variable to :root
makes it available to all the elements in the page.
It's just like targeting the html element, except that :root has higher specificity (takes priority). If you add a variable inside a .container selector, it's only going to be available to children of .container :
.container { --secondary-color: yellow; }
and using it outside of this element is not going to work. Variables can be reassigned: :root { --primary-color: yellow; } .container { --primary-color: blue; }
Outside .container , --primary-color will be yellow, but inside it will be blue. You can also assign or overwrite a variable inside the HTML using inline styles:
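For example, something along these lines (the element and value are just an illustration):

<main style="--primary-color: orange">
  <!-- --primary-color resolves to orange for everything inside main -->
</main>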
CSS Variables follow the normal CSS cascading rules, with precedence set according to specificity
Interacting with a CSS Variable value using JavaScript The coolest thing with CSS Variables is the ability to access and edit them using JavaScript. Here's how you set a variable value using plain JavaScript: const element = document.getElementById('my-element') element.style.setProperty('--variable-name', 'a-value')
This code below can be used to access a variable value instead, in case the variable is defined on :root : const styles = getComputedStyle(document.documentElement) const value = String(styles.getPropertyValue('--variable-name')).trim()
Or, to get the style applied to a specific element, in case of variables set with a different scope: const element = document.getElementById('my-element') const styles = getComputedStyle(element) const value = String(styles.getPropertyValue('--variable-name')).trim()
Handling invalid values If a variable is assigned to a property which does not accept the variable value, it's considered invalid. For example you might pass a pixel value to a position property, or a rem value to a color property. In this case the line is considered invalid and ignored.
Browser support Browser support for CSS Variables is very good, according to Can I Use. CSS Variables are here to stay, and you can use them today if you don't need to support Internet Explorer and old versions of the other browsers. If you need to support older browsers you can use libraries like PostCSS or Myth, but you'll lose the ability to interact with variables via JavaScript or the Browser Developer Tools, as they are transpiled to good old variable-less CSS (and as such, you lose most of the power of CSS Variables).
CSS Variables are case sensitive This variable: --width: 100px;
is different than:
--Width: 100px;
Math in CSS Variables To do math in CSS Variables, you need to use calc() , for example: :root { --default-left-padding: calc(10px * 2); }
Media queries with CSS Variables Nothing special here. CSS Variables work normally inside media queries, as long as the declaration sits inside a selector:
body {
  --width: 500px;
}
@media screen and (max-width: 1000px) and (min-width: 700px) {
  body {
    --width: 800px;
  }
}
.container {
  width: var(--width);
}
Setting a fallback value for var() var() accepts a second parameter, which is the default fallback value when the variable
value is not set: .container { margin: var(--default-margin, 30px); }
PostCSS
PostCSS Discover PostCSS, a great tool to help you write modern CSS. PostCSS is a very popular tool that allows developers to write CSS pre-processors or postprocessors
Introduction Why it's so popular Install the PostCSS CLI Most popular PostCSS plugins Autoprefixer cssnext CSS Modules csslint cssnano Other useful plugins How is it different than Sass?
Introduction
PostCSS is a very popular tool that allows developers to write CSS pre-processors or postprocessors, called plugins. There is a huge number of plugins that provide lots of functionalities, and sometimes the term "PostCSS" means the tool itself, plus the plugins ecosystem. PostCSS plugins can be run via the command line, but they are generally invoked by task runners at build time. The plugin-based architecture provides a common ground for all the CSS-related operations you need. Note: PostCSS despite the name is not a CSS post-processor, but it can be used to build them, as well as other things
Why it's so popular PostCSS provides several features that will deeply improve your CSS, and it integrates really well with any build tool of your choice.
Install the PostCSS CLI Using Yarn: yarn global add postcss-cli
or npm: npm install -g postcss-cli
Once this is done, the postcss command will be available in your command line. This command for example runs the autoprefixer plugin on CSS files contained in the css/ folder, and save the result in the main.css file: postcss --use autoprefixer -o main.css css/*.css
More info on the PostCSS CLI here: https://github.com/postcss/postcss-cli.
Most popular PostCSS plugins
PostCSS provides a common interface to several great tools for your CSS processing. Here are some of the most popular plugins, to get an overview of what's possible to do with PostCSS.
Autoprefixer Autoprefixer parses your CSS and determines if some rules need a vendor prefix. It does so according to the Can I Use data, so you don't have to worry if a feature needs a prefix, or if prefixes you use are now unneeded because obsolete. You get to write cleaner CSS. Example: a { display: flex; }
gets compiled to a { display: -webkit-box; display: -webkit-flex; display: -ms-flexbox; display: flex; }
cssnext https://github.com/MoOx/postcss-cssnext This plugin is the Babel of CSS, allows you to use modern CSS features while it takes care of transpiling them to a CSS more digestible to older browsers: it adds prefixes using Autoprefixer (so if you use this, no need to use Autoprefixer directly) it allows you to use CSS Variables it allows you to use nesting, like in Sass and a lot more!
CSS Modules CSS Modules let you use CSS Modules.
CSS Modules are not part of the CSS standard, but they are a build step process to have scoped selectors.
csslint Linting helps you write correct CSS and avoid errors or pitfalls. The stylelint plugin allows you to lint CSS at build time.
cssnano cssnano minifies your CSS and makes code optimizations to have the least amount of code delivered in production.
Other useful plugins On the PostCSS GitHub repo there is a full list of the available plugins. Some of the ones I like include: LostGrid is a PostCSS grid system postcss-sassy provides Sass-like mixins postcss-nested provides the ability to use Sass nested rules postcss-nested-ancestors, reference any ancestor selector in nested CSS postcss-simple-vars, use Sass-like variables PreCSS provides you many features of Sass, and this is what is most close to a complete Sass replacement
How is it different than Sass? Or any other CSS preprocessor? The main benefit PostCSS provides over CSS preprocessors like Sass or Less is the ability to choose your own path, and cherry-pick the features you need, adding new capabilities at the same time. Sass or Less are "fixed", you get lots of features which you might or might not use, and you cannot extend them. The fact that you "choose your own adventure" means that you can still use any other tool you like alongside PostCSS. You can still use Sass if this is what you want, and use PostCSS to perform other things that Sass can't do, like autoprefixing or linting. You can write your own PostCSS plugin to do anything you want.
How to center things in modern CSS
How to center things in modern CSS Centering elements with CSS has always been easy for some things, hard for others. Here is the full list of centering techniques, with modern CSS techniques as well Centering things in CSS is a task that is very different if you need to center horizontally or vertically. In this post I explain the most common scenarios and how to solve them. If a new solution is provided by Flexbox I ignore the old techniques, because we need to move forward, and Flexbox has been supported by browsers for years, IE10 included.
Center horizontally Text Text is very simple to center horizontally using the text-align property set to center : p { text-align: center; }
Blocks The modern way to center anything that is not text is to use Flexbox: #mysection { display: flex; justify-content: center; }
any element inside #mysection will be horizontally centered.
Here is the alternative approach if you don't want to use Flexbox. Anything that is not text can be centered by applying an automatic margin to left and right, and setting the width of the element: section { margin: 0 auto; width: 50%; }
the above margin: 0 auto; is a shorthand for: section { margin-top: 0; margin-bottom: 0; margin-left: auto; margin-right: auto; }
Remember to set the item to display: block if it's an inline element.
Center vertically Traditionally this has always been a difficult task. Flexbox now provides us a great way to do this in the simplest possible way: #mysection { display: flex; align-items: center; }
any element inside #mysection will be vertically centered.
Center both vertically and horizontally Flexbox techniques to center vertically and horizontally can be combined to completely center an element in the page. #mysection { display: flex; align-items: center; justify-content: center; }
The same can be done using CSS Grid: body { display: grid;
place-items: center; height: 100vh; }
The CSS margin property
The CSS margin property margin is a simple CSS property that has a shorthand syntax I keep forgetting about, so I wrote this reference post Introduction Specific margin properties Using margin with different values 1 value 2 values 3 values 4 values Values accepted Using auto to center elements
Introduction The margin CSS property is commonly used in CSS to add space around an element. Remember: margin adds space around an element border padding adds space inside an element border
Specific margin properties margin has 4 related properties that alter the margin of a single side at a time: margin-top margin-right margin-bottom margin-left
The usage of those is very simple and cannot be confused, for example: margin-left: 30px; margin-right: 3em;
Using margin with different values
margin is a shorthand to specify multiple margins at the same time, and depending on the
number of values entered, it behaves differently.
1 value Using a single value applies that to all the margins: top, right, bottom, left. margin: 20px;
2 values Using 2 values applies the first to bottom & top, and the second to left & right. margin: 20px 10px;
3 values Using 3 values applies the first to top, the second to left & right, the third to bottom. margin: 20px 10px 30px;
4 values Using 4 values applies the first to top, the second to right, the third to bottom, the fourth to left. margin: 20px 10px 5px 0px;
So, the order is top-right-bottom-left.
Values accepted margin accepts values expressed in any kind of length unit, the most common ones are px,
em, rem, but many others exist. It also accepts percentage values, and the special value auto .
Using auto to center elements
auto can be used to tell the browser to select automatically a margin, and it's most commonly
used to center an element in this way: margin: 0 auto;
As said above, using 2 values applies the first to bottom & top, and the second to left & right. The modern way to center elements is to use Flexbox, and its justify-content: center; directive. Older browsers of course do not implement Flexbox, and if you need to support them margin: 0 auto; is still a good choice.
CSS System Fonts
CSS System Fonts How to use System Fonts in CSS to improve your site and provide a better experience to your users in terms of speed and page load time
A little bit of history Today The impact of Web Fonts Enter System Fonts Popular websites use System Fonts I'm sold. Give me the code A note on system-ui Use font variations by creating @font-face rules Read more
A little bit of history For years, websites could only use fonts available on all computers, such as Georgia, Verdana, Arial, Helvetica, Times New Roman. Other fonts were not guaranteed to work on all websites.
If you wanted to use a fancy font you had to use images. In 2008 Safari and Firefox introduced the @font-face CSS rule, and online services started to provide licenses to Web Fonts. The first was Typekit in 2009, and later Google Fonts got hugely popular thanks to its free offering. @font-face was implemented in all the major browsers, and nowadays it's a given on every
reasonably recent device. If you're a young web developer you might not realize it, but in 2012 we still had articles explaining this new technology of Web Fonts.
Today You can use whatever font you wish to use, by relying on a service like Google Fonts, or providing your own font to download. You can, but should you? If you have the choice (and by this I mean, you're not implementing a design that a client gave you), you might want to think about it, in a move to go back to the basics (but in style!)
The impact of Web Fonts Everything you load on your pages has a cost. This cost is especially impactful on mobile, where every byte you require is impacting the load time, and the amount of bandwidth you make your users consume. The font must load before the content renders, so you need to wait for that resource loading to complete before the user is able to read even a single word you wrote. But Web Fonts are a way to provide an awesome user experience through good typography.
Enter System Fonts Operating Systems have great default fonts. System Fonts offer the great advantage of speed and performance, and a reduction of your web page size. But as a side effect, they make your website look very familiar to anyone looking at it, because they are used to see that same font every day on their computer or mobile device. It's effectively a native font.
And as it's the system font, it's guaranteed to look great.
Popular websites use System Fonts You might know one of these, as an example: GitHub Medium Ghost Bootstrap Booking.com ..they have been using System Fonts for years. Even the Wordpress dashboard - that runs millions of websites - uses system fonts, and Medium, which is all about reading, decided to use system fonts. If it works for them, chances are it works for you as well.
I'm sold. Give me the code This is the CSS line you should add to your website: body { font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", "Roboto", "Oxygen", "Ubuntu", "Helvetica Neue", Arial, sans-serif; }
The browser will interpret all these font names, and starting from the first it will check if it's available. Safari and Firefox on macOS "intercept" -apple-system , which means the San Francisco font on newer versions, Helvetica Neue and Lucida Grande on older versions. Chrome works with BlinkMacSystemFont, which defaults to the OS font (again, San Francisco on macOS). Segoe UI is used in modern Windows systems and Windows Phone, Tahoma in Windows XP, Roboto in Android, and so on targeting other platforms. Arial and sans-serif are the fallback fonts. If you use Emojis in your site, make sure you load the symbol fonts as well:
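That usually means appending the emoji and symbol font families at the end of the stack, something like this (a sketch, the exact list may vary):

body {
  font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", "Roboto", "Oxygen",
    "Ubuntu", "Helvetica Neue", Arial, sans-serif,
    "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol";
}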
You might want to change the order of the font appearance based on your website usage statistics.
A note on system-ui Maybe you will see system-ui mentioned in System Fonts posts online, but at the moment it's known to cause issues in Windows (see https://infinnie.github.io/blog/2017/systemui.html and https://github.com/twbs/bootstrap/pull/22377) There is work being done towards standardizing system-ui as a generic font family, so in the future you will just write body { font-family: system-ui; }
See https://www.chromestatus.com/feature/5640395337760768 and https://caniuse.com/#feat=font-family-system-ui to keep an eye on the progress. Chrome, Safari already support it, Firefox partially, while Edge does not yet implement it.
Use font variations by creating @font-face rules The approach described above works great until you need to alter the font on a second element, and maybe even on more than one. Maybe you want to specify the italic as a font property rather than in font-style , or set a specific font weight. This nice project by Jonathan Neal https://jonathantneal.github.io/system-font-css/ lets you use System Fonts by simply importing a module, and you can set body { font-family: system-ui; }
This system-ui is defined in https://github.com/jonathantneal/system-font-css/blob/ghpages/system-font.css.
You are now able to use different font variations by referencing: .special-text { font: italic 300 system-ui; } p { font: 400 system-ui; }
Read more https://css-tricks.com/snippets/css/system-font-stack/ https://www.smashingmagazine.com/2015/11/using-system-ui-fonts-practical-guide/ https://medium.design/system-shock-6b1dc6d6596f
Style CSS for print A few tips on printing from the browser to the printer or to a PDF document using CSS Print CSS CSS @media print Links Page margins Page breaks Avoid breaking images in the middle PDF Size Debug the printing presentation
Even though we increasingly stare at our screens, printing is still a thing. Even with blog posts. I remember one time back in 2009 I met a person who told me he made his personal assistant print every blog post I published (yes, I stared blankly for a little bit). Definitely unexpected. My main use case for looking into printing is usually printing to a PDF: I might create something inside the browser and want to make it available as a PDF. Browsers make this very easy, with Chrome defaulting to "Save" when trying to print a document while no printer is available, and Safari offering a dedicated button in the menu bar:
Print CSS Some common things you might want to do when printing is to hide some parts of the document, maybe the footer, something in the header, the sidebar. Maybe you want to use a different font for printing, which is totally legit. If you have a large CSS for print, you'd better use a separate file for it. Browsers will only download it when printing:
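For example, a link tag like this (the print.css filename is just an example):

<link rel="stylesheet" href="print.css" media="print">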
CSS @media print An alternative to the previous approach is media queries. Anything you add inside this block: @media print { /* ... */ }
is going to be applied only to printed documents.
Links HTML is great because of links. It's called HyperText for a good reason. When printing we might lose a lot of information, depending on the content. CSS offers a great way to solve this problem by editing the content, appending the link after the tag text, using: @media print { a[href*='//']:after { content:" (" attr(href) ") "; color: $primary; } }
I target a[href*='//'] to only do this for external links. I might have internal links for navigation and internal indexing purposes, which would be useless in most of my use cases. If you also want internal links to be printed, just do: @media print { a:after { content:" (" attr(href) ") "; color: $primary; } }
Page margins You can add margins to every single page. cm or in is a good unit for paper printing. @page { margin-top: 2cm; margin-bottom: 2cm; margin-left: 2cm; margin-right: 2cm; }
@page can also be used to target only the first page, using @page :first , or only the left and right pages, using @page :left and @page :right .
Page breaks
You might want to add a page break after some elements, or before them. Use page-break-after and page-break-before :
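For example, something like this (the class name is just an example):

.chapter {
  page-break-after: always;
}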
Avoid breaking images in the middle I experienced this with Firefox: images by default are cut in the middle, and continue on the next page. It might also happen to text. Use p { page-break-inside: avoid; }
and wrap your images in a p tag. Targeting img directly didn't work in my tests. This applies to other content as well, not just images: if you notice something is cut where you don't want it to be, use this property.
PDF Size Trying to print a 400+ page PDF with images from Chrome initially generated a 100MB+ file, although the total size of the images was not nearly that big. I tried with Firefox and Safari, and the size was less than 10MB. After a few experiments it turned out Chrome has 3 ways to print an HTML page to PDF: ❌ Don't print it using the System Dialogue ❌ Don't click "Open PDF in Preview" ✅ Instead, click the "Save" button that appears in the Chrome Print dialogue
This generates a PDF much quicker than with the other 2 ways, and with a much, much smaller size.
Debug the printing presentation The Chrome DevTools offer ways to emulate the print layout:
Once the panel opens, change the rendering emulation to print :
CSS Transitions CSS Transitions are the simplest way to create an animation in CSS. In a transition, you change the value of a property, and you tell CSS to slowly change it according to some parameters, towards a final state.
Introduction to CSS Transitions Example of a CSS Transition Transition timing function values CSS Transitions in Browser DevTools Which Properties you can Animate using CSS Animations
Introduction to CSS Transitions CSS Transitions are the simplest way to create an animation in CSS. In a transition, you change the value of a property, and you tell CSS to slowly change it according to some parameters, towards a final state. CSS Transitions are defined by these properties:
transition-property : the CSS property that should transition
transition-duration : the duration of the transition
transition-timing-function : the timing function used by the animation (common values: linear, ease). Default: ease
transition-delay : optional number of seconds to wait before starting the animation
The transition property is a handy shorthand: .container { transition: property duration timing-function delay; }
Example of a CSS Transition This code implements a CSS Transition: .one, .three { background: rgba(142, 92, 205, .75); transition: background 1s ease-in; } .two, .four { background: rgba(236, 252, 100, .75); } .circle:hover { background: rgba(142, 92, 205, .25); /* lighter */ }
See the example on Glitch https://flavio-css-transitions-example.glitch.me When hovering the .one and .three elements, the purple circles, there is a transition animation that eases the change of background, while the yellow circles do not have one, because they do not have the transition property defined.
Transition timing function values transition-timing-function allows you to specify the acceleration curve of the transition.
There are some simple values you can use:
linear
ease
ease-in
ease-out
ease-in-out
This Glitch shows how those work in practice.
You can create a completely custom timing function using cubic bezier curves. This is rather advanced, but basically any of the functions above is built using bezier curves; the named values are just handy shortcuts for the most common ones.
CSS Transitions in Browser DevTools The Browser DevTools offer a great way to visualize transitions. This is Chrome:
This is Firefox:
From those panels you can live edit the transition and experiment in the page directly without reloading your code.
Which Properties you can Animate using CSS Animations A lot! They are the same ones you can animate using CSS Transitions, too. Here's the full list: Property background background-color background-position background-size border border-color border-width
CSS Animations CSS Animations are a great way to create visual animations, not limited to a single movement like CSS Transitions, but much more articulated. An animation is applied to an element using the `animation` property
Introduction A CSS Animations Example The CSS animation properties JavaScript events for CSS Animations Which Properties You Can Animate using CSS Animations
Introduction An animation is applied to an element using the animation property. .container { animation: spin 10s linear infinite; }
spin is the name of the animation, which we need to define separately. We also tell CSS to make the animation last 10 seconds, perform it in a linear way (no acceleration or any difference in its speed) and to repeat it infinitely. You must define how your animation works using keyframes. Example of an animation that rotates an item:
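For example, the same spin keyframes used in the example later in this chapter:

@keyframes spin {
  0% {
    transform: rotateZ(0);
  }
  100% {
    transform: rotateZ(360deg);
  }
}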
Inside the @keyframes definition you can have as many intermediate waypoints as you want. In this case we instruct CSS to make the transform property rotate the Z axis from 0 to 360 degrees, completing the full loop. You can use any CSS transform here. Notice how this does not dictate anything about the temporal interval the animation should take. This is defined when you use it via animation .
A CSS Animations Example I want to draw four circles, all with a starting point in common, all 90 degrees distant from each other.
You can see them in this Glitch: https://flavio-css-circles.glitch.me
Let's make this structure (all the circles together) rotate. To do this, we apply an animation on the container, and we define that animation as a 360 degrees rotation: @keyframes spin { 0% { transform: rotateZ(0); } 100% { transform: rotateZ(360deg); } } .container { animation: spin 10s linear infinite; }
See it on https://flavio-css-animations-tutorial.glitch.me
You can add more keyframes to have funnier animations:
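For example, something like this (the intermediate waypoints here are just an illustration, not the exact snippet from the Glitch):

@keyframes spin {
  0% {
    transform: rotateZ(0);
  }
  25% {
    transform: rotateZ(90deg);
  }
  50% {
    transform: rotateZ(180deg);
  }
  75% {
    transform: rotateZ(270deg);
  }
  100% {
    transform: rotateZ(360deg);
  }
}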
See the example on https://flavio-css-animations-four-steps.glitch.me
The CSS animation properties CSS animations offer a lot of different parameters you can tweak:
animation-name : the name of the animation; it references an animation created using @keyframes
animation-duration : how long the animation should last, in seconds
animation-timing-function : the timing function used by the animation (common values: linear , ease ). Default: ease
animation-delay : optional number of seconds to wait before starting the animation
animation-iteration-count : how many times the animation should be performed. Expects a number, or infinite . Default: 1
animation-direction : the direction of the animation. Can be normal , reverse , alternate or alternate-reverse . In the last 2, it alternates going forward and then backwards
animation-fill-mode : defines how to style the element when the animation ends, after it finishes its iteration count number. none or backwards go back to the first keyframe styles. forwards and both use the style that's set in the last keyframe
animation-play-state : if set to paused , it pauses the animation. Default is running
The animation property is a shorthand for all these properties, in this order: .container { animation: name duration timing-function delay iteration-count direction fill-mode play-state; }
This is the example we used above: .container { animation: spin 10s linear infinite; }
JavaScript events for CSS Animations Using JavaScript you can listen for the following events: animationstart animationend animationiteration
Be careful with animationstart , because if the animation starts on page load, your JavaScript code is always executed after the CSS has been processed, so the animation is already started and you cannot intercept the event. const container = document.querySelector('.container') container.addEventListener('animationstart', (e) => { //do something }, false) container.addEventListener('animationend', (e) => { //do something }, false) container.addEventListener('animationiteration', (e) => { //do something }, false)
Which Properties You Can Animate using CSS Animations
A lot! They are the same you can animate using CSS Transitions, too. Here's the full list: Property background background-color background-position background-size border border-color border-width border-bottom border-bottom-color border-bottom-left-radius border-bottom-right-radius border-bottom-width border-left border-left-color border-left-width border-radius border-right border-right-color border-right-width border-spacing border-top border-top-color border-top-left-radius border-top-right-radius border-top-width bottom box-shadow caret-color clip
right tab-size text-decoration text-decoration-color text-indent text-shadow top transform vertical-align visibility width word-spacing z-index
Web Platform
The DOM DOM stands for Document Object Model, a representation of an HTML document in nodes and objects. Browsers expose an API that you can use to interact with the DOM. That's how modern JavaScript frameworks work, they use the DOM API to tell the browser what to display on the page
The Window object Properties Methods The Document object Types of Nodes Traversing the DOM Getting the parent Getting the children Getting the siblings Editing the DOM The DOM is the browser internal representation of a web page. When the browser retrieves your HTML from your server, the parser analyzes the structure of your code, and creates a model of it. Based on this model, the browser then renders the page on the screen. Browsers expose an API that you can use to interact with the DOM. That's how modern JavaScript frameworks work, they use the DOM API to tell the browser what to display on the page. In Single Page Applications, the DOM continuously changes to reflect what appears on the screen, and as a developer you can inspect it using the Browser Developer Tools.
The DOM is language-agnostic, and the de-facto standard to access the DOM is by using JavaScript, since it's the only language that browsers can run. The DOM is standardized by WHATWG in the DOM Living Standard Spec. With JavaScript you can interact with the DOM to: inspect the page structure access the page metadata and headers edit the CSS styling attach or remove event listeners edit any node in the page change any node attribute and much more. The main 2 objects provided by the DOM API, the ones you will interact the most with, are document and window .
The Window object The window object represents the window that contains the DOM document. window.document points to the document object loaded in the window.
Properties and methods of this object can be called without referencing window explicitly, because it represents the global object. So, the previous property window.document is usually called just document .
Properties Here is a list of useful properties you will likely reference a lot: console points to the browser debugging console. Useful to print error messages or
logging, using console.log , console.error and other tools (see the Browser DevTools article) document as already said, points to the document object, key to the DOM interactions you
will perform history gives access to the History API location gives access to the Location interface, from which you can determine the URL,
the protocol, the hash and other useful information. localStorage is a reference to the Web Storage API localStorage object sessionStorage is a reference to the Web Storage API sessionStorage object
Methods The window object also exposes useful methods: alert() : which you can use to display alert dialogs postMessage() : used by the Channel Messaging API requestAnimationFrame() : used to perform animations in a way that's both performant and
easy on the CPU setInterval() : call a function every n milliseconds, until the interval is cleared with clearInterval() clearInterval() : clears an interval created with setInterval() setTimeout() : execute a function after n milliseconds setImmediate() : execute a function as soon as the browser is ready addEventListener() : add an event listener to the document removeEventListener() : remove an event listener from the document
See the full reference of all the properties and methods of the window object at https://developer.mozilla.org/en-US/docs/Web/API/Window
The Document object The document object represents the DOM tree loaded in a window. Here is a representation of a portion of the DOM pointing to the head and body tags:
Here is a representation of a portion of the DOM showing the head tag, containing a title tag with its value:
Here is a representation of a portion of the DOM showing the body tag, containing a link, with a value and the href attribute with its value:
The Document object can be accessed from window.document , and since window is the global object, you can use the shortcut document object directly from the browser console, or in your JavaScript code. This Document object has a ton of properties and methods. The Selectors API methods are the ones you'll likely use the most: document.getElementById() document.querySelector() document.querySelectorAll() document.getElementsByTagName() document.getElementsByClassName()
You can get the document title using document.title , and the URL using document.URL . The referrer is available in document.referrer , the domain in document.domain . From the document object you can get the body and head Element nodes: document.documentElement : the Document node document.body : the body Element node document.head : the head Element node
You can also get a list of all the element nodes of a particular type, like an HTMLCollection of all the links using document.links , all the images using document.images , all the forms using document.forms .
The document cookies are accessible in document.cookie . The last modified date in document.lastModified .
You can do much more, even get old school and fill your scripts with document.write() , a method that was used a lot back in the early days of JavaScript to interact with the pages. See the full reference of all the properties and methods of the document object at https://developer.mozilla.org/en-US/docs/Web/API/Document
Types of Nodes There are different types of nodes, some of which you already saw in the example images above. The main ones you will see are: Document: the document Node, the start of the tree Element: an HTML tag Attr: an attribute of a tag Text: the text content of an Element or Attr Node Comment: an HTML comment DocumentType: the Doctype declaration
Traversing the DOM The DOM is a tree of elements, with the Document node at the root, which points to the html Element node, which in turn points to its child element nodes head and body , and so on. From each of those elements, you can navigate the DOM structure and move to different nodes.
Getting the parent Every element has one and only one parent. To get it, you can use Node.parentNode or Node.parentElement (where Node means a node in the DOM). They are almost the same, except when run on the html element: parentNode returns the parent of the specified node in the DOM tree, while parentElement returns the DOM node's parent Element, or null if the node either has no parent, or its parent isn't a DOM Element. Most people simply use parentNode .
Getting the children To check if a Node has child nodes, use Node.hasChildNodes() which returns a boolean value. To access all the children of a node, use Node.childNodes . Note that childNodes does not just include Element nodes: it also includes the white space between elements as Text nodes, which is not something you generally want. To get only the Element node children, use Node.children instead.
To get the first child Element Node, use Node.firstElementChild , and to get the last child Element Node, use Node.lastElementChild :
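A quick sketch you could try in the browser console (the ul element here is hypothetical):

const list = document.querySelector('ul')
console.log(list.firstElementChild) // the first li Element node
console.log(list.lastElementChild)  // the last li Element node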
The DOM also exposes Node.firstChild and Node.lastChild , with the difference that they do not "filter" the tree for Element nodes only, and they will also show empty Text nodes that indicate white space. In short, to navigate children Element Nodes use Node.children Node.firstElementChild Node.lastElementChild
Getting the siblings In addition to getting the parent and the children, since the DOM is a tree you can also get the siblings of any Element Node. You can do so using Node.previousElementSibling Node.nextElementSibling
The DOM also exposes previousSibling and nextSibling , but like their counterparts described above, they include white space as Text nodes, so you'll generally avoid them.
Editing the DOM The DOM offers various methods to edit the nodes of the page and alter the document tree. With document.createElement() : creates a new Element Node document.createTextNode() : creates a new Text Node
you can create new elements, and add them to the DOM elements you want, as children, by using appendChild() : const div = document.createElement('div') div.appendChild(document.createTextNode('Hello world!'))
first.removeChild(second) removes the child node "second" from the node "first". parentNode.insertBefore(newNode, existingNode) is called on the parent node and inserts "newNode" as a sibling of "existingNode", placing it before that in the DOM tree structure. element.appendChild(newChild) alters the tree under "element", adding a new child Node
"newChild" to it, after all the other children. element.prepend(newChild) alters the tree under "element", adding a new child Node
"newChild" to it, before other child elements. You can pass one or more child Nodes, or even a string which will be interpreted as a Text node. element.replaceChild(existingChild, newChild) alters the tree under "element", replacing
"existingChild" with a new Node "newChild". element.insertAdjacentElement(position, newElement) inserts "newElement" in the DOM,
positioned relatively to "element" depending on "position" parameter value. See the possible values. element.textContent = 'something' changes the content of a Text node to "something".
Progressive Web Apps A Progressive Web App is an app that can provide additional features based on the device support, including offline capabilities, push notifications and almost native app look and speed, and local caching of resources Introduction What is a Progressive Web App Progressive Web Apps alternatives Native Mobile Apps Hybrid Apps Apps built with React Native Progressive Web Apps features Features Benefits Core concepts Service Workers The App Manifest Example The App Shell Caching
Introduction Progressive Web Apps (PWA) are the latest trend in mobile application development using web technologies. At the time of writing (March 2018) they work on Android, and on iOS devices with iOS 11.3 or higher and macOS 10.13.4 or higher. PWA is a term that identifies a bundle of techniques that have the goal of creating a better experience for web-based apps.
What is a Progressive Web App A Progressive Web App is an app that can provide additional features based on the device support, providing offline capability, push notifications and almost native app look and speed, and local caching of resources. This technique was originally introduced by Google in 2015, and proves to bring many advantages to both the developer and the users.
Developers have access to building almost-first-class applications using a web stack, which is always considerably easier and cheaper than building native applications, especially when considering the implications of building and maintaining cross-platform apps. Devs can benefit from reduced installation friction, at a time when having an app in the store does not actually bring anything in terms of discoverability for 99.99% of the apps, and Google search can provide the same benefits if not more. A Progressive Web App is a website which is developed with certain technologies that make the mobile experience much more pleasant than a normal mobile-optimized website, to the point that it almost works like a native app, as it offers the following features: Offline support Loads fast Is secure Is capable of emitting push notifications Has an immersive, full-screen user experience without the URL bar Mobile platforms (Android at the time of writing, but it's not technically limited to that) offer increasing support for Progressive Web Apps, to the point of asking the user to add the app to the home screen when they detect that a site the user is visiting is a PWA. But first, a little clarification on the name. Progressive Web App can be a confusing term, and a good definition is: web apps that take advantage of modern browser features (like web workers and the web app manifest) to let mobile devices "upgrade" the app to the role of a first-class citizen app.
Progressive Web Apps alternatives How does a PWA stand compared to the alternatives when it comes to building a mobile experience? Let's focus on the pros and cons of each, and let's see where PWAs are a good fit.
Native Mobile Apps Native mobile apps are the most obvious way to build a mobile app. Objective-C or Swift on iOS, Java / Kotlin on Android and C# on Windows Phone. Each platform has its own UI and UX conventions, and the native widgets provide the experience that the user expects. They can be deployed and distributed through the platform App Store.
The main pain point with native apps is that cross-platform development requires learning, mastering and keeping up to date with many different methodologies and best practices. If, for example, you have a small team or you're even a solo developer building an app on 3 platforms, you need to spend a lot of time learning not just the technology but also the environment, managing different libraries, and using different workflows (for example, iCloud only works on iOS devices, there's no Android version).
Hybrid Apps Hybrid applications are built using Web Technologies, but deployed to the App Store. In the middle sits a framework or some way to package the application so it's possible to send it for review to the traditional App Store. The most common platforms are PhoneGap, Xamarin, Ionic Framework, and many others, and usually what you see on the page is a WebView that essentially loads a local website. The key aspect of Hybrid Apps is the write once, run anywhere concept: the platform-specific code is generated at build time, and you're building apps using JavaScript, HTML and CSS, which is amazing, and the device capabilities (microphone, camera, network, GPS...) are exposed through JavaScript APIs. The bad part of building hybrid apps is that unless you do a great job, you might settle on providing a common denominator, effectively creating an app that's sub-optimal on all platforms because the app ignores the platform-specific human-computer interaction guidelines. Also, performance for complex views might suffer.
Apps built with React Native React Native exposes the native controls of the mobile device through a JavaScript API, but you're effectively creating a native application, not embedding a website inside a WebView. Their motto, to distinguish this approach from hybrid apps, is learn once, write anywhere, meaning that the approach is the same across platforms, but you're going to create completely separate apps in order to provide a great experience on each platform. Performance is comparable to native apps, since what you build is essentially a native app, which is distributed through the App Store.
Progressive Web Apps features
In the last section you saw the main competitors of Progressive Web Apps. So how do PWAs compare to them, and what are their main features? Remember, currently Progressive Web Apps are Android-only.
Features Progressive Web Apps have one thing that separates them completely from the above approaches: they are not deployed to the app store. This is a key advantage, since the app store is beneficial if you have the reach and luck to be featured, which can make your app go viral, but unless you're in the 0.001% you're not going to get many benefits from having your little place on the App Store. Progressive Web Apps are discoverable using Search Engines, and when a user gets to your site which has PWA capabilities, the browser in combination with the device asks the user if they want to install the app to the home screen. This is huge because regular SEO can apply to your PWA, leading to much less reliance on paid acquisition. Not being in the App Store means you don't need Apple or Google approval to be in your users' pockets, and you can release updates when you want, without having to go through the standard approval process typical of iOS apps. PWAs are basically HTML5 applications / responsive websites on steroids, with some key technologies that were recently introduced that make some of the key features possible. If you remember, the original iPhone came without the option to develop native apps, and developers were told to develop HTML5 mobile apps that could be installed to the home screen, but the tech back then was not ready for this. Progressive Web Apps run offline. The use of service workers allows the app to always have fresh content, download it in the background, and provide support for push notifications for greater re-engagement opportunities. Also, shareability makes for a much nicer experience for users that want to share your app, as they just need a URL.
Benefits So why should users and developers care about Progressive Web Apps? 1. PWAs are lighter. Native Apps can weigh 200MB or more, while a PWA could be in the range of the KBs. 2. No native platform code
3. Lower cost of acquisition (it's much harder to convince a user to install an app than to visit a website to get the first-time experience) 4. Significantly less effort is needed to build and release updates 5. Much better support for deep links than regular app-store apps
Core concepts Responsive: the UI adapts to the device screen size App-like feel: it doesn't feel like a website, but rather as an app as much as possible Offline support: it will use the device storage to provide an offline experience Installable: the device browser prompts the user to install your app Re-engaging: push notifications help users re-discover your app once installed Discoverable: search engines and SEO optimization can provide a lot more users than the app store Fresh: the app updates itself and the content once online Safe: uses HTTPS Progressive: it will work on any device, even older ones, even if with fewer features (e.g. just as a website, not installable) Linkable: easy to point to it, using URLs
Service Workers Part of the Progressive Web App definition is that it must work offline. Since the thing that allows the web app to work offline is the Service Worker, this implies that Service Workers are a mandatory part of a Progressive Web App. See http://caniuse.com/#feat=serviceworkers for updated data on browser support. TIP: Don't confuse Service Workers with Web Workers. They are a completely different thing. A Service Worker is a JavaScript file that acts as a middleman between the web app and the network. Because of this it can provide cache services, speed up the app rendering and improve the user experience. For security reasons, only HTTPS sites can make use of Service Workers, and this is part of the reason why a Progressive Web App must be served through HTTPS. Service Workers are not available on the device the first time the user visits the app. What happens is that on the first visit the service worker is installed, and then subsequent visits to separate pages of the site will invoke this Service Worker.
Check out the complete guide to Service Workers
The App Manifest The App Manifest is a JSON file that you can use to provide device information about your Progressive Web App. You add a link to the manifest in the header of all your web site pages:
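For example, a link tag like this (the manifest filename is just an example):

<link rel="manifest" href="/manifest.json">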
This file will tell the device how to set: The name and short name of the app The icons locations, in various sizes The starting URL, relative to the domain The default orientation The splash screen
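As a sketch, a manifest covering those fields might look like this (all values here are hypothetical):

{
  "name": "My Sample PWA",
  "short_name": "SamplePWA",
  "start_url": "/",
  "display": "standalone",
  "orientation": "portrait",
  "background_color": "#ffffff",
  "theme_color": "#3f51b5",
  "icons": [
    { "src": "/icons/icon-192x192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icons/icon-512x512.png", "sizes": "512x512", "type": "image/png" }
  ]
}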
The App Manifest is a W3C Working Draft, reachable at https://www.w3.org/TR/appmanifest/
The App Shell The App Shell is not a technology but rather a design concept aimed at loading and rendering the web app container first, and the actual content shortly after, to give the user a nice app-like impression. This is the equivalent of the Apple HIG (Human Interface Guidelines) suggestions to use a splash screen that resembles the user interface, to give a psychological hint that was found to lower the perception of the app taking a long time to load.
Caching The App Shell is cached separately from the contents, and it's set up so that retrieving the shell building blocks from the cache takes very little time. Find out more on the App Shell at https://developers.google.com/web/updates/2015/11/app-shell
Service Workers Service Workers are a key technology powering Progressive Web Applications on the mobile web. They allow caching of resources and push notifications, two of the main distinguishing features that up to now set native apps apart Introduction to Service Workers Background Processing Offline Support Precache assets during installation Caching network requests A Service Worker Lifecycle Registration Scope Installation Activation Updating a Service Worker Fetch Events Background Sync Push Events A note about console logs
Introduction to Service Workers Service Workers are at the core of Progressive Web Apps, because they allow caching of resources and push notifications, two of the main distinguishing features that up to now set native apps apart. A Service Worker is a programmable proxy between your web page and the network, providing the ability to intercept and cache network requests, effectively giving you the ability to create an offline-first experience for your app. It's a special kind of web worker, a JavaScript file associated with a web page which runs on a worker context, separate from the main thread, giving the benefit of being non-blocking - so computations can be done without sacrificing the UI responsiveness. Being on a separate thread it has no DOM access, and no access to the Local Storage API or the XHR API either; it can only communicate back to the main thread using the Channel Messaging API.
Service Workers cooperate with other recent Web APIs: Promises Fetch API Cache API And they are only available on HTTPS protocol pages, except for local requests, which do not need a secure connection, for easier testing.
Background Processing Service Workers run independently of the application they are associated with, and they can receive messages when they are not active. For example they can work: when your mobile application is in the background, not active when your mobile application is closed, so even not running in the background when the browser is closed, if the app is running in the browser The main scenarios where Service Workers are very useful are: as a caching layer to handle network requests, and cache content to be used when offline to allow push notifications A Service Worker only runs when needed, and it's stopped when not used.
Offline Support Traditionally the offline experience for web apps has been very poor. Without a network, often web mobile apps simply won't work, while native mobile apps have the ability to offer either a working version, or some kind of nice message. This is not a nice message, but this is what web pages look like in Chrome without a network connection:
Possibly the only nice thing about this is that you get to play a free game by clicking the dinosaur, but it gets boring pretty quickly.
In the recent past, the HTML5 AppCache already promised to let web apps cache resources and work offline, but its lack of flexibility and confusing behavior made it clear that it wasn't good enough for the job, failing its promises (and it has been discontinued). Service Workers are the new standard for offline caching. Which kinds of caching are possible?
Precache assets during installation Assets that are reused throughout the application, like images, CSS, JavaScript files, can be installed the first time the app is opened. This gives the base of what is called the App Shell architecture.
Caching network requests
Using the Fetch API we can edit the response coming from the server, determining if the server is not reachable and providing a response from the cache instead.
A Service Worker Lifecycle A Service Worker goes through 3 steps to be fully working: Registration Installation Activation
Registration Registration tells the browser where the service worker is, and it starts the installation in the background. Example code to register a Service Worker placed in worker.js : if ('serviceWorker' in navigator) { window.addEventListener('load', () => { navigator.serviceWorker.register('/worker.js') .then((registration) => { console.log('Service Worker registration completed with scope: ', registration.scope) }, (err) => { console.log('Service Worker registration failed', err) }) }) } else { console.log('Service Workers not supported') }
Even if this code is called multiple times, the browser will only perform the registration if the service worker is new, not registered previously, or if it has been updated.
Scope The register() call also accepts a scope parameter, which is a path that determines which part of your application can be controlled by the service worker. It defaults to all files and subfolders contained in the folder that contains the service worker file, so if you put it in the root folder, it will have control over the entire app. In a subfolder, it will only control pages accessible under that route. The example below registers the worker, by specifying the /notifications/ folder scope.
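As a sketch, passing the scope option at registration time (the file names are hypothetical):

navigator.serviceWorker.register('/worker.js', {
  scope: '/notifications/'
}).then((registration) => {
  console.log('Registered with scope', registration.scope)
})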
The / is important: in this case, the page /notifications won't trigger the Service Worker, while if the scope was { scope: '/notifications' }
it would have worked. NOTE: The service worker cannot widen its scope "up" outside of its own folder: if its file is put under /notifications , it cannot control the / path or any other path that is not under /notifications .
Installation If the browser determines that a service worker is outdated or has never been registered before, it will proceed to install it. self.addEventListener('install', (event) => { //... });
This is a good event to prepare the Service Worker to be used, by initializing a cache, and cache the App Shell and static assets using the Cache API.
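A minimal sketch of precaching inside the install event (the cache name and file list are hypothetical):

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open('app-shell-v1').then((cache) => {
      // cache the App Shell and static assets up front
      return cache.addAll([
        '/',
        '/index.html',
        '/styles.css',
        '/app.js'
      ])
    })
  )
})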
Activation The activation stage is the third step, once the service worker has been successfully registered and installed. At this point, the service worker will be able to work with new page loads. It cannot interact with pages already loaded, which means the service worker is only useful on the second time the user interacts with the app, or reloads one of the pages already open. self.addEventListener('activate', (event) => { //... });
A good use case for this event is to cleanup old caches and things associated with the old version but unused in the new version of the service worker.
Updating a Service Worker To update a Service Worker you just need to change a single byte in it, and when the registration code runs again, it will be updated. Once a Service Worker is updated, it won't become available until all pages that were loaded with the old service worker attached are closed. This ensures that nothing will break on the apps / pages already working. Refreshing the page is not enough, as the old worker is still running and has not been removed.
Fetch Events A fetch event is fired when a resource is requested on the network. This offers us the ability to look in the cache before making network requests. For example the snippet below uses the Cache API to check if the request URL was already stored in the cached responses, and return the cached response if this is the case. Otherwise, it executes the fetch request and returns it. self.addEventListener('fetch', (event) => { event.respondWith( caches.match(event.request) .then((response) => { if (response) { //entry found in cache return response } return fetch(event.request) } ) ) })
Background Sync Background sync allows outgoing connections to be deferred until the user has a working network connection.
This is key to ensure a user can use the app offline, take actions on it, and queue server-side updates for when there is a connection open, instead of showing an endless spinning wheel while trying to get a signal. navigator.serviceWorker.ready.then((swRegistration) => { return swRegistration.sync.register('event1') });
This code listens for the event in the Service Worker: self.addEventListener('sync', (event) => { if (event.tag == 'event1') { event.waitUntil(doSomething()) } })
doSomething() returns a promise. If it fails, another sync event will be scheduled to retry
automatically, until it succeeds. This also allows an app to update data from the server as soon as there is a working connection available.
Push Events Service Workers enable web apps to provide native Push Notifications to users. Push and Notifications are actually two different concepts and technologies, combined to provide what we know as Push Notifications. Push provides the mechanism that allows a server to send information to a service worker, and Notifications are the way service workers can show information to the user. Since Service Workers run even when the app is not running, they can listen for incoming push events, and either provide user notifications, or update the state of the app. Push events are initiated by a backend, through a browser push service, like the one provided by Firebase. Here is an example of how the service worker can listen for incoming push events and show a notification (the standard showNotification() call closes the handler):
self.addEventListener('push', (event) => {
  console.log('Received a push event', event)
  const options = {
    title: 'I got a message for you!',
    body: 'Here is the body of the message',
    icon: '/img/icon-192x192.png'
  }
  event.waitUntil(
    self.registration.showNotification(options.title, options)
  )
})
A note about console logs If you have any console log statement ( console.log and friends) in the Service Worker, make sure you turn on the Preserve log feature provided by the Chrome DevTools, or equivalent. Otherwise, since the service worker acts before the page is loaded, and the console is cleared before loading the page, you won't see any log in the console.
XHR The introduction of XMLHttpRequest (XHR) in browsers has been a huge win for the Web Platform, in the mid 2000s. Let's see how it works.
Introduction An example XHR request Additional open() parameters onreadystatechange
Aborting an XHR request Comparison with jQuery Comparison with Fetch Cross Domain Requests
Introduction The introduction of XMLHttpRequest (XHR) in browsers has been a huge win for the Web Platform, in the mid 2000s. Things that now look normal, back in the day looked like they were coming from the future. I'm thinking about GMail or Google Maps, for example, all based in great part on XHR. XHR was invented at Microsoft in the nineties, and became a de-facto standard as all browsers implemented it in the 2002-2006 period, and the W3C standardized XMLHttpRequest in 2006.
As sometimes happens in the Web Platform, initially there were a few inconsistencies that made working with XHR quite different across browsers. Libraries like jQuery got a boost of popularity by providing an easy to use abstraction for developers, and in turn helped spread the usage of this technology.
An example XHR request The following code creates an XMLHttpRequest (XHR) request object, and attaches a callback function that responds on the onreadystatechange event. The xhr connection is set up to perform a GET request to https://yoursite.com , and it's started with the send() method: const xhr = new XMLHttpRequest() xhr.onreadystatechange = () => { if (xhr.readyState === 4) { xhr.status === 200 ? console.log(xhr.responseText) : console.error('error') } } xhr.open('GET', 'https://yoursite.com') xhr.send()
Additional open() parameters In the example above we just passed the method and the URL to the request. We can specify the other HTTP methods of course ( get , post , head , put , delete , options ).
Other parameters let you specify a flag to make the request synchronous if set to false, and a set of credentials for HTTP authentication: open(method, url, asynchronous, username, password)
onreadystatechange The onreadystatechange callback is called multiple times during an XHR request. We explicitly ignore all the states other than readyState === 4 , which means the request is done. The states are 1 (OPENED): the request starts
2 (HEADERS_RECEIVED): the HTTP headers have been received 3 (LOADING): the response begins to download 4 (DONE): the response has been downloaded
Aborting an XHR request An XHR request can be aborted by calling the abort() method on the xhr object.
Comparison with jQuery With jQuery these lines can be translated to: $.get('https://yoursite.com', data => { console.log(data) }).fail(err => { console.error(err) })
Comparison with Fetch With the Fetch API this is the equivalent code: fetch('https://yoursite.com') .then(data => { console.log(data) }) .catch(err => { console.error(err) })
Cross Domain Requests Note that an XMLHttpRequest connection is subject to specific limits that are enforced for security reasons. One of the most obvious is the enforcement of the same origin policy. You cannot access resources on another server, unless the server explicitly supports this using CORS (Cross Origin Resource Sharing).
Fetch API Learn all about the Fetch API, the modern approach to asynchronous network requests which uses Promises as a building block
Introduction to the Fetch API Using Fetch Catching errors Response Object Metadata headers status statusText url Body content Request Object Request headers POST Requests Fetch drawbacks How to cancel a fetch request
Introduction to the Fetch API Since IE5 was released in 1998, we've had the option to make asynchronous network calls in the browser using XMLHttpRequest (XHR).
Quite a few years after this, GMail and other rich apps made heavy use of it, and made the approach so popular that it had to have a name: AJAX. Working directly with XMLHttpRequest has always been a pain, and it was almost always abstracted by some library; in particular, jQuery has its own helper functions built around it: jQuery.ajax() jQuery.get() jQuery.post()
and so on. They had a huge impact on making this more accessible, in particular with regards to making sure it all worked on older browsers as well. The Fetch API has been standardized as a modern approach to asynchronous network requests, and uses Promises as a building block. Fetch has good support across the major browsers, except IE.
The polyfill https://github.com/github/fetch released by GitHub allows us to use fetch on any browser.
Using Fetch Starting to use Fetch for GET requests is very simple: fetch('/file.json')
and you're already using it: fetch is going to make an HTTP request to get the file.json resource on the same domain. As you can see, the fetch function is available in the global window scope. Now let's make this a bit more useful, let's actually see what the content of the file is: fetch('./file.json') .then(response => response.json()) .then(data => console.log(data))
Calling fetch() returns a promise. We can then wait for the promise to resolve by passing a handler with the then() method of the promise. That handler receives the return value of the fetch promise, a Response object.
We'll see this object in detail in the next section.
Catching errors Since fetch() returns a promise, we can use the catch method of the promise to intercept any error occurring during the execution of the request, and the processing done in the then callbacks: fetch('./file.json') .then(response => { //... }) .catch(err => console.error(err))
Response Object The Response Object returned by a fetch() call contains all the information about the request and the response of the network request.
Metadata headers Accessing the headers property on the response object gives you the ability to look into the HTTP headers returned by the request: fetch('./file.json').then(response => { console.log(response.headers.get('Content-Type')) console.log(response.headers.get('Date')) })
status
This property is an integer number representing the HTTP response status. 101, 204, 205, or 304 is a null body status 200 to 299, inclusive, is an OK status (success) 301, 302, 303, 307, or 308 is a redirect fetch('./file.json').then(response => console.log(response.status))
statusText statusText is a property representing the status message of the response. If the request is
successful, the status is OK . fetch('./file.json').then(response => console.log(response.statusText))
url url represents the full URL of the resource that we fetched.
Body content A response has a body, accessible using the text() or json() methods, which return a promise. fetch('./file.json') .then(response => response.text()) .then(body => console.log(body))
The same can be written using the ES2017 async functions: ;(async () => { const response = await fetch('./file.json') const data = await response.json() console.log(data) })()
Request Object The Request object represents a resource request, and it's usually created using the new Request() API.
Example: const req = new Request('/api/todos')
The Request object offers several read-only properties to inspect the resource request details, including method : the request's method (GET, POST, etc.) url : the URL of the request. headers : the associated Headers object of the request referrer : the referrer of the request cache : the cache mode of the request (e.g., default, reload, no-cache).
And exposes several methods including json() , text() and formData() to process the body of the request. The full API can be found at https://developer.mozilla.org/docs/Web/API/Request
Request headers Being able to set the HTTP request header is essential, and fetch gives us the ability to do this using the Headers object: const headers = new Headers() headers.append('Content-Type', 'application/json')
or more simply const headers = new Headers({
'Content-Type': 'application/json' })
To attach the headers to the request, we use the Request object, and pass it to fetch() instead of simply passing the URL. Instead of: fetch('./file.json')
we do const request = new Request('./file.json', { headers: new Headers({ 'Content-Type': 'application/json' }) }) fetch(request)
The Headers object is not limited to setting values; we can also query it: headers.has('Content-Type') headers.get('Content-Type')
and we can delete a header that was previously set: headers.delete('X-My-Custom-Header')
POST Requests Fetch also allows you to use any other HTTP method in your request: POST, PUT, DELETE or OPTIONS. Specify the method in the method property of the request, and pass additional parameters in the headers and in the request body. Example of a POST request: const options = { method: 'post', headers: { 'Content-type': 'application/x-www-form-urlencoded; charset=UTF-8' }, body: 'foo=bar&test=1' }
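The options object is then passed as the second argument to fetch() ; a minimal sketch (the endpoint URL is hypothetical):

fetch('/api/items', options)
  .then(response => response.json())
  .then(data => console.log(data))
  .catch(err => console.error(err))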
Fetch drawbacks While it's a great improvement over XHR, especially considering its Service Workers integration, for a long time Fetch had no way to abort a request once started (see the next section for the newer AbortController approach). With Fetch it's also hard to measure upload progress. If you need those things in your app, the Axios JavaScript library might be a better fit.
How to cancel a fetch request For a few years after fetch was introduced, there was no way to abort a request once opened. Now we can, thanks to the introduction of AbortController and AbortSignal , a generic API to notify abort events. You integrate this API by passing a signal as a fetch parameter: const controller = new AbortController() const signal = controller.signal fetch('./file.json', { signal })
You can set a timeout that fires an abort event 5 seconds after the fetch request has started, to cancel it: setTimeout(() => controller.abort(), 5 * 1000)
Conveniently, if the fetch already returned, calling abort() won't cause any error. When an abort signal occurs, fetch will reject the promise with a DOMException named AbortError :
fetch('./file.json', { signal }) .then(response => response.text()) .then(text => console.log(text)) .catch(err => { if (err.name === 'AbortError') { console.error('Fetch aborted')
} else { console.error('Another error', err) } })
Channel Messaging API The Channel Messaging API allows iframes and workers to communicate with the main document thread, by passing messages Introduction to Channel Messaging API How it works An example with an iframe An example with a Service Worker
Introduction to Channel Messaging API Given two scripts running in the same document, but in a different context, the Channel Messaging API allows them to communicate by passing messages through a channel. This use case involves communication between the document and an iframe two iframes two documents
How it works Calling new MessageChannel() initializes a message channel. The channel has 2 properties, called port1 port2 Each of those properties is a MessagePort object. port1 is the port used by the part that created the channel, and port2 is the port used by the channel receiver (by the way, the channel is bidirectional, so the receiver can send back messages as well). Sending the message is done through the otherWindow.postMessage()
method, where otherWindow is the other browsing context. It accepts a message, an origin and the port.
A message can be a basic JavaScript value like a string or a number, and some data structures are supported as well, namely File Blob FileList ArrayBuffer
"Origin" is a URI (e.g. https://example.org ). You can use '*' to allow less strict checking, or specify a domain, or specify '/' to set a same-domain target, without needing to specify which domain is it. The other browsing context listens for the message using MessagePort.onmessage , and it can respond back by using MessagePort.postMessage . A channel can be closed by invoking MessagePort.close . Let's see a practical example in the next lesson.
An example with an iframe Here's an example of a communication happening between a document and an iframe embedded into it. The main document defines an iframe and a span where we'll print a message that's sent from the iframe document. As soon as the iframe document is loaded, we send it a message on the channel we created. const channel = new MessageChannel() const display = document.querySelector('span') const iframe = document.querySelector('iframe') iframe.addEventListener('load', () => { iframe.contentWindow.postMessage('Hey', '*', [channel.port2]) }, false) channel.port1.onmessage = (e) => { display.innerHTML = e.data }
The iframe page source is even simpler: window.addEventListener("message", (event) => { if (event.origin !== "http://example.org:8080") { return } // process // send a message back event.ports[0].postMessage('Message back from the iframe') }, false)
As you can see we don't even need to initialize a channel, because the window.onmessage handler is automatically run when the message is received from the container page. e is the event that's sent, and it is composed of the following properties: data : the object that's been sent from the other window origin : the origin URI of the window that sent the message source : the window object that sent the message
Always verify the origin of the message sender. e.ports[0] is the way we reference port2 in the iframe, because ports is an array, and the
port was added as the first element.
An example with a Service Worker A Service Worker is an event-driven worker, a JavaScript file associated with web page. Check out the Service Workers guide to know more about them. What's important to know is that Service Workers are isolated from the main thread, and we must communicate with them using messages. This is how a script attached to the main document will handle sending messages to the Service Worker: // `worker` is the service worker already instantiated
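For example, something along these lines (a minimal sketch; worker is assumed to hold the registered service worker instance):

const messageChannel = new MessageChannel()

// messages coming back from the Service Worker arrive on port1
messageChannel.port1.onmessage = (event) => {
  console.log(event.data)
}

// `worker` is the service worker already instantiated
worker.postMessage('hey', [messageChannel.port2])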
In the Service Worker code, we add an event listener for the message event: self.addEventListener('message', (event) => { console.log(event.data) })
And it can send messages back by posting a message to messageChannel.port2 , with self.addEventListener('message', (event) => { event.ports[0].postMessage(data) })
More on the inner workings of Service Workers in the Service Workers guide.
Cache API The Cache API is part of the Service Worker specification, and is a great way to have more control over resource caching. Introduction Detect if the Cache API is available Initialize a cache Add items to the cache cache.add() cache.addAll()
Manually fetch and add Retrieve an item from the cache Get all the items in a cache Get all the available caches Remove an item from the cache Delete a cache
Introduction The Cache API is part of the Service Worker specification, and is a great way to have more control over resource caching. It allows you to cache URL-addressable resources, which means assets, web pages, HTTP API responses. It's not meant to cache individual chunks of data, which is the task of the IndexedDB API. It's currently available in Chrome >= 40, Firefox >= 39 and Opera >= 27. Safari and Edge recently introduced support for it. Internet Explorer does not support it. Mobile support is good on Android, supported in the Android Webview and in Chrome for Android, while on iOS it's only available to Opera Mobile and Firefox Mobile users.
Detect if the Cache API is available The Cache API is exposed through the caches object. To detect if the API is implemented in the browser, just check for its existence using:
if ('caches' in window) { //ok }
Initialize a cache Use the caches.open API, which returns a promise with a cache object ready to be used: caches.open('mycache').then(cache => { // you can start using the cache })
mycache is a name that I use to identify the cache I want to initialize. It's like a variable name,
you can use any name you want. If the cache does not exist yet, caches.open creates it.
Add items to the cache The cache object exposes two methods to add items to the cache: add and addAll .
cache.add() add accepts a single URL, and when called it fetches the resource and caches it.
To allow more control over the fetch, instead of a string you can pass a Request object, part of the Fetch API specification: caches.open('mycache').then(cache => { const options = { // the options } cache.add(new Request('/api/todos', options)) })
cache.addAll() addAll accepts an array of URLs, and returns a promise that resolves when all the resources have been cached.
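A minimal sketch of addAll (the URLs listed here are just placeholders for your own assets):

caches.open('mycache').then(cache => {
  // hypothetical list of assets to pre-cache, use the ones from your site
  cache.addAll(['/index.html', '/style.css', '/app.js'])
    .then(() => {
      // every resource has been fetched and cached
    })
})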
Manually fetch and add cache.add() automatically fetches a resource, and caches it.
The Cache API offers a more granular control on this via cache.put() . You are responsible for fetching the resource and then telling the Cache API to store a response: const url = '/api/todos' fetch(url).then(res => { return caches.open('mycache').then(cache => { return cache.put(url, res) }) })
Retrieve an item from the cache cache.match() returns a Response object which contains all the information about the request and the response of the network request: caches.open('mycache').then(cache => { cache.match('/api/todos').then(res => { //res is the Response Object }) })
Get all the items in a cache caches.open('mycache').then(cache => { cache.keys().then(cachedItems => { // }) })
cachedItems is an array of Request objects, which contain the URL of the resource in the url property.
Get all the available caches The caches.keys() method lists the keys of every cache available. caches.keys().then(keys => { // keys is an array with the list of keys })
Remove an item from the cache Given a cache object, its delete() method removes a cached resource from it. caches.open('mycache').then(cache => { cache.delete('/api/todos') })
Delete a cache The caches.delete() method accepts a cache identifier and when executed it wipes the cache and its cached items from the system. caches.delete('mycache').then(() => { // deleted successfully })
Push API The Push API allows a web app to receive messages pushed by a server, even if the web app is not currently open in the browser or not running on the device.
What is the Push API What can you do with it How it works Overview Getting the user's permission Check if Service Workers are supported Check if the Push API is supported
Register a Service Worker Request permission from the user Subscribe the user and get the PushSubscription object Send the PushSubscription object to your server How the Server side works Registering a new client subscription Sending a Push message In the real world... Receive a Push event Displaying a notification
What is the Push API The Push API is a recent addition to the browser APIs, and it's currently supported by Chrome (Desktop and Mobile), Firefox and Opera since 2016. See more about the current state of browser support at https://caniuse.com/#feat=push-api IE and Edge do not support it yet, and Safari has its own implementation. Since Chrome and Firefox support it, approximately 60% of the users browsing on the desktop have access to it, so it's quite safe to use.
What can you do with it You can send messages to your users, pushing them from the server to the client, even when the user is not browsing the site. This lets you deliver notifications and content updates, giving you the ability to have a more engaged audience. This is huge because one of the missing pillars of the mobile web, compared to native apps, was the ability to receive notifications, along with offline support.
How it works Overview When a user visits your web app, you can trigger a panel asking permission to send updates. A Service Worker is installed and, operating in the background, listens for push events.
Push and Notifications are separate concepts and APIs, sometimes conflated because of the push notifications term used on iOS. Basically, the Notifications API is invoked when a push event is received using the Push API. Your server sends the notification to the client, and the Service Worker, if given permission, receives a push event. The Service Worker reacts to this event by triggering a notification.
Getting the user's permission The first step in working with the Push API is getting the user's permission to receive data from you. Many sites implement this panel badly, showing it on the first page load. The user is not yet convinced your content is good, and they will deny the permission. Do it wisely. There are 6 steps: 1. Check if Service Workers are supported 2. Check if the Push API is supported 3. Register a Service Worker 4. Request permission from the user 5. Subscribe the user and get the PushSubscription object 6. Send the PushSubscription object to your server
Check if Service Workers are supported if (!('serviceWorker' in navigator)) { // Service Workers are not supported. Return return }
Check if the Push API is supported if (!('PushManager' in window)) { // The Push API is not supported. Return return }
Register a Service Worker This code registers the Service Worker located in the worker.js file placed in the domain root:
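A minimal sketch of that registration, assuming the file is served at /worker.js:

navigator.serviceWorker.register('/worker.js')
  .then((registration) => {
    // registration succeeded, we can proceed with the next steps
  })
  .catch((err) => {
    console.error('Service Worker registration failed', err)
  })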
To know more about how Service Workers work in detail, check out the Service Workers guide.
Request permission from the user Now that the Service worker is registered, you can request the permission. The API to do this changed over time, and it went from accepting a callback function as a parameter to returning a Promise, breaking the backward and forward compatibility, and we need to do both as we don't know which approach is implemented by the user's browser. The code is the following, calling Notification.requestPermission() . const askPermission = () => { return new Promise((resolve, reject) => { const permissionResult = Notification.requestPermission((result) => { resolve(result) }) if (permissionResult) { permissionResult.then(resolve, reject) } }) .then((permissionResult) => { if (permissionResult !== 'granted') { throw new Error('Permission denied') } }) }
The permissionResult value is a string, that can have the value of: granted default denied
This code causes the browser to show the permission dialogue:
If the user clicks Block, you won't be able to ask for the user's permission any more, unless they manually go and unblock the site in an advanced settings panel in the browser (very unlikely to happen).
Subscribe the user and get the PushSubscription object If the user gave us permission, we can subscribe them by calling registration.pushManager.subscribe() .
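A sketch of what that call can look like; registration is the ServiceWorkerRegistration obtained in the previous step, and APP_SERVER_KEY is explained right below (on some browsers it must be converted from a base64 string to a Uint8Array first):

registration.pushManager.subscribe({
  userVisibleOnly: true, // required: every push must result in a visible notification
  applicationServerKey: APP_SERVER_KEY
})
.then((pushSubscription) => {
  // pushSubscription is the PushSubscription object we'll send to the server
})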
APP_SERVER_KEY is a string - called Application Server Key or VAPID key - that identifies the application public key, part of a public / private key pair. For security reasons it's used during validation, to make sure that you (and only you, not someone else) can send a push message back to the user.
Send the PushSubscription object to your server In the previous snippet we got the pushSubscription object, which contains all we need to send a push message to the user. We need to send this information to our server, so we're able to send notifications later on.
We can post the pushSubscription object to our server using the Fetch API, serializing it to JSON in the request body: const sendToServer = (subscription) => { return fetch('/api/subscription', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify(subscription) }) .then((res) => { if (!res.ok) { throw new Error('An error occurred') } return res.json() }) .then((resData) => { if (!(resData.data && resData.data.success)) { throw new Error('An error occurred') } }) } sendToServer(pushSubscription)
Server-side, the /api/subscription endpoint receives the POST request and can store the subscription information into its storage.
How the Server side works So far we only talked about the client-side part: getting a user's permission to be notified in the future. What about the server? What should it do, and how should it interact with the client? These server-side examples use Express.js (http://expressjs.com/) as the base HTTP framework, but you can write a server-side Push API handler in any language or framework.
Registering a new client subscription When the client sends a new subscription, remember we used the /api/subscription HTTP POST endpoint, sending the PushSubscription object details in JSON format, in the body.
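The handlers below assume a basic Express app with JSON body parsing enabled, something along these lines (a sketch, not part of the original snippets):

const express = require('express')
const app = express()

// parse JSON request bodies into req.body
app.use(express.json())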
This utility function makes sure the request is valid, has a body and an endpoint property, otherwise it returns an error to the client: const isValidSaveRequest = (req, res) => { if (!req.body || !req.body.endpoint) { res.status(400) res.setHeader('Content-Type', 'application/json') res.send(JSON.stringify({ error: { id: 'no-endpoint', message: 'Subscription must have an endpoint' } })) return false } return true }
The next utility function saves the subscription to the database, returning a promise resolved when the insertion completed (or failed). The insertToDatabase function is a placeholder, we're not going into those details here: const saveSubscriptionToDatabase = (subscription) => { return new Promise((resolve, reject) => { insertToDatabase(subscription, (err, id) => { if (err) { reject(err) return } resolve(id) }) }) }
We use those functions in the POST request handler below. We check if the request is valid, then we save the request and then we return a data.success: true response back to the client, or an error: app.post('/api/subscription', (req, res) => { if (!isValidSaveRequest(req, res)) { return }
saveSubscriptionToDatabase(req.body) .then((subscriptionId) => { res.setHeader('Content-Type', 'application/json') res.send(JSON.stringify({ data: { success: true } })) }) .catch((err) => { res.status(500) res.setHeader('Content-Type', 'application/json') res.send(JSON.stringify({ error: { id: 'unable-to-save-subscription', message: 'Subscription received but failed to save it' } })) }) }) app.listen(3000, () => { console.log('App listening on port 3000') })
Sending a Push message Now that the server has registered the client in its list, we can send it Push messages. Let's see how that works by creating an example code snippet that fetches all subscriptions and sends a Push message to all of them at the same time. We use a library because the Web Push protocol is complex, and a lib allows us to abstract away a lot of low level code that makes sure we can work safely and correctly handle any edge case. This example uses the web-push Node.js library (https://github.com/web-push-libs/webpush) to handle sending the Push message. We first initialize the web-push lib: we generate a pair of public and private keys (done once, for example with webpush.generateVAPIDKeys() , then stored), and set them as the VAPID details: const webpush = require('web-push') // generated once with webpush.generateVAPIDKeys() and stored const PUBLIC_KEY = 'XXX' const PRIVATE_KEY = 'YYY' const vapidKeys = { publicKey: PUBLIC_KEY, privateKey: PRIVATE_KEY } webpush.setVapidDetails( 'mailto:[email protected]', vapidKeys.publicKey, vapidKeys.privateKey )
Then we set up a triggerPush() method, responsible for sending the push event to a client. It just calls webpush.sendNotification() and catches any error. If the return error HTTP status code is 410, which means gone, we delete that subscriber from the database. const triggerPush = (subscription, dataToSend) => { return webpush.sendNotification(subscription, dataToSend) .catch((err) => { if (err.statusCode === 410) { return deleteSubscriptionFromDatabase(subscription._id) } else { console.log('Subscription is no longer valid: ', err) } }) }
We don't implement getting the subscriptions from the database, but we leave it as a stub: const getSubscriptionsFromDatabase = () => { //stub }
The meat of the code is the callback of the POST request to the /api/push endpoint: app.post('/api/push', (req, res) => { // dataToSend is the push payload; a hardcoded example here, in a real app it would likely come from req.body const dataToSend = 'New content is available!' return getSubscriptionsFromDatabase() .then((subscriptions) => { let promiseChain = Promise.resolve() for (let i = 0; i < subscriptions.length; i++) { const subscription = subscriptions[i] promiseChain = promiseChain.then(() => { return triggerPush(subscription, dataToSend) }) } return promiseChain }) .then(() => { res.setHeader('Content-Type', 'application/json') res.send(JSON.stringify({ data: { success: true } })) }) .catch((err) => { res.status(500) res.setHeader('Content-Type', 'application/json') res.send(JSON.stringify({ error: { id: 'unable-to-send-messages', message: `Failed to send the push ${err.message}` } })) }) })
What the above code does is: it gets all the subscriptions from the database, then it iterates over them, and it calls the triggerPush() function we explained before. Once all the subscriptions have been processed, we return a successful JSON response, unless an error occurred, in which case we return a 500 error.
In the real world... It's unlikely that you'll set up your own Push server unless you have a very special use case, you just want to learn the tech, or you like to DIY. Instead, you usually want to use platforms such as OneSignal (https://onesignal.com) which transparently handle Push events for all kinds of platforms, Safari and iOS included, for free.
Receive a Push event When a Push event is sent from the server, how does the client get it? It's a normal JavaScript event listener, on the push event, which runs inside a Service Worker: self.addEventListener('push', (event) => { // data is available in event.data })
event.data contains the PushMessageData object which exposes methods to retrieve the push
data sent by the server, in the format you want: arrayBuffer() : as an ArrayBuffer object blob() : as a Blob object json() : parsed as JSON text() : as plain text You'll normally use event.data.json() .
Displaying a notification Here we intersect a bit with the Notifications API, but for a good reason, as one of the main use cases of the Push API is to display notifications.
Inside our push event listener in the Service Worker, we need to display the notification to the user, and tell the event to wait until the browser has shown it before the function can terminate. We extend the event lifetime until the browser has finished displaying the notification (until the promise has been resolved), otherwise the Service Worker could be stopped in the middle of your processing: self.addEventListener('push', (event) => { const promiseChain = self.registration.showNotification('Hey!') event.waitUntil(promiseChain) })
More on notifications in the Notifications API Guide.
Notifications API The Notifications API is responsible for showing the user system notifications. It's the interface that browsers expose to the developer to allow showing messages to the user, with their permission, even if the web site is not open in the browser. Introduction to the Notification API Permissions Create a notification Add a body Add an image Close a notification
Introduction to the Notification API The Notifications API is the interface that browsers expose to the developer to allow showing messages to the user, with their permission, even if the web site / web app is not open in the browser. Those messages are consistent and native, which means that the receiving person is used to the UI and UX of them, being system-wide and not specific to your site. In combination with the Push API this technology can be a successful way to increase user engagement and to enhance the capabilities of your app. The Notifications API interacts heavily with Service Workers, as they are required for Push Notifications. You can use the Notifications API without Push, but its use cases are limited. if (window.Notification && Notification.permission !== "denied") { Notification.requestPermission((status) => { // status is "granted", if accepted by user var n = new Notification('Title', { body: 'I am the body text!', icon: '/path/to/icon.png' // optional }) }) }
Permissions To show a notification to the user, you must have permission to do so. The Notification.requestPermission() method call requests this permission. You can call Notification.requestPermission()
in this very simple form, and it will show a permission granting panel - unless permission was already granted before. To do something when the user interacts (allows or denies), you can attach a processing function to it: const process = (permission) => { if (permission === "granted") { // ok, we can show notifications } } Notification.requestPermission((permission) => { process(permission) }).then((permission) => { process(permission) })
See how we pass in a callback and also we expect a promise. This is because of different implementations of Notification.requestPermission() made in the past, which we now must support as we don't know in advance which version is running in the browser. So to keep things in a single location I extracted the permission processing into the process() function. In both cases that function is passed a permission string which can have one of these values: granted : the user accepted, we can show notifications denied : the user denied, we can't show any notification
Those values can also be retrieved checking the Notification.permission property, which - if the user already granted permissions - evaluates to granted or denied , but if you haven't called Notification.requestPermission() yet, it will resolve to default .
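For example, a quick check you could run before creating notifications (a sketch, not from the original text):

if (Notification.permission === 'granted') {
  // permission was already given, we can show notifications right away
} else if (Notification.permission !== 'denied') {
  // 'default': the user hasn't decided yet, so we ask
  Notification.requestPermission()
}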
Create a notification
The Notification object exposed by the window object in the browser allows you to create a notification and to customize its appearance. Here is the simplest example, that works after you asked for permissions: Notification.requestPermission() new Notification('Hey')
You have a few options to customize the notification.
Add a body First, you can add a body, which is usually shown as a single line: new Notification('Hey', { body: 'You should see this!' })
Add an image You can add an icon property: new Notification('Hey', { body: 'You should see this!', icon: '/user/themes/writesoftware/favicon.ico' })
More customization options, with platform-specific properties, can be found at https://developer.mozilla.org/docs/Web/API/Notification
Close a notification You might want to close a notification once you opened it. To do so, create a reference to the notification you open: const n = new Notification('Hey')
and then you can close it later, using: n.close()
or with a timeout: setTimeout(() => n.close(), 1 * 1000)
IndexedDB IndexedDB is one of the storage capabilities introduced into browsers over the years. Here's an introduction to IndexedDB, the Database of the Web supported by all modern Browsers Introduction to IndexedDB Create an IndexedDB Database How to create a database Create an Object Store How to create an object store or add a new one Indexes Check if a store exists Deleting from IndexedDB Delete a database Delete an object store To delete data in an object store use a transaction Add an item to the database Getting items from a store Getting a specific item from a store using get() Getting all the items using getAll() Iterating on all the items using a cursor via openCursor() Iterating on a subset of the items using bounds and cursors
Introduction to IndexedDB IndexedDB is one of the storage capabilities introduced into browsers over the years. It's a key/value store (a noSQL database) considered to be the definitive solution for storing data in browsers. It's an asynchronous API, which means that performing costly operations won't block the UI thread, which would otherwise result in a sloppy experience for users. It can store an indefinite amount of data, although once over a certain threshold the user is prompted to give the site higher limits. It's supported on all modern browsers. It supports transactions, versioning and gives good performance. Inside the browser we can also use: Cookies: can host a very small amount of strings
Web Storage (or DOM Storage), a term that commonly identifies localStorage and sessionStorage, two key/value stores. sessionStorage does not retain data, which is cleared when the session ends, while localStorage keeps the data across sessions. Local/session storage have the disadvantage of being capped at a small (and inconsistent) size, with browser implementations offering from 2MB to 10MB of space per site. In the past we also had Web SQL, a wrapper around SQLite, but now this is deprecated and unsupported on some modern browsers; it's never been a recognized standard and so it should not be used, although 83% of users have this technology on their devices according to Can I Use. While you can technically create multiple databases per site, you generally create one single database, and inside that database you can create multiple object stores. A database is private to a domain, so a site cannot access another website's IndexedDB stores. Each store usually contains a set of things, which can be strings numbers objects arrays dates For example you might have a store that contains posts, another that contains comments. A store contains a number of items which have a unique key, which represents the way by which an object can be identified. You can alter those stores using transactions, by performing add, edit and delete operations, and iterating over the items they contain. Since the advent of Promises in ES6, and the subsequent move of APIs to using promises, the IndexedDB API seems a bit old school. While there's nothing wrong with it, in all the examples I'll explain I'll use the IndexedDB Promised Library by Jake Archibald, which is a tiny layer on top of the IndexedDB API to make it easier to use. This library is also used in all the examples on the Google Developers website regarding IndexedDB.
Create an IndexedDB Database Include the idb lib using: yarn add idb
And then include it in your page, either using Webpack or Browserify or any other build system, or simply with a script tag pointing to the idb library file.
And we're ready to go. Before using the IndexedDB API, always make sure you check for support in the browser, even though it's widely available, you never know which browser the user is using: (() => { 'use strict' if (!('indexedDB' in window)) { console.warn('IndexedDB not supported') return } //...IndexedDB code })()
How to create a database Using idb.open() : const name = 'mydbname' const version = 1 //versions start at 1 idb.open(name, version, upgradeDb => {})
The first 2 parameters are self-explanatory. The third param, which is optional, is a callback called only if the version number is higher than the current installed database version. In the callback function body you can upgrade the structure (stores and indexes) of the db. We use the name upgradeDB for the callback to identify this is the time to update the database if needed.
Create an Object Store
How to create an object store or add a new one An object store is created or updated in this callback, using the db.createObjectStore('storeName', options) syntax:
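A minimal sketch of that call, with a hypothetical store name:

const dbPromise = idb.open('mydb', 1, (upgradeDB) => {
  // create a store named 'store1' with the default options
  upgradeDB.createObjectStore('store1')
})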
If you installed a previous version, the callback allows you to perform the migration: const dbPromise = idb.open('keyval-store', 3, (upgradeDB) => { switch (upgradeDB.oldVersion) { case 0: // no db created before // a store introduced in version 1 upgradeDB.createObjectStore('store1') case 1: // a new store in version 2 upgradeDB.createObjectStore('store2', { keyPath: 'name' }) } }) .then(db => console.log('success'))
createObjectStore() , as you can see in case 1 , accepts a second parameter that indicates the key for the store. This is very useful when you store objects: put() calls don't need a second parameter, but can just take the value (an object), and the key will be mapped to the object property with that name. The key gives you a way to retrieve a value later, and it must be unique (every item must have a different key). A key can be set to auto increment, so you don't need to keep track of it in the client code. If you don't specify a key, IndexedDB will create it transparently for us: upgradeDb.createObjectStore('notes', { autoIncrement: true })
but you can specify a specific field of object value to auto increment as well: upgradeDb.createObjectStore('notes', { keyPath: 'id', autoIncrement: true })
As a general rule, use auto increment if your values do not contain a unique key already (for example, an email address for users).
Indexes An index is a way to retrieve data from the object store. It's defined along with the database creation in the idb.open() callback in this way: const dbPromise = idb.open('dogsdb', 1, (upgradeDB) => { const dogs = upgradeDB.createObjectStore('dogs') dogs.createIndex('name', 'name', { unique: false }) })
The unique option determines if the index value should be unique, and no duplicate values are allowed to be added. You can access an object store already created using the upgradeDb.transaction.objectStore() method:
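A sketch of how that access could look inside the upgrade callback (store name and value are hypothetical):

idb.open('dogsdb', 2, (upgradeDB) => {
  // get a reference to the 'dogs' store created in a previous version
  const dogs = upgradeDB.transaction.objectStore('dogs')
  dogs.put({ name: 'Roger' }, 1) // value first, key second
})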
Check if a store exists You can check if an object store already exists by calling the contains() method on the objectStoreNames property: if (!upgradeDb.objectStoreNames.contains('store3')) { upgradeDb.createObjectStore('store3') }
Deleting from IndexedDB Deleting the database, an object store and data
Delete a database idb.delete('mydb') .then(() => console.log('done'))
Delete an object store An object store can only be deleted in the callback when opening a db, and that callback is only called if you specify a version higher than the one currently installed:
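A sketch of that, assuming a store named 'old_store' was created in version 1:

idb.open('mydb', 2, (upgradeDB) => {
  // runs only when upgrading from a version lower than 2
  upgradeDB.deleteObjectStore('old_store')
})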
To delete data in an object store use a transaction const key = 232 dbPromise.then((db) => { const tx = db.transaction('store', 'readwrite') const store = tx.objectStore('store') store.delete(key) return tx.complete }) .then(() => { console.log('Item deleted') })
Add an item to the database You can use the put method of the object store, but first we need a reference to it, which we can get from upgradeDB.createObjectStore() when we create it. When using put , the value is the first argument, the key is the second. This is because if you specify keyPath when creating the object store, you don't need to enter the key name on every put() request, you can just write the value. This populates store0 as soon as we create it: idb.open('mydb', 1, (upgradeDB) => { keyValStore = upgradeDB.createObjectStore('store0') keyValStore.put('Hello world!', 'Hello') })
To add items later down the road, you need to create a transaction, which ensures database integrity (if an operation fails, all the operations in the transaction are rolled back and the state goes back to a known state). For that, use a reference to the dbPromise object we got when calling idb.open() , and run: dbPromise.then((db) => { const val = 'hey!' const key = 'Hello again' const tx = db.transaction('store1', 'readwrite') tx.objectStore('store1').put(val, key) return tx.complete })
The IndexedDB API offers the add() method as well, but since put() allows us to both add and update, it's simpler to just use it.
Getting items from a store Getting a specific item from a store using get() dbPromise.then(db => db.transaction('objs') .objectStore('objs') .get(123456)) .then(obj => console.log(obj))
Getting all the items using getAll() dbPromise.then(db => db.transaction('store1') .objectStore('store1') .getAll()) .then(objects => console.log(objects))
Iterating on all the items using a cursor via openCursor() dbPromise.then((db) => { const tx = db.transaction('store', 'readonly') const store = tx.objectStore('store') return store.openCursor() }) .then(function logItems(cursor) { if (!cursor) { return } console.log('cursor is at: ', cursor.key) for (const field in cursor.value) { console.log(cursor.value[field]) } return cursor.continue().then(logItems) }) .then(() => { console.log('done!') })
Iterating on a subset of the items using bounds and cursors const searchItems = (lower, upper) => { if (lower === '' && upper === '') { return } let range if (lower !== '' && upper !== '') { range = IDBKeyRange.bound(lower, upper) } else if (lower === '') { range = IDBKeyRange.upperBound(upper) } else { range = IDBKeyRange.lowerBound(lower) } dbPromise.then((db) => { const tx = db.transaction(['dogs'], 'readonly') const store = tx.objectStore('dogs') const index = store.index('age') return index.openCursor(range) }) .then(function showRange(cursor) { if (!cursor) { return } console.log('cursor is at:', cursor.key) for (const field in cursor.value) { console.log(cursor.value[field]) } return cursor.continue().then(showRange) }) .then(() => { console.log('done!') }) } searchItems(3, 10)
Selectors API Access DOM elements using querySelector and querySelectorAll. They accept any CSS selector, so you are no longer limited by selecting elements by `id` Introduction The Selectors API Basic jQuery to DOM API examples Select by id Select by class Select by tag name More advanced jQuery to DOM API examples Select multiple items Select by HTML attribute value Select by CSS pseudo class Select the descendants of an element
Introduction jQuery and other DOM libraries got a huge popularity boost in the past, along with the other features they provided, thanks to an easy way to select elements on a page. Traditionally browsers provided one single way to select a DOM element, and that was by its id attribute, with getElementById() , a method offered by the document object.
The Selectors API Since 2013, with the Selectors API, the DOM allows you to use two more useful methods: document.querySelector() document.querySelectorAll()
They can be safely used, as caniuse.com tells us, and they are even fully supported on IE9 in addition to all the other modern browsers, so there is no reason to avoid them, unless you need to support IE8 (which has partial support) and below. They accept any CSS selector, so you are no longer limited by selecting elements by id . document.querySelector() returns a single element, the first found document.querySelectorAll() returns all the elements, wrapped in a NodeList object.
Those are all valid selectors: document.querySelector('#test') document.querySelector('.my-class') document.querySelector('#test .my-class') document.querySelector('a:hover')
Basic jQuery to DOM API examples Here below is a translation of the popular jQuery API into native DOM API calls.
Select by id $('#test') document.querySelector('#test')
We use querySelector since an id is unique in the page
Select by class $('.test') document.querySelectorAll('.test')
Select by tag name $('div') document.querySelectorAll('div')
More advanced jQuery to DOM API examples Select multiple items $('div, span') document.querySelectorAll('div, span')
Select by HTML attribute value $('[data-example="test"]') document.querySelectorAll('[data-example="test"]')
Select by CSS pseudo class $(':nth-child(4n)') document.querySelectorAll(':nth-child(4n)')
Select the descendants of an element For example all li elements under #test : $('#test li') document.querySelectorAll('#test li')
Web Storage API The Web Storage API provides a way to store data in the browser. It defines two storage mechanisms which are very important: Session Storage and Local Storage, part of the set of storage options available on the Web Platform
Introduction How to access the storage Methods setItem(key, value) getItem(key) removeItem(key) key(n) clear()
Storage size limits Desktop Mobile Going over quota Developer Tools Chrome Firefox Safari
Introduction The Web Storage API defines two storage mechanisms which are very important: Session Storage and Local Storage. They are part of the set of storage options available on the Web Platform, which includes: Cookies IndexedDB The Cache API Application Cache is deprecated, and Web SQL is not implemented in Firefox, Edge and IE. Both Session Storage and Local Storage provide a private area for your data. Any data you store cannot be accessed by other websites. Session Storage maintains the data stored into it for the duration of the page session. If multiple windows or tabs visit the same site, they will have two different Session Storage instances. When a tab/window is closed, the Session Storage for that particular tab/window is cleared. Session Storage is meant to allow handling different processes happening on the same site independently, something not possible with cookies for example, which are shared across all sessions. Local Storage instead persists the data until it's explicitly removed, either by you or by the user. It's never cleaned up automatically, and it's shared across all the sessions that access a site. Both Local Storage and Session Storage are protocol specific: data stored when the page is accessed using http is not available when the page is served with https , and vice versa. Web Storage is only accessible in the browser. It's not sent to the server like cookies are.
How to access the storage Both Local and Session Storage are available on the window object, so you can access them using sessionStorage and localStorage . Their set of properties and methods is exactly the same, because they return the same object, a Storage object. The Storage Object has a single property, length , which is the number of data items stored into it.
Methods setItem(key, value) setItem() adds an item to the storage. Accepts a string as key, and a string as a value:
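For example, storing the same values that are retrieved in the getItem() section below:

localStorage.setItem('username', 'flaviocopes')
localStorage.setItem('id', '123')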
If you pass any value that's not a string, it will be converted to string: localStorage.setItem('test', 123) //stored as the '123' string localStorage.setItem('test', { test: 1 }) //stored as "[object Object]"
getItem(key) getItem() is the way to retrieve a string value from the storage, by using the key string that
was used to store it: localStorage.getItem('username') // 'flaviocopes' localStorage.getItem('id') // '123'
removeItem(key) removeItem() removes the item identified by key from the storage, returning nothing (an undefined value):
localStorage.removeItem('id')
key(n) Every item you store has an index number. So the first time you use setItem() , that item can be referenced using key(0) , the next with key(1) and so on. If you reference a number that does not point to a storage item, it returns null . Every time you remove an item with removeItem() , the index consolidates: localStorage.setItem('a', 'a') localStorage.setItem('b', 'b') localStorage.key(0) //"a" localStorage.key(1) //"b" localStorage.removeItem('b') localStorage.key(0) //"a" localStorage.key(1) //null
Storage size limits Through the Storage API you can store a lot more data than you would be able to with cookies. The amount of storage available on the Web might differ by storage type (local or session), browser, and device type. Research by html5rocks.com points out these limits:
Desktop Chrome, IE, Firefox: 10MB Safari: 5MB for local storage, unlimited session storage
Mobile Chrome, Firefox: 10MB iOS Safari and WebView: 5MB for local storage, session storage unlimited unless in iOS6 and iOS7 where it's 5MB Android Browser: 2MB local storage, unlimited session storage
Going over quota You need to handle quota errors, especially if you store lots of data. You can do so with a try/catch: try { localStorage.setItem('key', 'value') } catch (e) { // the quota has been exceeded, handle the error }
Developer Tools The DevTools of the major browsers all offer a nice interface to inspect and manipulate the data stored in the Local and Session Storage.
Chrome
Firefox
Safari
Cookies Cookies are a fundamental part of the Web, as they allow sessions and in general to recognize the users during the navigation
Introduction Restrictions of cookies Set cookies Set a cookie expiration date Set a cookie path Set a cookie domain Cookie Security Secure HttpOnly SameSite
Update a cookie value or parameter Delete a cookie Access the cookies values Check if a cookie exists Abstractions libraries Use cookies server-side Inspect cookies with the Browser DevTools Chrome Firefox Safari Alternatives to cookies
Introduction By using Cookies we can exchange information between the server and the browser to provide a way to customize a user session, and for servers to recognize the user between requests. HTTP is stateless, which means all requests to a server look exactly the same, and a server cannot determine if a request comes from a client that already made a request before, or from a new one. Cookies are sent by the browser to the server when an HTTP request starts, and they are sent back from the server, which can edit their content. Cookies are essentially used to store a session id. In the past cookies were used to store various types of data, since there was no alternative. But nowadays with the Web Storage API (Local Storage and Session Storage) and IndexedDB, we have much better alternatives. Especially because cookies have a very low limit on the data they can hold, since they are sent back-and-forth for every HTTP request to our server - including requests for assets like images or CSS / JavaScript files. Cookies have a long history, they had their first version in 1994, and over time they were standardized in multiple RFC revisions. RFC stands for Request for Comments, the way standards are defined by the Internet Engineering Task Force (IETF), the entity responsible for setting standards for the Internet The latest specification for Cookies is defined in RFC 6265, which is dated 2011.
Restrictions of cookies Cookies can only store 4KB of data Cookies are private to the domain. A site can only read the cookies it set, not other domains' cookies You can have up to 20 cookies per domain (but the exact number depends on the specific browser implementation) Cookies are limited in their total number (but the exact number depends on the specific browser implementation). If this number is exceeded, new cookies replace the older ones. Cookies can be set or read server side, or client side.
On the client side, cookies are exposed by the document object as document.cookie
Set cookies The simplest example to set a cookie is: document.cookie = 'foo=bar'
This will add a new cookie to the existing ones (it does not overwrite existing cookies). The cookie value should be URL encoded with encodeURIComponent() , to make sure it does not contain any whitespace, comma or semicolon, which are not valid in cookie values.
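For example (a small sketch, with a made-up value):

// encode the value so commas, semicolons and whitespace survive
const value = encodeURIComponent('hello, world')
document.cookie = `foo=${value}`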
Set a cookie expiration date If you don't set anything else, the cookie will expire when the browser is closed. To prevent this, add an expiration date, expressed in the UTC format ( Mon, 26 Mar 2018 17:04:05 UTC ): document.cookie = 'foo=bar; expires=Mon, 26 Mar 2018 17:04:05 UTC'
A simple JavaScript snippet to set a cookie that expires in 24 hours is: const date = new Date() date.setHours(date.getHours() + 24) document.cookie = 'foo=bar; expires=' + date.toUTCString()
Alternatively you can use the max-age parameter to set an expiration expressed in number of seconds: document.cookie = 'foo=bar; max-age=3600' //expires in 60 minutes document.cookie = 'foo=bar; max-age=31536000' //expires in 1 year
Set a cookie path The path parameter specifies a document location for the cookie, so it's assigned to a specific path, and sent to the server only if the path matches the current document location, or a parent: document.cookie = 'foo=bar; path="/dashboard"'
this cookie is sent on /dashboard , /dashboard/today and other sub-urls of /dashboard/ , but not on /posts for example. If you don't set a path, it defaults to the current document location. This means that to apply a global cookie from an inner page, you need to specify path="/" .
Set a cookie domain The domain can be used to specify a subdomain for your cookie. document.cookie = 'foo=bar; domain="mysite.com";'
If not set, it defaults to the current host only, not including subdomains (if on subdomain.mydomain.com, by default the cookie is only valid there). When a domain is explicitly set, the cookie is also included in its subdomains.
Cookie Security Secure Adding the Secure parameter makes sure the cookie can only be transmitted securely over HTTPS, and it will not be sent over unencrypted HTTP connections: document.cookie = 'foo=bar; Secure;'
Note that this does not make cookies secure in any way - always avoid adding sensitive information to cookies
HttpOnly One useful parameter is HttpOnly , which makes cookies inaccessible via the document.cookie API, so they are only editable by the server:
document.cookie = 'foo=bar; Secure; HttpOnly'
SameSite
SameSite , still experimental and only supported by Chrome and Firefox (https://caniuse.com/#feat=same-site-cookie-attribute), lets servers require that a cookie is not sent on cross-site requests, but only on resources that have the cookie domain as the origin, which should be a great help towards reducing the risk of CSRF (Cross Site Request Forgery) attacks.
Update a cookie value or parameter To update the value of a cookie, just assign a new value to the cookie name: document.cookie = 'foo=bar2'
Similar to updating the value, to update the expiration date, reassign the value with a new expires or max-age property:
document.cookie = 'foo=bar; max-age=31536000' //expires in 1 year
Just remember to also add any additional parameters you added in the first place, like path or domain .
Delete a cookie To delete a cookie, unset its value and pass a date in the past: document.cookie = 'foo=; expires=Thu, 01 Jan 1970 00:00:00 UTC;'
(and again, with all the parameters you used to set it)
Access the cookies values To access a cookie, lookup document.cookie : const cookies = document.cookie
This will return a string with all the cookies set for the page, semicolon separated: 'foo1=bar1; foo2=bar2; foo3=bar3'
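If you need the cookies as an object, a small helper (not part of any standard API) can split that string:

const parseCookies = () =>
  document.cookie.split('; ').reduce((acc, pair) => {
    const [key, ...rest] = pair.split('=')
    acc[key] = decodeURIComponent(rest.join('='))
    return acc
  }, {})

parseCookies() // { foo1: 'bar1', foo2: 'bar2', foo3: 'bar3' }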
Check if a cookie exists //ES5 if ( document.cookie.split(';').filter(item => { return item.indexOf('foo=') >= 0 }).length ) { //foo exists } //ES7 if ( document.cookie.split(';').filter(item => { return item.includes('foo=') }).length ) { //foo exists }
Abstractions libraries There are a number of different libraries that provide a friendlier API to manage cookies. One of them is https://github.com/js-cookie/js-cookie, which supports browsers down to IE7, and has a lot of stars on GitHub (which is always good). Some examples of its usage: Cookies.set('name', 'value') Cookies.set('name', 'value', { expires: 7, path: '', domain: 'subdomain.site.com', secure: true }) Cookies.get('name') // => 'value' Cookies.remove('name') //JSON Cookies.set('name', { foo: 'bar' }) Cookies.getJSON('name') // => { foo: 'bar' }
Use that or the native Cookies API? It all comes down to adding more kilobytes to download for each user, so it's your choice.
Use cookies server-side Every environment used to build an HTTP server allows you to interact with cookies, because cookies are a pillar of the Modern Web, and not much could be built without them. PHP has $_COOKIE Go has cookies facilities in the net/http standard library and so on. Let's do an example with Node.js When using Express.js, you can create cookies using the res.cookie API: res.cookie('foo1', '1bar', { domain: '.example.com', path: '/admin', secure: true }) res.cookie('foo2', 'bar2', { expires: new Date(Date.now() + 900000), httpOnly: true }) res.cookie('foo3', 'bar3', { maxAge: 900000, httpOnly: true }) //takes care of serializing JSON res.cookie('foo4', { items: [1, 2, 3] }, { maxAge: 900000 })
To parse cookies, a good choice is to use the https://github.com/expressjs/cookie-parser middleware. Every Request object will then have cookie information in the req.cookies property: req.cookies.foo //bar req.cookies.foo1 //bar1
If you create your cookies using signed: true : res.cookie('foo5', 'bar5', { signed: true })
they will be available in the req.signedCookies object instead. Signed cookies will be completely unreadable in the frontend, but transparently encoded/decoded on the server side. https://github.com/expressjs/session and https://github.com/expressjs/cookie-session are two different middleware options to build cookie-based authentication, which one to use depends on your needs.
Inspect cookies with the Browser DevTools
All browsers in their DevTools provide an interface to inspect and edit cookies.
Chrome
Firefox
Safari
Alternatives to cookies Are cookies the only way to build authentication and sessions on the Web? No! There is a technology that recently got popular, called JSON Web Tokens (JWT), which is a token-based authentication mechanism.
History API The History API is the way browsers let you interact with the address bar and the navigation history
Introduction Access the History API Navigate the history Add an entry to the history Modify history entries Access the current history entry state The onpopstate event
Introduction The History API lets you interact with the browser history, trigger the browser navigation methods and change the address bar content. It's especially useful in combination with modern Single Page Applications, on which you never make a server-side request for new pages, but instead the page is always the same: just the internal content changes. A modern JavaScript application running in the browser that does not interact with the History API, either explicitly or at the framework level, is going to be a poor experience to the user, since the back and forward buttons break.
Also, when navigating the app, the view changes but the address bar does not. The reload button breaks too: reloading the page, since there is no deep linking, makes the browser show a different page. The History API was introduced in HTML5 and is now supported by all modern browsers. IE supports it since version 10, and if you need to support IE9 and older, use the History.js library.
Access the History API The History API is available on the window object, so you can call it like this: window.history or simply history , since window is the global object.
Navigate the history Let's start with the simplest thing you can do with the History API. Go back to the previous page: history.back()
this goes to the previous entry in the session history. You can forward to the next page using history.forward()
This is exactly just like using the browser back and forward buttons. go() lets you navigate back or forward multiple levels deep. For example
history.go(-1) //equivalent to history.back() history.go(-2) //equivalent to calling history.back() twice history.go(1) //equivalent to history.forward() history.go(3) //equivalent to calling history.forward() 3 times
To know how many entries there are in the history, you can call history.length
Add an entry to the history Using pushState() you can create a new history entry programmatically. You pass 3 parameters. The first is an object which can contain anything (there is a size limit however, and the object needs to be serializable). The second parameter is currently unused by major browsers, so you generally pass an empty string. The third parameter is a URL associated to the new state. Note that the URL needs to belong to the same origin domain of the current URL. const state = { foo: 'bar' } history.pushState(state, '', '/foo')
Calling this won't change the content of the page, and does not cause any browser action like changing window.location would.
Modify history entries While pushState() lets you add a new state to the history, replaceState() allows you to edit the current history state. history.pushState({}, '', '/posts') const state = { post: 'first' } history.pushState(state, '', '/post/first') const state = { post: 'second' } history.replaceState(state, '', '/post/second')
If you now call
history.back()
the browser goes straight to /posts , since /post/first was replaced by /post/second
Access the current history entry state Accessing the property history.state
returns the current state object (the first parameter passed to pushState or replaceState ).
The onpopstate event This event is called on window every time the active history state changes, with the current state as the callback parameter: window.onpopstate = event => { console.log(event.state) }
will log the new state object (the first parameter passed to pushState or replaceState ) every time you call history.back() , history.forward() or history.go() .
Efficiently load JavaScript with defer and async When loading a script on an HTML page, you need to be careful not to harm the loading performance of the page. Depending on where and how you add your scripts to an HTML page will influence the loading time
The position matters Async and Defer Performance comparison No defer or async, in the head No defer or async, in the body With async, in the head With defer, in the head Blocking parsing Blocking rendering domInteractive Keeping things in order TL;DR, tell me what's the best When loading a script on an HTML page, you need to be careful not to harm the loading performance of the page. A script is traditionally included in the page in this way:
whenever the HTML parser finds this line, a request will be made to fetch the script, and the script is executed. Once this process is done, the parsing can resume, and the rest of the HTML can be analyzed. As you can imagine, this operation can have a huge impact on the loading time of the page.
If the script takes a little longer to load than expected, for example if the network is a bit slow or if you're on a mobile device and the connection is a bit sloppy, the visitor will likely see a blank page until the script is loaded and executed.
The position matters When you first learn HTML, you're told script tags live in the head tag, along with the page title and other metadata.
As I told earlier, when the parser finds this line, it goes to fetch the script and executes it. Then, after it's done with this task, it goes on to parse the body. This is bad because there is a lot of delay introduced. A very common solution to this issue is to put the script tag at the bottom of the page, just before the closing body tag. Doing so, the script is loaded and executed after the whole page is already parsed and loaded, which is a huge improvement over the head alternative. This is the best thing you can do, if you need to support older browsers that do not support two relatively recent features of HTML: async and defer .
Async and Defer Both async and defer are boolean attributes. Their usage is similar:
if you specify both, async takes precedence in modern browsers, while older browsers that support defer but not async will fall back to defer .
For the support table, check caniuse.com for async https://caniuse.com/#feat=scriptasync and for defer https://caniuse.com/#feat=script-defer These attributes only make sense when using the script in the head portion of the page, and they are useless if you put the script in the body footer like we saw above.
Performance comparison No defer or async, in the head Here's how a page loads a script with neither defer nor async, put in the head portion of the page:
The parsing is paused until the script is fetched, and executed. Once this is done, parsing resumes.
No defer or async, in the body Here's how a page loads a script with neither defer nor async, put at the end of the body tag, just before it closes:
The parsing is done without any pauses, and when it finishes, the script is fetched, and executed. Parsing is done before the script is even downloaded, so the page appears to the user way before the previous example.
With async, in the head Here's how a page loads a script with async , put in the head tag:
The script is fetched asynchronously, and when it's ready the HTML parsing is paused to execute the script, then it's resumed.
With defer, in the head Here's how a page loads a script with defer , put in the head tag:
The script is fetched asynchronously, and it's executed only after the HTML parsing is done. Parsing finishes just like when we put the script at the end of the body tag, but overall the script execution finishes much sooner, because the script was downloaded in parallel with the HTML parsing. So this is the winning solution in terms of speed.
Blocking parsing async blocks the parsing of the page while defer does not.
Blocking rendering Neither async nor defer guarantee anything about blocking rendering. This is up to you and your script (for example, making sure your scripts run after the onLoad event).
domInteractive
Scripts marked defer are executed right after the domInteractive event, which happens after the HTML is loaded, parsed and the DOM is built. CSS and images at this point are still to be parsed and loaded. Once this is done, the browser will emit the domComplete event, and then onLoad . domInteractive is important because its timing is recognized as a measure of perceived
loading speed. See the MDN for more.
Keeping things in order Another case pro defer : scripts marked async are executed in no particular order, when they become available. Scripts marked defer are executed (after parsing completes) in the order in which they are defined in the markup.
TL;DR, tell me what's the best The best thing to do to speed up your page loading when using scripts is to put them in the head , and add a defer attribute to your script tag:
This is the scenario that triggers the faster domInteractive event. Considering the pros of defer , it seems a better choice over async in a variety of scenarios, unless you are fine with delaying the first render of the page, making sure that when the page is parsed the JavaScript you want is already executed.
The WebP Image Format WebP is an Open Source image format developed at Google, which promises to generate images smaller in size compared to JPG and PNG formats, while generating better looking images
Introduction How much smaller? Generating WebP images Browsers support How can you use WebP today?
Introduction WebP is an Open Source image format developed at Google, which promises to generate images smaller in size compared to JPG and PNG formats, while generating better looking images. WebP supports transparency, like PNG and GIF images. WebP supports animations, like GIF images And, using WebP you can set the quality ratio of your images, so you decide if you want to get better quality or a smaller size (like it happens in JPG images).
So WebP can do all GIF, JPG and PNG images can do, in a single format, and generate smaller images. Sounds like a deal. If you want to compare how images look in the various formats, here's a great gallery by Google. WebP is not new, it's been around for several years now.
How much smaller? The premise of generating smaller images is very interesting, especially when you consider that most of a web page's size is determined by the amount and size of the image assets the user must download. Google published a comparative study on the results of 1 million images with this result: WebP achieves overall higher compression than either JPEG or JPEG 2000. Gains in file size minimization are particularly high for smaller images which are the most common ones found on the web. You should experiment with the kind of images you intend to serve, and form your opinion based on that. In my tests, lossless compression compared to PNG generates WebP images 50% smaller. PNG reaches that file size only when using lossy compression.
Generating WebP images All modern graphical and image editing tools let you export to .webp files. Command-line tools also exist to convert images to WebP directly. Google provides a set of tools for this. cwebp is the main command line utility to convert any image to .webp , use it with
cwebp image.png -o image.webp
Check out all the options using cwebp -longhelp .
Browsers support WebP is currently supported by
Chrome Opera Opera Mini UC Browser Samsung Internet However, only Chrome for Desktop and Opera 19+ implement all the features of WebP, which are: lossy compression lossless compression transparency animation Other browsers only implement a subset, and Firefox, Safari, Edge and IE do not support WebP at all, and there are no signs of WebP being implemented any time soon in those browsers. But Chrome alone is a good portion of the web market, so if we can serve those users an optimized image, to speed up serving them and consume less bandwidth, it's great. But check if it actually reduces your image sizes. Compare with your JPG/PNG image optimization tooling results, and see if adding the additional layer of complexity introduced by WebP is useful or not.
How can you use WebP today?
There are a few different ways to do so. You can use a server-level mechanism that serves WebP images instead of JPG and PNG when the HTTP_ACCEPT request header contains image/webp . This is the most convenient option, as it's completely transparent to you and to your web pages. Another option is to use Modernizr and check the Modernizr.webp setting. If you don't need to support Internet Explorer, a very convenient way is to use the picture tag, which is now supported by all the major browsers except Opera Mini and IE (all versions). The picture tag is generally used for responsive images, but we can use it for WebP too, as this tutorial from HTML5 Rocks explains. You can specify a list of images, and they will be used in order, so in the next example, browsers that support WebP will use the first image, and fall back to JPG if not:
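A minimal sketch of that fallback, assuming hypothetical image.webp and image.jpg files:

<picture>
  <source type="image/webp" srcset="image.webp">
  <img src="image.jpg" alt="An image">
</picture>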
SVG SVG is an awesome and incredibly powerful image format. This tutorial gives you an overview of SVG by explaining all you need to know in a simple way
Introduction
The advantages of SVG
Your first SVG image
Using SVG
SVG Elements
text
circle
rect
line
path
textPath
polygon
g
SVG viewport and viewBox
Inserting SVG in Web Pages
With an img tag
With the CSS background-image property
Inline in the HTML
With an object , iframe or embed tag
Inline SVG using a Data URL
Styling elements
Interacting with a SVG with CSS or JavaScript
CSS inside SVG
JavaScript inside SVG
JavaScript outside the SVG
CSS outside the SVG
SVG vs Canvas API
SVG Symbols
Validate an SVG
Should I include the xmlns attribute?
Should I worry about browser support?
Introduction
Despite being standardized in the early 2000s, SVG (short for Scalable Vector Graphics) is a hot topic these days. SVG was penalized for quite a few years by poor browser support (most notably IE). I found this quote in a 2011 book: "at the time of writing, direct embedding of SVG into HTML works only in the very newest browsers". That was 7 years ago, and it's now a thing of the past: today we can use SVG images safely, unless you have a lot of users with IE8 and below, or with older Android devices. In those cases, fallbacks exist.
Some part of the success of SVG is due to the variety of screen displays we must support, at different resolutions and sizes: a perfect task for SVG. Also, the rapid decline of Flash in the last few years led to a renewed interest in SVG, which is great for a lot of things that Flash did in the past. SVG is a vector image file format. This makes it very different from image formats such as PNG, GIF or JPG, which are raster image file formats.
The advantages of SVG
SVG images, thanks to being vector images, can scale infinitely without any image quality degradation. How so? Because SVG images are built using XML markup, and the browser prints them by plotting each point and line, rather than filling some space with predefined pixels. This ensures SVG images can adapt to different screen sizes and resolutions, even ones that have yet to be invented.
Thanks to being defined in XML, SVG images are much more flexible than JPG or PNG images, and we can use CSS and JavaScript to interact with them. SVG images can even contain CSS and JavaScript.
SVG images can render vector-style images a lot smaller than other formats, and are mainly used for logos and illustrations. Another huge use case is icons: once the domain of icon fonts like FontAwesome, now designers prefer SVG images because they are smaller and they allow multi-color icons.
SVG is easy to animate, which is a very cool topic.
SVG provides some image editing effects, like masking and clipping, applying filters, and more.
SVG is just text, and as such it can be efficiently compressed using GZip.
Your first SVG image SVG images are defined using XML. This means that SVG will look very familiar if you are proficient in HTML, except rather than having tags that are suited for document construction (like p , article , footer , aside ) in SVG we have the building blocks of vector images: path , rect , line and so on.
This is an example SVG image:
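A minimal example along the lines of what's described below:

<svg width="10" height="10">
  <rect x="0" y="0" width="10" height="10" fill="blue" />
</svg>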
Notice how easy it is to read and understand what the image will look like: it's a simple blue rectangle of 10x10 pixels (the default unit). Most of the time you won't have to edit the SVG code; you will use tools like Sketch or Figma or any other vector graphics tool to create the image, and export it as SVG. The current version of SVG is 1.1, and SVG 2.0 is under development.
Using SVG
SVG images can be displayed by the browser by including them in an img tag:
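For example, assuming a hypothetical image.svg file:

<img src="image.svg" alt="My SVG image" />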
just like you would do for other pixel-based image formats.
In addition, pretty uniquely, SVG images can be directly included in the HTML page, as in the sketch below:
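A minimal page with an inline SVG (the rectangle is the same example used above):

<!DOCTYPE html>
<html>
  <head>
    <title>A page</title>
  </head>
  <body>
    <svg width="10" height="10">
      <rect x="0" y="0" width="10" height="10" fill="blue" />
    </svg>
  </body>
</html>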
Please note that HTML5 and XHTML require a different syntax for inline SVG images. Luckily XHTML is a thing of the past, as it was more complex than necessary, but it's worth knowing in case you still need to work on XHTML pages. The ability to inline SVG in HTML makes this format a unicorn in the scene, as other images can't do this, and must be fetched by opening a separate request for each one.
SVG Elements
In the example above you saw the usage of the rect element. SVG has a lot of different elements. The most used ones are:
text : creates a text element
circle : creates a circle
rect : creates a rectangle
line : creates a line
path : creates a path between two points
textPath : creates a path between two points, and a linked text element
polygon : allows to create any kind of polygon
g : groups separate elements
Coordinates start at 0,0 at the top-left of the drawing area, and extend from left to right for x , from top to bottom for y .
The images you see reflect the code shown above. Using the Browser DevTools you can inspect and change them.
text
The text element adds text. The text can be selected using the mouse. x and y define the starting point of the text.
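For example:

<svg>
  <text x="5" y="30">A nice rectangle</text>
</svg>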
circle
Defines a circle. cx and cy are the center coordinates, and r is the radius. fill is a common attribute and represents the figure color.
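For example:

<svg>
  <circle cx="50" cy="50" r="40" fill="green" />
</svg>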
rect
Defines a rectangle. x and y are the starting coordinates; width and height are self-explanatory.
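For example:

<svg>
  <rect x="10" y="10" width="80" height="40" fill="teal" />
</svg>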
line
x1 and y1 define the starting coordinates. x2 and y2 define the ending coordinates. stroke is a common attribute and represents the line color.
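For example:

<svg>
  <line x1="10" y1="10" x2="90" y2="90" stroke="black" />
</svg>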
path A path is a sequence of lines and curves. It's the most powerful tool to draw using SVG, and as such it's the most complex. d contains the directions commands. These commands start with the command name, and a
set of coordinates:
M means Move; it accepts a set of coordinates x, y
L means Line; it accepts a set of coordinates x, y to draw the line to
H is a Horizontal Line; it only accepts an x coordinate
V is a Vertical Line; it only accepts a y coordinate
Z means Close Path; it puts a line back to the start
A means Arc; it needs a whole tutorial on its own
Q is a quadratic Bezier curve; again, it needs a whole tutorial on its own
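A small sketch combining some of these commands:

<svg height="120" width="120">
  <path d="M 10 10 L 100 10 V 100 H 10 Z" fill="none" stroke="black" />
</svg>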
textPath
Adds text along the shape of a path element:
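A sketch of how it can be used (the id and the path shape are made up for the example):

<svg viewBox="0 0 500 200">
  <path id="my-path" d="M 50 100 Q 250 20 450 100" fill="transparent" />
  <text>
    <textPath xlink:href="#my-path">Wow such a nice SVG tut</textPath>
  </text>
</svg>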
polygon
Draws any random polygon with polygon . points represents the set of x, y coordinates the polygon should link:
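For example:

<svg>
  <polygon points="50,5 95,40 78,90 22,90 5,40" fill="orange" />
</svg>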
g Using the g element you can group multiple elements:
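For example:

<svg width="200" height="120">
  <g>
    <rect x="0" y="0" width="100" height="100" fill="blue" />
    <circle cx="150" cy="60" r="40" fill="red" />
  </g>
</svg>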
SVG viewport and viewBox The size of an SVG relative to its container is set by the width and height attributes of the svg element. Those units default to pixels, but you can use any other usual unit like % or em . This is the viewport.
Generally "container" means the browser window, but a svg element can contain other svg elements, in that case the container is the parent svg .
An important attribute is viewBox . It lets you define a new coordinates system inside the SVG canvas. Say you have a simple circle, in a 200x200px SVG:
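For example (a sketch):

<svg width="200" height="200">
  <circle cx="100" cy="100" r="90" stroke="black" fill="transparent" />
</svg>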
By specifying a viewBox you can choose to only show a portion of this SVG. For example you can start at point 0, 0 and only show a 100x100px canvas:
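For example:

<svg width="200" height="200" viewBox="0 0 100 100">
  <circle cx="100" cy="100" r="90" stroke="black" fill="transparent" />
</svg>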
starting at 100, 100 you will see another portion, the bottom right half of the circle:
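For example:

<svg width="200" height="200" viewBox="100 100 100 100">
  <circle cx="100" cy="100" r="90" stroke="black" fill="transparent" />
</svg>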
A great way to visualize this is to imagine Google Maps being a gigantic SVG image, and your browser is a viewBox as big as the window size. When you move around, the viewBox changes its starting point (x, y) coordinates, and when you resize the window, you change the width and height of the viewBox.
Inserting SVG in Web Pages There are various ways to add SVG to a webpage. The most common ones are: with an img tag with the CSS background-image property inline in the HTML with an object , iframe or embed tag See all these examples live on Glitch: https://flavio-svg-loading-ways.glitch.me/
With an img tag
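For example, assuming a hypothetical flag.svg file (the same used below):

<img src="flag.svg" alt="Flag" />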
With the CSS background-image property .svg-background { background-image: url(flag.svg); height: 200px; width: 300px; }
Inline in the HTML
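A sketch of such an inline SVG, a simple Italian flag (the colors are approximate):

<svg width="300" height="200" viewBox="0 0 3 2">
  <title>Italian Flag</title>
  <desc>By Flavio Copes https://flaviocopes.com</desc>
  <rect width="1" height="2" x="0" fill="#008C45" />
  <rect width="1" height="2" x="1" fill="#F4F5F0" />
  <rect width="1" height="2" x="2" fill="#CD212A" />
</svg>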
With an object , iframe or embed tag
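For example, again assuming a flag.svg file (the my-svg-embed id is referenced right below):

<object data="flag.svg" type="image/svg+xml"></object>

<iframe src="flag.svg" frameborder="0"></iframe>

<embed id="my-svg-embed" src="flag.svg" type="image/svg+xml" />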
Using embed you have the option to get the SVG document from the parent document using document.getElementById('my-svg-embed').getSVGDocument()
and from inside the SVG you can reference the parent document with: window.parent.document
Inline SVG using a Data URL You can use any of the above examples combined with Data URLs to inline the SVG in the HTML:
and in CSS too: .svg-background {
background-image: url('data:image/svg+xml;'); }
Just replace it with the appropriate Data URL.
Styling elements
Any SVG element can accept a style attribute, just like HTML tags. Not all CSS properties work as you would expect, due to the SVG nature. For example, to change the color of a text element, use fill instead of color . You can also use fill as an element attribute, as you saw before:
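For instance, two hypothetical text elements showing both options:

<text x="20" y="20" style="fill: green">A nice text</text>

<text x="20" y="60" fill="green">A nice text</text>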
Other common properties are:
fill-opacity : background color opacity
stroke : defines the border color
stroke-width : sets the width of the border
CSS can target SVG elements like you would target HTML tags: rect { fill: red; } circle { fill: blue; }
Interacting with a SVG with CSS or JavaScript
SVG images can be styled using CSS, or scripted with JavaScript, in those cases:
when the SVG is inlined in the HTML
when the image is loaded through object , embed or iframe tags
but (⚠ depending on the browser implementation) they must be loaded from the same domain (and protocol), due to the same-origin policy. iframe needs to be explicitly sized, otherwise the content is cropped, while object and embed resize to fit their content.
If the SVG is loaded using an img tag, or through CSS as a background, independently of the origin:
CSS and JavaScript cannot interact with it
JavaScript contained in the SVG is disabled
external resources like images, stylesheets, scripts, fonts cannot be loaded
In detail:

Feature | Inline SVG | object / embed / iframe | img
Can interact with the user | ✅ | ✅ | ✅
Supports animations | ✅ | ✅ | ✅
Can run its own JavaScript | ✅ | ✅ |
Can be scripted from outside | ✅ | |
Inline SVG images are definitely the most powerful and flexible, and it's the only way to perform certain operations with SVG. If you want to do any interaction with the SVG with your scripts, it must be loaded inline in the HTML. Loading an SVG in an img , object or embed works if you don't need to interact with it, just show it in the page, and it's especially convenient if you reuse SVG images in different pages, or the SVG size is quite big.
CSS inside SVG Add the CSS in a CDATA:
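A sketch of a style block wrapped in CDATA inside the SVG (the id is made up):

<svg>
  <style>
    <![CDATA[
      #my-rect {
        fill: blue;
      }
    ]]>
  </style>
  <rect id="my-rect" x="0" y="0" width="10" height="10" />
</svg>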
An SVG file can also include an external style sheet:
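For example, with an xml-stylesheet processing instruction at the top of the .svg file (style.css is a hypothetical file name):

<?xml-stylesheet type="text/css" href="style.css"?>
<svg xmlns="http://www.w3.org/2000/svg" width="10" height="10">
  <rect x="0" y="0" width="10" height="10" />
</svg>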
JavaScript inside SVG
You can put the JavaScript first, and wrap it in a load event to execute it when the page is fully loaded and the SVG is inserted in the DOM:

<script>
<![CDATA[
window.addEventListener('load', () => {
  //...
}, false)
]]>
</script>
or you can avoid adding an event listener if you put the JS at the end of the other SVG code, to make sure the JavaScript runs when the SVG is present in the page:
SVG elements, just like html tags, can have id and class attributes, so we can use the Selectors API to reference them:
Check out this Glitch https://flaviocopes-svg-script.glitch.me/ for an example of this functionality.
JavaScript outside the SVG If you can interact with the SVG (the SVG is inline in the HTML), you can change any SVG attribute using JavaScript, for example: document.getElementById('my-svg-rect').setAttribute('fill', 'black')
or really do any other DOM manipulation you want.
CSS outside the SVG
You can change any styling of the SVG image using CSS. SVG attributes can be easily overwritten in CSS, as they have a lower priority than CSS. They do not behave like inline CSS, which has higher priority.
#my-rect { fill: red }
SVG vs Canvas API The Canvas API is a great addition to the Web Platform, and it has similar browser support as SVG. The main (and big) difference with SVG is that Canvas is not vector based, but rather pixel based, so
it has the same scaling issues as pixel-based PNG, JPG and GIF image formats it makes it impossible to edit a Canvas image using CSS or JavaScript like you can do with SVG
SVG Symbols Symbols let you define an SVG image once, and reuse it in multiple places. This is a great help if you need to reuse an image, and maybe just change a bit some of its properties. You do so by adding a symbol element and assigning an id attribute:
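A sketch of how a symbol can be defined and reused twice (the id and sizes are made up):

<svg class="hidden">
  <symbol id="rectangle" viewBox="0 0 20 20">
    <rect x="0" y="0" width="300" height="300" fill="rgb(255, 159, 0)" />
  </symbol>
</svg>

<svg>
  <use xlink:href="#rectangle" href="#rectangle" />
</svg>

<svg>
  <use xlink:href="#rectangle" href="#rectangle" />
</svg>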
( xlink:href is for Safari support, even if it's a deprecated attribute.) This starts to give an idea of the power of SVG. What if you want to style those 2 rectangles differently, for example using a different color for each? You can use CSS Variables.
Validate an SVG An SVG file, being XML, can be written in an invalid format, and some services or apps might not accept an invalid SVG file. SVG can be validated using the W3C Validator.
Should I include the xmlns attribute?
Sometimes an svg is defined without the xmlns attribute, and sometimes with it:
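In other words (keeping the element content elided):

<svg>
  ...
</svg>

<svg xmlns="http://www.w3.org/2000/svg">
  ...
</svg>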
This second form is XHTML. It can also be used with HTML5 (documents with an HTML5 doctype), but in this case the first form is simpler.
Should I worry about browser support?
In 2018 SVG is supported by the vast majority of users' browsers. You can still check for missing support using libraries like Modernizr, and provide a fallback:
if (!Modernizr.svg) {
  $('.my-svg').attr('src', 'images/logo.png')
}
Data URLs A Data URL is a URI scheme that provides a way to inline data in a document, and it's commonly used to embed images in HTML and CSS
Introduction
How does a Data URL look
Browser support
Security
Introduction A Data URL is a URI scheme that provides a way to inline data in an HTML document. Say you want to embed a small image. You could go the usual way, upload it to a folder and use the img tag to make the browser reference it from the network:
or you can encode it in a special format, called Data URL, which makes it possible to embed the image directly in the HTML document, so the browser does not have to make a separate request to get it.
Data URLs might save some time for small files, but for bigger files there are downsides in the increased HTML file size, and they are especially a problem if the image is used on all your pages, as you can't take advantage of the browser caching capabilities. Also, an image encoded as a Data URL is generally a bit bigger than the same image in binary format. They aren't very practical if your images are frequently edited, since every time the image changes it must be encoded again. That said, they have their place on the Web Platform.
How does a Data URL look
A Data URL is a string that starts with data: followed by the MIME type of the data. For example a PNG image has MIME type image/png . This is followed by a comma and then by the actual data. Text is usually transferred in plain text, while binary data is usually base64 encoded. A base64 encoded Data URL starts with something like data:image/png;base64 :
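For instance (hypothetical examples; the base64 payload is truncated):

data:text/plain,hello%20world

data:image/png;base64,iVBORw0KGgoAAAANSUhEUg...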
This site has a very nice Data URL generator: https://dopiaza.org/tools/datauri/index.php which you can use to transform any image on your desktop into a Data URL. Data URLs can be used anywhere a URL can be used: as you saw you can use them for links, but it's also common to use them in CSS:
.main {
  background-image: url('data:image/png;base64,iVBORw0KGgoAA...');
}
Browser support They are supported in all modern browsers.
Security
Data URLs can encode any kind of information, not just images, and so they come with their own set of security implications. From Wikipedia:
The data URI can be utilized to construct attack pages that attempt to obtain usernames and passwords from unsuspecting web users. It can also be used to get around cross-site scripting (XSS) restrictions, embedding the attack payload fully inside the address bar, and hosted via URL shortening services rather than needing a full website that is controlled by a third party.
Check this article from the Mozilla Firefox Blog for more information on how Data URLs can be used by malicious users to do bad things, and how the Firefox browser mitigates such risks. Data URLs are defined in RFC 2397.
CORS An introduction to Cross-Origin Resource Sharing, the way to let clients and servers communicate even if they are not on the same domain
A JavaScript application running in the browser can usually only access HTTP resources on the same domain (origin) that serves it. Loading images or scripts/styles always works, but XHR and Fetch calls to another server will fail, unless that server implements a way to allow that connection. This way is called CORS, Cross-Origin Resource Sharing. Loading Web Fonts using @font-face is also subject to the same-origin policy by default, as are other less popular things (like WebGL textures and drawImage resources loaded in the Canvas API). One very important thing that needs CORS is ES Modules, recently introduced in modern browsers. If you don't set up a CORS policy on the server that allows serving third-party origins, the request will fail. Fetch example:
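For example, a fetch() call pointing at a hypothetical third-party endpoint that does not allow your origin will be blocked by the browser:

fetch('https://example.com/api/data')
  .then(response => response.json())
  .then(data => console.log(data))
  .catch(error => console.error(error)) //the browser reports a CORS error in the console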
XHR example:
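The same cross-origin request made with XMLHttpRequest (again using a hypothetical endpoint) fails the same way:

const xhr = new XMLHttpRequest()
xhr.open('GET', 'https://example.com/api/data')
xhr.onload = () => console.log(xhr.responseText)
xhr.onerror = () => console.error('Request failed') //a CORS error is reported in the console
xhr.send()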
A Cross-Origin request fails if it's:
to a different domain
to a different subdomain
to a different port
to a different protocol
and it's there for your security, to prevent malicious users from exploiting the Web Platform. But if you control both the server and the client, you have all the good reasons to allow them to talk to each other. How? It depends on your server-side stack.
Browser support
Pretty good (basically all except IE).
How you allow cross-origin requests depends on your stack. With a Node.js and Express server, for example, you can use the cors middleware on a route (a minimal sketch, assuming a route named /with-cors ):

const express = require('express')
const cors = require('cors')
const app = express()

app.get('/with-cors', cors(), (req, res, next) => {
  res.json({ msg: 'WHOAH with CORS it works!' })
})

/* the rest of the app */
I made a simple Glitch example. Here is the client working, and here's its code: https://glitch.com/edit/#!/flavio-cors-client. This is the Node.js Express server: https://glitch.com/edit/#!/flaviocopes-cors-example-express Note how the request that fails because it does not handle the CORS headers correctly is still received, as you can see in the Network panel, where you find the message the server sent:
Allow only specific origins
This example has a problem however: ANY request will be accepted by the server as cross-origin. As you can see in the Network panel, the request that passed has a response header access-control-allow-origin: * :
You need to configure the server to only allow one origin to serve, and block all the others. Using the same cors Node library, here's how you would do it: const cors = require('cors') const corsOptions = { origin: 'https://yourdomain.com' } app.get('/products/:id', cors(corsOptions), (req, res, next) => { //... })
You can serve more as well: const whitelist = ['http://example1.com', 'http://example2.com'] const corsOptions = { origin: function(origin, callback) { if (whitelist.indexOf(origin) !== -1) { callback(null, true) } else { callback(new Error('Not allowed by CORS')) } } }
Preflight
There are some requests that are handled in a "simple" way. All GET and HEAD requests belong to this group, and so do POST requests, if they satisfy the requirement of using a Content-Type of:
application/x-www-form-urlencoded
multipart/form-data
text/plain
All other requests must run through a pre-approval phase, called preflight. The browser does this to determine if it has the permission to perform an action, by issuing an OPTIONS request. A preflight request contains a few headers that the server will use to check permissions (irrelevant fields omitted):
OPTIONS /the/resource/you/request Access-Control-Request-Method: POST Access-Control-Request-Headers: origin, x-requested-with, accept Origin: https://your-origin.com
The server will respond with something like this (irrelevant fields omitted):
HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://your-origin.com
Access-Control-Allow-Methods: POST, GET, OPTIONS, DELETE
We checked for POST, but the server tells us we can also issue other HTTP request types for that particular resource. Following the Node.js Express example above, the server must also handle the OPTIONS request: var express = require('express') var cors = require('cors') var app = express() //allow OPTIONS on just one resource app.options('/the/resource/you/request', cors()) //allow OPTIONS on all resources app.options('*', cors())
Web Workers Learn the way to run JavaScript code in the background using Web Workers
Introduction
Browser support for Web Workers
Create a Web Worker
Communication with a Web Worker
Using postMessage in the Web Worker object
Send back messages
Multiple event listeners
Using the Channel Messaging API
Web Worker Lifecycle
Loading libraries in a Web Worker
APIs available in Web Workers
Introduction JavaScript is single threaded. Nothing can run in parallel at the same time.
This is great because we don't need to worry about a whole set of issues that would happen with concurrent programming. With this limitation, JavaScript code is forced to be efficient from the start, otherwise the user would have a bad experience. Expensive operations should be asynchronous to avoid blocking the thread. As the needs of JavaScript applications grew, this started to become a problem in some scenarios. Web Workers introduce the possibility of parallel execution inside the browser. They have quite a few limitations:
no access to the DOM: the Window object and the Document object
they can communicate back with the main JavaScript program using messaging
they need to be loaded from the same origin (domain, port and protocol)
they don't work if you serve the page using the file protocol ( file:// )
The global scope of a Web Worker, instead of Window which is in the main thread, is a WorkerGlobalScope object.
Browser support for Web Workers Pretty good!
You can check for Web Workers support using if (typeof Worker !== 'undefined') { }
Create a Web Worker You create a Web Worker by initializing a Worker object, loading a JavaScript file from the same origin: const worker = new Worker('worker.js')
Communication with a Web Worker There are two main ways to communicate to a Web Worker: the postMessage API offered by the Web Worker object the Channel Messaging API
Using postMessage in the Web Worker object
You can send messages using postMessage on the Worker object. Important: a message is transferred, not shared. main.js const worker = new Worker('worker.js') worker.postMessage('hello')
worker.js
onmessage = e => {
  console.log(e.data)
}
onerror = e => {
  console.error(e.message)
}
Send back messages
A worker can send messages back to the function that created it, using postMessage() on its global scope:
worker.js
onmessage = e => {
  console.log(e.data)
  postMessage('hey')
}
onerror = e => {
  console.error(e.message)
}
main.js const worker = new Worker('worker.js') worker.postMessage('hello') worker.onmessage = e => { console.log(e.data) }
Multiple event listeners If you want to setup multiple listeners for the message event, instead of using onmessage create an event listener (applies to the error event as well):
worker.js
self.addEventListener(
  'message',
  e => {
    console.log(e.data)
    postMessage('hey')
  },
  false
)
self.addEventListener(
  'message',
  e => {
    console.log(`I'm curious and I'm listening too`)
  },
  false
)
self.addEventListener(
  'error',
  e => {
    console.log(e.message)
  },
  false
)
main.js const worker = new Worker('worker.js') worker.postMessage('hello') worker.addEventListener( 'message', e => { console.log(e.data) }, false )
Using the Channel Messaging API
Since Workers are isolated from the main thread, they have their own global scope; to communicate with them we need a special API: the Channel Messaging API.
main.js
const worker = new Worker('worker.js')
const messageChannel = new MessageChannel()
messageChannel.port1.addEventListener('message', e => {
  console.log(e.data)
})
messageChannel.port1.start()
worker.postMessage('hello', [messageChannel.port2])
worker.js self.addEventListener('message', e => { console.log(e.data) })
A Web Worker can send messages back by posting a message to messageChannel.port2 , like this: self.addEventListener('message', event => { event.ports[0].postMessage(data) })
Web Worker Lifecycle Web Workers are launched and if they do not stay in listening mode for messages through worker.onmessage or by adding an event listener, they will be shut down as soon as their code
is run through completion. A Web Worker can be stopped using its terminate() method from the main thread, and inside the worker itself using the global method close() : main.js const worker = new Worker('worker.js') worker.postMessage('hello') worker.terminate()
worker.js
onmessage = e => {
  console.log(e.data)
  close()
}
onerror = e => {
  console.error(e.message)
}
Loading libraries in a Web Worker
Web Workers can use the importScripts() global function defined in their global scope: importScripts('../utils/file.js', './something.js')
APIs available in Web Workers As said before, the DOM is not reachable by a Web Worker, so you cannot interact with the window and document objects. Also parent is unavailable.
You can however use many other APIs, which include:
the XHR API
the Fetch API
the Broadcast Channel API
the FileReader API
IndexedDB
the Notifications API
Promises
Service Workers
the Channel Messaging API
the Cache API
the Console API ( console.log() and friends)
the JavaScript Timers ( setTimeout , setInterval ...)
the CustomEvents API: addEventListener() and removeEventListener()
the current URL, which you can access through the location property in read mode
WebSockets
WebGL
SVG Animations
requestAnimationFrame
Learn the API to perform animations and schedule events in a predictable way
requestAnimationFrame() is a relatively recent browser API. It gives a more predictable way to
hook into the browser render cycle. It's currently supported by all modern browsers (and IE 10+)
It's not an API specific to animations, but that's where it is used the most. JavaScript has an event loop. It continuously runs to execute JavaScript. In the past, animations were performed using setTimeout() or setInterval() . You perform a little bit of an animation, and you call setTimeout() to repeat again this code in a few milliseconds from now: const performAnimation = () => { //... setTimeout(performAnimation, 1000 / 60) } setTimeout(performAnimation, 1000 / 60)
You can stop an animation by getting the timeout or interval reference, and clearing it: let timer const performAnimation = () => { //... timer = setTimeout(performAnimation, 1000 / 60) } timer = setTimeout(performAnimation, 1000 / 60) //... clearTimeout(timer)
The 1000 / 60 interval between the performAnimation() calls is determined by the monitor refresh rate, which in most cases is 60 Hz (60 repaints per second), because it's useless to perform a repaint if the monitor cannot show it due to its limitations. It leaves us ~16.6ms at our disposal to display each single frame. The problem with this approach is that even though we specify this precision accurately, the browser might be busy performing other operations, and our setTimeout calls might not make it in time for the repaint, and it's going to be delayed to the next cycle. This is bad because we lose one frame, and in the next the animation is performed 2 times, causing the eye to notice the clunky animation. Check this example on Glitch of an animation built using setTimeout(). requestAnimationFrame is the standard way to perform animations, and it works in a very
different way even though the code looks very similar to the setTimeout/setInterval code:
let request
const performAnimation = () => {
  request = requestAnimationFrame(performAnimation)
  //animate something
}
requestAnimationFrame(performAnimation)
//... cancelAnimationFrame(request) //stop the animation
This example on Glitch of an animation built using requestAnimationFrame() shows how it works in practice.
Optimization
requestAnimationFrame() has been very CPU friendly since its introduction, causing animations to stop if the current window or tab is not visible. At the time requestAnimationFrame() was introduced, setTimeout/setInterval did run even if the tab was hidden, but now, since this approach proved to be beneficial for battery savings as well, browsers also implemented throttling for those events, allowing at most 1 execution per second. Using requestAnimationFrame the browser can further optimize the resource consumption and make the animations smoother.
Timeline examples This is the perfect timeline if you use setTimeout or setInterval:
you have a set of paint (green) and render (purple) events, and your code runs in the yellow box. By the way, these are the colors used in the Browser DevTools as well to represent the timeline:
The illustration shows the perfect case. You have painting and rendering on every frame (roughly every 16ms), and your animation happens in between, perfectly regular. If you used a higher frequency call for your animation function:
Notice how in each frame we call 4 animation steps before any rendering happens, and this will make the animation feel very choppy. What if setTimeout cannot run on time due to other code blocking the event loop? We end up with a missed frame:
What if an animation step takes a little bit more than you anticipate?
the render and paint events will be delayed as well. This is how requestAnimationFrame() works visually:
all the animation code runs before the rendering and painting events. This makes for a more predictable code, and there's a lot of time to do the animation without worrying about going past the 16ms time we have at our disposal.
Console API Every browser exposes a console that lets you interact with the Web Platform APIs and also gives you an inside look at the code by printing messages that are generated by your JavaScript code running in the page
Every browser exposes a console that lets you interact with the Web Platform APIs and also gives you an inside look at the code by printing messages that are generated by your JavaScript code running in the page.
Overview of the console
Use console.log formatting
Clear the console
Counting elements
Log more complex objects
Logging different error levels
Preserve logs during navigation
Grouping console messages
Print the stack trace
Calculate the time spent
Generate a CPU profile
Overview of the console
The console toolbar is simple. There's a button to clear the console messages, something you can also do by pressing cmd-K in macOS, or ctrl-L on Windows, and a second button that activates a filtering sidebar that lets you filter by text, or by type of message, for example error, warning, info, log, or debug messages. You can also choose to hide network-generated messages, and just focus on the JavaScript log messages.
The console is not just a place where you can see messages, but also the best way to interact with JavaScript code, and many times the DOM. Or, just get information from the page. Let's type our first message. Notice the >, let's click there and type console.log('test')
The console acts as a REPL, which means read–eval–print loop. In short, it interprets our JavaScript code and prints something.
Use console.log formatting As you see, console.log('test') prints 'test' in the Console. Using console.log in your JavaScript code can help you debug for example by printing static strings, but you can also pass it a variable, which can be a JavaScript native type (for example an integer) or an object. You can pass multiple variables to console.log , for example: console.log('test1', 'test2')
We can also format pretty phrases by passing variables and a format specifier. For example: console.log('My %s has %d years', 'cat', 2)
%s format a variable as a string
%d or %i format a variable as an integer
%f format a variable as a floating point number
%o can be used to print a DOM Element
%O used to print an object representation
Another useful format specifier is %c , which allows to pass CSS to format a string. For example: console.log( '%c My %s has %d years', 'color: yellow; background:black; font-size: 16pt', 'cat', 2 )
Clear the console There are three ways to clear the console while working on it, with various input methods. The first way is to click the Clear Console Log button on the console toolbar.
The second method is to type console.clear() inside the console, or in a JavaScript function that runs in your app / site. You can also just type clear() . The third way is through a keyboard shortcut: cmd-K (Mac) or ctrl-L (Win).
Counting elements console.count() is a handy method.
Take this code: const x = 1 const y = 2 const z = 3 console.count( 'The value of x is ' + x + ' and has been checked .. how many times?' ) console.count( 'The value of x is ' + x + ' and has been checked .. how many times?' ) console.count( 'The value of y is ' + y + ' and has been checked .. how many times?' )
What happens is that count will count the number of times a string is printed, and print the count next to it:
You can just count apples and oranges: const oranges = ['orange', 'orange'] const apples = ['just one apple'] oranges.forEach(fruit => { console.count(fruit) }) apples.forEach(fruit => { console.count(fruit) })
Log more complex objects console.log is pretty amazing to inspect variables. You can pass it an object too, and it will do
its best to print it to you in a readable way. Most of the times this means it prints a string representation of the object. For example try console.log([1, 2])
Another option to print objects is to use console.dir : console.dir([1, 2])
As you can see this method prints the variable in a JSON-like representation, so you can inspect all its properties. The same thing that console.dir outputs is achievable by doing console.log('%O', [1, 2])
Which one to use depends on what you need to debug of course, and one of the two can do the best job for you. Another function is console.table() which prints a nice table. We just need to pass it an array of elements, and it will print each array item in a new row. For example console.table([[1, 2], ['x', 'y']])
or you can also set column names, by passing instead of an array, an Object Literal, so it will use the object property as the column name console.table([ { x: 1, y: 2, z: 3 }, { x: 'First column', y: 'Second column', z: null } ])
console.table can also be more powerful and if you pass it an object literal that in turn
contains an object, and you pass an array with the column names, it will print a table with the row indexes taken from the object literal. For example: const shoppingCart = {} shoppingCart.firstItem = { color: 'black', size: 'L' } shoppingCart.secondItem = { color: 'red', size: 'L' } shoppingCart.thirdItem = { color: 'white', size: 'M' } console.table(shoppingCart, ['color', 'size'])
Logging different error levels As we saw console.log is great for printing messages in the Console. We'll now discover three more handy methods that will help us debug, because they implicitly indicate various levels of error. First, console.info() As you can see a little 'i' is printed beside it, making it clear the log message is just an information. Second, console.warn() prints a yellow exclamation point. If you activate the Console filtering toolbar, you can see that the Console allows you to filter messages based on the type, so it's really convenient to differentiate messages because for example if we now click 'Warnings', all the printed messages that are not warnings will be hidden. The third function is console.error() this is a bit different than the others because in addition to printing a red X which clearly states there's an error, we have the full stack trace of the function that generated the error, so we can go and try to fix it.
Preserve logs during navigation Console messages are cleared on every page navigation, unless you check the Preserve log in the console settings:
Grouping console messages The Console messages can grow in size and the noise when you're trying to debug an error can be overwhelming. To limit this problem the Console API offers a handy feature: Grouping the Console messages. Let's do an example first. console.group('Testing the location') console.log('Location hash', location.hash) console.log('Location hostname', location.hostname) console.log('Location protocol', location.protocol) console.groupEnd()
As you can see the Console creates a group, and there we have the Log messages. You can do the same, but output a collapsed message that you can open on demand, to further limit the noise: console.groupCollapsed('Testing the location') console.log('Location hash', location.hash) console.log('Location hostname', location.hostname) console.log('Location protocol', location.protocol) console.groupEnd()
The nice thing is that those groups can be nested, so you can end up doing console.group('Main') console.log('Test') console.group('1') console.log('1 text') console.group('1a') console.log('1a text') console.groupEnd() console.groupCollapsed('1b') console.log('1b text') console.groupEnd() console.groupEnd()
Print the stack trace There might be cases where it's useful to print the call stack trace of a function, maybe to answer the question how did you reach that part of code? You can do so using console.trace() : const function2 = () => console.trace() const function1 = () => function2() function1()
Calculate the time spent You can easily calculate how much time a function takes to run, using time() and timeEnd() const doSomething = () => console.log('test') const measureDoingSomething = () => { console.time('doSomething()') //do something, and measure the time it takes doSomething() console.timeEnd('doSomething()') } measureDoingSomething()
Generate a CPU profile The DevTools allow you to analyze the CPU profile performance of any function.
You can start that manually, but the most accurate way to do so is to wrap what you want to monitor between the profile() and profileEnd() commands. They are similar to time() and timeEnd() , except they don't just measure time, but create a more detailed report. const doSomething = () => console.log('test') const measureDoingSomething = () => { console.profile('doSomething()') //do something, and measure its performance doSomething() console.profileEnd() } measureDoingSomething()
WebSockets WebSockets are an alternative to HTTP communication in Web Applications. They offer a long lived, bidirectional communication channel between client and server. Learn how to use them to perform network interactions WebSockets are an alternative to HTTP communication in Web Applications. They offer a long lived, bidirectional communication channel between client and server. Once established, the channel is kept open, offering a very fast connection with low latency and overhead.
Browser support for WebSockets WebSockets are supported by all modern browsers.
How WebSockets differ from HTTP
HTTP is a very different protocol, and also a different way of communicating. HTTP is a request/response protocol: the server returns some data when the client requests it. With WebSockets:
the server can send a message to the client without the client explicitly requesting something
the client and the server can talk to each other simultaneously
very little data overhead needs to be exchanged to send messages, which means low latency communication
WebSockets are great for real-time and long-lived communications. HTTP is great for occasional data exchange and interactions initiated by the client. HTTP is much simpler to implement, while WebSockets require a bit more overhead.
Secured WebSockets Always use the secure, encrypted protocol for WebSockets, wss:// . ws:// refers to the unsafe WebSockets version (the http:// of WebSockets), and should be
avoided for obvious reasons.
Create a new WebSockets connection const url = 'wss://myserver.com/something' const connection = new WebSocket(url)
connection is a WebSocket object.
When the connection is successfully established, the open event is fired. Listen for it by assigning a callback function to the onopen property of the connection object: connection.onopen = () => { //... }
If there's any error, the onerror function callback is fired:
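For example:

connection.onerror = error => {
  console.log(`WebSocket error: ${error}`)
}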
Sending data to the server using WebSockets Once the connection is open, you can send data to the server. You can do so conveniently inside the onopen callback function: connection.onopen = () => { connection.send('hey') }
Receiving data from the server using WebSockets Listen with a callback function on onmessage , which is called when the message event is received: connection.onmessage = e => { console.log(e.data) }
Implement a server in Node.js ws is a popular WebSockets library for Node.js. We'll use it to build a WebSockets server. It can also be used to implement a client, and use WebSockets to communicate between two backend services. Install it using npm: npm init npm install ws
The code you need to write is very little: const WebSocket = require('ws') const wss = new WebSocket.Server({ port: 8080 })
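A sketch of the connection handling described below, using the ws API:

wss.on('connection', ws => {
  ws.on('message', message => {
    console.log(`Received message => ${message}`)
  })
  ws.send('ho!')
})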
This code creates a new server on port 8080, and adds a callback function when a connection is established, sending ho! to the client and logging the messages it receives.
See a live example on Glitch Here is a live example of a WebSockets server: https://glitch.com/edit/#!/flavio-websocketsserver-example Here is a WebSockets client that interacts with the server: https://glitch.com/edit/#!/flaviowebsockets-client-example
The Speech Synthesis API
The Speech Synthesis API is an awesome API, great for experimenting with new kinds of interfaces and letting the browser talk to you
The Speech Synthesis API is an awesome tool provided by modern browsers. Introduced in 2014, it's now widely adopted and available in Chrome, Firefox, Safari and Edge. IE is not supported.
It's part of the Web Speech API, along with the Speech Recognition API. I used it recently to provide an alert on a page that monitored some parameters. When one of the numbers went up, I was alerted through the computer speakers.
Getting started The most simple example of using the Speech Synthesis API stays on one line: speechSynthesis.speak(new SpeechSynthesisUtterance('Hey'))
Copy and paste it in your browser console, and your computer should speak!
The API The API exposes several objects to the window object.
SpeechSynthesisUtterance SpeechSynthesisUtterance represents a speech request. In the example above we passed it a
string. That's the message the browser should read aloud.
Once you got the utterance object, you can perform some tweaks to edit the speech properties: const utterance = new SpeechSynthesisUtterance('Hey')
utterance.rate : set the speed, accepts between [0.1 - 10], defaults to 1
utterance.pitch : set the pitch, accepts between [0 - 2], defaults to 1
utterance.volume : sets the volume, accepts between [0 - 1], defaults to 1
utterance.lang : set the language (values use a BCP 47 language tag, like en-US or it-IT )
utterance.text : instead of setting it in the constructor, you can pass it as a property. Text can be maximum 32767 characters
utterance.voice : sets the voice (more on this below)
Set a voice The browser has a different number of voices available. To see the list, use this code: console.log(`Voices #: ${speechSynthesis.getVoices().length}`) speechSynthesis.getVoices().forEach(voice => { console.log(voice.name, voice.lang) })
Here is one of the cross browser issues. The above code works in Firefox, Safari (and possibly Edge but I didn't test it), but does not work in Chrome. Chrome requires the voices handling in a different way, and requires a callback that is called when the voices have been loaded: const voiceschanged = () => { console.log(`Voices #: ${speechSynthesis.getVoices().length}`) speechSynthesis.getVoices().forEach(voice => { console.log(voice.name, voice.lang) }) } speechSynthesis.onvoiceschanged = voiceschanged
After the callback is called, we can access the list using speechSynthesis.getVoices() . I believe this is because Chrome - if there is a network connection - checks additional languages from the Google servers:
If there is no network connection, the number of languages available is the same as Firefox and Safari. The additional languages are available where the network is enabled, but the API works offline as well.
Cross browser implementation to get the language
Since we have this difference, we need a way to abstract it before using the API. This example does that abstraction:
const getVoices = () => {
  return new Promise(resolve => {
    let voices = speechSynthesis.getVoices()
    if (voices.length) {
      resolve(voices)
      return
    }
    speechSynthesis.onvoiceschanged = () => {
      voices = speechSynthesis.getVoices()
      resolve(voices)
    }
  })
}
Use a custom language The default voice speaks in english. You can use any language you want, by simply setting the utterance lang property: let utterance = new SpeechSynthesisUtterance('Ciao') utterance.lang = 'it-IT' speechSynthesis.speak(utterance)
Use another voice
If there is more than one voice available, you might want to choose another one. For example the default Italian voice is female, but maybe I want a male voice. That's the second one we get from the voices list.
const lang = 'it-IT'
const voiceIndex = 1

const speak = async text => {
  if (!speechSynthesis) {
    return
  }
  const message = new SpeechSynthesisUtterance(text)
  message.voice = await chooseVoice()
  speechSynthesis.speak(message)
}
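A sketch of the chooseVoice() helper used above, built on the getVoices() function defined earlier (it picks the voice at voiceIndex for the given lang):

const chooseVoice = async () => {
  const voices = (await getVoices()).filter(voice => voice.lang === lang)
  return voices[voiceIndex]
}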
Values for the language Those are some examples of the languages you can use: Arabic (Saudi Arabia) ➡ ar-SA Chinese (China) ➡ zh-CN Chinese (Hong Kong SAR China) ➡ zh-HK Chinese (Taiwan) ➡ zh-TW Czech (Czech Republic) ➡ cs-CZ Danish (Denmark) ➡ da-DK Dutch (Belgium) ➡ nl-BE Dutch (Netherlands) ➡ nl-NL English (Australia) ➡ en-AU English (Ireland) ➡ en-IE English (South Africa) ➡ en-ZA English (United Kingdom) ➡ en-GB English (United States) ➡ en-US Finnish (Finland) ➡ fi-FI French (Canada) ➡ fr-CA French (France) ➡ fr-FR
German (Germany) ➡ de-DE Greek (Greece) ➡ el-GR Hindi (India) ➡ hi-IN Hungarian (Hungary) ➡ hu-HU Indonesian (Indonesia) ➡ id-ID Italian (Italy) ➡ it-IT Japanese (Japan) ➡ ja-JP Korean (South Korea) ➡ ko-KR Norwegian (Norway) ➡ no-NO Polish (Poland) ➡ pl-PL Portuguese (Brazil) ➡ pt-BR Portuguese (Portugal) ➡ pt-PT Romanian (Romania) ➡ ro-RO Russian (Russia) ➡ ru-RU Slovak (Slovakia) ➡ sk-SK Spanish (Mexico) ➡ es-MX Spanish (Spain) ➡ es-ES Swedish (Sweden) ➡ sv-SE Thai (Thailand) ➡ th-TH Turkish (Turkey) ➡ tr-TR
Mobile
On iOS the API works but must be triggered by a user action callback, like a response to a tap event, to provide a better experience to users and avoid unexpected sounds out of your phone. Unlike on desktop, you can't make your web pages speak something out of the blue.
The DOCTYPE Any HTML document must start with a Document Type Declaration, abbreviated Doctype, which tells the browser the version of HTML used in the page
Any HTML document must start with a Document Type Declaration (abbreviated doctype) in the first line, which tells the browser the version of HTML used in the page. This doctype declaration (case insensitive):
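<!DOCTYPE html>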
tells the browser this is an HTML5 document.
Browser rendering mode With this declaration, the browser can render the document in standards mode. Without it, browsers render the page in quirks mode.
If you've never heard of quirks mode, you must know that browsers introduced this rendering mode to make pages written in an "old style" compatible with new functionality and standards used. Without it, as browsers and HTML evolved, old pages would break their appearance, and the Web Platform has historically been very protective in this regard (which I think is part of its success). Browsers basically default to quirks mode unless they recognize the page is written for standards mode. You want standards mode, and
is the way to get it. There's additional care to be put in for older versions of Internet Explorer, which may need to be explicitly told (for example with an X-UA-Compatible meta tag) to use their most recent rendering engine.

Babel

Take this arrow function example:

[1, 2, 3].map((n) => n + 1)
Which is now supported by all modern browsers. IE11 does not support it, nor Opera Mini (How do I know? By checking the ES6 Compatibility Table). So how should you deal with this problem? Should you move on and leave the customers with older/incompatible browsers behind, or should you write older JavaScript code to make all your users happy?
Enter Babel. Babel is a compiler: it takes code written in one standard, and it transpiles it to code written into another standard. You can configure Babel to transpile modern ES2017 JavaScript into JavaScript ES5 syntax: [1, 2, 3].map(function(n) { return n + 1 })
This must happen at build time, so you must setup a workflow that handles this for you. Webpack is a common solution. (P.S. if all this ES thing sounds confusing to you, see more about ES versions in the ECMAScript guide)
Installing Babel Babel is easily installed using npm, locally in a project: npm install --save-dev @babel/core @babel/cli
In the past I recommended installing babel-cli globally, but this is now discouraged by the Babel maintainers, because by using it locally you can have different versions of Babel in each project, and checking in Babel in your repository is better for team play. Since npm now comes with npx , locally installed CLI packages can be run by typing the command in the project folder. So we can run Babel by just running:
npx babel script.js
An example Babel configuration Babel out of the box does not do anything useful, you need to configure it and add plugins. Here is a list of Babel plugins To solve the problem we talked about in the introduction (using arrow functions in every browser), we can run npm install --save-dev \ @babel/plugin-transform-es2015-arrow-functions
to download the package in the node_modules folder of our app, then we need to add { "plugins": ["transform-es2015-arrow-functions"] }
to the .babelrc file present in the application root folder. If you don't have that file already, you just create a blank file, and put that content into it. TIP: If you have never seen a dot file (a file starting with a dot) it might be odd at first because that file might not appear in your file manager, as it's a hidden file. Now if we have a script.js file with this content: var a = () => {}; var a = (b) => b; const double = [1,2,3].map((num) => num * 2); console.log(double); // [2,4,6] var bob = { _name: "Bob", _friends: ["Sally", "Tom"], printFriends() { this._friends.forEach(f => console.log(this._name + " knows " + f)); } }; console.log(bob.printFriends());
running babel script.js will output the following code: var a = function () {};var a = function (b) { return b; }; const double = [1, 2, 3].map(function (num) { return num * 2; });console.log(double); // [2,4,6] var bob = { _name: "Bob", _friends: ["Sally", "Tom"], printFriends() { var _this = this; this._friends.forEach(function (f) { return console.log(_this._name + " knows " + f); }); }
}; console.log(bob.printFriends());
As you can see arrow functions have all been converted to JavaScript ES5 function s.
Babel presets We just saw in the previous article how Babel can be configured to transpile specific JavaScript features. You can add much more plugins, but you can't add to the configuration features one by one, it's not practical. This is why Babel offers presets. The most popular presets are env and react . Tip: Babel 7 deprecated (and removed) yearly presets like preset-es2017 , and stage presets. Use @babel/preset-env instead.
env preset
The env preset is very nice: you tell it which environments you want to support, and it does everything for you, supporting all modern JavaScript features. E.g. "support the last 2 versions of every browser, but for Safari let's support all versions since Safari 7":
{
  "presets": [
    ["env", {
      "targets": {
        "browsers": ["last 2 versions", "safari >= 7"]
      }
    }]
  ]
}
or "I don't need browsers support, just let me work with Node.js 6.10" { "presets": [ ["env", { "targets": { "node": "6.10" } }]
] }
react preset The react preset is very convenient when writing React apps, by adding preset-flow , syntax-jsx , transform-react-jsx , transform-react-display-name .
By including it, you are all ready to go developing React apps, with JSX transforms and Flow support.
More info on presets https://babeljs.io/docs/plugins/
Using Babel with webpack If you want to run modern JavaScript in the browser, Babel on its own is not enough, you also need to bundle the code. Webpack is the perfect tool for this. TIP: read the webpack guide if you're not familiar with webpack Modern JS needs two different stages: a compile stage, and a runtime stage. This is because some ES6+ features need a polyfill or a runtime helper. To install the Babel polyfill runtime functionality, run npm install --save @babel/polyfill \ @babel/runtime \ @babel/plugin-transform-runtime
Now in your webpack.config.js file add: entry: [ 'babel-polyfill', // your app scripts should be here ], module: { loaders: [ // Babel loader compiles ES2015 into ES5 for // complete cross-browser support { loader: 'babel-loader', test: /\.js$/, // only include files present in the `src` subdirectory
include: [path.resolve(__dirname, "src")], // exclude node_modules, equivalent to the above line exclude: /node_modules/, query: { // Use the default ES2015 preset // to include all ES2015 features presets: ['es2015'], plugins: ['transform-runtime'] } } ] }
By keeping the presets and plugins information inside the webpack.config.js file, we can avoid having a .babelrc file.
Yarn
Yarn is a JavaScript Package Manager, a direct competitor of npm, and one of Facebook's most popular Open Source projects

Intro to Yarn
Install Yarn
Managing packages
Initialize a new project
Install the dependencies of an existing project
Install a package locally
Install a package globally
Install a package locally as a development dependency
Remove a package
Inspecting licenses
Inspecting dependencies
Upgrading packages
How to upgrade Yarn
Intro to Yarn
Yarn is a JavaScript Package Manager, a direct competitor of npm, and it's one of Facebook's most popular Open Source projects. It's compatible with npm packages, so it has the great advantage of being a drop-in replacement for npm. The reasons you might want to use Yarn over npm are:
faster download of packages, which are installed in parallel
support for multiple registries
offline installation support
To me offline installation support seems like the killer feature, because once you have installed a package one time from the network, it gets cached and you can recreate a project from scratch without being connected (and without consuming a lot of your data, if you're on a mobile plan). Since some projects could require a huge amount of dependencies, every time you run npm install to initialize a project you might download hundreds of megabytes from the network.
With Yarn, this is done just once.
This is not the only feature, many other goodies are provided by Yarn, which we'll see in this article. In particular Yarn devotes a lot of care to security, by performing a checksum on every package it installs. Tools eventually converge to a set of features that keeps them on the same level to stay relevant, so we'll likely see those features in npm in the future - competition is nice for us users.
Install Yarn While there is a joke around about installing Yarn with npm ( npm install -g yarn ), it's not recommended by the Yarn team. System-specific installation methods are listed at https://yarnpkg.com/en/docs/install. On MacOS for example you can use Homebrew and run brew install yarn
but every operating system has its own package manager of choice that will make the process very smooth. In the end, you'll have the yarn command available in your shell.
Managing packages
Yarn writes its dependencies to a file named package.json , which sits in the root folder of your project, and stores the dependencies files into the node_modules folder, just like npm if you used it in the past.
Initialize a new project

yarn init

starts an interactive prompt that helps you quickly start a project.
Install the dependencies of an existing project

If you already have a package.json file with the list of dependencies but the packages have not been installed yet, run

yarn
or
yarn install
to start the installation process.
Install a package locally

Installing a package into a project is done using

yarn add package-name
This is equivalent to running npm install --save package-name , thus avoiding the invisible dependency issue of running npm install package-name , which does not add the dependency to the package.json file.
Install a package globally

yarn global add package-name
Install a package locally as a development dependency

yarn add --dev package-name
Equivalent to the --save-dev flag in npm
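As a rough sketch (package names and versions below are just placeholders), the two kinds of install end up in different sections of package.json :

{
  "dependencies": {
    "some-package": "^1.0.0"
  },
  "devDependencies": {
    "some-dev-tool": "^2.0.0"
  }
}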
Remove a package

yarn remove package-name
Inspecting licenses

When installing many dependencies, which in turn might have lots of dependencies of their own, you end up with a number of packages whose licenses you know nothing about. Yarn provides a handy tool that prints the license of every dependency you have:

yarn licenses ls
and it can also generate a disclaimer automatically, including all the licenses of the projects you use:

yarn licenses generate-disclaimer
Inspecting dependencies
Do you ever check the node_modules folder and wonder why a specific package was installed? yarn why tells you:
yarn why package-name
Upgrading packages

If you want to upgrade a single package, run

yarn upgrade package-name
To upgrade all your packages, run

yarn upgrade
But this command can sometimes lead to problems, because you're blindly upgrading all the dependencies without worrying about major version changes. Yarn has a great tool to selectively update packages in your project, which is a huge help for this scenario:

yarn upgrade-interactive
How to upgrade Yarn

At the time of writing there is no auto-update command. If you used brew to install it, as suggested above, simply use:

brew upgrade yarn
If instead you installed using npm, use:

npm uninstall yarn -g
npm install yarn -g
Jest

Jest is a library for testing JavaScript code. It's an open source project maintained by Facebook, and it's especially well suited for React code testing, although not limited to that: it can test any JavaScript code. Jest is very fast and easy to use.
Introduction to Jest
Installation
Create the first Jest test
Run Jest with VS Code
Matchers
Setup
Teardown
Group tests using describe()
Testing asynchronous code
Callbacks
Promises
Async/await
Mocking
Spy packages without affecting the functions code
Mock an entire package
Mock a single function
Pre-built mocks
Snapshot testing
Introduction to Jest

Jest is a library for testing JavaScript code. It's an open source project maintained by Facebook, and it's especially well suited for React code testing, although not limited to that: it can test any JavaScript code.

Its strengths are:

it's fast
it can perform snapshot testing
it's opinionated, and provides everything out of the box without requiring you to make choices

Jest is a tool very similar to Mocha, although they have differences:

Mocha is less opinionated, while Jest has a certain set of conventions
Mocha requires more configuration, while Jest usually works out of the box, thanks to being opinionated
Mocha is older and more established, with more tooling integrations

In my opinion the biggest feature of Jest is that it's an out of the box solution that works without having to interact with other testing libraries to perform its job.
Installation

Jest is automatically installed in create-react-app , so if you use that, you don't need to install Jest. Jest can be installed in any other project using Yarn:

yarn add --dev jest
or npm:

npm install --save-dev jest
Notice how we instruct both to put Jest in the devDependencies part of the package.json file, so that it will only be installed in the development environment and not in production.

Add this line to the scripts part of your package.json file:

{
  "scripts": {
    "test": "jest"
  }
}
so that tests can be run using yarn test or npm run test .

Alternatively, you can install Jest globally:

yarn global add jest
and run all your tests using the jest command line tool.
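Once the jest command is available, you can also run a single test file, or keep it watching for changes (the file name below is just an example):

jest math.test.js
jest --watch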
Create the first Jest test

Projects created with create-react-app have Jest installed and preconfigured out of the box, but adding Jest to any project is as easy as typing

yarn add --dev jest
Add to your package.json this line:

{
  "scripts": {
    "test": "jest"
  }
}
and run your tests by executing yarn test in your shell. Now, you don't have any test here, so nothing is going to be executed:
Let's create the first test. Open a math.js file and type a couple functions that we'll later test:

const sum = (a, b) => a + b
const mul = (a, b) => a * b
const sub = (a, b) => a - b
const div = (a, b) => a / b

module.exports = { sum, mul, sub, div }
Now create a math.test.js file, in the same folder, and there we'll use Jest to test the functions defined in math.js :

const { sum, mul, sub, div } = require("./math")

test("Adding 1 + 1 equals 2", () => {
  expect(sum(1, 1)).toBe(2)
})
test("Multiplying 1 * 1 equals 1", () => {
  expect(mul(1, 1)).toBe(1)
})
test("Subtracting 1 - 1 equals 0", () => {
  expect(sub(1, 1)).toBe(0)
})
test("Dividing 1 / 1 equals 1", () => {
  expect(div(1, 1)).toBe(1)
})
Running yarn test results in Jest being run on all the test files it finds, and returning us the end result:
Run Jest with VS Code

Visual Studio Code is a great editor for JavaScript development. The Jest extension offers a top notch integration for our tests. Once you install it, it will automatically detect if you have installed Jest in your devDependencies and run the tests. You can also invoke the tests manually by selecting the Jest: Start Runner command. It will run the tests and stay in watch mode to re-run them whenever you change one of the files that have a test (or a test file):
Matchers

In the previous article I used toBe() as the only matcher:

test("Adding 1 + 1 equals 2", () => {
  expect(sum(1, 1)).toBe(2)
})
A matcher is a method that lets you test values. The most commonly used matchers, comparing the value of the result of expect() with the value passed in as argument, are:

toBe compares strict equality, using ===
toEqual compares the values of two variables. If it's an object or array, it checks equality of all the properties or elements
toBeNull is true when passing a null value
toBeDefined is true when passing a defined value (opposite of the above)
toBeUndefined is true when passing an undefined value
toBeCloseTo is used to compare floating values, avoiding rounding errors
toBeTruthy true if the value is considered true (like an if does)
toBeFalsy true if the value is considered false (like an if does)
toBeGreaterThan true if the result of expect() is higher than the argument
toBeGreaterThanOrEqual true if the result of expect() is equal to the argument, or higher than the argument
toBeLessThan true if the result of expect() is lower than the argument
toBeLessThanOrEqual true if the result of expect() is equal to the argument, or lower than the argument
toMatch is used to compare strings with regular expression pattern matching
toContain is used with arrays, true if the expected array contains the argument in its elements set
toHaveLength(number) : checks the length of an array
toHaveProperty(key, value) : checks if an object has a property, and optionally checks its value
toThrow checks if a function you pass throws an exception (in general) or a specific exception
toBeInstanceOf() : checks if an object is an instance of a class
All those matchers can be negated using .not. inside the statement, for example:

test("Adding 1 + 1 does not equal 3", () => {
  expect(sum(1, 1)).not.toBe(3)
})
For use with promises, you can use .resolves and .rejects :

expect(Promise.resolve('lemon')).resolves.toBe('lemon')
expect(Promise.reject(new Error('octopus'))).rejects.toThrow('octopus')
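Note that inside a test these assertions are themselves asynchronous, so you should return or await them, otherwise the test can finish before the promise settles. A minimal sketch:

test("resolves to lemon", async () => {
  // await the assertion so Jest waits for the promise to settle
  await expect(Promise.resolve('lemon')).resolves.toBe('lemon')
})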
Setup

Before running your tests you will want to perform some initialization. To do something once before all the tests run, use the beforeAll() function:

beforeAll(() => {
  //do something
})
To perform something before each test runs, use beforeEach() :
beforeEach(() => {
  //do something
})
Teardown

Just as you could do with the setup, you can perform something after each test runs:

afterEach(() => {
  //do something
})
and after all tests end:

afterAll(() => {
  //do something
})
Group tests using describe()

You can create groups of tests, in a single file, that isolate the setup and teardown functions:

describe('first set', () => {
  beforeEach(() => {
    //do something
  })
  afterAll(() => {
    //do something
  })
  test(/*...*/)
  test(/*...*/)
})

describe('second set', () => {
  beforeEach(() => {
    //do something
  })
  beforeAll(() => {
    //do something
  })
  test(/*...*/)
  test(/*...*/)
})
Testing asynchronous code

Asynchronous code in modern JavaScript can have basically 2 forms: callbacks and promises. On top of promises we can use async/await.
Callbacks

You can't have a test in a callback, because Jest won't execute it: the execution of the test file ends before the callback is called. To fix this, pass a parameter to the test function, which you can conveniently call done . Jest will wait until you call done() before ending that test:

//uppercase.js
function uppercase(str, callback) {
  callback(str.toUpperCase())
}
module.exports = uppercase

//uppercase.test.js
const uppercase = require('./uppercase')

test(`uppercase 'test' to equal 'TEST'`, (done) => {
  uppercase('test', (str) => {
    expect(str).toBe('TEST')
    done()
  })
})
Promises

With functions that return promises, we simply return a promise from the test:

//uppercase.js
const uppercase = (str) => {
  return new Promise((resolve, reject) => {
    if (!str) {
      reject('Empty string')
      return
    }
    resolve(str.toUpperCase())
  })
}
module.exports = uppercase

//uppercase.test.js
const uppercase = require('./uppercase')

test(`uppercase 'test' to equal 'TEST'`, () => {
  return uppercase('test').then(str => {
    expect(str).toBe('TEST')
  })
})
Promises that are rejected can be tested using .catch() :

//uppercase.js
const uppercase = (str) => {
  return new Promise((resolve, reject) => {
    if (!str) {
      reject('Empty string')
      return
    }
    resolve(str.toUpperCase())
  })
}
module.exports = uppercase

//uppercase.test.js
const uppercase = require('./uppercase')

test(`uppercase of an empty string rejects`, () => {
  expect.assertions(1)
  return uppercase('').catch(e => {
    expect(e).toBe('Empty string')
  })
})
Async/await

To test functions that return promises we can also use async/await, which makes the syntax very straightforward and simple:

//uppercase.test.js
const uppercase = require('./uppercase')

test(`uppercase 'test' to equal 'TEST'`, async () => {
  const str = await uppercase('test')
  expect(str).toBe('TEST')
})
Mocking

In testing, mocking allows you to test functionality that depends on:

Database
Network requests
access to Files
any External system

so that:

1. your tests run faster, giving a quick turnaround time during development
2. your tests are independent of network conditions, or the state of the database
3. your tests do not pollute any data storage because they do not touch the database
4. any change done in a test does not change the state for subsequent tests, and re-running the test suite should start from a known and reproducible starting point
5. you don't have to worry about rate limiting on API calls and network requests

Mocking is useful when you want to avoid side effects (e.g. writing to a database) or you want to skip slow portions of code (like network access), and it also avoids implications with running your tests multiple times (e.g. imagine a function that sends an email or calls a rate-limited API).

Even more important, if you are writing a Unit Test, you should test the functionality of a function in isolation, not with all its baggage of things it touches.

Using mocks, you can inspect if a module function has been called and which parameters were used, with:

expect().toHaveBeenCalled() : check if a spied function has been called
expect().toHaveBeenCalledTimes() : count how many times a spied function has been called
expect().toHaveBeenCalledWith() : check if the function has been called with a specific set of parameters
expect().toHaveBeenLastCalledWith() : check the parameters of the last time the function has been invoked
Spy packages without affecting the functions code

When you import a package, you can tell Jest to "spy" on the execution of a particular function, using spyOn() , without affecting how that method works. Example:

const mathjs = require('mathjs')

test(`The mathjs log function`, () => {
  const spy = jest.spyOn(mathjs, 'log')
  const result = mathjs.log(10000, 10)
  expect(mathjs.log).toHaveBeenCalled()
  expect(mathjs.log).toHaveBeenCalledWith(10000, 10)
})
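If you want the original, un-spied behavior back after the test, jest.spyOn() returns a mock function that you can restore:

// restore the original implementation of mathjs.log
spy.mockRestore()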
Mock an entire package
Jest provides a convenient way to mock an entire package. Create a __mocks__ folder in the project root, and in this folder create one JavaScript file for each of your packages. Say you import mathjs . Create a __mocks__/mathjs.js file in your project root, and add this content:

module.exports = {
  log: jest.fn(() => 'test')
}
This will mock the log() function of the package. Add as many functions as you want to mock:

const mathjs = require('mathjs')

test(`The mathjs log function`, () => {
  const result = mathjs.log(10000, 10)
  expect(result).toBe('test')
  expect(mathjs.log).toHaveBeenCalled()
  expect(mathjs.log).toHaveBeenCalledWith(10000, 10)
})
Mock a single function

More simply, you can mock a single function using jest.fn() :

const mathjs = require('mathjs')

mathjs.log = jest.fn(() => 'test')

test(`The mathjs log function`, () => {
  const result = mathjs.log(10000, 10)
  expect(result).toBe('test')
  expect(mathjs.log).toHaveBeenCalled()
  expect(mathjs.log).toHaveBeenCalledWith(10000, 10)
})
You can also use jest.fn().mockReturnValue('test') to create a simple mock that does nothing except returning a value.
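For example, a quick sketch (the function name here is just a placeholder, not a real API):

test('a mock that returns a fixed value', () => {
  // fetchUser is a stand-in for whatever function you want to fake
  const fetchUser = jest.fn().mockReturnValue({ name: 'test' })

  expect(fetchUser()).toEqual({ name: 'test' })
  expect(fetchUser).toHaveBeenCalledTimes(1)
})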
Pre-built mocks

You can find pre-made mocks for popular libraries. For example this package https://github.com/jefflau/jest-fetch-mock allows you to mock fetch() calls, and provide sample return values without interacting with the actual server in your tests.
Snapshot testing
Snapshot testing is a pretty cool feature offered by Jest. It can memorize how your UI components are rendered, and compare it to the current test, raising an error if there's a mismatch.

This is a simple test on the App component of a simple create-react-app application (make sure you install react-test-renderer ):

import React from 'react'
import App from './App'
import renderer from 'react-test-renderer'

it('renders correctly', () => {
  const tree = renderer
    .create(<App />)
    .toJSON()
  expect(tree).toMatchSnapshot()
})
the first time you run this test, Jest saves the snapshot to the __snapshots__ folder. Here's what App.test.js.snap contains (the full markup rendered by the default App component, including the "Welcome to React" heading and the "To get started, edit src/App.js and save to reload." paragraph):

// Jest Snapshot v1, https://goo.gl/fbAQLP
exports[`renders correctly 1`] = `...`;
As you can see, it's the markup that the App component renders, nothing more. The next time it runs, the test compares the output of <App /> to this snapshot. If App changes, you get an error:
When using yarn test in create-react-app you are in watch mode, and from there you can press w and show more options:

Watch Usage
 › Press u to update failing snapshots.
 › Press p to filter by a filename regex pattern.
 › Press t to filter by a test name regex pattern.
 › Press q to quit watch mode.
 › Press Enter to trigger a test run.
If your change is intended, pressing u will update the failing snapshots, and make the test pass. You can also update the snapshot by running jest -u (or jest --updateSnapshot ) outside of watch mode.
ESLint

Learn the basics of the most popular JavaScript linter, which can help make your code adhere to a certain set of syntax conventions, check if the code contains possible sources of problems and check if the code matches a set of standards you or your team define.

What is a linter?
ESLint
Install ESLint globally
Install ESLint locally
Use ESLint in your favourite editor
Common ESLint configurations
Airbnb style guide
React
Use a specific version of ECMAScript
Force strict mode
More advanced rules
Disabling rules on specific lines

ESLint is a JavaScript linter.
What is a linter?

Good question! A linter is a tool that identifies issues in your code. Running a linter against your code can tell you many things:

if the code adheres to a certain set of syntax conventions
if the code contains possible sources of problems
if the code matches a set of standards you or your team define

It will raise warnings that you, or your tools, can analyze and give you actionable data to improve your code.
ESLint

ESLint is a linter for the JavaScript programming language, written in Node.js. It is hugely useful because JavaScript, being an interpreted language, does not have a compilation step, and many errors can only be found at runtime.
ESLint will help you catch those errors. Which errors in particular, you ask?

avoid infinite loops in the for loop conditions
make sure all getter methods return something
disallow console.log (and similar) statements
check for duplicate cases in a switch
check for unreachable code
check for JSDoc validity

and much more! The full list is available at https://eslint.org/docs/rules/

The growing popularity of Prettier as a code formatter made the styling part of ESLint kind of obsolete, but ESLint is still very useful to catch errors and code smells in your code.

ESLint is very flexible and configurable, and you can choose which rules you want to check for, or which kind of style you want to enforce. Many of the available rules are disabled and you can turn them on in your .eslintrc configuration file, which can be global or specific to your project.
Install ESLint globally

Using npm:

npm install -g eslint

# create a `.eslintrc` configuration file
eslint --init

# run ESLint against any file with
eslint yourfile.js
Install ESLint locally

npm install eslint --save-dev

# create a `.eslintrc` configuration file
./node_modules/.bin/eslint --init

# run ESLint against any file with
./node_modules/.bin/eslint yourfile.js
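With npm 5.2 and up you can also run the locally installed ESLint through npx, which saves you from typing the node_modules path:

npx eslint --init
npx eslint yourfile.js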
Use ESLint in your favourite editor

The most common use of ESLint is within your editor, of course.
Common ESLint configurations

ESLint can be configured in tons of different ways.
Airbnb style guide

A common setup is to use the Airbnb JavaScript coding style to lint your code. Run

yarn add --dev eslint-config-airbnb
or

npm install --save-dev eslint-config-airbnb
to install the Airbnb configuration package, and add this in the .eslintrc file in the root of your project:

{
  "extends": "airbnb"
}
React

Linting React code is easy with the React plugin:

yarn add --dev eslint-plugin-react
Use a specific version of ECMAScript

ECMAScript changes version every year now. The default is currently set to 5, which means pre-2015. Turn on ES6 (or higher) by setting this property in .eslintrc :

{
  "parserOptions": {
    "ecmaVersion": 6
  }
}
More advanced rules

A detailed guide on rules can be found on the official site at https://eslint.org/docs/user-guide/configuring
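To give an idea of what configuring individual rules looks like, here's a small sketch of a rules block in .eslintrc (the specific rules and levels below are just examples):

{
  "rules": {
    "no-console": "off",
    "quotes": ["error", "single"],
    "semi": ["error", "never"]
  }
}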
Disabling rules on specific lines

Sometimes a rule might give a false positive, or you might be explicitly willing to take a route that ESLint flags. In this case, you can disable ESLint entirely on a few lines:

/* eslint-disable */
alert('test');
/* eslint-enable */
or on a single line:

alert('test'); // eslint-disable-line
or just disable one or more specific rules for multiple lines:

/* eslint-disable no-alert, no-console */
alert('test');
console.log('test');
/* eslint-enable no-alert, no-console */
or for a single line:

alert('test'); // eslint-disable-line no-alert, quotes, semi
Prettier

Prettier is an opinionated code formatter. It is a great way to keep code formatted consistently for you and your team, and it supports a lot of different languages out of the box.
Introduction to Prettier
Less options
Difference with ESLint
Installation
Prettier for beginners
Introduction to Prettier

Prettier is an opinionated code formatter.
It supports a lot of different syntax out of the box, including:
JavaScript
Flow, TypeScript
CSS, SCSS, Less
JSX
GraphQL
JSON
Markdown

and with plugins you can use it for Python, PHP, Swift, Ruby, Java and more.

It integrates with the most popular code editors, including VS Code, Sublime Text, Atom and more.

Prettier is hugely popular: in February 2018 it was downloaded over 3.5 million times.

The most important links you need to know more about Prettier are:

https://prettier.io/
https://github.com/prettier/prettier
https://www.npmjs.com/package/prettier
Less options

I learned Go recently and one of the best things about Go is gofmt, an official tool that automatically formats your code according to common standards. 95% (made up stat) of the Go code around looks exactly the same, because this tool can be easily enforced, and since the style is defined for you by the Go maintainers, you are much more likely to adapt to that standard instead of insisting on your own style. Like tabs vs spaces, or where to put an opening bracket.

This might sound like a limitation, but it's actually very powerful. All Go code looks the same.

Prettier is the gofmt for the rest of the world. It has very few options, and most of the decisions are already taken for you, so you can stop arguing about style and little things, and focus on your code.
Difference with ESLint ESLint is a linter, it does not just format, but it also highlights some errors thanks to its static analysis of the code. It is an invaluable tool and it can be used alongside Prettier.
ESLint also highlights formatting issues, but since it's a lot more configurable, everyone could have a different set of formatting rules. Prettier provides a common ground for all.

Now, there are a few things you can customize, like:

the tab width
the use of single quotes vs double quotes
the line width (number of columns)
the use of trailing commas

and some others, but Prettier tries to keep the number of those customizations under control, to avoid becoming too customizable.
Installation

Prettier can run from the command line, and you can install it using Yarn or npm. Another great use case for Prettier is to run it on PRs for your Git repositories, for example on GitHub.

If you use a supported editor, the best thing is to use Prettier directly from the editor, and the Prettier formatting will be run every time you save. For example here is the Prettier extension for VS Code: https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode
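Going back to the command line workflow, a minimal sketch of installing and running Prettier looks like this (the glob below is just an example path):

yarn add --dev prettier
# or: npm install --save-dev prettier

# format all JavaScript files under src in place
npx prettier --write "src/**/*.js"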
Prettier for beginners

If you think Prettier is just for teams, or for pro users, you are missing a good value proposition of this tool. A good style enforces good habits.

Formatting is a topic that's mostly overlooked by beginners, but having clean and consistent formatting is key to your success as a new developer. Also, even if you started using JavaScript 2 weeks ago, with Prettier your code, style-wise, will look just like code written by a JavaScript guru who's been writing JS since 1998.
Browser DevTools

The Browser DevTools are a fundamental element in the frontend developer toolbox, and they are available in all modern browsers. Discover the basics of what they can do for you.
The Browser DevTools
HTML Structure and CSS
The HTML panel
The CSS styles panel
The Console
Executing custom JavaScript
Error reporting
The emulator
The network panel
JavaScript debugger
Application and Storage
Storage
Application
Security tab
Audits
The Browser DevTools

I don't think there was ever a time when websites and web applications were easy to build, as far as backend technologies go, but client-side development was surely easier than it is now, generally speaking. Once you figured out the differences between Internet Explorer and Netscape Navigator, and avoided the proprietary tags and technology, all you had to use was HTML and later CSS.

JavaScript was a tech for creating dialog boxes and a little bit more, but was definitely not as pervasive as today. Although lots of web pages are still plain HTML + CSS, like this page, many other websites are real applications that run in the browser.

Just providing the source of the page, like browsers did once upon a time, was not enough. Browsers had to provide much more information on how they rendered the page, and what the page is currently doing, hence they introduced a feature for developers: their developer tools.

Every browser is different and so their dev tools are slightly different. At the time of writing my favorite developer tools are provided by Chrome, and this is the browser we'll talk about here, although Firefox and Edge have great tools as well. I will soon add coverage of the Firefox DevTools.
HTML Structure and CSS

The most basic form of usage, and a very common one, is inspecting the content of a webpage. When you open the DevTools, the Elements panel is what you see first:
The HTML panel

On the left, the HTML that composes the page. Hovering the elements in the HTML panel highlights the element in the page, and clicking the first icon in the toolbar allows you to click an element in the page and analyze it in the inspector. You can drag and drop elements in the inspector to live change their positioning in the page.
The CSS styles panel

On the right, the CSS styles that are applied to the currently selected element.
In addition to editing and disabling properties, you can add a new CSS property, with any target you want, by clicking the + icon. Also you can trigger a state for the selected element, so you can see the styles applied when it’s active, hovered, on focus. At the bottom, the box model of the selected element helps you figure out margins, paddings, border and dimensions at a quick glance:
The Console

The second most important element of the DevTools is the Console. The Console can be seen in its own panel, or by pressing Esc in the Elements panel it will show up at the bottom. The Console serves mainly two purposes: executing custom JavaScript and error reporting.
Executing custom JavaScript

At the bottom of the Console there is a blinking cursor. You can type any JavaScript there, and it will be promptly executed. As an example, try running:

alert('test')
The special identifier $0 allows you to reference the element currently selected in the elements inspector. If you want to reference that as a jQuery selector, use $($0) . You can write more than one line with shift-enter . Pressing enter at the end of the script runs it.
Error reporting
Any error, warning or information that happens while rendering the page, and subsequently executing the JavaScript, is listed here. For example failing to load a resource from the network, with information on why, is reported in the console.
In this case, clicking the resource URL brings you to the Network panel, showing more info which you can use to determine the cause of the problem.

You can filter those messages by level (Error / Warning / Info) and also filter them by content. Those messages can be user-generated in your own JavaScript by using the Console API:

console.log('Some info message')
console.warn('Some warning message')
console.error('Some error message')
The emulator

The Chrome DevTools embed a very useful device emulator which you can use to visualize your page in every device size you want.
You can choose from the presets the most popular mobile devices, including iPhones, iPads, Android devices and much more, or specify the pixel dimensions yourself, and the screen definition (1x, 2x retina, 3x retina HD). In the same panel you can setup network throttling for that specific Chrome tab, to emulate a low speed connection and see how the page loads, and the "show media queries" option shows you how media queries modify the CSS of the page.
The network panel

The Network Panel of the DevTools allows you to see all the connections that the browser must process while rendering a page.
At a quick glance the page shows:

a toolbar where you can set up some options and filters
a loading graph of the page as a whole
every single request, with HTTP method, response code, size and other details
a footer with the summary of the total requests, the total size of the page and some timing indications

A very useful option in the toolbar is preserve log. By enabling it, you can move to another page, and the logs will not be cleared.

Another very useful tool to track loading time is disable cache. This can be enabled globally in the DevTools settings as well, to always disable cache when DevTools is open.

Clicking a specific request in the list shows the detail panel, with the HTTP Headers report:
And the loading time breakdown:
JavaScript debugger

If you click an error message in the DevTools Console, the Sources tab opens and in addition to pointing you to the file and line where the error happened, you have the option to use the JavaScript debugger.
This is a full-featured debugger. You can set breakpoints, watch variables, and listen to DOM changes or break on specific XHR (AJAX) network requests, or event listeners.
Application and Storage

The Application tab gives you lots of information about what is stored inside the browser for your website.
Storage

You gain access to detailed reports and tools to interact with the application storage:

Local Storage
Session Storage
IndexedDB
Web SQL
Cookies

and you can quickly wipe any information, to start with a clean slate.
Application

This tab also gives you tools to inspect and debug Progressive Web Apps. Click manifest to get information about the web app manifest, used to allow mobile users to add the app to their home screen, and simulate the "add to homescreen" events. Service workers let you inspect your application service workers. If you don't know what service workers are, in short they are a fundamental technology that powers modern web apps, to provide features like notifications, the capability to run offline and synchronization across devices.
Security tab

The Security tab gives you all the information that the browser has about the security of the connection to the website.
If there is any problem with the HTTPS connection of a site served over TLS, it will provide you with more information about what's causing it.
Audits

The Audits tab will help you find and solve issues related to performance and, in general, the quality of the experience that users have when accessing your website. You can perform various kinds of audits depending on the kind of website:
The audit is provided by Lighthouse, an open source automated website quality check tool. It takes a while to run, then it provides you a very nice report with key actions to check.
If you want to know more about the Chrome DevTools, check out this Chrome DevTools Tips list
Emmet

Emmet is a pretty cool tool that helps you write HTML very, very fast. It's like magic. Emmet is not something new: it's been around for years and there is a plugin for every editor out there.

Create an HTML file from scratch
> and +
Level up
Multipliers
Group an expression to make it more readable
id and class attributes
Adding a unique class or id
Other attributes
Adding content
Adding an incremental number in your markup
A reference for tags used in the page head
A reference for common tags
A reference for semantic HTML tags
A reference for form elements
Emmet is a pretty cool tool that helps you write HTML very very fast. It's like magic. Emmet is not something new, it's been around for years and there is a plugin for every editor out there. On VS Code, Emmet is integrated out of the box, and whenever the editor recognizes a possible Emmet command, it will show you a tooltip.
If the thing you write has no other interpretations, and VS Code thinks it must be an Emmet expression, it will preview it directly in the tooltip, nicely enough:
Yet I didn't really know how to use it in all its intricacies until I set out to research and write about it, so I had to learn how to use it in depth. I want to use it in my day to day work, so here's what I learned about it.
Create an HTML file from scratch

Type ! and you will get a basic HTML boilerplate to work with: a complete HTML5 skeleton with a head, meta tags, a title set to "Document" and an empty body.
> and +

> means child
+ means sibling
nav>ul>li
div+p+span
You can combine those to create more complex markup. VS Code is nice enough to show a preview when the Emmet snippet has no other interpretation:

ul>li>div+p+span
Level up

Using ^ you can level up from any point where you used > to create children:

ul>li>div+p^li>span
You can use it multiple times to "up" more than once:

ul>li>div+p^^p
Multipliers

Any tag can be added multiple times using * :

ul>li*5>p
Group an expression to make it more readable

With multiplication in the mix, things start to get a bit more complex. What if you want to multiply 2 items? You group them in parentheses ( ) :

ul>li>(p+span)*2
id and class attributes

id and class are probably the most used attributes in HTML.
You can create an HTML snippet that includes them by using a CSS-like syntax:

ul>li>p.text#first
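Roughly, that abbreviation expands to markup like this (indentation aside):

<ul>
  <li>
    <p class="text" id="first"></p>
  </li>
</ul>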
You can add multiple classes:

ul>li>p.text.paragraph#first
Adding a unique class or id

id must be unique in your page, at any time. class can be repeated, but sometimes you want an incremental one for your elements.
You can do that using $ :

ul>li.item$*2>p
Other attributes

Attributes other than class and id must be added using square brackets [ ] :

ul>li.item$*2>p[style="color: red"]
You can add multiple attributes at once:

ul>li.item$*2>p[style="color: red" title="A color"]
Adding content

Of course you can also fill the HTML with content:

ul>li.item$*2>p{Text}
Text
Text
Adding an incremental number in your markup

You can add an incremental number in the text:

ul>li.item$*2>p{Text $}
Text 1
Text 2
That number normally starts at 1, but you can make it start at an arbitrary number:

ul>li.item$@10*2>p{Text $@3}
Text 3
Text 4
A reference for tags used in the page head
link
link:css
link:favicon
link:rss
meta:utf
meta:vp
style
script
script:src
A reference for common tags
img
a
br
hr
c
tr+
ol+
ul+
A reference for semantic HTML tags
mn
sect
art
hdr
ftr
adr
str
A reference for form elements
form
form:get
form:post
label
input
inp
input:hidden, input:h
input:text, input:t
input:search
input:email
input:url
input:password, input:p
input:datetime
input:date
input:datetime-local
input:month
input:week
input:time
input:tel
input:number
input:color
input:checkbox, input:c
input:radio, input:r
input:range
input:file, input:f
input:submit, input:s
input:image, input:i
input:button, input:b
input:reset
button:submit, button:s, btn:s
button:reset, button:r, btn:r
button:disabled, button:d, btn:d
btn
fieldset:disabled, fieldset:d, fset:d, fst:d
fst, fset
optg
select
select:disabled, select:d
select+
option, opt
table+
textarea
tarea
How to use Visual Studio Code

Visual Studio Code, VSCode for friends, is an incredibly powerful editor that's growing hugely in popularity. Find out why, and discover its main features for developers.
Index

Introduction
Should I switch to VS Code? And why?
Getting started
Explorer
Search
Source Control
Debugger
Extensions
The Terminal
The Command Palette
Themes
Customization
Nice configuration options
The best font for coding
Workspaces
Editing
IntelliSense
Code Formatting
Errors and warnings
Keyboard shortcuts
Keymaps
Code snippets
Extensions showcase
The VS Code CLI command
Solving high usage CPU issues
Introduction

Since the beginning, editors have been a strange beast. Some people defend their editor choice strenuously. In the Unix world you have the Emacs vs vi "wars", and I kind of understand why so much time is spent debating the advantages of one over another.

I used tons of editors and IDEs in the past few years. I can remember TextMate, TextWrangler, Espresso, BBEdit, XCode, Coda, Brackets, Sublime Text, Atom, vim, PHPStorm. The difference between an IDE and an editor is mostly in the feature set, and complexity. I largely prefer an editor over an IDE, as it's faster and gets less in the way.

In the last 12 months I've been using VS Code, the Open Source editor from Microsoft, and it's quickly become my favorite editor ever.
Should I switch to VS Code? And why? If you're looking for suggestions for whether to use it or not, let me say yes, you should switch to it from whatever other editor you are using now. This editor builds on top of decades of editor experience from Microsoft. The code of the editor is completely Open Source, and there's no payment required to use it.
It uses Electron as its base, which enables it to be cross platform and work on Mac, Windows and Linux. It's built using Node.js, and you can extend it using JavaScript (which makes it a win for all us JavaScript developers). It's fast, easily the fastest editor I've used after Sublime Text. It has won the enthusiasm of the community: there are thousands of extensions, some official, and some made by the community, and it's winning surveys. Microsoft releases an update every month. Frequent updates foster innovation and Microsoft is listening to its users, while keeping the platform as stable as possible (I should say I never had an issue with VS Code in 1 year of using it every day almost all day).
Getting started The home page of Visual Studio Code on the internet is https://code.visualstudio.com/. Go to that site to download the latest stable release of the editor.
The installation process depends on the platform, and you should be used to it. When you start the editor for the first time you will see the welcome screen:
There is a toolbar on the left with 5 icons. That gives access to:

The File Explorer
Search
Source Control
The Debugger
The Extensions
Explorer Let's start the exploration with the explorer (pun intended).
Press the "Open Folder" button in the sidebar, or the Open folder... link in the Welcome page. Both will trigger the file picker view. Choose one folder where you have source code, or even just text files, and open it. VS Code will show that folder content in your view:
On the right, the empty view shows some commands to perform some quick operations, and their keyboard shortcut. If you select a file on the left, that file will open on the main panel:
and if you start editing it, notice a dot will appear next to the file name in the tab, and in the sidebar as well:
Pressing CMD+P will show you a quick file picker to easily move in files on large projects:
You can hide the sidebar that hosts the files using the shortcut CMD+B .

Note: I'm using the Mac keyboard shortcuts. Most of the time, on Windows and Linux you just change CMD to CTRL and it works, but not always. Print your keyboard shortcuts reference.
Search The second icon in the toolbar is "Search". Clicking it shows the search interface:
You can click the icons to make the search case sensitive, to match whole words (not substrings), and to use a regular expression for the search string. To perform the search, press enter . Clicking the ▷ symbol on the left enables the search and replace tool. Clicking the 3 dots shows a panel that lets you just include some specific kind of files, and exclude other files:
Source Control The Source Control tab is enabled by clicking the third icon in the toolbar.
VS Code comes with Git support out of the box. In this case the folder we opened does not have source control initialized. Clicking the first icon on top, with the Git logo, allows us to initialize the Git repository:
The U beside each file means that it's been updated since the last commit (since we never did a commit in the first place, all files are updated). Create the first commit by writing a text message and pressing Cmd-Enter , or clicking the ✔ ︎ icon on top.
I usually set this to automatically stage the changes when I commit them. The 3 dots icon, when clicked, offers lots of options for interacting with Git:
Debugger The fourth icon in the toolbar opens the JavaScript debugger. This deserves an article on its own. In the meantime check out the official docs.
Extensions The fifth icon brings us to extensions.
Extensions are one killer feature of VS Code. They can provide so much value that you'll surely end up using tons of them. I have lots of extensions installed. One thing to remember is that every extension you install is going to impact (more or less) the performance of your editor. You can disable an extension you install, and enable it only when you need it. You can also disable an extension for a specific workspace (we'll talk about workspaces later). For example, you don't want to enable the JavaScript extensions in a Go project. There is a list of recommended extensions, which includes all the most popular tools.
Since I edit lots of markdown files for my blog, VS Code suggests the markdownlint extension, which provides linting and syntax checking for Markdown files. As an example, let's install it. First, I inspect the number of views. It's 1.2M, so many! And the reviews are positive (4.5/5). Clicking the extension name opens the details on the right.
Pressing the green Install button starts the installation process, which is straightforward. It does everything for you, and you just need to click the "Reload" button to activate it, which basically reboots the editor window. Done! Let's test it by creating a markdown file with an error, like a missing alt attribute on an image. It successfully tells us so:
Down below I introduce some popular extensions you don't want to miss, and the ones I use the most.
The Terminal

VS Code has an integrated terminal. You can activate it from the menu View ➤ Integrated Terminal , or using CMD+` (backtick), and it will open with your default shell.
This is very convenient because in modern web development you almost always have some npm or yarn process running in the background.
You can create more than one terminal tab, and show them one next to the other, and also stack them to the right rather than in the bottom of the window:
The Command Palette The Command Palette is a very powerful tool. You enable it by clicking View ➤ Command Palette , or using CMD+SHIFT+P
A modal window will appear at the top, offering you various options, depending on which plugins you have installed, and which commands you used last. Common operations I perform are:

Extensions: Install Extensions
Preferences: Color Theme to change the color theme (I sometimes change from night to day)
Format Document, which formats code automatically
Run Code, which is provided by Code Runner, and executes the highlighted lines of JavaScript

You can activate any of those by starting to type, and the autocomplete functionality will show you the one you want.

Remember when you typed CMD+P to see the list of files, before? That's a shortcut to a specific feature of the Command Palette. There are others:
Ctrl-Shift-Tab shows you the active files
Ctrl-G opens the command palette to let you enter a line number to go to
CMD+SHIFT+O shows the list of symbols found in the current file
What symbols are depends on the file type. In JavaScript, those might be classes or functions. In Markdown, section titles.
Themes

You can switch the color theme used by pressing CMD-k + CMD-t , or by invoking the Preferences: Color Theme command. This will show you the list of themes installed:
you can click one, or move with the keyboard, and VS Code will show you a preview. Click enter to apply the theme:
Themes are just extensions. You can install new themes by going to the extensions manager. Probably the best thing for discoverability is to use the marketplace website. My favorite theme is Ayu, which provides a great style for any time of the day, night, morning/evenings and afternoon.
Customization

The theme is just one customization you can make. The sidebar icons that are assigned to a file are also a big part of a nice user experience. You can change those by going to Preferences ➤ File Icon Theme . Ayu comes with its own icon theme, which perfectly matches the theme colors:
All those customizations we made so far, the theme and the icon theme, are saved to the user preferences. Go to Preferences ➤ Settings (also reachable via CMD-, ) to see them:
The view shows the default settings on the left, for an easy reference, and the overridden settings on the right. You can see the name of the theme and the icon theme we set up, in workbench.colorTheme and workbench.iconTheme .
I zoomed in using CMD-+ , and this setting was saved as well to window.zoomLevel , so the next time VS Code starts up, it remembers my choice for zooming. You can decide to apply some setting globally, in User Settings, or relative to a workspace, in Workspace settings. Most of the times those settings are automatically added by extensions or by the VS Code itself, but in some cases you'll directly edit them in this place.
Nice configuration options

Some nice configuration options I set in my code:

"editor.minimap.enabled": false
Remove the minimap, which is shown at the right of the editor

"explorer.confirmDelete": false
Stop asking me for confirmation when I want to remove a file (I have source control!)

"explorer.confirmDragAndDrop": false
Disable the confirmation for drag and drop

"editor.formatOnSave": true
Format the code automatically when I save it

"editor.formatOnPaste": true
Format the code automatically when I paste it in my code

"javascript.format.enable": true
Enable formatting for JavaScript code

"files.trimTrailingWhitespace": true
Trim trailing whitespace in files

"editor.multiCursorModifier": "alt"
When pressing the Alt key and clicking with the mouse, I can select multiple lines

"editor.detectIndentation": true
Adapt to the file indentation, useful when editing other people's code

"editor.quickSuggestionsDelay": 0
Show the code suggestion immediately, not after some seconds
The best font for coding I like Fira Code. It's free, and has some very nice programming ligatures, which transform common constructs like !== and => to nicer symbols:
Enable it by installing the font and adding this to your configuration:

"editor.fontFamily": "Fira Code",
"editor.fontLigatures": true
Workspaces All User settings can be overridden in Workspace settings. They take precedence. They are useful for example when you use a project that has linting rules different from all the other projects you use, and you don't want to edit your favorite settings just for it. You create a workspace from an existing project by clicking the File ➤ Save Workspace as... menu. The currently opened folder will be enabled as the workspace main folder. The next time you open VS code, or you switch project, instead of opening a folder, you open a workspace, and that will automatically open the folder containing your code, and it will remember all the settings you set specific to that workspace. In addition to having workspace-level settings, you can disable extensions for a specific workspace. You can just work with folders until you have a specific reason for wanting a workspace.
One good reason is the ability to have multiple, separate root folders. You can use the File ➤ Add Folder to Workspace menu to add a new root folder, which can be located anywhere in the filesystem, but will be shown along with the other existing folders you had.
Editing

IntelliSense

When you edit in one of the supported languages (JavaScript, JSON, HTML, CSS, Less, Sass, C# and TypeScript) VS Code has IntelliSense, a technology that hints at autocompletion of functions and parameters, as you type them.
Code Formatting

Two handy commands ( Format Document and Format Selection ) are available in the Command Palette to autoformat the code. VS Code by default supports automatic formatting for HTML, JavaScript, TypeScript and JSON.
Errors and warnings

When you open a file you will see on the right a bar with some colors. Those colors indicate some issues in your code. For example here's what I see right now:
Those are all warnings or errors. You can try to find them in the code, where you see pieces underlined in red, or you can also press CMD-Shift-M (or choose View ➤ Problems ).
Keyboard shortcuts

I showed you a lot of keyboard shortcuts up to now. It's starting to get complicated to remember them all, but they are a nice productivity aid. I suggest printing the official shortcuts cheat sheet, for Mac, Linux and Windows.
Keymaps

If you're used to the keyboard shortcuts of another editor, maybe because you worked with it for a long time, you can use a keymap. The VS Code team provides keymaps for the most popular editors out of the box: vim, Sublime Text, Atom, IntelliJ, Eclipse and more. They are available as plugins, by opening the Preferences ➤ Keymaps Extensions menu.
Code snippets Snippets are very cool.
For every language you might be developing in, there are extensions that provide ready-made snippets for you to use. For JavaScript/React, one popular one is VS Code ES7 React/Redux/React-Native/JS snippets. You just type rfe , press TAB and this appears in your editor:

import React from 'react'

const $1 = props => {
  return $0
}

export default $1
There are lots of these shortcuts, and they save a lot of time. Not just from typing, but also from looking up the correct syntax.

You can also define your own snippets. Click Preferences ➤ User Snippets and follow the instructions to create your own snippets file.
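A custom snippets file is just JSON. As a small sketch (the snippet name, prefix and body here are made up for illustration):

{
  "Print to console": {
    "prefix": "clog",
    "body": ["console.log('$1')", "$0"],
    "description": "Log a value to the console"
  }
}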
Extensions showcase

GitLens: visualize who made the last change to a line of your code, and when this happened
Git History: visualize and search the Git history
CSS Peek: lets you see and edit CSS definitions by inspecting the class of an HTML element. Very handy.
Code Runner: lets you run bits of code that you select in the editor, and much more. Supports lots of languages.
Debugger for Chrome: allows you to debug JavaScript code running in the browser using the VS Code debugger.
Bracket Pair Colorizer: handy for visualizing bracket endings in your code.
Indent-Rainbow: colors the indentation levels of your code.
Prettier: check my Prettier guide
ESLint: check my ESLint guide
IntelliSense for CSS: improved autocompletion for CSS based on your workspace definitions
npm: enables npm utility functions from the command palette
Auto Close Tag: automatically closes HTML/JSX/* tags
Auto Rename Tag: automatically renames the closing tag when you change the opening one, and the opposite as well
The VS Code CLI command

When you install VS Code, the code command is available globally in your command line. This is very useful to start the editor and open a new window with the content of the current folder, with code . (note the trailing dot). Running code -n will create a new window.
A useful thing that's not always known is that VS Code can quickly show the diff between two files, with code --diff file1.js file2.js .
Solving high CPU usage issues

I ran into an issue of high CPU usage, and spinning fans, with a project that had lots of files under node_modules . Adjusting the configuration to limit what VS Code watches made things look normal again.
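A sketch of the kind of setting involved, assuming the fix is to exclude heavy folders from the file watcher in your settings:

"files.watcherExclude": {
  "**/.git/objects/**": true,
  "**/node_modules/**": true
}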
React

React is a JavaScript library that aims to simplify the development of visual interfaces. Learn why it's so popular and what problems it solves.
Introduction to React
What is React
Why is React so popular?
Less complex than the other alternatives
Perfect timing
Backed by Facebook
Is React really that simple?
JSX
React Components
What is a React Component
Custom components
Fragment
PropTypes
Which types can we use
Requiring properties
Default values for props
How props are passed
Children
Setting the default state
Accessing the state
Mutating the state
Why you should always use setState()
State is encapsulated
Unidirectional Data Flow
Moving the State Up in the Tree
Events
Event handlers
Bind this in methods
The events reference
Clipboard
Composition
Keyboard
Focus
Form
Mouse
Selection
Touch
UI
Mouse Wheel
Media
Image
Animation
Transition
The React Declarative approach
The Virtual DOM
The "real" DOM
The Virtual DOM Explained
Why is the Virtual DOM helpful: batching
The Context API
Introduction to React

What is React
React is a JavaScript library that aims to simplify development of visual interfaces. Developed at Facebook and released to the world in 2013, it drives some of the most widely used apps, powering Facebook and Instagram among many other applications. Its primary goal is to make it easy to reason about an interface and its state at any point in time, by dividing the UI into a collection of components. React is used to build single-page web applications.
Why is React so popular? React has taken the frontend web development world by storm. Why?
Less complex than the other alternatives

At the time when React was announced, Ember.js and Angular 1.x were the predominant choices as a framework. Both imposed so many conventions on the code that porting an existing app was not convenient at all. React made a choice to be very easy to integrate into an existing project, because that's how they had to do it at Facebook in order to introduce it to the existing codebase. Also, those 2 frameworks brought too much to the table, while React only chose to implement the View layer instead of the full MVC stack.
Perfect timing At the time, Angular 2.x was announced by Google, along with the backwards incompatibility and major changes it was going to bring. Moving from Angular 1 to 2 was like moving to a different framework, so this, along with execution speed improvements that React promised, made it something developers were eager to try.
Backed by Facebook

Being backed by Facebook is obviously going to benefit a project if it turns out to be successful, but it's not a guarantee, as you can see from many failed open source projects by both Facebook and Google as an example.
Is React really that simple? Even though I said that React is simpler than alternative frameworks, diving into React is still complicated, but mostly because of the corollary technologies that can be integrated with React, like Redux, Relay or GraphQL. React in itself has a very small API.
There isn't much more to React than these concepts:

Components
JSX
State
Props
JSX

Many developers, including the one writing this article, at first sight thought that JSX was horrible, and quickly dismissed React. Even though they said JSX was not required, using React without JSX was painful. It took me a couple of years of occasionally looking at it to start digesting JSX, and now I largely prefer it over the alternative, which is using templates. The major benefit of using JSX is that you're only interacting with JavaScript objects, not template strings.

JSX is not embedded HTML

Many tutorials for React beginners like to postpone the introduction of JSX, because they assume the reader would be better off without it, but since I am now a JSX fan, I'll immediately jump into it. Here is how you define an h1 tag containing a string:

const element = <h1>Hello, world!</h1>
It looks like a strange mix of JavaScript and HTML, but in reality it's all JavaScript. What looks like HTML is actually syntactic sugar for defining components and their positioning inside the markup.

Inside a JSX expression, attributes can be inserted very easily:

const myId = 'test'
const element = <h1 id={myId}>Hello, world!</h1>

You just need to pay attention when an attribute has a dash ( - ), which is converted to camelCase syntax instead, and to these 2 special cases:

class becomes className
for becomes htmlFor
because they are reserved words in JavaScript. Here's a JSX snippet that wraps two components into a div tag:
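As a minimal sketch, assuming two hypothetical components named BlogPostsList and Sidebar, it could look like this:

  <div>
    <BlogPostsList />
    <Sidebar />
  </div>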
A tag always needs to be closed, because this is more XML than HTML (if you remember the XHTML days, this will be familiar, but since then the HTML5 loose syntax won). In this case a self-closing tag is used. JSX, while introduced with React, is no longer a React-only technology.
React Components
What is a React Component
A component is one isolated piece of interface. For example in a typical blog homepage you might find the Sidebar component, and the Blog Posts List component. They are in turn composed of components themselves, so you could have a list of Blog Post components, one for every blog post, and each with its own peculiar properties.
React makes it very simple: everything is a component. Even plain HTML tags are components on their own, and they are added by default.
The next 2 lines are equivalent: they do the same thing. One with JSX, one without, by injecting Hello World! into an element with id app:

  import React from 'react'
  import ReactDOM from 'react-dom'

  ReactDOM.render(<h1>Hello World!</h1>, document.getElementById('app'))

  ReactDOM.render(React.DOM.h1(null, 'Hello World!'), document.getElementById('app'))

See, React.DOM exposes an h1 component. Which other HTML tags are available? All of them! You can inspect what React.DOM offers by typing it in the Browser Console:
(the list is longer) The built-in components are nice, but you'll quickly outgrow them. What React excels in is letting us compose a UI by composing custom components.
Custom components There are 2 ways to define a component in React. A stateless component does not manage internal state, and is just a function:
  const BlogPostExcerpt = () => {
    return (
      <div>
        <h1>Title</h1>
        <p>Description</p>
      </div>
    )
  }
A stateful component is a class, which manages state in its own properties:

  import React, { Component } from 'react'

  class BlogPostExcerpt extends Component {
    render() {
      return (
        <div>
          <h1>Title</h1>
          <p>Description</p>
        </div>
      )
    }
  }
As they stand, they are equivalent, because there is no state management yet (coming in the next couple of articles). There is a third syntax which uses the ES5 syntax, without classes:

  import React from 'react'

  React.createClass({
    render() {
      return (
        <div>
          <h1>Title</h1>
          <p>Description</p>
        </div>
      )
    }
  })
You'll rarely see this in modern, post-ES6 codebases.
Props is how Components get their properties. Starting from the top component, every child component gets its props from the parent. In a stateless component, props is all that gets passed, and they are available by adding props as the function argument:

  const BlogPostExcerpt = props => {
    return (
      <div>
        <h1>{props.title}</h1>
        <p>{props.description}</p>
      </div>
    )
  }
In a stateful component, props are passed by default. There is no need to add anything special, and they are accessible as this.props in a Component instance.

  import React, { Component } from 'react'

  class BlogPostExcerpt extends Component {
    render() {
      return (
        <div>
          <h1>{this.props.title}</h1>
          <p>{this.props.description}</p>
        </div>
      )
    }
  }
Passing props down to child components is a great way to pass values around in your application. A component either holds data (has state) or receives data through its props.
It gets complicated when:
you need to access the state of a component from a child that's several levels down (all the intermediate children need to act as a pass-through, even if they do not need to know the state, complicating things)
you need to access the state of a component from a completely unrelated component.
Redux was traditionally very popular for this, and this is the reason it's included in many tutorials. Recently React (in version 16.3.0) introduced the Context API, which makes Redux redundant for this simple use case. We talk about the Context API later in this guide. Redux is still useful if you:
need to move your data outside of the app altogether, for some reason
need to create complex reducers and actions to manipulate the data in any way you want
but it's no longer "required" for any React application.
Fragment
Notice how I wrapped the return values in a div. This is because a component can only return one single element, and if you want more than one, you need to wrap it into another container tag. This however causes an unnecessary div in the output. You can avoid this by using React.Fragment:

  import React, { Component } from 'react'

  class BlogPostExcerpt extends Component {
    render() {
      return (
        <React.Fragment>
          <h1>{this.props.title}</h1>
          <p>{this.props.description}</p>
        </React.Fragment>
      )
    }
  }
which also has a very nice shorthand syntax, <></>, that is supported only in recent releases (and Babel 7+):

  import React, { Component } from 'react'

  class BlogPostExcerpt extends Component {
    render() {
      return (
        <>
          <h1>{this.props.title}</h1>
          <p>{this.props.description}</p>
        </>
      )
    }
  }
PropTypes Since JavaScript is a dynamically typed language, we don't really have a way to enforce the type of a variable at compile time, and if we pass invalid types, they will fail at runtime or give weird results if the types are compatible but not what we expect. Flow and TypeScript help a lot, but React has a way to directly help with props types, and even before running the code, our tools (editors, linters) can detect when we are passing the wrong values:
  import PropTypes from 'prop-types'
  import React, { Component } from 'react'

  class BlogPostExcerpt extends Component {
    render() {
      return (
        <div>
          <h1>{this.props.title}</h1>
          <p>{this.props.description}</p>
        </div>
      )
    }
  }

  BlogPostExcerpt.propTypes = {
    title: PropTypes.string,
    description: PropTypes.string
  }
Which types can we use
These are the fundamental types we can accept:
PropTypes.array
PropTypes.bool
PropTypes.func
PropTypes.number
PropTypes.object
PropTypes.string
PropTypes.symbol
We can accept one of two types:

  PropTypes.oneOfType([PropTypes.string, PropTypes.number]),
We can accept one of many values: PropTypes.oneOf(['Test1', 'Test2']),
We can accept an instance of a class: PropTypes.instanceOf(Something)
We can accept any React node: PropTypes.node
or even any type at all: PropTypes.any
Arrays have a special syntax that we can use to accept an array of a particular type: PropTypes.arrayOf(PropTypes.string)
For objects, we can compose an object's properties by using PropTypes.shape():

  PropTypes.shape({
    color: PropTypes.string,
    fontSize: PropTypes.number
  })
Requiring properties Appending isRequired to any PropTypes option will cause React to return an error if that property is missing: PropTypes.arrayOf(PropTypes.string).isRequired, PropTypes.string.isRequired,
Default values for props
If a value is not required, we need to specify a default value for it in case it's missing when the Component is initialized:

  BlogPostExcerpt.propTypes = {
    title: PropTypes.string,
    description: PropTypes.string
  }

  BlogPostExcerpt.defaultProps = {
    title: '',
    description: ''
  }
Some tooling, like ESLint, can enforce defining defaultProps for a Component whose propTypes are not explicitly marked as required.
How props are passed
When initializing a component, pass the props in a way similar to HTML attributes:

  const desc = 'A description'
  //...
  <BlogPostExcerpt title="A blog post" description={desc} />

We passed the title as a plain string (something we can only do with strings!), and the description as a variable.
Children
A special prop is children. It contains the value of anything that is passed in the body of the component, for example:
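A minimal sketch of what that looks like, reusing the BlogPostExcerpt component from the earlier examples:

  <BlogPostExcerpt title="A blog post" description={desc}>
    Something
  </BlogPostExcerpt>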
In this case, inside BlogPostExcerpt we could access "Something" by looking up this.props.children .
While Props allow a Component to receive properties from its parent (to be "instructed" to print some data, for example), state allows a component to take on a life of its own, and be independent of the surrounding environment.
Remember: only class-based Components can have state, so if you need to manage state in a stateless (function-based) Component, you first need to "upgrade" it to a class Component:

  const BlogPostExcerpt = () => {
    return (
      <div>
        <h1>Title</h1>
        <p>Description</p>
      </div>
    )
  }
becomes:
  import React, { Component } from 'react'

  class BlogPostExcerpt extends Component {
    render() {
      return (
        <div>
          <h1>Title</h1>
          <p>Description</p>
        </div>
      )
    }
  }
Setting the default state
In the Component constructor, initialize this.state. For example the BlogPostExcerpt component might have a clicked state:

  class BlogPostExcerpt extends Component {
    constructor(props) {
      super(props)
      this.state = { clicked: false }
    }

    render() {
      return (
        <div>
          <h1>Title</h1>
          <p>Description</p>
        </div>
      )
    }
  }
Accessing the state
The clicked state can be accessed by referencing this.state.clicked:

  class BlogPostExcerpt extends Component {
    constructor(props) {
      super(props)
      this.state = { clicked: false }
    }

    render() {
      return (
        <div>
          <h1>Title</h1>
          <p>Description</p>
          <p>Clicked: {this.state.clicked}</p>
        </div>
      )
    }
  }
Mutating the state
The state should never be mutated directly, like this:

  this.state.clicked = true

Instead, you should always use setState(), passing it an object:

  this.setState({ clicked: true })
The object can contain a subset, or a superset, of the state. Only the properties you pass will be mutated, the ones omitted will be left in their current state.
Why you should always use setState() The reason is that using this method, React knows that the state has changed. It will then start the series of events that will lead to the Component being re-rendered, along with any DOM update.
State is encapsulated A parent of a Component cannot tell if the child is stateful or stateless. Same goes for children of a Component. Being stateful or stateless (class-based or functional) is entirely an implementation detail that other components don't need to care about. This leads us to Unidirectional Data Flow
Unidirectional Data Flow
A state is always owned by one Component. Any data that's affected by this state can only affect Components below it: its children. Changing a state on a Component will never affect its parent, or its siblings, or any other Component in the application: just its children. This is the reason the state is often moved up in the Components tree.
Moving the State Up in the Tree
Because of the Unidirectional Data Flow rules, if two components need to share a state, the state needs to be moved up to a common ancestor. Many times the closest ancestor is the best place to manage the state, but it's not a mandatory rule. The state is passed down to the components that need that value via props:

  class Converter extends React.Component {
    constructor(props) {
      super(props)
      this.state = { currency: '€' }
    }

    render() {
      return (
        <div>
          {/* child components receiving the shared state via props */}
          <Display currency={this.state.currency} />
          <CurrencySwitcher currency={this.state.currency} />
        </div>
      )
    }
  }
The state can be mutated by a child component by passing a mutating function down as a prop:

  class Converter extends React.Component {
    constructor(props) {
      super(props)
      this.state = { currency: '€' }
    }

    handleChangeCurrency = (event) => {
      this.setState({ currency: this.state.currency === '€' ? '$' : '€' })
    }

    render() {
      return (
        <div>
          <Display currency={this.state.currency} />
          <CurrencySwitcher
            currency={this.state.currency}
            handleChangeCurrency={this.handleChangeCurrency}
          />
        </div>
      )
    }
  }
Events
React provides an easy way to manage events. Prepare to say goodbye to addEventListener :)
In the previous article about the State you saw this example:

  const CurrencySwitcher = (props) => {
    return (
      <button onClick={props.handleChangeCurrency}>
        Current currency is {props.currency}. Change it!
      </button>
    )
  }
If you've been using JavaScript for a while, this is just like plain old JavaScript event handlers, except that this time you're defining everything in JavaScript, not in your HTML, and you're passing a function, not a string. The actual event names are a little bit different because in React you use camelCase for everything, so onclick becomes onClick , onsubmit becomes onSubmit . For reference, this is old school HTML with JavaScript events mixed in: ...
Event handlers
It's a convention to have event handlers defined as methods on the Component class:

  class Converter extends React.Component {
    handleChangeCurrency = (event) => {
      this.setState({ currency: this.state.currency === '€' ? '$' : '€' })
    }
  }
All handlers receive an event object that adheres, cross-browser, to the W3C UI Events spec.
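Because of that, the usual DOM-style calls work on the event object regardless of the browser. A minimal sketch (the component and handler names here are just illustrative):

  class Form extends React.Component {
    handleSubmit = (event) => {
      // preventDefault() and target behave consistently on the synthetic event
      event.preventDefault()
      console.log(event.target)
    }
  }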
Bind this in methods
Don't forget to bind methods. The methods of ES6 classes are not bound by default, which means this is not defined inside them, unless you define the methods as arrow functions:

  class Converter extends React.Component {
    handleClick = (e) => {
      /* ... */
    }
    //...
  }

This works when using the property initializer syntax with Babel (enabled by default in create-react-app); otherwise you need to bind it manually in the constructor:
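A minimal sketch of the classic manual binding, assuming a handleClick method:

  class Converter extends React.Component {
    constructor(props) {
      super(props)
      // bind the method so `this` refers to the component instance
      this.handleClick = this.handleClick.bind(this)
    }

    handleClick(e) {
      /* ... */
    }
  }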
The React Declarative approach
You'll run across articles describing React as a declarative approach to building UIs. See declarative programming to read more about declarative programming.
React made its "declarative approach" quite popular and upfront, so it permeated the frontend world along with React itself.
It's really not a new concept, but React made building UIs a lot more declarative than with HTML templates: you can build Web interfaces without even touching the DOM directly, and you can have an event system without having to interact with the actual DOM Events.
For example, looking up elements in the DOM using jQuery or DOM events is an imperative approach. The React declarative approach abstracts that away for us. We just tell React we want a component to be rendered in a specific way, and we never have to interact with the DOM to reference it later.
The Virtual DOM Many existing frameworks, before React came on the scene, were directly manipulating the DOM on every change.
The "real" DOM What is the DOM, first: the DOM (Document Object Model) is a Tree representation of the page, starting from the tag, going down into every children, called nodes. It's kept in the browser memory, and directly linked to what you see in a page. The DOM has an API that you can use to traverse it, access every single node, filter them, modify them. The API is the familiar syntax you have likely seen many times, if you were not using the abstract API provided by jQuery and friends: document.getElementById(id) document.getElementsByTagName(name) document.createElement(name) parentNode.appendChild(node) element.innerHTML element.style.left element.setAttribute() element.getAttribute() element.addEventListener() window.content window.onload window.dump() window.scrollTo()
React keeps a copy of the DOM representation, for what concerns the React rendering: the Virtual DOM
The Virtual DOM Explained
Every time the DOM changes, the browser has to do two intensive operations: repaint (visual or content changes to an element that do not affect its layout and positioning relative to other elements) and reflow (recalculate the layout of a portion of the page - or the whole page layout). React uses a Virtual DOM to help the browser use less resources when changes need to be made on a page. When you call setState() on a Component, specifying a state different than the previous one, React marks that Component as dirty. This is key: React only updates when a Component changes the state explicitly.
What happens next is:
1. React updates the Virtual DOM relative to the components marked as dirty (with some additional checks, like triggering shouldComponentUpdate())
2. it runs the diffing algorithm to reconcile the changes
3. it updates the real DOM
Why is the Virtual DOM helpful: batching The key thing is that React batches much of the changes and performs a unique update to the real DOM, by changing all the elements that need to be changed at the same time, so the repaint and reflow the browser must perform to render the changes are executed just once.
The Context API
The Context API was introduced to allow you to pass state (and to update that state) across the app, without having to use props for it. The React team suggests sticking to props if you only have a few levels of children to pass through, because props are still a much less complicated API than the Context API. In many cases, the Context API enables us to avoid using Redux, simplifying our apps a lot, and also making it easier to learn React.
How does it work? You create a context using React.createContext(), which returns a Context object:

  const { Provider, Consumer } = React.createContext()
Then you create a wrapper component that returns a Provider component, and you add as children all the components from which you want to access the context:

  class Container extends React.Component {
    constructor(props) {
      super(props)
      this.state = { something: 'hey' }
    }

    render() {
      return (
        <Provider value={{ state: this.state }}>
          {this.props.children}
        </Provider>
      )
    }
  }
I used Container as the name of this component because this will be a global provider. You can also create smaller contexts. Inside a component that's wrapped in a Provider, you use a Consumer component to make use of the context:

  class Button extends React.Component {
    render() {
      return (
        <Consumer>
          {context => <button>{context.state.something}</button>}
        </Consumer>
      )
    }
  }
You can also pass functions into a Provider value, and those functions can be used by the Consumer to update the context state:

  <Provider value={{
    state: this.state,
    // the function name is illustrative; it updates the provider's state
    updateSomething: () => this.setState({ something: 'ho!' })
  }}>
    {this.props.children}
  </Provider>

  /* ... */

  <Consumer>
    {context => (
      <button onClick={context.updateSomething}>{context.state.something}</button>
    )}
  </Consumer>
You can see this in action in this Glitch.
You can create multiple contexts, to make your state distributed across components, yet expose it and make it reachable by any component you want. When using multiple files, you create the context in one file, and import it in all the places you use it:

  // context.js
  import React from 'react'
  export default React.createContext()
  // component1.js
  import Context from './context'
  //... use Context.Provider

  // component2.js
  import Context from './context'
  //... use Context.Consumer
JSX JSX is a technology that was introduced by React. Let's dive into it
Introduction to JSX
A JSX primer
Transpiling JSX
JS in JSX
HTML in JSX
You need to close all tags
camelCase is the new standard
class becomes className
The style attribute changes its semantics
Forms
CSS in React
Why is this preferred over plain CSS / SASS / LESS?
Is this the go-to solution?
Forms in JSX
value and defaultValue
A more consistent onChange
JSX auto escapes
White space in JSX
Horizontal white space is trimmed to 1
Vertical white space is eliminated
Adding comments in JSX
Spread attributes
Introduction to JSX JSX is a technology that was introduced by React. Although React can work completely fine without using JSX, it's an ideal technology to work with components, so React benefits a lot from JSX. At first, you might think that using JSX is like mixing HTML and JavaScript (and as you'll see CSS). But this is not true, because what you are really doing when using JSX syntax is writing a declarative syntax of what a component UI should be. And you're describing that UI not using strings, but instead using JavaScript, which allows you to do many nice things.
A JSX primer
Here is how you define an h1 tag containing a string:

  const element = <h1>Hello, world!</h1>

It looks like a strange mix of JavaScript and HTML, but in reality it's all JavaScript. What looks like HTML is actually syntactic sugar for defining components and their positioning inside the markup. Inside a JSX expression, attributes can be inserted very easily:

  const myId = 'test'
  const element = <h1 id={myId}>Hello, world!</h1>
You just need to pay attention when an attribute has a dash ( - ) which is converted to camelCase syntax instead, and these 2 special cases: class becomes className for becomes htmlFor
because they are reserved words in JavaScript. Here's a JSX snippet that wraps two components into a div tag:
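A minimal sketch, again assuming two hypothetical components named BlogPostsList and Sidebar:

  <div>
    <BlogPostsList />
    <Sidebar />
  </div>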
A tag always needs to be closed, because this is more XML than HTML (if you remember the XHTML days, this will be familiar, but since then the HTML5 loose syntax won). In this case a self-closing tag is used. Notice how I wrapped the 2 components into a div . Why? Because the render() function can only return a single node, so in case you want to return 2 siblings, just add a parent. It can be any tag, not just div .
Transpiling JSX
A browser cannot execute JavaScript files containing JSX code. They must first be transformed to regular JS, through a process called transpiling.
We already said that JSX is optional, because for every JSX line a corresponding plain JavaScript alternative is available, and that's what JSX is transpiled to. For example, the following two constructs are equivalent:
Plain JS

  ReactDOM.render(
    React.DOM.div(
      { id: 'test' },
      React.DOM.h1(null, 'A title'),
      React.DOM.p(null, 'A paragraph')
    ),
    document.getElementById('myapp')
  )

JSX

  ReactDOM.render(
    <div id="test">
      <h1>A title</h1>
      <p>A paragraph</p>
    </div>,
    document.getElementById('myapp')
  )
This very basic example is just the starting point, but you can already see how much more complicated the plain JS syntax is compared to using JSX. At the time of writing the most popular way to perform the transpilation is to use Babel, which is the default option when running create-react-app, so if you use it you don't have to worry: everything happens under the hood for you. If you don't use create-react-app you need to set up Babel yourself.
JS in JSX
JSX accepts any kind of JavaScript mixed into it. Whenever you need to add some JS, just put it inside curly braces {}. For example, here's how to use a constant value defined elsewhere:

  const paragraph = 'A paragraph'

  ReactDOM.render(
    <div id="test">
      <h1>A title</h1>
      <p>{paragraph}</p>
    </div>,
    document.getElementById('myapp')
  )
This is a basic example. Curly braces accept any JS code:

  const paragraph = 'A paragraph'

  ReactDOM.render(
    <table>
      {rows.map((row, i) => {
        return <tr>{row.text}</tr>
      })}
    </table>,
    document.getElementById('myapp')
  )
As you can see, we nested JavaScript inside JSX, which is itself defined inside JavaScript, nested in JSX. You can go as deep as you need.
HTML in JSX
JSX resembles HTML a lot, but it's actually an XML syntax.
In the end you render HTML, so you need to know a few differences between how you would define some things in HTML, and how you define them in JSX.
You need to close all tags
Just like in XHTML, if you have ever used it, you need to close all tags: no more <br> but instead the self-closing form <br /> (the same goes for other tags).
camelCase is the new standard In HTML you'll find attributes without any case (e.g. onchange ). In JSX, they are renamed to their camelCase equivalent: onchange => onChange onclick => onClick onsubmit => onSubmit
class becomes className
Due to the fact that JSX is JavaScript, and class is a reserved word, you can't write

  <p class="description">

but you need to use

  <p className="description">

The same applies to for, which is translated to htmlFor.
The style attribute changes its semantics The style attribute in HTML allows to specify inline style. In JSX it no longer accepts a string, and in CSS in React you'll see why it's a very convenient change.
Forms Form fields definition and events are changed in JSX to provide more consistency and utility. Forms in JSX goes into more details on forms.
CSS in React
JSX provides a cool way to define CSS. If you have a little experience with HTML inline styles, at first glance you'll find yourself pushed back 10 or 15 years, to a world where inline CSS was completely normal (nowadays it's demonized and usually just a "quick fix" go-to solution). JSX style is not the same thing: first of all, instead of accepting a string containing CSS properties, the JSX style attribute only accepts an object. This means you define properties in an object:

  var divStyle = {
    color: 'white'
  }

  ReactDOM.render(<div style={divStyle}>Hello World!</div>, mountNode)

or

  ReactDOM.render(<div style={{ color: 'white' }}>Hello World!</div>, mountNode)
The CSS values you write in JSX are slightly different from plain CSS:
the property names are camelCased
the values are just strings
you separate each tuple with a comma
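A minimal sketch of those rules in practice (the property names and values are just illustrative):

  const boxStyle = {
    backgroundColor: 'black', // background-color becomes backgroundColor
    fontSize: '1.2em',        // values are plain strings
    padding: '1em'            // tuples are separated by commas
  }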
Why is this preferred over plain CSS / SASS / LESS?
CSS is an unsolved problem. Since its inception, dozens of tools around it have risen and then fallen. The main problem with CSS is that there is no scoping, and it's easy to write CSS that is not enforced in any way, so a "quick fix" can impact elements that should not be touched. JSX allows components (defined in React, for example) to completely encapsulate their style.
Is this the go-to solution?
Inline styles in JSX are good until you need to:
1. write media queries
2. style animations
3. reference pseudo classes (e.g. :hover)
4. reference pseudo elements (e.g. ::first-letter)
In short, they cover the basics, but it's not the final solution.
Forms in JSX JSX adds some changes to how HTML forms work, with the goal of making things easier for the developer.
value and defaultValue
The value attribute always holds the current value of the field. The defaultValue attribute holds the default value that was set when the field was created.
This helps solve some weird behavior of regular DOM interaction, where inspecting input.value and input.getAttribute('value') returns one the current value and one the original default value.
This also applies to the textarea field: e.g. instead of

  <textarea>Some text</textarea>

use

  <textarea defaultValue={'Some text'} />
For select fields, instead of using

  <select>
    <option value="x" selected>...</option>
  </select>

use

  <select defaultValue="x">
    <option value="x">...</option>
  </select>
A more consistent onChange
By passing a function to the onChange attribute, you can subscribe to events on form fields. It works consistently across fields: even radio, select and checkbox input fields fire an onChange event.
onChange also fires when typing a character into an input or textarea field.
or by using a constant that prints the Unicode representation corresponding to the HTML entity you need to print:
{'\u00A9 2017'}
White space in JSX To add white space in JSX there are 2 rules:
Horizontal white space is trimmed to 1 If you have white space between elements in the same line, it's all trimmed to 1 white space.
  <p>Something       becomes               this</p>

becomes

  <p>Something becomes this</p>
Vertical white space is eliminated
  <p>
    Something
    becomes
    this
  </p>

becomes

  <p>Somethingbecomesthis</p>
To fix this problem you need to explicitly add white space, by adding a space expression like this:

  <p>
    Something
    {' '}becomes
    {' '}this
  </p>

or by embedding the string in a space expression:

  <p>
    Something
    {' becomes '}
    this
  </p>
Adding comments in JSX You can add comments to JSX by using the normal JavaScript comments inside an expression:
{/* a comment */} { //another comment }
Spread attributes In JSX a common operation is assigning values to attributes. Instead of doing it manually, e.g.
you can pass
and the properties of the data object will be used as attributes automatically, thanks to the ES6 spread operator
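A minimal sketch of the idea, assuming a data object that holds the attribute values:

  const data = { id: 'test', className: 'description' }

  // assigning the attributes manually:
  <div id={data.id} className={data.className} />

  // using the spread operator instead:
  <div {...data} />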
React Router React Router 4 is the perfect tool to link together the URL and your React app. React Router is the de-facto React routing library, and it's one of the most popular projects built on top of React.
Installation
Types of routes
Components
BrowserRouter
Link
Route
Match multiple paths
Inline rendering
Match dynamic route parameter
This tutorial introduces React Router 4, the latest stable version at the time of writing.
React Router is the de-facto React routing library, and it's one of the most popular projects built on top of React. React at its core is a very simple library, and it does not dictate anything about routing. Routing in a Single Page Application is the way to introduce some features to navigating the app through links, features which are expected in normal web applications:
1. The browser should change the URL when you navigate to a different screen
2. Deep linking should work: if you point the browser to a URL, the application should
reconstruct the same view that was presented when the URL was generated. 3. The browser back (and forward) button should work like expected. Routing links together your application navigation with the navigation features offered by the browser: the address bar and the navigation buttons. React Router offers a way to write your code so that it will show certain components of your app only if the route matches what you define.
Installation With npm: npm i --save react-router-dom
With Yarn: yarn add react-router-dom
Types of routes React Router provides two different kind of routes: BrowserRouter HashRouter
One builds classic URLs, the other builds URLs with the hash: https://application.com/dashboard /* BrowserRouter */ https://application.com/#/dashboard /* HashRouter */
Which one to use is mainly dictated by the browsers you need to support. BrowserRouter uses the History API, which is relatively recent, and not supported in IE9 and below. If you don't have to worry about older browsers, it's the recommended choice.
Components The 3 components you will interact the most when working with React Router are: BrowserRouter , usually aliased as Router Link
Route BrowserRouter wraps all your Route components. Link components are - as you can imagine - used to generate links to your routes Route components are responsible for showing - or hiding - the components they contain.
BrowserRouter
Here's a simple example of the BrowserRouter component. You import it from react-router-dom, and you use it to wrap all your app:

  import React from 'react'
  import ReactDOM from 'react-dom'
  import { BrowserRouter as Router } from 'react-router-dom'

  ReactDOM.render(
    <Router>
      <div>
        {/* ... */}
      </div>
    </Router>,
    document.getElementById('app')
  )

A BrowserRouter component can only have one child element, so we wrap all we're going to add in a div element.
Link
The Link component is used to trigger new routes. You import it from react-router-dom, and you can add the Link components to point at different routes, with the to attribute:

  import React from 'react'
  import ReactDOM from 'react-dom'
  import { BrowserRouter as Router, Link } from 'react-router-dom'

  ReactDOM.render(
    <Router>
      <div>
        <Link to="/">Dashboard</Link>
        <Link to="/about">About</Link>
      </div>
    </Router>,
    document.getElementById('app')
  )
Route
Now let's add the Route component in the above snippet to make things actually work as we want:

  import React from 'react'
  import ReactDOM from 'react-dom'
  import { BrowserRouter as Router, Link, Route } from 'react-router-dom'

  const Dashboard = () => (
    <div>
      <h2>Dashboard</h2>
      ...
    </div>
  )

  const About = () => (
    <div>
      <h2>About</h2>
      ...
    </div>
  )

  ReactDOM.render(
    <Router>
      <div>
        <Link to="/">Dashboard</Link>
        <Link to="/about">About</Link>

        <Route exact path="/" component={Dashboard} />
        <Route path="/about" component={About} />
      </div>
    </Router>,
    document.getElementById('app')
  )
Check this example on Glitch: https://flaviocopes-react-router-v4.glitch.me/ When the route matches / , the application shows the Dashboard component. When the route is changed by clicking the "About" link to /about , the Dashboard component is removed and the About component is inserted in the DOM.
Notice the exact attribute. Without this, path="/" would also match /about , since / is contained in the route.
Match multiple paths You can have a route respond to multiple paths simply using a regex, because path can be a regular expressions string:
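A minimal sketch of the idea (the paths here are just examples):

  <Route path="/(about|who)/" component={Dashboard} />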
Inline rendering
Instead of specifying a component property on Route, you can set a render prop:

  <Route
    path="/about"
    render={() => (
      <div>
        <h2>About</h2>
        ...
      </div>
    )}
  />
Match dynamic route parameter
You already saw how to use static routes like:

  const Posts = () => (
    <div>
      <h2>Posts</h2>
      ...
    </div>
  )

  //...

Here's how to handle dynamic routes:

  const Post = ({ match }) => (
    <div>
      <h2>Post #{match.params.id}</h2>
      ...
    </div>
  )

  //...
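A minimal sketch of the Route definition that uses the id parameter:

  <Route exact path="/post/:id" component={Post} />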
In your Route component you can look up the dynamic parameters in match.params.
match is also available in inline rendered routes, and this is especially useful in this case, because we can use the id parameter to look up the post data in our data source before rendering Post:

  const posts = [
    { id: 1, title: 'First', content: 'Hello world!' },
    { id: 2, title: 'Second', content: 'Hello again!' }
  ]

  const Post = ({ post }) => (
    ...
  )
Styled Components
Styled Components are one of the new ways to use CSS in modern JavaScript. It is meant to be a successor of CSS Modules, a way to write CSS that's scoped to a single component, and does not leak to any other element in the page.
A brief history
Introducing Styled Components
Installation
Your first styled component
Using props to customize components
Extending an existing Styled Component
It's Regular CSS
Using Vendor Prefixes
Conclusion
A brief history
Once upon a time, the Web was really simple and CSS didn't even exist. We laid out pages using tables and frames. Good times.
Then CSS came to life, and after some time it became clear that frameworks could greatly help, especially in building grids and layouts, with Bootstrap and Foundation playing a big part in this.
Preprocessors like SASS and others helped a lot to slow down the adoption of frameworks, and to better organize the code; conventions like BEM and SMACSS grew in usage, especially within teams.
Conventions are not a solution to everything, and they are complex to remember, so in the last few years, with the increasing adoption of JavaScript and build processes in every frontend project, CSS found its way into JavaScript (CSS-in-JS).
New tools explored new ways of doing CSS-in-JS and a few succeeded with increasing popularity:
React Style
jsxstyle
Radium
and more.
Introducing Styled Components
One of the most popular of these tools is Styled Components. It is meant to be a successor of CSS Modules, a way to write CSS that's scoped to a single component, and does not leak to any other element in the page (more on CSS Modules here and here). Styled Components allow you to write plain CSS in your components without worrying about class name collisions.
Installation
Simply install styled-components using npm or yarn:

  npm install --save styled-components

  yarn add styled-components
That's it! Now all you have to do is to add this import: import styled from "styled-components";
Your first styled component
With the styled object imported, you can now start creating Styled Components. Here's the first one:

  const Button = styled.button`
    font-size: 1.5em;
    background-color: black;
    color: white;
  `;
Button is now a React Component in all its greatness.
We created it using a function of the styled object, called button in this case, and passing some CSS properties in a template literal. Now this component can be rendered in our container using the normal React syntax:

  render(<Button />)
Styled Components offer other functions you can use to create other components, not just button , like section , h1 , input and many others.
The syntax used, with the backtick, might be weird at first, but it's called Tagged Templates, it's plain JavaScript and it's a way to pass an argument to the function.
Using props to customize components
When you pass some props to a Styled Component, it will pass them down to the DOM node mounted. For example here's how we pass the placeholder and type props to an input component:

  const Input = styled.input`
    //...
  `;

  render(
    <div>
      <Input placeholder="..." type="text" />
    </div>
  );
This will do just what you think, inserting those props as HTML attributes.
Props, instead of just being blindly passed down to the DOM, can also be used to customize a component based on the prop value. Here's an example:

  const Button = styled.button`
    background: ${props => props.primary ? 'black' : 'white'};
    color: ${props => props.primary ? 'white' : 'black'};
  `;

  render(
    <div>
      <Button>A normal button</Button>
      <Button>A normal button</Button>
      <Button primary>The primary button</Button>
    </div>
  );
Setting the primary prop changes the color of the button.
Extending an existing Styled Component
If you have one component and you want to create a similar one, just styled slightly differently, you can use extend:

  const Button = styled.button`
    color: black;
    //...
  `;

  const WhiteButton = Button.extend`
    color: white;
  `;

  render(
    <div>
      <Button>A black button, like all buttons</Button>
      <WhiteButton>A white button</WhiteButton>
    </div>
  );
It's Regular CSS In Styled Components, you can use the CSS you already know and love. It's just plain CSS. It is not pseudo CSS nor inline CSS with its limitations. You can use media queries, nesting and everything you might come up with.
Using Vendor Prefixes Styled Components automatically add all the vendor prefixes needed, so you don't need to worry about this problem.
Conclusion
That's it for this Styled Components introduction! These concepts will help you understand Styled Components and get up and running with this way of using CSS in JavaScript.
Redux
Redux is a state manager that's usually used along with React, but it's not tied to that library. Learn Redux by reading this simple and easy to follow guide.
Why you need Redux
When should you use Redux?
Immutable State Tree
Actions
Actions types should be constants
Action creators
Reducers
What is a reducer
What a reducer should not do
Multiple reducers
A simulation of a reducer
The state
A list of actions
A reducer for every part of the state
A reducer for the whole state
The Store
Can I initialize the store with server-side data?
Getting the state
Update the state
Listen to state changes
Data Flow
Why you need Redux
Redux is a state manager that's usually used along with React, but it's not tied to that library - it can be used with other technologies as well, but we'll stick to React for the sake of the explanation. React has its own way to manage state, as you can read in the React Beginner's Guide, where I introduce how you can manage State in React. Moving the state up in the tree works in simple cases, but in a complex app you might find yourself moving almost all the state up, and then down again using props.
React in version 16.3.0 introduced the Context API, which makes Redux redundant for the use case of accessing the state from different parts of your app, so consider using the Context API instead of Redux, unless you need a specific feature that Redux provides. Redux is a way to manage an application state, and move it to an external global store. There are a few concepts to grasp, but once you do, Redux is a very simple approach to the problem. Redux is very popular with React applications, but it's in no way unique to React: there are bindings for nearly any popular framework. That said, I'll make some examples using React as it is its primary use case.
When should you use Redux? Redux is ideal for medium to big apps, and you should only use it when you have trouble managing the state with the default state management of React, or the other library you use. Simple apps should not need it at all (and there's nothing wrong with simple apps).
Immutable State Tree In Redux, the whole state of the application is represented by one JavaScript object, called State or State Tree. We call it Immutable State Tree because it is read only: it can't be changed directly. It can only be changed by dispatching an Action.
Actions An Action is a JavaScript object that describes a change in a minimal way (just with the information needed): { type: 'CLICKED_SIDEBAR' } // e.g. with more data { type: 'SELECTED_USER', userId: 232 }
The only requirement of an action object is having a type property, whose value is usually a string.
Actions types should be constants
In a simple app, an action type can be defined as a string, as I did in the example in the previous lesson. When the app grows, it is best to use constants:

  const ADD_ITEM = 'ADD_ITEM'
  const action = { type: ADD_ITEM, title: 'Third item' }
and to separate actions in their own files, and import them import { ADD_ITEM, REMOVE_ITEM } from './actions'
Action creators
Action Creators are functions that create actions.

  function addItem(t) {
    return {
      type: ADD_ITEM,
      title: t
    }
  }
You usually run action creators in combination with triggering the dispatcher: dispatch(addItem('Milk'))
or by defining an action dispatcher function: const dispatchAddItem = i => dispatch(addItem(i)) dispatchAddItem('Milk')
Reducers When an action is fired, something must happen, the state of the application must change. This is the job of reducers.
What is a reducer A reducer is a pure function that calculates the next State Tree based on the previous State Tree, and the action dispatched. (currentState, action) => newState
A pure function takes an input and returns an output without changing the input nor anything else. Thus, a reducer returns a completely new state tree object that substitutes the previous one.
What a reducer should not do
A reducer should be a pure function, so it should:
never mutate its arguments
never mutate the state, but instead create a new one with Object.assign({}, ...)
never generate side-effects (no API calls changing anything)
never call non-pure functions, functions that change their output based on factors other than their input (e.g. Date.now() or Math.random())
There is no enforcement, but you should stick to the rules.
Multiple reducers Since the state of a complex app could be really wide, there is not a single reducer, but many reducers for any kind of action.
A simulation of a reducer At its core, Redux can be simplified with this simple model:
The state { list: [ { title: "First item" }, { title: "Second item" }, ], title: 'Groceries list' }
A reducer for every part of the state

  const title = (state = '', action) => {
    if (action.type === 'CHANGE_LIST_TITLE') {
      return action.title
    } else {
      return state
    }
  }

  const list = (state = [], action) => {
    switch (action.type) {
      case 'ADD_ITEM':
        return state.concat([{ title: action.title }])
      case 'REMOVE_ITEM':
        // return a new array without the item at action.index
        return state.filter((item, index) => index !== action.index)
      default:
        return state
    }
  }
A reducer for the whole state

  const listManager = (state = {}, action) => {
    return {
      title: title(state.title, action),
      list: list(state.list, action)
    }
  }
The Store
The Store is an object that:
holds the state of the app
exposes the state via getState()
allows updating the state via dispatch()
allows (un)registering a state change listener using subscribe()
A store is unique in the app. Here is how a store for the listManager app is created: import { createStore } from 'redux' import listManager from './reducers' let store = createStore(listManager)
Can I initialize the store with server-side data? Sure, just pass a starting state: let store = createStore(listManager, preexistingState)
Getting the state store.getState()
Update the state store.dispatch(addItem('Something'))
Listen to state changes

  const unsubscribe = store.subscribe(() => {
    const newState = store.getState()
  })

  unsubscribe()
Data Flow Data flow in Redux is always unidirectional. You call dispatch() on the Store, passing an Action. The Store takes care of passing the Action to the Reducer, generating the next State. The Store updates the State and alerts all the Listeners.
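A minimal sketch of that flow, reusing the listManager store and the addItem action creator from above (the item value is just an example):

  // register a listener first
  const unsubscribe = store.subscribe(() => {
    console.log('new state', store.getState())
  })

  // dispatching runs the reducers, updates the state, then notifies listeners
  store.dispatch(addItem('Bread'))

  unsubscribe()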
Redux Saga Redux Saga is a library used to handle side effects in Redux. When you fire an action something changes in the state of the app and you might need to do something that derives from this state change When to use Redux Saga Basic example of using Redux Saga How it works behind the scenes Basic Helpers takeEvery() takeLatest() take() put() call()
Running effects in parallel all() race()
When to use Redux Saga
In an application using Redux, when you fire an action, something changes in the state of the app. As this happens, you might need to do something that derives from this state change. For example you might want to:
make an HTTP call to a server
send a WebSocket event
fetch some data from a GraphQL server
save something to the cache or browser local storage
...you get the idea.
Those are all things that don't really relate to the app state, or are async, and you need to move them into a place different from your actions or reducers (while you technically could put them there, it's not a good way to have a clean codebase). Enter Redux Saga, a Redux middleware helping you with side effects.
Basic example of using Redux Saga
To avoid diving into too much theory before showing some actual code, I briefly present how I solved a problem I faced when building a sample app.
In a chat room, when a user writes a message I immediately show the message on the screen, to provide prompt feedback. This is done through a Redux Action:

  const addMessage = (message, author) => ({
    type: 'ADD_MESSAGE',
    message,
    author
  })
and the state is changed through a reducer:

  const messages = (state = [], action) => {
    switch (action.type) {
      case 'ADD_MESSAGE':
        return state.concat([{
          message: action.message,
          author: action.author
        }])
      default:
        return state
    }
  }
You initialize Redux Saga by first importing it, then by applying a saga as a middleware to the Redux Store: //... import createSagaMiddleware from 'redux-saga' //...
Then we create a middleware and we apply it to our newly created Redux Store: const sagaMiddleware = createSagaMiddleware() const store = createStore( reducers, applyMiddleware(sagaMiddleware) )
The last step is running the saga. We import it and pass it to the run method of the middleware:
import handleNewMessage from './sagas' //... sagaMiddleware.run(handleNewMessage)
We just need to write the saga, in ./sagas/index.js:

  import { takeEvery } from 'redux-saga/effects'

  const handleNewMessage = function* handleNewMessage(params) {
    const socket = new WebSocket('ws://localhost:8989')
    yield takeEvery('ADD_MESSAGE', (action) => {
      socket.send(JSON.stringify(action))
    })
  }

  export default handleNewMessage
What this code means is: every time the ADD_MESSAGE action fires, we send a message to the WebSockets server, which in this case listens on localhost:8989. Notice the use of function*, which is not a normal function, but a generator.
How it works behind the scenes Being a Redux Middleware, Redux Saga can intercept Redux Actions, and inject its own functionality. There are a few concepts to grasp, and here are the main keywords that you'll want to stick in your head, altogether: saga, generator, middleware, promise, pause, resume, effect, dispatch, action, fulfilled, resolved, yield, yielded. A saga is some "story" that reacts to an effect that your code is causing. That might contain one of the things we talked before, like an HTTP request or some procedure that saves to the cache. We create a middleware with a list of sagas to run, which can be one or more, and we connect this middleware to the Redux store. A saga is a generator function. When a promise is run and yielded, the middleware suspends the saga until the promise is resolved. Once the promise is resolved the middleware resumes the saga, until the next yield statement is found, and there it is suspended again until its promise resolves. Inside the saga code, you will generate effects using a few special helper functions provided by the redux-saga package. To start with, we can list:
takeEvery() takeLatest() take() call() put()
When an effect is executed, the saga is paused until the effect is fulfilled. For example:

  import { takeEvery } from 'redux-saga/effects'

  const handleNewMessage = function* handleNewMessage(params) {
    const socket = new WebSocket('ws://localhost:8989')
    yield takeEvery('ADD_MESSAGE', (action) => {
      socket.send(JSON.stringify(action))
    })
  }

  export default handleNewMessage
When the middleware executes the handleNewMessage saga, it stops at the yield takeEvery instruction and waits (asynchronously, of course) until the ADD_MESSAGE action is dispatched. Then it runs its callback, and the saga can resume.
Basic Helpers Helpers are abstractions on top of the low-level saga APIs. Let's introduce the most basic helpers you can use to run your effects: takeEvery() takeLatest() take() put() call()
takeEvery() takeEvery() , used in some examples, is one of those helpers.
In the code:

  import { takeEvery } from 'redux-saga/effects'

  function* watchMessages() {
    yield takeEvery('ADD_MESSAGE', postMessageToServer)
  }

the watchMessages generator pauses until an ADD_MESSAGE action fires, and every time it fires, it's going to call the postMessageToServer function, infinitely and concurrently (there is no need for postMessageToServer to terminate its execution before a new one can run).
takeLatest()
Another popular helper is takeLatest(), which is very similar to takeEvery() but only allows one function handler to run at a time, avoiding concurrency. If another action is fired while the handler is still running, it will cancel it, and run again with the latest data available. As with takeEvery(), the generator never stops and continues to run the effect when the specified action occurs.
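A minimal sketch of the same watcher using takeLatest(), reusing the watchMessages and postMessageToServer names from above:

  import { takeLatest } from 'redux-saga/effects'

  function* watchMessages() {
    // only the handler for the latest ADD_MESSAGE keeps running; earlier ones are cancelled
    yield takeLatest('ADD_MESSAGE', postMessageToServer)
  }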
take()
take() is different in that it only waits a single time. When the action it's waiting for occurs, the promise resolves and the iterator is resumed, so it can go on to the next instruction set.
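A minimal sketch of take() in a saga (the action name is just an example):

  import { take } from 'redux-saga/effects'

  function* watchLogout() {
    // suspend until a single LOGOUT action is dispatched, then continue
    yield take('LOGOUT')
    // ...cleanup logic would go here
  }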
put() Dispatches an action to the Redux store. Instead of passing in the Redux store or the dispatch action to the saga, you can just use put() : yield put({ type: 'INCREMENT' }) yield put({ type: "USER_FETCH_SUCCEEDED", data: data })
which returns a plain object that you can easily inspect in your tests (more on testing later).
call() When you want to call some function in a saga, you can do so by using a yielded plain function call that returns a promise: delay(1000)
but this does not play nice with tests. Instead, call() allows you to wrap that function call and returns an object that can be easily inspected:
call(delay, 1000)
returns { CALL: {fn: delay, args: [1000]}}
Running effects in parallel Running effects in parallel is possible using all() and race() , which are very different in what they do.
all()
If you write

  import { call } from 'redux-saga/effects'

  const todos = yield call(fetch, '/api/todos')
  const user = yield call(fetch, '/api/user')
the second fetch() call won't be executed until the first one succeeds. To execute them in parallel, wrap them into all():

  import { all, call } from 'redux-saga/effects'

  const [todos, user] = yield all([
    call(fetch, '/api/todos'),
    call(fetch, '/api/user')
  ])
all() won't be resolved until both call() return.
race()
race() differs from all() by not waiting for all of the helper calls to return. It just waits for one to return, and it's done. It's a race to see which one finishes first, and then we forget about the other participants. It's typically used to cancel a background task that runs forever until something occurs:

  import { race, call, take } from 'redux-saga/effects'
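A minimal sketch of the pattern, with a hypothetical backgroundTask saga:

  function* watchBackgroundTask() {
    yield race({
      task: call(backgroundTask),  // a saga that would otherwise run forever
      cancel: take('CANCEL_TASK')  // whichever finishes first wins; the other is cancelled
    })
  }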
when the CANCEL_TASK action is emitted, we stop the other task that would otherwise run forever.
Setup an Electron app with React
How to create an Electron Node.js desktop application using `create-react-app`
Install npm if you haven't already
Move to your development folder
Create react app
Add electron
Install foreman to allow executing the app from command line
Install the create-react-app dependencies
Configure eslint (your mileage might vary)
Enough with the setup!
Start up
Thanks to
When I first used Electron in 2015 it was not yet clear that it would be so pervasive in modern apps, and I was kind of shocked by the resulting app size.
But, Electron is clearly here to stay and it's not mandatory that your app should feel slow and consume tons of memory, like VS Code demonstrates every day to me (on a not blazing fast machine). So, here's a quick start for a React app with create-react-app , ready to roll with ESlint integration.
Install npm if you haven't already On OSX: brew install npm
Move to your development folder cd ~/dev
Create react app npx create-react-app app cd app
Add electron npm install electron npm install electron-builder -D
Install foreman to allow executing the app from command line npm install foreman -g
Now add ESLint and some of its helpers:

  npm install eslint eslint-config-airbnb eslint-plugin-jsx-a11y eslint-plugin-import eslint-plugin-react
This is what you should have right now:
Now tweak the package.json file to add some electron helpers. Right now its content is something like:

  {
    "name": "gitometer",
    "version": "0.1.0",
    "private": true,
    "dependencies": {
      "electron": "^1.7.5",
      "eslint": "^4.5.0",
      "eslint-config-airbnb": "^15.1.0",
      "eslint-plugin-import": "^2.7.0",
      "eslint-plugin-jsx-a11y": "^6.0.2",
      "eslint-plugin-react": "^7.3.0",
      "react": "^15.6.1",
      "react-dom": "^15.6.1",
      "react-scripts": "1.0.11"
    },
    "scripts": {
      "start": "react-scripts start",
      "build": "react-scripts build",
      "test": "react-scripts test --env=jsdom",
      "eject": "react-scripts eject"
    },
    "devDependencies": {
      "electron-builder": "^19.24.1"
    }
  }
(don't mind the versions, outdated as soon as I publish this)
Remove the scripts property and change it with:

  "scripts": {
    "start": "nf start -p 3000",
    "build": "react-scripts build",
    "test": "react-scripts test --env=jsdom",
    "eject": "react-scripts eject",
    "electron": "electron .",
    "electron-start": "node src/electron-wait-react",
    "react-start": "react-scripts start",
    "pack": "build --dir",
    "dist": "npm run build && build",
    "postinstall": "install-app-deps"
  },
As you can see, start was moved to react-start, but the rest is unchanged, and some electron utils were added. Also add:

  "homepage": "./",
  "main": "src/electron-starter.js",
and you should see the React sample app coming up in a native app:
Thanks to This post was heavily inspired by https://gist.github.com/matthewjberger/6f42452cb1a2253667942d333ff53404
Next.js Next.js is a very popular Node.js framework which enables an easy serverside React rendering, and provides many other amazing features
Introduction
Main features
Installation
Getting started
Create a page
Server-side rendering
Add a second page
Hot reloading
Client rendering
Dynamic pages
CSS-in-JS
Exporting a static site
Deploying
Now
Zones
Plugins
Starter kit on Glitch
Read more on Next.js
Introduction
Working on a modern JavaScript application powered by React is awesome until you realize that there are a couple of problems related to rendering all the content on the client-side.
First, the page takes longer to become visible to the user, because before the content loads, all the JavaScript must load, and your application needs to run to determine what to show on the page.
Second, if you are building a publicly available website, you have a content SEO issue. Search engines are getting better at running and indexing JavaScript apps, but it's much better if we can send them content instead of letting them figure it out.
The solution to both of these problems is server rendering, also called static pre-rendering.
Next.js is one React framework to do all of this in a very simple way, but it's not limited to this. It's advertised by its creators as a zero-configuration, single-command toolchain for React apps. It provides a common structure that allows you to easily build a frontend React application, and transparently handles server-side rendering for you.
Main features
Here is a non-exhaustive list of the main Next.js features:
Hot Code Reloading: Next.js reloads the page when it detects any change saved to disk.
Automatic Routing: any URL is mapped to the filesystem, to files put in the pages folder, and you don't need any configuration (you have customization options of course).
Single File Components: using styled-jsx, completely integrated as built by the same team, it's trivial to add styles scoped to the component.
Server Rendering: you can (optionally) render React components on the server side, before sending the HTML to the client.
Ecosystem Compatibility: Next.js plays well with the rest of the JavaScript, Node and React ecosystem.
Automatic Code Splitting: pages are rendered with just the libraries and JavaScript that they need, not more.
Prefetching: the Link component, used to link together different pages, supports a prefetch prop which automatically prefetches page resources (including code missing due to code splitting) in the background.
Dynamic Components: you can import JavaScript modules and React Components dynamically (https://github.com/zeit/next.js#dynamic-import).
Static Exports: using the next export command, Next.js allows you to export a fully static site from your app.
Installation Next.js supports all the major platforms: Linux, macOS, Windows. A Next.js project is started easily with npm: npm install --save next react react-dom
or with Yarn: yarn add next react react-dom
Getting started Create a package.json file with this content: { "scripts": { "dev": "next" } }
If you run this command now: npm run dev
the script will raise an error complaining about not finding the pages folder. This is the only thing that Next.js requires to run. Create an empty pages folder, and run the command again, and Next.js will start up a server on localhost:3000 . If you go to that URL now, you'll be greeted by a friendly 404 page, with a nice clean design.
Next.js handles other error types as well, like the 500 errors for example.
Create a page
In the pages folder create an index.js file with a simple React functional component:

  export default () => (
    <div>Hello World!</div>
  )
If you visit localhost:3000 , this component will automatically be rendered. Why is this so simple? Next.js uses a declarative pages structure, which is based on the filesystem structure. Simply put, pages are inside a pages folder, and the page URL is determined by the page file name. The filesystem is the pages API.
Server-side rendering Open the page source, View -> Developer -> View Source with Chrome. As you can see, the HTML generated by the component is sent directly in the page source. It's not rendered in the client-side, but instead it's server rendered. The Next.js team wanted to create a developer experience for server rendered pages similar to the one you get when creating a basic PHP project, where you simply drop PHP files and you call them, and they show up as pages. Internally of course it's all very different, but the
apparent ease of use is clear.
Add a second page Let's create another page, in pages/contact.js:

export default () => (
  <div>Contact us!</div>
)

If you point your browser to localhost:3000/contact this page will be rendered. As you can see, this page is server rendered as well.
Hot reloading Note how you did not have to restart the npm process to load the second page. Next.js does this for you under the hood.
Client rendering Server rendering is very convenient for your first page load, for all the reasons we saw above, but when it comes to navigating inside the website, client-side rendering is key to speeding up the page load and improving the user experience. Next.js provides a Link component you can use to build links. Try linking the two pages above. Change index.js to this code:

import Link from 'next/link'

export default () => (
  <div>
    <p>Hello World!</p>
    <Link href="/contact">
      <a>Contact me!</a>
    </Link>
  </div>
)
Now go back to the browser and try this link. As you can see, the Contact page loads immediately, without a page refresh. This is client-side navigation working correctly, with complete support for the History API, which means your users' back button won't break. If you now cmd-click the link, the same Contact page will open in a new tab, this time server rendered.
Dynamic pages A good use case for Next.js is a blog, as it's something all developers are familiar with, and it's a good fit for a simple example of how to handle dynamic pages. A dynamic page is a page that has no fixed content, but instead displays some data based on some parameters. Change index.js to:

import Link from 'next/link'

const Post = props => (
  <li>
    <Link href={`/post?title=${props.title}`}>
      <a>{props.title}</a>
    </Link>
  </li>
)

export default () => (
  <div>
    <h2>My blog</h2>
    <ul>
      <Post title="Yet another post" />
      <Post title="Second post" />
      <Post title="Hello, world!" />
    </ul>
  </div>
)

(the post titles here are just examples)

This will create a series of posts and will fill the title query parameter with the post title:
Now create a post.js file in the pages folder, and add:

export default props => <h1>{props.url.query.title}</h1>

Now clicking a single post will render the post title in an h1 tag:
You can use clean URLs without query parameters. The Next.js Link component helps us by accepting an as attribute, which you can use to pass a slug:

import Link from 'next/link'

const Post = props => (
  <li>
    <Link as={`/${props.slug}`} href={`/post?title=${props.title}`}>
      <a>{props.title}</a>
    </Link>
  </li>
)

export default () => (
  <div>
    <h2>My blog</h2>
    <ul>
      <Post slug="yet-another-post" title="Yet another post" />
      <Post slug="second-post" title="Second post" />
      <Post slug="hello-world" title="Hello, world!" />
    </ul>
  </div>
)

(again, the slugs and titles are just examples)
CSS-in-JS Next.js by default provides support for styled-jsx, which is a CSS-in-JS solution provided by the same development team, but you can use whatever library you prefer, like Styled Components.
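A minimal sketch of what a styled-jsx component can look like (the markup and the CSS rule here are just illustrative):

export default () => (
  <div>
    <p>Styled with styled-jsx</p>
    <style jsx>{`
      p {
        color: blue;
      }
    `}</style>
  </div>
)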
Exporting a static site A Next.js application can be easily exported as a static site, which can be deployed on one of the super fast static site hosts, like Netlify or Firebase Hosting, without the need to set up a Node environment. The process requires you to declare the URLs that compose the site, but it's a straightforward process.
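As a sketch of that process (the pages listed in exportPathMap below are hypothetical), you declare the site's URLs in next.config.js and then run next build followed by next export:

// next.config.js
module.exports = {
  exportPathMap: async function() {
    return {
      '/': { page: '/' },
      '/contact': { page: '/contact' }
    }
  }
}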
Deploying Creating a production-ready copy of the application, without source maps or other development tooling that is unneeded in the final build, is easy. At the beginning of this tutorial you created a package.json file with this content:

{
  "scripts": {
    "dev": "next"
  }
}

which was the way to start up a development server using npm run dev. Now just add the following content to package.json instead:

{
  "scripts": {
    "dev": "next",
    "build": "next build",
    "start": "next start"
  }
}
and prepare your app by running npm run build and npm run start .
Now
The company behind Next.js provides an awesome hosting service for Node.js applications, called Now. Of course they integrate both their products so you can deploy Next.js apps seamlessly, once you have Now installed, by running the now command in the application folder. Behind the scenes Now sets up a server for you, and you don't need to worry about anything, just wait for your application URL to be ready.
Zones You can set up multiple Next.js instances to listen to different URLs, yet to an outside user the application will simply look like it's powered by a single server: https://github.com/zeit/next.js/#multi-zones
Plugins Next.js has a list of plugins at https://github.com/zeit/next-plugins
Starter kit on Glitch If you're looking to experiment, I recommend Glitch. Check out my Next.js Glitch Starter Kit.
Read more on Next.js I can't possibly describe every feature of this great framework, and the main place to read more about Next.js is the project readme on GitHub.
Introduction to Vue Vue is a very impressive project. It's a very popular JavaScript framework, one that's experiencing a huge growth. It is simple, tiny and very performant. Learn more about it First, what is a JavaScript frontend framework? The popularity of Vue Why developers love Vue Where does Vue.js position itself in the frameworks landscape
Vue is a very popular JavaScript frontend framework, one that's experiencing a huge growth. It is simple, tiny (~24KB) and very performant. It feels different from all the other JavaScript frontend frameworks and view libraries. Let's find out why.
First, what is a JavaScript frontend framework? If you're unsure what a JavaScript framework is, Vue is the perfect first encounter with one. A JavaScript framework helps us to create modern applications. Modern JavaScript applications are mostly used on the Web, but also power a lot of Desktop and Mobile applications. Until the early 2000s, browsers didn't have the capabilities they have now. They were a lot less powerful, and building complex applications inside them was not feasible performance-wise, and the tooling was not even something that people thought about. Everything changed when Google unveiled Google Maps and GMail, two applications that ran inside the browser. Ajax made asynchronous network requests possible, and over time developers started building on top of the Web platform, while engineers worked on the platform itself: browsers, the Web standards, the browser APIs, and the JavaScript language. Libraries like jQuery and Mootools were the first big projects that built upon JavaScript and were hugely popular for a while. They basically provided a nicer API to interact with the browser and provided workarounds for bugs and inconsistencies among the various browsers. Frameworks like Backbone, Ember, Knockout, AngularJS were the first wave of modern JavaScript frameworks. The second wave, which is the current one, has React, Angular, and Vue as its main actors.
Note that jQuery, Ember and the other projects I mentioned are still being heavily used, actively maintained, and millions of websites rely on them. That said, techniques and tools evolve, and as a JavaScript developer, you're now likely to be required to know React, Angular or Vue rather than those older frameworks. Frameworks abstract the interaction with the browser and the DOM. Instead of manipulating elements by referencing them in the DOM, we declaratively define and interact with them, at a higher level. Using a framework is like using the C programming language instead of using the Assembly language to write system programs. It's like using a computer to write a document instead of using a typewriter. It's like having a self-driving car instead of driving the car yourself. Well, not that far, but you get the idea. Instead of using low-level APIs offered by the browser to manipulate elements, and build hugely complex systems to write an application, you use tools built by very smart people that make our life easier.
The popularity of Vue How popular is Vue.js? Vue had:
7,600 stars on GitHub in 2016
36,700 stars on GitHub in 2017
and it has more than 100,000 stars on GitHub, as of June 2018. Its npm download count is growing every day, and now it's at ~350,000 downloads per week. I would say Vue is very popular, given those numbers. In relative terms, it has approximately the same number of GitHub stars as React, which was born years before. Numbers are not everything, of course. The impression I have of Vue is that developers love it. A key moment in the rise of Vue has been its adoption in the Laravel ecosystem, a hugely popular PHP web application framework, but since then it has become widespread among many other development communities.
Why developers love Vue First, Vue is called a progressive framework.
This means that it adapts to the needs of the developer. While other frameworks require a complete buy-in from a developer or team and often want you to rewrite an existing application because they require some specific set of conventions, Vue happily lands inside your app with a simple script tag, to start with, and it can grow along with your needs, spreading from 3 lines to managing your entire view layer. You don't need to know about webpack, Babel, npm or anything to get started with Vue, but when you're ready Vue makes it simple for you to rely on them. This is one great selling point, especially in the current ecosystem of JavaScript frontend frameworks and libraries that tends to alienate newcomers and also experienced developers that feel lost in the ocean of possibilities and choices. Vue.js is probably the most approachable frontend framework around. Some people call Vue the new jQuery, because it easily gets in the application via a script tag, and gradually gains space from there. Think of it as a compliment, since jQuery dominated the Web in the past few years, and it still does its job on a huge number of sites. Vue picks from the best ideas. It was built by picking the best ideas of frameworks like Angular, React and Knockout, and by cherry-picking the best choices those frameworks made, and excluding some less brilliant ones, it kind of started as a "best-of" set and grew from there.
Where does Vue.js position itself in the frameworks landscape The 2 elephants in the room, when talking about web development, are React and Angular. How does Vue position itself relative to those 2 big and popular frameworks? Vue was created by Evan You when he was working at Google on AngularJS (Angular 1.0) apps and was born out of a need to create more performant applications. Vue picked some of the Angular templating syntax, but removed the opinionated, complex stack that Angular required, and made it very performant. The new Angular (Angular 2.0) also solved many of the AngularJS issues, but in very different ways, and requires a buy-in to TypeScript which not all developers enjoy using (or want to learn). What about React? Vue took many good ideas from React, most importantly the Virtual DOM. But Vue implements it with some sort of automatic dependency management, which tracks which components are affected by a change of the state so that only those components are rerendered when that state property changes. In React on the other hand when a part of the state that affects a component changes, the component will be re-rendered and by default all its children will be rerendered as well. To avoid this you need to use the
shouldComponentUpdate method of each component and determine if that component should be rerendered. This gives Vue a bit of an advantage in terms of ease of use, and out of the box performance gains. One big difference with React is JSX. While you can technically use JSX in Vue, it's not a popular approach and instead the templating system is used. Any HTML file is a valid Vue template, while JSX is very different from HTML and has a learning curve for people in the team that might only need to work with the HTML part of your app, like designers. Vue templates are very similar to Mustache and Handlebars (although they differ in terms of flexibility) and as such, they are more familiar to developers that already used frameworks like Angular and Ember. The official state management library, Vuex, follows the Flux architecture and is somewhat similar to Redux in its concepts. Again, this is part of the positive things about Vue, which saw this good pattern in React and brought it into its ecosystem. And while you can use Redux with Vue, Vuex is specifically tailored for Vue and its inner workings. Vue is flexible, but the fact that the core team maintains two packages that are very important for any web app, routing and state management, makes it a lot less fragmented than React for example: vue-router and vuex are key to the success of Vue. You don't need to choose or worry if that library you chose is going to be maintained in the future and will keep up with framework updates, and being official they are the canonical go-to libraries for their niche (but you can choose to use what you like, of course). One thing that puts Vue in a different bucket compared to React and Angular is that Vue is an indie project: it's not backed by a huge corporation like Facebook or Google. Instead, it's completely backed by the community, which fosters development through donations and sponsors. This makes sure the roadmap of Vue is not driven by a single company agenda.
Vue First App If you've never created a Vue.js application, I am going to guide you through the task of creating one, and understanding how it works. The app we're going to build is already done, and it's the Vue CLI default application First example See on Codepen Second example: the Vue CLI default app Use the Vue CLI locally Use CodeSandbox The files structure index.html src/main.js src/App.vue src/components/HelloWorld.vue
Run the app
If you've never created a Vue.js application, I am going to guide you through the task of creating one, and understanding how it works.
First example First I'll use the most basic example of using Vue. You create an HTML file which contains a div, a script tag that loads Vue, and a small inline script,
and you open it in the browser. That's your first Vue app! The page should show a "Hello World!" message. I put the script tags at the end of the body so that they are executed in order after the DOM is loaded. What this code does is instantiate a new Vue app, linked to the #example element as its template (it's defined using a CSS selector usually, but you can also pass in an HTMLElement). Then, it associates that template to the data object. That is a special object that hosts the data we want Vue to render. In the template, the special {{ }} tag indicates that's some part of the template that's dynamic, and its content should be looked up in the Vue app data.
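A minimal sketch of that HTML file, assuming the #example element and a hello data property described here:

<!DOCTYPE html>
<html>
  <body>
    <div id="example">
      <p>{{ hello }}</p>
    </div>
    <script src="https://unpkg.com/vue"></script>
    <script>
      new Vue({
        el: '#example',
        data: { hello: 'Hello World!' }
      })
    </script>
  </body>
</html>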
See on Codepen You can see this example on Codepen: https://codepen.io/flaviocopes/pen/YLoLOp
Codepen is a little different from using a plain HTML file, and you need to configure it to point to the Vue library location in the Pen settings:
Second example: the Vue CLI default app Let's level up the game a little bit. The next app we're going to build is already done, and it's the Vue CLI default application. What is the Vue CLI? It's a command line utility that helps to speed up development by scaffolding an application skeleton for you, with a sample app in place. There are two ways you can get this application.
Use the Vue CLI locally The first is to install the Vue CLI on your computer, and run the command vue create
Use CodeSandbox A simpler way, without having to install anything, is to go to https://codesandbox.io/s/vue. CodeSandbox is a cool code editor that allows you to build apps in the cloud, which allows you to use any npm package and also easily integrate with Zeit Now for an easy deployment and GitHub to manage versioning. That link I put above opens the Vue CLI default application. Whether you choose to use the Vue CLI locally, or through CodeSandbox, let's inspect that Vue app in detail.
The files structure Besides package.json, which contains the configuration, these are the files contained in the initial project structure:
index.html
src/App.vue
src/main.js
src/assets/logo.png
src/components/HelloWorld.vue
index.html The index.html file is the main app file. In the body it includes just one simple element: <div id="app"></div>. This is the element the Vue application will use to attach to the DOM.
src/main.js This is the main JavaScript file that drives our app. We first import the Vue library and the App component from App.vue. We set productionTip to false, just to avoid Vue outputting a "you're in development mode" tip in the console. Next, we create the Vue instance, by assigning it to the DOM element identified by #app, which we defined in index.html, and we tell it to use the App component.

// The Vue build version to load with the `import` command
// (runtime-only or standalone) has been set in webpack.base.conf with an alias.
import Vue from 'vue'
import App from './App'

Vue.config.productionTip = false

/* eslint-disable no-new */
new Vue({
  el: '#app',
  components: { App },
  template: '<App/>'
})
src/App.vue App.vue is a Single File Component. It contains 3 chunks of code: HTML, CSS and JavaScript. This might seem weird at first, but Single File Components are a great way to create self-contained components that have all they need in a single file. We have the markup, the JavaScript that is going to interact with it, and the style that's applied to it, which can be scoped, or not. In this case, it's not scoped, and it's just outputting that CSS which is applied like regular CSS to the page. The interesting part lies in the script tag. We import a component from the components/HelloWorld.vue file, which we'll describe later. This component is going to be referenced in our component. It's a dependency. We are going to output this code:
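A minimal sketch of that template, assuming the default Vue CLI markup (the exact attributes may differ):

<template>
  <div id="app">
    <img src="./assets/logo.png">
    <HelloWorld/>
  </div>
</template>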
from this component, which you see references the HelloWorld component. Vue will automatically insert that component inside this placeholder.
src/components/HelloWorld.vue Here's the HelloWorld component, which is included by the App component. This component outputs a set of links, along with a message. Remember above we talked about CSS in App.vue, which was not scoped? The HelloWorld component has scoped CSS. You can easily determine it by looking at the style tag: if it has the scoped attribute ( <style scoped> ), then it's scoped. This means that the generated CSS will be targeting the component uniquely, via a class that's applied by Vue transparently. You don't need to worry about this, and you know the CSS won't leak to other parts of the page. The message the component outputs is stored in the data property of the Vue instance, and outputted in the template as {{ msg }}.
Anything that's stored in data is reachable directly in the template via its own name. We didn't need to say data.msg , just msg .
The component's template renders the {{ msg }} heading followed by a list of links (Essential Links: Core Docs, Forum, Community Chat, Twitter, Docs for This Template; Ecosystem: vue-router, vuex, vue-loader, awesome-vue). Its script and scoped style look like this:

export default {
  name: 'HelloWorld',
  data() {
    return {
      msg: 'Welcome to Your Vue.js App'
    }
  }
}

h1, h2 {
  font-weight: normal;
}
ul {
  list-style-type: none;
  padding: 0;
}
li {
  display: inline-block;
  margin: 0 10px;
}
a {
  color: #42b983;
}
Run the app CodeSandbox has a cool preview functionality. You can run the app and edit anything in the source to have it immediately reflected in the preview.
The Vue CLI Vue is a very impressive project, and in addition to the core of the framework, it maintains a lot of utilities that make a Vue programmer's life easier. One of them is the Vue CLI. Installation What does the Vue CLI provide? How to use the CLI to create a new Vue project How to start the newly created Vue CLI application Git repository Use a preset from the command line Where presets are stored Plugins Remotely store presets Another use of the Vue CLI: rapid prototyping Webpack
Vue is a very impressive project, and in addition to the core of the framework, it maintains a lot of utilities that make a Vue programmer's life easier. One of them is the Vue CLI. CLI stands for Command Line Interface. Note: There is a huge rework of the CLI going on right now, going from version 2 to 3. While not yet stable, I will describe version 3 because it's a huge improvement over version 2, and quite different.
Installation The Vue CLI is a command line utility, and you install it globally using npm:

npm install -g @vue/cli

or using Yarn:

yarn global add @vue/cli
Once you do so, you can invoke the vue command.
What does the Vue CLI provide? The CLI is essential for rapid Vue.js development. Its main goal is to make sure all the tools you need work together, to perform what you need, and it abstracts away all the nitty-gritty configuration details that using each tool in isolation would require. It can perform an initial project setup and scaffolding. It's a flexible tool: once you create a project with the CLI, you can go and tweak the configuration, without having to eject your application (like you'd do with create-react-app). When you eject from create-react-app you can update and tweak what you want, but you can't rely on the cool features that create-react-app provides. You can configure anything and still be able to upgrade with ease. After you create and configure the app, it acts as a runtime dependency tool, built on top of webpack. The first encounter with the CLI is when creating a new Vue project.
How to use the CLI to create a new Vue project The first thing you're going to do with the CLI is to create a Vue app:
vue create example
The cool thing is that it's an interactive process. You need to pick a preset. By default, there is one preset that's providing Babel and ESLint integration:
I'm going to press the down arrow ⬇ and manually choose the features I want:
Press space to enable one of the things you need, and then press enter to go on. Since I chose a linter/formatter, Vue CLI prompts me for the configuration. I chose ESLint + Prettier since that's my favorite setup:
Next thing is choosing how to apply linting. I choose lint on save.
Next up: testing. I picked testing, and Vue CLI offers me to choose between the two most popular solutions: Mocha + Chai vs Jest.
Vue CLI asks me where to put all the configuration: if in the package.json file, or in dedicated configuration files, one for each tool. I chose the latter.
Next, Vue CLI asks me if I want to save these presets, and allows me to pick them as a choice the next time I use Vue CLI to create a new app. It's a very convenient feature, as having a quick setup with all my preferences is a complexity reliever:
Vue CLI then asks me if I prefer using Yarn or npm:
and it's the last thing it asks me, and then it goes on to download the dependencies and create the Vue app:
How to start the newly created Vue CLI application Vue CLI has created the app for us, and we can go in the example folder and run yarn serve to start up our first app in development mode:
The starter example application source contains a few files, including package.json :
This is where all the CLI commands are defined, including yarn serve, which we used a minute ago. The other commands are:
yarn build, to start a production build
yarn lint, to run the linter
yarn test:unit, to run the unit tests
I will describe the sample application generated by Vue CLI in a separate tutorial.
Git repository
Notice the master word in the lower-left corner of VS Code? That's because Vue CLI automatically creates a repository, and makes the first commit, so we can jump right in, change things, and we know what we changed:
This is pretty cool. How many times have you dived in and changed things, only to realize, when you want to commit the result, that you didn't commit the initial state?
Use a preset from the command line You can skip the interactive panel and instruct Vue CLI to use a particular preset: vue create -p favourite example-2
Where presets are stored Presets are stored in the .vuejs file in your home directory. Here's mine after creating the first "favorite" preset { "useTaobaoRegistry": false, "packageManager": "yarn", "presets": { "favourite": { "useConfigFiles": true, "plugins": { "@vue/cli-plugin-babel": {}, "@vue/cli-plugin-eslint": { "config": "prettier", "lintOn": [ "save"
Plugins As you can see from reading the configuration, a preset is basically a collection of plugins, with some optional configuration. Once a project is created, you can add more plugins by using vue add:

vue add @vue/cli-plugin-babel
All those plugins are used in the latest version available. You can force Vue CLI to use a specific version by passing the version property:

"@vue/cli-plugin-eslint": {
  "version": "^3.0.0"
}
this is useful if a new version has a breaking change or a bug, and you need to wait a little bit before using it.
Remotely store presets A preset can be stored in GitHub (or on other services) by creating a repository that contains a preset.json file, which contains a single preset configuration. Extracted from the above, I
made a sample preset in https://github.com/flaviocopes/vue-cli-preset/blob/master/preset.json which contains this configuration: { "useConfigFiles": true, "plugins": { "@vue/cli-plugin-babel": {}, "@vue/cli-plugin-eslint": { "config": "prettier", "lintOn": [ "save"
It can be used to bootstrap a new application using: vue create --preset flaviocopes/vue-cli-preset example3
Another use of the Vue CLI: rapid prototyping Until now I've explained how to use the Vue CLI to create a new project from scratch, with all the bells & whistles. But for really quick prototyping, you can create a really simple Vue application, even one that's self-contained in a single .vue file, and serve that, without having to download all the dependencies in the node_modules folder. How? First install the cli-service-global global package:

npm install -g @vue/cli-service-global

or

yarn global add @vue/cli-service-global
Create an app.vue file with a minimal template, for example:

<template>
  <div>
    <p>Hello world!</p>
    <p>Heyyy</p>
  </div>
</template>

and then run:

vue serve app.vue
You can serve more organized projects, composed of JavaScript and HTML files as well. Vue CLI by default uses main.js / index.js as its entry point, and you can have a package.json and any tool configuration set up. vue serve will pick it up. Since this uses global dependencies, it's not an optimal approach for anything more than demonstration or quick testing. Running vue build will prepare the project for deployment in dist/ , and generate all the corresponding code, also for vendor dependencies.
Webpack Internally, Vue CLI uses webpack, but the configuration is abstracted and we don't even see the config file in our folder. You can still have access to it by calling vue inspect :
DevTools Vue has a great panel that integrates into the Browser Developer Tools, which lets you inspect your application and interact with it, to ease debugging and understanding Install on Chrome Install on Firefox Install the standalone app How to use the Developer Tools Filter components Select component in the page Format components names Filter inspected data Inspect DOM Open in editor
When you're first experimenting with Vue, if you open the Browser Developer Tools you will find this message: "Download the Vue Devtools extension for a better development experience: https://github.com/vuejs/vue-devtools"
This is a friendly reminder to install the Vue Devtools Extension. What's that? Any popular framework has its own devtools extension, which generally adds a new panel to the browser developer tools that is much more specialized than the ones that the browser ships by default. In this case, the panel will let us inspect our Vue application and interact with it.
This tool will be an amazing help when building Vue apps. The developer tools can only inspect a Vue application when it's in development mode. This makes sure no one can use them to interact with your production app (and will make Vue more performant because it does not have to care about the devtools) Let's install it! There are 3 ways to install the Vue Dev Tools: on Chrome on Firefox as a standalone application Safari, Edge and other browsers are not supported with a custom extension, but using the standalone application you can debug a Vue.js app running in any browser.
Install on Chrome Go to this page on the Google Chrome Store: https://chrome.google.com/webstore/detail/vuedevtools/nhdogjmejiglipccpnnnanhbledajbpd and click Add to Chrome.
Go through the installation process:
The Vue.js devtools icon shows up in the toolbar. If the page does not have a Vue.js instance running, it's grayed out.
If Vue.js is detected, the icon has the Vue logo colors.
The icon does nothing except showing us that there is a Vue.js instance. To use the devtools, we must open the Developer Tools panel, using "View → Developer → Developer Tools", or Cmd-Alt-i
Install on Firefox You can find the Firefox dev tools extension in the Mozilla addons store: https://addons.mozilla.org/en-US/firefox/addon/vue-devtools/
Click "Add to Firefox" and the extension will be installed. As with Chrome, a grayed icon shows up in the toolbar
And when you visit a site that has a Vue instance running, it will become green, and when we open the Dev Tools we will see a "Vue" panel:
Install the standalone app Alternatively, you can use the DevTools standalone app. Simply install it using:

npm install -g @vue/devtools

or

yarn global add @vue/devtools

and run it by calling:

vue-devtools
This will open the standalone Electron-based application.
Now paste the script tag it shows you
inside the project index.html file, and wait for the app to be reloaded, and it will automatically connect to the app:
How to use the Developer Tools As mentioned, the Vue DevTools can be activated by opening the Developer Tools in the browser and moving to the Vue panel. Another option is to right-click on any element in the page, and choose "Inspect Vue component":
When the Vue DevTools panel is open, we can navigate the components tree. When we choose a component from the list on the left, the right panel shows the props and data it holds:
On the top there are 4 buttons: Components (the current panel), which lists all the component instances running in the current page. Vue can have multiple instances running at the same time, for example it
might manage your shopping cart widget and the slideshow, with separate, lightweight apps.
Vuex is where you can inspect the state managed through Vuex.
Events shows all the events emitted.
Refresh reloads the devtools panel.
Notice the small = $vm0 text beside a component? It's a handy way to inspect a component using the Console. Pressing the "esc" key brings up the console at the bottom of the devtools, and you can type $vm0 to access the Vue component:
This is very cool to inspect and interact with components without having to assign them to a global variable in the code.
Filter components Start typing a component name, and the components tree will filter out the ones that don't match.
Select component in the page Click the select-component button and you can hover over any component in the page with the mouse, click it, and it will be opened in the devtools.
Format components names You can choose to show components in camelCase or use dashes.
Filter inspected data On the right panel, you can type any word to filter the properties that don't match it.
Inspect DOM Click the Inspect DOM button to be brought to the DevTools Elements inspector, with the DOM element generated by the component:
Open in editor Any user component (not framework-level components) has a button that opens it in your default editor. Very handy.
Configuring VS Code for Vue Development Visual Studio Code is one of the most used code editors in the world right now. When you're such a popular editor, people build nice plugins. One such plugin is an awesome tool that can help us Vue.js developers. Vetur Installing Vetur Syntax highlighting Snippets IntelliSense Scaffolding Emmet Linting and error checking Code Formatting
Visual Studio Code is one of the most used code editors in the world right now. Editors have, like many software products, a cycle. Once TextMate was the developers' favorite, then it was Sublime Text, now it's VS Code. The cool thing about being popular is that people dedicate a lot of time to building plugins for everything they imagine. One such plugin is an awesome tool that can help us Vue.js developers.
Vetur It's called Vetur, it's hugely popular, with more than 3 million downloads, and you can find it on the Visual Studio Marketplace.
Installing Vetur Clicking the Install button will trigger the installation panel in VS Code:
You can also simply open the Extensions in VS Code and search for "vetur":
What does this extension provide?
Syntax highlighting Vetur provides syntax highlighting for all your Vue source code files. Without Vetur, a .vue file will be displayed in this way by VS Code:
with Vetur installed:
VS Code is able to recognize the type of code contained in a file from its extension. Using Single File Components, you mix different types of code inside the same file, from CSS to JavaScript to HTML. VS Code by default cannot recognize this kind of situation, and Vetur allows it to provide syntax highlighting for each kind of code you use. Vetur enables support, among others, for:
HTML
CSS
JavaScript
Pug
Haml
SCSS
PostCSS
Sass
Stylus
TypeScript
Snippets
As with syntax highlighting, since VS Code cannot determine the kind of code contained in a part of a .vue file, it cannot provide the snippets we all love: pieces of code we can add to the file, provided by specialized plugins. Vetur provides VS Code the ability to use your favorite snippets in Single File Components.
IntelliSense IntelliSense is also enabled by Vetur, for each different language, with autocomplete:
Scaffolding In addition to enabling custom snippets, Vetur provides its own set of snippets. Each one creates a specific tag (template, script or style) with its own language: scaffold template with html template with pug script with JavaScript script with TypeScript
698
Configuring VS Code for Vue Development
style with CSS style with CSS (scoped) style with scss style with scss (scoped) style with less style with less (scoped) style with sass style with sass (scoped) style with postcss style with postcss (scoped) style with stylus style with stylus (scoped)
If you type scaffold, you'll get a starter pack for a single-file component:

<template>

</template>

<script>
export default {

}
</script>

<style>

</style>
the others are specific and create a single block of code. Note: (scoped) means that it applies to the current component only
Emmet Emmet, the popular HTML/CSS abbreviations engine, is supported by default. You can type one of the Emmet abbreviations and by pressing tab VS Code will automatically expand it to the HTML equivalent:
Linting and error checking Vetur integrates with ESLint, through the VS Code ESLint plugin.
Code Formatting
Vetur provides automatic support for code formatting, to format the whole file upon save (in combination with the "editor.formatOnSave" VS Code setting). You can choose to disable automatic formatting for some specific language by setting vetur.format.defaultFormatter.XXXXX to none in the VS Code settings. To change one of those settings, just start searching for the string, and override what you want in the user settings (the right panel). Most of the languages supported use Prettier for automatic formatting, a tool that's becoming an industry standard. It uses your Prettier configuration to determine your preferences.
Components Components are single, independent units of an interface. They can have their own state, markup and style.
How to use components Vue components can be defined in 4 main ways. Let's talk in code. The first is:

new Vue({ /* options */ })

The second is:

Vue.component('component-name', { /* options */ })

The third is by using local components: components that are only accessible by a specific component, and not available elsewhere (great for encapsulation). The fourth is in .vue files, also called Single File Components. Let's dive into the first 3 ways in detail. Using new Vue() or Vue.component() is the standard way to use Vue when you're building an application that is not a Single Page Application (SPA) but rather uses Vue.js just in some pages, like in a contact form or in the shopping cart. Or maybe Vue is used in all pages, but the server is rendering the layout, and you serve the HTML to the client, which then loads the Vue application you build. In an SPA, where it's Vue that builds the HTML, it's more common to use Single File Components as they are more convenient.
You instantiate Vue by mounting it on a DOM element. If you have a <div id="app"></div> tag, you will use:

new Vue({ el: '#app' })
A component initialized with new Vue has no corresponding tag name, so it's usually the main container component. Other components used in the application are initialized using Vue.component() . Such a component allows you to define a tag, with which you can embed the component multiple times in the application, and specify the output of the component in the template property:
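A minimal sketch of what that example might look like, given the user-name component and the name prop discussed next (the value passed to the prop is just a placeholder):

<div id="app">
  <user-name name="Flavio"></user-name>
</div>

<script>
Vue.component('user-name', {
  props: ['name'],
  template: '<p>Hi {{ name }}</p>'
})

new Vue({
  el: '#app'
})
</script>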
What are we doing? We are initializing a Vue root component on #app , and inside that, we use the Vue component user-name , which abstracts our greeting to the user. The component accepts a prop, which is an attribute we use to pass data down to child components. In the Vue.component() call we passed user-name as the first parameter. This gives the component a name. You can write the name in 2 ways here. The first is the one we used, called kebab-case. The second is called PascalCase, which is like camelCase, but with the first letter capitalized: Vue.component('UserName', { /* ... */ })
Vue internally automatically creates an alias from user-name to UserName , and vice versa, so you can use whatever you like. It's generally best to use UserName in the JavaScript, and user-name in the template.
Local components Any component created using Vue.component() is globally registered. You don't need to assign it to a variable or pass it around to reuse it in your templates. You can encapsulate components locally by assigning an object that defines the component to a variable:

const Sidebar = {
  template: '<aside>Sidebar</aside>'
}

and then make it available inside another component by using the components property:

new Vue({
  el: '#app',
  components: { Sidebar }
})

You can write the component in the same file, but a great way to do this is to use JavaScript modules:

import Sidebar from './Sidebar'

export default {
  el: '#app',
  components: { Sidebar }
}
Reusing a component A child component can be added multiple times. Each separate instance is independent of the others:
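A sketch of that reuse, with placeholder names passed to each instance:

<div id="app">
  <user-name name="Flavio"></user-name>
  <user-name name="Roger"></user-name>
  <user-name name="Syd"></user-name>
</div>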
The building blocks of a component So far we've seen how a component can accept the el, props and template properties.
el is only used in root components initialized using new Vue({}), and identifies the DOM element the component will mount on.
props lists all the properties that we can pass down to a child component.
template is where we can set up the component template, which will be responsible for defining the output the component generates.
A component accepts other properties:
data: the component local state
methods: the component methods
computed: the computed properties associated with the component
watch: the component watchers
Single File Components Learn how Vue helps you create a single file that is responsible for everything that regards a single component, centralizing the responsibility for the appearance and behavior. A Vue component can be declared in a JavaScript file (.js) like this:

Vue.component('component-name', { /* options */ })

or also:

new Vue({ /* options */ })
or it can be specified in a .vue file. The .vue file is pretty cool because it allows you to define JavaScript logic HTML code template CSS styling all in just a single file, and as such it got the name of Single File Component. Here's an example:
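A minimal sketch of such a .vue file, reusing the hello data property shown later in this chapter:

<template>
  <p>{{ hello }}</p>
</template>

<script>
export default {
  data() {
    return {
      hello: 'Hello World!'
    }
  }
}
</script>

<style scoped>
p {
  color: blue;
}
</style>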
All of this is possible thanks to the use of webpack. The Vue CLI makes this very easy and supported out of the box. .vue files cannot be used without a webpack setup, and as such, they are not very suited to apps that just use Vue on a page without being a full-blown single-page app (SPA). Since Single File Components rely on webpack, we get for free the ability to use modern Web features. Your CSS can be defined using SCSS or Stylus, the template can be built using Pug, and all you need to do to make this happen is to declare to Vue which language preprocessor you are going to use. The list of supported preprocessors includes:
TypeScript
SCSS
Sass
Less
PostCSS
Pug
We can use modern JavaScript (ES6-7-8) regardless of the target browser, using the Babel integration, and ES Modules too, so we can use import/export statements. We can use CSS Modules to scope our CSS. Speaking of scoping CSS, Single File Components make it absolutely easy to write CSS that won't leak to other components, by using <style scoped> tags. If you omit scoped, the CSS you define will be global. But adding that, Vue automatically adds a specific class to the component, unique to your app, so the CSS is guaranteed to not leak out to other components. Maybe your JavaScript is huge because of some logic you need to take care of. What if you want to use a separate file for your JavaScript? You can use the src attribute to externalize it:
<template>
  <p>{{ hello }}</p>
</template>

<script src="./hello.js"></script>
This also works for your CSS:
<template>
  <p>{{ hello }}</p>
</template>

<script src="./hello.js"></script>
<style src="./hello.css"></style>
Notice how I used

export default {
  data() {
    return {
      hello: 'Hello World!'
    }
  }
}

in the component's JavaScript to set up the data. Other common ways you will see are:

export default {
  data: function() {
    return {
      name: 'Flavio'
    }
  }
}

(the above is equivalent to what we did before) or:

export default {
  data: () => {
    return {
      name: 'Flavio'
    }
  }
}
this is different because it uses an arrow function. Arrow functions are fine until we need to access a component method, because we need to make use of this, and this is not bound to the component when using arrow functions. So it's mandatory to use regular functions rather than arrow functions. You might also see

module.exports = {
  data: () => {
    return {
      name: 'Flavio'
    }
  }
}
this is using the CommonJS syntax, and works as well, although it's recommended to use the ES Modules syntax, as that is a JavaScript standard.
Templates Vue.js uses a templating language that's a superset of HTML. Any HTML is a valid Vue.js template, and in addition to that, Vue.js provides two powerful things: interpolation and directives. This is a valid Vue.js template:

<span>Hello!</span>

This template can be put inside a Vue component declared explicitly:

new Vue({
  template: '<span>Hello!</span>'
})

or it can be put into a Single File Component:

<template>
  <span>Hello!</span>
</template>

This first example is very basic. The next step is making it output a piece of the component state, for example, a name. This can be done using interpolation. First, we add some data to our component:

new Vue({
  data: {
    name: 'Flavio'
  },
  template: '<span>Hello!</span>'
})

and then we can add it to our template using the double brackets syntax:

new Vue({
  data: {
    name: 'Flavio'
  },
  template: '<span>Hello {{name}}!</span>'
})

One interesting thing here. See how we just used name instead of this.data.name? This is because Vue.js does some internal binding and lets the template use the property as if it was local. Pretty handy. In a single file component, that would be:

<template>
  <span>Hello {{name}}!</span>
</template>

<script>
export default {
  data() {
    return {
      name: 'Flavio'
    }
  }
}
</script>
I used a regular function in my export. Why not an arrow function? This is because in data we might need to access a method in our component instance, and we can't do that if this is not bound to the component, so using an arrow function is not possible. We could use an arrow function, but then I would need to remember to switch to a regular function in case I use this. Better to play it safe, I think. Now, back to the interpolation. {{ name }} is reminiscent of Mustache / Handlebars template interpolation, but it's just a visual reminder. While in those templating engines they are "dumb", in Vue you can do much more; it's more flexible. You can use any JavaScript expression inside your interpolations, but you're limited to just one expression:

{{ name.reverse() }}
{{ name === 'Flavio' ? 'Flavio' : 'stranger' }}
Vue provides access to some global objects inside templates, including Math and Date, so you can use them: {{ Math.sqrt(16) * Math.random() }}
It's best to avoid adding complex logic to templates, but the fact Vue allows it gives us more flexibility, especially when trying things out. We can first try to put an expression in the template, and then move it to a computed property or method later on. The value included in any interpolation will be updated upon a change of any of the data properties it relies on. You can avoid this reactivity by using the v-once directive. The result is always escaped, so you can't have HTML in the output. If you need to have an HTML snippet you need to use the v-html directive instead.
Styling components using CSS Learn all the options at your disposal to style Vue.js components using CSS Note: this tutorial uses Vue.js Single File Components The simplest option to add CSS to a Vue.js component is to use the style tag, just like in HTML:
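The example presumably used the plain style attribute, something along these lines:

<template>
  <p style="text-decoration: underline">Hi!</p>
</template>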
:style is a shorthand for v-bind:style . I'll use this shorthand throughout this tutorial.
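A sketch of the object binding this refers to, assuming the underline decoration discussed below:

<template>
  <p :style="{'text-decoration': 'underline'}">Hi!</p>
</template>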
Notice how we had to wrap text-decoration in quotes. This is because of the dash, which is not part of a valid JavaScript identifier. You can avoid the quote by using a special camelCase syntax that Vue.js enables, and rewriting it to textDecoration :
<template>
  <p :style="{textDecoration: 'underline'}">Hi!</p>
</template>
Instead of binding an object to style , you can reference a computed property:
Vue components generate plain HTML, so you can choose to add a class to each element, and add a corresponding CSS selector with properties that style it:

<template>
  <p class="underline">Hi!</p>
</template>

<style>
.underline {
  text-decoration: underline;
}
</style>

You can use SCSS like this:

<template>
  <p class="underline">Hi!</p>
</template>

<style lang="scss">
body {
  .underline {
    text-decoration: underline;
  }
}
</style>
You can hardcode the class like in the above example, or you can bind the class to a component property, to make it dynamic, and only apply the class if the data property is true. Notice that in the computed property you need to reference the component data using this.propertyName, while in the template data properties are conveniently available as first-level properties. Any CSS that's not hardcoded like in the first example is going to be processed by Vue, and Vue does the nice job of automatically prefixing the CSS for us, so we can write clean CSS while still targeting older browsers (which still means browsers that Vue supports, so IE9+).
Directives Vue.js uses a templating language that's a superset of HTML. We can use interpolation, and directives. This article explains directives. v-text v-once v-html v-bind
Two-way binding using v-model Using expressions Conditionals Loops Events Show or hide Event directive modifiers Custom directives
We saw in Vue.js templates and interpolations how you can embed data in Vue templates. This article explains the other technique offered by Vue.js in templates: directives. Directives are basically like HTML attributes which are added inside templates. They all start with v-, to indicate that's a Vue special attribute. Let's see each of the Vue directives in detail.
v-text Instead of using interpolation, you can use the v-text directive. It performs the same job:
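A minimal sketch, assuming a name data property like in the previous examples:

<span>{{ name }}</span>
<span v-text="name"></span>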
v-once You know how {{ name }} binds to the name property of the component data. Any time name changes in your component data, Vue is going to update the value represented in the browser.
Unless you use the v-once directive, which is basically like an HTML attribute:

<span v-once>{{ name }}</span>
v-html When you use interpolation to print a data property, the HTML is escaped. This is a great way that Vue uses to automatically protect from XSS attacks. There are cases however where you want to output HTML and make the browser interpret it. You can use the v-html directive:
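A minimal sketch, assuming a data property (here called someHtml) that contains markup:

<span v-html="someHtml"></span>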
v-bind Interpolation only works in the tag content. You can't use it on attributes. Attributes must use v-bind:

<a v-bind:href="url">{{ linkText }}</a>

v-bind is so common that there is a shorthand syntax for it:

<a :href="url">{{ linkText }}</a>
Two-way binding using v-model v-model lets us bind a form input element for example, and make it change the Vue data property when the user changes the content of the field:

<input v-model="message" placeholder="Enter a message">
<p>Message is: {{ message }}</p>

<select v-model="selected">
  <option disabled value="">Choose a fruit</option>
  <option>Apple</option>
  <option>Banana</option>
  <option>Strawberry</option>
</select>
<span>Fruit chosen: {{ selected }}</span>
Using expressions You can use any JavaScript expression inside a directive, for example:

<a v-bind:href="url + '?from=vue'">{{ linkText }}</a>
Any variable used in a directive references the corresponding data property.
Conditionals Inside a directive you can use the ternary operator to perform a conditional check, since that's an expression. There are dedicated directives that allow you to perform more organized conditionals: v-if, v-else and v-else-if.

<p v-if="shouldShowThis">Hey!</p>

shouldShowThis is a boolean value contained in the component's data.
Loops v-for allows you to render a list of items. Use it in combination with v-bind to set the
properties of each item in the list. You can iterate on a simple array of values:
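A minimal sketch of iterating over a simple array of values (items is assumed to be an array in the component data):

<template>
  <ul>
    <li v-for="item in items">{{ item }}</li>
  </ul>
</template>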
You can pass parameters to any event:

<button @click="handleClick('something')">Click me!</button>

export default {
  methods: {
    handleClick: function(value) {
      alert(value)
    }
  }
}

and small bits of JavaScript (a single expression) can be put directly into the template:

<button @click="counter = counter + 1">{{counter}}</button>

export default {
  data: function() {
    return {
      counter: 0
    }
  }
}
click is just one kind of event. A common event is submit, which you can bind using v-on:submit. v-on is so common that there is a shorthand syntax for it, @:

<a v-on:click="handleClick">Click me!</a>
<a @click="handleClick">Click me!</a>
More details on v-on can be found in the Events chapter.
Show or hide You can choose to only show an element in the DOM if a particular property of the Vue instance evaluates to true, using v-show:

<p v-show="isTrue">Something</p>

The element is still inserted in the DOM, but set to display: none if the condition is not satisfied.
Event directive modifiers Vue offers some optional event modifiers you can use in association with v-on , which automatically make the event do something without you explicitly coding it in your event handler. One good example is .prevent , which automatically calls preventDefault() on the event. In this case, the form does not cause the page to be reloaded, which is the default behavior:
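A sketch of such a form (the handleSubmit handler name is just a placeholder):

<form v-on:submit.prevent="handleSubmit">
  <button type="submit">Send</button>
</form>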
Other modifiers include .stop , .capture , .self , .once , .passive and they are described in detail in the official docs.
Custom directives The Vue default directives already let you do a lot of work, but you can always add new, custom directives if you want. Read https://vuejs.org/v2/guide/custom-directive.html if you're interested in learning more.
Events Vue.js allows us to intercept any DOM event by using the v-on directive on an element. This topic is key to making a component interactive What are Vue.js events Access the original event object Event modifiers
What are Vue.js events Vue.js allows us to intercept any DOM event by using the v-on directive on an element. If we want to do something when a click event happens in this element:

<template>
  <a>Click me!</a>
</template>

we add a v-on directive:

<template>
  <a v-on:click="handleClick">Click me!</a>
</template>

Vue also offers a very convenient alternative syntax for this:

<template>
  <a @click="handleClick">Click me!</a>
</template>
You can choose to use the parentheses or not. @click="handleClick" is equivalent to @click="handleClick()" . handleClick is a method attached to the component:
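A minimal sketch of that method (the alert message is just a placeholder):

<script>
export default {
  methods: {
    handleClick: function() {
      alert('clicked')
    }
  }
}
</script>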
Methods are explained more in detail in my Vue Methods tutorial. What you need to know here is that you can pass parameters from events: @click="handleClick(param)" and they will be received inside the method.
Access the original event object In many cases, you will want to perform an action on the event object or look up some property in it. How can you access it? Use the special $event variable:

<a @click="handleClick($event)">Click me!</a>

export default {
  methods: {
    handleClick: function(event) {
      console.log(event)
    }
  }
}

and if you already pass a variable:

<a @click="handleClick('something', $event)">Click me!</a>

export default {
  methods: {
    handleClick: function(text, event) {
      console.log(text)
      console.log(event)
    }
  }
}
From there you could call event.preventDefault() , but there's a better way: event modifiers
Event modifiers Instead of messing with DOM "things" in your methods, tell Vue to handle things for you:
@click.prevent calls event.preventDefault()
@click.stop calls event.stopPropagation()
@click.passive makes use of the passive option of addEventListener
@click.capture uses event capturing instead of event bubbling
@click.self makes sure the click event was not bubbled from a child event, but happened directly on that element
@click.once the event will only be triggered exactly once
All those options can be combined by appending one modifier after the other. For more on propagation, bubbling/capturing see my JavaScript events guide.
Methods A Vue method is a function associated with the Vue instance. Methods are defined inside the `methods` property. Let's see how they work. What are Vue.js methods Pass parameters to Vue.js methods How to access data from a method
What are Vue.js methods A Vue method is a function associated with the Vue instance. Methods are defined inside the methods property:

new Vue({
  methods: {
    handleClick: function() {
      alert('test')
    }
  }
})

or in the case of Single File Components:

export default {
  methods: {
    handleClick: function() {
      alert('test')
    }
  }
}

Methods are especially useful when you need to perform an action and you attach a v-on directive on an element to handle events. Like this one, which calls handleClick when the element is clicked:

<a @click="handleClick">Click me!</a>
Pass parameters to Vue.js methods Methods can accept parameters. In this case, you just pass the parameter in the template:

<a @click="handleClick('something')">Click me!</a>

new Vue({
  methods: {
    handleClick: function(text) {
      alert(text)
    }
  }
})

or in the case of Single File Components:

export default {
  methods: {
    handleClick: function(text) {
      alert(text)
    }
  }
}

How to access data from a method You can access any of the data properties of the Vue component by using this.propertyName:

<template>
  <a @click="handleClick">Click me!</a>
</template>

<script>
export default {
  data() {
    return {
      name: 'Flavio'
    }
  },
  methods: {
    handleClick: function() {
      console.log(this.name)
    }
  }
}
</script>
We don't have to use this.data.name , just this.name . Vue does provide a transparent binding for us. Using this.data.name will raise an error. As you saw before in the events description, methods are closely interlinked to events, because they are used as event handlers. Every time an event occurs, that method is called.
Watchers A Vue watcher allows you to listen to the component data and run a function whenever a particular property changes. A watcher is a special Vue.js feature that allows you to spy on one property of the component state, and run a function when that property value changes. Here's an example. We have a component that shows a name, and allows you to change it by clicking a button:
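A minimal sketch of that template, consistent with the script that follows:

<template>
  <div>
    <p>My name is {{ name }}</p>
    <button @click="changeName">Change my name!</button>
  </div>
</template>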
When the name changes we want to do something, like printing a console log. We can do so by adding to the watch object a property named as the data property we want to watch over:

export default {
  data() {
    return {
      name: 'Flavio'
    }
  },
  methods: {
    changeName: function() {
      this.name = 'Flavius'
    }
  },
  watch: {
    name: function() {
      console.log(this.name)
    }
  }
}
The function assigned to watch.name can optionally accept 2 parameters. The first is the new property value. The second is the old property value:

export default {
  /* ... */
  watch: {
    name: function(newValue, oldValue) {
      console.log(newValue, oldValue)
    }
  }
}
Watchers cannot be looked up from a template as you can with computed properties.
Computed Properties Learn how you can use Vue Computed Properties to cache calculations What is a Computed Property An example of a computed property Computed properties vs methods
What is a Computed Property In Vue.js you can output any data value using double curly braces:

<template>
  <p>{{ count }}</p>
</template>

<script>
export default {
  data() {
    return {
      count: 1
    }
  }
}
</script>

This property can host some small computations, for example:

{{ count * 10 }}

but you're limited to a single JavaScript expression. In addition to this technical limitation, you also need to consider that templates should only be concerned with displaying data to the user, not performing logic computations. To do something more than a single expression, and to have more declarative templates, you use a computed property. Computed properties are defined in the computed property of the Vue component:

export default {
  computed: {

  }
}
An example of a computed property Here's some example code that uses a computed property count to calculate the output; a sketch is shown below. Notice: 1. I didn't have to call count() . Vue.js automatically invokes the function 2. I used a regular function (not an arrow function) to define the count computed property because I need to be able to access the component instance through this .
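A minimal sketch of that example (the items data property and the markup are illustrative):

<template>
  <p>{{ count }}</p>
</template>

<script>
export default {
  data() {
    return {
      items: [1, 2, 3]
    }
  },
  computed: {
    count: function() {
      return 'The count is ' + this.items.length * 10
    }
  }
}
</script>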
Computed properties vs methods If you already know Vue methods, you may wonder what's the difference. First, methods must be called, not just referenced, so you'd need to call count() instead of referencing count if you have a count method:
export default {
  methods: {
    count: function() {
      return 'The count is ' + this.items.length * 10
    }
  }
}
But the main difference is that computed properties are cached. The result of the count computed property is internally cached until the items data property changes. Important: computed properties are only updated when a reactive source updates. Regular JavaScript methods are not reactive, so a common example is to use Date.now() :
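A sketch of that example, using a computed property (the markup is illustrative):

<template>
  <p>{{ now }}</p>
</template>

<script>
export default {
  computed: {
    now: function() {
      return Date.now()
    }
  }
}
</script>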
It will render once, and then it will not be updated even when the component re-renders. It's only updated on a page refresh, when the Vue component is destroyed and reinitialized. In this case a method is better suited for your needs.
Methods vs Watchers vs Computed Properties Vue.js provides us methods, watchers and computed properties. When to use one vs the other?
When to use methods
To react to some event happening in the DOM
To call a function when something happens in your component.
You can call a method from computed properties or watchers.
When to use computed properties
You need to compose new data from existing data sources
You have a variable you use in your template that's built from one or more data properties
You want to reduce a complicated, nested property name to a more readable and easy to use one, yet update it when the original property changes
You need to reference a value from the template. In this case, creating a computed property is the best thing because it's cached.
You need to listen to changes of more than one data property
When to use watchers
You want to listen when a data property changes, and perform some action
You want to listen to a prop value change
You only need to listen to one specific property (you can't watch multiple properties at the same time)
You want to watch a data property until it reaches some specific value and then do something
Props Props are used to pass down state to child components. Learn all about them Define a prop inside the component Accept multiple props Set the prop type Set a prop to be mandatory Set the default value of a prop Passing props to the component
Define a prop inside the component Props are the way components can accept data from components that include them (parent components). When a component expects one or more props, it must define them in its props property:

Vue.component('user-name', {
  props: ['name'],
  template: 'Hi {{ name }}'
})
or, in a Vue Single File Component:
{{ name }}
export default { props: ['name'] }
Accept multiple props You can have multiple props by simply appending them to the array:

Vue.component('user-name', {
  props: ['firstName', 'lastName'],
  template: 'Hi {{ firstName }} {{ lastName }}'
})
Set the prop type You can specify the type of a prop very simply by using an object instead of an array, using the name of the property as the key of each property, and the type as the value:

Vue.component('user-name', {
  props: {
    firstName: String,
    lastName: String
  },
  template: 'Hi {{ firstName }} {{ lastName }}'
})
The valid types you can use are:
String
Number
Boolean
Array
Object
Date
Function
Symbol
When a type mismatches, Vue alerts (in development mode) in the console with a warning. Prop types can be more articulated. You can allow multiple different value types:

props: {
  firstName: [String, Number]
}
Set a prop to be mandatory You can require a prop to be mandatory:

props: {
  firstName: {
    type: String,
    required: true
  }
}
Set the default value of a prop You can specify a default value: props: { firstName: { type: String, default: 'Unknown person' } }
default can also be a function that returns an appropriate value, rather than being the actual
value. You can even build a custom validator, which is cool for complex data: props: { name: { validator: name => { return name === 'Flavio' //only allow "Flavios" } } }
Passing props to the component You pass a prop to a component using an attribute in the template if what you pass is a static value. If it's a data property, you bind the attribute to that property instead. For example, with a color data property:

export default {
  //...
  data: function() {
    return {
      color: 'white'
    }
  },
  //...
}

You can also use a ternary operator inside the prop value to check a truthy condition and pass a value that depends on it.
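A sketch of the three forms described above, assuming a component that accepts a color prop (the component name and the isDark condition are illustrative):

<my-component color="white" />
<my-component :color="color" />
<my-component :color="isDark ? 'black' : 'white'" />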
Slots Slots help you position content in a component, and allow parent components to arrange it. A component can choose to define its content entirely, like in this case:

Vue.component('user-name', {
  props: ['name'],
  template: 'Hi {{ name }}'
})
or it can also let the parent component inject any kind of content into it, by using slots. What's a slot? You define it by putting a slot tag in a component template:

Vue.component('user-information', {
  template: '<slot></slot>'
})
When using this component, any content added between the opening and closing tag will be added inside the slot placeholder:
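A sketch of that usage, assuming the user-information component defined above:

<user-information>
  Hi!
</user-information>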
If you put any content inside the slot tags in the component template, that serves as the default content in case nothing is passed in. A complicated component layout might require a better way to organize content. Enter named slots. With a named slot you can assign parts of a slot to a specific position in your component template layout, and you add a slot attribute to any tag to assign content to that slot. Anything outside any template tag is added to the main slot . For convenience I use a Page single file component in this example:
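A sketch reconstructing the idea of that Page component: the template defines a named header slot and a main slot, and the parent assigns content to them with the slot attribute (all markup here is illustrative).

The Page component template:

<div>
  <header>
    <slot name="header"></slot>
  </header>
  <main>
    <slot></slot>
  </main>
</div>

Using the component:

<page>
  <nav slot="header">
    <a href="/">Home</a>
    <a href="/contact">Contact</a>
  </nav>
  <h1>Page title</h1>
  <p>Page content</p>
</page>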
Filters Filters are the way Vue.js lets us apply formatting and transformations to a value that's used in a template interpolation. Filters are a functionality provided by Vue components that let you apply formatting and transformations to any part of your template dynamic data. They don't change a component data or anything, but they only affect the output. Say you are printing a name:
What if you want to check that name actually contains a value, and if not print 'there', so that our template will print "Hi there!"? Enter filters:
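A sketch of that template and the fallback filter (assuming a name data property):

<template>
  <p>Hi {{ name | fallback }}!</p>
</template>

<script>
export default {
  data() {
    return {
      name: ''
    }
  },
  filters: {
    fallback: function(name) {
      return name ? name : 'there'
    }
  }
}
</script>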
Notice the syntax to apply a filter, which is | filterName . If you're familiar with Unix, that's the Unix pipe operator, which is used to pass the output of an operation as an input to the next one. The filters property of the component is an object. A single filter is a function that accepts a value and returns another value. The returned value is the one that's actually printed in the Vue.js template. The filter, of course, has access to the component data and methods. What's a good use case for filters? transforming a string, for example, capitalizing or making it lowercase formatting a price Above you saw a simple example of a filter: {{ name | fallback }} . Filters can be chained, by repeating the pipe syntax: {{ name | fallback | capitalize }}
This first applies the fallback filter, then the capitalize filter (which we didn't define, but try making one!). Advanced filters can also accept parameters, using the normal function parameters syntax:
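For example, a hypothetical prepend filter that accepts a prefix in addition to the value:

<template>
  <p>{{ name | prepend('Hi,') }}</p>
</template>

<script>
export default {
  //...
  filters: {
    prepend: (name, prefix) => {
      return `${prefix} ${name}`
    }
  }
}
</script>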
If you pass parameters to a filter, the first one passed to the filter function is always the item in the template interpolation ( name in this case), followed by the explicit parameters you passed.
You can use multiple parameters by separating them with a comma. Notice I used an arrow function. We generally avoid arrow functions in methods and computed properties because they almost always need to reference this to access the component data, but in this case the filter does not need to access this : it receives all the data it needs through its parameters, so we can safely use the simpler arrow function syntax. This package has a lot of pre-made filters for you to use directly in templates, which include capitalize , uppercase , lowercase , placeholder , truncate , currency , pluralize and more.
Communication among components How you can make components communicate in a Vue.js application. Props Events to communicate from children to parent Using an Event Bus to communicate between any component Alternatives
Components in Vue can communicate in various ways.
Props The first way is using props. Parents "pass down" data by adding arguments to the component declaration: import Car from './components/Car' export default { name: 'App', components: { Car } }
Props are one-way: from parent to child. Any time the parent changes the prop, the new value is sent to the child and rerendered. The reverse is not true, and you should never mutate a prop inside the child component.
Using Events to communicate from children to parent
Events allow you to communicate from the children up to the parent: export default { name: 'Car', methods: { handleClick: function() { this.$emit('clickedSomething') } } }
The parent can intercept this using the v-on directive when including the component in its template: export default { name: 'App', methods: { handleClickInParent: function() { //... } } }
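In the parent's template, that might look like this (the markup is a sketch):

<template>
  <div id="app">
    <Car v-on:clickedSomething="handleClickInParent" />
    <!-- or, using the shorthand -->
    <Car @clickedSomething="handleClickInParent" />
  </div>
</template>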
You can pass parameters of course: export default { name: 'Car', methods: { handleClick: function() { this.$emit('clickedSomething', param1, param2) } } }
Using an Event Bus to communicate between any component Using events you're not limited to child-parent relationships. You can use the so-called Event Bus. Above we used this.$emit to emit an event on the component instance. What we can do instead is to emit the event on a more generally accessible component. this.$root , the root component, is commonly used for this.
You can also create a Vue component dedicated to this job, and import it where you need. export default { name: 'Car', methods: { handleClick: function() { this.$root.$emit('clickedSomething') } } }
Any other component can listen for this event. You can do so in the mounted callback:

export default {
  mounted() {
    this.$root.$on('clickedSomething', () => {
      //...
    })
  }
}
Alternatives This is what Vue provides out of the box. When you outgrow this, you can look into Vuex or other 3rd party libraries.
Vuex Vuex is the official state management library for Vue.js. In this tutorial I'm going to explain its basic usage. Introduction to Vuex Why should you use Vuex Let's start Create the Vuex store An use case for the store Introducing the new components we need Adding those components to the app Add the state to the store Add a mutation Add a getter to reference a state property Adding the Vuex store to the app Update the state on a user action using commit Use the getter to print the state value Wrapping up
Introduction to Vuex Vuex is the official state management library for Vue.js. Its job is to share data across the components of your application. Components in Vue.js out of the box can communicate using props, to pass state down to child components from a parent events, to change the state of a parent component from a child, or using the root component as an event bus Sometimes things get more complex than what these simple options allow. In this case, a good option is to centralize the state in a single store. This is what Vuex does.
Why should you use Vuex Vuex is not the only state management option you can use in Vue (you can use Redux too), but its main advantage is that it's official, and its integration with Vue.js is what makes it shine.
With React you have the trouble of having to choose one of the many libraries available, as the ecosystem is huge and has no de-facto standard. Lately Redux was the most popular choice, with MobX following up in terms of popularity. With Vue I'd go as far as to say that you won't need to look around for anything other than Vuex, especially when starting out. Vuex borrowed many of its ideas from the React ecosystem, as this is the Flux pattern popularized by Redux. If you know Flux or Redux already, Vuex will be very familiar. If you don't, no problem - I'll explain every concept from the ground up. Components in a Vue application can have their own state. For example, an input box will store the data entered into it locally. This is perfectly fine, and components can have local state even when using Vuex. You know that you need something like Vuex when you start doing a lot of work to pass a piece of state around. In this case Vuex provides a central repository store for the state, and you mutate the state by asking the store to do that. Every component that depends on a particular piece of the state will access it using a getter on the store, which makes sure it's updated as soon as that thing changes. Using Vuex will introduce some complexity into the application, as things need to be set up in a certain way to work correctly, but if this helps solve the unorganized props passing and event system that might grow into a spaghetti mess if too complicated, then it's a good choice.
Let's start In this example I'm starting from a Vue CLI application. Vuex can also be used by directly loading it into a script tag, but since Vuex is more in tune with bigger applications, it's much more likely you will use it on a more structured application, like the ones you can start up quickly with the Vue CLI. The examples I use will be put on CodeSandbox, which is a great service that has a Vue CLI sample ready to go at https://codesandbox.io/s/vue. I recommend using it to play around.
Once you're there, click the Add dependency button, enter "vuex" and click it. Now Vuex will be listed in the project dependencies. To install Vuex locally you can simply run npm install vuex or yarn add vuex inside the project folder.
Create the Vuex store Now we are ready to create our Vuex store. This file can be put anywhere. It's generally suggested to put it in the src/store/store.js file, so we'll do that. In this file we initialize Vuex and we tell Vue to use it: import Vue from 'vue' import Vuex from 'vuex' Vue.use(Vuex) export const store = new Vuex.Store({})
We export a Vuex store object, which we create using the Vuex.Store() API.
A use case for the store Now that we have a skeleton in place, let's come up with an idea for a good use case for Vuex, so I can introduce its concepts. For example, I have 2 sibling components, one with an input field, and one that prints that input field content. When the input field is changed, I want to also change the content in that second component. Very simple but this will do the job for us.
Introducing the new components we need I delete the HelloWorld component and add a Form component and a Display component.
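Here's a sketch of what those two Single File Components might look like at this stage (the markup is illustrative).

Form.vue:

<template>
  <div>
    <label>Favorite ice cream flavor?</label>
    <input>
  </div>
</template>

Display.vue:

<template>
  <p>You chose ???</p>
</template>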
Adding those components to the app We add them to the App.vue code instead of the HelloWorld component: import Form from './components/Form' import Display from './components/Display' export default { name: 'App', components: { Form, Display } }
Add the state to the store So with this in place, we go back to the store.js file and we add a property to the store called state , which is an object, that contains the flavor property. That's an empty string initially.
import Vue from 'vue' import Vuex from 'vuex' Vue.use(Vuex) export const store = new Vuex.Store({ state: { flavor: '' } })
We'll update it when the user types into the input field.
Add a mutation The state cannot be manipulated except by using mutations. We set up one mutation which will be used inside the Form component to notify the store that the state should change. import Vue from 'vue' import Vuex from 'vuex' Vue.use(Vuex) export const store = new Vuex.Store({ state: { flavor: '' }, mutations: { change(state, flavor) { state.flavor = flavor } } })
Add a getter to reference a state property With that set, we need to add a way to look at the state. We do so using getters. We set up a getter for the flavor property: import Vue from 'vue' import Vuex from 'vuex' Vue.use(Vuex) export const store = new Vuex.Store({ state: { flavor: '' }, mutations: { change(state, flavor) { state.flavor = flavor } }, getters: { flavor: state => state.flavor } })
Notice how getters is an object. flavor is a property of this object, which accepts the state as the parameter, and returns the flavor property of the state.
Adding the Vuex store to the app Now the store is ready to be used. We go back to our application code, and in the main.js file, we need to import the state and make it available in our Vue app. We add import { store } from './store/store'
and we add it to the Vue application: new Vue({ el: '#app', store, components: { App }, template: '' })
Once we add this, since this is the main Vue component, the store variable inside every Vue component will point to the Vuex store.
Update the state on a user action using commit Let's update the state when the user types something. We do so by using the store.commit() API. But first, let's create a method that is invoked when the input content changes. We use @input rather than @change because the latter is only triggered when the focus is moved
away from the input box, while @input is called on every keypress. In the Form component:

export default {
  methods: {
    changed: function(event) {
      alert(event.target.value)
    }
  }
}
Now that we have the value of the flavor, we use the Vuex API: export default { methods: { changed: function(event) { this.$store.commit('change', event.target.value) } } }
see how we reference the store using this.$store ? This is thanks to the inclusion of the store object in the main Vue component initialization. The commit() method accepts a mutation name (we used change in the Vuex store) and a payload, which will be passed to the mutation as the second parameter of its callback function.
Use the getter to print the state value Now we need to reference the getter of this value in the Display template, by using $store.getters.flavor . this can be removed because we're in the template, and this is
implicit.
You chose {{ $store.getters.flavor }}
Wrapping up That's it for an introduction to Vuex! The full, working source code is available at https://codesandbox.io/s/zq7k7nkzkm There are still many concepts missing in this puzzle: actions modules helpers plugins
but you have the basics to go and read about them in the official docs. Happy coding!
Vue Router Discover one of the essential pieces of a Vue application: the router
Introduction In a JavaScript web application, a router is the part that syncs the currently displayed view with the browser address bar content. In other words, it's the part that makes the URL change when you click something in the page, and helps to show the correct view when you hit a specific URL. Traditionally the Web is built around URLs. When you hit a certain URL, a specific page is displayed. With the introduction of applications that run inside the browser and change what the user sees, many applications broke this interaction, and you had to manually update the URL with the browser's History API. You need a router when you need to sync URLs to views in your app. It's a very common need, and all the major modern frameworks now allow you to manage routing. The Vue Router library is the way to go for Vue.js applications. Vue does not enforce the use of this library. You can use whatever generic routing library you want, or also create your own History API integration, but the benefit of using Vue Router is that it's official. This means it's maintained by the same people who maintain Vue, so you get a more consistent integration in the framework, and the guarantee that it's always going to be compatible in the future, no matter what.
Installation Vue Router is available via npm with the package named vue-router . If you use Vue via a script tag, you can include Vue Router using
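a typical include looks like this (unpkg serves the latest published version of the package):

<script src="https://unpkg.com/vue-router"></script>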
unpkg.com is a very handy tool that makes every npm package available in the browser with a simple link
If you use the Vue CLI, install it using npm install vue-router
Once you install vue-router and make it available either using a script tag or via Vue CLI, you can now import it in your app. You import it after vue , and you call Vue.use(VueRouter) to install it inside the app: import Vue from 'vue' import VueRouter from 'vue-router' Vue.use(VueRouter)
After you call Vue.use() passing the router object, in any component of the app you have access to these objects: this.$router is the router object this.$route is the current route object
The router object The router object, accessed using this.$router from any component when the Vue Router is installed in the root Vue component, offers many nice features. We can make the app navigate to a new route using this.$router.push() this.$router.replace() this.$router.go()
which resemble the pushState , replaceState and go methods of the History API. push() is used to go to a new route, adding a new item to the browser history. replace() is
the same, except it does not push a new state to the history. Usage samples: this.$router.push('about') //named route, see later this.$router.push({ path: 'about' }) this.$router.push({ path: 'post', query: { post_slug: 'hello-world' } }) //using query par ameters (post?post_slug=hello-world) this.$router.replace({ path: 'about' })
go() goes back and forth, accepting a number that can be positive or negative to go back in
the history: this.$router.go(-1) //go back 1 step this.$router.go(1) //go forward 1 step
Defining the routes I'm using a Vue Single File Component in this example. In the template I use a nav tag that has 3 router-link components, which have a label (Home/Login/About) and a URL assigned through the to attribute. The router-view component is where the Vue Router will put the content that matches the current URL.
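A sketch of that template (the exact markup is illustrative):

<template>
  <div id="app">
    <nav>
      <router-link to="/">Home</router-link>
      <router-link to="/login">Login</router-link>
      <router-link to="/about">About</router-link>
    </nav>
    <router-view></router-view>
  </div>
</template>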
A router-link component renders an a tag by default (you can change that). Every time the route changes, either by clicking a link or by changing the URL, a router-link-active class is added to the element that refers to the active route, allowing you to style it. In the JavaScript part we first include and install the router, then we define 3 route components. We pass them to the initialization of the router object, and we pass this object to the Vue root instance. Here's the code:

import Vue from 'vue'
import VueRouter from 'vue-router'

Vue.use(VueRouter)

const Home = {
  template: 'Home'
}
const Login = {
  template: 'Login'
}
const About = {
  template: 'About'
}

const router = new VueRouter({
  routes: [
    { path: '/', component: Home },
    { path: '/login', component: Login },
    { path: '/about', component: About }
  ]
})

new Vue({
  router
}).$mount('#app')
Usually, in a Vue app you instantiate and mount the root app using: new Vue({ render: h => h(App) }).$mount('#app')
When using the Vue Router, you don't pass a render property but instead, you use router . The syntax used in the above example: new Vue({ router }).$mount('#app')
is a shorthand for new Vue({ router: router }).$mount('#app')
See in the example, we pass a routes array to the VueRouter constructor. Each route in this array has a path and component params. If you pass a name param too, you have a named route.
Using named routes to pass parameters to the router push and replace methods Remember how we used the Router object to push a new state before? this.$router.push({ path: 'about' })
With a named route we can pass parameters to the new route: this.$router.push({ name: 'post', params: { post_slug: 'hello-world' } })
the same goes for replace() : this.$router.replace({ name: 'post', params: { post_slug: 'hello-world' } })
What happens when a user clicks a router-link The application will render the route component that matches the URL passed to the link. The new route component that handles the URL is instantiated and its guards called, and the old route component will be destroyed.
Route guards Since we mentioned guards, let's introduce them. You can think of them as life cycle hooks or middleware: they are functions called at specific times during the execution of the application. You can jump in and alter the execution of a route, redirecting or simply canceling the request. You can have global guards by adding a callback to the beforeEach() and afterEach() properties of the router.
beforeEach() is called before the navigation is confirmed
beforeResolve() is called when beforeEach is executed and all the components beforeRouteEnter and beforeRouteUpdate guards are called, but before the navigation is confirmed. The final check, if you want
afterEach() is called after the navigation is confirmed
What does "the navigation is confirmed" mean? We'll see it in a second. In the meantime think of it as "the app can go to that route". The usage is: this.$router.beforeEach((to, from, next) => { // ... })
this.$router.afterEach((to, from) => { // ... })
to and from represent the route objects that we go to and from. beforeEach has an
additional parameter next which, if called with false as the parameter, will block the navigation and cause it to be unconfirmed. Like in Node middleware, if you're familiar, next() should always be called, otherwise execution will get stuck.
Single route components also have guards:
beforeRouteEnter(to, from, next) is called before the current route is confirmed
beforeRouteUpdate(to, from, next) is called when the route changes but the component that manages it is still the same (with dynamic routing, see next)
beforeRouteLeave(to, from, next) is called when we move away from here
We mentioned navigation. To determine if the navigation to a route is confirmed, Vue Router performs some checks:
it calls the beforeRouteLeave guard in the current component(s)
it calls the router beforeEach() guard
it calls the beforeRouteUpdate() guard in any component that needs to be reused, if any exist
it calls the beforeEnter() guard on the route object (I didn't mention it but you can look here)
it calls the beforeRouteEnter() guard in the component that we should enter into
it calls the router beforeResolve() guard
if all was fine, the navigation is confirmed!
it calls the router afterEach() guard
You can use the route-specific guards ( beforeRouteEnter and beforeRouteUpdate in case of dynamic routing) as life cycle hooks, so you can start data fetching requests for example.
Dynamic routing
The example above shows a different view based on the URL, handling the / , /login and /about routes.
A very common need is to handle dynamic routes, like having all posts under /post/ , each with the slug name: /post/first /post/another-post /post/hello-world
You can achieve this using a dynamic segment. Those were static segments: const router = new VueRouter({ routes: [ { path: '/', component: Home }, { path: '/login', component: Login }, { path: '/about', component: About } ] })
we add in a dynamic segment to handle blog posts: const router = new VueRouter({ routes: [ { path: '/', component: Home }, { path: '/post/:post_slug', component: Post }, { path: '/login', component: Login }, { path: '/about', component: About } ] })
Notice the :post_slug syntax. This means that you can use any string, and that will be mapped to the post_slug placeholder. You're not limited to this kind of syntax. Vue relies on this library to parse dynamic routes, and you can go wild with Regular Expressions. Now inside the Post route component we can reference the route using $route , and the post slug using $route.params.post_slug : const Post = { template: 'Post: {{ $route.params.post_slug }}' }
We can use this parameter to load the contents from the backend.
You can have as many dynamic segments as you want, in the same URL: /post/:author/:post_slug
Remember when earlier we talked about what happens when a user navigates to a new route? In the case of dynamic routes, what happens is a little different. To be more efficient, instead of destroying the current route component and re-instantiating it, Vue reuses the current instance. When this happens, Vue calls the beforeRouteUpdate life cycle event. There you can perform any operation you need:

const Post = {
  template: 'Post: {{ $route.params.post_slug }}',
  beforeRouteUpdate(to, from, next) {
    console.log(`Updating slug from ${from} to ${to}`)
    next() //make sure you always call next()
  }
}
Using props In the examples, I used $route.params.* to access the route data. A component should not be so tightly coupled with the router, and instead, we can use props: const Post = { props: ['post_slug'], template: 'Post: {{ post_slug }}' } const router = new VueRouter({ routes: [ { path: '/post/:post_slug', component: Post, props: true } ] })
Notice the props: true passed to the route object to enable this functionality.
Nested routes Before I mentioned that you can have as many dynamic segments as you want, in the same URL, like:
/post/:author/:post_slug
So, say we have an Author component taking care of the first dynamic segment:

import Vue from 'vue'
import VueRouter from 'vue-router'

Vue.use(VueRouter)

const Author = {
  template: 'Author: {{ $route.params.author}}'
}

const router = new VueRouter({
  routes: [
    { path: '/post/:author', component: Author }
  ]
})

new Vue({
  router
}).$mount('#app')
We can insert a second router-view component instance inside the Author template:

const Author = {
  template: '<div>Author: {{ $route.params.author}} <router-view></router-view></div>'
}
we add the Post component: const Post = { template: 'Post: {{ $route.params.post_slug }}' }
and then we'll inject the inner dynamic route in the VueRouter configuration:

const router = new VueRouter({
  routes: [{
    path: '/post/:author',
    component: Author,
    children: [{
      path: ':post_slug',
      component: Post
    }]
  }]
})
Node.js
Introduction to Node This post is a getting started guide to Node.js, the server-side JavaScript runtime environment. Node.js is built on top of the Google Chrome V8 JavaScript engine, and it's mainly used to create web servers - but it's not limited to that
Overview The best features of Node.js Fast Simple JavaScript V8 Asynchronous platform A huge number of libraries An example Node.js application Node.js frameworks and tools
Overview Node.js is a runtime environment for JavaScript that runs on the server. Node.js is open source, cross-platform, and since its introduction in 2009 it has become hugely popular and now plays a significant role in the web development scene. If GitHub stars are an indicator of popularity, having 46000+ stars means being very popular.
Node.js is built on top of the Google Chrome V8 JavaScript engine, and it's mainly used to create web servers - but it's not limited to that.
The best features of Node.js Fast
One of the main selling points of Node.js is speed. JavaScript code running on Node.js (depending on the benchmark) can be twice as fast as code written in compiled languages like C or Java, and orders of magnitude faster than interpreted languages like Python or Ruby, because of its non-blocking paradigm.
Simple Node.js is simple. Extremely simple, actually.
JavaScript Node.js runs JavaScript code. This means that millions of frontend developers that already use JavaScript in the browser are able to run the server-side code and frontend-side code using the same programming language without the need to learn a completely different tool. The paradigms are all the same, and in Node.js the new ECMAScript standards can be used first, as you don't have to wait for all your users to update their browsers - you decide which ECMAScript version to use by changing the Node.js version.
V8
Running on the Google V8 JavaScript engine, which is Open Source, Node.js is able to leverage the work of thousands of engineers that made (and will continue to make) the Chrome JavaScript runtime blazing fast.
Asynchronous platform
In traditional programming languages (C, Java, Python, PHP) all instructions are blocking by default unless you explicitly "opt in" to perform asynchronous operations. If you perform a network request to read some JSON, the execution of that particular thread is blocked until the response is ready. JavaScript allows us to create asynchronous and non-blocking code in a very simple way, by using a single thread, callback functions and event-driven programming. Every time an expensive operation occurs, we pass a callback function that will be called once we can continue with the processing. We're not waiting for that to finish before going on with the rest of the program. This mechanism derives from the browser. We can't wait until something loads from an AJAX request before being able to intercept click events on the page. It all must happen in real time to provide a good experience to the user. If you've created an onclick handler for a web page you've already used asynchronous programming techniques with event listeners. This allows Node.js to handle thousands of concurrent connections with a single server without introducing the burden of managing thread concurrency, which would be a major source of bugs. Node provides non-blocking I/O primitives, and generally, libraries in Node.js are written using non-blocking paradigms, making blocking behavior the exception rather than the norm.
When Node.js needs to perform an I/O operation, like reading from the network or accessing a database or the filesystem, instead of blocking the thread Node.js will simply resume the operations when the response comes back, instead of wasting CPU cycles waiting.
A huge number of libraries npm with its simple structure helped the ecosystem of Node.js proliferate, and now the npm registry hosts almost 500,000 open source packages you can freely use.
An example Node.js application The most common example Hello World of Node.js is a web server: const http = require('http') const hostname = '127.0.0.1' const port = 3000 const server = http.createServer((req, res) => { res.statusCode = 200 res.setHeader('Content-Type', 'text/plain') res.end('Hello World\n') }) server.listen(port, hostname, () => { console.log(`Server running at http://${hostname}:${port}/`) })
To run this snippet, save it as a server.js file and run node server.js in your terminal. This code first includes the Node.js http module. Node.js has an amazing standard library, including a first-class support for networking. The createServer() method of http creates a new HTTP server and returns it. The server is set to listen on the specified port and hostname. When the server is ready, the callback function is called, in this case informing us that the server is running. Whenever a new request is received, the request event is called, providing two objects: a request (an http.IncomingMessage object) and a response (an http.ServerResponse object). Those 2 objects are essential to handle the HTTP call. The first provides the request details. In this simple example, this is not used, but you could access the request headers and request data.
The second is used to return data to the caller. In this case with res.statusCode = 200
we set the statusCode property to 200, to indicate a successful response. We set the Content-Type header: res.setHeader('Content-Type', 'text/plain')
and we close the response, adding the content as an argument to end() : res.end('Hello World\n')
Node.js frameworks and tools Node.js is a low-level platform, and to make things easier and more interesting for developers thousands of libraries were built upon Node.js. Many of those established over time as popular options. Here is a non-comprehensive list of the ones I consider very relevant and worth learning: Express, one of the most simple yet powerful ways to create a web server. Its minimalist approach, unopinionated, focused on the core features of a server, is key to its success. Meteor, an incredibly powerful full-stack framework, powering you with an isomorphic approach to building apps with JavaScript, sharing code on the client and the server. Once an off-the-shelf tool that provided everything, now integrates with frontend libs React, Vue and Angular. Can be used to create mobile apps as well. koa, built by the same team behind Express, aims to be even simpler and smaller, building on top of years of knowledge. The new project born out of the need to create incompatible changes without disrupting the existing community. Next.js, a framework to render server-side rendered React applications. Micro, a very lightweight server to create asynchronous HTTP microservices. Socket.io, a real-time communication engine to build network applications.
A brief history of Node A look back on the history of Node.js from 2009 to today Believe it or not, Node.js is just 9 years old. In comparison, JavaScript is 23 years old and the web as we know it (after the introduction of Mosaic) is 25 years old. 9 years is such a little amount of time for a technology, but Node.js seems to have been around forever. I've had the pleasure to work with Node since the early days when it was just 2 years old, and despite the little information available, you could already feel it was a huge thing. In this post, I want to draw the big picture of Node in its history, to put things in perspective. A little bit of history 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018
A little bit of history JavaScript is a programming language that was created at Netscape as a scripting tool to manipulate web pages inside their browser, Netscape Navigator. Part of the business model of Netscape was to sell Web Servers, which included an environment called Netscape LiveWire, which could create dynamic pages using server-side JavaScript. So the idea of server-side JavaScript was not introduced by Node.js; it's as old as JavaScript itself - but at the time it was not successful. One key factor that led to the rise of Node.js was timing. A few years earlier, JavaScript had started being considered a serious language, thanks to the "Web 2.0" applications that showed the world what a modern experience on the web could be like (think Google Maps or
GMail). The performance bar of JavaScript engines was raised considerably thanks to the browser competition battle, which is still going strong. Development teams behind each major browser work hard every day to give us better performance, which is a huge win for JavaScript as a platform. V8, the engine that Node.js uses under the hood, is one of those, and in particular it's the Chrome JS engine. But of course, Node.js is not popular just because of pure luck or timing. It introduced much innovative thinking on how to program in JavaScript on the server.
2009 Node.js is born The first form of npm is created
2010 Express is born Socket.io is born
2011 npm hits 1.0 Big companies start adopting Node: LinkedIn, Uber Hapi is born
2012 Adoption continues very rapidly
2013 First big blogging platform using Node: Ghost Koa is born
2014 The Big Fork: io.js is a major fork of Node.js, with the goal of introducing ES6 support and moving faster
2015
The Node.js Foundation is born IO.js is merged back into Node.js npm introduces private modules Node 4 (no 1, 2, 3 versions were previously released)
2016 The leftpad incident Yarn is born Node 6
2017 npm focuses more on security Node 8 HTTP/2 V8 introduces Node in its testing suite, officially making Node a target for the JS engine, in addition to Chrome 3 billion npm downloads every week
2018 Node 10 ES modules .mjs experimental support
How to install Node How you can install Node.js on your system: a package manager, the official website installer or nvm. Node.js can be installed in different ways. This post highlights the most common and convenient ones. Official packages for all the major platforms are available at https://nodejs.org/en/download/. One very convenient way to install Node.js is through a package manager. In this case, every operating system has its own. On macOS, Homebrew is the de-facto standard, and - once installed - allows you to install Node.js very easily, by running this command in the CLI: brew install node
Other package managers for Linux and Windows are listed in https://nodejs.org/en/download/package-manager/ nvm is a popular way to run Node. It allows you to easily switch the Node version, and install
new versions to try and easily rollback if something breaks, for example. It is also very useful to test your code with old Node versions. See https://github.com/creationix/nvm for more information about this option. My suggestion is to use the official installer if you are just starting out and you don't use Homebrew already, otherwise, Homebrew is my favorite solution. In any case, when Node is installed you'll have access to the node executable program in the command line.
How much JavaScript do you need to know to use Node? If you are just starting out with JavaScript, how deeply do you need to know the language? As a beginner, it's hard to get to a point where you are confident enough in your programming abilities. While learning to code, you might also be confused about where JavaScript ends and where Node.js begins, and vice versa. I would recommend you to have a good grasp of the main JavaScript concepts before diving into Node.js: Lexical Structure Expressions Types Variables Functions this Arrow Functions Loops Loops and Scope Arrays Template Literals Semicolons Strict Mode ECMAScript 6, 2016, 2017 With those concepts in mind, you are well on your way to becoming a proficient JavaScript developer, in both the browser and in Node.js. The following concepts are also key to understanding asynchronous programming, which is one fundamental part of Node.js: Asynchronous programming and callbacks Timers Promises Async and Await Closures The Event Loop
Luckily I wrote a free ebook that explains all those topics, and it's called JavaScript Fundamentals. It's the most compact resource you'll find to learn all of this. You can find the ebook at the bottom of this page: https://flaviocopes.com/javascript/.
Differences between Node and the Browser How writing JavaScript applications in Node.js differs from programming for the Web inside the browser. Both the browser and Node use JavaScript as their programming language. Building apps that run in the browser is a completely different thing than building a Node.js application. Despite the fact that it's always JavaScript, there are some key differences that make the experience radically different. For a frontend developer who extensively uses JavaScript, Node apps bring with them a huge advantage: the comfort of programming everything, the frontend and the backend, in a single language. You have a huge opportunity because we know how hard it is to fully, deeply learn a programming language, and by using the same language to perform all your work on the web both on the client and on the server, you're in a unique position of advantage. What changes is the ecosystem. In the browser, most of the time what you are doing is interacting with the DOM, or other Web Platform APIs like Cookies. Those do not exist in Node, of course. You don't have the document , window and all the other objects that are provided by the browser.
And in the browser, we don't have all the nice APIs that Node.js provides through its modules, like the filesystem access functionality. Another big difference is that in Node.js you control the environment. Unless you are building an open source application that anyone can deploy anywhere, you know which version of Node you will run the application on. Compared to the browser environment, where you don't get the luxury to choose what browser your visitors will use, this is very convenient. This means that you can write all the modern ES6-7-8-9 JavaScript that your Node version supports. Since JavaScript moves so fast, but browsers can be a bit slow and users a bit slow to upgrade, sometimes on the web, you are stuck to use older JavaScript / ECMAScript releases. You can use Babel to transform your code to be ES5-compatible before shipping it to the browser, but in Node, you won't need that.
Another difference is that Node uses the CommonJS module system, while in the browser we are starting to see the ES Modules standard being implemented. In practice, this means that for the time being you use require() in Node and import in the browser.
Run Node.js scripts from the command line How to run any Node.js script from the CLI The usual way to run a Node program is to call the node globally available command (once you install Node) and pass the name of the file you want to execute. If your main Node application file is in app.js , you can call it by typing node app.js
How to exit from a Node.js program Learn how to terminate a Node.js app in the best possible way. There are various ways to terminate a Node.js application. When running a program in the console you can close it with ctrl-C , but what I want to discuss here is programmatically exiting. Let's start with the most drastic one, and see why you're better off not using it. The process core module provides a handy method that allows you to programmatically exit from a Node.js program: process.exit() . When Node.js runs this line, the process is immediately forced to terminate. This means that any callback that's pending, any network request still being sent, any filesystem access, or processes writing to stdout or stderr - all of it is going to be ungracefully terminated right away. If this is fine for you, you can pass an integer that signals the operating system the exit code: process.exit(1)
By default the exit code is 0 , which means success. Different exit codes have different meaning, which you might want to use in your own system to have the program communicate to other programs. You can read more on exit codes at https://nodejs.org/api/process.html#process_exit_codes You can also set the process.exitCode property: process.exitCode = 1
and when the program will later end, Node will return that exit code. A program will gracefully exit when all the processing is done. Many times with Node we start servers, like this HTTP server:

const express = require('express')
const app = express()

app.get('/', (req, res) => {
  res.send('Hi!')
})

app.listen(3000, () => console.log('Server ready'))
This program is never going to end. If you call process.exit() , any currently pending or running request is going to be aborted. This is not nice. In this case you need to send the command a SIGTERM signal, and handle that with the process signal handler: Note: process does not require a "require", it's automatically available. const express = require('express') const app = express() app.get('/', (req, res) => { res.send('Hi!') }) const server = app.listen(3000, () => console.log('Server ready')) process.on('SIGTERM', () => { server.close(() => { console.log('Process terminated') }) })
What are signals? Signals are a POSIX intercommunication system: a notification sent to a process in order to notify it of an event that occurred. SIGKILL is the signal that tells a process to immediately terminate, and would ideally act like process.exit() . SIGTERM is the signal that tells a process to gracefully terminate. It is the signal that's sent
from process managers like upstart or supervisord and many others. You can send this signal from inside the program, in another function: process.kill(process.pid, 'SIGTERM')
Or from another Node.js running program, or any other app running in your system that knows the PID of the process you want to terminate.
How to read environment variables Learn how to read and make use of environment variables in a Node.js program The process core module of Node provides the env property which hosts all the environment variables that were set at the moment the process was started. Here is an example that accesses the NODE_ENV environment variable, which is set to development by default.
Note: process does not require a "require", it's automatically available. process.env.NODE_ENV // "development"
Setting it to "production" before the script runs will tell Node that this is a production environment. In the same way you can access any custom environment variable you set.
Node hosting options A Node.js application can be hosted in a lot of places, depending on your needs. This is a list of all the various options you have at your disposal Here is a non-exhaustive list of the options you can explore when you want to deploy your app and make it publicly accessible. I will list the options from simplest and constrained to more complex and powerful. Simplest option ever: local tunnel Zero configuration deployments Glitch Codepen Serverless PAAS Zeit Now Nanobox Heroku Microsoft Azure Google Cloud Platform Virtual Private Server Bare metal
Simplest option ever: local tunnel Even if you have a dynamic IP, or you're under a NAT, you can deploy your app and serve the requests right from your computer using a local tunnel. This option is suited for some quick testing, demo a product or sharing of an app with a very small group of people. A very nice tool for this, available on all platforms, is ngrok. Using it, you can just type ngrok PORT and the PORT you want is exposed to the internet. You will get a ngrok.io domain, but with a paid subscription you can get a custom URL as well as more security options (remember that you are opening your machine to the public Internet). Another service you can use is https://github.com/localtunnel/localtunnel
Zero configuration deployments
Glitch Glitch is a playground and a way to build your apps faster than ever, and see them live on their own glitch.com subdomain. You cannot currently have a custom domain, and there are a few restrictions in place, but it's really great to prototype. It looks fun (and this is a plus), and it's not a dumbed down environment - you get all the power of Node.js, a CDN, secure storage for credentials, GitHub import/export and much more. Provided by the company behind FogBugz and Trello (and co-creators of Stack Overflow). I use it a lot for demo purposes.
Codepen Codepen is an amazing platform and community. You can create a project with multiple files, and deploy it with a custom domain.
Serverless A way to publish your apps, and have no server at all to manage, is Serverless. Serverless is a paradigm where you publish your apps as functions, and they respond on a network endpoint (also called FAAS - Functions As A Service). Two very popular solutions are Serverless Framework Standard Library They both provide an abstraction layer to publishing on AWS Lambda and other FAAS solutions based on Azure or the Google Cloud offering.
PAAS PAAS stands for Platform As A Service. These platforms take away a lot of things you should otherwise worry about when deploying your application.
Zeit Now Zeit is an interesting option. You just type now in your terminal, and it takes care of deploying your application. There is a free version with limitations, and the paid version is more powerful. You simply forget that there's a server, you just deploy the app.
Nanobox Nanobox
Heroku Heroku is an amazing platform. This is a great article on getting started with Node.js on Heroku.
Microsoft Azure Azure is the Microsoft Cloud offering. Check out how to create a Node.js web app in Azure.
Google Cloud Platform Google Cloud is an amazing structure for your apps. They have a good Node.js Documentation Section
Virtual Private Server In this section you find the usual suspects, ordered from more user friendly to less user friendly: Digital Ocean Linode Amazon Web Services, in particular I mention Amazon Elastic Beanstalk as it abstracts away a little bit the complexity of AWS. Since they provide an empty Linux machine on which you can work, there is no specific tutorial for these. There are lots more options in the VPS category, those are just the ones I used and I would recommend.
Bare metal Another solution is to get a bare metal server, install a Linux distribution, connect it to the internet (or rent one monthly, like you can do using the Vultr Bare Metal service)
Use the Node REPL REPL stands for Read-Evaluate-Print-Loop, and it's a great way to explore the Node features in a quick way The node command is the one we use to run our Node.js scripts: node script.js
If we omit the filename, we use it in REPL mode: node
If you try it now in your terminal, this is what happens: ❯ node >
the command stays in idle mode and waits for us to enter something. Tip: if you are unsure how to open your terminal, google "How to open terminal on" followed by your operating system name. The REPL is waiting for us to enter some JavaScript code, to be more precise. Start simple and enter:

> console.log('test')
test
undefined
>
The first value, test , is the output we told the console to print, then we get undefined which is the return value of running console.log() . We can now enter a new line of JavaScript.
Use the tab to autocomplete The cool thing about the REPL is that it's interactive. As you write your code, if you press the tab key the REPL will try to autocomplete what you wrote to match a variable you already defined or a predefined one.
Exploring JavaScript objects Try entering the name of a JavaScript class, like Number , add a dot and press tab . The REPL will print all the properties and methods you can access on that class:
Explore global objects You can inspect the globals you have access to by typing global. and pressing tab :
The _ special variable If after some code you type _ , that is going to print the result of the last operation.
Dot commands The REPL has some special commands, all starting with a dot . . They are .help : shows the dot commands help .editor : enables editor mode, to write multiline JavaScript code with ease. Once you are
in this mode, enter ctrl-D to run the code you wrote. .break : when inputting a multi-line expression, entering the .break command will abort
further input. Same as pressing ctrl-C. .clear : resets the REPL context to an empty object and clears any multi-line expression
currently being input. .load : loads a JavaScript file, relative to the current working directory .save : saves all you entered in the REPL session to a file (specify the filename) .exit : exits the REPL (same as pressing ctrl-C two times)
The REPL knows when you are typing a multi-line statement without the need to invoke .editor .
For example if you start typing an iteration like this: [1, 2, 3].forEach(num => {
and you press enter , the REPL will go to a new line that starts with 3 dots, indicating you can now continue to work on that block. ... console.log(num) ... })
If you type .break at the end of a line, the multiline mode will stop and the statement will not be executed.
Pass arguments from the command line How to accept arguments in a Node.js program passed from the command line You can pass any number of arguments when invoking a Node.js application using node app.js
Arguments can be standalone or have a key and a value. For example: node app.js flavio
or node app.js name=flavio
This changes how you will retrieve this value in the Node code. The way you retrieve it is using the process object built into Node. It exposes an argv property, which is an array that contains all the command line invocation arguments. The first argument is the full path of the node command. The second element is the full path of the file being executed. All the additional arguments are present from the third position going forward. You can iterate over all the arguments (including the node path and the file path) using a loop: process.argv.forEach((val, index) => { console.log(`${index}: ${val}`) })
You can get only the additional arguments by creating a new array that excludes the first 2 params: const args = process.argv.slice(2)
If you have one argument without an index name, like this: node app.js flavio
you can access it using const args = process.argv.slice(2) args[0]
In this case: node app.js name=flavio
args[0] is name=flavio, and you need to parse it. The best way to do so is by using the minimist library, which helps with handling arguments:
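A minimal sketch of that snippet, assuming minimist has been installed with npm install minimist, looks like this:
const args = require('minimist')(process.argv.slice(2))
console.log(args['name']) // prints "flavio" when invoked as: node app.js --name=flavio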
This time you need to use double dashes before each argument name: node app.js --name=flavio
Output to the command line How to print to the command line console using Node, from the basic console.log to more complex scenarios Basic output using the console module Clear the console Counting elements Print the stack trace Calculate the time spent stdout and stderr Color the output Create a progress bar
Basic output using the console module Node provides a console module which provides tons of very useful ways to interact with the command line. It is basically the same as the console object you find in the browser. The most basic and most used method is console.log() , which prints the string you pass to it to the console. If you pass an object, it will render it as a string. You can pass multiple variables to console.log , for example: const x = 'x' const y = 'y' console.log(x, y)
and Node will print both. We can also format pretty phrases by passing variables and a format specifier. For example: console.log('My %s has %d years', 'cat', 2)
%s format a variable as a string
%d or %i format a variable as an integer
%f format a variable as a floating point number
%O used to print an object representation
Example: console.log('%O', Number)
Clear the console console.clear() clears the console (the behavior might depend on the console used)
Counting elements console.count() is a handy method.
Take this code: const x = 1 const y = 2 const z = 3 console.count( 'The value of x is ' + x + ' and has been checked .. how many times?' ) console.count( 'The value of x is ' + x + ' and has been checked .. how many times?' ) console.count( 'The value of y is ' + y + ' and has been checked .. how many times?' )
What happens is that count will count the number of times a string is printed, and print the count next to it: You can just count apples and oranges: const oranges = ['orange', 'orange'] const apples = ['just one apple'] oranges.forEach(fruit => { console.count(fruit) }) apples.forEach(fruit => { console.count(fruit) })
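Running the fruit-counting snippet above, console.count() prints each label followed by its running count, roughly like this:
orange: 1
orange: 2
just one apple: 1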
Print the stack trace There might be cases where it's useful to print the call stack trace of a function, maybe to answer the question how did you reach that part of the code? You can do so using console.trace() : const function2 = () => console.trace() const function1 = () => function2() function1()
This will print the stack trace. This is what's printed if I try this in the Node REPL: Trace at function2 (repl:1:33) at function1 (repl:1:25) at repl:1:1 at ContextifyScript.Script.runInThisContext (vm.js:44:33) at REPLServer.defaultEval (repl.js:239:29) at bound (domain.js:301:14) at REPLServer.runBound [as eval] (domain.js:314:12) at REPLServer.onLine (repl.js:440:10) at emitOne (events.js:120:20) at REPLServer.emit (events.js:210:7)
Calculate the time spent You can easily calculate how much time a function takes to run, using time() and timeEnd() const doSomething = () => console.log('test') const measureDoingSomething = () => { console.time('doSomething()') //do something, and measure the time it takes doSomething() console.timeEnd('doSomething()') } measureDoingSomething()
stdout and stderr As we saw console.log is great for printing messages in the Console. This is what's called the standard output, or stdout . console.error prints to the stderr stream.
It will not appear in the console, but it will appear in the error log.
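For example, assuming a hypothetical app.js, you can separate the two streams from the shell when launching it:
node app.js > output.log 2> errors.log
Everything printed with console.log ends up in output.log, everything printed with console.error in errors.log.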
Color the output You can color the output of your text in the console by using escape sequences. An escape sequence is a set of characters that identifies a color. Example: console.log('\x1b[33m%s\x1b[0m', 'hi!')
You can try that in the Node REPL, and it will print hi! in yellow. However, this is the low-level way to do this. The simplest way to go about coloring the console output is by using a library. Chalk is such a library, and in addition to coloring it also helps with other styling facilities, like making text bold, italic or underlined. You install it with npm install chalk , then you can use it: const chalk = require('chalk') console.log(chalk.yellow('hi!'))
Using chalk.yellow is much more convenient than trying to remember the escape codes, and the code is much more readable. Check the project link I posted above for more usage examples.
Create a progress bar Progress is an awesome package to create a progress bar in the console. Install it using npm install progress
This snippet creates a 10-step progress bar, and every 100ms one step is completed. When the bar completes we clear the interval: const ProgressBar = require('progress') const bar = new ProgressBar(':bar', { total: 10 }) const timer = setInterval(() => { bar.tick() if (bar.complete) { clearInterval(timer) } }, 100)
Accept input from the command line How to make a Node.js CLI program interactive using the built-in readline Node module How to make a Node.js CLI program interactive? Node since version 7 provides the readline module to perform exactly this: get input from a readable stream such as the process.stdin stream, which during the execution of a Node program is the terminal input, one line at a time. const readline = require('readline').createInterface({ input: process.stdin, output: process.stdout }) readline.question(`What's your name?`, (name) => { console.log(`Hi ${name}!`) readline.close() })
This piece of code asks the username, and once the text is entered and the user presses enter, we send a greeting. The question() method shows the first parameter (a question) and waits for the user input. It calls the callback function once enter is pressed. In this callback function, we close the readline interface. readline offers several other methods, and I'll let you check them out on the package
documentation I linked above. If you need to require a password, it's best to not echo it back, but to show a * symbol instead. The simplest way is to use the readline-sync package, which is very similar in terms of API and handles this out of the box. A more complete and abstract solution is provided by the Inquirer.js package. You can install it using npm install inquirer, and then you can replicate the above code like this:
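A sketch of how that Inquirer.js version could look (the question name and message are assumptions, mirroring the readline example above):
const inquirer = require('inquirer')

const questions = [
  {
    type: 'input',
    name: 'name',
    message: `What's your name?`
  }
]

inquirer.prompt(questions).then(answers => {
  console.log(`Hi ${answers.name}!`)
})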
Inquirer.js lets you do many things like asking multiple choices, having radio buttons, confirmations, and more. It's worth knowing all the alternatives, especially the built-in ones provided by Node, but if you plan to take CLI input to the next level, Inquirer.js is an optimal choice.
Expose functionality from a Node file using exports How to use the module.exports API to expose data to other files in your application, or to other applications as well Node has a built-in module system. A Node.js file can import functionality exposed by other Node.js files. When you want to import something you use const library = require('./library')
to import the functionality exposed in the library.js file that resides in the current file folder. In this file, functionality must be exposed before it can be imported by other files. Any other object or variable defined in the file by default is private and not exposed to the outer world. This is what the module.exports API offered by the module system allows us to do. When you assign an object or a function as a new exports property, that is the thing that's being exposed, and as such, it can be imported in other parts of your app, or in other apps as well. You can do so in 2 ways. The first is to assign an object to module.exports , which is an object provided out of the box by the module system, and this will make your file export just that object: const car = { brand: 'Ford', model: 'Fiesta' } module.exports = car //..in the other file const car = require('./car')
The second way is to add the exported object as a property of exports . This way allows you to export multiple objects, functions or data:
const car = { brand: 'Ford', model: 'Fiesta' } exports.car = car
or directly exports.car = { brand: 'Ford', model: 'Fiesta' }
And in the other file, you'll use it by referencing a property of your import: const items = require('./items') items.car
or const car = require('./items').car
What's the difference between module.exports and exports ? The first exposes the object it points to. The latter exposes the properties of the object it points to.
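A quick illustration of that difference, in three hypothetical variations of a library file:
// exposes the object itself: require('./library') returns { brand: 'Ford' }
module.exports = { brand: 'Ford' }

// exposes a property: require('./library').car returns { brand: 'Ford' }
exports.car = { brand: 'Ford' }

// careful: reassigning exports breaks the link to module.exports and exposes nothing
exports = { brand: 'Ford' }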
npm A quick guide to npm, the powerful package manager key to the success of Node.js. In January 2017 over 350000 packages were reported being listed in the npm registry, making it the biggest single language code repository on Earth, and you can be sure there is a package for (almost!) everything.
Introduction to npm Downloads Installing all dependencies Installing a single package Updating packages Versioning Running Tasks
Introduction to npm npm is the standard package manager for Node.js.
In January 2017 over 350000 packages were reported being listed in the npm registry, making it the biggest single language code repository on Earth, and you can be sure there is a package for (almost!) everything. It started as a way to download and manage dependencies of Node.js packages, but it has since become a tool used also in frontend JavaScript. There are many things that npm does. Yarn is an alternative to npm. Make sure you check it out as well.
Downloads npm manages downloads of dependencies of your project.
Installing all dependencies
If a project has a package.json file, by running
npm install
it will install everything the project needs, in the node_modules folder, creating it if it's not existing already.
Installing a single package
You can also install a specific package by running
npm install <package-name>
Often you'll see more flags added to this command:
--save installs and adds the entry to the package.json file dependencies
--save-dev installs and adds the entry to the package.json file devDependencies
The difference is mainly that devDependencies are usually development tools, like a testing library, while dependencies are bundled with the app in production.
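For example, a testing library would typically be installed as a development dependency (jest is used here purely as an illustration):
npm install jest --save-dev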
Updating packages Updating is also made easy, by running npm update
npm will check all packages for a newer version that satisfies your versioning constraints.
You can specify a single package to update as well:
npm update <package-name>
Versioning In addition to plain downloads, npm also manages versioning, so you can specify any specific version of a package, or require a version higher or lower than what you need. Many times you'll find that a library is only compatible with a major release of another library. Or a bug in the latest release of a lib, still unfixed, is causing an issue. Specifying an explicit version of a library also helps to keep everyone on the same exact version of a package, so that the whole team runs the same version until the package.json file is updated. In all those cases, versioning helps a lot, and npm follows the semantic versioning (semver) standard.
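For instance, in a package.json dependencies block the version numbers (examples only) can be constrained in different ways: "vue": "2.5.2" requires exactly that version, "lodash": "^4.17.0" accepts any 4.x release from 4.17.0 up, and "moment": "~2.22.0" accepts only 2.22.x patch releases:
"dependencies": {
  "vue": "2.5.2",
  "lodash": "^4.17.0",
  "moment": "~2.22.0"
}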
Running Tasks
The package.json file supports a format for specifying command line tasks that can be run by using
npm run <task-name>
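For example, a scripts section could define shorthands like these (the webpack invocations are just placeholders):
"scripts": {
  "watch": "webpack --watch --progress --config webpack.conf.js",
  "dev": "webpack --progress --config webpack.conf.js",
  "prod": "webpack -p --config webpack.conf.js"
}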
So instead of typing those long commands, which are easy to forget or mistype, you can run $ npm run watch $ npm run dev $ npm run prod
Where does npm install the packages How to find out where npm installs the packages Read the npm guide if you are starting out with npm, it's going to go in a lot of the basic details of it. When you install a package using npm (or yarn), you can perform 2 types of installation: a local install a global install By default, when you type an npm install command, like: npm install lodash
the package is installed in the current file tree, under the node_modules subfolder. As this happens, npm also adds the lodash entry in the dependencies property of the package.json file present in the current folder.
A global installation is performed using the -g flag: npm install -g lodash
When this happens, npm won't install the package under the local folder, but instead, it will use a global location. Where, exactly? The npm root -g command will tell you where that exact location is on your machine. On macOS or Linux this location could be /usr/local/lib/node_modules . On Windows it could be C:\Users\YOU\AppData\Roaming\npm\node_modules If you use nvm to manage Node.js versions, however, that location would differ. I for example use nvm and my packages location was shown as /Users/flavio/.nvm/versions/node/v8.9.0/lib/node_modules .
How to use or execute a package installed using npm How to include and use in your code a package installed in your node_modules folder When you install using npm a package into your node_modules folder, or also globally, how do you use it in your Node code? Say you install lodash , the popular JavaScript utility library, using npm install lodash
This is going to install the package in the local node_modules folder. To use it in your code, you just need to import it into your program using require : const _ = require('lodash')
What if your package is an executable? In this case, it will put the executable file under the node_modules/.bin/ folder. One easy way to demonstrate this is cowsay. The cowsay package provides a command line program that can be executed to make a cow say something (and other animals as well).
When you install the package using npm install cowsay , it will install itself and a few dependencies in the node_modules folder:
There is a hidden .bin folder, which contains symbolic links to the cowsay binaries:
How do you execute those? You can of course type ./node_modules/.bin/cowsay to run it, and it works, but npx, included in the recent versions of npm (since 5.2), is a much better option. You just run: npx cowsay
and npx will find the package location.
The package.json file The package.json file is a key element in lots of app codebases based on the Node.js ecosystem. If you work with JavaScript, or you've ever interacted with a JavaScript project, Node.js or a frontend project, you surely met the package.json file. What's that for? What should you know about it, and what are some of the cool things you can do with it? The package.json file is kind of a manifest for your project. It can do a lot of things, completely unrelated. It's a central repository of configuration for tools, for example. It's also where npm and yarn store the names and versions of the package it installed. The file structure Properties breakdown name author contributors bugs homepage version license keywords description repository main private scripts dependencies devDependencies engines browserslist
Command-specific properties Package versions
The file structure Here's an example package.json file:
{ }
It's empty! There are no fixed requirements of what should be in a package.json file, for an application. The only requirement is that it respects the JSON format, otherwise it cannot be read by programs that try to access its properties programmatically. If you're building a Node.js package that you want to distribute over npm things change radically, and you must have a set of properties that will help other people use it. We'll see more about this later on. This is another package.json: { "name": "test-project" }
It defines a name property, which tells the name of the app, or package, that's contained in the same folder where this file lives. Here's a much more complex example, which I extracted this from a sample Vue.js application: { "name": "test-project", "version": "1.0.0", "description": "A Vue.js project", "main": "src/main.js", "private": true, "scripts": { "dev": "webpack-dev-server --inline --progress --config build/webpack.dev.conf.js", "start": "npm run dev", "unit": "jest --config test/unit/jest.conf.js --coverage", "test": "npm run unit", "lint": "eslint --ext .js,.vue src test/unit", "build": "node build/build.js" }, "dependencies": { "vue": "^2.5.2" }, "devDependencies": { "autoprefixer": "^7.1.2", "babel-core": "^6.22.1", "babel-eslint": "^8.2.1", "babel-helper-vue-jsx-merge-props": "^2.0.3", "babel-jest": "^21.0.2", "babel-loader": "^7.1.1", "babel-plugin-dynamic-import-node": "^1.2.0", "babel-plugin-syntax-jsx": "^6.18.0",
browserslist
Is used to tell which browsers (and their versions) you want to support. It's referenced by Babel, Autoprefixer, and other tools, to only add the polyfills and fallbacks needed to the browsers you target. Example:
"browserslist": [
  "> 1%",
  "last 2 versions",
  "not ie <= 8"
]

escape() replace <, >, &, ', " and / with the corresponding HTML entities
ltrim() like trim(), but only trims characters at the start of the string
rtrim() like trim(), but only trims characters at the end of the string
stripLow() remove ASCII control characters, which are normally invisible
Force conversion to a format:
toBoolean() convert the input string to a boolean. Everything except for '0', 'false' and '' returns true. In strict mode only '1' and 'true' return true
toDate() convert the input string to a date, or null if the input is not a date
toFloat() convert the input string to a float, or NaN if the input is not a float
toInt() convert the input string to an integer, or NaN if the input is not an integer
Like with custom validators, you can create a custom sanitizer. In the callback function you just return the sanitized value: const sanitizeValue = value => { //sanitize... } app.post('/form', [ check('value').customSanitizer(value => { return sanitizeValue(value) }), ], (req, res) => { const value = req.body.value })
Handling forms How to process forms using Express This is an example of an HTML form:
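A minimal version of such a form, posting a single username field to /submit-form, could look like this:
<form method="POST" action="/submit-form">
  <input type="text" name="username" />
  <input type="submit" />
</form>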
When the user presses the submit button, the browser will automatically make a POST request to the /submit-form URL on the same origin of the page, sending the data it contains, encoded as application/x-www-form-urlencoded. In this case, the form data contains the username input field value.
Forms can also send data using the GET method, but the vast majority of the forms you'll build will use POST . The form data will be sent in the POST request body. To extract it, you will use the express.urlencoded() middleware, provided by Express: const express = require('express') const app = express() app.use(express.urlencoded())
Now you need to create a POST endpoint on the /submit-form route, and any data will be available on Request.body : app.post('/submit-form', (req, res) => { const username = req.body.username //... res.end() })
Don't forget to validate the data before using it, using express-validator .
File uploads in forms How to manage storing and handling files uploaded via forms, in Express This is an example of an HTML form that allows a user to upload a file:
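Such a form could look like this (the field name is just an example; the multipart/form-data encoding is what matters):
<form method="POST" action="/submit-form" enctype="multipart/form-data">
  <input type="file" name="document" />
  <input type="submit" />
</form>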
When the user presses the submit button, the browser will automatically make a POST request to the /submit-form URL on the same origin of the page, sending the data it contains, not encoded as application/x-www-form-urlencoded as a normal form, but as multipart/form-data. Server-side, handling multipart data can be tricky and error prone, so we are going to use a utility library called formidable. Here's the GitHub repo; it has over 4000 stars and is well maintained. You can install it using:
npm install formidable
Then in your Node.js file, include it: const express = require('express') const app = express() const formidable = require('formidable')
Now in the POST endpoint on the /submit-form route, we instantiate a new Formidable form using formidable.IncomingForm():
app.post('/submit-form', (req, res) => {
  new formidable.IncomingForm()
})
After doing so, we need to parse the form. We can do so in one go by providing a callback, which means all files are processed, and once formidable is done, it makes them available:
app.post('/submit-form', (req, res) => {
  new formidable.IncomingForm().parse(req, (err, fields, files) => {
    if (err) {
      console.error('Error', err)
      throw err
    }
    res.end()
  })
})
Or you can use events instead of a callback, to be notified when each file is parsed, along with other events, like the end of processing, receiving a non-file field, or an error:
app.post('/submit-form', (req, res) => {
  new formidable.IncomingForm().parse(req)
    .on('field', (name, field) => {
      console.log('Field', name, field)
    })
    .on('file', (name, file) => {
      console.log('Uploaded file', name, file)
    })
    .on('aborted', () => {
      console.error('Request aborted by the user')
    })
    .on('error', (err) => {
      console.error('Error', err)
      throw err
    })
    .on('end', () => {
      res.end()
    })
})
Whatever way you choose, you'll get one or more Formidable.File objects, which give you information about the uploaded file. These are some of the properties you can access:
file.size, the file size in bytes
file.path, the path the file is written to
file.name, the name of the file
file.type, the MIME type of the file
The path defaults to the temporary folder and can be modified if you listen to the fileBegin event:
app.post('/submit-form', (req, res) => {
  new formidable.IncomingForm().parse(req)
    .on('fileBegin', (name, file) => {
      file.path = __dirname + '/uploads/' + file.name
    })
    //...other event handlers
})
An Express HTTPS server with a self-signed certificate
How to create a self-signed HTTPS certificate for Node.js to test apps locally
To be able to serve a site on HTTPS from localhost you need to create a self-signed certificate. A self-signed certificate will be enough to establish a secure HTTPS connection, although browsers will complain that the certificate is self-signed and as such it's not trusted. It's great for development purposes. To create the certificate you must have OpenSSL installed on your system. You might have it installed already, just test by typing openssl in your terminal. If not, on a Mac you can install it using brew install openssl if you use Homebrew. Otherwise search on Google "how to install openssl on <your operating system>". Once OpenSSL is installed, run this command:
openssl req -nodes -new -x509 -keyout server.key -out server.cert
It will ask you a few questions. The first is the country name:
Generating a 1024 bit RSA private key
...........++++++
.........++++++
writing new private key to 'server.key'
-----
You are about to be asked to enter information that will be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank.
For some fields there will be a default value.
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
Then your state or province: State or Province Name (full name) [Some-State]:
your city:
Locality Name (eg, city) []:
and your organization name: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []:
You can leave all of these empty. Just remember to set this to localhost : Common Name (e.g. server FQDN or YOUR name) []: localhost
and to add your email address: Email Address []:
That's it! Now you have 2 files in the folder where you ran this command: server.cert is the self-signed certificate file server.key is the private key of the certificate
Both files will be needed to establish the HTTPS connection, and depending on how you are going to setup your server, the process to use them will be different. Those files need to be put in a place reachable by the application, then you need to configure the server to use them. This is an example using the https core module and Express: const https = require('https') const app = express() app.get('/', (req, res) => { res.send('Hello HTTPS!') }) https.createServer({}, app).listen(3000, () => { console.log('Listening...') })
without adding the certificate, if I connect to https://localhost:3000 this is what the browser will show:
With the certificate in place: const fs = require('fs') //... https.createServer({ key: fs.readFileSync('server.key'), cert: fs.readFileSync('server.cert') }, app).listen(3000, () => { console.log('Listening...') })
Chrome will tell us the certificate is invalid, since it's self-signed, and will ask us to confirm to continue, but the HTTPS connection will work:
Setup Let's Encrypt for Express How to set up HTTPS using the popular free solution Let's Encrypt If you run a Node.js application on your own VPS, you need to manage getting an SSL certificate. Today the standard for doing this is to use Let's Encrypt and Certbot, a tool from EFF, aka Electronic Frontier Foundation, the leading nonprofit organization focused on privacy, free speech, and in general civil liberties in the digital world. These are the steps we'll follow: Install Certbot Generate the SSL certificate using Certbot Allow Express to serve static files Confirm the domain Obtain the certificate Setup the renewal
Install Certbot
These instructions assume you are using Ubuntu, Debian or any other Linux distribution that uses apt-get:
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot
You can also install Certbot on a Mac to test: brew install certbot
but you will need to link that to a real domain name, in order for it to be useful.
Generate the SSL certificate using Certbot Now that Certbot is installed, you can invoke it to generate the certificate. You must run this as root:
certbot certonly --manual
or call sudo sudo certbot certonly --manual
The installer will ask you the domain of your website. This is the process in detail. It asks for the email ➜ sudo certbot certonly --manual Password: XXXXXXXXXXXXXXXXXX Saving debug log to /var/log/letsencrypt/letsencrypt.log Plugins selected: Authenticator manual, Installer None Enter email address (used for urgent renewal and security notices) (Enter 'c' to cancel): [email protected]
It asks to accept the ToS: Please read the Terms of Service at https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf. You must agree in order to register with the ACME server at https://acme-v02.api.letsencrypt.org/directory (A)gree/(C)ancel: A
It asks to share the email address Would you be willing to share your email address with the Electronic Frontier Foundation, a founding partner of the Let's Encrypt project and the non-profit organization that develops Certbot? We'd like to send you email about our work encrypting the web, EFF news, campaigns, and ways to support digital freedom. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - (Y)es/(N)o: Y
And finally we can enter the domain where we want to use the SSL certificate: Please enter in your domain name(s) (comma and/or space separated) (Enter 'c' to cancel): copesflavio.com
It asks if it's ok to log your IP: Obtaining a new certificate Performing the following challenges:
http-01 challenge for copesflavio.com - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - NOTE: The IP of this machine will be publicly logged as having requested this certificate. If you're running certbot in manual mode on a machine that is not your server, please ensure you're okay with that. Are you OK with your IP being logged? - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - (Y)es/(N)o: y
And finally we get to the verification phase!
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Create a file containing just this data:
TS_oZ2-ji23jrio3j2irj3iroj_U51u1o0x7rrDY2E.1DzOo_voCOsrpddP_2kpoek2opeko2pke-UAPb21sW1c
And make it available on your web server at this URL:
http://copesflavio.com/.well-known/acme-challenge/TS_oZ2-ji23jrio3j2irj3iroj_U51u1o0x7rrDY2E
Now let's leave Certbot alone for a couple minutes. We need to verify we own the domain, by creating a file named TS_oZ2-ji23jrio3j2irj3iroj_U51u1o0x7rrDY2E in the .well-known/acme-challenge/ folder. Pay attention!
The weird string I just pasted changes every single time. You'll need to create the folder and the file, since they do not exist by default. In this file you need to put the content that Certbot printed:
TS_oZ2-ji23jrio3j2irj3iroj_U51u1o0x7rrDY2E.1DzOo_voCOsrpddP_2kpoek2opeko2pke-UAPb21sW1c
As for the filename, this string is unique each time you run Certbot.
Allow Express to serve static files
In order to serve that file from Express, you need to enable serving static files. You can create a static folder, and add there the .well-known subfolder, then configure Express like this:
const express = require('express')
const app = express()
app.use(express.static(__dirname + '/static', { dotfiles: 'allow' }))
//...
The dotfiles option is mandatory otherwise .well-known , which is a dotfile as it starts with a dot, won't be made visible. This is a security measure, because dotfiles can contain sensitive information and they are better off preserved by default.
Confirm the domain Now run the application and make sure the file is reachable from the public internet, and go back to Certbot, which is still running, and press ENTER to go on with the script.
Obtain the certificate That's it! If all went well, Certbot created the certificate, and the private key, and made them available in a folder on your computer (and it will tell you which folder, of course). Now copy/paste the paths into your application, to start using them to serve your requests: const fs = require('fs') const https = require('https') const app = express() app.get('/', (req, res) => { res.send('Hello HTTPS!') }) https.createServer({ key: fs.readFileSync('/etc/letsencrypt/path/to/key.pem'), cert: fs.readFileSync('/etc/letsencrypt/path/to/cert.pem'), ca: fs.readFileSync('/etc/letsencrypt/path/to/chain.pem') }, app).listen(443, () => { console.log('Listening...') })
Note that I made this server listen on port 443, so you need to run it with root permissions. Also, the server is exclusively running in HTTPS, because I used https.createServer(). You can also run an HTTP server alongside this, by running:
http.createServer(app).listen(80, () => {
  console.log('Listening...')
})
Setup the renewal
The SSL certificate is only going to be valid for 90 days. You need to set up an automated system for renewing it. How? Using a cron job. A cron job is a way to run tasks every interval of time. It can be every week, every minute, every month. In our case we'll run the renewal script twice per day, as recommended in the Certbot documentation. First find out the absolute path of certbot on your system. I used type certbot on macOS to get it, and in my case it's /usr/local/bin/certbot. Here's the script we need to run:
certbot renew
This is the cron job entry: 0 */12 * * * root /usr/local/bin/certbot renew >/dev/null 2>&1
It means run it every 12 hours, every day: at 00:00 and at 12:00. Tip: I generated this line using https://crontab-generator.org/ Add this script to your crontab, by using the command: env EDITOR=pico crontab -e
This opens the pico editor (you can choose the one you prefer). You enter the line, save, and the cron job is installed. Once this is done, you can see the list of cron jobs active using crontab -l
JavaScript Libraries
Axios Axios is a very popular JavaScript library you can use to perform HTTP requests, that works in both Browser and Node.js platforms Introduction Installation The Axios API GET requests Add parameters to GET requests POST Requests
Introduction Axios is a very popular JavaScript library you can use to perform HTTP requests, that works in both Browser and Node.js platforms.
It supports all modern browsers, including support for IE8 and higher. It is promise-based, and this lets us write async/await code to perform XHR requests very easily. Using Axios has quite a few advantages over the native Fetch API:
supports older browsers (Fetch needs a polyfill) has a way to abort a request has a way to set a response timeout has built-in CSRF protection supports upload progress performs automatic JSON data transformation works in Node.js
Installation Axios can be installed using npm: npm install axios
or yarn: yarn add axios
or simply include it in your page using unpkg.com:
<script src="https://unpkg.com/axios/dist/axios.min.js"></script>
The Axios API You can start an HTTP request from the axios object: axios({ url: 'https://dog.ceo/api/breeds/list/all', method: 'get', data: { foo: 'bar' } })
but for convenience, you will generally use axios.get() axios.post()
(like in jQuery you would use $.get() and $.post() instead of $.ajax())
Axios offers methods for all the HTTP verbs, which are less popular but still used:
axios.delete()
axios.put()
axios.patch()
axios.options()
and a method to get the HTTP headers of a request, discarding the body: axios.head()
GET requests
One convenient way to use Axios is to use the modern (ES2017) async/await syntax. This Node.js example queries the Dog API to retrieve a list of all the dog breeds, using axios.get(), and it counts them:
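A sketch of that example, assuming the Dog API response has the shape { message: { breed: [...] }, status: 'success' }:
const axios = require('axios')

const getBreeds = async () => {
  try {
    return await axios.get('https://dog.ceo/api/breeds/list/all')
  } catch (error) {
    console.error(error)
  }
}

const countBreeds = async () => {
  const breeds = await getBreeds()
  if (breeds && breeds.data.message) {
    console.log(`Got ${Object.keys(breeds.data.message).length} breeds`)
  }
}

countBreeds()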
Add parameters to GET requests
A GET request can pass parameters in the URL, like this: https://site.com/?foo=bar. With Axios you can perform this by simply using that URL:
axios.get('https://site.com/?foo=bar')
or you can use a params property in the options: axios.get('https://site.com/', { params: { foo: 'bar' } })
POST Requests Performing a POST request is just like doing a GET request, but instead of axios.get , you use axios.post : axios.post('https://site.com/')
An object containing the POST parameters is the second argument: axios.post('https://site.com/', { foo: 'bar' })
The Beginner's Guide to Meteor Meteor is an awesome web application platform. It's a great tool for both beginners and experts, it makes it super easy to start, and provides a huge ecosystem of libraries you can leverage Meteor is an awesome web application platform. Modern web applications can be extremely complicated to write. Especially for beginners. Meteor is a great tool for both beginners and experts, it makes it super easy to start, and provides a huge ecosystem of libraries you can leverage.
JavaScript Real-time Feels fast Open Source It's simple A great package system How Meteor can improve your life When Meteor might not be the best fit for you
The 7 Meteor Principles Data on the Wire One Language Database Everywhere Latency Compensation Full Stack Reactivity Embrace the Ecosystem Simplicity Equals Productivity Installation procedure First steps with Meteor Code walk-through client/main.html client/main.js The Meteor CLI meteor meteor create meteor add meteor remove Isomorphic Meteor.isServer, Meteor.isClient Special directories Session variables and template helpers Reactive programming What is reactive programming Reactive sources Reactive computations Defining your own reactive computations Meteor Publications Server publication Client subscription Autopublish Minimongo MongoDB: The Meteor Database MongoDB in two words Meteor and MongoDB Minimongo Minimongo is a MongoDB client-side clone Client-side storage facility Latency Compensation What does it mean?
Meteor Collections Create your first collection Adding items to a collection Showing the collection in the template
JavaScript Meteor was one of the first popular approaches to just use JavaScript both on the client, and on the server, seamlessly. Coupled with MongoDB, which is a Database which stores JSON objects, and uses Javascript as a query language, it makes JavaScript ubiquitous. Meteor also ships with Minimongo in the frontend, which is a frontend database compatible with the MongoDB APIs, entirely written in JavaScript.
Real-time
Meteor is known for its real-time features, but what exactly is real-time? Suppose you want to create a chat app. Meteor offers you features that are perfect for that. Want to create an internal communication app? Perfect too. A project management app? Basically, in any app where users should be notified, or where what's displayed should update based on other users' actions or on third-party events such as an API changing the information displayed, the user viewing the app can be notified of those changes immediately, in a rather easy way compared to other solutions.
Feels fast
A thing named Latency Compensation offers you a trick that enables the interface to feel dead fast even if it still needs to communicate with a remote server. And best of all, it's free for you in terms of implementation, meaning it's baked into Meteor and you don't have to do anything to enable it.
Open Source Of course Meteor is entirely Open Source.
It's simple Things seem very simple in Meteor, because they are simple.
Complicated things lead to weird bugs or hard problems later. Meteor offers us a clean, beautiful API and functionality to build upon.
A great package system The cool thing about Meteor is that since it can power both the frontend and the backend, and it's deeply integrated with the database, both frontend and backend code can be put in a single package, and work seamlessly for us on both sides. That's why we can add full user management with a single line of code.
How Meteor can improve your life
Meteor gives you a Full-Stack platform, by providing both the client-side framework and the server-side framework. What's more, Meteor even provides you the communication channel between those. It's called DDP, and we'll talk about it later. You no longer need to glue together different frameworks, languages, tooling and codebases. This is huge for the independent developer, small startups or even bigger organizations that don't want to lose time and resources making things harder than they should be.
When Meteor might not be the best fit for you Static content websites have other better platforms to build upon. If you need to just output some HTML without a lot of interactivity, use a static site generator. Meteor as of writing does not support SQL Databases, which can be a good thing in many cases, but they can be needed in other cases. Of course you can write your own procedures that use SQL Data.
The 7 Meteor Principles Meteor is built upon the following seven principles. They're listed in the project documentation and they're fundamental principles so we'll report them here. Principles always matter when they're respected in the everyday life.
Data on the Wire Meteor doesn't send HTML over the network. The server sends data and lets the client render it.
One Language Meteor lets you write both the client and the server parts of your application in JavaScript.
Database Everywhere You can use the same methods to access your database from the client or the server.
Latency Compensation On the client, Meteor prefetches data and simulates models to make it look like server method calls return instantly.
Full Stack Reactivity In Meteor, real-time is the default. All layers, from database to template, update themselves automatically when necessary.
Embrace the Ecosystem Meteor is open source and integrates with existing open source tools and frameworks.
Simplicity Equals Productivity The best way to make something seem simple is to have it actually be simple. Meteor's main functionality has clean, classically beautiful APIs.
Installation procedure On OSX and Linux installing Meteor is as simple as typing this in the Operating System terminal: curl https://install.meteor.com/ | sh
That's it! Windows has its own official installer, so check it out on the official site.
First steps with Meteor Let's create the first Meteor app. Open the Operating System terminal, go into the directory where you'll host the project and type meteor create hello-world
Meteor will create the new app for you, in the hello-world directory. Now go inside that directory and type meteor
This will spin up the Meteor web server, and you'll be able to reach your first Meteor app by pointing your browser at http://localhost:3000
It was easy, right?
Code walk-through
Let's walk through the app code to see how it works. Do not worry if things are not very clear right now, many concepts will be introduced and explained later on. A few years ago this Meteor sample app would have contained just one JavaScript file, for both the client and server, using Meteor.isClient and Meteor.isServer to check if the app was running on the client, or on the server. The sample app moved away from this approach, and now has a server/main.js file, and other files in client/.
client/main.html If you open the client/main.html file you can see the source code of the app:
<head>
  <title>hello-world</title>
</head>

<body>
  <h1>Welcome to Meteor!</h1>

  {{> hello}}
  {{> info}}
</body>

<template name="hello">
  <button>Click Me</button>
  <p>You've pressed the button {{counter}} times.</p>
</template>

<template name="info">
  <h2>Learn Meteor!</h2>
  ...
</template>
Meteor recognizes the head and body tags and puts them in the correct place in the page content. This means that by including a head tag, all its content will be added to the "real" page head tag. Same thing applies to the body tag. They are the two main tags. All the rest of the application must be put in separate template tags. The special {{ }} parentheses you see are defined by Spacebars, which is a templating language very similar to Handlebars, with some unique features that make it perfect to work with Meteor. In the hello-world example,
{{> hello}}
includes the hello template, and {{counter}}
inside the hello template looks for the counter value in the template context.
client/main.js This is the content of the client/main.js file:
import { Template } from 'meteor/templating'; import { ReactiveVar } from 'meteor/reactive-var'; import './main.html'; Template.hello.onCreated(function helloOnCreated() { // counter starts at 0 this.counter = new ReactiveVar(0); }); Template.hello.helpers({ counter() { return Template.instance().counter.get(); }, }); Template.hello.events({ 'click button'(event, instance) { // increment the counter when button is clicked instance.counter.set(instance.counter.get() + 1); }, });
The code sets up a ReactiveVar, a reactive variable. A reactive variable exposes a setter and a getter. By using the setter, all functions that are retrieving the value using get() will be alerted when its value changes. The value of the reactive variable is displayed in the HTML using the {{counter}} snippet, which calls the counter() template helper we defined here. It first initializes that variable to zero, and it sets its value to be incremented when the button is clicked in the hello template. To handle clicks, you act on the events of the hello template. In this case, we intercept the click on a button HTML element. When this happens, you increment the counter reactive variable. In the Meteor server code, in server/main.js, there's a Meteor.startup call, which just calls the passed function when Meteor is ready. Now there's nothing in it, but we'll see how this can be useful later.
The Meteor CLI When installing Meteor, you get the CLI (command line utility) called meteor . It's a super useful tool, you already used it to create the first app, and to start with we just need to know a small fraction of what it can do.
Let's introduce the four most useful commands you'll use when starting with Meteor.
meteor If inside an empty directory you type meteor
you'll get an error because Meteor was not initialized in that directory. If you instead type meteor in a folder that has a Meteor project already created (see meteor create here below), Meteor will start up and create the server, initialize the database and
you'll be able to open the Meteor website.
meteor create If inside a directory you type meteor create my_app_name
Meteor will initialize a new Meteor project in a subfolder named my_app_name .
meteor add
Typing
meteor add package_name
will look up the package_name package and install it in the current project. You can run this command in a separate terminal window while the Meteor app is running, and you'll get the package functionality without the need to restart the Meteor server.
meteor remove
Typing
meteor remove package_name
will remove the package with that name from your project.
Isomorphic
The term isomorphic identifies a framework where client-side code and server-side code are written in the same language. This implies that any piece of code could run both on the server and on the client, unless it's tied to a context-specific action. In the past 10 years Web Applications have been built by clearly separating the server and the client code. Server code ran PHP, Ruby or Python code. That code could never work on the frontend side, because the browser does not support those languages. Browsers are only capable of executing code written in JavaScript. With the meteoric rise of Node.js in the last few years, and what was built on top of it, now we have the chance to build an entire application in the same language: JavaScript. Meteor takes the isomorphic concept even further, by transparently running every file in your project, unless you tell it not to, on both sides of the platform, doing different things based on the context, as clearly explained by the Meteor documentation. This is an amazing opportunity and advantage that Meteor enables by building a "superplatform" on top of Node.js and the Browser platforms, enabling you to build applications faster and better than ever. Isomorphic refers to JavaScript code that runs with little to no modifications on the client and on the server. It's code that takes care of both what runs inside the browser, and what runs on the server. Meteor is an isomorphic framework. This is great because we can write concise applications that now even share some pieces of code between client and server code. It enables you to become a full-stack developer, because you no longer need to deeply know two separate stacks to work on both sides of the application. The classical example is the one of an HTTP request. On the browser you'd do an AJAX call. On the server you'd use your stack-specific code. Using Meteor, you can use the same function HTTP.get() provided by the http package, on both sides, just like when you install the Axios library.
Meteor.isServer, Meteor.isClient Meteor exposes two boolean variables to determine where the code is running: Meteor.isServer Meteor.isClient
Put them inside an if statement to run some code part just on one side of the platform. For example: if (Meteor.isClient) { alert('Hello dear user!') } else if (Meteor.isServer) { //running server-side }
Special directories Putting lots of Meteor.isServer and Meteor.isClient checks in the application is not ideal of course. First, the code can quickly grow complicated and not nice to read. Second, even the server code is sent to the client. This is bad because you'd better keep server-side code private, and also because you send unnecessary code which slows down loading times. That's why Meteor has two special folders that automatically take care of the distinction for us: client and server
Whatever you put in the client directory is not loaded on the server side. Whatever you put in the server directory is not sent to the client. Another advantage of keeping this distinction is that assets put in the client folders are never taken into consideration during the build phases.
Session variables and template helpers
Here's a simple example of combining Session variables and template helpers to handle a simple case: selecting the current comment in a list. In our template.html file:
{{#each comments}}
  {{> comment}}
{{/each}}
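A minimal sketch of the accompanying JavaScript, assuming a Session variable named selectedComment and that each comment's data context has an _id:
Template.comment.events({
  'click'(event, instance) {
    // remember which comment was clicked
    Session.set('selectedComment', this._id)
  }
})

Template.comment.helpers({
  selected() {
    // reactive: re-runs whenever selectedComment changes
    return Session.equals('selectedComment', this._id)
  }
})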
In this case any time I click a comment, that comment becomes the selected comment, and we can show it full-size, fetch the other comments made by the user or do some other fancy stuff.
Reactive programming First, a clarification: Meteor's reactivity has nothing to do with React, the other very popular JavaScript framework. What is reactive programming, you say? Reactive programming is a programming paradigm. Reactive programming is nothing new, nor something that Meteor introduced. But, what Meteor did was making reactive programming easy to use. Actually, you're most probably already using reactive programming without even knowing about it.
What is reactive programming Reactive programming allows you to write code that automatically refreshes and re-calculates functions and values when something that you depend on changed. For example, data in the database changed? You need to re-display it in the client.
That variable that counts the number of comments changed because you added a comment? Everything that depends on it, or shows it, must react to that change and re-compute the values. That works by having Reactive Sources. The database for example is a reactive source. When something changes inside it, it notifies the JavaScript variables that depend on those changes. Those variables are invalidated and must be recalculated according to the new data available.
Reactive sources
Meteor has a list of things that are reactive, and those drive the entire application. Not everything is reactive, just the things listed here:
Reactive variables, defined using new ReactiveVar()
The data coming from the database is a reactive data source, because by subscribing to a publication you get a cursor, and that cursor is reactive. Any change to the collection represented by the cursor will trigger a recomputation of anything that uses it.
Talking about subscriptions, when a subscription is available on the client its .ready() method is called. That is a reactive data source.
Session variables are a reactive data source. When one changes a session variable by using .set(), everything that depends on it will be recalculated or re-rendered.
The user methods Meteor.user() and Meteor.userId() are a reactive data source.
Meteor.status(), which is a client-side method that returns the current client-server connection status, is a reactive data source.
Meteor.loggingIn(), which returns true if the user is currently doing a login, is a reactive data source.
Reactive computations Whatever changes upon a reactive source change is a reactive computation. It's some piece of code, a function, that needs to run again when a reactive source it depends on changes. An example of reactive computation is the template helpers: every time a reactive data source that involves a template helper changes, the template re-renders it.
Defining your own reactive computations You can define your own reactive computations, and react when something changes upstream, by using Tracker.autorun() .
We'll soon talk more in depth about it, in the meanwhile just know that this function
Tracker.autorun(function () {
  var currentPage = Session.get('currentPage')
  alert("The current page is " + currentPage)
})
will trigger an alert whenever you call Session.set('currentPage', 'whatever'), without you needing to add callbacks or other observers.
Meteor Publications One of the key features of Meteor is provided by the data layer. Since Meteor manages both the server and the client, I can explain the concept simply: The server creates a publication The client subscribes to that publication Meteor keeps everything in sync The server can precisely determine what each client will see. Each publication can be tailored upon parameters and user permissions. Let's do a simple Pub/Sub introduction on standard MongoDB collections.
Server publication Here's an example of a server code creating a publication for comments that have been approved: //server-side code Meteor.publish('comments', () => { return Comments.find({ approved: true }) })
Or we want to create a publication for comments made on a specific article: Meteor.publish('comments', (articleId) => { return Comments.find({ articleId: articleId }) })
The publish function is called every time a client subscribes.
Client subscription
On the client the code is very easy. For example, let's subscribe to all comments: Meteor.subscribe('comments')
Let's instead subscribe to comments made on the current article: const articleId = 23 Meteor.subscribe('comments', articleId)
Once the subscribe method has been called, Meteor fills the client-side Minimongo (the MongoDB instance running on the client) with the data you chose to send it. Typically the client-side database only gets some records, the minimum amount needed to initialize and work. You don't replicate the whole server-side Mongo collection content of course, but you request data as needed.
Autopublish
Meteor makes it very easy for us to start diving into a project without worrying at all about publications and subscriptions. It does this by including the autopublish package in every new project. What that package does is automatically create a pub/sub for each collection we have defined, syncing all the data available from server to client. When you reach the phase when you need more control over the data available to each user or view, you'll just remove the autopublish package and manually define what you need.
Minimongo Minimongo is your best friend when developing in Meteor. Ok, if you feel you have lots of best friends when using Meteor, I feel the same. Everything in Meteor is provided to ease your life. Minimongo, in particular, is a frontend implementation of MongoDB. You might say.. what? Why do I need another database?
MongoDB: The Meteor Database As of writing, Meteor has just one officially supported database: MongoDB.
You may wonder why. First, let me clarify: you can actually use any database you want, but to enjoy at 100% the marvels of Meteor you need to use Mongo. There are currently community projects that are working towards adding support for many other databases.
MongoDB in two words MongoDB is a document-based database. It features high performance, high availability, easy scalability. It stores its documents in database collections. A document is a set of key-value pairs (JSON), and it has a dynamic schema. This means that each document does not need to have the same set of fields, but you have a great freedom in managing data.
Meteor and MongoDB As said, a MongoDB document is just a JSON object. Meteor Collections are directly related to MongoDB collections, and the Meteor internals make sure that when data changes in a MongoDB Collection tracked by Meteor, the Meteor Collection is updated too.
Minimongo In short, in Meteor you typically create a collection, and that collection is available on both client and server code. When you do some database query or database processing, you don't "think" whether you should do that operation on the client-side database, or the server-side database: to a certain extent, they're mostly the same thing. And they talk to each other transparently. This means that when the server-side database (MongoDB) is updated by someone else or something happens in the app you're using, or even you add something in a second browser window.. everything that's stored in the database that interests your current session is pushed by the server MongoDB to the Minimongo running inside your browser. The same happens for the opposite: you push a post to the Posts collection? Minimongo is updated immediately, while Meteor pushes the update to the MongoDB database server side. This has the nice effect of making your changes, your pages and interactions feel immediate to the user.
Minimongo is a MongoDB client-side clone Minimongo tries to perfectly emulate a subset of MongoDB. You can insert data, remove data, search, sort, update.. with the same exact MongoDB APIs. This means you can also easily port some parts of your code from the server to the client-side very easily when it makes sense.
Client-side storage facility With Minimongo you have a fantastic client-side storage that you can query using the MongoDB query functionality. You can of course create instances of a Minimongo collection just client-side, when you don't have the need to sync a collection to the server. Not only that: you can observe database changes, and your interface can react to those changes easily.
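As a sketch of what both ideas look like together, here is a client-only collection being observed for changes. The Notifications collection, its fields and the callback bodies are made up for illustration:

// passing null as the name creates a client-only, unsynced Minimongo collection
const Notifications = new Mongo.Collection(null)

// react to documents entering or leaving the result set
Notifications.find({ read: false }).observe({
  added(doc) {
    console.log('new notification', doc)
  },
  removed(doc) {
    console.log('notification dismissed', doc)
  }
})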
Latency Compensation Latency Compensation is part of the Meteor Principles. There, it's described in this way: on the client, Meteor prefetches data and simulates models to make it look like server method calls return instantly.
What does it mean? In a typical web application, when you perform some kind of action, the action is passed to the server to be processed, you wait until the server responds, and then the changes are applied to the page you're interacting with. More modern applications rely on AJAX to provide a better turnaround and avoid refreshing the page on every action, but many apps still rely on the server response before taking any action. Better apps introduce some sort of latency compensation, but it's a manual process. Meteor bakes the concept of Latency Compensation deep into its philosophy, and it's enabled by default, without you needing to do anything special to work with it. For example, when you add an item to a collection, while the item is being sent to the server it's already added to the collection view on your page. It feels better, because the app feels immediately responsive (it is). If there is an error you'll be notified later, and you have the opportunity to handle things in the best way for each situation.
Meteor Collections An application typically needs to display data of some sort.
Be it messages, comments, posts, books, addresses, pictures.. everything is a collection of something. Meteor, being deeply integrated with MongoDB, takes the Mongo database collection concept and brings it to the application level. In both the client and server contexts, you'll typically interact with data by interacting with data collections. How does it work?
Create your first collection Messages = new Mongo.Collection('messages')
This defines a global variable Messages , which will be visible across the entire app, on both client and server. This code needs to run on both the client and the server, so you'll put it, for example, under collections/messages.js .
While the code running in the two environments is the same, what it does is different: on the server it creates a Mongo collection if one does not already exist, and loads the cursor into the Messages variable; on the client it instantiates a local Minimongo collection. Once instantiated, the app links it to the server-side collection and automatically keeps them in sync.
Adding items to a collection You'll be able to insert items into a collection using the .insert() method on the collection cursor: Messages.insert({message: "Hi"})
Showing the collection in the template In a template you can use the {{#each}} Spacebars helper to navigate the collection and print all values stored in it: {{#each messages}} {{message}} {{/each}}
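For {{#each messages}} to receive data, a template helper has to return the cursor. A minimal sketch, assuming the template is named messageList (the template name is an assumption):

Template.messageList.helpers({
  messages() {
    // returning a cursor lets Blaze re-render automatically when the data changes
    return Messages.find()
  }
})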
Moment.js Moment.js is a great help in managing dates in JavaScript
Moment.js is an awesome JavaScript library that helps you manage dates, in the browser and in Node.js as well. This article aims to explain the basics and the most common usages of this library.
Installation You can include it directly in your page using a script tag, from unpkg.com:
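For example (the exact unpkg URL is an assumption; any script tag that loads the moment bundle works and defines a global moment object):

<script src="https://unpkg.com/moment"></script>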
or using npm: npm install moment
If you install using npm you need to import the package (using ES Modules): import moment from 'moment'
or require it (using CommonJS): const moment = require('moment')
Get the current date and time const date = moment()
Parse a date A moment object can be initialized with a date by passing it a string: const date = moment(string)
it accepts any string, parsed according to (in order): ISO 8601, the RFC 2822 Date Time format, and the formats accepted by the Date object. ISO 8601 is definitely the most convenient. Here's a format reference:

Format | Meaning | Example
YYYY | 4-digit Year | 2018
YY | 2-digit Year | 18
M | Month number, omits leading 0 | 7
MM | 2-digit Month number | 07
MMM | 3-letter Month name | Jul
MMMM | Full Month name | July
dddd | Full day name | Sunday
gggg | 4-digit Week year | 2018
gg | 2-digit Week year | 18
w | Week of the year without leading zero | 18
ww | Week of the year with leading zero | 18
e | Day of the week, starts at 0 | 4
D | Day number, omits leading 0 | 9
DD | 2-digit day number | 09
Do | Day number with ordinal | 9th
T | Indicates the start of the time part |
HH | 2-digit hours (24 hour time), from 0 to 23 | 22
H | Hours (24 hour time), from 0 to 23, without leading 0 | 22
kk | 2-digit hours (24 hour time), from 1 to 24 | 23
k | Hours (24 hour time), from 1 to 24, without leading 0 | 23
a/A | am or pm | pm
hh | 2-digit hours (12 hour time) | 11
mm | 2-digit minutes | 22
ss | 2-digit seconds | 40
s | Seconds without leading zero | 40
S | 1-digit milliseconds | 1
SS | 2-digit milliseconds | 12
SSS | 3-digit milliseconds | 123
Z | The timezone | +02:00
x | UNIX timestamp in milliseconds | 1410432140575
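These same tokens can also be passed as an explicit parsing format, which is safer than relying on automatic detection. A quick sketch (the sample dates are arbitrary):

moment('09/07/2018', 'DD/MM/YYYY')
moment('2018-07-09 22:30', 'YYYY-MM-DD HH:mm')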
Set a date Format a date When you want to output the content of a plain JavaScript Date object, you have few options to determine the formatting. All you can do is use the built-in methods and compose the date as you want using them. Moment offers a handy way to format the date according to your needs, using the format() method: date.format(string)
The string format accepts the same formats I described in the "Parse a date" section above. Example: moment().format("YYYY Do MM")
Moment provides some constants you can use instead of writing your own format:

Constant | Format | Example
moment.HTML5_FMT.DATETIME_LOCAL | YYYY-MM-DDTHH:mm | 2017-12-14T16:34
moment.HTML5_FMT.DATETIME_LOCAL_SECONDS | YYYY-MM-DDTHH:mm:ss | 2017-12-14T16:34:10
moment.HTML5_FMT.DATETIME_LOCAL_MS | YYYY-MM-DDTHH:mm:ss.SSS | 2017-12-14T16:34:10.234
moment.HTML5_FMT.DATE | YYYY-MM-DD | 2017-12-14
moment.HTML5_FMT.TIME | HH:mm | 16:34
moment.HTML5_FMT.TIME_SECONDS | HH:mm:ss | 16:34:10
moment.HTML5_FMT.TIME_MS | HH:mm:ss.SSS | 16:34:10.234
moment.HTML5_FMT.WEEK | YYYY-[W]WW | 2017-W50
moment.HTML5_FMT.MONTH | YYYY-MM | 2017-12
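For example, to format today's date using one of those constants (a minimal sketch):

moment().format(moment.HTML5_FMT.DATE) // e.g. '2018-07-09'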
Validating a date Any date can be checked for validity using the isValid() method: moment('2018-13-23').isValid() //false moment('2018-11-23').isValid() //true
Time ago, time until date Use fromNow() . Strings are localized: moment('2016-11-23').fromNow() //2 years ago moment('2018-05-23').fromNow() //a month ago moment('2018-11-23').fromNow() //in 5 months
if you pass true to fromNow(), it just shows the difference, without reference to future/past.
moment('2016-11-23').fromNow(true) //2 years moment('2018-05-23').fromNow(true) //a month moment('2018-11-23').fromNow(true) //5 months
Manipulate a date You can add or subtract any amount of time to a date: moment('2016-11-23').add(1, 'years') moment('2016-11-23').subtract(1, 'years')
You can use those values: years, quarters, months, weeks, days, hours, minutes, seconds, milliseconds.
GraphQL
GraphQL GraphQL is a query language for your API, and a set of server-side runtimes (implemented in various backend languages) for executing queries
What is GraphQL GraphQL Principles GraphQL vs REST Rest is a concept A single endpoint Tailored to your needs GraphQL makes it easy to monitor for fields usage Access nested data resources Types Which one is better? GraphQL Queries Fields and arguments Aliases Fragments
GraphQL Variables Making variables required Specifying a default value for a variable GraphQL Directives @include(if: Boolean) @skip(if: Boolean)
What is GraphQL GraphQL is the new frontier in APIs (Application Programming Interfaces). It's a query language for your API, and a set of server-side runtimes (implemented in various backend languages) for executing queries. It's not tied to a specific technology: you can implement it in any language. It is a methodology that directly competes with REST (Representational State Transfer) APIs, much like REST competed with SOAP at first. GraphQL was developed at Facebook, like many of the technologies that are shaking the world lately, like React and React Native, and it was publicly launched in 2015, although Facebook had used it internally for a few years before. Many big companies are adopting GraphQL besides Facebook, including GitHub, Pinterest, Twitter, Sky, The New York Times, Shopify, Yelp and thousands of others.
GraphQL Principles GraphQL exposes a single endpoint. You send a query to that endpoint by using a special Query Language syntax. That query is just a string. The server responds to a query by providing a JSON object. Let's see a first example of such a query. This query gets the name of a person with id=1 : GET /graphql?query={ person(id: "1") { name } }
or simply { person(id: "1") { name
} }
We'll get this JSON response back: { "name": "Tony" }
Let's add a bit more complexity: we get the name of the person, and the city where the person lives, by extracting it from the address object. We don't care about other details of the address, and the server does not return them back to us. GET /graphql?query={ person(id: "1") { name, address { city } } }
or { person(id: "1") { name address { city } } }
{ "name": "Tony", "address": { "city": "York" } }
As you can see, the data we get back is basically shaped like the request we sent, filled in with values.
GraphQL vs REST Since REST is such a popular, or I can say universal, approach to building APIs, it's fair to assume you are familiar with it, so let's see the differences between GraphQL and REST.
Rest is a concept
REST is a de-facto architecture standard, but it actually has no specification and tons of unofficial definitions. GraphQL has a specification draft, and it's a Query Language instead of an architecture, with a well-defined set of tools built around it (and a flourishing ecosystem). While REST is built on top of an existing architecture, which in the most common scenarios is HTTP, GraphQL builds its own set of conventions. This can be an advantage or not, since REST benefits for free from caching at the HTTP layer.
A single endpoint GraphQL has only one endpoint, where you send all your queries. With a REST approach, you create multiple endpoints and use HTTP verbs to distinguish read actions (GET) and write actions (POST, PUT, DELETE). GraphQL does not use HTTP verbs to determine the request type.
Tailored to your needs With REST, you generally cannot choose what the server returns back to you, unless the server implements partial responses using sparse fieldsets, and clients use that feature. The API maintainer cannot enforce such filtering. The API will usually return much more information than you need, unless you control the API server as well and tailor your responses for each different request. With GraphQL you explicitly request just the information you need: you don't "opt out" from a full default response, it's mandatory to pick the fields you want. This helps save resources on the server, since you most probably need less processing, and it also saves network bandwidth, since the payload to transfer is smaller.
GraphQL makes it easy to monitor for fields usage With REST, unless you force sparse fieldsets, there is no way to determine if a field is used by clients, so when it comes to refactoring or deprecating, it's impossible to determine actual usage. GraphQL makes it possible to track which fields are used by clients.
Access nested data resources GraphQL lets you make far fewer network calls.
Let's look at an example: you need to access the names of the friends of a person. If your REST API exposes a /person endpoint, which returns a person object with a list of friends, you generally first get the person information by doing GET /person/1 , which contains a list of the IDs of their friends. Unless the list of friends already contains the friend names, with 100 friends you'd need to do 101 HTTP requests to the /person endpoint, which is a huge time cost and also a resource-intensive operation. With GraphQL, you need only one request, which asks for the names of the friends of a person.
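As a sketch, and assuming the schema exposes a friends field on person (that field name is an assumption), the single request could look like this:

{
  person(id: "1") {
    name
    friends {
      name
    }
  }
}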
Types A REST API is typically based on JSON, which does not provide any type control. GraphQL has a Type System.
Which one is better? Organizations around the world are questioning their API technology choices and they are trying to find out if migrating from REST to GraphQL is best for their needs. GraphQL is a perfect fit when you need to expose complex data representations, and when clients might need only a subset of the data, or they regularly perform nested queries to get the data they need. As with programming languages, there is no single winner, it all depends on your needs.
GraphQL Queries In this article you'll learn how a GraphQL query is composed. The concepts I'll introduce are: fields and arguments, aliases, and fragments.
Fields and arguments Take this simple GraphQL query:
{
  person(id: "1") {
    name
  }
}
In this query you see 2 fields, and 1 argument. The field person returns an Object which has another field in it, a String. The argument allows us to specify which person we want to reference. We pass an id , but we could as well pass a name argument, if the API we talk to has the option to find a person by name. Arguments are not limited to any particular field, we could have a friends field in person that lists the friends of that person, and it could have a limit argument, to specify how many we want the API to return: { person(id: "1") { name friends(limit: 100) } }
Aliases You can ask the API to return a field with a different name, for example: { owner: person(id: "1") { fullname: name } }
This feature, besides creating more ad-hoc naming for your client code, is the only thing that can make the query work if you need to reference the same field 2 times in the same query:
{ owner: person(id: "1") { fullname: name } first_employee: person(id: "2") { fullname: name } }
Fragments In the above query we replicated the person structure. Fragments allow us to specify the structure once (very useful with many fields): { owner: person(id: "1") { ...personFields } first_employee: person(id: "2") { ...personFields } } fragment personFields on person { fullname: name }
GraphQL Variables More complex GraphQL queries need to use variables, a way to dynamically specify a value that is used inside a query. In this case we added the person id as a string inside the query: { owner: person(id: "1") { fullname: name } }
The id will most probably change dynamically in our program, so we need a way to pass it, and not with string interpolation. With variables, the same query can be written as:
query GetOwner($id: String) {
  owner: person(id: $id) {
    fullname: name
  }
}
In this snippet we have assigned the name GetOwner to our query. Think of it as a named function, where previously you had an anonymous function. Named queries are useful when you have lots of queries in your application. The query definition with the variables looks like a function definition, and it works in an equivalent way.
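The actual value for $id travels alongside the query, typically as a JSON object of variables, for example:

{
  "id": "1"
}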
Making variables required Appending a ! to the type: query GetOwner($id: String!)
instead of $id: String will make the $id variable required.
Specifying a default value for a variable You can specify a default value using this syntax: query GetOwner($id: String = "1")
GraphQL Directives Directives let you include or exclude a field depending on whether a variable is true or false.
query GetPerson($id: String) {
  person(id: $id) {
    fullname: name
    address @include(if: $getAddress) {
      city
      street
      country
    }
  }
}
{ "id": "1", "getAddress": false }
In this case, if the getAddress variable we pass is true, we also get the address field; otherwise not. We have 2 directives available: include , which we have just seen (includes the field if true), and skip , which is the opposite (skips the field if true).
@include(if: Boolean) query GetPerson($id: String) { person(id: $id) { fullname: name address @include(if: $getAddress) { city street country } } } { "id": "1", "getAddress": false }
@skip(if: Boolean) query GetPerson($id: String) { person(id: $id) { fullname: name address @skip(if: $excludeAddress) { city street country } } } { "id": "1", "excludeAddress": false }
Apollo Apollo is a suite of tools to create a GraphQL server, and to consume a GraphQL API. Let's explore Apollo in detail, both Apollo Client and Apollo Server.
Introduction to Apollo Apollo Client Start a React app Get started with Apollo Boost Create an ApolloClient object Apollo Links Caching Use ApolloProvider The gql template tag Perform a GraphQL request Obtain an access token for the API Use an Apollo Link to authenticate Render a GraphQL query result set in a component Apollo Server Launchpad The Apollo Server Hello World Run the GraphQL Server locally Your first Apollo Server code Add a GraphiQL endpoint
Introduction to Apollo In the last few years GraphQL got hugely popular as an alternative approach to building an API over REST.
GraphQL is a great way to let the client decide which data they want to be transmitted over the network, rather than having the server send a fixed set of data. Also, it allows you to specify nested resources, greatly reducing the back and forth sometimes required when dealing with REST APIs. Apollo is a team and community that builds on top of GraphQL, and provides different tools that help you build your projects. The tools provided by Apollo are mainly 3: Client, Server, Engine. Apollo Client helps you consume a GraphQL API, with support for the most popular frontend web technologies including React, Vue, Angular, Ember, Meteor and more, as well as native development on iOS and Android. Apollo Server is the server part of GraphQL, which interfaces with your backend and sends responses back to the client requests. Apollo Engine is a hosted infrastructure (SaaS) that serves as a middle man between the client and your server, providing caching, performance reporting, load measurement, error tracking, schema field usage statistics, historical stats and many more goodies. It's currently free up to 1 million requests per month, and it's the only part of Apollo that's not open source and free, and it provides funding for the open source part of the project. It's worth noting that those 3 tools are not linked together in any way, and you can use just Apollo Client to interface with a 3rd party API, or serve an API using Apollo Server without having a client at all, for example. It's all compatible with the GraphQL standard specification, so there is no proprietary or incompatible tech in Apollo. But it's very convenient to have all those tools together under a single roof, a complete suite for all your GraphQL-related needs. Apollo strives to be easy to use and easy to contribute to. Apollo Client and Apollo Server are community projects, built by the community, for the community. Apollo is backed by the Meteor Development Group, the company behind Meteor, a very popular JavaScript framework. Apollo is focused on keeping things simple. This is key to the success of a technology that wants to become popular, as some tech or framework or library might be overkill for 99% of the small or medium companies out there, and just suited for the big companies with very complex needs.
Apollo Client Apollo Client is the leading JavaScript client for GraphQL. Community-driven, it's designed to let you build UI components that interface with GraphQL data, either in displaying data, or in performing mutations when certain actions happen. You don't need to change everything in your application to make use of Apollo Client. You can start with just one tiny layer, one request, and expand from there. Most of all, Apollo Client is built to be simple, small and flexible from the ground up. In this post I'm going to detail the process of using Apollo Client within a React application. I'll use the GitHub GraphQL API as a server.
Start a React app I use create-react-app to setup the React app, which is very convenient and just adds the bare bones of what we need: npx create-react-app myapp
npx is a command available in the latest npm versions. Update npm if you do not have this command. Then start the app's local server with yarn: yarn start
Open src/index.js : import React from 'react' import ReactDOM from 'react-dom' import './index.css' import App from './App' import registerServiceWorker from './registerServiceWorker' ReactDOM.render(<App />, document.getElementById('root')) registerServiceWorker()
and remove all this content.
Get started with Apollo Boost
Apollo Boost is the easiest way to start using Apollo Client on a new project. We'll install that in addition to react-apollo and graphql . In the console, run yarn add apollo-boost react-apollo graphql
or with npm: npm install apollo-boost react-apollo graphql --save
Create an ApolloClient object You start by importing ApolloClient from apollo-client in index.js : import { ApolloClient } from 'apollo-client' const client = new ApolloClient()
By default Apollo Client uses the /graphql endpoint on the current host, so let's use an Apollo Link to specify the details of the connection to the GraphQL server by setting the GraphQL endpoint URI.
Apollo Links An Apollo Link is represented by an HttpLink object, which we import from apollo-link-http . Apollo Link provides us a way to describe how we want to get the result of a GraphQL operation, and what we want to do with the response. In short, you create multiple Apollo Link instances that all act on a GraphQL request one after another, providing the final result you want. Some Links can give you the option of retrying a request if not successful, batching and much more. We'll add an Apollo Link to our Apollo Client instance to use the GitHub GraphQL endpoint URI https://api.github.com/graphql import { ApolloClient } from 'apollo-client' import { HttpLink } from 'apollo-link-http' const client = new ApolloClient({ link: new HttpLink({ uri: 'https://api.github.com/graphql' }) })
Caching We're not done yet. Before having a working example we must also tell ApolloClient which caching strategy to use: InMemoryCache is the default and it's a good one to start. import { ApolloClient } from 'apollo-client' import { HttpLink } from 'apollo-link-http' import { InMemoryCache } from 'apollo-cache-inmemory' const client = new ApolloClient({ link: new HttpLink({ uri: 'https://api.github.com/graphql' }), cache: new InMemoryCache() })
Use ApolloProvider Now we need to connect the Apollo Client to our component tree. We do so using ApolloProvider , by wrapping our application component in the main React file:
import React from 'react'
import ReactDOM from 'react-dom'
import { ApolloClient } from 'apollo-client'
import { HttpLink } from 'apollo-link-http'
import { InMemoryCache } from 'apollo-cache-inmemory'
import { ApolloProvider } from 'react-apollo'
import App from './App'

const client = new ApolloClient({
  link: new HttpLink({ uri: 'https://api.github.com/graphql' }),
  cache: new InMemoryCache()
})

ReactDOM.render(
  <ApolloProvider client={client}>
    <App />
  </ApolloProvider>,
  document.getElementById('root')
)
This is enough to render the default create-react-app screen, with Apollo Client initialized:
The gql template tag We're now ready to do something with Apollo Client, and we're going to fetch some data from the GitHub API and render it. To do so we need to import the gql template tag: import gql from 'graphql-tag'
any GraphQL query will be built using this template tag, like this: const query = gql` query { ... } `
Perform a GraphQL request gql was the last item we needed in our toolset.
We're now ready to do something with Apollo Client, and we're going to fetch some data from the GitHub API and render it.
Obtain an access token for the API The first thing to do is to obtain a personal access token from GitHub. GitHub makes it easy by providing an interface from which you select any permission you might need:
For the sake of this example tutorial you don't need any of those permissions; they are meant for access to private user data, but we will just query the public repositories data. The token you get is an OAuth 2.0 Bearer token. You can easily test it by running from the command line: $ curl -H "Authorization: bearer ***_YOUR_TOKEN_HERE_***" -X POST -d " \ { \ \"query\": \"query { viewer { login }}\" \ } \ " https://api.github.com/graphql
which should give you the result {"data":{"viewer":{"login":"***_YOUR_LOGIN_NAME_***"}}}
or { "message": "Bad credentials", "documentation_url": "https://developer.github.com/v4"
1028
Apollo
}
if something went wrong.
Use an Apollo Link to authenticate So, we need to send the Authorization header along with our GraphQL request, just like we did in the curl request above. How we do this with Apollo Client is by creating an Apollo Link middleware. Start with installing apollo-link-context :
npm install apollo-link-context
This package allows us to add an authentication mechanism by setting the context of our requests. We can use it in this code by referencing the setContext function in this way: const authLink = setContext((_, { headers }) => { const token = '***YOUR_TOKEN**' return { headers: { ...headers, authorization: `Bearer ${token}` } } })
and once we have this new Apollo Link, we can compose it with the HttpLink we already had, by using the concat() method on a link: const link = authLink.concat(httpLink)
Here is the full code for the src/index.js file with the code we have right now: import React from 'react' import ReactDOM from 'react-dom' import { ApolloClient } from 'apollo-client' import { HttpLink } from 'apollo-link-http' import { InMemoryCache } from 'apollo-cache-inmemory' import { ApolloProvider } from 'react-apollo' import { setContext } from 'apollo-link-context' import gql from 'graphql-tag' import App from './App'
Keep in mind this code is an example for educational purposes: it exposes your GitHub access token to the world in your frontend-facing code. Production code needs to keep this token private. We can now make the first GraphQL request at the bottom of this file. This sample query asks for the names and the owners of the 10 most popular repositories with more than 50,000 stars:
const POPULAR_REPOSITORIES_LIST = gql`
{
  search(query: "stars:>50000", type: REPOSITORY, first: 10) {
    repositoryCount
    edges {
      node {
        ... on Repository {
          name
          owner {
            login
          }
          stargazers {
            totalCount
          }
        }
      }
    }
  }
}
`
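The query then has to be executed. A minimal way to do that, assuming the client instance created earlier in this file, is:

client.query({ query: POPULAR_REPOSITORIES_LIST }).then(result => console.log(result))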
Running this code successfully returns the result of our query in the browser console:
Render a GraphQL query result set in a component What we saw up to now is already cool. What's even cooler is using the GraphQL result set to render your components. We leave to Apollo Client the burden (or joy) of fetching the data and handling all the low-level stuff, and we focus on showing the data, using the graphql component enhancer offered by react-apollo :
import React from 'react' import { graphql } from 'react-apollo' import { gql } from 'apollo-boost' const POPULAR_REPOSITORIES_LIST = gql` { search(query: "stars:>50000", type: REPOSITORY, first: 10) {
Here is the result of our query rendered in the component
Apollo Server A GraphQL server has the job of accepting incoming requests on an endpoint, interpreting the request and looking up any data that's necessary to fulfill the client's needs. There are tons of different GraphQL server implementations for every possible language. Apollo Server is a GraphQL server implementation for JavaScript, in particular for the Node.js platform. It supports many popular Node.js frameworks, including: Express, Hapi, Koa, Restify. Apollo Server gives us basically 3 things: a way to describe our data with a schema; the framework for resolvers, which are functions we write to fetch the data needed to fulfill a request; and help with handling authentication for our API. For the sake of learning the basics of Apollo Server, we're not going to use any of the supported Node.js frameworks. Instead, we'll be using something that was built by the Apollo team, something really great which will be the base of our learning: Launchpad.
Launchpad Launchpad is a project that's part of the Apollo umbrella of products, and it's a pretty amazing tool that allows us to write code in the cloud and create an Apollo Server online, just like we'd run a snippet of code on Glitch, Codepen, JSFiddle or JSBin. Except that instead of building a visual tool that stays isolated there, meant just as a showcase or as a learning tool, with Launchpad we create a GraphQL API that's going to be publicly accessible. Every project on Launchpad is called a pad and has its own GraphQL endpoint URL, like: https://1jzxrj129.lp.gql.zone/graphql
Once you build a pad, Launchpad gives you the option to download the full code of the Node.js app that's running it, and you just need to run npm install and npm start to have a local copy of your Apollo GraphQL Server. To summarize, it's a great tool to learn, share, and prototype.
The Apollo Server Hello World Every time you create a new Launchpad pad, you are presented with the Hello, World! of Apollo Server. Let's dive into it. First you import the makeExecutableSchema function from graphql-tools . import { makeExecutableSchema } from 'graphql-tools'
This function is used to create a GraphQLSchema object, by providing it a schema definition (written in the GraphQL schema language) and a set of resolvers. A schema definition is a template literal string containing the description of our query and the types associated with each field: const typeDefs = ` type Query { hello: String } `
A resolver is an object that maps fields in the schema to resolver functions, able to lookup data to respond to a query. Here is a simple resolver containing the resolver function for the hello field, which simply returns the Hello world! string: const resolvers = { Query: { hello: (root, args, context) => { return 'Hello world!' } } }
Given those 2 elements, the schema definition and the resolvers, we use the makeExecutableSchema function we imported previously to get a GraphQLSchema object, which we export as our schema.
This is all you need to serve a simple read-only API. Launchpad takes care of the tiny details. Here is the full code for the simple Hello World example: import { makeExecutableSchema } from 'graphql-tools' const typeDefs = ` type Query { hello: String } ` const resolvers = { Query: { hello: (root, args, context) => { return 'Hello world!' } } } export const schema = makeExecutableSchema({ typeDefs, resolvers })
Launchpad provides a great built-in tool to consume the API:
and, as said previously, the API is publicly accessible; you just need to log in and save your pad. I made a pad that exposes its endpoint at https://kqwwkp0pr7.lp.gql.zone/graphql , so let's try it using curl from the command line:
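The curl invocation is the same one used later for the local server, here pointed at the pad URL:

$ curl \
  -X POST \
  -H "Content-Type: application/json" \
  --data '{ "query": "{ hello }" }' \
  https://kqwwkp0pr7.lp.gql.zone/graphql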
which successfully gives us the result we expect: { "data": { "hello": "Hello world!" } }
Run the GraphQL Server locally We mentioned that anything you create on Launchpad is downloadable, so let's go on. The package is composed of 2 files. The first, schema.js , is what we have above. The second, server.js , was invisible in Launchpad, and it is what provides the underlying Apollo Server functionality, powered by Express, the popular Node.js framework. It is not the simplest example of an Apollo Server setup, so for the sake of explaining I'm going to replace it with a simpler example (but feel free to study it after you've understood the basics).
Your first Apollo Server code First, run npm install and npm start on the Launchpad code you downloaded. The Node server we initialized previously uses nodemon to restart the server when the files change, so when you change the code, the server is restarted with your changes applied. Add this code in server.js : const express = require('express') const bodyParser = require('body-parser') const { graphqlExpress } = require('apollo-server-express') const { schema } = require('./schema') const server = express() server.use('/graphql', bodyParser.json(), graphqlExpress({ schema })) server.listen(3000, () => { console.log('GraphQL listening at http://localhost:3000/graphql') })
With just 11 lines, this is much simpler than the server set up by Launchpad, because we removed all the things that made that code more flexible for their needs. Coding is 50% deciding how much flexibility you need now, versus how important it is to have clean, easily understandable code that you can pick up 6 months from now and easily tweak, or pass to other developers and team members and be productive with in as little time as possible. Here's what the code does. We first import a few libraries we're going to use: express , which powers the underlying network functionality to expose the endpoint; bodyParser , the Node body parsing middleware; and graphqlExpress , the Apollo Server object for Express.
Next we import the GraphQLSchema object we created in the schema.js file above, as schema : const { schema } = require('./schema')
Here is some standard Express setup; we just initialize a server: const server = express()
Now we are ready to initialize Apollo Server: graphqlExpress({ schema })
and we pass that as a callback to our endpoint to HTTP JSON requests: server.use('/graphql', bodyParser.json(), graphqlExpress({ schema }))
All we need now is to start Express: server.listen(3000, () => { console.log('GraphQL listening at http://localhost:3000/graphql') })
Add a GraphiQL endpoint If you use GraphiQL, you can easily add a /graphiql endpoint, to consume with the GraphiQL interactive in-browser IDE (the graphiqlExpress helper is imported from apollo-server-express , alongside graphqlExpress ): server.use('/graphiql', graphiqlExpress({ endpointURL: '/graphql', query: `` }))
We now just need to start up the Express server: server.listen(3000, () => { console.log('GraphQL listening at http://localhost:3000/graphql') console.log('GraphiQL listening at http://localhost:3000/graphiql') })
You can test it by using curl again: $ curl \ -X POST \ -H "Content-Type: application/json" \ --data '{ "query": "{ hello }" }' \ http://localhost:3000/graphql
This will give you the same result as above, where you called the Launchpad servers: { "data": { "hello": "Hello world!" } }
Git and GitHub
Git Git is a free and Open Source version control system (VCS), a technology used to track older versions of files, providing the ability to roll back and maintain separate different versions at the same time
What is Git Git is a free and Open Source version control system (VCS), a technology used to track older versions of files, providing the ability to roll back and maintain separate different versions at the same time. Git is a successor of SVN and CVS, two very popular version control systems of the past. First developed by Linus Torvalds (the creator of Linux), today it is the go-to system, which you can't avoid if you make use of Open Source software.
Distributed VCS Git is a distributed system. Many developers can clone a repository from a central location, work independently on some portion of code, and then commit the changes back to the central location where everybody updates. Git makes it very easy for developers to collaborate on a codebase simultaneously and provides tools they can use to combine all the independent changes they make. A very popular service that hosts Git repositories is GitHub, especially for Open Source software, but we can also mention BitBucket, GitLab and many others which are widely used by teams all over the world to host their code publicly and also privately.
Installing Git Installing Git is quite easy on all platforms:
OSX Using Homebrew, run: brew install git
Windows Download and install Git for Windows.
Linux Use the package manager of your distribution to install Git. E.g. sudo apt-get install git
or sudo yum install git
Initializing a repository Once Git is installed on your system, you are able to access it using the command line by typing git .
Suppose you have a clean folder. You can initialize a Git repository by typing git init
What does this command do? It creates a .git folder in the folder where you ran it. If you don't see it, it's because it's a hidden folder, so it might not be shown everywhere, unless you set your tools to show hidden folders.
Anything related to Git in your newly created repository will be stored into this .git directory, all except the .gitignore file, which I'll talk about in the next article.
Adding files to a repository Let's see how a file can be added to Git. Type: echo "Test" > README.txt
to create a file. The file is now in the directory, but Git was not told to add it to its index, as you can see from what git status tells us:
Add the file to the staging area We need to add the file with git add README.txt
to make it visible to Git, and be put into the staging area:
Once a file is in the staging area, you can remove it by typing: git reset README.txt
But usually what you do once you add a file is commit it.
Commit changes Once you have one or more changes in the staging area, you can commit them using git commit -am "Description of the change"
This cleans the status of the staging area:
and permanently stores the edit you made into a record store, which you can inspect by typing git log :
Branches When you commit a file to Git, you are committing it into the current branch. Git allows you to work simultaneously on multiple, separate branches, different lines of development which represent forks of the main branch. Git is very flexible: you can have an indefinite number of branches active at the same time, and they can be developed independently until you want to merge one of them into another. Git by default creates a branch called master . It's not special in any way other than it's the one created initially. You can create a new branch called develop by typing git branch develop
As you can see, git branch lists the branches that the repository has. The asterisk indicates the current branch. When creating the new branch, that branch points to the latest commit made on the current branch. If you switch to it (using git checkout develop ) and run git log , you'll see the same log as the branch you were on previously.
Push and pull In Git you always commit locally. This is a very nice benefit over SVN or CVS, where all commits had to be immediately pushed to a server. You work offline, make as many commits as you want, and once you're ready you push them to the server, so your team members, or the community if you are pushing to GitHub, can access your latest and greatest code. Push sends your changes. Pull downloads remote changes to your working copy. Before you can play with push and pull, however, you need to add a remote!
Add a remote A remote is a clone of your repository, positioned on another machine.
I'll do an example with GitHub. If you have an existing repository, you can publish it on GitHub. The procedure involves creating a repository on the platform, through their web interface, then you add that repository as a remote, and you push your code there. To add the remote type git remote add origin https://github.com/YOU/REPONAME.git
An alternative approach is creating a blank repo on GitHub and cloning it locally, in which case the remote is automatically added for you
Push Once you're done, you can push your code to the remote, using the syntax git push , for example:
git push origin master
You specify origin as the remote, because you can technically have more than one remote. That is the name of the one we added previously, and it's a convention.
Pull The same syntax applies to pulling: git pull origin master
tells Git to fetch the master branch from origin , and merge it into the current local branch.
Conflicts In both push and pull there is a problem to consider: if the remote contains changes incompatible with your set of commits, the operation will fail. This happens when the remote contains changes subsequent to your latest pull which affect lines of code you worked on as well. In the case of push, this is usually solved by pulling the changes, analyzing the conflicts, and then making a new commit that solves them.
In the case of pull, your working copy will automatically be edited with the conflicting changes, and you need to solve them and make a new commit, so the codebase now includes the problematic changes that were made on the remote.
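A hedged sketch of the rejected-push workflow from the command line, using the branch and remote names from the examples above:

git push origin master        # rejected: the remote has commits you don't have yet
git pull origin master        # merge the remote changes; conflicts are marked inside the files
# edit the conflicting files to resolve the markers, then:
git add .
git commit -m "Merge remote changes and resolve conflicts"
git push origin master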
Command Line vs Graphical Interface Up to now I talked about the command-line Git application. This was key to introducing you to how Git actually works, but in day-to-day operations you are most likely to use an app that exposes those commands to you via a nice UI, although many developers I know like to use the CLI. The CLI (command line) commands will still prove useful if you need to set up Git over SSH on a remote server, for instance. It's not useless knowledge at all! That said, there are many very nice apps made to simplify the life of a developer, which turn out to be very useful especially when you dive deeper into the complexity of a Git repository. The easy steps are easy everywhere, but things can quickly grow to a point where you might find it hard to use the CLI. Some of the most popular apps are
GitHub Desktop https://desktop.github.com Free, at the time of writing only available for Mac and Win
Tower https://www.git-tower.com Paid, at the time of writing only available for Mac and Win
GitKraken https://www.gitkraken.com Free / Paid depending on the needs, for Mac, Win and Linux
A good Git workflow Different developers and teams like to use different strategies to manage Git effectively. Here is a strategy I used on many teams and on widely used open source projects, and I saw used by many big and small projects as well. The strategy is inspired by the famous A successful Git branching model post. I have only 2 permanent branches: master and develop. Those are the rules I follow in my daily routine: When I take on a new issue, or decide to incorporate a feature, there are 2 main roads:
The feature is a quick one The commits I’ll make won’t break the code (or at least I hope so): I can commit on develop, or do a quick feature branch, and then merge it to develop.
The feature will take more than one commit to finish Maybe it will take days of commits before the feature is finished and it gets stable again: I do a feature branch, then merge to develop once ready (it might take weeks).
Hotfix If something on our production server requires immediate action, like a bugfix I need to get solved ASAP, I do a short hotfix branch, fix the thing, test the branch locally and on a test machine, then merge it to master and develop.
Develop is unstable. Master is the latest stable release The develop branch will always be in a state of flux, that’s why it should be put on a ‘freeze’ when preparing a release. The code is tested and every workflow is checked to verify code quality, and it’s prepared for a merge into master. Every time develop or another hotfix branch is merged into master, I tag it with a version number, and if on GitHub I also create a release, so it’s easy to move back to a previous state if something goes wrong.
GitHub GitHub is a website where millions of developers gather every day to collaborate on open source software. It's also the place that hosts billions of lines of code, and also a place where users of software go to report issues they might have. Learn all the most important pieces of GitHub that you should know as a developer
Introduction to GitHub Why GitHub? GitHub issues Social coding Follow Stars Fork Popular = better Pull requests Project management Comparing commits Webhooks and Services Webhooks Services Final words
Introduction to GitHub GitHub is a website where millions of developers gather every day to collaborate on open source software. It's also the place that hosts billions of lines of code, and a place where users of software go to report issues they might have. In short, it's a platform for software developers, and it's built around Git. TIP: If you don't know about Git yet, check out the Git guide. As a developer you can't avoid using GitHub daily, either to host your code or to make use of other people's code. This post explains some key concepts of GitHub, how to use some of its features to improve your workflow, and how to integrate other applications into your process.
Why GitHub? Now that you know what GitHub is, you might ask why you should use it. GitHub, after all, is managed by a private company, which profits from hosting people's code. So why should you use it instead of similar platforms such as BitBucket or GitLab? Besides personal preferences and technical reasons, there is one big reason: everyone uses GitHub, so the network effect is huge. Major codebases migrated over time to Git from other version control systems because of its convenience, and GitHub was historically well positioned in (and put a lot of effort into winning) the Open Source community. So today, any time you look up some library, 99% of the time you will find it on GitHub. Apart from Open Source code, many developers also host private repositories on GitHub because of the convenience of a single platform.
GitHub issues GitHub issues are one of the most popular bug trackers in the world. They give the owners of a repository the ability to organize and tag issues, and to assign them to milestones.
If you open an issue on a project managed by someone else, it will stay open until either you close it (for example if you figure out the problem you had) or the repo owner closes it. Sometimes you'll get a definitive answer, other times the issue will be left open and tagged with some information that categorizes it, and the developer could get back to it to fix a problem or improve the codebase with your feedback. Most developers are not paid to support their code released on GitHub, so you can't expect prompt replies, but other times Open Source repositories are published by companies that either provide services around that code, or have commercial offerings for versions with more features, or a plugin-based architecture, in which case they might be working on the open source software as paid developers.
Social coding Some years ago the GitHub logo included the "social coding" tagline. What did this mean, and is that still relevant? It certainly is.
Follow With GitHub you can follow developers, by going on their profile and clicking "follow". You can also follow a repository, by clicking the "watch" button on a repo. In both cases the activity will show up in your dashboard. You don't follow like in Twitter, where you see what people say, but you see what people do.
Stars One big feature of GitHub is the ability to star a repository. This action will include it in your "starred repositories" list, which allows you to find things you found interesting before, and it's also one of the most important rating mechanisms, as the more stars a repo has, the more important it is, and the more it will show up in search results. Major projects can have 70,000 stars or more. GitHub also has a trending page which features the repositories that get the most stars in a given period of time, e.g. today or this week or month. Getting into those trending lists can cause other network effects like being featured on other sites, just because you have more visibility.
Fork
The last important network indicator of a project is the number of forks. This is key to how GitHub works, as a fork is the base of a Pull Request (PR), a change proposal. Starting from your repository, a person forks it, makes some changes, then creates a PR to ask you to merge those changes. Sometimes the person that forks never asks you to merge anything, just because they liked your code and decided to add something on top of it, or they fixed some bug they were experiencing. A fork clones the files of a GitHub project, but not any of the stars or issues of the original project.
Popular = better All in all, those are the key indicators of the popularity of a project, and generally, along with the date of the latest commit and the involvement of the author in the issue tracker, they are a useful indication of whether or not you should rely on a library or piece of software.
Pull requests Earlier I introduced what a Pull Request (PR) is: starting from your repository, a person forks it, makes some changes, then creates a PR to ask you to merge those changes. A project might have hundreds of PRs; generally the more popular a project, the more PRs it has, like the React project:
Once a person submits a PR, an easy process using the GitHub interface, it needs to be reviewed by the core maintainers of the project. Depending on the scope of your PR (the number of changes, the number of things affected by your change, or the complexity of the code touched) the maintainer might need more or less time to make sure your changes are compatible with the project. A project might have a clear timeline of changes they want to introduce. The maintainer might like to keep things simple while you are introducing a complex architecture in a PR. This is to say that a PR does not always get accepted fast, and there is no guarantee that it will get accepted at all. In the example I posted above, there is a PR in the repo that dates back 1.5 years. And this happens in all projects.
Project management Along with issues, which are the place where developers get feedback from users, the GitHub interface offers other features aimed at helping project management. One of those is Projects. It's very new in the ecosystem and very rarely used, but it's a kanban board that helps organize issues and work that needs to be done.
The Wiki is intended to be used as documentation for users. One of the most impressive uses of the Wiki I have seen so far is the Go Programming Language GitHub Wiki. Another popular project management aid is milestones. Part of the issues page, milestones let you assign issues to specific targets, which could be release targets. Speaking of releases, GitHub enhances the Git tag functionality by introducing releases. A Git tag is a pointer to a specific commit, and if done consistently, it helps you roll back to previous versions of your code without referencing specific commits. A GitHub release builds on top of Git tags and represents a complete release of your code, along with zip files, release notes and binary assets that might represent a fully working version of your code's end product. While a Git tag can be created programmatically (e.g. using the command line git program), creating a GitHub release is a manual process that happens through the GitHub UI. You basically tell GitHub to create a new release and tell it which tag you want to apply that release to.
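For reference, creating and publishing a tag from the command line could look like this (the tag name is arbitrary):

git tag -a v1.2.0 -m "Release 1.2.0"
git push origin v1.2.0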
Comparing commits GitHub offers many tools to work with your code. One of the most important things you might want to do is compare one branch to another one. Or, compare the latest commit with the version you are currently using, to see which changes were made over time. GitHub allows you to do this with the compare view, just add /compare to the repo name, for example: https://github.com/facebook/react/compare
For example here I choose to compare the latest React v15.x to the latest v16.0.0-rc version available at the time of writing, to check what's changed:
The view shows you the commits made between two releases (or tags or commits references) and the actual diff, if the number of changes is lower than a reasonable amount.
Webhooks and Services GitHub offers many features that help the developer workflow. One of them is webhooks, the other one is services.
Webhooks
Webhooks allow external services to be pinged when certain events happen in the repository, like when code is pushed, a fork is made, a tag was created or deleted. When an event happens, GitHub sends a POST request to the URL we told it to use. A common usage of this feature is to ping a remote server to fetch the latest code from GitHub when we push an update from our local computer. We push to GitHub, GitHub tells the server we pushed, the server pulls from GitHub.
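A minimal, hypothetical sketch of such a receiving endpoint using Express (no signature verification; the path, port and repository folder are made up):

const express = require('express')
const bodyParser = require('body-parser')
const { execFile } = require('child_process')

const server = express()

// GitHub will POST here on push events; we respond by pulling the latest code
server.post('/webhooks/github', bodyParser.json(), (req, res) => {
  execFile('git', ['pull'], { cwd: '/var/www/mysite' }, () => {
    res.sendStatus(200)
  })
})

server.listen(3000)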
Services GitHub services, and the new GitHub apps, are 3rd party integrations that improve the developer experience or provide a service to you. For example, you can set up a test runner to run the tests automatically every time you push some new commits, using TravisCI. You can set up Continuous Integration using CircleCI. You might create a Codeclimate integration that analyzes the code and provides a report of technical debt and test coverage.
Final words GitHub is an amazing tool and service to take advantage of, a real gem in today’s developer toolset. This tutorial will help you start, but the real experience of working on GitHub on open source (or closed source) projects is something not to be missed.
A Git cheat sheet This page contains a list of Git commands I find handy to know but I find hard to remember Squash a series of commits and rewrite the history by writing them as one Take a commit that lives in a separate branch and apply the same changes on the current branch Restore the status of a file to the last commit (revert changes) Show a pretty graph of the commit history Get a prettier log Get a shorter status Checkout a pull request locally List the commits that involve a specific file List the commits that involve a specific file, including the commits content List the repository contributors ordering by the number of commits Undo the last commit you pushed to the remote Pick every change you haven't already committed and create a new branch Stop tracking a file, but keep it in the file system Get the name of the branch where a specific commit was made
Squash a series of commits and rewrite the history by writing them as one git rebase -i <commit>
this puts you in the interactive rebasing tool. Type s to apply squash to a commit with the previous one. Repeat the s command for as many commits as you need.
Take a commit that lives in a separate branch and apply the same changes on the current branch. For a single commit: git cherry-pick <commit>
for multiple commits: git cherry-pick <commit1> <commit2> <commit3>
Restore the status of a file to the last commit (revert changes)
git checkout -- <filename>
Show a pretty graph of the commit history git log --pretty=format:"%h %s" --graph
Get a prettier log git log --pretty=format:"%h - %an, %ar : %s"
List the commits that involve a specific file git log --follow -- <filename>
List the commits that involve a specific file, including the commits content git log --follow -p -- <filename>
List the repository contributors ordering by the number of commits git shortlog -s -n
Undo the last commit you pushed to the remote git revert -n HEAD
Pick every change you haven't already committed and create a new branch git checkout -b <branch>
Stop tracking a file, but keep it in the file system git rm -r --cached <filename>
Get the name of the branch where a specific commit was made git branch --contains <commit>
Deployment, APIs and Services
Netlify Discover Netlify, a great hosting service ideal for static sites which has a nice free plan, free CDN and it's blazing fast Introducing Netlify Netlify and Hugo Advanced functionality offered by Netlify for Static Sites Previewing branches I recently switched my blog hosting to Netlify. I did so while my previous hosting was having some issues that made my site unreachable for a few hours, and while I waited for it to come back online, I created a replica of my site on Netlify. Since this blog runs on Hugo, which is a Static Site Generator, I needed very little time to move the blog files around. All I need is something that can serve HTML files, which is pretty much any hosting on the planet. I started looking for the best platform for a static site; a few stood out, but I eventually tried Netlify, and I'm glad I did.
Introducing Netlify There are a few things that made a great impression to me before trying it. First, the free plan is very generous for free or commercial projects, with 100GB of free monthly bandwidth, and for a static site with just a few images here and there, it's a lot of space!
They include a global CDN, to make sure speed is not a concern even in continents far away from the central location servers. You can point your DNS nameservers to Netlify and they will handle everything for you with a very nice interface to set up advanced needs. They of course support having a custom domain and HTTPS. Coming from Firebase, I expected a very programmer friendly way to manage deploys, but I found it even better with regards to handling each Static Site Generator.
Netlify and Hugo
I use Hugo, and locally I run a server by using its built-in tool hugo server , which handles rebuilding all the HTML every time I make a change, and it runs an HTTP server on port 1313 by default. To generate the static site, I have to run hugo , and this creates a series of files in the public/ folder.
I followed this method on Firebase: I ran hugo to create the files, then firebase deploy , configured to push my public/ folder content to the Google servers. In the case of Netlify however, I linked it to my private GitHub repository that hosts the site, and every time I push to the master branch, the one I told Netlify to sync with, Netlify initiates a new deploy, and the changes are live within seconds.
TIP: if you use Hugo on Netlify, make sure you set HUGO_VERSION in netlify.toml to the latest Hugo stable release, as the default version might be old and (at the time of writing) does not support recent features like page bundles. A netlify.toml configuration file is sketched below.
If you think this is nothing new, you're right, since this is not hard to implement on your own server (I do so on other sites not hosted on Netlify), but here's something new: you can preview any GitHub (or GitLab, or BitBucket) branch / PR on a separate URL, all while your main site is live and running with the "stable" content. Another cool feature is the ability to perform A/B testing on 2 different Git branches.
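Here's a minimal netlify.toml sketch along those lines (the Hugo version is just an example; use the release you run locally):
[build]
  command = "hugo"
  publish = "public"

[build.environment]
  HUGO_VERSION = "0.55.6"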
Advanced functionality offered by Netlify for Static Sites
Static sites have the obvious limitation of not being able to do any server-side operation, like the ones you'd expect from a traditional CMS, for example. This is an advantage (fewer security issues to care about) but also a limitation in the functionality you can implement.
A blog is nothing complex: maybe you want to add comments, and they can be done using services like Disqus or others. Or maybe you want to add a form, and you do so by embedding forms generated on third-party applications, like Wufoo or Google Forms. Netlify provides a suite of tools to handle forms, authenticate users and even deploy and manage Lambda functions.
Need to password protect a site before launching it? ✅
Need to handle CORS? ✅
Need to have 301 redirects? ✅
Need pre-rendering for your SPA? ✅
I just scratched the surface of the things you can do with Netlify without reaching out to third-party services, and I hope I gave you a reason to try it out.
Previewing branches
The GitHub integration works great with Pull Requests. Every time you push a Pull Request, Netlify deploys that branch on a specific URL which you can share with your team, or with anyone you want. Here I made a Pull Request to preview a blog post, without making it available on my public blog:
Netlify immediately picked it up, and automatically deployed it.
Clicking the link points you to the special URL that lets you preview the PR version of the site.
Firebase Hosting
Firebase is a Google Cloud service, a complex and multi-faceted product, mainly targeted at mobile applications. Firebase Hosting is one small part of it.
Intro to Firebase
Firebase Hosting Features
Why should you use Firebase Hosting?
Install the Firebase CLI tool
Create a project on Firebase
Configure the site
Publish the site
Custom Domain
Intro to Firebase
Firebase is a mobile and web application development platform developed by Firebase, Inc. in 2011 and acquired by Google in 2014. So now Firebase is a Google Cloud service, and not just that: it's a flagship product of their Cloud offering. Firebase is a complex and multi-faceted product, mainly targeted at mobile applications. One of its features, however, is an advanced web hosting service.
Firebase Hosting Features
Firebase Hosting provides hosting for static web sites, such as the ones you can generate using static site generators, or even sites built with server-side CMS platforms from which you generate a static copy of the website. You can host anything as long as it's not dynamic. A WordPress blog, for example, is almost always a good candidate to become a static site, if you use Disqus or Facebook comments. Firebase Hosting delivers files through the Fastly CDN, using HTTPS, and provides an automatic SSL certificate, with custom domain support. Its free tier is generous, with cheap plans if you outgrow it, and it's very developer-friendly: Firebase provides a CLI tool, an easy deployment process, and one-click rollbacks.
Why should you use Firebase Hosting?
Firebase can be a good choice to deploy static websites and Single Page Apps. I like to use Firebase Hosting mainly because, having tested many different providers, Firebase offers awesome speed across continents without the need for a separate CDN on top, since the CDN is built in for free. Also, while having your own VPS is a very good option as well, I don't want to manage my own server just for a simple website; I prefer to focus on the content rather than on the operations, much like I would deploy an app on Heroku. Firebase is even easier to set up than Heroku.
Install the Firebase CLI tool
Install the Firebase CLI with npm:
npm install -g firebase-tools
or yarn global add firebase-tools
and authenticate with your Google account (I assume you already have one) by running:
firebase login
Create a project on Firebase
Go to https://console.firebase.google.com/ and create a new project.
Now back in the terminal, from the root folder of the site you're working on, run:
firebase init
Choose "Hosting" by pressing space, then enter to go on.
Now you need to choose the project you want to deploy the site to.
Choose "create a new project". Now you choose which folder contains the static version of your site. For example, public . Reply "No" to the Configure as a single-page app (rewrite all urls to /index.html)? question, and also reply "No" to File public/index.html already exists. Overwrite? to avoid Firebase to add its own default index.html file. You're good to go:
Configure the site
The Firebase CLI created the firebase.json file in the root site folder. Here I'll show how to configure a simple feature of Firebase Hosting by adding a small bit of configuration to the firebase.json file.
I want to set the Cache-Control header directive on all the site assets: images as well as CSS and JS files. A clean firebase.json file contains the following:
{
  "hosting": {
    "public": "public",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ]
  }
}
It tells Firebase where the site content is, and which files it should ignore. Feel free to add all the folders you have, except public . We're going to add a new property in there, called headers :
{
  "hosting": {
    "public": "public",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ],
    "headers": [
      {
        "source": "**/*.@(jpg|jpeg|gif|png|css|js)",
        "headers": [
          {
            "key": "Cache-Control",
            "value": "max-age=1000000"
          }
        ]
      }
    ]
  }
}
As you can see, we tell Firebase that for all files ending with jpg|jpeg|gif|png|css|js it should apply the Cache-Control: max-age=1000000 directive, which means all assets are cached for more than 1 week (1,000,000 seconds is roughly 11 days).
Publish the site
When you are ready to publish the site, you just run
firebase deploy
and Firebase takes care of everything. You can now open https://yourproject.firebaseapp.com and you should see the website running.
Custom Domain
The next logical step is to make your site use a custom domain. Go to https://console.firebase.google.com/project/_/hosting/main and click the "Connect Domain" button:
The wizard will ask you for the domain name, then it will provide a TXT record you need to add to your hosting DNS panel to verify the domain. If the domain is brand new, it might take some time before you can pass this step. Once this is done, the interface will give you two A records to add as well to your hosting DNS panel. If you set up yourdomain.com , don't forget to also set up www.yourdomain.com , by making it a redirect.
Now you just have to wait for your hosting provider to update the DNS records and for DNS caches to flush. Also, keep in mind that your SSL certificate is automatically provisioned, but requires a bit of time to become valid.
How to authenticate to any Google API
The Google Developers Console can be complicated to get right, and it's one of the reasons I'm sometimes reluctant to use the Google APIs. This article aims to make it simple, explaining how to use the Google Developers Console to authenticate to any of the Google APIs.
Let's see how that works, in a very simple way. This guide assumes you already have a Google account.
Create a new Google API Project
Create the Authentication Credentials
Service to Service API
Using the JSON Key File
Use environment variables
Access other APIs
Create a new Google API Project
Create a new project, if you haven't done it yet.
From the dashboard click Create a new project.
Give it a name, and you'll be redirected to the project dashboard:
Add an API by clicking Enable APIs and services.
From the list, search for the API you're interested in
and enable it
That's it!
The project is now ready; you can go on and create the authentication credentials.
Create the Authentication Credentials
There are 3 ways to authenticate with the Google APIs:
OAuth 2
Service to Service
API key
An API key is less secure and is restricted in scope and usage by Google. OAuth 2 is meant to let your app make requests on behalf of a user, and as such the process is more complicated than needed and requires exposing URLs to handle callbacks. Way too complex for simple uses. In a Service to Service authentication model, the application talks directly to the Google API, using a service account and a JSON Web Token.
This is the simplest method, especially if you're building a prototype or an application that talks from your server (like a Node.js app) to the Google APIs. This is the one method I'll talk about for the rest of the article.
Service to Service API To use this method you need to first generate a JSON Key File through the Google Developers Console. There is another option which involves downloading a .p12 file and then converting it to a pem file using the openssl command. It's no longer recommended by Google, just use JSON. From a project dashboard, click Create credentials, and choose Service Account Key:
Fill the form and choose a "JSON" key type:
That's it! Google sent you a JSON file:
This is the content of this JSON file, called the JSON Key File:
{
  "type": "service_account",
  "project_id": "...",
  "private_key_id": "...",
  "private_key": "...",
  "client_email": "...",
  "client_id": "...",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "..."
}
Using the JSON Key File
The simplest way is to put the JSON file somewhere reachable by your program, on the filesystem.
For example, I have a test app under ~/dev/test , so I put the JSON file into that folder and renamed it to auth.json . Then, inside a Node.js app, make sure the GOOGLE_APPLICATION_CREDENTIALS environment variable points to that file location on the filesystem. You create a JSON Web Token using the properties contained in the file:
const jwt = new google.auth.JWT(key.client_email, null, key.private_key, scopes)
and you pass that to any API request you make. A sketch of how this can be used with the Google Analytics API follows. process.env.GOOGLE_APPLICATION_CREDENTIALS is best set outside the program, in the environment.
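For example, assuming the key file was saved as auth.json in the project folder as described above, a minimal sketch of creating the token looks like this (the scope is just an example; use the one for the API you enabled):
const { google } = require('googleapis')
const key = require('./auth.json') // the JSON Key File downloaded from the Developers Console

const scopes = ['https://www.googleapis.com/auth/analytics.readonly'] // example scope
const jwt = new google.auth.JWT(key.client_email, null, key.private_key, scopes)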
Use environment variables
Keeping the key file on the filesystem is not ideal in many situations, where it is either not practical or not secure. For example, if you're using Heroku, it's best to avoid putting the authentication credentials in the repository, and instead set them through the interface or console Heroku provides.
The same is true for Glitch prototypes, where environment variables are hidden from everyone except you. In this case the best thing is to use environment variables, storing the content you need from the JSON file. All we need are the client_email and private_key values from the JSON, so we can extract those and set them as environment variables.
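With those set, the JWT can be created from the environment instead of the file, along these lines (CLIENT_EMAIL and PRIVATE_KEY are the variable names assumed here):
const jwt = new google.auth.JWT(process.env.CLIENT_EMAIL, null, process.env.PRIVATE_KEY, scopes)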
Access other APIs
I used Google Analytics in the examples. The google object makes it reachable at google.analytics('v3') . v3 is the API version.
Other APIs are reachable in a similar way:
google.urlshortener('v1')
google.drive('v2')
Interact with the Google Analytics API using Node.js
Learn how to interface a Node.js application with the Google Analytics API, using the official googleapis package. We'll use a JSON Web Token and see some examples.
In this post I'm going to show some examples of using the Google Analytics API with Node.js.
Environment variables
Add the user to Google Analytics
Import the Google library
Define the scope
The Google Analytics Reporting API
Create the JWT
Perform a request
Metrics
Common code
Get the number of today's sessions
Get the number of today's sessions coming from organic sources (Search Engines)
Get the number of yesterday's sessions
Get the number of sessions in the last 30 days
Get the browsers used in the last 30 days
Get the number of visitors using Chrome
Get the sessions by traffic source
The Google Analytics Real Time API
Google offers a great npm package: googleapis . We're going to use that as the base building block of our API interaction. Authentication is a big part of interacting with an API. Check out the previous article on how to authenticate to the Google APIs. Here I'm going to assume you read that, and you know how to perform a JWT authentication.
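If it's not in your project yet, install the package with npm:
npm install googleapis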
Environment variables
Once you download the JSON Key file from Google, put the client_email and private_key values as environment variables, so that they will be accessible through
process.env.CLIENT_EMAIL
process.env.PRIVATE_KEY
Add the user to Google Analytics Since we're using the Service to Service API in these examples, you need to add the client_email value to your Google Analytics profile. Go to the Admin panel and click User
Management, either on a property or on a view.
And add the email you found in the client_email key in the JSON file:
Import the Google library
const { google } = require('googleapis')
Remember the {} around the google object, as we need to destructure it from the googleapis library (otherwise we'd need to call google.google and it's ugly)
Define the scope
This line sets the scope:
const scopes = 'https://www.googleapis.com/auth/analytics.readonly'
Google Analytics API defines several scopes:
https://www.googleapis.com/auth/analytics.readonly to view the data
https://www.googleapis.com/auth/analytics to view and manage the data
https://www.googleapis.com/auth/analytics.edit to edit the management entities
https://www.googleapis.com/auth/analytics.manage.users to manage the account users and permissions
https://www.googleapis.com/auth/analytics.manage.users.readonly to view the users and their permissions
https://www.googleapis.com/auth/analytics.provision to create new Google Analytics accounts
You should always pick the scope that grants the least amount of power. Since we want to only view the reports now, we pick https://www.googleapis.com/auth/analytics.readonly instead of https://www.googleapis.com/auth/analytics .
The Google Analytics Reporting API
Note: you can also use the Google Analytics Reporting API to access this data. It is a trimmed-down version of the Google Analytics API, offering just the https://www.googleapis.com/auth/analytics.readonly and https://www.googleapis.com/auth/analytics scopes.
The API is slightly different than the Analytics API however in how it's used and in which methods it exposes, so we'll skip that.
Create the JWT
const jwt = new google.auth.JWT(process.env.CLIENT_EMAIL, null, process.env.PRIVATE_KEY, scopes)
Perform a request
Check this code:
const { google } = require('googleapis')

const scopes = 'https://www.googleapis.com/auth/analytics.readonly'
const jwt = new google.auth.JWT(process.env.CLIENT_EMAIL, null, process.env.PRIVATE_KEY, scopes)
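Putting it together, a sketch of the actual request could look like this (view_id is a placeholder for your own view ID, and getData is just an illustrative wrapper):
const view_id = 'XXXXXX' // placeholder: your Google Analytics view ID

async function getData() {
  await jwt.authorize()
  const result = await google.analytics('v3').data.ga.get({
    'auth': jwt,
    'ids': 'ga:' + view_id,
    'start-date': '30daysAgo',
    'end-date': 'today',
    'metrics': 'ga:pageviews'
  })
  console.dir(result.data)
}

getData()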
It performs a request to the Google Analytics API to fetch the pageviews number in the last 30 days. view_id contains the ID of the view. Not your Google Analytics code, but the view ID. You
can get that from the admin panel, by clicking View Settings on the view you want to access:
You pass this object to the request:
{
  'auth': jwt,
  'ids': 'ga:' + view_id,
  'start-date': '30daysAgo',
  'end-date': 'today',
  'metrics': 'ga:pageviews'
}
In addition to the jwt object and the view id, we have 3 parameters:
metrics : tells the API what we want to get
start-date : defines the starting date for the report
end-date : defines the end date for the report
This request is very simple and returns the number of pageviews occurring in the specified time period. The returned result will be something like:
{
  status: 200,
  statusText: 'OK',
  headers: {...},
  config: {...},
  request: {...},
  data: {
    kind: 'analytics#gaData',
    id: 'https://www.googleapis.com/analytics/v3/data/ga?ids=ga:XXXXXXXXXXXXXXXXXX&metrics=ga:pageviews&start-date=30daysAgo&end-date=today',
    query: {
      'start-date': '30daysAgo',
      'end-date': 'today',
      ids: 'ga:XXXXXXXXXXXXXXXXXX',
      metrics: [ 'ga:pageviews' ],
      'start-index': 1,
      'max-results': 1000
    },
    itemsPerPage: 1000,
    totalResults: 1,
    selfLink: 'https://www.googleapis.com/analytics/v3/data/ga?ids=ga:XXXXXXXXXXXXXXXXXX&metrics=ga:pageviews&start-date=30daysAgo&end-date=today',
    profileInfo: {
      profileId: 'XXXXXXXXXXXXXXXXXX',
      accountId: 'XXXXXXXXXXXXXXXXXX',
      webPropertyId: 'UA-XXXXXXXXXXX--XX',
      internalWebPropertyId: 'XXXXXXXXXXXXXXXXXX',
      profileName: 'XXXXXXXXXXXXXXXXXX',
      tableId: 'ga:XXXXXXXXXXXXXXXXXX'
    },
    containsSampledData: false,
    ...
  }
}
You can access the pageviews count in response.data.rows[0][0] .
Metrics
This example was simple. We just asked for this data:
{
  'start-date': '30daysAgo',
  'end-date': 'today',
  'metrics': 'ga:pageviews'
}
There is a whole lot of data we can use. The Dimensions & Metrics Explorer is an awesome tool to discover all the options. Those terms are two concepts of Google Analytics. Dimensions are attributes, like City, Country or Page, the referral path or the session duration. Metrics are quantitative measurements, like the number of users or the number of sessions. Some examples of metrics:
get the pageviews: ga:pageviews
get the unique users: ga:users
get the sessions: ga:sessions
get the organic searches: ga:organicSearches
Let's build some examples with those metrics.
Common code
Here is the common code used in the examples below. Put each example snippet where the custom code comment is:
'use strict'

const { google } = require('googleapis')

const scopes = 'https://www.googleapis.com/auth/analytics.readonly'
const jwt = new google.auth.JWT(process.env.CLIENT_EMAIL, null, process.env.PRIVATE_KEY, scopes)

async function getData() {
  const defaults = {
    'auth': jwt,
    'ids': 'ga:' + process.env.VIEW_ID,
  }
  const response = await jwt.authorize()

  /* custom code goes here, using `response` */
}

getData()
The defaults object will be reused in the examples using the spread operator, which is a handy way of handling defaults values in JavaScript.
Get the number of today's sessions
const result = await google.analytics('v3').data.ga.get({
  ...defaults,
  'start-date': 'today',
  'end-date': 'today',
  'metrics': 'ga:sessions'
})

console.dir(result.data.rows[0][0])
Get the number of today's sessions coming from organic sources (Search Engines)
Add the filters property (the ga:medium==organic filter selects organic traffic):
const result = await google.analytics('v3').data.ga.get({
  ...defaults,
  'start-date': 'today',
  'end-date': 'today',
  'metrics': 'ga:sessions',
  'filters': 'ga:medium==organic'
})

console.dir(result.data.rows[0][0])
Get the number of yesterday's sessions
const result = await google.analytics('v3').data.ga.get({
  ...defaults,
  'start-date': 'yesterday',
  'end-date': 'yesterday',
  'metrics': 'ga:sessions'
})

console.dir(result.data.rows[0][0])
Get the number of sessions in the last 30 days
const result = await google.analytics('v3').data.ga.get({
  ...defaults,
  'start-date': '30daysAgo',
  'end-date': 'today',
  'metrics': 'ga:sessions'
})

console.dir(result.data.rows[0][0])
Get the browsers used in the last 30 days
const result = await google.analytics('v3').data.ga.get({
  ...defaults,
  'start-date': '30daysAgo',
  'end-date': 'today',
  'dimensions': 'ga:browser',
  'metrics': 'ga:sessions'
})

console.dir(result.data.rows.sort((a, b) => b[1] - a[1]))
Get the number of visitors using Chrome
const result = await google.analytics('v3').data.ga.get({
  ...defaults,
  'start-date': '30daysAgo',
  'end-date': 'today',
  'dimensions': 'ga:browser',
  'metrics': 'ga:sessions',
  'filters': 'ga:browser==Chrome'
})

console.dir(result.data.rows[0][1])
Get the sessions by traffic source
const result = await google.analytics('v3').data.ga.get({
  ...defaults,
  'start-date': '30daysAgo',
  'end-date': 'today',
  'dimensions': 'ga:source',
  'metrics': 'ga:sessions'
})

console.dir(result.data.rows.sort((a, b) => b[1] - a[1]))
The Google Analytics Real Time API
The Google Analytics Real Time API is (as of May 2018) in private beta, and it's not publicly accessible. Check this page.
Glitch, a great Platform for Developers
Glitch is a pretty amazing platform to learn and experiment with code. This post introduces you to Glitch and takes you from zero to hero.
Glitch is a great platform to learn how to code. I use Glitch in many of my tutorials; I think it's a great tool to showcase concepts, and it also allows people to use your projects and build upon them. Here is an example project I made on Glitch with React and React Router: https://glitch.com/edit/#!/flaviocopes-react-router-v4 With Glitch you can easily create demos and prototypes of applications written in JavaScript, from simple web pages to advanced frameworks such as React or Vue, and server-side Node.js apps. It is built on top of Node, and you have the ability to install any npm package you want, run webpack and much more. It's brought to you by the people that made some hugely successful products, including Trello and Stack Overflow, so it has a lot of credibility.
Why do I think Glitch is great?
Glitch "clicked" for me because it presents itself with a fun interface, but it's not dumbed down: you have access to logs, the console, and lots of internal stuff.
Also, the concept of remixing, so prominent in the interface, makes me much more likely to create lots of projects there, as I never have to start from a clean slate. You can start diving into the code without losing time setting up an environment or version control, and focus on the idea, with an automatic HTTPS URL and a CDN for the media assets. Also, there's no lock-in at all: it's just Node.js (or, if you don't use server-side JavaScript, it's just HTML, JS and CSS).
Is it free?
Yes, it's free, and in the future they might add even more features on top for a paid plan, but they state that the current Glitch will always be free as it is now. There are reasonable limits, like:
You have 128MB of space, excluding npm packages, plus 512MB for media assets
You can serve up to 4000 requests per hour
Apps are stopped if they are not accessed for 5 minutes and do not receive any HTTP request, and long-running apps are stopped after 12 hours. As soon as an HTTP request comes in, they start again
An overview of Glitch
This is the Glitch homepage: it shows a few projects that they decided to showcase because they are cool, and some starter projects:
Creating an account is free and easy: just press "Sign in" and choose between Facebook and GitHub as your "entry points" (I recommend GitHub):
You are redirected to GitHub to authorize:
Once logged in, the home page changes to also show your projects:
Clicking Your Projects sends you to your profile page, which has your name in the URL. Mine is https://glitch.com/@flaviocopes.
You can pin projects, to find them more easily when you have lots of them.
The concept of remixing
When you first start, of course you will have no projects of your own. Glitch makes it super easy to start, and you never begin from a blank project: you always remix another project. You can remix a project you like, maybe one you found on Twitter or featured on the Glitch homepage, or you can start from a project that's a boilerplate for something:
A simple web page
A Node.js Express app
A Node.js console
A Create-React-App app
A Nuxt starter app
There are many other starter glitches in these collections: Hello World Glitches, Building Blocks. If you're learning to code right now, the Learn to Code glitch Collection is very nice. I have created a few starter apps that I constantly use for my demos and tests:
Simple HTML + CSS + JS glitch
React + webpack starter glitch
Glitch makes it very easy to create your own building blocks, and by pinning them in your profile, you can have them always on the top, easy to find.
Remix a glitch
Once you have a glitch you want to build upon, you just click it, and a window shows up:
There are 3 buttons:
Preview : a glitch is code that does something; this shows the result of the glitch.
Edit Project : shows the source of the project, and you can start editing it.
Remix This : clones the glitch to a new one.
Every time you remix a glitch, a new project is created, with a random name. Here is a glitch right after creating it by remixing another one:
Glitch gave it the name guttural-noodle . Clicking the name you can change it:
You can also change the description. From here you can also create a new glitch from zero, remix the current glitch, or go to another one.
GitHub import/export
There is an easy import/export from/to GitHub, which is very convenient:
Keep your project private
Clicking the lock makes the glitch private:
Create a new project
Clicking "New Project" shows 3 options:
node-app
node-sqlite
webpage
This is a shortcut to going out to find those starter apps and remixing them. Under the hood, clicking one of those options remixes an existing glitch. On any glitch, clicking "Show" will open a new tab where the app runs:
App URL
Notice the URL, it's:
https://flavio-my-nice-project.glitch.me
That reflects the app name. The editing URL is a bit different: https://glitch.com/edit/#!/flavio-my-nice-project
The preview runs on a subdomain of glitch.me , while editing is done on glitch.com . Noticed the fishes on the right of the page? It's a little bit of JavaScript that Glitch recommends adding to the page, to let other people remix the project or see the source:
Running the app
Any time you make a change to the source, the app is rebuilt and the live view is refreshed. This is so convenient: applying changes in real time gives immediate feedback, which is a great help when developing.
Secrets
You don't want any API key or password that might be used in the code to be seen by everyone. Any of those secret strings must be put in the special .env file, which has a key next to it. If you invite collaborators, they will be able to see the content, as they are part of the project. But anyone remixing it, or people invited by you to help, will not see the file content.
Managing files
Adding a new file to a project is easy. You can drag and drop files and folders from your local computer, or click the "New File" button above the files list. It's also intuitive how to rename, copy or delete files:
One-click license and code of conduct
Having a license in the code is one of the things that's overlooked in sample projects, but it determines what others can do, or can't do, with your project. Without a license, a project is not open source, and all rights are reserved, so the code cannot be redistributed, and other people cannot do anything with it (note: this is my understanding and IANAL - I Am Not A Lawyer). Glitch makes it super easy to add a license, in the New File panel:
You can easily change it as well:
The code of conduct is another very important piece for any project and community. It makes contributors feel welcomed and protected in their participation in the community.
The Add Code of Conduct button adds a sample code of conduct for open source projects you can start from.
Adding an npm package
Click the package.json file, and if you don't have one yet, create an empty one. Click the Add Package button that now appears on top, and you can add a new package.
Also, if you have a package that needs to be updated, Glitch will show the number of packages that need an update, and you can update them to the latest release with a simple click:
Use a custom version of Node.js
You can set the Node.js version your project uses in your package.json . Using .x will use the latest release of a major version, which is the most useful thing, like this:
{
  //...
  "engines": {
    "node": "8.x"
  }
}
Storage
Glitch has a persistent file system. Files are kept on disk even if your app is stopped, or you don't use it for a long time. This allows you to store data on disk, using local databases or file-based (flat-file) storage.
If you put your data in the .data folder, this special name indicates the content will not be copied to a new project when the glitch is remixed.
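For example, a simple flat-file log could live there, using the regular Node.js fs module (visits.txt is just a hypothetical file name):
const fs = require('fs')

// create the folder if it's missing; files under .data persist on disk
// but are not copied when the project is remixed
if (!fs.existsSync('.data')) fs.mkdirSync('.data')
fs.appendFileSync('.data/visits.txt', new Date().toISOString() + '\n')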
Embedding a glitch in a page
Key to using Glitch to create tutorials is the ability to embed the code and the presentation view in a page. Click Share and Embed Project to open the Embed Project view. From there you can choose to only embed the code, the app, or customize the height of the widget - and get its HTML code to put on your site:
Collaborating on a glitch
From the Share panel, the Invite Collaborators to edit link lets you invite anyone to edit the glitch in real time with you. You can see their changes as they make them. It's pretty cool!
Asking for help
Linked to this collaboration feature, there's a great one: you can ask for help from anyone in the world, just by selecting some text in the page and clicking the raised hand icon:
This opens a panel where you can add a language tag, and a brief description of what you need:
Once done, your request will be shown on the Glitch homepage for anyone to pick up. When a person jumps in to help, they see the line you highlighted, and I found comments to be a good way to communicate, like a chat:
See the logs
Click Logs to have access to all the logs of the app:
Access the console
From the Logs panel, there is a Console button. Click it to open the interactive console in a separate tab in the browser:
The debugger
Clicking the Debugger button in the Logs panel opens an instance of the Chrome DevTools in another tab, with a link to the debugger URL.
The changes history
A great feature is the ability to check all your changes in the project history. It works a lot like Git - in fact, under the hood it's Git powering this really easy to use interface, which opens by clicking the ⏪ button:
How is Glitch different from Codepen or JSFiddle?
One big difference that separates Glitch from other tools is the ability to run server-side code. Codepen and JSFiddle can only run frontend code, while a Glitch can even be used as a lightweight server for your apps - keeping the usage limits in mind. For example, I have set up an Express.js server that is triggered by a Webhook at specific times during the day to perform some duties. I don't need to worry about it running on another server; I just wrote it on Glitch and run it directly from there.
That's it! I hope you like my small tutorial on using Glitch, and I hope I explained most of the killer features of it.
More questions? I suggest you just try it, and see if it clicks for you too. The Glitch FAQ is a great place to start. Have fun!
Airtable API for Developers Airtable is an amazing tool. Discover why it's great for any developer to know about it and its API
Airtable is an amazing tool. It's a mix between a spreadsheet and a database. As a developer you get to create a database with a very nice interface, with the ease of use and editing of a spreadsheet, and you can easily update your records even from a mobile app.
Perfect for prototypes
Airtable is much more than a glorified spreadsheet, however. It is a perfect tool for a developer looking to prototype or create an MVP of an application. An MVP, or Minimum Viable Product, is an initial version of an application or product. Most products fail not because of technical limitations or because "the stack did not scale". They fail because either there is no need for them, or the maker does not have a clear way to market the product. Creating an MVP minimizes the risk of spending months trying to build the perfect app and then realizing no one wants it.
A great API
Airtable has an absolutely nice API to work with, which makes it easy to interface with your Airtable database programmatically. This is what makes it 10x superior to a standard spreadsheet when it comes to data handling, and it also makes authentication easy. The API has a limit of 5 requests per second, which is not high, but still reasonable to work with for most scenarios.
A great documentation for the API
Here is the Airtable API documentation: https://airtable.com/api. As developers we spend a lot of time reading through docs and trying to figure out how things work. An API is tricky because you need to interact with a service, and you want to learn both what the service exposes and how you can use the API to do what you need. Airtable raises the bar for any API documentation out there. It puts your API keys, base IDs and table names directly in the examples, so you just need to copy and paste them into your codebase and you're ready to go. Not just that: the examples in the API docs use the actual data in your table. In this image, notice how the example field values are actual values I put in my table:
The API documentation offers examples using curl :
and their Node.js official client:
The official Node.js client
Airtable maintains the official Airtable.js Node.js client library, a very easy to use way to access the Airtable data.
It's convenient because it offers built-in logic to handle rate limits and retry requests when you exceed them. Let's see a few common operations you can perform with the API, but first let's define a couple of values we'll reference in the code:
API_KEY : the Airtable API key
BASE_NAME : the name of the base you'll work with
TABLE_NAME : the name of the table in that base
VIEW_NAME : the name of the table view
A base is a short term for database, and it can contain many tables. A table has one or more views that organize the same data in a different way. There's always at least one view (see more on views)
Authenticate
You can set up the AIRTABLE_API_KEY environment variable, and Airtable.js will automatically use that, or explicitly add it into your code:
const Airtable = require('airtable')

Airtable.configure({
  apiKey: API_KEY
})
Initialize a base
const base = require('airtable').base(BASE_NAME)
or, if you already initialized the Airtable variable, use
const base = Airtable.base(BASE_NAME)
Reference a table
With a base object, you can now reference a table using
const table = base(TABLE_NAME)
Retrieve the table records
Any row inside a table is called a record. Airtable returns a maximum of 100 records in each page of results. If you know you will never go over 100 items in a table, just use the firstPage method:
table.select({ view: VIEW_NAME }).firstPage((err, records) => {
  if (err) {
    console.error(err)
    return
  }
  // all records are in the `records` array, do something with it
})
If you have (or expect) more than 100 records, you need to paginate through them, using the eachPage method:
let records = []

// called for every page of records
const processPage = (partialRecords, fetchNextPage) => {
  records = [...records, ...partialRecords]
  fetchNextPage()
}

// called when all the records have been retrieved
const processRecords = (err) => {
  if (err) {
    console.error(err)
    return
  }
  // process the `records` array and do something with it
}

table.select({ view: VIEW_NAME }).eachPage(processPage, processRecords)
Inspecting the record content
Any record has a number of properties which you can inspect. First, you can get its ID:
record.id
// or
record.getId()
and the time of creation: record.createdTime
and you can get any of its properties, which you access through the column name:
record.get('Title')
record.get('Description')
record.get('Date')
Get a specific record
You can get a specific record by ID:
const record_id = //...

table.find(record_id, (err, record) => {
  if (err) {
    console.error(err)
    return
  }
  console.log(record)
})
Create a new record
You can add a new record:
table.create({
  "Title": "Tutorial: create a Spreadsheet using React",
  "Link": "https://flaviocopes.com/react-spreadsheet/",
}, (err, record) => {
  if (err) {
    console.error(err)
    return
  }
  console.log(record.getId())
})
Update a record
You can update a single field of a record, and leave the other fields untouched, using update :
const record_id = //...

table.update(record_id, {
  "Title": "The modified title"
}, (err, record) => {
  if (err) {
    console.error(err)
    return
  }
  console.log(record.get('Title'))
})
Or, you can update some fields in a record and clear out the ones you did not touch, with replace :
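A sketch of how that could look, mirroring the update example above (the field values are just examples):
const record_id = //...

table.replace(record_id, {
  "Title": "The new title",
  "Link": "https://flaviocopes.com/react-spreadsheet/"
}, (err, record) => {
  if (err) {
    console.error(err)
    return
  }
  console.log(record.get('Title'))
})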
Delete a record
A record can be deleted using
const record_id = //...

table.destroy(record_id, (err, deletedRecord) => {
  if (err) {
    console.error(err)
    return
  }
  console.log('Deleted record', deletedRecord.id)
})
Electron
Learn the basics of Electron, the framework built by GitHub that powers a lot of innovative and very popular cross-platform applications.
What is Electron?
Electron is a free and Open Source tool for building cross-platform desktop apps with JS, HTML and CSS, built by GitHub. It's very popular, and hugely successful applications use it, including:
Slack
Atom
VS Code
Calypso (WordPress.com)
Discord
Electron is a huge project, and in May 2018 it reached version 2.0. Check out the official site at https://electronjs.org
Before Electron
Before Electron, you could not make a cross-platform desktop app with web technologies. On the Mac, there were frameworks like MacGap that let you create an application which basically embedded a Safari page (WebView), and you could load your JavaScript into that. Being a Mac application, you had the option to write native code using Objective-C and access the system APIs, but this was not portable outside the Mac platform. There was no chance to make this work on Linux or Windows, and I'm sure those had their own tools to do this kind of thing. There was no single tool that could run the same app everywhere. Until 2014, when Electron was released (initially under the name Atom Shell, then renamed in 2015).
A quick look into the Electron internals
Electron basically bundles the Chromium rendering library and Node.js (Chromium is the open source project made by Google, on which they build the Chrome browser). You have access to a rendering canvas powered by Chromium, which runs the V8 JavaScript engine, and you can use any Node.js package and run your own Node.js code. It's a sort of Node.js for the desktop, if you wish. It does not provide any kind of GUI elements, but rather lets you create UIs using HTML, CSS and JavaScript. Electron aims to be fast, small in size, and as slim as possible, yet providing the core features that all apps can rely upon.
Which kind of apps you can build
You can build lots of different kinds of apps, including:
regular apps, with a dock icon and a window
menu bar apps, which don't have any dock icon
daemons
command line utilities
A good collection of Electron apps is available on the official site: https://electronjs.org/apps. With Electron you can create apps and publish them on the Windows and Mac App Store.
The Electron APIs app
On the Mac App Store you can download the Electron APIs app, which is an official sample desktop app built using Electron.
The app is pretty cool and it lets you experiment with several features of Electron. It's open source, and the code is available at https://github.com/electron/electron-quick-start.
How to create your first Electron app
First, create a new folder on your filesystem and in it run:
yarn init
to create a package.json file:
{
  "name": "electron",
  "version": "1.0.0",
  "main": "index.js",
  "license": "MIT"
}
Change main to main.js , and add this section:
"scripts": {
  "start": "electron ."
}
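The resulting package.json should look roughly like this, combining the two changes above:
{
  "name": "electron",
  "version": "1.0.0",
  "main": "main.js",
  "license": "MIT",
  "scripts": {
    "start": "electron ."
  }
}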
Now install Electron: yarn add --dev electron
Electron can now be started with yarn start
However, as you haven't added any code yet, this command will do nothing: it will just start a bare Electron application, but you're not going to see any windows:
A Hello World Electron GUI app!
Let's create an application that shows a Hello World in a window. Create 2 files. main.js :
'use strict'
const { app, BrowserWindow } = require('electron')
const path = require('path')
const url = require('url')

app.on('ready', () => {
  // Create the browser window.
  const win = new BrowserWindow({ width: 800, height: 600 })

  // and load the index.html of the app.
  win.loadURL(
    url.format({
      pathname: path.join(__dirname, 'index.html'),
      protocol: 'file:',
      slashes: true
    })
  )
})
and index.html :
<!DOCTYPE html>
<html>
  <head>
    <title>Hello World!</title>
  </head>
  <body>
    <h1>Hello World!</h1>
    We are using node <script>document.write(process.versions.node)</script>,
    Chrome <script>document.write(process.versions.chrome)</script>,
    and Electron <script>document.write(process.versions.electron)</script>.
  </body>
</html>
Now run again yarn start , and this window should show up:
This is a very simple one-window app, and when you close this window, the application exits.
Making the app developer's life easier
Electron aims to make the developer's life easier. Applications have lots of problems in common. They need to perform things that the native APIs sometimes make a little bit more complicated than one might imagine. Electron provides an easy way to manage In-App Purchases, Notifications, Drag and Drop, key shortcuts and much more. It also provides a hosted service for app updates, to make updating your apps much simpler than if you had to build such a service yourself.
Networking
The HTTP protocol
A detailed description of how the HTTP protocol, and the Web, work.
HTTP (Hyper Text Transfer Protocol) is one of the application protocols of TCP/IP, the suite of protocols that powers the Internet. Let me fix that: it's not just one of the protocols, it's the most successful and popular one, by all means. HTTP is what makes the World Wide Web work, giving browsers a language to communicate with remote servers that host web pages.
HTTP was first standardized in 1991, as a result of the work that Tim Berners-Lee did at CERN, the European Organization for Nuclear Research, since 1989. The goal was to allow researchers to easily exchange and interlink their papers. It was meant as a way for the scientific community to work better. Back then the Internet's main applications basically consisted of FTP (the File Transfer Protocol), Email and Usenet (newsgroups, today almost abandoned). In 1993 Mosaic, the first popular graphical web browser, was released, and things skyrocketed from there. The Web became the killer app of the Internet.
Over time the Web and the ecosystem around it have dramatically evolved, but the basics still remain. One example of evolution: HTTP now powers, in addition to web pages, REST APIs, one common way to programmatically access a service over the Internet. HTTP got a minor revision in 1997 with HTTP/1.1, and in 2015 its successor, HTTP/2, was standardized and it's now being implemented by the major web servers used across the globe.
The HTTP protocol is considered insecure, just like any other protocol (SMTP, FTP...) not served over an encrypted connection. This is why there is a big push nowadays towards using HTTPS, which is HTTP served over TLS. That said, the building blocks of HTTP/2 and HTTPS have their roots in HTTP, and in this article I'll introduce how HTTP works.
HTML documents
HTTP is the way web browsers like Chrome, Firefox, Edge and many others (also called clients from here on) communicate with web servers. The name Hyper Text Transfer Protocol derives from the need to transfer not just files, like in FTP - the "File Transfer Protocol" - but hypertexts, which would be written using HTML and then represented graphically by the browser, with a nice presentation and interactive links. Links, along with the ease of creating new web pages, were the driving force behind adoption. HTTP is what transfers those hypertext files (and, as we'll see, also images and other file types) over the network.
Hyperlinks
Inside a web browser, a document can point to another document using links. A link is composed of a first part that determines the protocol and the server address, either through a domain name or an IP. This part is not unique to HTTP, of course. Then there's the document part: anything appended to the address part represents the document path. For example, this document's address is https://flaviocopes.com/http/ :
https is the protocol
flaviocopes.com is the domain name that points to my server
/http/ is the document URL, relative to the server root path
The path can be nested: https://flaviocopes.com/page/privacy/ and in this case the document URL is /page/privacy . The web server is responsible for interpreting the request and, once analyzed, serving the correct response.
A request
What's in a request? The first thing is the URL, which we've already seen before. When we enter an address and press enter in our browser, under the hood the browser sends to the correct IP address a request like this:
GET /a-page
where /a-page is the URL you requested. The second thing is the HTTP method (also called verb). HTTP in the early days defined 3 of them:
GET
POST
HEAD
and HTTP/1.1 introduced
PUT
DELETE
OPTIONS
TRACE
We'll see them in detail in a minute. The third thing that composes a request is a set of HTTP headers. Headers are a set of key: value pairs used to communicate to the server specific information that is predefined, so the server can know what we mean. I described them in detail in the HTTP request headers list. Give that list a quick look. All of those headers are optional, except Host .
HTTP methods
GET
GET is the most used method here. It's the one that's used when you type a URL in the browser address bar, or when you click a link. It asks the server to send the requested resource as a response.
HEAD
HEAD is just like GET, but tells the server to not send the response body back. Just the headers.
POST
The client uses the POST method to send data to the server. It's typically used in forms, for example, but also when interacting with a REST API.
PUT
The PUT method is intended to create a resource at that specific URL, with the parameters passed in the request body. Mainly used in REST APIs.
DELETE
The DELETE method is called against a URL to request the deletion of that resource. Mainly used in REST APIs.
OPTIONS
When a server receives an OPTIONS request it should send back the list of HTTP methods allowed for that specific URL.
TRACE
Returns back to the client the request that has been received. Used for debugging or diagnostic purposes.
HTTP Client/Server communication
HTTP, like most of the protocols that belong to the TCP/IP suite, is a stateless protocol. Servers have no idea what the current state of the client is. All they care about is that they get requests and they need to fulfill them. Any prior request is meaningless in this context, and this makes it possible for a web server to be very fast, as there's less to process, and it also gives it the bandwidth to handle a lot of concurrent requests. HTTP is also very lean, and communication is very fast in terms of overhead. This contrasts with the protocols that were the most used at the time HTTP was introduced, like the mail protocols POP and SMTP, which involve lots of handshaking and confirmations on the receiving end.
Graphical browsers abstract away all this communication, but we'll illustrate it here for learning purposes. A message is composed of a first line, which starts with the HTTP method, then contains the resource's relative path, and the protocol version:
GET /a-page HTTP/1.1
After that, we need to add the HTTP request headers. As mentioned above, there are many headers, but the only mandatory one is Host :
GET /a-page HTTP/1.1
Host: flaviocopes.com
How can you test this? Using telnet. This is a command-line tool that lets us connect to any server and send it commands. Open your terminal, and type:
telnet flaviocopes.com 80
This will open a session that tells you:
Trying 178.128.202.129...
Connected to flaviocopes.com.
Escape character is '^]'.
You are connected to the Netlify web server that powers my blog. You can now type:
GET /axios/ HTTP/1.1
Host: flaviocopes.com
and press enter on an empty line to fire the request. The response will be:
HTTP/1.1 301 Moved Permanently
Cache-Control: public, max-age=0, must-revalidate
Content-Length: 46
Content-Type: text/plain
Date: Sun, 29 Jul 2018 14:07:07 GMT
Location: https://flaviocopes.com/axios/
Age: 0
Connection: keep-alive
Server: Netlify

Redirecting to https://flaviocopes.com/axios/
See, this is an HTTP response we got back from the server. It's a 301 Moved Permanently response. See the HTTP status codes list to know more about the status codes. It basically tells us the resource has permanently moved to another location. Why? Because we connected to port 80, which is the default for HTTP, but on my server I set up an automatic redirection to HTTPS. The new location is specified in the Location HTTP response header. There are other headers, all described in the HTTP response headers list. In both the request and the response, an empty line separates the headers from the body. The response body in this case contains the string
Redirecting to https://flaviocopes.com/axios/
which is 46 bytes long, as specified in the Content-Length header. It is shown in the browser when you open the page, while the browser automatically redirects you to the correct location. In this case we're using telnet, the low-level tool we can use to connect to any server, so we can't have any kind of automatic redirect. Let's do this process again, now connecting to port 443, which is the default port of the HTTPS protocol. We can't use telnet because of the SSL handshake that must happen. Let's keep things simple and use curl , another command-line tool. We cannot directly type the HTTP request, but we'll see the response:
curl -i https://flaviocopes.com/axios/
this is what we'll get in return:
HTTP/1.1 200 OK
Cache-Control: public, max-age=0, must-revalidate
Content-Type: text/html; charset=UTF-8
Date: Sun, 29 Jul 2018 14:20:45 GMT
Etag: "de3153d6eacef2299964de09db154b32-ssl"
Strict-Transport-Security: max-age=31536000
Age: 152
Content-Length: 9797
Connection: keep-alive
Server: Netlify
HTTP requests using Axios ....
I cut the response, but you can see that the HTML of the page is being returned now.
Other resources
An HTTP server will not just transfer HTML files: typically it will also serve other files - CSS, JS, SVG, PNG, JPG, lots of different file types. This depends on the configuration. HTTP is perfectly capable of transferring those files as well, and the client will know about the file type, and thus interpret them in the right way. This is how the web works: when an HTML page is retrieved by the browser, it's interpreted, and any other resource it needs to display properly (CSS, JavaScript, images...) is retrieved through additional HTTP requests to the same server.
Caching in HTTP
A detailed description of the caching options available through the HTTP protocol.
Caching is a technique that can help network connections be faster, because the less that needs to be transferred, the better. Many resources can be very large and very expensive to retrieve, in terms of time and also actual cost (on mobile, for example). There are different caching strategies made available by HTTP and used by browsers.
No caching
The Expires header
Conditional GET
Using If-Modified-Since and Last-Modified
Using If-None-Match and ETag
No caching
First, the Cache-Control header can tell the browser to never use a cached version of a resource without first checking the ETag value (more on this later), by using the no-cache value:
Cache-Control: no-cache
A more restrictive no-store option tells the browser (and all the intermediary network devices) to not even store the resource in its cache:
Cache-Control: no-store
If Cache-Control has the max-age value, that's used to determine the number of seconds the cached resource is considered valid:
Cache-Control: max-age=3600
The Expires header
When an HTTP request is sent, the browser checks if it has a copy of that page in the cache, based on the URL you requested. If there is one, it checks the page for freshness. A page is fresh if the HTTP response Expires header value is later than the current datetime. The Expires header takes this form:
Expires: Sat, 01 Dec 2018 16:00:00 GMT
Conditional GET
There are different ways to perform a conditional GET. All are based on using the If-* request headers:
using If-Modified-Since and Last-Modified
using If-None-Match and ETag
Using If-Modified-Since and Last-Modified The browser can send a request to the server and instead of just asking for the page, it adds an If-Modified-Since header, based on the Last-Modified header value it got from the currently cached page. This tells the server to only return a response body (the page content) if the resource has been updated since that date. Otherwise the server returns a 304 Not Modified response.
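Sketched as an exchange, with an example date taken from the cached page's Last-Modified header:
GET /a-page HTTP/1.1
Host: flaviocopes.com
If-Modified-Since: Fri, 30 Nov 2018 16:00:00 GMT

HTTP/1.1 304 Not Modified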
Using If-None-Match and ETag The web server (depending on the setup, how pages are served, etc.) can send an ETag header. That is the identifier of a resource. Every time the resource changes, for example when it's updated, the ETag should change as well. It's like a checksum. The browser sends an If-None-Match header that contains one (or more) ETag values. If none match, the server returns the fresh version of the resource, otherwise a 304 Not Modified response.
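You can try a conditional request yourself with curl (a sketch: the ETag value here is just the one we got back earlier for flaviocopes.com, yours will differ):

curl -I -H 'If-None-Match: "de3153d6eacef2299964de09db154b32-ssl"' https://flaviocopes.com/axios/

If the ETag still matches the current version of the resource, the server can answer with a 304 Not Modified and an empty body.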
The HTTP Status Codes List Every HTTP response comes with a status code, a number that clearly signals how the request was processed. The HTTP status code is in the first line of an HTTP response sent from a server to the client. This list will be useful if you are trying to find out why a server sent a particular status code and want to see what it means, or if you are building the server and you are looking for the perfect status code to return. Status codes are expressed through 3-digit numbers, plus a short description. The first digit of the number identifies the response group. There are 5 groups:
1xx : informational response - indicates that the request was received and understood
2xx : successful response - indicates the action requested by the client was received, understood and accepted
3xx : redirection - indicates the client must take additional action to complete the request
4xx : client error - indicates there was an error, which seems to have been caused by the client
5xx : server error - indicates that an error happened on the server
In the rest of the post I list all the useful status codes. (I removed some technology-specific ones, like the WebDAV ones, and the ones very rarely used)
Informational responses

100 Continue
The server has received the request headers and the client should proceed to send the request body (in the case of a request for which a body needs to be sent; for example, a POST request). Sending a large request body to a server after a request has been rejected for inappropriate headers would be inefficient. To have a server check the request's headers, a client must send Expect: 100-continue as a header in its initial request and receive a 100 Continue status code in response before sending the body. If the client receives an error code such as 403 (Forbidden) or 405 (Method Not Allowed) then it shouldn't send the request's body. The response 417 Expectation Failed indicates that the request should be repeated without the Expect header as it indicates that the server doesn't support expectations (this is the case, for example, of HTTP/1.0 servers).

101 Switching Protocols
The client asked the server to switch protocols and the server has agreed to do so. See RFC 7231#6.2.2
Successful responses
200 OK
This is the standard response for successful HTTP requests.
201 Created
Typically a response to a POST request. The request has been completed, and a new resource has been created.
202 Accepted
The request has been accepted for processing. There's nothing said about the actual processing, and the result of that, which might happen on a separate server, or batched.
203 NonAuthoritative Information
The original server returned a 200, and a transforming proxy between the client and the server changed the payload
204 No Content
The server successfully processed the request, but is not returning any content.
205 Reset Content
The server successfully processed the request, but is not returning any content. Similar to a 204 response, but the server requires that the client resets the document view (used to clear forms, for example)
206 Partial Content
In response to a Range request coming from the client, the server sends a partial content response. See RFC 7233#4.1
Redirection
301 Moved Permanently
This and all future requests should be directed to the given URI. Only use with GET/HEAD requests, and 308 Permanent Redirect for all the other methods.
302 Found
The resource is temporarily moved to a URL specified by the Location header. Only use with GET/HEAD requests, and 307 Temporary Redirect for all the other methods.
303 See Other
After a POST or PUT request, points to the confirmation message in the Location header, accessible using a new GET request.
304 Not Modified
When the client uses the request headers If-Modified-Since or If-None-Match , this response status code indicates that the resource has not been modified.
307 Temporary Redirect
Similar to the 302 request, except it does not allow changing the HTTP method
308 Permanent Redirect
Similar to the 301 request, except it does not allow changing the HTTP method
Client errors
400 Bad Request
Due to a request error that was generated on the client, the server cannot process the request. Errors can include a malformed request, size too large to be handled, or others.
401 Unauthorized
Sent when authentication is required and the client is not authorized
403 Forbidden
The resource is not available for various reasons. If the reason is authentication, prefer the 401 Unauthorized status code.
404 Not Found
The requested resource could not be found.
405 Method Not Allowed
The resource is not available through that HTTP method, but might be with another.
406 Not Acceptable
The client passed an Accept header with values that are not compatible with the server.
407 Proxy Authentication Required
Between the client and the server there is a proxy that requires authentication.
408 Request Timeout
The server timed out waiting for the request.
409 Conflict
Indicates that the request could not be processed because of conflict in the current state of the resource, such as an edit conflict between multiple simultaneous updates.
410 Gone
The resource is no longer available and will not be available again. More powerful than a 404, for example search engines interpret it as an indication to remove that resource from their index.
411 Length Required
The request did not specify a Content-Length header, which the server requires.
412 Precondition Failed
Returned if the client sent an If-Unmodified-Since or If-None-Match request header, and the server cannot satisfy that condition.
413 Payload Too Large
The request is larger than the server is willing or able to process.
414 URI Too Long
The URI provided was too long for the server to process.
415 Unsupported Media Type
The request entity has a media type which the server or resource does not support.
416 Range Not Satisfiable
The client has asked for a portion of the file using the Range header, but the server cannot supply that portion.
417 Expectation Failed
The server cannot meet the requirements of the Expect request header.
421 Misdirected Request
The request was directed at a server that is not able to produce a response (for example because of connection reuse).
426 Upgrade Required
The client should switch to a different protocol such as TLS/1.0, specified in the Upgrade header field.
428 Precondition Required
The server requires the request to contain an If-Match header.
429 Too Many Requests
The user has sent too many requests in a given amount of time. Used for rate limiting.
431 Request Header Fields Too Large
The request cannot be fulfilled because one or more headers, or the whole headers set, is too large.
451 Unavailable For Legal Reasons
The resource is not available due to legal reasons
Server errors
500 Internal Server Error
A generic server error message, given when an unexpected condition was encountered and no more specific message is suitable.
501 Not Implemented
The server either does not recognize the request method, or it lacks the ability to fulfil the request.
502 Bad Gateway
The server was acting as a gateway or proxy and received an invalid response from the upstream server.
503 Service Unavailable
The server is currently temporarily unavailable (because it is overloaded or down for maintenance).
504 Gateway Timeout
The server was acting as a gateway or proxy and did not receive a timely response from the upstream server.
505 HTTP Version Not Supported
The server does not support the HTTP protocol version specified in the request.
The curl guide to HTTP requests curl is an awesome tool that lets you create network requests from the command line curl is a command line tool that allows you to transfer data across the network. It supports lots of protocols out of the box, including HTTP, HTTPS, FTP, FTPS, SFTP, IMAP, SMTP, POP3, and many more. When it comes to debugging network requests, curl is one of the best tools you can find. It's one of those tools that, once you know how to use it, you keep coming back to. A programmer's best friend. It's universal: it runs on Linux, Mac and Windows. Refer to the official installation guide to install it on your system. Fun fact: Daniel Stenberg, the Swedish author and maintainer of curl, was awarded by the king of Sweden for the contributions that his work (curl and libcurl) made to the computing world. Let's dive into some of the commands and operations that you are most likely to want to perform when working with HTTP requests. These examples involve working with HTTP, the most popular protocol. Perform an HTTP GET request Get the HTTP response headers Only get the HTTP response headers Perform an HTTP POST request Perform an HTTP POST request sending JSON Perform an HTTP PUT request Follow a redirect Store the response to a file Using HTTP authentication Set a different User Agent Inspecting all the details of the request and the response Copying any browser network request to a curl command
Perform an HTTP GET request When you perform a request, curl will return the body of the response:
curl https://flaviocopes.com/
Get the HTTP response headers By default the response headers are hidden in the output of curl. To show them, use the -i option: curl -i https://flaviocopes.com/
Only get the HTTP response headers Using the -I option, you can get only the headers, and not the response body: curl -I https://flaviocopes.com/
Perform an HTTP POST request The -X option lets you change the HTTP method used. By default, GET is used, and it's the same as writing curl -X GET https://flaviocopes.com/
Using -X POST will perform a POST request. You can perform a POST request passing data URL encoded: curl -d "option=value&something=anothervalue" -X POST https://flaviocopes.com/
In this case, the application/x-www-form-urlencoded Content-Type is sent.
Perform an HTTP POST request sending JSON Instead of posting data URL-encoded, like in the example above, you might want to send JSON. In this case you need to explicitly set the Content-Type header, by using the -H option:
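A sketch of such a command (the JSON payload and the URL are just placeholders):

curl -d '{"option": "value", "something": "anothervalue"}' -H "Content-Type: application/json" -X POST https://flaviocopes.com/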
You can also send a JSON file from your disk: curl -d "@my-file.json" -X POST https://flaviocopes.com/
Perform an HTTP PUT request The concept is the same as for POST requests, just change the HTTP method using -X PUT
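For example, reusing the URL-encoded data from above:

curl -d "option=value&something=anothervalue" -X PUT https://flaviocopes.com/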
Follow a redirect A redirect response like 301, which specifies the Location response header, can be automatically followed by specifying the -L option: curl http://flaviocopes.com/
will not automatically follow the redirect to the HTTPS version I set up, but this will: curl -L http://flaviocopes.com/
Store the response to a file Using the -o option you can tell curl to save the response to a file: curl -o file.html https://flaviocopes.com/
You can also save a file using its name on the server, with the -O option: curl -O https://flaviocopes.com/index.html
Using HTTP authentication If a resource requires Basic HTTP Authentication, you can use the -u option to pass the user:password values:
curl -u user:pass https://flaviocopes.com/
Set a different User Agent The user agent tells the server which client is performing the request. By default curl sends a curl/<version> user agent, for example curl/7.54.0 .
You can specify a different user agent using the --user-agent option: curl --user-agent "my-user-agent" https://flaviocopes.com
Inspecting all the details of the request and the response Use the --verbose option to make curl output all the details of the request, and the response: curl --verbose -I https://flaviocopes.com/
* Trying 178.128.202.129... * TCP_NODELAY set * Connected to flaviocopes.com (178.128.202.129) port 443 (#0) * TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 * Server certificate: flaviocopes.com * Server certificate: Let's Encrypt Authority X3 * Server certificate: DST Root CA X3 > HEAD / HTTP/1.1 > Host: flaviocopes.com > User-Agent: curl/7.54.0 > Accept: */* > < HTTP/1.1 200 OK HTTP/1.1 200 OK < Cache-Control: public, max-age=0, must-revalidate Cache-Control: public, max-age=0, must-revalidate < Content-Type: text/html; charset=UTF-8 Content-Type: text/html; charset=UTF-8 < Date: Mon, 30 Jul 2018 08:08:41 GMT Date: Mon, 30 Jul 2018 08:08:41 GMT ...
Copying any browser network request to a curl command
When inspecting any network request using the Chrome Developer Tools, you have the option to copy that request as a curl command.
What is an RFC? RFCs, Request for Comments, are publications from the technology community In several blog posts I mention "this technology is defined in RFC xxxx", or "see RFC yyyy for the nitty gritty details". What is an RFC? RFC stands for Request for Comments. You might run into RFCs in various environments now, but traditionally what we mean by RFC on the Internet is a publication written by engineers and computer scientists, aimed at other professionals working in the Internet sphere. RFCs have a long history, starting back in 1969 in ARPANET times. The Internet was created this way, with RFCs being the starting point of discussions, or the protocol implementation details that people used to implement the actual software. The name, Request for Comments, encouraged a community discussion around those papers, which originally circulated in printed form. Today, before being published as official RFCs, documents go through various steps that might take many months or years of discussion. This is because an RFC, once published, cannot be changed any more. The entire process is managed by the IETF, the Internet Engineering Task Force. Revisions to RFC documents need to be published as independent RFCs, and older RFCs are marked as superseded by those newer revisions. Other RFCs supplement what older RFCs specify. For example, RFC 1349 from 1992, titled "Type of Service in the Internet Protocol Suite", was obsoleted by RFC 2474 in 1998, titled "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers". Here are some very famous RFC documents that are well worth the read. These are things that will be relevant for a long time (I printed some of them back in high school some 20 years ago, and I still have them), and are the foundation of the Internet: RFC 791: IP RFC 793: TCP RFC 1034: DNS RFC 4291: IPv6 RFC 6749: OAuth 2.0
Some other RFCs are less technical, like RFC 1855 Netiquette Guidelines, and some others are just funny jokes by engineers for engineers, like RFC 2324, the Hyper Text Coffee Pot Control Protocol. So, in conclusion: RFCs are technical documents that, after going through a rigorous process of discussion and technical verification, are added to the list of official protocols recognized by the IETF, and being standards they can then be implemented by software vendors.
The HTTP Response Headers List Every HTTP response can have a set of headers. This post aims to list all those headers, and describe them. Standard headers Accept-Patch Accept-Ranges Age Allow Alt-Svc Cache-Control Connection Content-Disposition Content-Encoding Content-Language Content-Length Content-Location Content-Range Content-Type Date Delta-Base ETag Expires IM Last-Modified Link Location Pragma Proxy-Authenticate Public-Key-Pins Retry-After Server Set-Cookie Strict-Transport-Security Trailer
Transfer-Encoding Tk Upgrade Vary Via Warning WWW-Authenticate
CORS headers Non-standard headers: Content-Security-Policy Refresh X-Powered-By X-Request-ID X-UA-Compatible X-XSS-Protection
Standard headers Accept-Patch Accept-Patch: text/example;charset=utf-8
Specifies which patch document formats this server supports
Accept-Ranges Accept-Ranges: bytes
What partial content range types this server supports via byte serving
Age Age: 12
The age the object has been in a proxy cache in seconds
Allow Allow: GET, HEAD
Valid methods for a specified resource. To be used for a 405 Method not allowed
Alt-Svc
A server uses the Alt-Svc header (meaning Alternative Services) to indicate that its resources can also be accessed at a different network location (host or port) or using a different protocol. When using HTTP/2, servers should instead send an ALTSVC frame
Cache-Control
If no-cache is used, the Cache-Control header can tell the browser to never use a cached version of a resource without first checking the ETag value. max-age is measured in seconds
The more restrictive no-store option tells the browser (and all the intermediary network devices) to not even store the resource in its cache: Cache-Control: no-store
Connection Connection: close
Control options for the current connection and list of hop-by-hop response fields. Deprecated in HTTP/2
Content-Disposition
An opportunity to raise a "File Download" dialogue box for a known MIME type with binary format or suggest a filename for dynamic content. Quotes are necessary with special characters
Content-Encoding Content-Encoding: gzip
The type of encoding used on the data. See HTTP compression
Content-Language Content-Language: en
The natural language or languages of the intended audience for the enclosed content
Content-Length Content-Length: 348
The length of the response body expressed in 8-bit bytes
Date
The date and time that the message was sent (in "HTTP-date" format as defined by RFC 7231)
Delta-Base Delta-Base: "abc"
Specifies the delta-encoding entity tag of the response
ETag ETag: "737060cd8c284d8a[...]"
An identifier for a specific version of a resource, often a message digest
Expires Expires: Sat, 01 Dec 2018 16:00:00 GMT
Gives the date/time after which the response is considered stale (in "HTTP-date" format as defined by RFC 7231)
IM IM: feed
Instance-manipulations applied to the response
Last-Modified Last-Modified: Mon, 15 Nov 2017 12:00:00 GMT
The last modified date for the requested object (in "HTTP-date" format as defined by RFC 7231)
Link Link: </feed>; rel="alternate"
Used to express a typed relationship with another resource, where the relation type is defined by RFC 5988
Location Location: /pub/WWW/People.html
Used in redirection, or when a new resource has been created
Pragma Pragma: no-cache
Implementation-specific fields that may have various effects anywhere along the request-response chain.
Proxy-Authenticate Proxy-Authenticate: Basic
Request authentication to access the proxy
Public-Key-Pins HTTP Public Key Pinning, announces hash of website's authentic TLS certificate
Retry-After Retry-After: 120 Retry-After: Fri, 07 Nov 2014 23:59:59 GMT
If an entity is temporarily unavailable, this instructs the client to try again later. Value could be a specified period of time (in seconds) or a HTTP-date
Strict-Transport-Security
An HSTS policy informing the HTTP client how long to cache the HTTPS-only policy and whether this applies to subdomains
Trailer Trailer: Max-Forwards
The Trailer general field value indicates that the given set of header fields is present in the trailer of a message encoded with chunked transfer coding
Transfer-Encoding Transfer-Encoding: chunked
The form of encoding used to safely transfer the entity to the user. Currently defined methods are: chunked, compress, deflate, gzip, identity. Deprecated in HTTP/2
Tk Tk: ?
Tracking Status header, value suggested to be sent in response to a DNT(do-not-track), possible values: "!" — under construction "?" — dynamic "G" — gateway to multiple parties "N" — not tracking "T" — tracking "C" — tracking with consent "P" — tracking only if consented "D" — disregarding DNT "U" — updated
Upgrade
Ask the client to upgrade to another protocol. Deprecated in HTTP/2
Vary Vary: Accept-Language Vary: *
Tells downstream proxies how to match future request headers to decide whether the cached response can be used rather than requesting a fresh one from the origin server
Via Via: 1.0 fred, 1.1 example.com (Apache/1.1)
Informs the client of proxies through which the response was sent
Warning Warning: 199 Miscellaneous warning
A general warning about possible problems with the entity body
WWW-Authenticate WWW-Authenticate: Basic
Indicates the authentication scheme that should be used to access the requested entity
CORS headers Access-Control-Allow-Origin Access-Control-Allow-Credentials Access-Control-Expose-Headers Access-Control-Max-Age Access-Control-Allow-Methods Access-Control-Allow-Headers
Non-standard headers

Content-Security-Policy
Helps to protect against XSS attacks. See MDN for more details
Refresh Refresh: 10;http://www.example.org/
Redirect to a URL after an arbitrary delay expressed in seconds
X-Powered-By X-Powered-By: Brain/0.6b
Can be used by servers to send their name and version
X-Request-ID Allows the server to pass a request ID that clients can send back to let the server correlate the request
X-UA-Compatible Sets which version of Internet Explorer compatibility layer should be used. Only used if you need to support IE8 or IE9. See StackOverflow
X-XSS-Protection Now replaced by the Content-Security-Policy header, used in older browsers to stop pages from loading when an XSS attack is detected
The HTTP Request Headers List Every HTTP request has a set of mandatory and optional headers. This post aims to list all those headers, and describe them. Standard headers A-IM Accept Accept-Charset Accept-Encoding Accept-Language Accept-Datetime Access-Control-Request-Method Access-Control-Request-Headers Authorization Cache-Control Connection Content-Length Content-Type Cookie Date Expect Forwarded From Host If-Match If-Modified-Since If-None-Match If-Range If-Unmodified-Since Max-Forwards Origin Pragma Proxy-Authorization Range Referer TE User-Agent Upgrade
Via Warning Non-standard headers Dnt X-Requested-With X-CSRF-Token
Standard headers A-IM A-IM: feed
Instance manipulations that are acceptable in the response. Defined in RFC 3229
Accept Accept: application/json
The media type/types acceptable
Accept-Charset Accept-Charset: utf-8
The charset acceptable
Accept-Encoding Accept-Encoding: gzip, deflate
List of acceptable encodings
Accept-Language Accept-Language: en-US
List of acceptable languages
Accept-Datetime Accept-Datetime: Thu, 31 May 2007 20:35:00 GMT
Request a past version of the resource prior to the datetime passed
Access-Control-Request-Method Access-Control-Request-Method: GET
Content-Type
The content type of the body of the request (used in POST and PUT requests).
Cookie Cookie: name=value
See more on Cookies
Date Date: Tue, 15 Nov 1994 08:12:31 GMT
The date and time that the request was sent
Expect Expect: 100-continue
It's typically used when sending a large request body. We expect the server to return back a 100 Continue HTTP status if it can handle the request, or 417 Expectation Failed if not
Forwarded
Disclose original information of a client connecting to a web server through an HTTP proxy. Used for testing purposes only, as it discloses privacy sensitive information
From
The email address of the user making the request. Meant to be used, for example, to indicate a contact email for bots.
Host Host: flaviocopes.com
The domain name of the server (used to determine the server when virtual hosting is in place), and the TCP port number on which the server is listening. If the port is omitted, 80 is assumed. This is a mandatory HTTP request header
If-Match If-Match: "737060cd8c284d8582d"
Given one (or more) ETags , the server should only send back the response if the current resource matches one of those ETags. Mainly used in PUT methods to update a resource only if it has not been modified since the user last updated it
If-Modified-Since If-Modified-Since: Sat, 29 Oct 1994 19:43:31 GMT
Allows the server to return a 304 Not Modified response if the content is unchanged since that date
Upgrade
Ask the server to upgrade to another protocol. Deprecated in HTTP/2
Via Via: 1.0 fred, 1.1 example.com (Apache/1.1)
Informs the server of proxies through which the request was sent
Warning Warning: 199 Miscellaneous warning
A general warning about possible problems with the status of the message. Accepts a special range of values.
Non-standard headers There are some widely used non-standard headers as well, including:
Dnt DNT: 1
If enabled, asks servers to not track the user
X-Requested-With X-Requested-With: XMLHttpRequest
Identifies XHR requests
X-CSRF-Token X-CSRF-Token:
Used to prevent CSRF
How HTTP requests work What happens when you type a URL in the browser, from start to finish The HTTP protocol I analyze URL requests only Things relate to macOS / Linux DNS Lookup phase gethostbyname TCP request handshaking Sending the request The request line The request header The request body The response Parse the HTML This article describes how browsers perform page requests using the HTTP/1.1 protocol If you ever did an interview, you might have been asked: "what happens when you type something into the Google search box and press enter?". It's one of the most popular questions you get asked. People just want to see if you can explain some rather basic concepts and if you have any clue how the internet actually works. In this post, I'll analyze what happens when you type a URL in the address bar of your browser and press enter. It's a very interesting topic to dissect in a blog post, as it touches many technologies I can dive into in separate posts. This is tech that very rarely changes, and powers one of the most complex and wide ecosystems ever built by humans.
The HTTP protocol First, a note: I describe the plain HTTP protocol here. I don't cover HTTPS in particular, because things are different with an HTTPS connection.
I analyze URL requests only
Modern browsers have the capability of knowing if the thing you wrote in the address bar is an actual URL or a search term, and they will use the default search engine if it's not a valid URL. I assume you type an actual URL. When you enter the URL and press enter, the browser first builds the full URL. If you just entered a domain, like flaviocopes.com , the browser by default will prepend HTTP:// to it, defaulting to the HTTP protocol.
Things relate to macOS / Linux Just FYI. Windows might do some things slightly differently.
DNS Lookup phase The browser starts the DNS lookup to get the server IP address. The domain name is a handy shortcut for us humans, but the internet is organized in such a way that computers can look up the exact location of a server through its IP address, which is a set of numbers like 222.124.3.1 (IPv4). First, it checks the local DNS cache, to see if the domain has already been resolved recently. Chrome has a handy DNS cache visualizer you can see at chrome://net-internals/#dns If nothing is found there, the browser uses the DNS resolver, using the gethostbyname POSIX system call to retrieve the host information.
gethostbyname gethostbyname first looks in the local hosts file, which on macOS or Linux is located in /etc/hosts , to see if the system provides the information locally.
If this does not give any information about the domain, the system makes a request to the DNS server. The address of the DNS server is stored in the system preferences. Those are 2 popular DNS servers: 8.8.8.8 : the Google public DNS server 1.1.1.1 : the CloudFlare DNS server
Most people use the DNS server provided by their internet provider.
The browser performs the DNS request using the UDP protocol. TCP and UDP are two of the foundational protocols of computer networking. They sit at the same conceptual level, but TCP is connection-oriented, while UDP is a connectionless protocol, more lightweight, used to send messages with little overhead. How the UDP request is performed is not in the scope of this tutorial. The DNS server might have the domain IP in the cache. If not, it will ask the root DNS server. That's a system (composed of 13 actual servers, distributed across the planet) that drives the entire internet. The DNS server does not know the address of each and every domain name on the planet. What it knows is where the top-level DNS resolvers are. A top-level domain is the domain extension: .com , .it , .pizza and so on. Once the root DNS server receives the request, it forwards the request to that top-level domain (TLD) DNS server. Say you are looking for flaviocopes.com . The root domain DNS server returns the IP of the .com TLD server. Now our DNS resolver will cache the IP of that TLD server, so it does not have to ask the root DNS server again for it. The TLD DNS server will have the IP addresses of the authoritative Name Servers for the domain we are looking for. How? When you buy a domain, the domain registrar sends the name servers to the appropriate TLD registry. When you update the name servers (for example, when you change the hosting provider), this information will be automatically updated by your domain registrar. Those are the DNS servers of the hosting provider. They are usually more than 1, to serve as a backup. For example: ns1.dreamhost.com ns2.dreamhost.com ns3.dreamhost.com
The DNS resolver starts with the first, and tries to ask the IP of the domain (with the subdomain, too) you are looking for. That is the ultimate source of truth for the IP address.
Now that we have the IP address, we can go on in our journey.
TCP request handshaking With the server IP address available, the browser can now initiate a TCP connection to it. A TCP connection requires a bit of handshaking before it can be fully initialized and you can start sending data. Once the connection is established, we can send the request.
Sending the request The request is a plain text document structured in a precise way determined by the communication protocol. It's composed of 3 parts: the request line the request header the request body
The request line The request line sets, on a single line: the HTTP method the resource location the protocol version Example: GET / HTTP/1.1
The request header The request header is a set of field: value pairs that set certain values. Host is the only mandatory field in HTTP/1.1, while all the other fields are optional: Host: flaviocopes.com Connection: close
Host indicates the domain name which we want to target, while Connection is always set to close unless the connection must be kept open.
Some of the most used header fields are: Origin Accept Accept-Encoding Cookie Cache-Control Dnt
but many more exist. The header part is terminated by a blank line.
The request body The request body is optional, not used in GET requests but very much used in POST requests and sometimes in other verbs too, and it can contain data in JSON format. Since we're now analyzing a GET request, the body is blank and we'll not look more into it.
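Putting the three parts together, the complete request of our example would look more or less like this (note the blank line that terminates the headers):

GET / HTTP/1.1
Host: flaviocopes.com
Connection: close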
The response Once the request is sent, the server processes it and sends back a response. The response starts with the status code and the status message. If the request is successful and returns a 200, it will start with: 200 OK
The request might return a different status code and message, like one of these: 404 Not Found 403 Forbidden 301 Moved Permanently 500 Internal Server Error 304 Not Modified 401 Unauthorized
The response then contains a list of HTTP headers and the response body (which, since we're making the request in the browser, is going to be HTML)
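A minimal sketch of what such a response can look like (headers and body trimmed):

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Content-Length: 9797

<!DOCTYPE html>
<html>
...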
Parse the HTML The browser has now received the HTML and starts to parse it, and it will repeat the exact same process we just described for all the resources required by the page: CSS files images the favicon JavaScript files ... How browsers render the page is out of the scope of this post, but it's important to understand that the process I described is not just for HTML pages, but for any item that's served over HTTP.
HOW-TOs
How to append an item to an array in JavaScript Find out the ways JavaScript offers you to append an item to an array, and the canonical way you should use
Append a single item To append a single item to an array, use the push() method provided by the Array object: const fruits = ['banana', 'pear', 'apple'] fruits.push('mango')
push() mutates the original array.
To create a new array instead, use the concat() Array method: const fruits = ['banana', 'pear', 'apple'] const allfruits = fruits.concat('mango')
Notice that concat() does not actually add an item to the array, but creates a new array, which you can assign to another variable, or reassign to the original array (declaring it as let , as you cannot reassign a const ):
let fruits = ['banana', 'pear', 'apple'] fruits = fruits.concat('mango')
Append multiple items To append multiple items to an array, you can use push() by calling it with multiple arguments: const fruits = ['banana', 'pear', 'apple'] fruits.push('mango', 'melon', 'avocado')
You can also use the concat() method you saw before, passing a list of items separated by a comma:
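For example, reusing the fruits array from above:

const fruits = ['banana', 'pear', 'apple']
const allfruits = fruits.concat('mango', 'melon', 'avocado')
// ['banana', 'pear', 'apple', 'mango', 'melon', 'avocado']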
or an array: const fruits = ['banana', 'pear', 'apple'] const allfruits = fruits.concat(['mango', 'melon', 'avocado'])
Remember that as described previously this method does not mutate the original array, but it returns a new array.
How to check if a JavaScript object property is undefined In a JavaScript program, the correct way to check if an object property is undefined is to use the `typeof` operator. See how you can use it with this simple explanation In a JavaScript program, the correct way to check if an object property is undefined is to use the typeof operator. typeof returns a string that tells the type of the operand. It is used without parentheses,
passing it any value you want to check: const list = [] const count = 2 typeof list //"object" typeof count //"number" typeof "test" //"string" typeof color //"undefined"
If the value is not defined, typeof returns the 'undefined' string. Now suppose you have a car object, with just one property: const car = { model: 'Fiesta' }
This is how you check if the color property is defined on this object: if (typeof car.color === 'undefined') { // color is undefined }
How to deep clone a JavaScript object JavaScript offers many ways to copy an object, but not all provide deep copy. Learn the most efficient way, and also find out all the options you have Copying objects in JavaScript can be tricky. Some ways perform a shallow copy, which is the default behavior in most of the cases. Deep copy vs Shallow copy Easiest option: use Lodash Object.assign() Using the Object Spread operator Wrong solutions Using Object.create() JSON serialization
Deep copy vs Shallow copy A shallow copy successfully copies primitive types like numbers and strings, but any object reference will not be recursively copied; instead, the new, copied object will reference the same object. If an object references other objects, when performing a shallow copy of the object, you copy the references to the external objects. When performing a deep copy, those external objects are copied as well, so the new, cloned object is completely independent from the old one. If you look up how to deep clone an object in JavaScript on the internet, you'll find lots of answers, but not all of them are correct.
Easiest option: use Lodash My suggestion to perform deep copy is to rely on a library that's well tested, very popular and carefully maintained: Lodash. Lodash offers the very convenient clone and cloneDeep functions to perform shallow and deep cloning. Lodash has this nice feature: you can import single functions separately in your project to greatly reduce the size of the dependency.
In Node.js: const clone = require('lodash.clone') const clonedeep = require('lodash.clonedeep')
Here is an example that shows those two functions in use:

const clone = require('lodash.clone')
const clonedeep = require('lodash.clonedeep')

const externalObject = {
  color: 'red'
}

const original = {
  a: new Date(),
  b: NaN,
  c: new Function(),
  d: undefined,
  e: function() {},
  f: Number,
  g: false,
  h: Infinity,
  i: externalObject
}

const cloned = clone(original)
externalObject.color = 'blue'

console.info('⬇ shallow cloning')
console.info('✏ Notice the i.color property we changed on original is also changed in the shallow copy')
console.log(original)
console.log(cloned)

const deepcloned = clonedeep(original)
externalObject.color = 'yellow'

console.log('')
console.info('⬇ deep cloning')
console.info('✏ Notice the i.color property does not propagate any more')
console.log(original)
console.log(deepcloned)
In this simple example we first create a shallow copy, and edit the i.color property, which propagates to the copied object. In the deep clone, this does not happen.
See this live in Glitch.
Object.assign() Object.assign() performs a shallow copy of an object, not a deep clone.
const copied = Object.assign({}, original)
Being a shallow copy, values are cloned, and object references are copied (not the objects themselves), so if you edit an object property in the original object, that's modified also in the copied object, since the referenced inner object is the same: const original = { name: 'Fiesta', car: { color: 'blue' } } const copied = Object.assign({}, original) original.name = 'Focus' original.car.color = 'yellow' copied.name //Fiesta copied.car.color //yellow
Using the Object Spread operator This ES6/ES2015 feature provides a very convenient way to perform a shallow clone, equivalent to what Object.assign() does. const copied = { ...original }
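Since this is still a shallow copy, the same caveat applies: nested objects are shared. A quick sketch reusing the original object from the Object.assign() example above:

const original = { name: 'Fiesta', car: { color: 'blue' } }
const copied = { ...original }
original.car.color = 'yellow'
copied.car.color //'yellow' - the inner car object is shared, not cloned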
Wrong solutions Online you will find many suggestions. Here are some wrong ones:
Using Object.create() Note: not recommended
const copied = Object.create(original)
This is wrong, it's not performing any copy. Instead, the original object is being used as the prototype of copied . Apparently it works, but under the hood it's not: const original = { name: 'Fiesta' } const copied = Object.create(original) copied.name //Fiesta original.hasOwnProperty('name') //true copied.hasOwnProperty('name') //false
JSON serialization Note: not recommended Some recommend transforming to JSON: const cloned = JSON.parse(JSON.stringify(original))
but that has unexpected consequences. By doing this you will lose any JavaScript property that has no equivalent type in JSON, like Function or Infinity . Any property that's assigned to undefined will simply be ignored by JSON.stringify , causing it to be missing on the cloned object.
Also, some objects are simply converted to strings, like Date objects for example (also, not taking into account the timezone and defaulting to UTC), Set, Map and many others: JSON.parse( JSON.stringify({ a: new Date(), b: NaN, c: new Function(), d: undefined, e: function() {}, f: Number, g: false, h: Infinity }) )
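Run that snippet and the result is roughly this (the exact date string will differ):

// { a: '2018-07-30T08:08:41.000Z', b: null, g: false, h: null }
// the Date became a plain string, NaN and Infinity became null,
// and c, d, e and f disappeared: functions and undefined values are dropped by JSON.stringify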
This only works if your object contains nothing but plain values, arrays and nested plain objects: no functions, dates or other special objects.
How to convert a string to a number in JavaScript Learn how to convert a string to a number using JavaScript JavaScript provides various ways to convert a string value into a number.
Best: use the Number object The best one in my opinion is to use the Number object, in a non-constructor context (without the new keyword): const count = Number('1234') //1234
This takes care of the decimals as well. Number is a wrapper object that can perform many operations. If we use the constructor ( new Number("1234") ) it returns us a Number object instead of a number value, so pay attention.
Watch out for separators between digits: Number('10,000') //NaN Number('10.00') //10 Number('10000') //10000
In the case you need to parse a string with decimal separators, use Intl.NumberFormat instead.
Other solutions Use parseInt() and parseFloat() Another good solution for integers is to call the parseInt() function: const count = parseInt('1234', 10) //1234
Don't forget the second parameter, which is the radix, always 10 for decimal numbers, or the conversion might try to guess the radix and give unexpected results. parseInt() tries to get a number from a string that does not only contain a number:
parseInt('10 lions', 10) //10
but if the string does not start with a number, you'll get NaN (Not a Number): parseInt("I'm 10", 10) //NaN
Also, just like Number it's not reliable with separators between the digits: parseInt('10,000', 10) //10 ❌ parseInt('10.00', 10) //10 ✅ (considered decimals, cut) parseInt('10.000', 10) //10 ✅ (considered decimals, cut) parseInt('10.20', 10) //10 ✅ (considered decimals, cut) parseInt('10.81', 10) //10 ✅ (considered decimals, cut) parseInt('10000', 10) //10000 ✅
If you want to retain the decimal part, and not just get the integer part, use parseFloat() (note that, unlike parseInt() , it does not take a radix argument): parseFloat('10,000') //10 ❌ parseFloat('10.00') //10 ✅ parseFloat('10.000') //10 ✅ parseFloat('10.20') //10.2 ✅ parseFloat('10.81') //10.81 ✅ parseFloat('10000') //10000 ✅
Use + One "trick" is to use the unary operator + before the string: +'10,000' //NaN ✅ +'10.000' //10 ✅ +'10.00' //10 ✅
See how it returns NaN in the first example, which is the correct behavior: it's not a number.
Use Math.floor() Similar to the + unary operator, but returning only the integer part, is Math.floor() : Math.floor('10,000') //NaN ✅ Math.floor('10.000') //10 ✅ Math.floor('10.00') //10 ✅ Math.floor('10.20') //10 ✅ Math.floor('10.81') //10 ✅ Math.floor('10000') //10000 ✅
Use * 1 Generally one of the fastest options, behaves like the + unary operator, so it does not perform conversion to an integer if the number is a float. '10,000' * 1 //NaN ✅ '10.000' * 1 //10 ✅ '10.00' * 1 //10 ✅ '10.20' * 1 //10.2 ✅ '10.81' * 1 //10.81 ✅ '10000' * 1 //10000 ✅
Performance Every one of these methods has a different performance on different environments, as it all depends on the implementation. In my case, * 1 is the winner performance-wise, 10x faster than the other alternatives. Use this JSPerf test to try it yourself:
How to format a number as a currency value in JavaScript Learn how to convert a number into a currency value, using the JavaScript Internationalization API Say you have a number like 10 , and it represents the price of something. You want to transform it to $10.00 . If the number has more than 3 digits it should be displayed differently, for example 1000 should be displayed as $1,000.00
This is in USD, however. Different countries have different conventions to display values. JavaScript makes it very easy for us with the ECMAScript Internationalization API, a relatively recent browser API that provides a lot of internationalization features, like dates and time formatting. It is very well supported:
This example creates a number formatter for the Euro currency, for the Italian country: const formatter = new Intl.NumberFormat('it-IT', { style: 'currency', currency: 'EUR' }) The minimumFractionDigits option can be added to force the fraction part to always be at least 2 digits. You can check which other parameters you can use on the NumberFormat MDN page.
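You then format a value by calling the formatter's format() method. A quick sketch (the exact output string depends on the runtime's locale data):

formatter.format(1000) //'1.000,00 €'
formatter.format(10) //'10,00 €'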
How to get the current timestamp in JavaScript Find out the ways JavaScript offers you to generate the current UNIX timestamp The UNIX timestamp is an integer that represents the number of seconds elapsed since January 1 1970. On UNIX-like machines, which include Linux and macOS, you can type date +%s in the terminal and get the UNIX timestamp back: $ date +%s 1524379940
The current timestamp can be fetched by calling the now() method on the Date object: Date.now()
You could get the same value by calling new Date().getTime() or new Date().valueOf()
Note: IE8 and below do not have the now() method on Date . Look for a polyfill if you need to support IE8 and below, or simply use new Date().getTime() if Date.now is undefined (as that's what a polyfill would do) The timestamp in JavaScript is expressed in milliseconds. To get the timestamp expressed in seconds, convert it using: Math.floor(Date.now() / 1000)
Note: some tutorials use Math.round() , but that will round up to the next second even if the second is not fully elapsed. Or, less readable: ~~(Date.now() / 1000)
I've seen tutorials using +new Date
which might seem a weird statement, but it's perfectly correct JavaScript code. The unary operator + automatically calls the valueOf() method on whatever it's applied to, which returns the timestamp (in milliseconds). The problem with this code is that you instantiate a new Date object that's immediately discarded.
How to redirect to another web page using JavaScript JavaScript offers many ways to redirect the user to a different web page. Learn the canonical way, and also find out all the options you have, using plain JavaScript JavaScript offers many ways to redirect the user to a different web page, if during the execution of your program you need to move to a different page. The one that can be considered canonical to navigate to a new URL is window.location = 'https://newurl.com'
If you want to redirect to a different path, on the same domain, use: window.location.pathname = '/new'
This is using the location object offered by the History API.
Other options to redirect As with most things in programming, there are many ways to perform the same operation. Since window is implicit in the browser, you can also do: location = 'https://newurl.com'
Another way is to set the href property of location : window.location.href = 'https://newurl.com'
location also has an assign() method that accepts a URL, and performs the same thing:
window.location.assign('https://newurl.com')
The replace() method is different than the previous ways because it rewrites the current page in the history. The current page is wiped, so when you click the "back" button, you go back to the page that now is the last visited one.
window.location.replace('https://newurl.com')
This can be convenient in some situations.
Different ways to reach the window object The browser exposes the self and top objects, which all reference the window object, so you can use them instead of window in all the examples above: self.location = 'https://newurl.com' top.location = 'https://newurl.com'
301 redirect using a server-side directive The above examples all consider the case of a programmatic decision to move away to a different page. If you need to redirect because the current URL is old and has moved to a new URL, it's best to use a server-level directive and set the 301 HTTP code to signal search engines that the current URL has permanently moved to the new resource. This can be done via .htaccess if using Apache. Netlify does this through a _redirects file.
Are 301 redirects possible using JavaScript? Unfortunately not. That's not possible to do on the client-side. The 301 HTTP response code must be sent from the server, well before the JavaScript is executed by the browser. Experiments say that JavaScript redirects are interpreted by the search engines like 301 redirects. See this Search Engine Land post for reference. Google says: Using JavaScript to redirect users can be a legitimate practice. For example, if you redirect users to an internal page once they’re logged in, you can use JavaScript to do so. When examining JavaScript or other redirect methods to ensure your site adheres to
our guidelines, consider the intent. Keep in mind that 301 redirects are best when moving your site, but you could use a JavaScript redirect for this purpose if you don’t have access to your website’s server.
Use an HTML meta tag Another option is using a meta tag in your HTML:
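A sketch of such a tag (the URL here is just a placeholder):

<meta http-equiv="refresh" content="0; url=https://newurl.com/">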
This will cause the browser to load the new page once it has loaded and interpreted the current one, and not signal search engines anything. The best option is always to use a 301 server-level redirect.
How to remove an item from an Array in JavaScript JavaScript offers many ways to remove an item from an array. Learn the canonical way, and also find out all the options you have, using plain JavaScript Here are a few ways to remove an item from an array using JavaScript. All the methods described do not mutate the original array, and instead create a new one.
If you know the index of an item Suppose you have an array, and you want to remove an item at position i . One method is to use slice() : const items = ['a', 'b', 'c', 'd', 'e', 'f'] const i = 2 const filteredItems = items.slice(0, i).concat(items.slice(i + 1, items.length)) // ["a", "b", "d", "e", "f"]
slice() creates a new array with the indexes it receives. We simply create a new array, from
start to the index we want to remove, and concatenate another array from the first position following the one we removed to the end of the array.
If you know the value In this case, one good option is to use filter() , which offers a more declarative approach: const items = ['a', 'b', 'c', 'd', 'e', 'f'] const valueToRemove = 'c' const filteredItems = items.filter(item => item !== valueToRemove) // ["a", "b", "d", "e", "f"]
This uses the ES6 arrow functions. You can use the traditional functions to support older browsers: const items = ['a', 'b', 'c', 'd', 'e', 'f'] const valueToRemove = 'c' const filteredItems = items.filter(function(item) { return item !== valueToRemove
}) // ["a", "b", "d", "e", "f"]
or you can use Babel and transpile the ES6 code back to ES5 to make it more digestible to old browsers, yet write modern JavaScript in your code.
Removing multiple items What if instead of a single item, you want to remove many items? Let's find the simplest solution.
By index You can just create a function and remove items in series: const items = ['a', 'b', 'c', 'd', 'e', 'f'] const removeItem = (items, i) => items.slice(0, i).concat(items.slice(i + 1, items.length)) let filteredItems = removeItem(items, 4) filteredItems = removeItem(filteredItems, 4) //["a", "b", "c", "d"]
By value You can search for inclusion inside the callback function: const items = ['a', 'b', 'c', 'd', 'e', 'f'] const valuesToRemove = ['c', 'd'] const filteredItems = items.filter(item => !valuesToRemove.includes(item)) // ["a", "b", "e", "f"]
Avoid mutating the original array splice() (not to be confused with slice() ) mutates the original array, and should be
avoided.
How to remove a property from a JavaScript object There are various ways to remove a property from a JavaScript object. Find out the alternatives and the suggested solution The semantically correct way to remove a property from an object is to use the delete keyword. Given the object const car = { color: 'blue', brand: 'Ford' }
you can delete a property from this object using delete car.brand
It works also expressed as: delete car['brand'] delete car.brand delete newCar['brand']
Setting a property to undefined
If you need to perform this operation in a very optimized way, for example when you're operating on a large number of objects in loops, another option is to set the property to undefined .
Due to its nature, the performance of delete is a lot slower than a simple reassignment to undefined , more than 50 times slower.
However, keep in mind that the property is not deleted from the object. Its value is wiped, but it's still there if you iterate the object:
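A quick sketch of the difference, using the car object from before:

const car = { color: 'blue', brand: 'Ford' }
car.brand = undefined
Object.keys(car) // ['color', 'brand'] - the key is still listed
delete car.brand
Object.keys(car) // ['color'] - the key is gone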
Using delete is still very fast, and you should only look into this kind of performance issue if you have a very good reason to do so; otherwise it's always preferable to have clearer semantics and functionality.
Remove a property without mutating the object If mutability is a concern, you can create a completely new object by copying all the properties from the old, except the one you want to remove: const car = { color: 'blue', brand: 'Ford' } const prop = 'color' const newCar = Object.keys(car).reduce((object, key) => { if (key !== prop) { object[key] = car[key] } return object }, {})
How to check if a string contains a substring in JavaScript JavaScript offers many ways to check if a string contains a substring. Learn the canonical way, and also find out all the options you have, using plain JavaScript Checking if a string contains a substring is one of the most common tasks in any programming language. JavaScript offers different ways to perform this operation. The most simple one, and also the canonical one going forward, is using the includes() method on a string: 'a nice string'.includes('nice') //true
This method was introduced in ES6/ES2015. It's supported in all modern browsers except Internet Explorer:
To use it on all browsers, use Polyfill.io or another dedicated polyfill. includes() also accepts an optional second parameter, an integer which indicates the position where to start searching:
position where to start searching for: 'a nice string'.includes('nice') //true 'a nice string'.includes('nice', 3) //false 'a nice string'.includes('nice', 2) //true
Pre-ES6 alternative to includes(): indexOf() Pre-ES6, the common way to check if a string contains a substring was to use indexOf , which is a string method that returns -1 if the string does not contain the substring. If the substring is found, it returns the index of the character where the substring starts. Like includes() , the second parameter sets the starting point: 'a nice string'.indexOf('nice') !== -1 //true 'a nice string'.indexOf('nice', 3) !== -1 //false 'a nice string'.indexOf('nice', 2) !== -1 //true
How to uppercase the first letter of a string in JavaScript JavaScript offers many ways to capitalize a string to make the first character uppercase. Learn the various ways, and also find out which one you should use, using plain JavaScript One of the most common operations with strings is to make the string capitalized: uppercase its first letter, and leave the rest of the string as-is. The best way to do this is through a combination of two functions. One uppercases the first letter, and the second slices the string and returns it starting from the second character: const name = 'flavio' const nameCapitalized = name.charAt(0).toUpperCase() + name.slice(1)
You can extract that to a function, which also checks if the passed parameter is a string, and returns an empty string if not: const capitalize = (s) => { if (typeof s !== 'string') return '' return s.charAt(0).toUpperCase() + s.slice(1) } capitalize('flavio') //'Flavio' capitalize('f') //'F' capitalize(0) //'' capitalize({}) //''
Instead of using s.charAt(0) you could also use string indexing (not supported in older IE versions): s[0].

Some solutions online advocate for adding the function to the String prototype:

String.prototype.capitalize = function() {
  return this.charAt(0).toUpperCase() + this.slice(1)
}
(we use a regular function to make use of this - arrow functions would fail in this case, as this in arrow functions does not reference the current object)
This solution is not ideal, because editing the prototype is not generally recommended, and it's a much slower solution than having an independent function.
Don't forget that if you just want to capitalize for presentational purposes on a web page, CSS might be a better solution: just add a capitalize class to your HTML paragraph and use:

p.capitalize {
  text-transform: capitalize;
}
How to replace all occurrences of a string in JavaScript
Find out the proper way to replace all occurrences of a string in plain JavaScript, from regex to other approaches.
Using a regular expression
This simple regex call will do the task, where pattern is the text you want to replace:

String.replace(/pattern/g, '')
This performs a case-sensitive substitution. Here is an example, where I substitute all occurrences of the word 'dog' in the string phrase:

const phrase = 'I love my dog! Dogs are great'
const stripped = phrase.replace(/dog/g, '')
stripped //"I love my ! Dogs are great"
To perform a case-insensitive replacement, use the i option in the regex:

String.replace(/pattern/gi, '')
Example:

const phrase = 'I love my dog! Dogs are great'
const stripped = phrase.replace(/dog/gi, '')
stripped //"I love my ! s are great"
Remember that if the string contains some special characters, it won't play well with regular expressions, so the suggestion is to escape the string using this function (taken from MDN):

const escapeRegExp = (string) => {
  return string.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
}
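For instance, a small sketch of how that helper can be combined with the RegExp constructor when the text to replace comes from a variable (the search string here is just an example):

const search = 'my dog (the big one)'
const re = new RegExp(escapeRegExp(search), 'g')

'I love my dog (the big one)!'.replace(re, '') //"I love !"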
Using split and join
An alternative solution, albeit slower than the regex, is using two JavaScript functions. The first is split(), which splits a string when it finds a pattern (case sensitive), and returns an array with the tokens:

const phrase = 'I love my dog! Dogs are great'
const tokens = phrase.split('dog')
tokens //["I love my ", "! Dogs are great"]
Then you join the tokens in a new string, this time without any separator:

const stripped = tokens.join('') //"I love my ! Dogs are great"
Wrapping up:

const phrase = 'I love my dog! Dogs are great'
const stripped = phrase.split('dog').join('')
How to trim the leading zero in a number in JavaScript
If you have a number with a leading zero, like 010 or 02, how do you remove that zero?

There are various ways. The most explicit is to use parseInt():

parseInt(number, 10)
10 is the radix, and it should always be specified to avoid inconsistencies across different browsers, although some engines work fine without it.

Another way is to use the + unary operator:

+number
Those are the simplest solutions. You can also go the regular expression route, like this:

number.replace(/^0+/, '')
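For instance, assuming the value arrives as the string '010', the three approaches give:

const number = '010'

parseInt(number, 10) //10
+number //10
number.replace(/^0+/, '') //'10' (still a string)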
How to inspect a JavaScript object
Find out the ways JavaScript offers you to inspect an object (or any other kind of value).

JavaScript offers many ways to inspect the content of a variable. In particular, let's find out how to print the content of an object. We'll cover:

The Console API: console.log and console.dir
JSON.stringify()
toSource()
Iterate the properties using a loop
How to inspect in Node.js

Let's say we have this object car, but we don't know its content, and we want to inspect it:

const car = {
  color: 'black',
  manufacturer: 'Ford',
  model: 'Fiesta'
}
The Console API
Using the Console API you can print any object to the console. This will work on any browser.
console.log

console.log(car)

console.dir
console.dir(car)
This works exactly like console.log('%O', car)
JSON.stringify()
This will print the object as a string representation:

JSON.stringify(car)
By adding these parameters:

JSON.stringify(car, null, 2)

you can make it print more nicely. The last number determines the amount of spaces used for indentation.
JSON.stringify() has the advantage of working outside of the console, as you can print the object on the screen. Or, you can combine it with the Console API to print this in the console:

console.log(JSON.stringify(car, null, 2))
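For the car object above, that prints something like:

{
  "color": "black",
  "manufacturer": "Ford",
  "model": "Fiesta"
}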
toSource()
Similar to JSON.stringify(), toSource() is a method available on most types, but only in Firefox (and browsers based on it).
It's worth mentioning, but since it's not a standard and it's only implemented in Firefox, JSON.stringify() is a better solution.
Iterate the properties using a loop
The for...in loop is handy, as it prints the object properties:

const inspect = obj => {
  for (const prop in obj) {
    if (obj.hasOwnProperty(prop)) {
      console.log(`${prop}: ${obj[prop]}`)
    }
  }
}

inspect(car)
I use hasOwnProperty() to avoid printing inherited properties. You can decide what to do in the loop: here we print the property names and values to the console using console.log, but you could add them to a string and then print them on the page.
How to inspect in Node.js
The inspect() method exposed by the util package works great in Node.js:

const util = require('util')
util.inspect(car)
But a much better presentation is provided by console.dir(), with the colors property enabled:

console.dir(car, { colors: true })
How to generate random and unique strings in JavaScript
How I created an array of 5000 unique strings in JavaScript

As I was building the platform for my online course, I had the problem of generating a few thousand unique URLs. Every person taking the course will be assigned a unique URL. The backend knows about all those URLs and maps a valid URL to the course content.

I wanted a unique URL because I can associate a URL to a purchase email. In this way, I can avoid having a login, and at the same time having a separate URL for each person lets me block eventual abuse if that URL gets unintentionally or intentionally shared in public.

So I set out to write my Node.js script. I used the randomstring package, and I added strings to a Set object until I got the number I wanted. Using a Set means every string will be unique, because calling add and passing a duplicate string will silently do nothing.

I made a generateStrings() function that returns the set:

const generateStrings = (numberOfStrings, stringLength) => {
  const randomstring = require('randomstring')
  const s = new Set()
  while (s.size < numberOfStrings) {
    s.add(randomstring.generate(stringLength))
  }
  return s
}
I can call it using:

const strings = generateStrings(100, 20)
where 100 is the number of strings I want, and 20 is the length of each string. Once we get the set, we can iterate over it using the values() Set method:

for (const value of strings.values()) {
  console.log(value)
}
How to make your JavaScript functions sleep
Learn how to make your function sleep for a certain amount of time in JavaScript.

Sometimes you want your function to pause execution for a fixed amount of seconds or milliseconds.

In a programming language like C or PHP, you'd call sleep(2) to make the program halt for 2 seconds. Java has Thread.sleep(2000), Python has time.sleep(2), Go has time.Sleep(2 * time.Second).
JavaScript does not have a native sleep function, but thanks to the introduction of promises (and async/await in ES2017) we can implement such a feature in a very nice and readable way, to make your functions sleep:

const sleep = (milliseconds) => {
  return new Promise(resolve => setTimeout(resolve, milliseconds))
}
You can now use this with the then callback:

sleep(500).then(() => {
  //do stuff
})
Or use it in an async function:

const doSomething = async () => {
  await sleep(2000)
  //do stuff
}

doSomething()
Remember that due to how JavaScript works (read more about the event loop), this does not pause the entire program execution like it might happen in other languages, but instead only your function sleeps.
How to check if a file exists in Node.js
How to check if a file exists in the filesystem using Node.js, using the `fs` module.

The way to check if a file exists in the filesystem, using Node.js, is by using the fs.existsSync() method.
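A minimal sketch of how it can be used (the ./file.txt path is just an assumed example, mirroring the asynchronous version below):

const fs = require('fs')
const path = './file.txt'

if (fs.existsSync(path)) {
  //file exists
}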
This method is synchronous. This means that it's blocking. To check if a file exists in an asynchronous way, you can use fs.access(), which checks the existence of a file without opening it:

const fs = require('fs')
const path = './file.txt'

fs.access(path, fs.F_OK, (err) => {
  if (err) {
    console.error(err)
    return
  }
  //file exists
})
How to validate an email address in JavaScript
There are lots of ways to validate an email address. Learn the correct way, and also find out all the options you have, using plain JavaScript.

Validation of an email address is one of the common operations one does when processing a form. It's useful in contact forms, signup and login forms, and much more.

Some people suggest that you should not validate emails at all. I think a little bit of validation, without trying to be over-zealous, is better.
What are the rules that email validation should follow?
An email address is composed of 2 parts: the local part and the domain part.

The local part can contain:
any alphanumeric character: a-zA-Z0-9
punctuation: "(),:;@[\]
special characters: !#$%&'*+-/=?^_`{|}~
a dot ., if it's not the first or last character. Also, it can't be repeated

The domain part can contain:
any alphanumeric character: a-zA-Z0-9
the hyphen -, if it's not the first or last character. It can be repeated
Use a Regex
The best option to validate an email address is by using a Regular Expression.

There is no universal email check regex. Everyone seems to use a different one, and most of the regex you find online will fail the most basic email scenarios, due to inaccuracy or to the fact that they do not account for the newer domains introduced, or for internationalized email addresses.

Don't use any regular expression blindly, but check it first.
I made this example on Glitch that will check a list of email addresses considered valid against a regex. You can change the regex and compare it with other ones you want to use.

The one that's currently added is the one I consider the most accurate I found, slightly edited to fix an issue with multiple dots. Note: I did not come up with it. I found it in a Quora answer, but I am not sure that was the original source.

This is a function that validates using that regex:

const validate = (email) => {
  const expression = /(?!.*\.{2})^([a-z\d!#$%&'*+\-\/=?^_`{|}~\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]+(\.[a-z\d!#$%&'*+\-\/=?^_`{|}~\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]+)*|"((([ \t]*\r\n)?[ \t]+)?([\x01-\x08\x0b\x0c\x0e-\x1f\x7f\x21\x23-\x5b\x5d-\x7e\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]|\\[\x01-\x09\x0b\x0c\x0d-\x7f\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]))*(([ \t]*\r\n)?[ \t]+)?")@(([a-z\d\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]|[a-z\d\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF][a-z\d\-._~\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]*[a-z\d\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])\.)+([a-z\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]|[a-z\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF][a-z\d\-._~\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]*[a-z\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])\.?$/i

  return expression.test(String(email).toLowerCase())
}
All the common cases are satisfied; one can assume that 99.9% of the email addresses people will add are validated successfully. The code of the Glitch contains other regular expressions that you can easily try by remixing the project.

Although pretty accurate, there are a couple of issues with some edge cases with this regex, which you can live with (or not) depending on your needs.

False negatives for weird addresses like:

"very.(),:;[]".VERY."very@\ "very".unusual"@strange.example.com
one."more\ long"@example.website.place
False negatives for local addresses:

admin@mailserver1
user@localserver

These are of little use in publicly accessible websites (actually, having those denied is a plus in publicly accessible websites).
Also, false negatives for IP-based emails, such as:

user@[2001:DB8::1]
There is a false positive for addresses whose local part is too long:

1234567890123456789012345678901234567890123456789012345678901234+x@example.com
Do you want a simpler regex?
The above regex is very complicated, to the point I won't even try to understand it. Regular expression masters created it, and it spread through the Internet until I found it. Using it at this point is just a matter of copying and pasting it.

A much simpler solution is just to check that the address entered contains something, then an @ symbol, and then something else.
In this case, this regex will do the trick:

const expression = /\S+@\S+/
expression.test(String(email).toLowerCase())
This will cause many false positives, but after all, the ultimate test of an email address' validity happens when you ask the user to click something in the email to confirm the address, and I'd rather try to send to an invalid email than reject a valid email because of an error in my regex.

This is listed in the above Glitch, so you can easily try it.
Validate the HTML field directly
HTML5 provided us the email field type, so don't forget you can also validate emails using that.
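A minimal sketch of such a field (the name attribute value is just an example):

<input type="email" name="email" />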
Depending on the browser implementation, this validation will also give you different results.
This Glitch shows the same emails I tested the regex with, and their result when validated through the HTML form. The results are interesting, and here as well we have invalid emails that pass, and valid emails that don't. Our regex actually does a more accurate job than the HTML filtering built into the browser.
Validate server-side
If your app has a server, the server needs to validate the email as well, because you can never trust client code, and also because JavaScript might be disabled on the user's browser.

Using Node.js you have the advantage of being able to reuse the frontend code as-is. In this case the function that validates can work both client-side and server-side.

You can also use pre-made packages like isemail, but in this case as well, results vary. Here is the isemail benchmark on the same emails we used above: https://flavio-email-validationnode-isemail.glitch.me/
How to get the unique properties of a set of objects in a JavaScript array
Given an array of objects, here's what you can do if you want to get the values of a property, but not duplicated.

Suppose you have a bills array with this content:

const bills = [
  { date: '2018-01-20', amount: '220', category: 'Electricity' },
  { date: '2018-01-20', amount: '20', category: 'Gas' },
  { date: '2018-02-20', amount: '120', category: 'Electricity' }
]
and you want to extract the unique values of the category attribute of each item in the array. Here's what you can do:

const categories = [...new Set(bills.map(bill => bill.category))]
Explanation
Set is a new data structure that JavaScript got in ES6. It's a collection of unique values. We put into it the list of property values we get from using map(), which, used this way, returns this array:

['Electricity', 'Gas', 'Electricity']
Passing it through Set, we remove the duplicates. The ... is the spread operator, which expands the Set values into an array.
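With the bills array above, the result is:

categories //['Electricity', 'Gas']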
How to check if a string starts with another in JavaScript
Checking if a string starts with another substring is a common thing to do. See how to perform this check in JavaScript.

ES6, introduced in 2015, added the startsWith() method to the String object prototype. This is the way to perform this check in 2018.

This means you can call startsWith() on any string, provide a substring, and check if the result returns true or false:

'testing'.startsWith('test') //true
'going on testing'.startsWith('test') //false
This method accepts a second parameter, which lets you specify at which character you want to start checking:

'testing'.startsWith('test', 2) //false
'going on testing'.startsWith('test', 9) //true
How to create a multiline string in JavaScript
Discover how to create a multiline string.

JavaScript never had a truly good way to handle multiline strings, until 2015, when ES6 was introduced along with template literals.

Template literals are strings delimited by backticks, instead of the normal single/double quote delimiter. They have a unique feature: they allow multiline strings:

const multilineString = `A string
on multiple lines`

const anotherMultilineString = `Hey
this is cool
a multiline
st
r
i
n
g
!
`
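For comparison, a sketch of what you typically had to do before template literals: concatenate strings and add explicit newline characters:

const multilineString = 'A string\n' +
  'on multiple lines'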
How to get the current URL in JavaScript
Find out the ways JavaScript offers you to get the current URL that's opened in the browser.

To get the current URL of the page you opened in the browser using JavaScript, you can rely on the location property exposed by the browser on the window object:

window.location
Since window is the global object in the browser, the property can be simply referenced as:

location
This is a Location object which has many properties on its own:
The current page URL is exposed in location.href
Other properties of location provide useful information:

location.hostname: the host name
location.origin: the origin
location.hash: the hash, the part that follows the hash # symbol
location.pathname: the path
location.port: the port
location.protocol: the protocol
location.search: the query string
How to initialize a new array with values in JavaScript
Find out how you can initialize a new array with a set of values in JavaScript.

Simple solution:

new Array(12).fill(0)
fill() is a new method introduced in ES6.
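If you need a different value in each slot rather than the same value everywhere, one option worth knowing (not covered above) is Array.from(), which accepts a mapping function:

Array.from({ length: 12 }, (_, i) => i) //[0, 1, 2, ..., 11]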
How to create an empty file in Node.js
Discover how to create an empty file in a filesystem folder in Node.js.

The method fs.openSync() provided by the fs built-in module is the best way. It returns a file descriptor:

const fs = require('fs')
const filePath = './.data/initialized'

const fd = fs.openSync(filePath, 'w')
The w flag makes sure the file is created if it does not exist, and if the file exists it's overwritten with a new file, overriding its content. Use the a flag to avoid overwriting: the file is still created if it does not exist.

If you don't need the file descriptor, you can wrap the call in a fs.closeSync() call, to close the file:

const fs = require('fs')
const filePath = './.data/initialized'

fs.closeSync(fs.openSync(filePath, 'w'))
How to remove a file with Node.js
Discover how to remove a file from the filesystem with Node.js.

How do you remove a file from the filesystem using Node.js?

Node offers a synchronous method and an asynchronous method through the fs built-in module. The asynchronous one is fs.unlink(). The synchronous one is fs.unlinkSync().

The difference is simple: the synchronous call will cause your code to block and wait until the file has been removed. The asynchronous one will not block your code, and will call a callback function once the file has been deleted.

Here's how to use those two functions.
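A minimal sketch of both calls, assuming a ./file.txt path:

const fs = require('fs')
const path = './file.txt'

//synchronous: fs.unlinkSync()
try {
  fs.unlinkSync(path)
  //file removed
} catch (err) {
  console.error(err)
}

//asynchronous: fs.unlink()
fs.unlink(path, (err) => {
  if (err) {
    console.error(err)
    return
  }
  //file removed
})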
How to wait for the DOM ready event in plain JavaScript
How to run JavaScript as soon as we can, but not sooner.

You can do so by adding an event listener to the document object for the DOMContentLoaded event:

document.addEventListener('DOMContentLoaded', (event) => {
  //the event occurred
})
I usually don't use arrow functions for the event callback, because we cannot access this inside them. In this case we don't need to, because this is always document.

In any other event listener I would just use a regular function, for example if I'm adding the event listener inside a loop and I don't really know what this will be when the event is triggered:

document.addEventListener('DOMContentLoaded', function(event) {
  //the event occurred
})
How to add a class to a DOM element
TL;DR: Use the add() method on element.classList

When you have a DOM element reference you can add a new class to it by using the add method:

element.classList.add('myclass')
You can remove a class using the remove method:

element.classList.remove('myclass')
Implementation detail: classList is not an array, but rather it is a collection of type DOMTokenList. You can't directly edit classList because it's a read-only property. You can however use its methods to change the element classes.
How to loop over DOM elements from querySelectorAll
TL;DR: Use the for..of loop

The querySelectorAll() method, run on document, returns a list of DOM elements that satisfy the selectors query. It returns a list of elements, which is not an array but a NodeList object.

The easiest way to loop over the results is to use the for..of loop:

for (const item of document.querySelectorAll('.buttons')) {
  //...do something
}
If you are unfamiliar with the for..of loop I recommend checking out its unique features in my ECMAScript guide.
How to generate a random number between two numbers in JavaScript
The simplest possible way to randomly pick a number between two.

Use a combination of Math.floor() and Math.random().

This simple one line of code will return you a number between 1 and 6 (both included):

Math.floor(Math.random() * 6 + 1)
There are 6 possible outcomes here: 1, 2, 3, 4, 5, 6.
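To generalize to any two bounds, here is a small sketch (the randomBetween name is just illustrative) that returns an integer between min and max, both included:

const randomBetween = (min, max) => {
  return Math.floor(Math.random() * (max - min + 1) + min)
}

randomBetween(1, 6) //e.g. 4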
How to remove a class from a DOM element
TL;DR: Use the remove() method on element.classList

When you have a DOM element reference you can remove a class using the remove method:

element.classList.remove('myclass')
You can add a new class to it by using the add method:

element.classList.add('myclass')
Implementation detail: classList is not an array, but rather it is a collection of type DOMTokenList. You can't directly edit classList because it's a read-only property. You can however use its methods to change the element classes.
How to check if a DOM element has a class
How do you check if a particular DOM element you have the reference of, has a class?

Use the contains method provided by the classList object:

element.classList.contains('myclass')
Technically, classList is an object that satisfies the DOMTokenList interface, which means it implements its methods and properties. You can see its details on the DOMTokenList MDN page.
How to change a DOM node value
Given a DOM element, how do you change its value?

Change the value of the innerText property:

element.innerText = 'x'
To look up the element, combine it with the Selectors API:

document.querySelector('#today .total')
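Putting the two together, assuming an element matching that selector exists on the page:

document.querySelector('#today .total').innerText = 'x'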
How to add a click event to a list of DOM elements returned from querySelectorAll
How to iterate a NodeList and attach an event listener to each element.

You can add an event listener to all the elements returned by a document.querySelectorAll() call by iterating over those results using the for..of loop:

const buttons = document.querySelectorAll("#select .button")

for (const button of buttons) {
  button.addEventListener('click', function(event) {
    //...
  })
}
It's important to note that document.querySelectorAll() does not return an array, but a NodeList object. You can iterate it with forEach or for..of , or you can transform it to an array with Array.from() if you want.
How to get the index of an iteration in a for-of loop in JavaScript
A for-of loop, introduced in ES6, is a great way to iterate over an array:

for (const v of ['a', 'b', 'c']) {
  console.log(v)
}
How can you get the index of an iteration?

The loop does not offer any syntax to do this, but you can combine the destructuring syntax introduced in ES6 with calling the entries() method on the array:

for (const [i, v] of ['a', 'b', 'c'].entries()) {
  console.log(i, v)
}
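For the array above, that prints something like:

0 a
1 b
2 c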