AWS Server
- Not as hard to create as I imagined
- ssh keys stored as a file
- ssh -i production.pem ubuntu@shopkingsland.click
Caddy
- Caddy is a web service that listens for incoming HTTP requests
- Caddy is used to serve the application
- Important Caddy files:
- Configuration file - Contains the definitions for routing HTTP requests that Caddy receives. This is used to determine the location where static HTML files are loaded from, and also to proxy requests into the services you will create later.
- HTML files - The directory of files that Caddy serves up when requests are made to the root of your web server.
1/24/2024 Domain Name registration
- https://github.com/webprogramming260/.github/blob/main/profile/webServers/amazonWebServicesRoute53/amazonWebServicesRoute53.md
- use Route 53 in AWS to register a domain; once there, create 2 new records: one for the domain, one for subdomains.
HTTPS
- HTTP = non-secure hypertext transport protocol
- HTTPS = secure Hypertext Transport Protocol
- just HTTP, but with a negotiated secure connection that happens before data is exchanged.
- Secure connection means all data is encrypted using TLS protocol.
TLS
- TLS (SSL is its less secure predecessor) works by negotiating a shared secret that is then used to encrypt data.
- curl -v in the console shows the actual negotiation; redirecting the response to /dev/null throws away the actual HTTP response body
- Core piece of tls handshake is exchange of web certificate that identifies the domain name of the server creating the secure connection
- the browser compares the certificate's domain name to the one in the URL; if they don't match, or the certificate is invalid or out of date, it displays a prominent warning.
Web certificates
- Generated by trusted 3rd party using public/private key encryption
- issuer responsible for verifying that certificate owner actually owns the domain name represented by certificate
- once there is a certificate for your domain name, you can serve the certificate from your web server, and the browser can validate it using the public keys of the certificate issuer.
- used to cost hundreds of dollars a year to get web certificate, but small nonprofit called Let's Encrypt started creating trusted web certificates for free.
- Broke monopoly that trusted web certificate issuers had on industry
- Now anyone who owns a domain name can dynamically generate and renew a certificate for free.
- Let's Encrypt made the web safer and more reliable for everyone
- Caddy uses Let's Encrypt to generate a web certificate every time an HTTPS request is made for a domain name that Caddy does not have a web certificate for.

Enabling HTTPS
- Modern browsers expect web servers to exclusively use HTTPS for all communication.
- the next version of HTTP (v3) only supports secure connections.
- you should always support HTTPS for any web application that you build.
- obtain and renew web certificates by enabling the ACME protocol for your web server and communicating with Let's Encrypt to generate the needed certificates.
More Caddy
- Caddy has ACME support built in by default; all you need to do is configure Caddy with the domain name for your web server.
- ssh into the Ubuntu server, edit the Caddy config file so that :80 and the domain name references are replaced with your domain name, save (esc, then :wq), then restart Caddy (sudo service caddy restart)
The Console
- console window aka command line, shell, or terminal.
- Essential web development tool.
- provides access to the file system and allows for the execution of command line applications.
- many to choose from; every OS comes with a default, but the best ones require installation.
- Console Application
- must be POSIX compliant - supports standard set of console commands.
- mac and linux support POSIX
- Windows needs git bash
- don't use Git CMD, cmd, or PowerShell.
- Simple Commands
- echo - Output the parameters of the command
- cd - Change directory
- mkdir - Make directory
- rmdir - Remove directory
- rm - Remove file(s)
- mv - Move file(s)
- cp - Copy files
- ls - List files
- curl - Command line client URL browser
- grep - Regular expression search
- find - Find files
- top - View running processes with CPU and memory usage
- df - View disk statistics
- cat - Output the contents of a file
- less - Interactively output the contents of a file
- wc - Count the words in a file
- ps - View the currently running processes
- kill - Kill a currently running process
- sudo - Execute a command as a super user (admin)
- ssh - Create a secure shell on a remote computer
- scp - Securely copy files to a remote computer
- history - Show the history of commands
- ping - Check if a website is up
- tracert - Trace the connections to a website
- dig - Show the DNS information for a domain
- man - Look up a command in the manual
- Chaining commands:
- | Take the output from the command on the left and pipe, or pass, it to the command on the right
- > redirect output to a file. Overwrites file if it exists
- >> redirect output to a file. Appends if the file exists
- ex.
- ls -l | grep ' Nov ' | wc -l - lists files in the directory, pipes the list into grep to search for files modified in Nov, and then pipes that into wc to count the number of files found with a date of Nov.
- CTRL-R - use type ahead to find previous commands
- CTRL-C - Kill the currently running command
Important CSS Info
- Look at cs260 GitHub for details
- https://codepen.io/ hub for cool css styles and animation
- Importing fonts:
@font-face {
  font-family: 'Quicksand';
  src: url('');
}
p {
  font-family: Quicksand;
}
- If you don't want to host font files:
@import url('https://fonts.googleapis.com/css2?family=Rubik Microbe&display=swap');
CSS
- the viewport meta tag, located in the head element, tells the browser not to scale the page
- float moves an element to one side of its container and allows inline elements to wrap around it
- different frameworks are allowed; Bootstrap is the most popular, and Tailwind is gaining popularity
JavaScript
- I already know JS basics, nothing new here
JSON
- JavaScript Object Notation
- Provides a simple and effective way to share and store data.
- most often JSON docs contain objects. Objects contain 0 or more key-value pairs. The key is always a string, and the value must be one of the valid JSON data types. Key-value pairs are delimited with commas.
{
"class": {
"title": "web programming",
"description": "Amazing"
},
"enrollment": ["Marco", "Jana", "فَاطِمَة"],
"start": "2025-02-01",
"end": null
}
- can convert JSON to and from JS using the JSON.parse and JSON.stringify functions
const obj = { a: 2, b: 'crockford', c: undefined };
const json = JSON.stringify(obj);
const objFromJson = JSON.parse(json);
console.log(obj, json, objFromJson);
// OUTPUT:
// {a: 2, b: 'crockford', c: undefined}
// {"a":2,"b":"crockford"}
// {a: 2, b: 'crockford'}
- JSON cannot represent undefined, so it gets dropped when converting from JS to JSON
JS object and classes
- objects represent collection of name value pairs referred to as properties.
- property names must be of type String or Symbol, but value can be of any type.
- Objects also can have functionality like constructors, this pointer, static properties and functions, and inheritance.
- created with the new operator, which causes the object's constructor to be called. Once declared, you can add properties to the object by simply referencing the property name in an assignment. Any type of variable can be assigned to a property, including a sub-object, array, or function.
- properties can be referenced either with dot (obj.prop) or bracket notation (obj['prop']).
const obj = new Object({ a: 3 });
obj['b'] = 'fish';
obj.c = [1, 2, 3];
obj.hello = function () {
console.log('hello');
};
console.log(obj);
// OUTPUT: {a: 3, b: 'fish', c: [1,2,3], hello: func}
- "object" can refer to the standard JS objects (e.g. Promise, Map, Object, Function, Date, ...), or specifically to the JS Object (i.e. new Object()), or to any JS object you create (e.g. {a:'a', b:2}). The overloaded usage can be confusing.
Object literals
- can also declare a variable of object type with the object-literal syntax, which allows you to provide the initial composition of the object.
const obj = {
a: 3,
b: 'fish',
};
Object functions
- Object has several interesting static functions associated with it:
- entries: returns an array of key value pairs
- keys: returns array of keys
- values: returns array of values
const obj = {
a: 3,
b: 'fish',
};
console.log(Object.entries(obj));
// OUTPUT: [['a', 3], ['b', 'fish']]
console.log(Object.keys(obj));
// OUTPUT: ['a', 'b']
console.log(Object.values(obj));
// OUTPUT: [3, 'fish']
Constructor
- any function that returns an object is considered a constructor and can be invoked with the new operator
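A quick sketch of that idea (Labeler is a made-up name for illustration): a plain function that assigns to this, invoked with new, produces an object.

```javascript
// Any function invoked with `new` acts as a constructor;
// `this` inside it is the newly created object.
// (Labeler is a hypothetical example name.)
function Labeler(label) {
  this.label = label;
}

const l = new Labeler('tag');
console.log(l.label);
// OUTPUT: tag
```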
Classes
- classes define objects. Using a class clarifies the intent to create a reusable component rather than a one-off object. Class declarations look similar to declaring an object, but classes have an explicit constructor and assumed function declarations.
- You can make properties and functions of classes private by prefixing them with a #.
class Person {
  #name;
  constructor(name) {
    this.#name = name;
  }
  print() {
    return 'My name is ' + this.#name; // used by the Employee example below
  }
}
const p = new Person('Eich');
p.#name = 'Lie';
// OUTPUT: Uncaught SyntaxError: Private field '#name' must be declared in an enclosing class
Inheritance
- classes can be extended by using the extends keyword to define inheritance. Parameters that need to be passed to the parent class are delivered using the super function. Any functions defined on the child that have the same name as the parent override the parent's implementation. A parent's function can be explicitly accessed using the super keyword.
class Employee extends Person {
constructor(name, position) {
super(name);
this.position = position;
}
print() {
return super.print() + '. I am a ' + this.position;
}
}
const e = new Employee('Eich', 'programmer');
console.log(e.print());
// OUTPUT: My name is Eich. I am a programmer
REGEX
- regex support built right into JS.
- you can create regex using the class constructor or a regex literal
const objRegex = new RegExp('ab*', 'i');
const literalRegex = /ab*/i;
- the String class has several functions that accept regex:
- match, replace, search, and split
const petRegex = /(dog)|(cat)|(bird)/gim;
const text = 'Both cats and dogs are pets, but not rocks.';
text.match(petRegex);
// RETURNS: ['cat', 'dog']
text.replace(petRegex, 'animal');
// RETURNS: Both animals and animals are pets, but not rocks.
petRegex.test(text);
// RETURNS: true
Rest
- parameter that contains the rest of the parameters
function hasNumber(test, ...numbers) {
return numbers.some((i) => i === test);
}
hasNumber(2, 1, 2, 3);
// RETURNS: true
Spread
- opposite of rest. Takes an object that is iterable and expands it into a function's parameters.
function person(firstName, lastName){
return { first: firstName, last: lastName};
}
const p = person(...['Ryan', 'Dahl']);
console.log(p);
// OUTPUT: {first: 'Ryan', last: 'Dahl'}
Exceptions
- JS supports exception handling using the try, catch, and throw syntax. An exception can be triggered whenever your code throws one using the throw keyword, or whenever the JS runtime generates one, for example, when an undefined variable is used.
function connectDatabase() {
throw new Error('connection error');
}
try {
connectDatabase();
console.log('never executed');
} catch (err) {
console.log(err);
} finally {
console.log('always executed');
}
// OUTPUT: Error: connection error
// always executed
- Throwing exceptions should only happen when something truly exceptional occurs, e.g. a file-not-found exception when the file is required for the code to run, such as a required configuration file. Code will be easier to debug, and logs more meaningful, if you restrict exceptions to truly exceptional situations.
Fallbacks
- commonly implemented using exception handling.
- put the normal feature path in a try block and provide a fallback implementation in the catch block.
- ex normally you would get the high scores for a game by making a network request, but if the network is not available then a locally cached version of the last available scores is used.
- by providing fallback, you can always return something, even if the desired feature is temporarily unavailable.
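A minimal sketch of the pattern (fetchScores and cachedScores are hypothetical names): the normal path throws, and the catch block returns the cached data instead.

```javascript
// Fallback pattern: try the normal feature path, return cached data if it fails.
// (fetchScores and cachedScores are hypothetical names for illustration.)
function fetchScores() {
  throw new Error('network unavailable'); // simulate the network being down
}

function cachedScores() {
  return [100, 90, 80]; // last scores saved locally
}

function getScores() {
  try {
    return fetchScores(); // normal feature path
  } catch {
    return cachedScores(); // fallback implementation
  }
}

console.log(getScores());
// OUTPUT: [100, 90, 80]
```

The caller always gets something back, even while the desired feature is temporarily unavailable.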
Destructuring
- not destructing
- process of pulling individual items out of an existing structure, i.e. removing structure. You can do this with either arrays or objects. Helpful when you only care about a few items in the original structure.
- examples:
const a = [1, 2, 4, 5];
const [b, c, ...others] = a;
console.log(b, c, others);
// OUTPUT: 1, 2, [4,5]
const o = { a: 1, b: 'animals', c: ['fish', 'cats'] };
const { a, c } = o;
console.log(a, c);
// OUTPUT: 1, ['fish', 'cats']
const o = { a: 1, b: 'animals', c: ['fish', 'cats'] };
const { a: count, b: type } = o;
console.log(count, type);
// OUTPUT: 1, animals
const { a, b = 22 } = {};
const [c = 44] = [];
console.log(a, b, c);
// OUTPUT: undefined, 22, 44
let a = 22;
[a] = [1, 2, 3];
console.log(a);
// OUTPUT: 1
Scope
- JS has four types of scope:
- Global - Visible to all code
- Module - Visible to all code running in a module
- Function - Visible within a function
- Block - Visible within a block of code delimited by curly braces
- Var
- ignores block scope
- always logically hoisted to the top of the function
- use const and let unless you fully understand why you are using var
- This
- the value of this depends on the context in which it is referenced. 3 contexts:
- Global - when outside a function or object, this refers to the globalThis object, which represents the context for the runtime environment
- Function - when inside a function, this refers to the object that owns the function: either an object you defined, or globalThis if the function is outside an object
- Object - when inside an object, this refers to the object
- Closure
- defined as a function and its surrounding state
- whatever variables are accessible when a function is created are available inside that function. This holds true even if you pass the function outside the scope of its original creation.
- ex of a function created as part of an object; this means the function has access to the object's this pointer:
globalThis.x = 'global';
const obj = {
x: 'object',
f: function () {
console.log(this.x);
},
};
obj.f();
// OUTPUT: object
- arrow functions work differently:
globalThis.x = 'global';
const obj = {
x: 'object',
f: () => console.log(this.x),
};
obj.f();
// OUTPUT: global
- but if we make the function return an arrow function, then the this pointer will be the object's this pointer, since that was the active context at the time the arrow function was created:
globalThis.x = 'global';
const obj = {
x: 'object',
make: function () {
return () => console.log(this.x);
},
};
const f = obj.make();
f();
// OUTPUT: object
JavaScript Modules
- allow for the partitioning and sharing of code.
- Node.js, a server-side JS execution application, introduced the concept to support importing packages of JS from third party providers
- Node.js modules are called CommonJS modules; JS modules are called ES modules
- modules create file-based scope for the code they represent, therefore you must explicitly export the objects from one file and then import them into another file.
export function alertDisplay(msg){
alert(msg);
}
import { alertDisplay } from './alert.js';
alertDisplay('called from main.js');
ES modules in browser
- more complicated.
- modules can only be called from other modules
- you can import modules from html
<script type="module">
import { alertDisplay } from './alert.js';
alertDisplay('module loaded');
</script>
- if you want to put it in global scope, add it to the window object:
<html>
<body>
<script type="module">
import { alertDisplay } from './alert.js';
window.btnClick = alertDisplay;
document.body.addEventListener('keypress', function (event) {
alertDisplay('Key pressed');
});
</script>
<button onclick="btnClick('button clicked')">Press me</button>
</body>
</html>
Modules with web frameworks
- usually no need to worry about differentiating between global and ES module scope.
Document Object Model
- object representation of the HTML elements that the browser uses to render the display.
- browser also exposes the DOM to external code so that you can write programs that dynamically manipulate the HTML.
- the browser provides access to the DOM through a global variable named document that points to the root element of the DOM.
- if you open the browser's debugger console window and type the variable name document, you will see the DOM for the document the browser is currently rendering.
Accessing the DOM
- Every element in an HTML document implements the DOM element interface, which is derived from the DOM Node interface.
- the DOM Element interface provides the means for iterating child elements, accessing the parent element, and manipulating the element's attributes. From your JS code, you can start with the document variable and walk through every element in the tree.
function displayElement(el) {
console.log(el.tagName);
for (const child of el.children) {
displayElement(child);
}
}
displayElement(document);
- provide a CSS selector to the querySelectorAll function in order to select elements from the document. The textContent property contains all of the element's text.
- you can even access a textual representation of an element's HTML content with the innerHTML property.
const listElements = document.querySelectorAll('p');
for (const el of listElements) {
console.log(el.textContent);
}
Modifying the DOM
- DOM supports the ability to insert, modify, or delete the elements in the DOM.
- To create a new element you first create the element on the DOM document
- then insert the new element into DOM tree by appending it to an existing element in the tree
function insertChild(parentSelector, text) {
const newChild = document.createElement('div');
newChild.textContent = text;
const parentElement = document.querySelector(parentSelector);
parentElement.appendChild(newChild);
}
insertChild('#courses', 'new course');
- to delete elements, call the removeChild function on the parent element:
function deleteElement(elementSelector) {
const el = document.querySelector(elementSelector);
el.parentElement.removeChild(el);
}
deleteElement('#courses div');
Injecting HTML
- DOM also allows you to inject entire blocks of HTML into an element.
- following code finds first div element in DOM and replaces all the HTML it contains
const el = document.querySelector('div');
el.innerHTML = '<div class="injected"><b>Hello</b>!</div>';
- however, directly injecting HTML as a block of text is a common attack vector for hackers.
- if untrusted party can inject JavaScript anywhere in your application then that JS can represent itself as the current user of the application. The attacker can then make requests for sensitive data, monitor activity, and steal credentials.
- Ex below shows how the img element can be used to launch an attack as soon as the page is loaded.
<img src="bogus.png" onerror="console.log('All your base are belong to us')" />
- if you are injecting HTML, make sure that it cannot be manipulated by a user. Common injection paths include HTML input controls, URL parameters, and HTTP headers.
- Either sanitize any HTML that contains variables, or simply use DOM manipulation functions instead of innerHTML.
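A minimal sketch of the sanitize option (escapeHtml is a made-up helper name, not a browser API; production apps should rely on a vetted sanitizer library):

```javascript
// Minimal HTML-escaping sketch. escapeHtml is a hypothetical helper;
// real applications should use a vetted sanitizer library instead.
function escapeHtml(text) {
  return text
    .replace(/&/g, '&amp;') // must run first so later entities aren't double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

console.log(escapeHtml('<img onerror="attack()">'));
// OUTPUT: &lt;img onerror=&quot;attack()&quot;&gt;
```

Escaped text renders as literal characters instead of being parsed as markup, so injected script never executes.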
Event Listeners
- All DOM elements support the ability to attach a function that gets called when an event occurs on the element. (Event listeners)
- ex:
const submitDataEl = document.querySelector('#submitData');
submitDataEl.addEventListener('click', function (event) {
console.log(event.type);
});
- lots of possible events you can add a listener to, including mouse, keyboard, scrolling, animation, video, audio, WebSocket, and clipboard events.
- Commonly used events:
- clipboard - cut, copied, pasted
- focus - an element gets focus
- keyboard - keys are pressed
- mouse - click events
- text selection - when text is selected
- you can add event listeners directly in HTML
<button onclick='alert("clicked")'>click me</button>
Local Storage
- the localStorage API provides the ability to persistently store and retrieve data (i.e. scores, usernames, etc.) on a user's browser across user sessions and HTML page renderings.
- your frontend JS code could store a user's name on one HTML page, and then retrieve the name later when a different HTML page is loaded.
- same user's name will also be available in local storage the next time the same browser is used to access the same website.
- Also used as a cache for when data cannot be obtained from the server.
- ex. frontend JS could store the last high scores obtained from the service, and then display those scores in the future if the service not available.
- four main functions:
- setItem(name, value) - sets a named item's value in local storage
- getItem(name) - gets a named item's value from local storage
- removeItem(name) - removes a named item from local storage
- clear() - clears all items in local storage
- objects and arrays need to be converted to a json string using JSON.stringify() on insertion, and JSON.parse() when retrieved.
- in devtools, Application, then Storage > local storage and then your domain name will let you add, view, update, and delete any storage values
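The JSON round-trip mentioned above looks like this (localStorage is the browser API; a minimal in-memory stand-in is included here so the snippet can run outside a browser):

```javascript
// localStorage only stores strings, so objects must be JSON round-tripped.
// Minimal in-memory stand-in so this sketch runs outside a browser;
// in frontend code you would use the browser's localStorage directly.
const localStorage = {
  store: {},
  setItem(name, value) { this.store[name] = String(value); },
  getItem(name) { return name in this.store ? this.store[name] : null; },
};

const user = { name: 'Bud', highScore: 42 };
localStorage.setItem('user', JSON.stringify(user)); // convert on insertion
const restored = JSON.parse(localStorage.getItem('user')); // parse on retrieval
console.log(restored.highScore);
// OUTPUT: 42
```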
Promise
- long running or blocking tasks should be executed with the use of a JS Promise.
- a promise allows the main rendering thread to continue while some action is executed in the background.
- make promise by calling Promise object constructor and passing it an executor function that runs the asynchronous operation.
- asynchronously means that promise constructor may return before the promise executor function runs.
- a promise is always in one of three states:
- pending - running asynchronously
- fulfilled - completed successfully
- rejected - failed to complete
- Resolving and rejecting
- Promise executor function takes two functions as parameters, resolve and reject.
- calling resolve sets promise to fulfilled state, calling reject sets the promise to the rejected state.
const coinToss = new Promise((resolve, reject) => {
setTimeout(() => {
if (Math.random() > 0.5) {
resolve('success');
} else {
reject('error');
}
}, 10000);
});
Then, Catch, Finally
- promise object has three functions: then, catch, and finally.
- the then function is called if the promise is fulfilled, catch is called if the promise is rejected, and finally is always called after all the processing is completed.
const coinToss = new Promise((resolve, reject) => {
setTimeout(() => {
if (Math.random() > 0.1) {
resolve(Math.random() > 0.5 ? 'heads' : 'tails');
} else {
reject('fell off table');
}
}, 10000);
});
coinToss
.then((result) => console.log(`Coin toss result: ${result}`))
.catch((err) => console.log(`Error: ${err}`))
.finally(() => console.log('Toss completed'));
// OUTPUT:
// Coin toss result: tails
// Toss completed
JS Async/await
- the await keyword wraps the execution of a promise and removes the need to chain functions.
- an await expression will block until the promise state moves to fulfilled, or throw an exception if the state moves to rejected.
- two ways to do it: then/catch chain version (see above), and async, try/catch version
try {
const result = await coinToss();
console.log(`Toss result ${result}`);
} catch (err) {
console.error(`Error: ${err}`);
} finally {
console.log(`Toss completed`);
}
- ASYNC
- a restriction with await is that you cannot use it unless you are at the top level of the JS, or inside a function defined with the async keyword.
- applying async keyword transforms the function so that it returns a promise that will resolve to the value that was previously returned by the function.
- turns any function into asynchronous function, so that it can in turn make asynchronous requests
async function cow() {
return 'moo';
}
console.log(cow());
// OUTPUT: Promise {<fulfilled>: 'moo'}
- we can change it to explicitly create a promise instead of the auto-generated promise that the async keyword generates:
async function cow() {
return new Promise((resolve) => {
resolve('moo');
});
}
console.log(cow());
// OUTPUT: Promise {<pending>}
- AWAIT
- async declares that a function returns a promise.
- await wraps a call to the async function, blocks execution until the promise has resolved, and then returns the result of the promise.
console.log(cow());
// OUTPUT: Promise {<pending>}
console.log(await cow());
// OUTPUT: moo
- TOGETHER
- by combining async, to define functions that return promises, with await, to wait on the promise, you can create code that is asynchronous but still maintains the flow of the code without explicitly using callbacks.
- promise implementation:
const httpPromise = fetch('https://simon.cs260.click/api/user/me');
const jsonPromise = httpPromise.then((r) => r.json());
jsonPromise.then((j) => console.log(j));
console.log('done');
// OUTPUT: done
// OUTPUT: {email: 'bud@mail.com', authenticated: true}
- with async/await, you can clarify the code's intent by hiding the promise syntax, and also make execution block until the promise is resolved:
const httpResponse = await fetch('https://simon.cs260.click/api/user/me');
const jsonResponse = await httpResponse.json();
console.log(jsonResponse);
console.log('done');
// OUTPUT: {email: 'bud@mail.com', authenticated: true}
// OUTPUT: done
Debugging JS
- console debugging
- insert console.log functions that output the state of the code as it executes.
var varCount = 20;
let letCount = 20;
console.log('Initial - var: %d, let: %d', varCount, letCount);
for (var varCount = 1; varCount < 2; varCount++) {
for (let letCount = 1; letCount < 2; letCount++) {
console.log('Loop - var: %d, let: %d', varCount, letCount);
}
}
const h1El = document.querySelector('h1');
h1El.textContent = `Result - var:${varCount}, let:${letCount}`;
console.log('Final - var: %d, let: %d', varCount, letCount);
- you can also type the names of variables in the console window, and execute JS in the console.
- Browser debugging
- source tab can add breakpoints, which will execute when you reload the page
The internet
- globally connects independent networks and computing devices
- when devices want to talk to one another, each must have an IP address (e.g. 128.187.16.184 is BYU's)
- symbolic (domain) names usually preferred.
- traceroute console utility lets you see hops in a connection
➜ traceroute byu.edu
traceroute to byu.edu (128.187.16.184), 64 hops max, 52 byte packets
1 192.168.1.1 (192.168.1.1) 10.942 ms 4.055 ms 4.694 ms
2 * * *
3 * * *
4 192-119-18-212.mci.googlefiber.net (192.119.18.212) 5.369 ms 5.576 ms 6.456 ms
5 216.21.171.197 (216.21.171.197) 6.283 ms 6.767 ms 5.532 ms
6 * * *
7 * * *
8 * * *
9 byu.com (128.187.16.184) 7.544 ms !X * 40.231 ms !X
- every route is dynamically calculated.
- TCP/IP model is a layered architecture that covers everything from the physical wires to the data that a web application sends
- top layer is application layer, represents user functionality, such as web, mail, files, remote shell, and chat.
- next is transport layer which breaks application layer's information into small chunks and sends the data.
- actual connection made using internet layer
- last is link layer which deals with the physical connections and hardware.
- Layer - Example - Purpose:
- Application - HTTPS - Functionality like web browsing
- Transport - TCP - Moving connection information packets
- Internet - IP - Establishing connections
- Link - Fiber, hardware - Physical connections
Web Servers
- computing device that is hosting a web service that knows how to accept incoming internet connections and speak the HTTP application protocol.
- today most modern programming languages include libraries that provide the ability to make connections and serve up HTTP.
- ex. in Go:
package main
import (
"net/http"
)
func main() {
// Serve up files found in the public_html directory
fs := http.FileServer(http.Dir("./public_html"))
http.Handle("/", fs)
// Listen for HTTP requests
http.ListenAndServe(":3000", nil)
}
- being able to easily create web services makes it easy to completely drop the monolithic web server concept and just build web services right into your web application.
- we can add function that responds with the current time, when the /api/time resource is requested.
package main
import (
"fmt"
"io"
"net/http"
"time"
)
func getTime(w http.ResponseWriter, r *http.Request) {
io.WriteString(w, time.Now().String())
}
func main() {
// Serve up files found in the public_html directory
fs := http.FileServer(http.Dir("./public_html"))
http.Handle("/", fs)
// Dynamically provide data
http.HandleFunc("/api/time", getTime)
// Listen for HTTP requests
fmt.Println(http.ListenAndServe(":3000", nil))
}
- gateways (simple web services) listen on the common HTTPS port 443, look at each request, and map it to the other services running on different ports
- we use caddy
- microservices are web services that provide a single functional purpose
- by partitioning functionality into small logical chunks, you can develop and manage them independently of other functionality in a larger system.
- they can also handle large fluctuations in user demand by simply running more and more stateless copies of the microservice from multiple virtual servers hosted in a dynamic cloud environment.
- ex one microservice for generating your genealogical family tree might be able to handle 1000 users concurrently, so in order to support 1 million users, you just deploy 1000 instances of the service running on scalable virtual hardware
- serverless
- evolved from microservices, is where the server is conceptually removed from the architecture, and you just write a function that speaks HTTP. function loaded through a gateway that maps a web request to the function
- gateway automatically scales the hardware needed to host the serverless function based on demand. This reduces what the web application developer needs to think about down to a single independent function.
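A sketch of what such a function can look like (the (event) => { statusCode, body } shape follows the AWS Lambda convention; the time endpoint here is a hypothetical example mirroring the Go service above):

```javascript
// Sketch of a serverless function: just a function that speaks HTTP.
// The (event) => ({ statusCode, body }) shape follows the AWS Lambda
// convention; a gateway maps each web request to this function and
// scales the hardware hosting it on demand.
const handler = async (event) => ({
  statusCode: 200,
  body: JSON.stringify({ time: new Date().toISOString() }),
});
```

No server process is managed by the developer; the gateway invokes handler once per request.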
Domain names
- you can get the ip address for any domain using dig console utility.
- sometimes there are multiple IP addresses for one domain name, which allows for redundancy in case an address fails to resolve.
- the top-level domain (TLD) is the part after the last dot (com, edu, click)
- root domain would look like "byu.edu", "google.com", or "cs260.click".
- you can find more information by using whois console utility
- DNS (Domain name service) allows you to associate a domain name with an ip address, but you have to also lease the ip address before you can use it to uniquely identify a device on the internet
- main records that facilitate mapping domain names to IP addresses:
- address (A) - a straight mapping from domain name to IP address
- canonical name (CNAME) - maps one domain name to another domain name; acts as a domain name alias. You would use it to map byu.com to the same IP as byu.edu.
- when entering domain name into browser:
- browser checks if it's in the cache, and if not, contacts DNS server and gets IP address. DNS server also keeps a cache of names.
- if not in that cache, will request name from an authoritative name server. if no recognition there you will get an unknown domain name error.
- if process resolves, browser makes the HTTP connection to associated IP address.
- Leasing a domain name:
- you can pay to lease an unused domain name for a specific period of time. before it expires, you can choose to extend.
- prices vary from around $3 to $200 a year.
- buying or subleasing from a private party can be very expensive, and you are better off buying something obscure. One reason why companies have such strange names these days.
Web services intro
- from frontend JS we can make requests to external services running anywhere in the world.
- This allows us to get external data that we then inject into the DOM for the user to read.
- To make a web service request, we supply the URL of the web service to the fetch function that is built into the browser.
- Next step in building a full stack web app is create our own web service.
- it will provide the static frontend files along with functions to handle fetch requests for things like storing data persistently, providing security, running tasks, executing application logic that you don't want your user to be able to see, and communicating with other users.
- web service functionality represents backend of app
- web service functions generally called endpoints, or APIs.
- access web service endpoints from frontend JS with fetch function.
URL
- uniform resource locator
- represents the location of a web resource, such as a web page, font, image, video stream, database record, JSON object, visitation counter, or gaming session
- looking at different parts is a good way to understand what it represents.
- URL syntax:
- <scheme>://<domain name>:<port>/<path>?<parameters>#<anchor>
- related terms: URN and URI
- Uniform Resource Name (URN) is a unique resource name that does not specify location information
- Uniform Resource Identifier (URI) is a general resource identifier that could refer to either a URL or a URN.
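The URL parts listed above can be inspected with the `URL` class built into browsers and Node.js. A quick sketch using a made-up address:

```javascript
// Parse a sample URL (hypothetical address) into its component parts
// using the WHATWG URL class built into browsers and Node.js.
const url = new URL('https://sub.example.com:3000/stores/provo?filter=open#hours');

console.log(url.protocol); // 'https:'  (scheme, with trailing colon)
console.log(url.hostname); // 'sub.example.com'
console.log(url.port);     // '3000'
console.log(url.pathname); // '/stores/provo'
console.log(url.search);   // '?filter=open'
console.log(url.hash);     // '#hours'
```

Each property maps directly onto a piece of the syntax pattern above.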
Ports
- to connect to a device on the internet you need both an IP address and a numbered port.
- port numbers allow a single device to support multiple protocols as well as different services.
- ports may be exposed externally, or only used internally.
- internet governing body, IANA, defines standard usage for port numbers:
- 0-1023: standard protocols. A web service should avoid these unless it is providing the protocol represented by the standard.
- 1024-49151: ports that have been assigned to requesting entities, but it is common to find these used by services running internally.
- 49152-65535: considered dynamic and used to create dynamic connections to a device.
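The three IANA ranges above can be checked with a small helper. This is just for illustration, not an official API:

```javascript
// Classify a port number into the three IANA ranges described above.
// Illustrative helper only.
function portRange(port) {
  if (port >= 0 && port <= 1023) return 'well-known';
  if (port >= 1024 && port <= 49151) return 'registered';
  if (port >= 49152 && port <= 65535) return 'dynamic';
  return 'invalid';
}

console.log(portRange(443));   // 'well-known' (HTTPS)
console.log(portRange(3000));  // 'registered' (commonly used internally)
console.log(portRange(60000)); // 'dynamic'
```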
- common port numbers:
  - 20 - File Transfer Protocol (FTP) for data transfer
  - 22 - Secure Shell (SSH) for connecting to remote devices
  - 25 - Simple Mail Transfer Protocol (SMTP) for sending email
  - 53 - Domain Name System (DNS) for looking up IP addresses
  - 80 - Hypertext Transfer Protocol (HTTP) for web requests
  - 110 - Post Office Protocol (POP3) for retrieving email
  - 123 - Network Time Protocol (NTP) for managing time
  - 161 - Simple Network Management Protocol (SNMP) for managing network devices such as routers or printers
  - 194 - Internet Relay Chat (IRC) for chatting
  - 443 - HTTP Secure (HTTPS) for secure web requests
- when you built your web server you externally exposed port 22 so that you could use SSH to open remote console on the server, port 443 for secure HTTP communication, and port 80 for unsecure HTTP communication
- Caddy listens on ports 80 and 443. When Caddy gets a request on port 80, it automatically redirects the request to port 443 so that a secure connection is used.
- Internally you can have as many web services running as you would like, but each should use a different port to communicate on
- the simon service runs on port 3000, so the startup service can't also use 3000; it uses 4000 instead
- doesn't matter what high range port you use, only matters you are consistent and that they are only used by one service.
HTTP
- how the web talks
- when browser makes a request to a web server it does it using the HTTP protocol
- when a web client (browser) and a server talk they exchange HTTP requests and responses. browser will make an HTTP request and the server will generate an HTTP response.
- this exchange can be seen using curl -v
- Request syntax:
<verb> <url path, parameters, anchor> <version>
[<header key: value>]*
[<body>]
- the first line of an HTTP request contains the verb of the request, followed by the path, parameters, and anchor of the URL, and finally the version of HTTP being used
- the following lines are optional headers defined by key value pairs
- after the headers you have an optional body
- the start of the body is delimited from the headers with two new lines
- Response syntax:
<version> <status code> <status message>
[<header key: value>]*
[<body>]
- response syntax is similar to request syntax
- the major difference is that the first line represents the version and the status of the response
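The request syntax above can be illustrated by building a raw request string. A minimal sketch (function name and values are hypothetical):

```javascript
// Build a raw HTTP request string following the syntax above:
// verb line, optional headers, blank line, optional body.
function buildRequest(verb, path, headers = {}, body = '') {
  const lines = [`${verb} ${path} HTTP/1.1`];
  for (const [key, value] of Object.entries(headers)) {
    lines.push(`${key}: ${value}`);
  }
  // The blank line (two CRLFs) separates the headers from the body.
  return lines.join('\r\n') + '\r\n\r\n' + body;
}

const request = buildRequest('GET', '/index.html', { Host: 'info.cern.ch' });
console.log(request);
// GET /index.html HTTP/1.1
// Host: info.cern.ch
//
```

Real clients like fetch and curl generate this text for you; seeing it spelled out makes the verb/headers/body structure concrete.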
Verbs:
- GET - Get the requested resource. This can represent a request to get a single resource or a resource representing a list of resources.
- POST - Create a new resource. The body of the request contains the resource. The response should include a unique ID of the newly created resource.
- PUT - Update a resource. Either the URL path, HTTP header, or body must contain the unique ID of the resource being updated. The body of the request should contain the updated resource. The body of the response may contain the resulting updated resource.
- DELETE - Delete a resource. Either the URL path or HTTP header must contain the unique ID of the resource to delete.
- OPTIONS - Get metadata about a resource. Usually only HTTP headers are returned. The resource itself is not returned.
Status Codes:
- 1xx - informational
- 2xx - success
- 3xx - redirect to some other location, or that a previously cached resource is still valid
- 4xx - client errors; the request is invalid
- 5xx - server errors; the request cannot be satisfied due to an error on the server
- 100 Continue - The service is working on the request.
- 200 Success - The requested resource was found and returned as appropriate.
- 201 Created - The request was successful and a new resource was created.
- 204 No Content - The request was successful but no resource is returned.
- 304 Not Modified - The cached version of the resource is still valid.
- 307 Temporary Redirect - The resource is temporarily located at a different location. The temporary location is specified in the response location header.
- 308 Permanent Redirect - The resource is no longer at the requested location. The new location is specified in the response location header.
- 400 Bad Request - The request was malformed or invalid.
- 401 Unauthorized - The request did not provide a valid authentication token.
- 403 Forbidden - The provided authentication token is not authorized for the resource.
- 404 Not Found - An unknown resource was requested.
- 408 Request Timeout - The request took too long.
- 409 Conflict - The provided resource represents an out of date version of the resource.
- 418 I'm a Teapot - The service refuses to brew coffee in a teapot.
- 429 Too Many Requests - The client is making too many requests in too short a time period.
- 500 Internal Server Error - The server failed to properly process the request.
- 503 Service Unavailable - The server is temporarily down. The client should try again with an exponential back off.
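The 1xx-5xx classes above follow directly from the first digit of the code, which a tiny helper can demonstrate (names are illustrative):

```javascript
// Map a status code to its general class, mirroring the 1xx-5xx groups above.
function statusClass(code) {
  const classes = {
    1: 'informational',
    2: 'success',
    3: 'redirect',
    4: 'client error',
    5: 'server error',
  };
  return classes[Math.floor(code / 100)] ?? 'unknown';
}

console.log(statusClass(200)); // 'success'
console.log(statusClass(404)); // 'client error'
console.log(statusClass(503)); // 'server error'
```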
Headers:
- specify metadata about a request or response
- Authorization: Bearer bGciOiJIUzI1NiIsI - A token that authorizes the user making the request.
- Accept: image/* - The format the client accepts. This may include wildcards.
- Content-Type: text/html; charset=utf-8 - The format of the content being sent. These are described using standard MIME types.
- Cookie: SessionID=39s8cgj34; csrftoken=9dck2 - Key value pairs generated by the server and stored on the client.
- Host: info.cern.ch - The domain name of the server. This is required in all requests.
- Origin: cs260.click - Identifies the origin that caused the request. A host may only allow requests from specific origins.
- Access-Control-Allow-Origin: https://cs260.click - Server response of which origins can make a request. This may include a wildcard.
- Content-Length: 368 - The number of bytes contained in the response.
- Cache-Control: public, max-age=604800 - Tells the client how it can cache the response.
- User-Agent: Mozilla/5.0 (Macintosh) - The client application making the request.
Body:
- the format of the body is defined by the Content-Type header. It may be HTML text, a binary image format, JSON, or JavaScript.
Cookies:
- HTTP itself is stateless, meaning one HTTP request does not know anything about a previous or future request. However, that does not mean that a server or client cannot track state across requests
- cookies are common methods for tracking state
- generated by a server and passed to client as HTTP header
- client then caches the cookie and returns it as an HTTP header back to the server on subsequent requests
- allows the server to remember things like a user's language preference or authentication credentials.
- server can use cookies to track and share everything that a user does.
- nothing inherently evil about cookies; problem comes from web applications that use them as a means to violate user's privacy or inappropriately monetize their data.
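The cookie round-trip above carries the pairs in a single header string. A minimal sketch of parsing it (real servers typically use a library such as the cookie-parser middleware):

```javascript
// Parse a Cookie request header ("key=value" pairs separated by ";")
// into a plain object. Minimal illustrative sketch only.
function parseCookies(header) {
  const cookies = {};
  for (const pair of header.split(';')) {
    const index = pair.indexOf('=');
    if (index === -1) continue; // skip malformed fragments
    cookies[pair.slice(0, index).trim()] = pair.slice(index + 1).trim();
  }
  return cookies;
}

console.log(parseCookies('SessionID=39s8cgj34; csrftoken=9dck2'));
// { SessionID: '39s8cgj34', csrftoken: '9dck2' }
```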
HTTP Versions:
- HTTP continually evolves in order to increase performance and support new types of apps.
FETCH
- fetch api is preferred way to make HTTP requests today
- fetch built into browser's JS runtime. You can call it from JS code running in a browser.
- basic usage takes a URL and returns a promise. The promise's then function takes a callback that is asynchronously called when the requested URL content is obtained.
- if returned content is of type application/json you can use the json function on the response object to convert it to a JS object
- following makes fetch request to get and display inspirational quote
fetch('https://api.quotable.io/random')
.then((response) => response.json())
.then((jsonResponse) => {
console.log(jsonResponse);
});
- response:
{
content: 'Never put off till tomorrow what you can do today.',
author: 'Thomas Jefferson',
}
- To do a POST request you populate the options parameter with the HTTP method and headers:
fetch('https://jsonplaceholder.typicode.com/posts', {
method: 'POST',
body: JSON.stringify({
title: 'test title',
body: 'test body',
userId: 1,
}),
headers: {
'Content-type': 'application/json; charset=UTF-8',
},
})
.then((response) => response.json())
.then((jsonResponse) => {
console.log(jsonResponse);
});
Node.js
- created in 2009. First successful application for deploying JS outside of a browser.
- changed the js mindset from browser technology to one that could run on the server as well.
- means that js can power your entire technology stack. one language to rule them all.
- Node.js is often just referred to as Node, and is currently maintained by the OpenJS Foundation.
- production env web server comes with Node.js already installed.
- easiest way to install node.js is first install the Node Version Manager (NVM) and use it to install and manage Node
- node -v
- you can execute a line of js with Node.js from your console with the -e parameter
- node -e "console.log(1+1)"
- to do real work you need to execute an entire project composed of many files.
- Do this by making single starting JS file, named something like index.js that references code found in the rest of your project.
- preexisting packages of JS for implementing common tasks helpful
- to load a package using node.js:
- install package locally on your machine using NPM then include a require statement in your code that references package name.
- NPM automatically installed when you install Node.js
- NPM needs to be initialized.
- npm init in js directory with index.js
- npm init -y will accept all defaults
- the generated package.json file contains:
- metadata about the project such as name and default entry JS file
- commands/scripts you can execute to do things like run, test, or distribute your code
- packages that this project depends upon
- include node_modules in your .gitignore file.
- when cloning, first run npm install in project directory, and NPM will download all of the previously installed packages and recreate the node_modules directory.
- main steps:
  - Create your project directory
  - Initialize it for use with NPM by running npm init -y
  - Make sure your .gitignore file contains node_modules
  - Install any desired packages with npm install <package name>
  - Add require('<package name>') to your application's JavaScript
  - Use the code the package provides in your JavaScript
  - Run your code with node index.js
- with js we can write code that listens on a network port, receives HTTP requests, processes them, and then responds. We can use this to create a simple web service that we then execute using Node.js
- express provides support for
- Routing requests for service endpoints
- Manipulating HTTP requests with JSON body content
- Generating HTTP responses
- Using middleware to add functionality
- create express application by using NPM to install Express package then calling express constructor to create the express app and listen for HTTP requests on a desired port.
const express = require('express');
const app = express();
app.listen(8000);
- the Express app object supports all HTTP verbs as functions on the object
- if you want to have a route function that handles an HTTP GET request for the URL path /store/provo you would call the get method on the app
app.get('/store/provo', (req, res, next) => {
res.send({name: 'provo'});
});
- get takes two params: the URL path matching pattern, and a callback function that is invoked when the pattern matches
- callback func has 3 params, represent HTTP request object (req), HTTP response object (res), and next routing function that calls if routing func wants another func to generate a response
- app compares routing function patterns in the order that they are added to app object.
- if two routing funcs with patterns that both match, first added will be called and given the next matching func in the next param
- real store endpoint would allow any store name to be provided as a param in path. Express supports path params by prefixing param name with a colon.
- express creates map of path params and populates it with matching values found in the url path.
- then reference params using req.params object
- we can rewrite getStore endpoint like this:
app.get('/store/:storeName', (req, res, next) => {
res.send({name: req.params.storeName});
});
- If you want an endpoint that uses the POST or DELETE HTTP verb, use the post or delete function on the Express app object
- route path can also include limited wildcard syntax or even full regular expressions in path pattern. Here are a couple route funcs using diff pattern syntax
// Wildcard - matches /store/x and /star/y
app.put('/st*/:storeName', (req, res) => res.send({update: req.params.storeName}));
// Pure regex
app.delete(/\/store\/(.+)/, (req, res) => res.send({delete: req.params[0]}));
- the next parameter is omitted here: we are not calling next, so we don't need it as a param.
- if you do not call next then no following functions will be invoked for the request.
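The next() chaining described above can be sketched without Express. This plain-JavaScript stand-in (all names hypothetical) shows how each handler decides whether the following one runs:

```javascript
// Sketch of how a router chains handler functions with next():
// each handler gets req, res, and a next function that invokes
// the following handler in the list.
function runChain(handlers, req, res) {
  function invoke(index) {
    if (index < handlers.length) {
      handlers[index](req, res, () => invoke(index + 1));
    }
  }
  invoke(0);
}

const log = [];
runChain(
  [
    (req, res, next) => { log.push('logger'); next(); },  // middleware calls next
    (req, res, next) => { log.push('handler'); },         // final handler does not
  ],
  {},
  {}
);
console.log(log); // ['logger', 'handler']
```

If the first function never called next(), the second would never run, which is exactly the behavior the note above describes.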
- standard middleware has two pieces: a mediator and the middleware functions it coordinates.
- SOP - Same Origin Policy
- only allows JS to make requests to a domain if it is the same domain that the user is currently viewing
- ex. a request from byu.iinstructure.com (note the extra i) to service endpoints at byu.instructure.com would fail because the domains do not match.
- provides significant security, but introduces complications when building web apps
- why CORS was invented
- CORS - Cross Origin Resource Sharing
- allows client (browser) to specify origin of request then let the server respond with what origins are allowed.
- server may say that all origins allowed, ex if they are a general purpose image provider, or only a specific origin is allowed, ex if they are a bank's authentication service.
- if server doesn't specify what origin allowed then browser assumes that it must be the same origin
- With CORS, browser protecting user from accessing course website's authentication endpoint from wrong origin
- CORS only meant to alert user that something nefarious being attempted.
- hacker can still proxy requests through their own server to the course website and completely ignore the Access-Control-Allow-Origin header.
- course website needs to implement its own precautions to stop hacker from using its services inappropriately.
- if you want to make requests to a different domain than the one your web app is hosted on, you need to make sure that domain allows requests as defined by the Access-Control-Allow-Origin header it returns
- urls you make requests to need to return Access-Control-Allow-Origin headers.
- you need to test the services you want to use before you include them in your application.
- make sure they are responding with a * or your calling origin; if not you will not be able to use them
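The server-side origin decision CORS relies on can be sketched as a small function (names and origins are illustrative):

```javascript
// Sketch of the decision a server makes for CORS: given the request's
// Origin header and a list of allowed origins, return the value for the
// Access-Control-Allow-Origin response header (or null to omit it).
function allowOrigin(requestOrigin, allowedOrigins) {
  if (allowedOrigins.includes('*')) return '*';
  if (allowedOrigins.includes(requestOrigin)) return requestOrigin;
  return null; // the browser will block the cross-origin response
}

console.log(allowOrigin('https://cs260.click', ['https://cs260.click']));  // 'https://cs260.click'
console.log(allowOrigin('https://evil.example', ['https://cs260.click'])); // null
console.log(allowOrigin('https://anything.example', ['*']));               // '*'
```

The general-purpose image provider from the note above would pass `['*']`; a bank's authentication service would list only its own origin.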
- Web services provide interactive functionality of your web app.
- commonly authenticate users
- track session state
- provide, store, and analyze data,
- connect peers
- aggregate user info
- making a web service easy to use, performant, and extensible are factors that determine the success of your app
- good design will result in increased productivity, satisfied users, and lower processing costs
- helpful to model application's primary objects and the interactions of the objects
- attempt to stay as close to the model that is in your user's mind as possible.
- avoid introducing model that focuses on programming constructs and infrastructure
- ex chat program should model participants, conversations, and messages.
- it should not model user devices, network connections, and data blobs.
- once you know your primary objects you can create sequence diagrams that show how the objects interact with each other. This will help clarify your model and define the necessary endpoints.
- use SequenceDiagram.org for creating and sharing diagrams
- web services usually provided over HTTP, and that greatly influences design of the service.
- HTTP verbs (GET, POST, PUT, DELETE) often mirror designed actions of a web service.
- ex. web service for managing comments might list comments (GET), create a comment (POST), update a comment (PUT), and delete a comment (DELETE)
- Likewise, MIME content types defined by IANA are natural fit for defining types of content that you want to provide (HTML, PNG, MP3, MP4)
- goal is to leverage those technologies as much as possible so that you don't have to recreate the functionality they provide and instead take advantage of the significant networking infrastructure built up around HTTP
- includes caching servers to increase performance, edge servers that bring content closer, and replication servers that provide redundant copies of your content and make app more resilient to network failures.
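The comment-service verb mapping described above can be sketched as an in-memory model, with each HTTP verb corresponding to one operation on the comment resource (all names here are hypothetical):

```javascript
// In-memory sketch of a comment service: each HTTP verb from the note
// above maps to one operation on the comment resource.
let nextId = 1;
const comments = new Map();

const commentService = {
  list: () => [...comments.values()],     // GET /comment
  create: (text) => {                     // POST /comment - returns the new resource with its ID
    const comment = { id: nextId++, text };
    comments.set(comment.id, comment);
    return comment;
  },
  update: (id, text) => {                 // PUT /comment/:id - ID identifies the resource
    const comment = comments.get(id);
    if (comment) comment.text = text;
    return comment;
  },
  remove: (id) => comments.delete(id),    // DELETE /comment/:id
};

const created = commentService.create('first!');
commentService.update(created.id, 'edited');
console.log(commentService.list()); // [{ id: 1, text: 'edited' }]
commentService.remove(created.id);
console.log(commentService.list()); // []
```

In a real service these four operations would be wired to app.get, app.post, app.put, and app.delete routes.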
- web service usually divided up into multiple service endpoints
- each endpoint provides single functional purpose
- service endpoints often called an Application Programming Interface (API) throwback to old desktop apps and the programming interfaces that they exposed.
- things to consider when designing endpoints:
- Grammatical - with HTTP everything is resource (noun or object). you act on the resource with an HTTP verb.
- ex order resource contained in a store resource. You then create, get, update, and delete order resources on the store resource
- Readable - resource you are referencing should be clearly readable in the URL path.
- ex. order resource might contain the path to both the order and store where the order resource resides:
/store/provo/order/28502 - makes it easier to remember how to use the endpoint because it is human-readable
- Discoverable - as you expose resources that contain other resources you can provide endpoints for the aggregated resources.
- makes it so someone using your endpoints only needs to remember top level endpoint, and then they can discover everything else.
- ex. if you have a store endpoint that returns info about a store you can include an endpoint for working with a store in the response
- Compatible - when building endpoints make it so that you can add new functionality without breaking existing clients.
- usually means that clients of your service endpoints should ignore anything that they don't understand. Consider the two following JSON response versions
- Version 1:
{
"name": "John Taylor"
}
- Version 2:
{
"name": "John Taylor",
"givenName": "John",
"familyName": "Taylor"
}
- by adding a new representation of the name field, you provide new functionality for clients that know how to use the new fields without harming older clients that ignore the new fields and simply use the old representation
- all done without officially versioning the endpoint
- if you can control all of your client code you can mark the name field as deprecated and in a future version remove it once all the clients have upgraded.
- usually you want to keep compatibility with at least one previous version of the endpoint so that there is enough time for all the clients to migrate before compatibility is removed.
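From the client side, the compatibility guidance above amounts to tolerating both response versions. A sketch (function name hypothetical):

```javascript
// Client that works with both JSON response versions shown above:
// prefer the newer givenName/familyName fields, fall back to name.
function displayName(user) {
  if (user.givenName && user.familyName) {
    return `${user.familyName}, ${user.givenName}`;
  }
  return user.name; // older representation still works
}

console.log(displayName({ name: 'John Taylor' })); // 'John Taylor'
console.log(displayName({ name: 'John Taylor', givenName: 'John', familyName: 'Taylor' })); // 'Taylor, John'
```

An older client simply ignores the new fields; a newer client gets richer behavior. Neither breaks when the endpoint evolves.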
- Simple
- keeping endpoints focused on the primary resources of your app helps to avoid temptation to add endpoints that duplicate or create parallel access to primary resources.
- Very helpful to write some simple class and sequence diagrams that outline primary resources before you begin coding. Resources should focus on the actual resources of the system you are modeling.
- it should not focus on the data structure or the devices used to host the resources. There should only be one way to act on a resource. Endpoints should only do one thing.
- Documented
- The Open API Specification is a good example of tooling that helps create, use, and maintain documentation of your service endpoints.
- make use of such tools in order to provide client libraries for your endpoints and a sandbox for experimentation.
- creating an initial draft of your endpoint documentation before you begin coding will help you mentally clarify your design and produce a better final result.
- Remote Procedure Calls - expose service endpoints as simple function calls.
- when used over HTTP usually leverages POST HTTP verb.
- name of the function is either entire path of URL or parameter in the POST body
- one advantage is that it maps directly to function calls that might exist within the server.
- could also be a disadvantage, as it directly exposes the inner workings of the service, and creates coupling between the endpoints and the implementation.
- Representational State Transfer - attempts to take advantage of foundational principles of HTTP.
- REST HTTP verbs always act upon a resource. Operations on a resource impact the state of the resource as it is transferred by a REST endpoint call.
- This allows for caching functionality of HTTP to work optimally.
- ex. GET will always return the same resource until a PUT is executed on the resource. When PUT used, cached resource replaced with updated
- GraphQL - focuses on the manipulation of data instead of a function call (RPC) or a resource (REST). The heart of GraphQL is a query that specifies the desired data and how it should be joined and filtered.
- developed to address frustration concerning the massive number of REST, or RPC calls, that a web app client needed to make in order to support even a simple UI widget
- Instead of making a call for getting a store, and then a bunch of calls for getting store's orders and employees, GraphQL would send a single query that would request all of that information in one big JSON response.
- server would examine the query, join the desired data, then filter out anything that was not wanted.
- ex.
query {
getOrder(id: "2197") {
orders(filter: {date: {allofterms: "20220505"}}) {
store
description
orderedBy
}
}
}
- helps to remove a lot of the logic for parsing endpoints and mapping requests to specific resources. Basically in GraphQL there is only one endpoint: the query endpoint.
- the downside of that is that the client now has significant power to consume resources on the server.
- No clear boundary on what, how much, or how complicated the aggregation of data is.
- Also difficult for the server to implement authorization rights to data as they have to be baked into the data schema.
- However, there are standards for how to define a complex schema.
- Common GraphQL packages provide support for schema implementations along with database adaptors for query support.
- when running programs from console, automatically terminates when you close the console or if the computer restarts
- to keep it running full time needs to be registered as a daemon. daemon comes from the idea of something that is always there working in the background. Hopefully you only have good daemons running in your background.
- we want our web services to continue running as a daemon. Also need an easy way to start and stop our services. That is what PM2 does (Process Manager 2)
- PM2 already installed on your production server as part of the AMI you selected when you launched.
- Deployment scripts in simon projects automatically modify pm2 to register and restart your web services.
- You shouldn't need to do anything with pm2
- useful commands:
pm2 ls
- see pm2 in action after SSHing into your server; it should print out two services, simon and startup, that are configured to run on the web server.
- you can try other commands but only if you understand what you're doing. using incorrectly could cause services to stop.
- https://github.com/webprogramming260/.github/blob/main/profile/webServices/pm2/pm2.md
- if you want to setup another subdomain that accesses a different web service on your web server, you need to :
- add rule to the Caddyfile to tell it how to direct requests for the domain
- create a directory and add the files for the web service
- Configure pm2 to host the web service
- Modify Caddyfile
- copy section for the startup subdomain and alter it so that it represents desired subdomain and give it a different port number that is not currently used on your server.
tacos.cs260.click {
reverse_proxy localhost:5000
header Cache-Control none
header -server
header Access-Control-Allow-Origin *
}
- tells caddy that when it gets request for tacos.cs260.click it will act as proxy for those requests and pass them on to the web service that is listening on the same machine (localhost) on port 5000.
- other settings tell caddy to return headers that disable caching, hide the fact that caddy is the server (no reason to tell hackers anything about your server), and allow any other origin server to make endpoint requests to this subdomain (disabling cors).
- you can change settings as needed
- restart caddy
sudo service caddy restart
- caddy will now attempt to proxy requests, but there is no web service listening on port 5000 yet, so you will get an error from Caddy if you make a request to tacos.cs260.click.
- Create the web service
- copy the services/startup directory to a directory that represents the purpose of your service. ex:
cp -r ~/services/startup ~/services/tacos
- Saving the web service
- From ssh console session run pm2 ls, then
cd ~/services/tacos
pm2 start index.js -n tacos -- 5000
pm2 save
Debugging
- to debug main.js, select the Node.js debugger
- code will execute and the debug console window will automatically open to show you debugger output where you can see the results of console.log() statements
- you can set breakpoints
- to debug a web service, use the same instructions as before, then set breakpoints on the getStore endpoint callback and the app.listen call.
- set the browser location to the appropriate localhost port.
- the nodemon package is a wrapper around Node that watches the files in the project directory for changes; when you change something it automatically restarts Node.
- critical to separate where you develop your application, from where the production release of your app is made publicly available.
- stages such as staging, internal testing, development, external testing, production.
- most often required to be separate.
- all put together through continuous integration.
- CI processes checkout application code, lint it, build it, test it, stage it, test it more, and then finally, if everything checks out, deploy application to the production environment, and notify the different departments in the company of the release.
- For us, you will use and manage development environment, and your production environment.
- never consider production environment a place to develop or experiment with your app. You can shell into the production env to configure your server or to debug a production problem, but deployment of app should happen using automated CI process.
- our CI process will use a simple console shell script
- advantage of using automated deployment process is that it is reproducible.
- can't accidentally delete a file, or misconfigure something with a stray keystroke.
- having automated script encourages you to iterate quickly because it is so much easier to deploy your code.
- you can add small feature, deploy it, and get feedback within minutes from users.
- deployment scripts change with each new tech that we have to deploy. Initially, just copy up HTML files, but soon they include the ability to modify the config of your web server, run transpiler tools, and bundle code into a deployable package.
- the -k parameter in the deployment script provides the credential (pem key) file to access the prod env
- the -h param is the domain name of the prod env
- the -s param represents the name of the app you are deploying (either simon or startup)
- deployment scripts very helpful
- shell scripting powerful tool for automating common development tasks and is well worth adding to your bucket of skills.
- often web apps need to upload one or more files from frontend app running in the browser to backend service.
- accomplish by using HTML input element of type file on the frontend, and the Multer NPM package on the backend
- Frontend
- register event handler for when selected file changes and only accepts certain file types.
- frontend JS handles uploading the file to the server and then uses the filename returned from the server to set the src attribute of the image element in the DOM. If an error happens, an alert is displayed to the user.
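The "only accepts certain file types" check mentioned above can be sketched as a small validation function the change handler might run before uploading. The function name and extension list are illustrative, not from the course code:

```javascript
// Sketch of a file-type check a frontend might run before uploading:
// accept only a small set of image extensions (illustrative list).
const allowedExtensions = ['png', 'jpg', 'jpeg', 'gif'];

function isAllowedFile(fileName) {
  if (!fileName.includes('.')) return false;       // no extension at all
  const extension = fileName.split('.').pop().toLowerCase();
  return allowedExtensions.includes(extension);
}

console.log(isAllowedFile('vacation.PNG')); // true
console.log(isAllowedFile('notes.txt'));    // false
console.log(isAllowedFile('noextension'));  // false
```

In a real app this would run inside the input element's change event handler, and the server would still re-validate, since client-side checks are easily bypassed.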
- Backend
- to build storage support into our server, first install Multer NPM package to our project. there are others but Multer is commonly used. (npm install multer)
- multer handles reading the file from the HTTP request, enforcing size limit of the upload, and storing the file in the uploads directory.
- service code does the following:
- handles requests for static files so that we can serve up our frontend code
- handles errors such as when the 64k file limit is violated
- provides a GET endpoint to serve up a file from the uploads directory
- Where to store files
- take serious thought about where you store your files. server is not good production level solution because:
- only so much available space, only 8GB by default. Once that is used up server will fail to operate correctly, and you may need to rebuild your server
- in production system, servers are transient and are often replaced as new versions are released, or capacity requirements change. means you will lose any state you store on your server
- server storage not usually backed up. if server fails, you will lose your customer's data
- if you have multiple application servers then you can't assume that the server you uploaded the data to is going to be the one you request a download from.
- instead use a dedicated storage service that has durability guarantees, is not tied to compute capacity, and can be accessed by multiple application servers.
- web apps commonly need to store files associated with the app or the users of the app. includes files such as images, user uploads, documents, and movies.
- files usually have an ID, some metadata, and the bytes representing the file itself.
- can be stored using a database service, but overkill and simpler solutions are cheaper.
- bad idea to store files right on the server. bad because:
- server has limited drive space, if that runs out entire app will fail
- consider server as ephemeral or temporary. can be thrown away and replaced by a copy at any time. If you start storing files on the server, then server has state that cannot be easily replaced.
- need backup copies of your app and user files. if you only have one copy of your files on your server, then they will disappear when your server disappears, and you must always assume that will happen
- instead use storage service specifically designed to support production storage and delivery of files
- AWS S3
- many solutions, one of most popular is AWS S3. has following advantages:
- unlimited capacity
- only pay for storage that you use
- optimized for global access
- keeps multiple redundant copies of every file
- you can version files
- performant
- supports metadata tags
- can make your files publicly available directly from s3
- you can keep files private and only accessible to your app
- in this course, no storage for Simon. but if you want to use s3 as storage for Startup, then learn how to use AWS SDK. find detailed info on AWS website. steps are:
- creating s3 bucket to store data in
- getting credentials so app can access the bucket
- using creds in app
- using sdk to write, list, read, and delete files from bucket
- don't include creds in code. if you put them into GitHub repo they will immediately be stolen and used to take over your aws account
- web apps commonly need to store app and user data persistently. data can be many things, but usually representation of complex interrelated objects.
- includes things like user profile, organizational structure, game play info, usage history, billing info, peer relationship, library catalog, and so forth.
- Historically, SQL databases have served as the general purpose data service solution, but after 2010, specialty data services that better support document, graph, JSON, time, sequence, and key-value pair data began to take significant roles in apps from major companies
- these services are called NoSQL solutions because they do not use the general purpose relational database paradigms popularized by SQL databases.
- however all have very different underlying data structures, strengths and weaknesses.
- you should not simply split all the possible data services into two narrowly defined boxes, SQL and nosql, when you are considering the right data service for your app.
- MongoDB
- for projects in this course that require data services, we will use MongoDB.
- mongo increases developer productivity by using JSON objects as its core data model.
- makes it easy to have an app that uses JSON from top to bottom of the tech stack. mongo db made up of one or more collections that each contain JSON documents. You can think of a collection as a large array of JS objects, each with unique ID.
- mongo has no strict schema requirements. each document in collection usually follows a similar schema, but each doc may have specialized fields that are present, and common fields that are missing.
- allows the schema of collection to morph organically as the data model of the app evolves. To add a new field to a mongo collection you just insert the field into the docs as desired.
- if field not present, or has different type in some docs, then doc simply doesn't match query criteria when the field is referenced.
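since a collection can be thought of as a large array of JS objects, Mongo-style filter matching can be illustrated with plain JavaScript. This toy `matches` helper and the sample docs are made up for illustration only; it is not how the driver actually works:

```javascript
// Toy illustration of how a Mongo-style filter conceptually matches documents.
// Supports plain equality plus the $gte and $lt operators.
function matches(doc, filter) {
  return Object.entries(filter).every(([field, cond]) => {
    if (cond !== null && typeof cond === 'object') {
      if ('$gte' in cond && !(doc[field] >= cond.$gte)) return false;
      if ('$lt' in cond && !(doc[field] < cond.$lt)) return false;
      return true;
    }
    return doc[field] === cond; // plain equality
  });
}

const houses = [
  { name: 'A', status: 'available', beds: 1 },
  { name: 'B', status: 'available', beds: 3 },
  { name: 'C', status: 'booked', beds: 2 },
];

// Equivalent in spirit to db.house.find({ status: 'available', beds: { $lt: 3 } })
const result = houses.filter((h) => matches(h, { status: 'available', beds: { $lt: 3 } }));
```

a doc missing the field, or holding a non-matching type, simply fails the comparison and drops out of the results, which mirrors the behavior described above.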
- Query syntax for mongo also follows a JS-inspired flavor.
// find all houses
db.house.find();
// find houses with two or more bedrooms
db.house.find({ beds: { $gte: 2 } });
// find houses that are available with less than three beds
db.house.find({ status: 'available', beds: { $lt: 3 } });
// find houses with either less than three beds or less than $1000 a night
db.house.find({ $or: [{ beds: { $lt: 3 } }, { price: { $lt: 1000 } }] });
// find houses with the text 'modern' or 'beach' in the summary
db.house.find({ summary: /(modern|beach)/i });
- Using MongoDB in your app
- first step is install mongodb package using NPM. (npm install mongodb)
- next use MongoClient object to make client connection to database server. requires username, password, and the hostname of the database server.
const { MongoClient } = require('mongodb');
const userName = 'holowaychuk';
const password = 'express';
const hostname = 'mongodb.com';
const url = `mongodb+srv://${userName}:${password}@${hostname}`;
const client = new MongoClient(url);
- with client connection you can then get a database object and from that a collection object. Collection object allows you to insert, and query for, documents
- you don't have to do anything special to insert a JS object as a mongo doc. you can just call the insertOne function on the collection object and pass it the JS object.
- when you insert a doc, if the database or collection does not exist, Mongo will automatically create them for you.
- when doc is inserted into the collection it will automatically be assigned a unique ID
- Managed Services
- much work of a dev team that manages data service has now been moved to services hosted and managed by a 3rd party.
- relieves dev team from much day-to-day maintenance. team can instead focus more on the app and less on the infrastructure.
- with managed data service you simply supply the data and the service grows, or shrinks, to support desired capacity and performance criteria.
- MongoDB Atlas
- all major cloud providers offer multiple data services. for this class we will use data service provided by MongoDB called Atlas.
- main steps to take are
- create account
- create database cluster
- create root database user credentials, remember these
- set network access to your database to be available from anywhere
- copy connection string and use the info in code
- save connection and credential info in your production and development environments as instructed above
- you can always find connection string to your Atlas cluster by pressing Connect button from your Database > DataServices view
- Keeping keys out of code
- load creds when the app executes. have a JSON configuration file that contains creds that you dynamically load into the JS that makes the database connection
- then use the config file in your development environment and deploy it to your production environment, but NEVER commit it to GitHub.
- do this:
- create dbConfig.json in the same directory as the db javascript (e.g. database.js) that you use to make database requests
- insert mongo db creds into the dbConfig.json file in JSON format
- import dbConfig.json content into your database.js file using a Node.js require statement and use the data that it represents to create the connection URL
- include dbConfig.json in your .gitignore file
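for reference, a dbConfig.json might look like the following. The hostname format and all values here are placeholders, not real credentials:

```json
{
  "hostname": "cluster0.abcde.mongodb.net",
  "userName": "myDbUser",
  "password": "replace-with-your-password"
}
```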
const config = require('./dbConfig.json');
const url = `mongodb+srv://${config.userName}:${config.password}@${config.hostname}`;
- Testing connection on startup
- make an async request to ping the database. ex:
const config = require('./dbConfig.json');
const url = `mongodb+srv://${config.userName}:${config.password}@${config.hostname}`;
const client = new MongoClient(url);
const db = client.db('rental');
(async function testConnection() {
await client.connect();
await db.command({ ping: 1 });
})().catch((ex) => {
console.log(`Unable to connect to database with ${url} because ${ex.message}`);
process.exit(1);
});
- Using Mongo from your code
- you should be good to use Atlas from both development and production env.
- test with this:
const { MongoClient } = require('mongodb');
const config = require('./dbConfig.json');
async function main() {
// Connect to the database cluster
const url = `mongodb+srv://${config.userName}:${config.password}@${config.hostname}`;
const client = new MongoClient(url);
const db = client.db('rental');
const collection = db.collection('house');
// Test that you can connect to the database
(async function testConnection() {
await client.connect();
await db.command({ ping: 1 });
})().catch((ex) => {
console.log(`Unable to connect to database with ${url} because ${ex.message}`);
process.exit(1);
});
// Insert a document
const house = {
name: 'Beachfront views',
summary: 'From your bedroom to the beach, no shoes required',
property_type: 'Condo',
beds: 1,
};
await collection.insertOne(house);
// Query the documents
const query = { property_type: 'Condo', beds: { $lt: 2 } };
const options = {
sort: { score: -1 },
limit: 10,
};
const cursor = collection.find(query, options);
const rentals = await cursor.toArray();
rentals.forEach((i) => console.log(i));
}
main().catch(console.error);
- if your app is going to remember a user's data then it will need a way to uniquely associate the data with a particular credential
- usually means that you authenticate a user by asking for info, such as email address and password. You then remember, for some period of time, that the user has authenticated by storing an authentication token on the user's device
- often token stored in a cookie that is passed back to your web service on each request.
- service can now associate data that the user supplies with a unique identifier that corresponds to their authorization token.
- Once you have the ability to authenticate a user and store info about that user, you can also store the authorization for the user (admin, editor, customer, etc.)
- simple app might have a single field that represents role of the user. Service code would then use that role to allow, limit, or prevent what a service endpoint does.
- complex web app will usually have very powerful authorization representation that controls user's access to every part of the app.
- ex. editor role might have authorization only to work on content that they created or were invited to.
- authentication and authorization can become very complex, very quickly. also primary target for hackers.
- if they can bypass authentication or escalate what they are authorized to do, then can gain control of your app. also, constantly forcing users to authenticate in a secure way causes users to not want to use an app.
- creates opposing priorities: keep system secure or make it easy to use.
- many service providers and package developers have created solutions that you can use. If using well-trusted service, removes cost of building, testing, and managing that critical infrastructure yourself.
- Standard protocols for authenticating and authorizing: OAuth, SAML, and OIDC
- usually support concepts like single sign on and federated login. single sign on allows user to use same credentials for multiple web apps.
- federated login allows user to log in once, then authentication token reused to automatically log the user in to multiple websites. ex. logging in to Gmail also allows you to use Google Docs and YouTube w/out logging in again
- we will use our own authentication using simple email/password design. knowing how to implement a simple auth design will help you appreciate what auth services provide. If you want to experiment with different auth services you might consider AWS Cognito or Google Firebase.
- first step towards supporting auth in web apps is providing way for users to uniquely identify themselves.
- usually requires two service endpoints: one to create an authentication credential, and a second to authenticate, or login on future visits.
- once user authenticated we can control access to other endpoints. ex. services often have a getMe endpoint that gives info about the currently authenticated user
- Endpoint design
- create authentication endpoint
- takes email and password and returns cookie containing auth token and user ID. If email already exists it returns a 409 status code.
POST /auth/create HTTP/2
Content-Type: application/json
{
"email":"marta@id.com",
"password":"toomanysecrets"
}
HTTP/2 200 OK
Content-Type: application/json
Set-Cookie: auth=tokenHere
{
"id":"337"
}
- Login authentication endpoint takes an email and password and returns a cookie containing the authentication token and user ID. If the email does not exist or the password is bad it returns a 401 (unauthorized) status code.
POST /auth/login HTTP/2
Content-Type: application/json
{
"email":"marta@id.com",
"password":"toomanysecrets"
}
HTTP/2 200 OK
Content-Type: application/json
Set-Cookie: auth=tokenHere
{
"id":"337"
}
- GetMe endpoint
- uses the authentication token stored in the cookie to look up and return information about the authenticated user. If the token or user does not exist, returns a 401 (unauthorized) status code
GET /user/me HTTP/2
Cookie: auth=tokenHere
HTTP/2 200 OK
Content-Type: application/json
{
"email":"marta@id.com"
}
- with service endpoints designed, we can now build our web service using Express
const express = require('express');
const app = express();
app.post('/auth/create', async (req, res) => {
res.send({ id: 'user@id.com' });
});
app.post('/auth/login', async (req, res) => {
res.send({ id: 'user@id.com' });
});
const port = 8080;
app.listen(port, function () {
console.log(`Listening on port ${port}`);
});
- we can now implement the create authentication endpoint
- first step read credentials from the body of the HTTP request
- since body designed to contain JSON we need to tell Express that it should parse HTTP requests, with content type of application/json, automatically into a JS object. use express.json middleware
app.use(express.json());
app.post('/auth/create', async (req, res) => {
if (await getUser(req.body.email)) {
res.status(409).send({ msg: 'Existing user' });
} else {
const user = await createUser(req.body.email, req.body.password);
res.send({
id: user._id,
});
}
});
- we want to persistently store our users in Mongo and so we need to set up our code to connect to and use the database.
const { MongoClient } = require('mongodb');
const userName = 'holowaychuk';
const password = 'express';
const hostname = 'mongodb.com';
const url = `mongodb+srv://${userName}:${password}@${hostname}`;
const client = new MongoClient(url);
- with a mongo collection object (e.g. client.db('authTest').collection('user')) we can implement the getUser and createUser functions
function getUser(email) {
return collection.findOne({ email: email });
}
async function createUser(email, password) {
const user = {
email: email,
password: password,
token: 'xxx',
};
await collection.insertOne(user);
return user;
}
- this would work, but we need a real token
- to gen a reasonable auth token we use the uuid package. UUID stands for Universally Unique Identifier, and does good job of creating a hard to guess, random, unique ID.
const uuid = require('uuid');
token: uuid.v4(),
- need to securely store passwords for so many reasons
- instead of storing password directly, we want to cryptographically hash the password so that we never store the actual password. when we want to validate a password during login, we can hash the login password and compare it to our stored hash of the password
- to hash, we will use bcrypt package.
- creates very secure one-way hash of the password
const bcrypt = require('bcrypt');
async function createUser(email, password) {
// Hash the password before we insert it into the database
const passwordHash = await bcrypt.hash(password, 10);
const user = {
email: email,
password: passwordHash,
token: uuid.v4(),
};
await collection.insertOne(user);
return user;
}
- now we want to pass generated authentication token to browser when the login endpoint is called, and get it back on subsequent requests. use cookies
- cookie parser package provides middleware for cookies
- import cookieParser object and tell app to use it. When user successfully created, or logs in, we set the cookie header.
- Since we are storing an authentication token in the cookie, we want to make it as secure as possible, and so we use the httpOnly, secure, and sameSite options
- httpOnly tells browser to not allow JS running on the browser to read cookie
- secure requires HTTPS to be used when sending the cookie back to the server
- sameSite will only return the cookie to the domain that generated it.
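to see roughly what cookie-parser does for you, parsing a Cookie request header amounts to something like this simplified sketch (the real middleware also handles signed cookies and edge cases):

```javascript
// Simplified version of what cookie-parser does with the Cookie request header.
function parseCookies(header = '') {
  const cookies = {};
  for (const part of header.split(';')) {
    const idx = part.indexOf('=');
    if (idx === -1) continue; // skip malformed pairs
    const name = part.slice(0, idx).trim();
    const value = decodeURIComponent(part.slice(idx + 1).trim());
    if (name) cookies[name] = value;
  }
  return cookies;
}

const parsed = parseCookies('token=abc123; theme=dark');
```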
const cookieParser = require('cookie-parser');
// Use the cookie parser middleware
app.use(cookieParser());
apiRouter.post('/auth/create', async (req, res) => {
if (await DB.getUser(req.body.email)) {
res.status(409).send({ msg: 'Existing user' });
} else {
const user = await DB.createUser(req.body.email, req.body.password);
// Set the cookie
setAuthCookie(res, user.token);
res.send({
id: user._id,
});
}
});
function setAuthCookie(res, authToken) {
res.cookie('token', authToken, {
secure: true,
httpOnly: true,
sameSite: 'strict',
});
}
- login authorization endpoint needs to get the hashed password from the database, compare it to the provided password using bcrypt.compare, and if successful set the auth token in the cookie.
- if passwords don't match, or there is no user with the given email, the endpoint returns status 401 (unauthorized).
app.post('/auth/login', async (req, res) => {
const user = await getUser(req.body.email);
if (user) {
if (await bcrypt.compare(req.body.password, user.password)) {
setAuthCookie(res, user.token);
res.send({ id: user._id });
return;
}
}
res.status(401).send({ msg: 'Unauthorized' });
});
- now we can implement the getMe endpoint to demonstrate that it all actually works
- get user object from database by querying on the auth token. if there is no token, or no user with that token, return status 401 (unauthorized)
app.get('/user/me', async (req, res) => {
const authToken = req.cookies['token'];
const user = await collection.findOne({ token: authToken });
if (user) {
res.send({ email: user.email });
return;
}
res.status(401).send({ msg: 'Unauthorized' });
});
const { MongoClient } = require('mongodb');
const uuid = require('uuid');
const bcrypt = require('bcrypt');
const cookieParser = require('cookie-parser');
const express = require('express');
const app = express();
const userName = 'holowaychuk';
const password = 'express';
const hostname = 'mongodb.com';
const url = `mongodb+srv://${userName}:${password}@${hostname}`;
const client = new MongoClient(url);
const collection = client.db('authTest').collection('user');
app.use(cookieParser());
app.use(express.json());
// createAuthorization from the given credentials
app.post('/auth/create', async (req, res) => {
if (await getUser(req.body.email)) {
res.status(409).send({ msg: 'Existing user' });
} else {
const user = await createUser(req.body.email, req.body.password);
setAuthCookie(res, user.token);
res.send({
id: user._id,
});
}
});
// loginAuthorization from the given credentials
app.post('/auth/login', async (req, res) => {
const user = await getUser(req.body.email);
if (user) {
if (await bcrypt.compare(req.body.password, user.password)) {
setAuthCookie(res, user.token);
res.send({ id: user._id });
return;
}
}
res.status(401).send({ msg: 'Unauthorized' });
});
// getMe for the currently authenticated user
app.get('/user/me', async (req, res) => {
const authToken = req.cookies['token'];
const user = await collection.findOne({ token: authToken });
if (user) {
res.send({ email: user.email });
return;
}
res.status(401).send({ msg: 'Unauthorized' });
});
function getUser(email) {
return collection.findOne({ email: email });
}
async function createUser(email, password) {
const passwordHash = await bcrypt.hash(password, 10);
const user = {
email: email,
password: passwordHash,
token: uuid.v4(),
};
await collection.insertOne(user);
return user;
}
function setAuthCookie(res, authToken) {
res.cookie('token', authToken, {
secure: true,
httpOnly: true,
sameSite: 'strict',
});
}
const port = 8080;
app.listen(port, function () {
console.log(`Listening on port ${port}`);
});
- TDD (test driven development) is a proven methodology for accelerating app creation, protecting against regression bugs, and demonstrating correctness.
- TDD for console based applications and server based code is fairly straightforward
- web app UI code is significantly more complex to test, and using automated tests to drive your UI development is even more difficult
- Problem is that browser required to execute UI code. means you actually have to test the app in the browser.
- also every one of the major browsers behaves slightly differently, viewport size makes a big difference, all the code executes asynchronously, network disruptions are common, and there is a human factor
- not testing your code doesn't work either: you either have to manually test everything every time you make any change, or let your users test everything. not a good recipe for long term success
- Problem many strong players have been working on for decades, and solutions, while not perfect, are getting better and better.
- options:
- executing automated tests in the browser
- testing on different browsers and devices
- companies that build web browsers know all difficulties of testing apps. Have to test every possible use of HTML, CSS, and JS that a user could think of.
- no way manual testing is going to work and so early on they started putting hooks into their browsers that allowed them to be driven from automated external processes.
- Lots of alternatives now. State of JS includes statistics on how popular these frameworks are.
- Playwright:
- backed by Microsoft
- integrates well with VS Code
- runs as a Node.js process
- considered one of the least flaky of the testing frameworks
- Ex of playwright:
<body>
<p id="welcome" data-testid="msg">Hello world</p>
<button onclick="changeWelcome()">change welcome</button>
<script>
function changeWelcome() {
const welcomeEl = document.querySelector('#welcome');
welcomeEl.textContent = 'I feel welcomed';
}
</script>
</body>
- installing Playwright
- npm init playwright@latest
- testing services is usually easier than writing UI tests because it does not require a browser.
- but it does still take effort to learn how to write tests that are effective and efficient
- Making this a standard part of your development process will give you a significant advantage as you progress in your professional career
- We are going to use Jest
mkdir testJest
cd testJest
npm init -y
npm install express
code .
- make a file named server.js and use express to create an application with two endpoints: one to get a store (getStore), and another to update a store
- to test endpoints we need another package so that we can make http requests without having to actually send them over the network. This is done with supertest
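supertest exercises the real Express routes without the network; the underlying idea of service testing (call your endpoint logic directly and assert on the result) can be sketched with plain functions. The store names and helpers below are made up for illustration, not the course's actual server.js:

```javascript
// In-memory stand-in for the service's store data.
const stores = {};

// Logic behind an updateStore endpoint: save a store under a name.
function updateStore(name, store) {
  stores[name] = store;
  return stores[name];
}

// Logic behind a getStore endpoint: look a store up by name.
function getStore(name) {
  return stores[name] ?? null;
}

// A test is just: call the logic, then assert on what comes back.
updateStore('provo', { items: ['socks'] });
const result = getStore('provo');
```

supertest adds the HTTP layer on top of this: it feeds requests to the Express app in process and asserts on status codes and bodies.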
- HTTP based on client-server architecture. Client always initiates request and the server responds. Great if you are building global document library connected by hyperlinks, but for many other cases it just doesn't work.
- apps for notifications, distributed task processing, peer-to-peer communication, or asynchronous events need communication that is initiated by two or more connected devices.
- WebSocket was created to solve these problems.
- core feature of WebSocket is that it is full duplex. Means that after the initial connection is made from a client, using vanilla HTTP, and then upgraded by the server to a WebSocket connection, the relationship changes to a peer-to-peer connection where either party can efficiently send data at any time.
- Websocket connections are still only between two parties. so if you want to facilitate a conversation between a group of users, the server must act as the intermediary. each peer must first connect to the server, then the server forwards messages amongst the peers
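the server-as-intermediary forwarding described above amounts to this logic. Mock client objects stand in for real ws connections here (illustration only; a real ws server would iterate its connected sockets):

```javascript
// Forward a message from one peer to every other connected peer.
function broadcast(clients, sender, message) {
  for (const client of clients) {
    if (client !== sender) {
      client.send(message);
    }
  }
}

// Mock connections that record what they receive.
function mockClient() {
  return { received: [], send(msg) { this.received.push(msg); } };
}

const a = mockClient();
const b = mockClient();
const c = mockClient();
broadcast([a, b, c], a, 'hello from a');
```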
- JS running on a browser can initiate a WebSocket connection with the browser's WebSocket API.
- first create Websocket object by specifying the port you want to communicate on.
- then send messages with the send function, and register a callback using the onmessage function to receive messages.
const socket = new WebSocket('ws://localhost:9900');
socket.onmessage = (event) => {
console.log('received: ', event.data);
};
socket.send('I am listening');
- server uses the ws package to create a WebSocketServer that is listening on the same port the browser is using. By specifying a port when you create the WebSocketServer, you are telling the server to listen for HTTP connections on that port and to automatically upgrade them to a WebSocket connection if the request has a connection: Upgrade header.
- when connection detected calls the server's on connection callback. Server can then send messages with the send function, and register a callback using the on message function to receive messages.
const { WebSocketServer } = require('ws');
const wss = new WebSocketServer({ port: 9900 });
wss.on('connection', (ws) => {
ws.on('message', (data) => {
const msg = String.fromCharCode(...data);
console.log('received: %s', msg);
ws.send(`I heard you say "${msg}"`);
});
ws.send('Hello webSocket');
});
- you can debug both sides of the WebSocket communication with VS Code to debug the server, and Chrome to debug the client.
- chrome's debugger has support specifically for working with Websocket communication.
- create a directory named testWebSocket and change into it, install ws, make a file named main.js, set breakpoints on the ws.send lines so you can inspect the code, and start debugging by pressing F5. You may need to choose Node.js as the debugger
- preventing potential for harm needs to be in the forefront of your mind whenever you create or use a web app.
- the authentication log captures all the attempts to create a session on your server.
sudo less +G /var/log/auth.log
- you will see lots of other attempts with specific usernames associated with common exploits. should all be failing to connect
- as an experiment, someone created a test server with a user named admin whose password was password. within 15 minutes an attacker had logged in, bypassed all the restrictions that were in place, and started using the server to attack other servers on the internet.
- even for seemingly insignificant applications, security is always important
- list of common phrases used by security community
- Hacking - process of making a system do something it's not supposed to do.
- Exploit - code or input that takes advantage of a programming or configuration flaw
- Attack Vector - method hacker employs to penetrate and exploit a system
- Attack Surface - the exposed parts of a system that an attacker can access. ex. open ports (22, 443, 80), service endpoints, or user accounts
- Attack Payload - actual code, or data, that a hacker delivers to a system in order to exploit it.
- Input sanitization - "Cleaning" any input of potentially malicious data
- Black Box testing - Testing an application without knowledge of the internals of the app
- White box testing - testing an app with knowledge of the source code and internal infrastructure
- Penetration testing - Attempting to gain access to, or exploit, a system in ways that are not anticipated by the developers.
- Mitigation - action taken to remove, or reduce, a threat.
- common motivations
- Disruption - by overloading a system, encrypting essential data, or deleting critical infrastructure, an attacker can destroy normal business operations
- may be an attempt at extortion, or simply be an attempt to punish a business that the attacker does not agree with
- Data exfiltration - by privately extracting or exposing a system's data, an attacker can embarrass a company, exploit insider info, sell info to competitors, or leverage info for additional attacks
- Resource consumption
- by taking control of a company's computing resources, an attacker can use it for other purposes such as mining cryptocurrency, gathering customer info, or attacking other systems
- security should always be a primary objective of any app. Building a web app that looks good and performs well is a lot less important than building an app that is secure
- a few common exploitation techniques that you should be aware of
- Injection - when an app interacts with a database on the backend, a programmer will often take user input and concatenate it directly into a search query
- allows hacker to use a specially crafted query to make the database reveal hidden info or even delete database
- Cross site scripting (XSS) - category of attacks where an attacker can make malicious code execute on a different user's browser. If successful, attacker can turn a website that a user trusts into one that can steal passwords and hijack a user's account
- Denial of Service - includes attacks where main goal is to render service inaccessible
- done by deleting database using SQL injection, sending unexpected data to service endpoint that causes program to crash, or simply making more requests than a server can handle
- Credential Stuffing - if hacker has user's credentials from previous website attack, there is good chance that they can successfully use those creds on a different website.
- hacker can also try to brute force attack a system by trying every possible combination of password
- Social engineering - appealing to human's desire to help, in order to gain unauthorized access or info
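one common mitigation for the XSS category above is escaping user-supplied text before inserting it into HTML. A minimal sketch (real apps should rely on their framework's built-in escaping or a vetted library rather than hand-rolled code):

```javascript
// Escape the characters that let user input break out into markup.
function escapeHtml(text) {
  const replacements = { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' };
  return String(text).replace(/[&<>"']/g, (ch) => replacements[ch]);
}

const safe = escapeHtml('<script>alert("stolen")</script>');
```

after escaping, the injected markup renders as inert text instead of executing in the victim's browser.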
- taking time to learn techniques a hacker uses to attack a system is the first step in preventing them from exploiting your systems.
- develop a security mindset, where you always assume any attack surface will be used against you.
- Sanitize input data - always assume that any data you receive from outside your system will be used to exploit your system.
- watch especially for input data that can be turned into an executable expression, or that can overload computing, bandwidth, or storage resources.
- Logging - not possible to think of every way that your system can be exploited, but you can create an immutable log of requests that will expose when a system is being exploited.
- you can then trigger alerts, and periodically review the logs for unexpected activity
- Traps - create what appears to be valuable info and then trigger alarms when the data is accessed
- Educate - teach yourself, users, and everyone you work with, to be security minded. Anyone who has access to your system should understand how to prevent physical, social, and software attacks
- Reduce attack surfaces - do not open access any more than is necessary to properly provide your app. includes what network ports are open, what account privileges are allowed, where you can access the system from, and what endpoints are available
- Layered security
- least required access policy - don't give any one user all the credentials necessary to control the entire system
- Safeguard credentials
- public review
- the Open Web Application Security Project (OWASP) is a non-profit research entity that manages a top ten list of the most important web application security risks. understanding, and periodically reviewing, the list will help to keep your web apps secure
- Broken access control - occurs when app doesn't properly enforce permissions on users. could mean that a non-admin user can do things that only an admin should be able to do, or admin accounts are improperly secured.
- mitigations:
- strict access enforcement at the service level
- clearly defined roles and elevation paths
- Cryptographic failures - occur when sensitive data is accessible either without encryption, with weak encryption protocols, or when cryptographic protections are ignored.
- Sending any unencrypted data over a public network connection allows an attacker to capture the data. Even private, internal, network connections, or data that is stored without encryption, is susceptible to exploitation once attacker gains access to internal system
- ex of ineffective cryptographic methods include hashing algorithms like MD5 and SHA-1 that are trivial to crack with modern hardware and tools
- another failure happens when apps do not validate the provided web certificate when establishing a network connection. case of falsely assuming that if the protocol is secure then the entity represented by the protocol is acceptable.
- mitigations:
- use strong encryption for all data. includes external, internal, in transit, and at rest data
- updating encryption algorithms as older algorithms become compromised.
- Properly using cryptographic safeguards
- Injection - occurs when an attacker is allowed to supply data that is then injected into a context where it violates the expected use of the user input.
- ex. input field that is only expected to contain a user's password. Instead, attacker supplies SQL database command in the password input.
- mitigations
- sanitizing input
- use database prepared statements
- restricting execution rights
- limit output
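A minimal sketch of the password example above; unsafeQuery is a hypothetical helper, and the commented db.prepare line sketches the prepared-statement mitigation rather than any specific library's API:

```javascript
// Vulnerable: user input concatenated directly into the SQL text
function unsafeQuery(password) {
  return `SELECT * FROM users WHERE password = '${password}'`;
}

// An attacker supplies a fragment that rewrites the WHERE clause
const attack = "' OR '1'='1";
const query = unsafeQuery(attack);
// query is now: SELECT * FROM users WHERE password = '' OR '1'='1'
// which matches every row in the table.

// Mitigation sketch: a prepared statement keeps the SQL text fixed and
// sends the input as a separate parameter (API shape varies by library):
// db.prepare('SELECT * FROM users WHERE password = ?').get(password);
```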
- Insecure design - broadly refers to architectural flaws that are unique to individual systems, rather than implementation errors.
- happens either when app team doesn't focus on security when designing system, or doesn't continuously reevaluate the app security
- based on unexpected uses of the business logic that controls the functionality of the app. ex. if app allows for trial accounts to be easily created, then attacker could create a denial of service attack by creating millions of accounts and utilizing the maximum allowable usage
- mitigations:
- integration testing
- strict access control
- security education
- security design pattern usages
- scenario reviews
- Security misconfiguration - exploits the config of an app.
- ex. using default passwords, not updating software, exposing config settings, or enabling unsecured remote config.
- mitigations:
- config reviews
- setting defaults to disable all access
- automated config audits
- requiring multiple layers of access for remote configs.
- Vulnerable and outdated components - the longer an app has been deployed, the more likely it is that the attack surface, and corresponding exploits of the app, will increase.
- due to the cost of maintaining an app and keeping it up to date in order to mitigate newly discovered exploits
- mitigations
- keeping a manifest of your software stack including versions
- reviewing security bulletins
- regularly updating software
- requiring components to be up-to-date
- replacing unsupported hardware
- Identification and auth failures include any situation where a user's identity can be impersonated or assumed by an attacker
- ex. allowing repeated attempts to guess a user's password
- another ex of identification failure would be a weak password recovery process that doesn't properly verify the user.
- mitigations:
- rate limiting requests
- properly managing credentials
- multifactor auth
- auth recovery
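The rate-limiting mitigation can be sketched with a simple in-memory counter (the window size and limit are arbitrary here; a production app would usually use middleware or a shared store instead):

```javascript
// Allow at most `limit` attempts per key (e.g. username or IP)
// within a sliding window of `windowMs` milliseconds.
function createRateLimiter(limit, windowMs) {
  const attempts = new Map(); // key -> array of attempt timestamps

  return function isAllowed(key, now = Date.now()) {
    const recent = (attempts.get(key) || []).filter((t) => now - t < windowMs);
    if (recent.length >= limit) {
      attempts.set(key, recent);
      return false; // too many attempts; reject the login request
    }
    recent.push(now);
    attempts.set(key, recent);
    return true;
  };
}
```

This slows a password-guessing attacker from thousands of tries per second to a handful per window, without affecting other users.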
- Software and data integrity failures - represent attacks that allow external software, processes, or data to compromise your app
- modern web apps extensively use open source and commercially produced packages to provide key functionality. Using these packages without security audit gives them unknown amount of control over your app.
- mitigations
- only using trusted package repos
- using your own private vetted repo
- audit all updates to third party packages and data sources
- Security logging and monitoring failures - one of the first things an attacker will do after penetrating your app is delete or alter any logs that might reveal the attacker's presence.
- secure system will store logs that are accessible, immutable and contain adequate info to detect intrusion, and conduct post-mortem analysis
- mitigations
- real time log processing
- automated alerts for metric threshold violations
- periodic log reviews
- visual dashboards for key indicators
- Server-side request forgery (SSRF) - causes an app service to make unintended internal requests that utilize the service's elevated privileges in order to expose internal data or services
- Mitigations
- sanitizing returned data
- not returning data
- whitelisting accessible domains
- rejecting HTTP redirects
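The domain-whitelisting mitigation amounts to validating any user-supplied URL before the server fetches it; a minimal sketch (the allowed hosts are placeholders):

```javascript
// Only fetch URLs whose host is on an explicit allowlist,
// so user input cannot redirect the server to internal addresses.
const ALLOWED_HOSTS = new Set(['api.example.com', 'images.example.com']);

function isFetchAllowed(rawUrl) {
  let url;
  try {
    url = new URL(rawUrl);
  } catch {
    return false; // not a valid absolute URL
  }
  // Require https and an allowlisted host; block everything else,
  // including internal addresses like http://169.254.169.254/
  return url.protocol === 'https:' && ALLOWED_HOSTS.has(url.hostname);
}
```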
- you will not really internalize how security exploits work until you get some practice with them. One way is to use a practice security web app. There are lots of practice apps; two examples are Gruyere and Juice Shop
- Gruyere provides tutorials and practice with things like cross-site scripting (XSS), Denial of Service (DoS), SQL injection, and elevation of privilege attacks
- easy to start, play with, and reset when you want to start over
- OWASP provides a security training app called Juice Shop. You need to download the code and run it locally, but you have full control.
- web frameworks seek to make the job of writing web apps easier by providing tools for completing common app tasks. This includes things like modularizing code, creating single page apps, simplifying reactivity, and supporting diverse hardware devices.
- each framework has advantages and disadvantages. Some are very opinionated about how to do things, some have major institutional backing, others have a strong open source community.
- Vue - combines HTML, CSS, and JS into a single file. HTML is represented by a template element that can be aggregated into other templates
- SFC
<script>
export default {
  data() {
    return {
      name: 'world',
    };
  },
};
</script>

<style>
p {
  color: green;
}
</style>

<template>
  <p>Hello {{ name }}!</p>
</template>
- Svelte - combines HTML, CSS, and JS into a single file, but requires a transpiler to generate browser-ready code, instead of using a runtime virtual DOM
<script>
  let name = 'world';
</script>

<style>
  p {
    color: green;
  }
</style>

<p>Hello {name}!</p>
- React - combines JS and HTML into a component element; CSS must be declared outside the JSX file.
- component itself highly leverages the functionality of JS and can be represented as a function or class
import 'hello.css';

const Hello = () => {
  let name = 'world';
  return <p>Hello {name}</p>;
};
CSS
p {
  color: green;
}
- Angular - the component defines how the JS, HTML, and CSS are combined together. Keeps a strong separation of files that are usually grouped together in a directory rather than using a single file representation
JS
@Component({
  selector: 'app-hello-world',
  templateUrl: './hello-world.component.html',
  styleUrls: ['./hello-world.component.css'],
})
export class HelloWorldComponent {
  name: string;
  constructor() {
    this.name = 'world';
  }
}
HTML
<p>hello {{ name }}</p>
CSS
p {
  color: green;
}
- React provides a powerful web programming framework
- name comes from its focus on making reactive web page components that automatically update based on user interactions or changes in the underlying data
- created by Jordan Walke for use at Facebook in 2011. Used as the main framework for Instagram
- abstracts HTML into a JS variant called JSX. JSX is converted into valid HTML and JS using a preprocessor called Babel.
const i = 3;
const list = (
  <ol class='big'>
    <li>Item {i}</li>
    <li>Item {3 + i}</li>
  </ol>
);
Babel will convert that into valid JS
const i = 3;
const list = React.createElement(
  'ol',
  { class: 'big' },
  React.createElement('li', null, 'Item ', i),
  React.createElement('li', null, 'Item ', 3 + i)
);
- the React.createElement function will generate DOM elements and monitor the data they represent for changes. When a change is discovered, React will trigger dependent changes.
- react allows you to modularize the functionality of your app. This allows underlying code to directly represent the components that a user interacts with. also enables code reuse as common app components often show up repeatedly.
- primary purposes of a component is to generate the user interface. this is done with the component's render function
- whatever is returned from render function is inserted into the component HTML element
- ex. a JSX file containing a React component element named Demo would cause React to load the Demo component, call the render function, and insert the result into the place of the Demo element
JSX
<div>
  Component: <Demo />
</div>
- Demo is not a valid HTML element. The transpiler will replace the tag with the resulting rendered HTML
React Component
function Demo() {
  const who = 'world';
  return <b>Hello {who}</b>;
}
resulting HTML
<div>Component: <b>Hello world</b></div>
- components also allow you to pass info to them in the form of element properties
- component receives the properties in its constructor and then can display them when it renders.
<div>Component: <Demo who="Walke" /></div>

function Demo(props) {
  return <b>Hello {props.who}</b>;
}

<div>Component: <b>Hello Walke</b></div>
- component can have internal state. State is created by calling the React.useState hook function. useState returns an array containing the current state and a function to update the state.
const Clicker = () => {
  const [clicked, updateClicked] = React.useState(false);
  const onClicked = (e) => {
    updateClicked(!clicked);
  };
  return <p onClick={(e) => onClicked(e)}>clicked: {`${clicked}`}</p>;
};

const root = ReactDOM.createRoot(document.getElementById('root'));
root.render(<Clicker />);
- you can use JSX even without a function. A simple variable representing JSX will work anyplace you would otherwise provide a component
- React also supports class style components, but don't use them; the React team is moving away from them. But be aware of the syntax
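For recognizing that syntax, a class-style version of the earlier Clicker example (a sketch; function components with hooks remain the preferred style):

```jsx
class ClickerClass extends React.Component {
  constructor(props) {
    super(props);
    // class components keep state on this.state instead of useState
    this.state = { clicked: false };
  }
  render() {
    return (
      <p onClick={() => this.setState({ clicked: !this.state.clicked })}>
        clicked: {`${this.state.clicked}`}
      </p>
    );
  }
}
```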
- component's properties and state are used by the React framework to determine the reactivity of the interface. Reactivity controls how a component reacts to actions taken by the user or events that happen within the app.
- Whenever a component's state or properties change, the render function for the component and all of its dependent component render functions are called.
- as web programming became more and more complex, it became necessary to abstract away some of that complexity with a series of tools.
- common functional pieces in a web app tool chain include
- Code repo - stores code in shared, versioned, location
- Linter - Removes, or warns of non-idiomatic code usage
- Prettier - Formats code according to a shared standard
- Transpiler - compiles code into a different format. ex. JSX to JS, TypeScript to JS, or SCSS to CSS
- Polyfill - Generates backward compatible code for supporting old browser versions that do not support the latest standards
- Bundler - Packages code into bundles for delivery to the browser. Enables compatibility (ex with ES6 module support), or performance (with lazy loading)
- Minifier - Removes whitespace and renames variables in order to make code smaller and more efficient to deploy
- Testing - Automated tests at multiple levels to ensure correctness
- Deployment - Automated packaging and delivery of code from the development environment to the production environment
- We will use:
- Github - code repo
- Vite - jsx
- TS - development and debugging support
- esbuild - converting to ES6 modules and transpiling
- Rollup - bundling and tree shaking
- PostCSS for css transpiling
- and a simple bash script for deployment
- you don't have to completely understand what each of these pieces in the chain are accomplishing, but the more you know about them the more you can optimize your development efforts
- common way to configure your project is to use a CLI to initially set up a web app.
- saves you the trouble of configuring the toolchain parameters and gets you quickly started with a default app
- our toolchain will use vite
- bundles code quickly, has great debugging support, and allows you to easily support JSX, TypeScript, and different CSS flavors.
npm create vite@latest demoVite -- --template react
cd demoVite
npm install
npm run dev
- router provides essential functionality for single-page apps.
- with multiple-webpage app the headers, footers, nav, and common components must be either duplicated in each HTML page, or injected before the server sends the page to the browser
- with single page apps, browser only loads one HTML page and then JS is used to manipulate the DOM and give it the appearance of multiple pages.
- router defines the routes a user can take through the app, and automatically manipulates the DOM to display the appropriate framework components.
- we will use react-router-dom v6.
- the simplified routing functionality of react-router-dom derives its core functionality from the react-router project. Don't confuse the two, or versions of react-router-dom before version 6, when reading tutorials and stuff
- whole thing consists of a BrowserRouter component that encapsulates the entire app and controls the routing action. A Link, or NavLink, component captures user nav events and modifies what is rendered by the Routes component by matching up the to and path attributes.
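Putting those pieces together, a minimal react-router-dom v6 sketch (Home and About are hypothetical components):

```jsx
import { BrowserRouter, NavLink, Route, Routes } from 'react-router-dom';

function App() {
  return (
    // BrowserRouter encapsulates the app and controls the routing action
    <BrowserRouter>
      <nav>
        {/* NavLink `to` attributes match up with Route `path` attributes */}
        <NavLink to='/'>Home</NavLink>
        <NavLink to='/about'>About</NavLink>
      </nav>
      <Routes>
        <Route path='/' element={<Home />} />
        <Route path='/about' element={<About />} />
      </Routes>
    </BrowserRouter>
  );
}
```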
- making the UI react to changes in user input or data is one of the architectural foundations of React.
- React enables reactivity with three major pieces of a React component:
- props, state, and render
- when JSX is rendered, React parses JSX and creates list of any references to component's state or prop objects
- then monitors those objects and if it detects that they have changed it will call the component's render function so that the impact of the change is visualized.
- following example contains two components: a parent component and a child component.
- Survey has state named color. Question has a prop named answer. Survey passes its color state to the Question as a prop. This means that any change to the Survey's color will also be reflected in the Question's color. A powerful means for a parent to control a child's functionality.
- be careful about assumptions of when state is updated. Just because you called updateState does not mean that you can access the updated state on the next line of code. The update happens asynchronously, so you never really know exactly when it will happen.
const Survey = () => {
  const [color, updateColor] = React.useState('#737AB0');

  // When the color changes update the state
  const onChange = (e) => {
    updateColor(e.target.value);
  };

  return (
    <div>
      <h1>Survey</h1>

      {/* Pass the survey color as a parameter to the Question.
          When the color changes the Question parameter will also be updated and rendered. */}
      <Question answer={color} />

      <p>
        <span>Pick a color: </span>
        {/* Set the Survey color state as the value of the color picker.
            When the color changes, the value will also be updated and rendered. */}
        <input type='color' onChange={(e) => onChange(e)} value={color} />
      </p>
    </div>
  );
};

// The Question component
const Question = ({ answer }) => {
  return (
    <div>
      {/* Answer rerendered whenever the parameter changes */}
      <p>Your answer: {answer}</p>
    </div>
  );
};
ReactDOM.render(<Survey />, document.getElementById('root'));
- hooks allow React function style components to do everything that a class style component can do and more.
- as new features are added to React they are including them as hooks.
- makes function style components the preferred way of doing things in React
- allows you to represent lifecycle events
- ex run a function every time the component completes rendering:
function UseEffectHookDemo() {
  React.useEffect(() => {
    console.log('rendered');
  });

  return <div>useEffectExample</div>;
}
ReactDOM.render(<UseEffectHookDemo />, document.getElementById('root'));
- you can also take action when the component cleans up by returning a cleanup function from the function registered with useEffect.
- ex. every time the component is clicked the state changes, and so the component is re-rendered. This causes the cleanup function to be called in addition to the hook function. If the component were not re-rendered, then only the cleanup function would be called
function UseEffectHookDemo() {
  const [count, updateCount] = React.useState(0);
  React.useEffect(() => {
    console.log('rendered');

    return function cleanup() {
      console.log('cleanup');
    };
  });

  return <div onClick={() => updateCount(count + 1)}>useEffectExample {count}</div>;
}
ReactDOM.render(<UseEffectHookDemo />, document.getElementById('root'));
- useful when you want to create side effects for things such as tracking when a component is displayed or hidden, or creating and disposing of resources.
- you can control what triggers a useEffect hook by specifying its dependencies.
- ex. two state variables, but only want the useEffect hook to be called when the component is initially called and when the first variable is clicked.
- pass an array of dependencies as a second parameter to the useEffect call.
function UseEffectHookDemo() {
  const [count1, updateCount1] = React.useState(0);
  const [count2, updateCount2] = React.useState(0);

  React.useEffect(() => {
    console.log(`count1 effect triggered ${count1}`);
  }, [count1]);

  return (
    <ol>
      <li onClick={() => updateCount1(count1 + 1)}>Item 1 - {count1}</li>
      <li onClick={() => updateCount2(count2 + 1)}>Item 2 - {count2}</li>
    </ol>
  );
}
ReactDOM.render(<UseEffectHookDemo />, document.getElementById('root'));
- if you specify an empty array as the hook dependency, then it is only called when the component is first rendered
- hooks can only be used in function style components and must be called at the top scope of the function.
- hook cannot be called inside of a loop or conditional. restriction ensures that hooks are always called in the same order when a component is rendered.
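To illustrate the top-level rule, a hypothetical component showing the call pattern that breaks hook ordering:

```jsx
function HookOrderDemo({ show }) {
  // OK: called unconditionally at the top of the function,
  // so hooks run in the same order on every render
  const [count, updateCount] = React.useState(0);

  if (show) {
    // Not allowed: a conditional hook changes the call order
    // between renders, which corrupts React's hook bookkeeping
    // const [extra, updateExtra] = React.useState(0);
  }

  return <div onClick={() => updateCount(count + 1)}>{count}</div>;
}
```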
- I wish we started with React, transitioning everything is going to be VERY complicated.
- TypeScript adds static type checking to JS.
- provides type checking while you are writing the code to prevent mistakes like using a string when a number is expected.
- consider:
function increment(value) {
  return value + 1;
}

let count = 'one';
console.log(increment(count));
- when this executes, the console will log one1 because the count variable was accidentally initialized with a string instead of a number
- with TypeScript you explicitly define the types, and as the TypeScript is transpiled to JS (with something like Babel), an error will be generated long before the code is seen by the user.
- to provide type safety for our increment function, it would look like this:
function increment(value: number) {
  return value + 1;
}

let count: number = 'one'; // Type error: a string is not assignable to type number
console.log(increment(count));
- in addition to defining types for function parameters, you can define the types of object props.
- ex. when defining state for React class style component, you can specify the types of all the state and property values
export class About extends React.Component {
  state: {
    imageUrl: string;
    quote: string;
    price: number;
  };

  constructor(props: { price: number }) {
    super(props);
    this.state = {
      imageUrl: '',
      quote: 'loading...',
      price: props.price,
    };
  }
}
- you can likewise specify the type of React function style component's props w/ inline object def
function Clicker(props: { initialCount: number }) {
  const [count, updateCount] = React.useState(props.initialCount);
  return <div onClick={() => updateCount(1 + count)}>Click Count: {count}</div>;
}
- because it is so common to define object prop types, TypeScript introduced the interface keyword to define the collection of properties and types that an object must contain in order to satisfy the interface type.
interface Book {
  title: string;
  id: number;
}
- then create an object and pass it to a function that requires the interface
function catalog(book: Book) {
  console.log(`Cataloging ${book.title} with ID ${book.id}`);
}

const myBook = { title: 'Essentials', id: 2938 };
catalog(myBook);
- TS also provides other benefits, such as warning you of potential uses of an uninitialized variable.
- correct by using an if block
const containerEl = document.querySelector<HTMLElement>('#picture');
if (containerEl) {
  const width = containerEl.offsetWidth;
}
- in the above example, the return type is coerced for the querySelector call. This is required because the assumed return type for that function is the base class Element, but the query will return the subclass HTMLElement
- so we need to cast to the subclass with the querySelector<HTMLElement>() generic syntax
- TS introduces the ability to define the possible values for a new type. Useful for doing things like defining an enumerable
- with plain JS you might create an enumerable with a class
export class AuthState {
  static Unknown = new AuthState('unknown');
  static Authenticated = new AuthState('authenticated');
  static Unauthenticated = new AuthState('unauthenticated');

  constructor(name) {
    this.name = name;
  }
}
- with TS you can define this by declaring a new type and defining what its possible values are
type AuthState = 'unknown' | 'authenticated' | 'unauthenticated';

let auth: AuthState = 'authenticated';
- You can also use unions to specify all the possible types that a variable can represent
function square(n: number | string) {
  if (typeof n === 'string') {
    console.log(`${n}^2`);
  } else {
    console.log(n * n);
  }
}
- if you want to experiment, use CodePen or the official TypeScript playground.
- playground has the advantage of showing you inline errors and what the resulting JS will be
- to use TypeScript in your web app you can create your project using vite. Vite knows how to use typescript without any additional configuration
- if you want to convert an existing app, then install the npm typescript package to your dev dependencies
- this will only include typescript package when you are developing and will not distribute it with a production bundle.
- once it is installed, then configure how you want TS to interact with your code by creating a tsconfig.json file
- if proj structure is configured to have your source code in a directory named src, and you want to output to a directory named build then use this:
{
  "compilerOptions": {
    "rootDir": "src",
    "outDir": "build",
    "target": "es5",
    "lib": ["dom", "dom.iterable", "esnext"],
    "allowJs": true,
    "skipLibCheck": true,
    "esModuleInterop": true,
    "allowSyntheticDefaultImports": true,
    "strict": true,
    "forceConsistentCasingInFileNames": true,
    "noFallthroughCasesInSwitch": true,
    "module": "esnext",
    "moduleResolution": "node",
    "resolveJsonModule": true,
    "isolatedModules": true,
    "noEmit": true,
    "jsx": "react-jsx"
  },
  "include": ["./src/**/*"]
}
- performance of your app plays a huge role in determining user satisfaction
- to prevent losing users, you want app to load in about one second.
- you need to consistently measure and improve the responsiveness of your app.
- main things you want to monitor include
- Browser app latency
- Network latency
- Service endpoint latency
- latency defined as delay that your user experiences before request is satisfied
- impacted by speed of the user's device, the amount of data that needs to be processed, and the time complexity of the processing algorithm
- when a user requests your app in a browser, the browser will request your index.html page first.
- followed by requests for any files that index.html links, such as js, css, video, and image files.
- once JS is loaded, it will start making requests to services. Includes any endpoints that you provide as well as ones provided by third parties.
- each request takes time for the browser to load and render
- page with lots of large images and lots of service calls, will take longer than a page that only loads simple text from a single HTML file
- Likewise, if your JS does significant processing while page loading, then your user will notice the resulting latency
- you want to make app processing as asynchronous as possible so that it is done in the background without impacting the user experience
- you can reduce the impact of file size, and HTTP requests in general, by doing one or more of the following
- use compression when transferring files over HTTP
- reduce the quality of images and video to the lowest acceptable level
- minify JS and CSS. Removes all whitespace and creates smaller variable names
- use HTTP/2 or HTTP/3 so that your HTTP headers are compressed and the communication protocol is more efficient.
- you can also reduce the number of requests you make by combining the responses from multiple endpoint requests into a single request.
- this eliminates duplicated fields, but also decreases the overhead associated with each request
- you pay a latency price for every network request that you make.
- you want to avoid making unnecessary or large requests
- network latency is impacted by the amount of data that you send, the amount of data a user can receive per second (called bandwidth), and distance the data has to travel
- If the user has a low bandwidth connection that can only receive data at rates lower than 1 Mbps, then you need to be careful to reduce the number of bytes that you send to that user.
- global latency is also a problem for users. If your app is hosted in a data center in San Francisco, and used by someone living in Nairobi, then there will be an additional 100-400 ms of latency for each request
- you can mitigate the impact of global latency by hosting your app files in data centers that are close to the users you are trying to serve.
- Apps that are seeking to reach a global audience will often host their app from dozens of places around the world
- impacted by the number of requests that are made and the amount of time that it takes to process each request
- When a web app makes a request to a service endpoint there is usually some functionality in the application that is blocked until the endpoint returns
- ex if a user requests the scores for the game, the app will delay rendering until those scores are returned
- you want to reduce latency of your endpoints as much as possible. ideally less than 10ms
- chrome network tab
- you can see the network requests made by your app, and the time necessary for each request, by using the browser's debugging tools.
- this will show you what files and endpoints are requested and how long they are taking.
- if you sort by time or size, it will be clearer what areas need your attention.
- make sure you clear your cache before running tests so that you can see what the real latency is and not just the time it takes to load from the browser's cache
- Simulating real users
- network tools also allows you to simulate low bandwidth connections by throttling your network.
- you can simulate a 3G network connection that you would find on a low end mobile phone
- throttling while testing is really useful since web developers often have high end computers and significant network bandwidth.
- that means you are not having the same experience as your users, and you will be surprised when they don't use your app because it is so slow
- Chrome Lighthouse
- you can also use lighthouse tool to run an analysis of your app. this will give you an average performance rating based upon the initial load time, longest content paint, and time before the user can interact with the page.
- Chrome performance tab
- when you are ready to dig into your app's frontend performance make sure you experiment with the Chrome debugger's performance tab.
- this breaks down the details of your app based upon discrete intervals of time so that you can isolate where things are running slow
- You start profiling the performance by pressing the record button and then interacting with your app.
- Chrome will record memory usage, screenshots, and timing info. You can then press the stop recording button and review the data.
- Global speed tests
- You also want to test your app from different locations around the world. There are many online providers that will run these tests for you
- Pingdom.com
- will give suggestions
- DotComTools allows you to run tests from multiple locations at once.
- Properly considering the user experience (UX) of your app will make all the difference in your success.
- Focusing first on tech, cost, or revenue tends to lead to an unsatisfying user experience.
- instead consider why someone is using your app, how they want to interact, how visually appealing it is, and how easy it is to get something done.
- often useful to think of user experience as a story. Consider background plot, user entering the stage, interacting with other actors, and getting the audience to applaud
- there is always a reason someone is using your app.
- if you can clearly define background plot, then experience will better match the user's expectation
- if you know what results in a satisfied audience, then you build the app experience around delivering that result
- Consider tourism app for Philadelphia.
- they know user visits because they want to have an experience in Philly. App immediately provides a time relevant proposal for that experience.
- all navigation options for having successful experience(events, food, deals, and trip planning) are immediately accessible.
- Google broke all rules for web app design when they released their homepage in 1998
- Previously, common for app designers to pile everything they could into the initial view of the app
- ads, navigation options, lots of hyperlinks, and color choices.
- Key point is that simplicity attracts user's attention and engages them in the app experience.
- Building off of google's positive reaction, other major apps immediately followed their example.
- Keep things focused on a single purpose:
- creating an account, viewing images, or beginning your travel experience.
- tension with web apps between being consistent with how other apps work and being unique so that your experience stands out.
- avoid being so different that a user has to think hard in order to use your app.
- usually avoided by using standard conventions that a user expects to find on a web app.
- What a standard layout is defined to be will migrate over time as new trends in app fashion seek to make things look fresh
- user should never get lost while using your app.
- to help orient your user you want to carefully design the flow of the application and provide the proper navigational controls
- Application map
- first step in building your app should be to design an application map that has all the views that you will present to the user.
- this helps clarify the functional pieces of the app and their relationship to each other.
- ex. if you were building a music player you might start with a landing page that displays some marketing info and allows the user to create an account or log in.
- if already logged in, then they start with a dashboard that shows recent or suggested songs.
- From there they can either search the music catalog, navigate to a collection of songs based on playlist, album, or artist, or go to an individual song
- if app map starts looking like a gov bureaucracy then you probably want to reconsider interrelation of functionality.
- convoluted app map is strong indicator that the user experience will be likewise convoluted.
- Device controls
- with concise app map in place, you can design navigational controls that allow the user to successfully use the app.
- you want to make sure the navigational controls provided by device are completely supported.
- Breadcrumb
- you always want to indicate where the user is, where they came from, and where they can go. You can do this with a breadcrumb control that lists the path the user took to get where they are
- breadcrumb quickly orients the user and also allows them to jump up the navigational path
- Common actions
- You also want to anticipate where a user would commonly want to go based upon the view that they are in.
- For example, if they are playing a song by one artist, it is common that they will want to view related artists.
- you want to provide a navigational link that will take them to a search view with a prepopulated query for related artists