{"text":"# Hypothesis Testing and Types of Errors\n\n## Summary\n\n\nSuppose we want to study income of a population. We study a sample from the population and draw conclusions. The sample should represent the population for our study to be a reliable one.\n\n**Null hypothesis** \\((H\\_0)\\) is that sample represents population. Hypothesis testing provides us with framework to conclude if we have sufficient evidence to either accept or reject null hypothesis. \n\nPopulation characteristics are either assumed or drawn from third-party sources or judgements by subject matter experts. Population data and sample data are characterised by moments of its distribution (mean, variance, skewness and kurtosis). We test null hypothesis for equality of moments where population characteristic is available and conclude if sample represents population.\n\nFor example, given only mean income of population, we validate if mean income of sample is close to population mean to conclude if sample represents the population.\n\n## Discussion\n\n### What are the math representations of population and sample parameters?\n\nPopulation mean and population variance are denoted in Greek alphabets \\(\\mu\\) and \\(\\sigma^2\\) respectively, while sample mean and sample variance are denoted in English alphabets \\(\\bar x\\) and \\(s^2\\) respectively. \n\n\n### What's the relevance of sampling error to hypothesis testing?\n\nSuppose we obtain a sample mean of \\(\\bar x\\) from a population of mean \\(\\mu\\). The two are defined by the relationship |\\(\\bar x\\) - \\(\\mu\\)|>=0: \n\n + If the difference is not significant, we conclude the difference is due to sampling. This is called **sampling error** and this happens due to chance.\n + If the difference is significant, we conclude the sample does not represent the population. The reason has to be more than chance for difference to be explained.Hypothesis testing helps us to conclude if the difference is due to sampling error or due to reasons beyond sampling error.\n\n\n### What are some assumptions behind hypothesis testing?\n\nA common assumption is that the observations are independent and come from a random sample. The population distribution must be Normal or the sample size is large enough. If the sample size is large enough, we can invoke the *Central Limit Theorem (CLT)* regardless of the underlying population distribution. Due to CLT, sampling distribution of the sample statistic (such as sample mean) will be approximately a Normal distribution. \n\nA rule of thumb is 30 observations but in some cases even 10 observations may be sufficient to invoke the CLT. Others require at least 50 observations. \n\n\n### What are one-tailed and two-tailed tests?\n\nWhen acceptance of \\(H\\_0\\) involves boundaries on both sides, we invoke the **two-tailed test**. For example, if we define \\(H\\_0\\) as sample drawn from population with age limits in the range of 25 to 35, then testing of \\(H\\_0\\) involves limits on both sides.\n\nSuppose we define the population as greater than age 50, we are interested in rejecting a sample if the age is less than or equal to 50; we are not concerned about any upper limit. Here we invoke the **one-tailed test**. A one-tailed test could be left-tailed or right-tailed.\n\nConsider average gas price in California compared to the national average of $2.62. If we believe that the price is higher in California, we consider right-tailed test. 
If we believe that the California price is different from the national average but we don't know whether it's higher or lower, we consider a two-tailed test. Symbolically, given the **alternative or research hypothesis** \\(H\\_1\\), we state, \n\n + \\(H\\_0\\): \\(\\mu = \\$ 2.62\\)\n + \\(H\\_1\\) right-tailed: \\(\\mu > \\$ 2.62\\)\n + \\(H\\_1\\) two-tailed: \\(\\mu \\neq \\$ 2.62\\)\n\n### What are the types of errors in hypothesis testing?\n\nIn concluding whether the sample represents the population, there is scope for committing errors on the following counts: \n\n + Not accepting that the sample represents the population when in reality it does. This is called **type-I** or **\\(\\alpha\\) error**.\n + Accepting that the sample represents the population when in reality it does not. This is called **type-II** or **\\(\\beta\\) error**.\n\nFor instance, granting a loan to an applicant with a low credit score is an \\(\\alpha\\) error. Not granting a loan to an applicant with a high credit score is a \\(\\beta\\) error.\n\nThe symbols \\(\\alpha\\) and \\(\\beta\\) are used to represent the probabilities of type-I and type-II errors respectively. \n\n\n### How do we measure type-I or \\(\\alpha\\) error?\n\nThe p-value can be interpreted as the probability of getting a result that's the same as, or more extreme than, the observed one when the null hypothesis is true. \n\nThe observed sample mean \\(\\bar x\\) is overlaid on the population distribution of values with mean \\(\\mu\\) and variance \\(\\sigma^2\\). The proportion of values beyond \\(\\bar x\\) and away from \\(\\mu\\) (in the left tail, the right tail, or both tails) is the **p-value**. If p-value <= \\(\\alpha\\), we reject the null hypothesis. The results are said to be **statistically significant** and not due to chance. \n\nAssuming \\(\\alpha\\)=0.05, if the p-value > 5%, we conclude the sample is highly likely to have been drawn from a population with mean \\(\\mu\\) and variance \\(\\sigma^2\\), and we accept \\(H\\_0\\). Otherwise, there's insufficient evidence that the sample is part of the population and we reject \\(H\\_0\\). \n\nWe preselect \\(\\alpha\\) based on how much type-I error we're willing to tolerate. \\(\\alpha\\) is called the **level of significance**. The standard level of significance is 0.05, but in some studies it may be 0.01 or 0.1. In the case of two-tailed tests, it's \\(\\alpha/2\\) on either side.\n\n\n### How do we determine sample size and confidence interval for sample estimate?\n\nThe **Law of Large Numbers** suggests that the larger the sample size, the more accurate the estimate. Accuracy means the variance of the estimate tends towards zero as the sample size increases. Sample size can be chosen to suit the accepted level of tolerance for deviation. \n\nThe confidence interval of the sample mean is determined by offsetting the sample mean by a margin of error (a multiple of the standard error) on either side. If the population variance is known, we conduct a z-test based on the Normal distribution. Otherwise, the variance has to be estimated and we use a t-test based on the t-distribution. \n\nThe formulae for determining sample size and confidence interval depend on what we want to estimate (mean/variance/others), the sampling distribution of the estimate, and the standard deviation of the estimate's sampling distribution.\n\n\n### How do we measure type-II or \\(\\beta\\) error?\n\nWe overlay the sample mean's distribution on the population distribution. The proportion of overlap of the sampling estimate's distribution with the population distribution is the **\\(\\beta\\) error**. \n\nThe larger the overlap, the greater the chance that the sample does belong to the population with mean \\(\\mu\\) and variance \\(\\sigma^2\\). 
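As a concrete sketch of these quantities, the snippet below computes the p-value, β and power for a right-tailed one-sample z-test on the earlier gas-price hypothesis (H0: mean price = $2.62). The standard deviation, sample size, observed sample mean and assumed true mean are hypothetical values chosen only for illustration, not figures from this article.

```python
# Toy beta/power calculation for a right-tailed one-sample z-test.
# All numbers below (sigma, n, x_bar, true_mean) are hypothetical illustrations.
from math import sqrt
from scipy.stats import norm

mu0 = 2.62        # H0: population mean gas price ($)
sigma = 0.30      # assumed known population standard deviation
n = 40            # sample size
alpha = 0.05      # tolerated type-I error (level of significance)

se = sigma / sqrt(n)                     # standard error of the sample mean
x_crit = mu0 + norm.ppf(1 - alpha) * se  # reject H0 if sample mean exceeds this

# p-value for an observed sample mean of, say, $2.71
x_bar = 2.71
p_value = 1 - norm.cdf((x_bar - mu0) / se)

# Type-II error and power if the true mean were actually $2.75
true_mean = 2.75
beta = norm.cdf((x_crit - true_mean) / se)
power = 1 - beta

print(f"p-value={p_value:.4f}, beta={beta:.4f}, power={power:.4f}")
```

With these made-up numbers the test rejects H0 (p ≈ 0.03) and has power of roughly 0.86 against the assumed true mean.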
Incidentally, despite the overlap, the p-value may be less than 5%. This happens when the sample mean is far from the population mean, but the variance of the sample mean is such that the overlap is significant.\n\n\n### How do we control \\(\\alpha\\) and \\(\\beta\\) errors?\n\nErrors \\(\\alpha\\) and \\(\\beta\\) are dependent on each other: increasing one decreases the other. Choosing suitable values for them depends on the cost of making these errors. Perhaps it's worse to convict an innocent person (type-I error) than to acquit a guilty person (type-II error), in which case we choose a lower \\(\\alpha\\). But it's possible to decrease both errors by collecting more data. \n\nJust as the p-value manifests \\(\\alpha\\), the **Power of Test** manifests \\(\\beta\\). Power of a test is \\(1-\\beta\\). Among the various ways to interpret power are: \n\n + Probability of rejecting the null hypothesis when, in fact, it is false.\n + Probability that a test of significance will pick up on an effect that is present.\n + Probability of avoiding a Type II error.\n\nA low p-value and high power help us decisively conclude that the sample doesn't belong to the population. When we cannot conclude decisively, it's advisable to go for larger samples and multiple samples.\n\nIn fact, power is increased by increasing sample size, effect size and significance level. Variance also affects power. \n\n\n### What are some misconceptions in hypothesis testing?\n\nA common misconception is to treat the p-value as the probability that the null hypothesis is true. In fact, the p-value is computed under the assumption that the null hypothesis is true. The p-value is the probability of observing the values, or more extreme values, if the null hypothesis is true. \n\nAnother misconception, sometimes called the **base rate fallacy**, is that under controlled \\(\\alpha\\) and adequate power, statistically significant results correspond to true differences. This is not the case. Even with \\(\\alpha\\)=5% and power=80%, 36% of statistically significant p-values will not report the true difference. This is because only 10% of the null hypotheses are false (the base rate): out of 1000 tests, say, 80% power on the 100 false nulls gives only 80 true positives, while \\(\\alpha\\)=5% on the 900 true nulls gives 45 false positives, and 45/125 = 36%. \n\nThe p-value doesn't measure the size of the effect, for which a **confidence interval** is a better approach. A drug that gives 25% improvement may not mean much if symptoms are innocuous, compared to another drug that gives a small improvement for a disease that leads to certain death. Context is therefore important. \n\n## Milestones\n\n1710\n\nThe field of **statistical testing** probably starts with John Arbuthnot who applies it to test sex ratios at birth. Subsequently, others in the 18th and 19th centuries use it in other fields. However, modern terminology (null hypothesis, p-value, type-I or type-II errors) is formed only in the 20th century. \n\n1900\n\nPearson introduces the concept of **p-value** with the chi-squared test. He gives equations for calculating P and states that it's \"the measure of the probability of a complex system of n errors occurring with a frequency as great or greater than that of the observed system.\" \n\n1925\n\nRonald A. Fisher develops the concept of the p-value and shows how to calculate it in a wide variety of situations. He also notes that a value of 0.05 may be considered a conventional cut-off. \n\n1933\n\nNeyman and Pearson publish *On the problem of the most efficient tests of statistical hypotheses*. They introduce the notion of **alternative hypotheses**. 
They also describe both **type-I and type-II errors** (although they don't use these terms). They state, \"Without hoping to know whether each separate hypothesis is true or false, we may search for rules to govern our behaviour with regard to them, in following which we insure that, in the long run of experience, we shall not be too often wrong.\" \n\n1949\n\nJohnson's textbook titled *Statistical methods in research* is perhaps the first to introduce to students the Neyman-Pearson hypothesis testing at a time when most textbooks follow Fisher's significance testing. Johnson uses the terms \"error of the first kind\" and \"error of the second kind\". In time, Fisher's approach is called **P-value approach** and the Neyman-Pearson approach is called **fixed-α approach**. \n\n1993\n\nCarver makes the following suggestions: use of the term \"statistically significant\"; interpret results with respect to the data first and statistical significance second; and pay attention to the size of the effect.","meta":{"title":"Hypothesis Testing and Types of Errors","href":"hypothesis-testing-and-types-of-errors"}} {"text":"# Polygonal Modelling\n\n## Summary\n\n\nPolygonal modelling is a 3D modelling approach that utilizes edges, vertices and faces to form models. Modellers start with simple shapes and add details to build on them. They alter the shapes by adjusting the coordinates of one or more vertices. A polygonal model is called faceted as polygonal faces determine its shape.  \n\nPolygonal or polyhedral modelling fits best where visualization matters more than precision. It's extensively used by video game designers and animation studios. Assets in video games form whole worlds for gamers. Features of these assets are built using polygonal modelling. \n\nComputers take less time to render polygonal models. So, polygonal modelling software run well on browsers. For higher precision, advanced 3D models such as NURBS are suitable. However, NURBs can't be 3D printed unless they are converted to polygons. Many industrial applications easily handle polygonal model representations.\n\n## Discussion\n\n### Can you describe the basic elements of polygonal modelling?\n\nA **vertex** is the smallest component of a 3D model. Two or more edges of a polygon meet at a vertex. \n\n**Edges** define the shape of the polygons and the 3D model. They are straight lines connecting the vertices. \n\nTriangles and quadrilaterals are the polygons generally used. Some applications offer the use of polygons with any number of edges (N-gons) to work with. \n\nFaces of polygons combine to form polygonal **meshes**. One can **deform** meshes. That is, one may move, twist or turn meshes to create 3D objects using deformation tools in the software. The number of polygons in a mesh makes its **polycount**. \n\n**UV coordinates** are the horizontal (U) and vertical (V) axes of the 2D space. 3D meshes are converted into 2D information to wrap textures around them. \n\nPolygon density in the meshes is its **resolution**. Higher resolution indicates better detailing. Good 3D models contain high-resolution meshes where fine-detailing matters and low-resolution meshes where detailing isn't important. \n\n\n### How are polygonal meshes generated?\n\nPolygonal meshes are generated by converting a set of spatial points into vertices, faces and edges. These components meet at shared boundaries to form physical models. \n\nPolygonal mesh generation (aka meshing) is of two types: **Manual** and **Automatic**. 
In manual meshing, the positions of vertices are edited one by one. In automatic meshing, values are fed into the software, which automatically constructs meshes based on the specified values. The automatic method enables the rapid creation of 3D objects in games, movies and VR. \n\nMeshing is performed at two levels. At the model's surface level, it's called **Surface meshing**. Surface meshes won't have free edges or a common edge shared by more than two polygons. \n\nMeshing in the volume dimension is called **Solid meshing**. The solid surfaces in solid meshing are either polyhedral or trimmed. \n\nThere are many ways to produce polygonal meshes. Forming primitives from standard shapes is one way. Meshes can also be drawn by interpolating edges or points of other objects. Converting existing solid models and stretching custom-made meshes into fresh meshes are two other options. \n\n\n### What are free edges, manifold edges and non-manifold edges?\n\nA **free edge** in a mesh is an edge that doesn't fully merge with the edge of its neighbouring element. The nodes of meshes with free edges won't be accurately connected. Such edges within the geometry will affect the overall output. Therefore, unwanted free edges should be removed. \n\nA **manifold edge** is an edge shared by at most two faces. When a third face shares the edge, it becomes a **non-manifold** edge. \n\nA non-manifold edge cannot be replicated in the real world. Hence it should be removed while modelling. In the event of 3D printing, non-manifold edges will produce failed models. \n\n\n### How would you classify the polygonal meshing process based on grid structure?\n\nA grid structure works on the principle of Finite Element Analysis (FEA). An FEA node can be thought of as the vertex of a polygon in polygonal modelling. An FEA element can represent an edge, a face or a solid, depending on the dimension. \n\nDividing the expanse of a polygonal model into small elements before computing forms a grid. Grid structure-wise, meshing is of two types: \n\n + **Structured meshing** displays a definite pattern in the arrangement of nodes or elements. The size of each element in it is nearly the same. It enables easy access to the coordinates of these elements. It's applicable to uniform grids made of regular shapes such as rectangles, ellipses and spheres.\n + **Unstructured meshing** is arbitrary and forms irregular geometric shapes. The connectivity between elements is not uniform, so unstructured meshes do not follow a definite pattern. It requires that the connectivity between elements is well-defined and properly stored. The axes of these elements are unaligned (non-orthogonal).\n\n### How are mesh generation algorithms written for polygonal modelling?\n\nMesh generation algorithms are written according to the principles of the chosen mesh generation method. There are many methods for generating meshes, and the choice depends on the mesh type. \n\nA mesh generation method serves the purposes of generating nodes (geometry) and connecting nodes (topology). \n\nLet's take the Delaunay triangulation method for instance. According to it, the surface domain is discretized into non-overlapping triangular elements. The nodes are placed so that the triangulation maximizes the smallest angle, avoiding thin sliver triangles. The circumcircle drawn about each triangle contains no other node within it. \n\nDelaunay triangulation is applied through several algorithms. The Bowyer-Watson algorithm is one of them. 
It's an incremental algorithm that adds one node at a time in a given triangulation. If the new point falls within the circumcircle of a triangle, the triangle is removed. Using the new point a fresh triangle is formed. \n\n\n### How does one fix the polygon count for models?\n\nPolygon count or polycount gives a measure of visual quality. Detailing needs a high number of polygons. It gives a photorealistic effect. But high polycount impacts efficiency. It may take more time to load and render. When a model takes more time to download, we may run out of patience. Real-time rendering delays cause a video or animation to stop and start. So, a good polygonal model is a combination of high visual quality and low polycount. \n\nThe threshold number to call a polygon count high is subjective. For mobile devices, anywhere between 300 to 1500 polygons is good. Desktops can comfortably accommodate 1500 to 4000 polygons without affecting performance. \n\nThese polycount numbers vary depending on the CPU configuration and other hardware capabilities. Advanced rendering capabilities smoothly handle anywhere between 10k to 40k polygons. Global mobile markets are vying to produce CPUs that can render 100k to 1 million polygons for an immersive 3D experience. \n\nHigher polycount increases the file sizes of 3D assets. Websites will have upload limits. So it's also important to keep file sizes in mind while fixing the polygon count. \n\n\n### What are some beginner pitfalls to polygonal modelling?\n\n**Irregular meshes**: As beginners, we may miss triangles and create self-intersecting surfaces. Or we may leave holes on mesh surfaces or fill in with backward triangles. Irregular meshes will affect the model's overall appearance. Eyeball checks and use of mesh generation software will help us avoid mesh-related errors. \n\n**Incorrect measurements**: It may distort the model's proportionality and ruin the output. It's best to train our eyes to compare images and estimate the difference in depths. Comparing our model with the reference piece on the image viewer tool will tell us the difference. \n\n**Too many subdivisions early in the modelling**: It will disable us from making changes without tampering with the measurements. So, we may end up creating uneven surfaces. Instead, it's better to start with fewer polygons and add to them as we build the model. \n\n**Topology error**: We may get the edge structure and mesh distributions wrong. We need to equip ourselves by learning how to use mesh tools. It's important to learn where to use triangles, quads and higher polygons. Duplicates are to be watched out for. Understanding the flow of edges is vital. \n\n## Milestones\n\n1952\n\nGeoffrey Colin Shepherd furthers Thomas Bradwardine's 14th-century work on non-convex polygons. He extends polygon formation to the imaginary plane. It paves the way for the construction of complex polygons. In polygonal modelling, complex polygons have circuitous boundaries. A polygon with a hole inside is one example. \n\n1972\n\nBruce G Baumgart introduces a paper on **winged edge data structure** at Stanford University. Winged data structure is a way of representing polyhedrons on a computer. The paper states its exclusive use in AI for computer graphics and world modelling. \n\n1972\n\nNewell introduces the **painter's algorithm**. It's a painting algorithm that paints a polygon. It considers the distance of the plane from the viewer while painting. 
The algorithm paints the farthest polygon from the viewer first and proceeds to the nearest. \n\n1972\n\nEdwin Catmull and Fredrick Parke create the **world's first 3D rendered movie**. In the movie, the animation of Edwin's left hand has precisely drawn and measured polygons. \n\n1992\n\nFowlery et al. present *Modelling Seashells* at ACM SIGGRAPH, Chicago. They use polygonal meshes among others to create comprehensive computer imagery of seashells. \n\n1998\n\nAndreas Raab suggests the **classification of edges** of a polygonal mesh. They shall be grouped as sharp, smooth, contour and triangulation edges. It solves the problem of choosing the right lines to draw. \n\n1999\n\nDeussen et al. successfully apply Adreas Raab's algorithm that constructs a skeleton from a 3D polygonal model. They use it in connection with the intersecting planes.","meta":{"title":"Polygonal Modelling","href":"polygonal-modelling"}} {"text":"# Relation Extraction\n\n## Summary\n\n\nConsider the phrase \"President Clinton was in Washington today\". This describes a *Located* relation between Clinton and Washington. Another example is \"Steve Balmer, CEO of Microsoft, said…\", which describes a *Role* relation of Steve Balmer within Microsoft. \n\nThe task of extracting semantic relations between entities in text is called **Relation Extraction (RE)**. While Named Entity Recognition (NER) is about identifying entities in text, RE is about finding the relations among the entities. Given unstructured text, NER and RE helps us obtain useful structured representations. Both tasks are part of the discipline of Information Extraction (IE). \n\nSupervised, semi-supervised, and unsupervised approaches exist to do RE. In the 2010s, neural network architectures were applied to RE. Sometimes the term **Relation Classification** is used, particularly in approaches that treat it as a classification problem.\n\n## Discussion\n\n### What sort of relations are captured in relation extraction?\n\nHere are some relations with examples:\n\n + *located-in*: CMU is in Pittsburgh\n + *father-of*: Manuel Blum is the father of Avrim Blum\n + *person-affiliation*: Bill Gates works at Microsoft Inc.\n + *capital-of*: Beijing is the capital of China\n + *part-of*: American Airlines, a unit of AMR Corp., immediately matched the moveIn general, affiliations involve persons, organizations or artifacts. Geospatial relations involve locations. Part-of relations involve organizations or geo-political entities. \n\n**Entity tuple** is the common way to represent entities bound in a relation. Given n entities in a relation r, the notation is \\(r(e\\_{1},e\\_{2},...,e\\_{n})\\). An example use of this notation is *Located-In(CMU, Pittsburgh)*. \n\nRE mostly deals with binary relations where n=2. For n>2, the term used is **higher-order relations**. An example of 4-ary biomedical relation is *point\\_mutation(codon, 12, G, T)*, in the sentence \"At codons 12, the occurrence of point mutations from G to T were observed\". \n\n\n### What are some common applications of relation extraction?\n\nSince structured information is easier to use than unstructured text, relation extraction is useful in many NLP applications. RE enriches existing information. Once relations are obtained, they can be stored in databases for future queries. They can be visualized and correlated with other information in the system. 
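As a minimal illustration of how extracted relations r(e1, e2) might be stored and queried later, here's a toy in-memory relation store in Python. The relation instances reuse examples from this article; the store and its helper functions are hypothetical and not part of any particular RE toolkit.

```python
# Minimal sketch: storing extracted binary relations r(e1, e2) and querying them.
from collections import defaultdict

store = defaultdict(set)   # maps relation name -> set of (entity1, entity2) tuples

def add_relation(relation, e1, e2):
    store[relation].add((e1, e2))

def query(relation, e1=None):
    """Return all tuples of `relation`, optionally filtered by the first entity."""
    return [t for t in store[relation] if e1 is None or t[0] == e1]

add_relation("Located-In", "CMU", "Pittsburgh")
add_relation("Role", "Steve Balmer", "Microsoft")
add_relation("Born-In", "Gandhi", "1869")

print(query("Located-In"))          # [('CMU', 'Pittsburgh')]
print(query("Born-In", "Gandhi"))   # [('Gandhi', '1869')]
```

A real system would persist such tuples in a database or knowledge graph, but the lookup pattern is the same.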
\n\nIn question answering, one might ask \"When was Gandhi born?\" Such a factoid question can be answered if our relation database has stored the relation *Born-In(Gandhi, 1869)*. \n\nIn biomedical domain, protein binding relations can lead to drug discovery. When relations are extracted from a sentence such as \"Gene X with mutation Y leads to malignancy Z\", these relations can help us detect cancerous genes. Another example is to know the location of a protein in an organism. This ternary relation is split into two binary relations (Protein-Organism and Protein-Location). Once these are classified, the results are merged into a ternary relation. \n\n\n### Which are the main techniques for doing relation extraction?\n\nWith **supervised learning**, the model is trained on annotated text. Entities and their relations are annotated. Training involves a binary classifier that detects the presence of a relation, and a classifier to label the relation. For labelling, we could use SVMs, decision trees, Naive Bayes or MaxEnt. Two types of supervision are feature-based or kernel-based. \n\nSince finding large annotated datasets is difficult, a **semi-supervised** approach is more practical. One approach is to do a phrasal search with wildcards. For example, `[ORG] has a hub at [LOC]` would return organizations and their hub locations. If we relax the pattern, we'll get more matches but also false positives. \n\nAn alternative is to use a set of specific patterns, induced from an initial set of seed patterns and seed tuples. This approach is called **bootstrapping**. For example, given the seed tuple *hub(Ryanair, Charleroi)* we can discover many phrasal patterns in unlabelled text. Using these patterns, we can discover more patterns and tuples. However, we have to be careful of **semantic drift**, in which one wrong tuple/pattern can lead to further errors. \n\n\n### What sort of features are useful for relation extraction?\n\nSupervised learning uses features. The named entities themselves are useful features. This includes an entity's bag of words, head words and its entity type. It's also useful to look at words surrounding the entities, including words that are in between the two entities. Stems of these words can also be included. The distance between the entities could be useful. \n\nThe **syntactic structure** of the sentence can signal the relations. A syntax tree could be obtained via base-phrase chunking, dependency parsing or full constituent parsing. The paths in these trees can be used to train binary classifiers to detect specific syntactic constructions. The accompanying figure shows possible features in the sentence \"[ORG American Airlines], a unit of AMR Corp., immediately matched the move, spokesman [PERS Tim Wagner] said.\" \n\nWhen using syntax, expert knowledge of linguistics is needed to know which syntactic constructions correspond to which relations. However, this can be automated via machine learning. \n\n\n### Could you explain kernel-based methods for supervised relation classification?\n\nUnlike feature-based methods, kernel-based methods don't require explicit feature engineering. They can explore a large feature space in polynomial computation time. \n\nThe essence of a kernel is to compute the **similarity** between two sequences. A kernel could be designed to measure structural similarity of character sequences, word sequences, or parse trees involving the entities. 
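To give a flavour of the idea, here's a deliberately simplified word-sequence kernel that scores two sentences by the number of word bigrams they share. It's only a toy stand-in for the subsequence and tree kernels described below, not an implementation of them.

```python
# Toy kernel: similarity of two word sequences as the count of shared word bigrams.
# A deliberate simplification of subsequence kernels, just to show the
# "kernel = similarity function between two sequences" idea.
def bigrams(tokens):
    return {(a, b) for a, b in zip(tokens, tokens[1:])}

def toy_kernel(sentence1, sentence2):
    return len(bigrams(sentence1.split()) & bigrams(sentence2.split()))

s1 = "Steve Balmer , CEO of Microsoft , said"
s2 = "Tim Cook , CEO of Apple , said"
print(toy_kernel(s1, s2))  # 3 shared bigrams, e.g. (',', 'CEO') and ('CEO', 'of')
```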
In practice, a kernel is used as a similarity function in classifiers such as SVM or Voted Perceptron. \n\nWe note a few kernel designs: \n\n + **Subsequence**: Uses a sequence of words made of the entities and their surrounding words. Word representation includes POS tag and entity type.\n + **Syntactic Tree**: A constituent parse tree is used. Convolution Parse Tree Kernel is one way to compare similarity of two syntactic trees.\n + **Dependency Tree**: Similarity is computed between two dependency parse trees. This could be enhanced with shallow semantic parsers. A variation is to use dependency graph paths in which the shortest path between entities represents a relation.\n + **Composite**: Combines the above approaches. Subsequence kernels capture lexical information whereas tree kernels capture syntactic information.\n\n### Could you explain distant supervised approach to relation extraction?\n\nDue to extensive work done for Semantic Web, we already have many knowledge bases that contain `entity-relation-entity` triplets. Examples include DBpedia (3K relations), Freebase (38K relations), YAGO, and Google Knowledge Graph (35K relations). These can be used for relation extraction without requiring annotated text. \n\nDistant supervision is a combination of unsupervised and supervised approaches. It extracts relations without supervision. It also induces thousands of features using a probabilistic classifier. \n\nThe process starts by linking named entities to those in the knowledge bases. Using relations in the knowledge base, the patterns are picked up in the text. Patterns are applied to find more relations. Early work used DBpedia and Freebase, and Wikipedia as the text corpus. Later work utilized semi-structured data (HTML tables, Wikipedia list pages, etc.) or even a web search to fill gaps in knowledge graphs. \n\n\n### Could you compare some semi-supervised or unsupervised approaches of some relation extraction tools?\n\nDIPRE's algorithm (1998) starts with seed relations, applies them to text, induces patterns, and applies the patterns to obtain more tuples. These steps are iterated. When applied to *(author, book)* relation, patterns take the form `(longest-common-suffix of prefix strings, author, middle, book, longest-common-prefix of suffix strings)`. DIPRE is an application of Yarowsky algorithm (1995) invented for WSD. \n\nLike DIPRE, Snowball (2000) uses seed relations but doesn't look for exact pattern matches. Tuples are represented as vectors, grouped using similarity functions. Each term is also weighted. Weights are adjusted with each iteration. Snowball can handle variations in tokens or punctuation. \n\nKnowItAll (2005) starts with domain-independent extraction patterns. Relation-specific and domain-specific rules are derived from the generic patterns. The rules are applied on a large scale on online text. It uses pointwise mutual information (PMI) measure to retain the most likely patterns and relations. \n\nUnlike earlier algorithms, TextRunner (2007) doesn't require a pre-defined set of rules. It learns relations, classes and entities on its own from a large corpus. \n\n\n### How are neural networks being used to do relation extraction?\n\nNeural networks were increasingly applied to relation extraction from the early 2010s. Early approaches used **Recursive Neural Networks** that were applied to syntactic parse trees. The use of **Convolutional Neural Networks (CNNs)** came next, to extract sentence-level features and the context surrounding words. 
A combination of these two networks has also been used. \n\nSince CNNs failed to learn long-distance dependencies, **Recurrent Neural Networks (RNNs)** were found to be more effective in this regard. By 2017, basic RNNs gave way to gated variants called GRU and LSTM. A comparative study showed that CNNs are good at capturing local and position-invariant features whereas RNNs are better at capturing order information long-range context dependency. \n\nThe next evolution was towards **attention mechanism** and **pre-trained language models** such as BERT. For example, attention mechanism can pick out most relevant words and use CNNs or LSTMs to learn relations. Thus, we don't need explicit dependency trees. In January 2020, it was seen that BERT-based models represent the current state-of-the-art with an F1 score close to 90. \n\n\n### How do we evaluate algorithms for relation extraction?\n\nRecall, precision and F-measures are typically used to evaluate on a gold-standard of human annotated relations. These are typically used for supervised methods. \n\nFor unsupervised methods, it may be sufficient to check if a relation has been captured correctly. There's no need to check if every mention of the relation has been detected. Precision here is simply the correct relations against all relations as judged by human experts. Recall is more difficult to compute. Gazetteers and web resources may be used for this purpose. \n\n\n### Could you mention some resources for working with relation extraction?\n\nPapers With Code has useful links to recent publications on relation classification. GitHub has a topic page on relation classification. Another useful resource is a curated list of papers, tutorials and datasets.\n\nThe current state-of-the-art is captured on the NLP-progress page of relation extraction. \n\nAmong the useful datasets for training or evaluation are ACE-2005 (7 major relation types) and SemEval-2010 Task 8 (19 relation types). For distant supervision, Riedel or NYT dataset was formed by aligning Freebase relations with New York Times corpus. There's also Google Distant Supervision (GIDS) dataset and FewRel. TACRED is a large dataset containing 41 relation types from newswire and web text. \n\n## Milestones\n\n1998\n\nAt the 7th Message Understanding Conference (MUC), the task of extracting relations between entities is considered. Since this is considered as part of template filling, they call it **template relations**. Relations are limited to organizations: employee\\_of, product\\_of, and location\\_of. \n\nJun \n2000\n\nAgichtein and Gravano propose *Snowball*, a semi-supervised approach to generating patterns and extracting relations from a small set of seed relations. At each iteration, it evaluates for quality and keeps only the most reliable patterns and relations. \n\nFeb \n2003\n\nZelenko et al. obtain **shallow parse trees** from text for use in binary relation classification. They use contiguous and sparse subtree kernels to assess similarity of two parse trees. Subsequently, this **kernel-based** approach is followed by other researchers: kernels on dependency parse trees of Culotta and Sorensen (2004); subsequence and shortest dependency path kernels of Bunescu and Mooney (2005); convolutional parse kernels of Zhang et al. (2006); and composite kernels of Choi et al. (2009). \n\n2004\n\nKambhatla takes a **feature-based** supervised classifier approach to relation extraction. A MaxEnt model is used along with lexical, syntactic and semantic features. 
Since kernel methods are a generalization of feature-based algorithms, Zhao and Grishman (2005) extend Kambhatla's work by including more syntactic features using kernels, then use SVM to pick out the most suitable features. \n\nJun \n2005\n\nSince binary classifiers have been well studied, McDonald et al. cast the problem of extracting **higher-order relations** into many binary relations. This also makes the data less sparse and eases computation. Binary relations are represented as a graph, from which cliques are extracted. They find that probabilistic cliques perform better than maximal cliques. The figure corresponds to some binary relations extracted for the sentence \"John and Jane are CEOs at Inc. Corp. and Biz. Corp. respectively.\" \n\nJan \n2007\n\nBanko et al. propose **Open Information Extraction** along with an implementation that they call *TextRunner*. In an unsupervised manner, the system is able to extract relations without any human input. Each tuple is assigned a probability and indexed for efficient information retrieval. TextRunner has three components: self-supervised learner, single-pass extractor, and redundancy-based assessor. \n\nAug \n2009\n\nMintz et al. propose **distant supervision** to avoid the cost of producing hand-annotated corpus. Using entity pairs that appear in Freebase, they find all sentences in which each pair occurs in unlabelled text, extract textual features and train a relation classifier. The include both lexical and syntactic features. They note that syntactic features are useful when patterns are nearby in the dependency tree but distant in terms of words. In the early 2010s, distant supervision becomes an active area of research. \n\nAug \n2014\n\nNeural networks and word embeddings were first explored by Collobert et al. (2011) for a number of NLP tasks. Zeng et al. apply **word embeddings** and **Convolutional Neural Network (CNN)** to relation classification. They treat relation classification as a multi-class classification problem. Lexical features include the entities, their surrounding tokens, and WordNet hypernyms. CNN is used to extract sentence level features, for which each token is represented as *word features (WF)* and *position features (PF)*. \n\nJul \n2015\n\nDependency shortest path and subtrees have been shown to be effective for relation classification. Liu et al. propose a recursive neural network to model the dependency subtrees, and a convolutional neural network to capture the most important features on the shortest path. \n\nOct \n2015\n\nSong et al. present *PKDE4J*, a framework for dictionary-based entity extraction and rule-based relation extraction. Primarily meant for biomedical field, they report F-measures of 85% for entity extraction and 81% for relation extraction. The RE algorithm uses dependency parse trees, which are analyzed to extract heuristic rules. They come up with 17 rules that can be applied to discern relations. Examples of rules include verb in dependency path, nominalization, negation, active/passive voice, entity order, etc. \n\nAug \n2016\n\nMiwa and Bansal propose to **jointly model the tasks of NER and RE**. A BiLSTM is used on word sequences to obtain the named entities. Another BiLSTM is used on dependency tree structures to obtain the relations. They also find that shortest path dependency tree performs better than subtrees of full trees. \n\nMay \n2019\n\nWu and He apply **BERT pre-trained language model** to relation extraction. They call their model *R-BERT*. 
Named entities are identified beforehand and are delimited with special tokens. Since an entity can span multiple tokens, their start/end hidden token representations are averaged. The output is a softmax layer with cross-entropy as the loss function. On SemEval-2010 Task 8, R-BERT achieves state-of-the-art Macro-F1 score of 89.25. Other BERT-based models learn NER and RE jointly, or rely on topological features of an entity pair graph.","meta":{"title":"Relation Extraction","href":"relation-extraction"}} {"text":"# React Native\n\n## Summary\n\n\nTraditionally, *native mobile apps* have been developed in specific languages that call platform-specific APIs. For example, Objective-C and Swift for iOS app development; Java and Kotlin for Android app development. This means that developers who wish to release their app on multiple platforms will have to implement it in different languages.\n\nTo avoid this duplication, *hybrid apps* came along. The app was implemented using web technologies but instead of running it inside a web browser, it was wrapped and distributed as an app. But it had performance limitations.\n\nReact Native enables web developers write code once, deploy on any mobile platform and also use the platform's native API. **React Native** is a platform to build native mobile apps using JavaScript and React.\n\n## Discussion\n\n### As a developer, why should I adopt React Native?\n\nSince React Native allows developers maintain a single codebase even when targeting multiple mobile platforms, development work is considerably reduced. Code can be reused across platforms. If you're a web developer new to mobile app development, there's no need to learn a new language. You can reuse your current web programming skills and apply them to the mobile app world. Your knowledge of HTML, CSS and JS will be useful, although you'll be applying them in a different form in React Native. \n\nReact Native uses ReactJS, which is a JS library invented and later open sourced by Facebook. ReactJS itself has been gaining adoption because it's easy to learn for a JS programmer. It's performant due to the use of *virtual DOM*. The recommended syntax is ES6 and JSX. ES6 brings simplicity and readability to JS code. JSX is a combination of XML and JS to build reusable component-based UI. \n\n\n### How is React Native different from ReactJS?\n\nReact Native is a framework whereas ReactJS is a library. In ReactJS projects, we typically use a bundler such as *Webpack* to bundle necessary JS files for use in a browser. In React Native, we need only a single command to start a new project. All basic modules required for the project will be installed. We also need to install Android Studio for Android development and Xcode for iOS development. \n\nIn ReactJS, we are allowed to use HTML tags. In React Native, we create UI components using React Native components that are specified using JSX syntax. These components are mapped to native UI components. Thus, we can't reuse any ReactJS libraries that render HTML, SVG or Canvas. \n\nIn ReactJS, styling is done using CSS, like in any web app. In React Native, styling is done using JS objects. For component layout, React Native's *Flexbox* can be used. CSS animations are also replaced with the *Animated* API. \n\n\n### How does React Native work under the hood?\n\nBetween native and JavaScript worlds is a bridge (implemented in C++) through which data flows. Native code can call JS code and vice versa. To pass data between the two, data is serialized. 
\n\nFor example, a UI event is captured as a native event but the processing for this is done in JavaScript. The result is serialized and sent over the bridge to the native world. The native world deserializes the response, does any necessary processing and updates the UI. \n\n\n### What are some useful developer features of React Native?\n\nReact Native offers the following:\n\n + **Hot Reloading**: Small changes to your app will be immediately visible during development. If business logic is changed, Live Reload can be used instead.\n + **Debugging**: Chrome Dev Tools can be used for debugging your app. In fact, your debugging skills from the web world can be applied here.\n + **Publishing**: Publishing your app is easy using CodePush, now part of Visual Studio App Center.\n + **Device Access**: React Native gets access to camera, sensors, contacts, geolocation, etc.\n + **Declarative**: UI components are written in a declarative manner. Component-based architecture also means that one developer need not worry about breaking another's work.\n + **Animations**: For performance, these are serialized and sent to the native driver. They run independent of the JS event loop.\n + **Native Code**: Native code and React Native code can coexist. This is important because React Native APIs may not support all native functionality.\n\n### How does React Native compare against platforms in terms of performance?\n\nSince React Native is regularly being improved with each release, we can except better performance than what we state below.\n\nA comparison of React Native against iOS native programming using Swift showed comparable performance of CPU usage for list views. When resizing maps, Swift was better by 10% but React Native uses far less memory here. For GPU usage, Swift outperforms marginally except for list views. \n\nReact Native apps can leak memory. Therefore, `FlatList`, `SectionList`, or `VirtualizedList` could be used rather than `ListView`. The communication between native and JS runtimes over the bridge is via message queues. This is also a performance bottleneck. For better performance, ReactNavigation is recommended over Navigator component. \n\nWhen comparing against Ionic platform, React Native outperforms Ionic across metrics such as CPU usage, memory usage, power consumption and list scrolling. \n\n\n### Are there real-world examples of who's using React Native?\n\nFacebook and Instagram use React Native. Other companies or products using it include Bloomberg, Pinterest, Skype, Tesla, Uber, Walmart, Wix, Discord, Gyroscope, SoundCloud Pulse, Tencent QQ, Vogue, and many more. \n\nWalmart moved to React Native because it was hard to find skilled developers for native development. They used an incremental approach by migrating parts of their code to React Native. They were able to reuse 95% of their code between iOS and Android. They could reuse business logic with their web apps as well. They could deliver quick updates from their server rather than an app store. \n\nBloomberg developed their app in half the time using React Native. They were also able to push updates, do A/B testing and iterate quickly. \n\nAirbnb engineers write code for the web, iOS and Android. With React Native, they stated, \n\n> It's now feasible for us to have the same engineer skilled in JavaScript and React write the feature for all three platforms.\n\nHowever, in June 2018, Airbnb decided to move away from React Native and back to native development due to technical and organizational challenges. 
\n\n\n### What backend should I use for my React Native app?\n\nReact Native provides UI components. However, the React Native ecosystem is vast. There are frameworks/libraries for AR/VR, various editors and IDEs that support React Native, local databases (client-side storage), performance monitoring tools, CI/CD tools, authentication libraries, deep linking libraries, UI frameworks, and more. \n\nSpecifically for backends, **Mobile Backend as a Service (MBaaS)** is now available. Some options include RN Firebase, Baqend, RN Back, Feather and Graph Cool. These services make it easy for developers to build their React Native apps. \n\nThe more traditional approach is to build and manage your own backend. Some developers choose Node.js or Express.js because these are based on JavaScript that they're already using to build React Native UI. This can be paired with a database such as Firebase, MySQL, or MongoDB. Another option is to use Django with GraphQL. Even WordPress can be used, especially if the app is content driven. These are merely some examples. Developers can use any backend that suits their expertise and app requirements.\n\n\n### Could you point me to some useful React Native developer resources?\n\nHere are some useful resources:\n\n + Expo is a free and open source toolchain for your React Native projects. Expo also has a collection of apps developed and shared by others. The easiest way to create a new app is to use the create-react-native-app codebase.\n + If you wish learn by studying app code written by others, React Active News maintains a curated list of open source React Native apps.\n + React.parts is a place to find reusable components for React Native.\n + Visual Studio App Center is a useful tool to build and release your app.\n + Use React Navigation for routing and navigation in React Native apps.\n + React Native provides only the UI but here's a great selection of tools to complement React Native.\n\n\n## Milestones\n\n2011\n\nAt Facebook, Jordan Walke and his team release ReactJS, a JavaScript library that brings a new way of rendering pages with more responsive user interactions. A web page can be built from a hierarchy of UI components. \n\n2013\n\nReact Native starts as an internal hackathon project within Facebook. Meanwhile, ReactJS is open sourced. \n\nMar \n2015\n\nFacebook open sources React Native for iOS on GitHub. The release for Android comes in September. \n\n2016\n\nMicrosoft and Samsung commit to adopt React Native for Windows and Tizen. \n\n2017\n\nReact Native sees a number of improvements over the year: better navigation, smoother list rendering, more performant animations, and more.","meta":{"title":"React Native","href":"react-native"}} {"text":"# Web of Things\n\n## Summary\n\n\nWeb of Things (WoT) is a set of building blocks that seeks to make the Internet of Things (IoT) more interoperable and usable. It simplifies application development (including cross-domain applications) by adopting the web paradigm. Web developers will have a low barrier to entry when programming for the IoT. \n\nThe key concepts of WoT include Thing Description, Thing Model, Interaction Model, Hypermedia Controls, Protocol Bindings, Profiles, Discovery and Binding Templates. IoT devices (aka Things) are treated as web resources, which makes WoT a Resource-Oriented Architecture (ROA). \n\nWoT is standardized by the W3C. There are developer tools and implementations. As of December 2023, widespread industry adoption of WoT is yet to happen. 
Highly resource-constrained devices that can't run a web stack will not be able to adopt WoT.\n\n## Discussion\n\n### Why do we need the Web of Things (WoT)?\n\nThe IoT ecosystem is fragmented. Applications or devices from different vendors don't talk to one another due to differing data models. Consumers need to use multiple mobile apps to interact with their IoT devices. While IoT has managed to network different devices via various connectivity protocols (Zigbee, IEEE 802.15.4, NB-IoT, Thread, etc.), there's a disconnect at the application layer. \n\nFor developers, this disconnect translates to more effort integrating new devices and services. Each application exposes its own APIs. This results in tight coupling between clients and service providers. It's more effort maintaining and evolving these services. \n\nWoT brings interoperability at the application layer with a unifying data model. It reuses the web paradigm. IoT devices can be treated as web resources. Just as documents on the web are interlinked and easily navigated, Things can be linked, discovered, queried and acted upon. Mature web standards such as REST, HTTP, JSON, AJAX and URI can be used to achieve this. This means that web developers can become IoT developers. They can create reusable IoT building blocks rather than custom proprietary implementations that work for limited use cases. \n\n\n### What integration patterns does WoT cover?\n\nAn IoT device can directly expose a WoT API. This is the simplest integration pattern. It's also challenging from a security perspective or if the device is behind a firewall. For more resource-constrained devices running LPWAN protocols, direct access is difficult. They would connect to the cloud via a gateway, which exposes the WoT API. When devices spread over a large area need to cooperate, they would connect to the cloud in different ways and the cloud exposes the WoT API. \n\nLet's consider specific use cases. A remote controller connects directly to an electrical appliance in a trusted environment. Similarly, a sensor acting as a control agent connects to an electrical appliance. A remote control outside a trusted environment connects to a gateway or a edge device which then connects to an electrical appliance. Connected devices are mapped to digital twins that can be accessed via a client device. A device can be controlled via its digital twin in the cloud. These various integration patterns can be combined through system integration. \n\n\n### What's the architecture of WoT?\n\nWoT standardizes a layered architecture of four layers (lower to higher): Access, Find, Share and Compose. The protocols or techniques used at each of these layers are already widely used on the web. These four layers can't be mapped to the OSI model, nor are they strictly defined at the interfaces. They're really a collection of services to ease the development of IoT solutions. \n\nAt the access layer, solution architects have to think about resource, representation and interface designs. They should also define how resources are interlinked. At the find layer, web clients can discover root URLs, the syntax and semantics of interacting with Things. At the compose layer, tools such as Node-RED and IFTTT can help create mashups. \n\n\n### What are Thing Description (TD) and Thing Model (TM) in WoT?\n\nTD is something like the business card of the Thing. It reveals everything about the Thing. It informs the protocol, data encoding, data structure, and security mechanism used by the Thing. 
TD itself is in JSON-LD format and is exposed by the Thing or can be discovered by consumers from a Thing Description Directory (TDD). \n\nIn object-oriented programming, objects are instantiated from classes. Likewise, a TD can be seen as an instantiation of a TM. A TM is a logical description of a Thing's interface and interactions. However, it doesn't contain instance-specific information such as an IP address, serial number of GPS location. A TM can include security details if those are applicable for all instances of that TM. \n\nBoth TD and TM are represented and serialized in JSON-LD format. Whereas a TD can be validated against its TM, a TM can't be validated. \n\n\n### What's the WoT interaction model?\n\nApart from links, a Thing may expose three types of interaction affordances: \n\n + **Properties**: Property is a state of the Thing. State may be read-only or read-write. Properties can be made observable. Sensor values, stateful actuators, configuration, status and computation results are examples.\n + **Actions**: Action invokes a function of the Thing. Action can be used to update one or more properties including read-only ones.\n + **Events**: Event is used to asynchronously send data from the Thing to a consumer. Focus is on state transitions rather than the state itself. Examples include alarms or samples of a time series.Like documents on the web, WoT also links and forms. These are called **hypermedia controls**. Links are used to discover and interlink Things. Forms enable more complex operations than what's possible by simply dereferencing a URI. \n\n\n### What are protocol bindings in WoT?\n\nWoT's abstractions make it protocol agnostic. It doesn't matter if a Thing uses MQTT, CoAP, Modbus or any other connectivity protocol. WoT's interaction model unifies all these so that applications talk in terms of properties, actions and events. But abstractions have to be translated into protocol actions. This is provided by **protocol bindings**. For a door handle for example, protocol binding tells how to open/close the door at the level of knob or lever. \n\nW3C has published a non-normative document called **WoT Binding Templates**. This gives blueprints on how to write TDs for different IoT platforms or standards. This includes protocol-specific metadata, payload formats, and usage in specific IoT platforms. The consumer of a TD would implement the template, that is, the protocol stack, media type encoder/decoder and platform stack. \n\n\n### Who has implemented WoT?\n\nW3C maintains a list of developer resources. This include tools, implementations, TD directories and WoT middleware. For example, Eclipse Thingweb is a Node.js implementation to expose and consume TD. From other sources, there are implementations in Python, Java, Rust and Dart. Among the TD directories are TinyIoT Thing Directory and WoTHive. Major WoT deployments during 2012-2021 have been documented. \n\nKrellian Ltd. offers WebThings Gateway and WebThings Framework. WebThings was initially developed at Mozilla. However, its API differs from W3C specifications in many ways. \n\nThe sayWoT! platform from evosoft (a Siemens subsidiary) gives web and cloud developers an easy way to develop IoT solutions. One study compared many WoT platforms including WoT-SDN, HomeWeb, KNX-WoT, EXIP, WTIF, SOCRADES, WoTKit, µWoTO, and more. \n\nWoT is being leveraged to create digital twins. WoTwins and Eclipse Ditto with WoT integration are examples of this. Ortiz et al. 
used WoT TD effectively in real-time IoT data processing in smart ports use case. WoTemu is an emulation framework for WoT edge architecture. \n\n\n### What standards cover WoT?\n\nThe W3C is standardizing WoT. The following are the main normative specifications: \n\n + WoT Architecture 1.1 (Recommendation)\n + WoT Thing Description 1.1 (Recommendation)\n + WoT Discovery (Recommendation)\n + WoT Profile (Working Draft)Informative specifications include WoT Scripting API, WoT Binding Templates, WoT Security and Privacy Guidelines, and WoT Use Cases and Requirements. \n\nBeginners can start at the W3C WoT webpage for latest updates, community groups, documentation and tooling.\n\nAt the IETF, there's a draft titled *Guidance on RESTful Design for Internet of Things Systems*. This is relevant to WoT. \n\n\n### What are some limitations of WoT?\n\nWoT depends on the web stack. Hence, it's not suited for very low-power devices or mesh deployments. \n\n**Matter** protocol, known earlier as Project CHIP, is an alternative to WoT. This is promoted by the Connectivity Standards Alliance (CSA), formerly called Zigbee Alliance. Matter is based on Thread, IPv6 and Dotdot. While Matter is not web friendly like WoT, it appears to have better industry traction. However, Matter devices that expose WoT TDs can talk to WoT devices. \n\nThere's a claim that WoT hasn't adequately addressed security, privacy and data sharing issues. This is especially important when IoT devices are directly exposed to the web. Devices are energy inefficient since they're always on. They're vulnerable to DoS attacks. \n\nWoT alone can't solve complex problems such as optimize workflows across many IoT devices or applications. Hypermedea and EnvGuard are two approaches to solve this. Larian et al. compared many WoT platforms. They noted that current IoT middleware and WoT resource discovery need to be improved. Legacy systems would require custom code to interface to the WoT architecture. \n\n## Milestones\n\nNov \n2007\n\nWilde uses the term \"Web of Things\" in a paper titled *Putting Things to REST*. He makes the case for treating a Thing (such as a sensor) as a web resource. It could then be accessed via RESTful calls rather than the more restrictive SOAP/WSDL API calls. Web concepts of URI, HTTP, HTML, XML and loosely coupling can be applied effectively towards Things. \n\n2011\n\nGuinard publishes his Doctor of Science dissertation in the field of Web of Things. In 2016, he co-authors (with Trifa) a book titled *Building the Web of Things*. Guinard sees WoT as\n\n> A refinement of the Internet of Things (IoT) by integrating smart things not only into the Internet (the network), but into the Web (the application layer).\n\nJul \n2013\n\n**Web of Things Community Group** is created. Subsequently in 2014, a workshop is held (June) and an Interest Group is formed (November). \n\nDec \n2016\n\nFollowing the first in-person meeting and a WoT Plugfest in 2015, the **W3C WoT Working Group** is formed. It's aim is to produce two normative specifications (Architecture, Thing Description) and two informative specifications (Scripting API, Binding Templates). \n\nJun \n2018\n\nFrom the Eclipse Foundation, the first commit on GitHub is made for the **Eclipse Thingweb** project. The project aims to provide Node.js components and tools for developers to build IoT systems that conform to W3C WoT standards. The project releases v0.5.0 in October. 
\n\nApr \n2020\n\nW3C publishes WoT Architecture and WoT Thing Description as separate **W3C Recommendation** documents. \n\nJul \n2022\n\nTzavaras et al. propose using **OpenAPI** descriptions and ontologies to bring Things closer to the world of Semantic Web. Thing Descriptions can be created in OpenAPI while also conforming to W3C WoT architecture. They argue that OpenAPI is already a mature standard. It provides a uniform way to interact with web services and Things. \n\nNov \n2022\n\nMarkus Reigl at Siemens comments that WoT will do for IoT what HTML did for the WWW in the 1990s. TD is not a mere concept. It leads to executable software code. He predicts IoT standardization will gain momentum. \n\nDec \n2023\n\nW3C publishes WoT Architecture 1.1 and WoT Thing Description 1.1 as W3C Recommendation documents. In addition, WoT Discovery is also published as a W3C Recommendation.","meta":{"title":"Web of Things","href":"web-of-things"}} {"text":"# TensorFlow\n\n## Summary\n\n\nTensorFlow is an open source software library for numerical computation using **data flow graphs**. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) that flow between them. This dataflow paradigm enables parallelism, distributed execution, optimal compilation and portability. \n\nThe typical use of TensorFlow is for Machine Learning (ML), particularly Deep Learning (DL) that uses large scale multi-layered neural networks. More specifically, it's best for classification, perception, understanding, discovery, prediction and creation. \n\nTensorFlow was originally developed by researchers and engineers working on the Google Brain team within Google's Machine Intelligence Research organization for ML/DL research. The system is general enough to be applicable in a wide variety of other domains as well.\n\n## Discussion\n\n### For which use cases is TensorFlow best suited?\n\nTensorFlow can be used in any domain where ML/DL can be employed. It can also be used in other forms of AI, including reinforcement learning and logistic regression. On mobile devices, applications include speech recognition, image recognition, object localization, gesture recognition, optical character recognition, translation, text classification, voice synthesis, and more. \n\nSome of the areas are: \n\n + **Voice/Speech Recognition**: For voice-based interfaces as popularized by Apple Siri, Amazon Alexa or Microsoft Cortana. For sentiment analysis in CRM. For flaw detection (noise analysis) in industrial systems.\n + **Text-Based Applications**: For sentiment analysis (CRM, Social Media), threat detection (Social Media, Government) and fraud detection (Insurance, Finance). For machine translation such as with Google Translate. For text summarization using sequence-to-sequence learning. For language detection. For automated email replies such as with Google SmartReply.\n + **Image Recognition**: For face recognition, image search, machine vision and photo clustering. For object classification and identification within larger images. For cancer detection in medical applications.\n + **Time-Series Analysis**: For forecasting. For customer recommendations. For risk detection, predictive analytics and resource planning.\n + **Video Detection**: For motion detection in gaming and security systems. 
For large-scale video understanding.\n\n### Could you name some applications where TensorFlow is being used?\n\nTensorFlow is being used by Google in the following areas: \n\n + RankBrain: Google search engine.\n + SmartReply: Deep LSTM model to automatically generate email responses.\n + Massively Multitask Networks for Drug Discovery: A deep neural network model for identifying promising drug candidates.\n + On-Device Computer Vision for OCR - On-device computer vision model to do optical character recognition to enable real-time translation.\n + Retinal imaging - Early detection of diabetic retinopathy using a deep neural network of 26 layers.\n + SyntaxNet - Built for Natural Language Understanding (NLU), this is based on TensorFlow and open sourced by Google in 2016.Outside Google, we mention some known real-world examples. Mozilla uses TensorFlow for speech recognition. UK supermarket Ocado uses it for route planning for its robots, demand forecasting, and product recommendations. A Japanese farmer has used it to classify cucumbers based on shape, length and level of distortion. As an experiment, Intel used TensorFlow on traffic videos for pedestrian detection. \n\nFurther examples were noted at the TensorFlow Developer Summit, 2018. \n\n\n### Which platforms and languages support TensorFlow?\n\nTensorFlow is available on 64-bit Linux, macOS, Windows and also on the mobile computing platforms like Android and iOS. Google has announced a software stack specifically for Android development called TensorFlow Lite. \n\nTensorFlow has official APIs available in the following languages: Python, JavaScript, C++, Java, Go, Swift. Python API is recommended. Bindings in other languages are available from the community: C#, Haskell, Julia, Ruby, Rust, Scala. There's also C++ API reference for TensorFlow Serving. R's `tensorflow` package provides access to the complete TensorFlow API from within R. \n\nNvidia's **TensorRT**, a Programmable Inference Accelerator, allows you to optimize your models for inference by lowering precision and thereby reducing latency. \n\n\n### How is TensorFlow different from other ML/DL platforms?\n\nTensorFlow is relatively painless to set up. With its growing community adoption, it offers a healthy ecosystem of updates, tutorials and example code. It can run on a variety of hardware. It's cross platform. It has APIs or bindings in many popular programming languages. It supports GPU acceleration. Through TensorBoard, you get an intuitive view of your computation pipeline. Keras, a DL library, can run on TensorFlow. However, it's been criticized for being more complex and slower than alternative frameworks. \n\nCreated in 2007, **Theano** is one of the first DL frameworks but it's been perceived as too low-level. Support for Theano is also ending. Written in Lua, **Torch** is meant for GPUs. Its Python port released by Facebook, called **PyTorch**, is popular for analyzing unstructured data. It's developer friendly and memory efficient. **Caffe2** does well for modeling convolutional neural networks. **Apache MXNet**, along with its simplified DL interface called **Gluon**, is supported by Amazon and Microsoft. Microsoft also has **Microsoft Cognitive Toolkit (CNTK)** that can handle large datasets. For Java and Scala programmers, there's **Deeplearning4j**. 
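To make the comparison concrete, here's a minimal sketch of the recommended Python API. It assumes a TensorFlow 2.x installation and uses arbitrary layer sizes and synthetic data purely for illustration.

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 2.x is installed

# Tiny synthetic dataset: 100 samples, 4 features, binary labels.
x = np.random.rand(100, 4).astype("float32")
y = (x.sum(axis=1) > 2.0).astype("float32")

# A small feed-forward network built with the high-level Keras API.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train briefly and evaluate; real work would use real data and more epochs.
model.fit(x, y, epochs=5, verbose=0)
loss, acc = model.evaluate(x, y, verbose=0)
print(f"training-set accuracy: {acc:.2f}")
```

The same few lines run unchanged on CPU or GPU, which is part of the "painless to set up" argument made above.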
\n\n\n### Which are the tools closely related to TensorFlow?\n\nThe following are closely associated with or variants of TensorFlow:\n\n + **TensorFlow Lite**: Enables low-latency inferences on mobile and embedded devices.\n + **TensorFlow Mobile**: To use TensorFlow from within iOS or Android mobile apps, where TensorFlow Lite cannot be used.\n + **TensorFlow Serving**: A high performance, open source serving system for machine learning models, designed for production environments and optimized for TensorFlow.\n + **TensorLayer**: Provides popular DL and RL modules that can be easily customized and assembled for tackling real-world machine learning problems.\n + **TensorFlow Hub**: A library for the publication, discovery, and consumption of reusable parts of machine learning models.\n + **TensorFlow Model Analysis**: A library for evaluating TensorFlow models.\n + **TensorFlow Debugger**: Allows us to view the internal structure and states of running TensorFlow graphs during training and inference.\n + **TensorFlow Playground**: A browser-based interface for beginners to tinker with neural networks. Written in TypeScript and D3.js. Doesn't actually use TensorFlow.\n + **TensorFlow.js**: Build and train models entirely in the browser or Node.js runtime.\n + **TensorBoard**: A suite of visualization tools that helps to understand, debug, and optimize TensorFlow programs.\n + **TensorFlow Transform**: A library for preprocessing data with TensorFlow.\n\n### What's the architecture of TensorFlow?\n\nTensorFlow can be deployed across platforms, details of which are abstracted away from higher layers. The core itself is implemented in C++ and exposes its features via APIs in many languages, with Python being the most recommended. \n\nAbove these language APIs is the **Layers** API that offers commonly used layers in deep learning models. To read data, the **Datasets** API is the recommended way and it creates input pipelines. With **Estimators**, we can create custom models or bring in models pre-made for common ML tasks. \n\n**XLA (Accelerated Linear Algebra)** is a domain-specific compiler for linear algebra that optimizes TensorFlow computations. It offers improvements in speed, memory usage, and portability on server and mobile platforms. \n\n\n### Could you explain how TensorFlow's data graph works?\n\nTensorFlow uses a **dataflow graph**, which is a common programming model for parallel computing. Graph nodes represent **operations** and edges represent data consumed or produced by the nodes. Edges are called **tensors** that carry data. In the example figure, we show five graph nodes: `a` and `b` are placeholders to accept inputs; `c`, `d` and `e` are simple arithmetic operations.\n\nIn TensorFlow 1.x, when a graph is created, tensors don't contain the results of operations. The graph is evaluated through **sessions**, which encapsulate the TensorFlow runtime. However, with **eager execution**, operations are evaluated immediately instead of building a graph for later execution. This is useful for debugging and iterating quickly on small models or data. \n\nFor ingesting data into the graph, **placeholders** can be used for the simplest cases but otherwise, **datasets** should be preferred. To train models, **layers** are used to modify values in the graph. \n\nTo simplify usage, the high-level API called **estimators** should be used. They encapsulate training, evaluation, prediction and export for serving. Estimators themselves are built on layers and build the graph for you. 
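The five-node example described above can be sketched in code. The snippet below is a minimal illustration that uses the `tf.compat.v1` API to show the 1.x graph-and-session style; the arithmetic assigned to `c`, `d` and `e` is an assumption, since the referenced figure isn't reproduced here.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # use the 1.x graph-and-session style

# Build the graph: a and b are placeholders; c, d, e are arithmetic ops.
a = tf.compat.v1.placeholder(tf.float32, name="a")
b = tf.compat.v1.placeholder(tf.float32, name="b")
c = tf.add(a, b, name="c")         # c = a + b
d = tf.subtract(b, 1.0, name="d")  # d = b - 1
e = tf.multiply(c, d, name="e")    # e = c * d

# Nothing has been computed yet; the tensors only describe the graph.
with tf.compat.v1.Session() as sess:
    result = sess.run(e, feed_dict={a: 2.0, b: 3.0})
    print(result)  # (2 + 3) * (3 - 1) = 10.0
```

In TensorFlow 2.x the same computation runs eagerly by default, or it can be wrapped in a `tf.function` to get a graph back for performance.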
\n\n\n### How is TensorFlow 2.0 different from TensorFlow 1.x?\n\nIt makes sense to write any new code in TensorFlow 2.0. Existing 1.x code can be migrated to 2.0. The recommended path is to move to TensorFlow 1.14 and then to 2.0. Compatibility module `tf.compat` should help. \n\nHere are the key changes in TensorFlow 2.0: \n\n + **API Cleanup**: Many APIs are removed or moved. For example, `absl-py` package replaces `tf.app`, `tf.flags`, and `tf.logging`. Main namespace `tf.*` is cleaned up by moving some items into subpackages such as `tf.math`. Examples of new modules are `tf.summary`, `tf.keras.metrics`, and `tf.keras.optimizers`.\n + **Eager Execution**: Like Python, eager execution is the default behaviour. Code executes in order, making `tf.control_dependencies()` redundant.\n + **No More Globals**: We need to keep track of variables. An untracked `tf.Variable` will get garbage collected.\n + **Functions, Not Sessions**: Functions are more familiar to developers. Although `session.run()` is gone, for efficiency and JIT compilation, the `tf.function()` decorator can be used. This automatically invokes *AutoGraph* to convert Python constructs into TensorFlow graph equivalents. Functions can be shared and reused.\n\n\n## Milestones\n\n2011\n\nGoogle Brain invents **DistBelief**, a framework to train large models for machine learning. DistBelief can make use of computing clusters of thousands of machines for accelerated training. The framework manages details of parallelism (multithreading, message passing), synchronization and communication. Compared to MapReduce, DistBelief is better at deep network training. Compared to GraphLab, DistBelief is better at structured graphs. \n\nNov \n2015\n\nUnder Apache 2.0 licensing, Google open sources TensorFlow, which is Google Brain's second-generation machine learning system. While other open source ML frameworks exist (Caffe, Theano, Torch), Google's competence in ML is supposedly 5-7 years ahead of the rest. However, Google doesn't open source the algorithms that run on TensorFlow, nor its advanced hardware infrastructure. \n\nApr \n2016\n\nVersion 0.8 of TensorFlow is released. It comes with distributed training support. Powered by gRPC, models can be trained on hundreds of machines in parallel. For example, the Inception image classification network was trained using 100 GPUs with an overall speedup of 56x compared to a single GPU. More generally, the system can map the dataflow graph onto heterogeneous devices (multi-core CPUs, general-purpose GPUs, mobile processors) in the available processes. \n\nMay \n2016\n\nGoogle announces that it's been using **Tensor Processing Unit (TPU)**, a custom ASIC built specifically for machine learning and tailored for TensorFlow. \n\nJun \n2016\n\nTensorFlow v0.9 is released with support for iOS and Raspberry Pi. Android support has been around from the beginning. \n\nFeb \n2017\n\nVersion 1.0 of TensorFlow is released. The API is in Python but there are also experimental APIs in Java and Go. \n\nNov \n2017\n\nGoogle releases a preview of **TensorFlow Lite** for mobile and embedded devices. This enables low-latency inferences for on-device ML models. In future, this should be preferred over **TensorFlow Mobile**. With TensorFlow 1.4, we can build models using high-level **Keras** API. Keras, which was previously in `tf.contrib.keras`, is now the core package `tf.keras`. \n\nSep \n2019\n\nTensorFlow 2.0 is released following an alpha release in June. It improves workflows for both production and experimentation. 
It promises better performance on GPU acceleration.","meta":{"title":"TensorFlow","href":"tensorflow"}} {"text":"# Wi-Fi Calling\n\n## Summary\n\n\nWi-Fi Calling is a technology that allows users to make or receive voice calls via a local Wi-Fi hotspot rather than via their mobile network operator's cellular radio connection. Voice calls are thus carried over the Internet, implying that Wi-Fi Calling relies on VoIP. However, unlike other VoIP services such as Skype or Viber, Wi-Fi Calling gives operators more control.\n\nWi-Fi Calling is possible only if the operator supports it, user's phone has the feature and user has enabled it. Once enabled, whether a voice call uses the cellular radio link or Wi-Fi link is almost transparent to the user. With cellular networks going all IP and offering VoLTE, Wi-Fi Calling has become practical and necessary in a competitive market.\n\nWi-Fi Calling is also called *Voice over Wi-Fi (VoWi-Fi)*.\n\n## Discussion\n\n### In what scenarios can Wi-Fi Calling be useful to have?\n\nIn places where cellular coverage is poor, such as in rural residences, concrete indoors, basements, or underground train stations, users will not be able to make or receive voice calls. In these scenarios, the presence of a local Wi-Fi network can serve as the \"last-mile\" connectivity to the user. Wi-Fi can therefore complement the cellular network in places where the latter's coverage is poor.\n\nFor example, a user could be having an active voice call via the cellular network and suddenly enters a building with poor coverage. Without Wi-Fi Calling, the call might get dropped. With Wi-Fi Calling, the call can be seamlessly handed over to the Wi-Fi network without even the user noticing it. Astute users may notice that their call is on Wi-Fi since smartphones may indicate this via an icon. More importantly, user intervention is not required to switch between cellular and Wi-Fi. Such seamless handover has become possible because cellular network's IP and packet switching: VoWi-Fi can be handed off to VoLTE, and vice versa. \n\n\n### Isn't Wi-Fi Calling the same as Skype, Viber or WhatsApp voice calls?\n\nMany smartphone apps allow voice (and even video) calls over the Internet. They are based on VoIP technology. We normally call them over-the-top (OTT) services since they merely use the phone's data connection and operators bill for data usage and not for the service itself. However, many of these systems require both parties to have the same app installed. Even when this constraint is removed, the service is controlled by the app provider.\n\nWi-Fi Calling gives cellular operators greater control. Driven by competition from OTT services, Wi-Fi Calling gives operators an opportunity to regain market share for voice calls. Voice packets are carried securely over IP to the operator's core network, thus allowing the operator to reuse many resources and procedures already in place for VoIP calls. Likewise, messages and video–*Video over LTE (ViLTE)*–can also be carried over Wi-Fi. \n\nFrom an architectural perspective, Wi-Fi Calling is served by operator's IP Multimedia Subsystem (IMS), whereas Skype calls are routed out of the operator's network into the Internet.\n\n\n### Isn't Wi-Fi Calling the same as Wi-Fi Offload?\n\nNot exactly. Wi-Fi Calling can be seen as a form of offload but they have different motivations. Wi-Fi Offload came about to ease network congestion and improve QoS for users in high-density areas. 
The offload is transparent for users whose devices are authenticated via EAP-SIM/AKA. \n\nWi-Fi Calling is in response to OTT services stealing revenue from mobile operators. Even when VoLTE was deployed by operators, voice calls couldn't be made over Wi-Fi, and OTT services were what users turned to when they had access to Wi-Fi. Wi-Fi Calling aims to overcome this problem. \n\n\n### What are the possible benefits of Wi-Fi Calling?\n\nFor subscribers, benefits include seamless connectivity and mobility between cellular and Wi-Fi. The selection is automatic and transparent to users. Data is protected using IPSec from mobile to core network, along with traditional SIM-based authentication. Users can potentially lower their monthly bills through service bundles and reduced roaming charges. Sometimes calling home from another country could be free depending on the subscribed plan and operator. \n\nMoreover, the user's phone will have a single call log (likewise, for message log). The default dialler can be used along with all saved contacts. Those receiving the call will see the caller's usual phone number. These are not possible with a third-party installed app.\n\nFor operators, Wi-Fi complements cellular coverage and capacity. T-Mobile was one of the early adopters because it had poor indoor coverage. Network performance is optimized by allowing bandwidth-intensive traffic to be offloaded to Wi-Fi when so required. All their IMS-based services can now be extended to Wi-Fi access rather than losing out to OTT app/service providers. \n\n\n### How does the network architecture change for Wi-Fi Calling?\n\nTwo network functions are involved: \n\n + **Evolved Packet Data Gateway (ePDG)**: Serves an untrusted Wi-Fi network. An IPSec tunnel protects data between mobile and ePDG, from where it goes to Packet Gateway (PGW). The mobile needs an update with an IPsec client. No changes are needed for the access point.\n + **Trusted Wireless Access Gateway (TWAG)**: Serves a trusted Wi-Fi network, which is typically under the operator's control. In this case, data between mobile and TWAG is encrypted at radio access and IPSec is not used. From TWAG, data goes to PGW. No changes are needed for the mobile but the Wi-Fi access point needs to be updated.If the network is not an Evolved Packet Core (EPC), then Tunnel Termination Gateway (TTG) is used instead of ePDG; Wireless Access Gateway (WAG) is used instead of TWAG; GGSN is used instead of PGW.\n\nThe untrusted mode is often used for Wi-Fi Calling, since public hotspots can be used without updating the access point. It's the operator who decides if a non-3GPP access can be considered trusted. \n\n\n### How is an end-user device authenticated for Voice over Wi-Fi service?\n\nWithin the network, *3GPP AAA Server* is used to authenticate end devices. Authentication is based on SIM and the usual network functions located in the Home Subscriber Server (HSS). 3GPP AAA Server does not maintain a separate database and relies on the HSS. \n\nVendors who sell AAA servers usually give the ability to do authentication of devices that don't have a SIM. For legacy networks, they can interface with HLR rather than HSS. They support AAA protocols such as RADIUS and Diameter. They support various EAP methods including TLS, PEAP and CHAP. \n\n\n### What are the 3GPP standards covering Wi-Fi Calling?\n\nDocuments that specify \"non-3GPP access\" are applicable to Wi-Fi Calling. 
The following are some relevant documents (non-exhaustive list): \n\n + TR 22.814: Location services\n + TR 22.912: Study into network selection requirements\n + TS 23.402: Architectural enhancements\n + TS 24.234: 3GPP-WLAN interworking: WLAN UE to network protocols, Stage 3\n + TS 24.302: Access to EPC, Stage 3\n + TS 29.273: 3GPP EPS AAA interfaces\n + TS 33.402: System Architecture Evolution: security aspects\n + TR 33.822: Security aspects for inter-access mobilityIn addition, GSMA has released a list of Permanent Reference Documents on VoWi-Fi. \n\nWi-Fi Calling is a technology that comes from the cellular world. From Wi-Fi perspective, there's no special IEEE standard that talks about Wi-Fi Calling.\n\n\n### Are there commercial services offering Wi-Fi Calling?\n\nIn June 2016, it was reported that all four major operators in the US support Wi-Fi Calling, with T-Mobile supporting as many as 38 different handsets. In November 2016, there were 40+ operators offering Wi-Fi Calling in 25+ countries. Moreover, even affordable phones or devices without SIMs are supporting Wi-Fi Calling. An operator will normally publish a list of handsets that are supported, which usually includes both Android and iPhone models. In September 2017, it was reported that AT&T has 23 phones and Verizon has 17 phones that support Wi-Fi Calling. \n\nWi-Fi Calling may involve regulatory approval based on the country's licensing framework. For example, India's TRAI commented in October 2017 that Wi-Fi Calling can be introduced since licensing allows telephony service to be provided independent of the radio access. \n\n\n### Within enterprises, how can IT teams plan for Wi-Fi Calling?\n\nSome access points have the ability to prioritize voice traffic and this can be repurposed for Wi-Fi Calling. Examples include Aerohive, Aruba, Cisco Aironet and Ruckus. Enterprises can also work with operators to deploy femto/pico cells or distributed antenna systems. \n\nA minimum of 1 Mbps may be needed to support Wi-Fi Calling although Republic Wireless in the US claims 80 kbps is enough to hold a call, although voice quality may suffer. In reality, voice needs just 12 kbps but can scale down to 4.75 kbps. \n\n\n### How will users be billed for Wi-Fi Calling?\n\nThis is completely operator dependent and based on subscriber's current plan. For example, Canada's Rogers says that calls and messages are deducted from airtime and messaging limits. Roaming charges may apply only for international roaming. Verizon Wireless states that a voice call will use about 1 MB/minute of data; a video call will use 6-8 MB/minute. Billing is linked to user's current plan. \n\n\n### What are some practical issues with Wi-Fi Calling?\n\nBack in 2014, T-Mobile had handoff problems but it was improved later. The service was also not offered by other operators and not supported by most handsets. Even when a handset supports it, operators may not offer the service if the handset has not been purchased from the operator. \n\nSince any Wi-Fi hotspot can be used, including public ones, security is a concern. For this reason, all data over Wi-Fi must be protected and subscriber must be authenticated by the cellular operator. Seamless call continuity across cellular and Wi-Fi could be a problem, particularly when firewalls and VPNs are involved. Some users have reported problems when using Wi-Fi behind corporate firewalls. Likewise, IT teams in enterprises may have the additional task of ensuring Wi-Fi coverage and managing traffic. 
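As a rough cross-check of the bandwidth figures quoted above (about 12 kbps for the voice codec, roughly 80 kbps to hold a call), the sketch below estimates the per-direction bit rate once RTP/UDP/IP and IPSec tunnel overheads are added. All overhead values are approximate assumptions for illustration, not numbers from any 3GPP or IEEE specification.

```python
# Rough estimate of Wi-Fi Calling bandwidth per direction.
# All overhead figures below are approximate assumptions for illustration.

CODEC_KBPS = 12.2          # e.g. a narrowband AMR voice codec
FRAMES_PER_SEC = 50        # one RTP packet every 20 ms

payload_bytes = (CODEC_KBPS * 1000 / 8) / FRAMES_PER_SEC   # voice bytes per packet
overhead_bytes = (
    12 +   # RTP header
    8 +    # UDP header
    20 +   # IPv4 header
    50     # assumed IPSec ESP tunnel overhead (varies with cipher and mode)
)

packet_bits = (payload_bytes + overhead_bytes) * 8
total_kbps = packet_bits * FRAMES_PER_SEC / 1000
print(f"codec {CODEC_KBPS} kbps -> roughly {total_kbps:.0f} kbps per direction")
# Protocol overhead roughly quadruples the raw codec rate in this sketch.
```

Wi-Fi MAC overhead, silence suppression and codec settings shift this figure further in practice, which is one reason quoted requirements range from tens of kbps up to 1 Mbps.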
\n\nSince Wi-Fi Calling often uses public hotspots, there's no QoS control. However, it's argued that in places where cellular has poor coverage, QoS cannot be guaranteed anyway. In addition, QoS on Wi-Fi can often be achieved implicitly because of excess capacity. With the coming of 802.11ac and the ability to prioritize traffic via Wi-Fi Multimedia (WMM), QoS is unlikely to be a problem. \n\n## Milestones\n\n2007\n\nT-Mobile in the US launches something called \"HotSpot @ Home\". This is based on a technology named *Unlicensed Mobile Access*, which is a commercial name of a 3GPP feature named *Generic Access Network*. GAN operates in the IP layer, which means that access can be via any protocol, not just Wi-Fi. UMA does not take off because of lack of handsets that support it. It also has other operational issues related to interference, handover and configuration setup. \n\nNov \n2011\n\nRepublic Wireless, a mobile virtual network operator (MVNO) in the US, rolls out \"Hybrid Calling\". Calls are primarily on Wi-Fi, with cellular used as a fallback option. Their General Manager, Brian Dally, states,\n\n> Every other mobile carrier talks about offloading to Wi-Fi, we talk about failing over to cellular.\n\nSep \n2014\n\nT-Mobile introduces Wi-Fi Calling in the US. This comes on the heels of the operator's rollout of VoLTE. Meanwhile, Apple iPhone starts supporting Wi-Fi Calling. \n\nApr \n2015\n\nSprint introduces Wi-Fi Calling in the US. EE does the same in the UK. Meanwhile, Google gets into telecom by launching *Project Fi*, which allows seamless switching between Wi-Fi and cellular. Google doesn't have its own cellular network but uses those of Sprint, T-Mobile, and US Cellular. \n\nOct \n2015\n\nIn the US, AT&T obtains regulatory approval to launch Wi-Fi Calling. By 2016, all four major US operators roll out Wi-Fi Calling nationwide. \n\nJun \n2017\n\nUMA, which may be called first generation Wi-Fi Calling, is decommissioned by T-Mobile in the US. \n\nNov \n2018\n\nResearchers discover security vulnerabilities with Wi-Fi Calling due to various reasons. They propose possible solutions to overcome these.","meta":{"title":"Wi-Fi Calling","href":"wi-fi-calling"}} {"text":"# Design Thinking\n\n## Summary\n\n\nDesign thinking is a problem-solving method used to create practical and creative solutions while addressing the needs of users. The process is extremely user-centric as it focuses on understanding the needs of users and ensuring that the solutions created address those needs. \n\nIt's an iterative process that favours ongoing experimentation until the right solution is found.\n\n## Discussion\n\n### Why is the design thinking process important?\n\nDesign thinking helps us to innovate, focus on the user, and ultimately design products that solve real user problems. \n\nThe design thinking process can be used in companies to reduce the time it takes to bring a product to the market. Design thinking can significantly reduce the amount of time spent on design and development. \n\nThe design thinking process increases return on investment as the products are user-centric, which helps increase user engagement and user retention. It's been seen that a more efficient workflow due to design thinking gave 75% savings in design and development time, a 50% reduction in defect rate, and a calculated ROI of more than 300%. 
\n\n\n### When and where should the design thinking process be used?\n\nThe design thinking process should especially be used when dealing with **human-centric challenges** and **complex challenges**. The design thinking process helps break down complex problems and experiment with multiple solutions. Design thinking can be applied in these contexts: human-centred innovation, problems affecting diverse groups, involving multiple systems, shifting markets and behaviours, complex societal challenges, problems that data can't solve, and more. \n\nA class of problems called **wicked problems** is where design thinking can help. Wicked problems are not easy to define and information about them is confusing. They have many stakeholders and complex interdependencies. \n\nOn the contrary, design thinking is perhaps an overkill for obvious problems, especially if they're not human centred. In such cases, traditional problem-solving methods may suffice. \n\n\n### What are the principles of the design thinking process?\n\nThere are some basic principles that guide us in applying design thinking: \n\n + **The human rule**: All design activity is social because all social innovation will bring us back to the \"human-centric point of view\".\n + **The ambiguity rule**: Ambiguity is inevitable, and it can't be removed or oversimplified. Experimenting at the limits of your knowledge and ability is crucial in being able to see things differently.\n + **The redesign rule**: While technology and social circumstances may change, basic human needs remain unchanged. So, every solution is essentially a redesign.\n + **The tangibility rule**: Making ideas tangible by creating prototypes allows designers to communicate them effectively.\n\n### What are the typical steps of a design thinking process?\n\nThe process involves five steps: \n\n + **Empathy**: Put yourself in the shoes of the user and look at the challenge from the point of view of the user. Refrain from making assumptions or suggesting answers. Suspend judgements throughout the process.\n + **Define**: Create a challenge statement based on the notes and thoughts you have gained from the empathizing step. Go back to the users and modify the challenge statement based on their inputs. Refer to the challenge statement multiple times throughout the design thinking process.\n + **Ideate**: Come up with ideas to solve the proposed challenge. Put down even the craziest ideas.\n + **Prototype**: Make physical representations of your ideas and solutions. Get an understanding of what the final product may look like, identify design flaws or constraints. Take feedback from users. Improve the prototype through iterations.\n + **Test**: Evaluate the prototype on well-defined criteria.Note that empathy and ideate are divergent steps whereas others are convergent. Divergent means expanding information with alternatives and solutions. Convergent is reducing information or filtering to a suitable solution. \n\n\n### What are the specific tools to practice design thinking?\n\nDesign thinking offers tools for each step of its five-step process. These are summarized in the above figure. These tools offer individuals and teams something concrete to effectively practice design thinking.\n\nNew Metrics has enumerated 14 different tools: immersion, visualization, brainstorming, empathy mapping, journey mapping, affinity mapping, rapid iteration, assumption testing, prototyping, design sprints, design criteria, finding the value proposition, and learning launch. 
They describe each tool briefly and note the benefits. More tools include focus groups, shadowing, concept maps, personas, positioning matrix, minimum viable product, volume model, wireframing, and storyboards. \n\nFor specific software tools, we note the following: \n\n + **Empathize**: Typeform, Zoom, Creatlr\n + **Define**: Smaply, Userforge, MakeMyPersona\n + **Ideate**: SessionLab, Stormboard, IdeaFlip\n + **Prototype**: Boords, Mockingbird, POP\n + **Test**: UserTesting, HotJar, PingPong\n + **Complete Process**: Sprintbase, InVision, Mural, Miro\n\n### What should I keep in mind when applying the design thinking process?\n\nEvery designer can use a variation of the design thinking process that suits them and customize it for each challenge. Although distinct steps are defined, design thinking is not a linear process. Rather, it's very much **iterative**. For example, during prototyping we may go back to redefine the problem statement or look for alternative ideas. Every step gives us new information that might help us improve on previous steps.\n\nAdopt Agile methodology. Design thinking is strong on ideation while Scrum is strong on implementation. Combine the two to make a powerful hybrid Agile approach. \n\nWhile the steps are clear, applying them correctly is not easy. To identify what annoys your clients, ask questions. Empathy means that you should relate to their problems. Open-ended questions will stimulate answers and help identify the problems correctly. \n\nAt the end of the process, as a designer, reflect on the way you've gone through the process. Identify areas of improvement or how you could have done things differently. Gather insights on the way you went through the design thinking process.\n\n\n### What do I do once the prototype is proven to work?\n\nThe prototype itself can be said to \"work\" only after we have submitted it to the clients for feedback. Use this feedback to improve the prototype. Make the actual product after incorporating all the feedback from the prototype. \n\nGathering feedback itself is an important activity. Present your solution to the client by describing the thought process by which the challenge was solved. Take notes from users and ensure that they are satisfied with the final product. It's important not to defend your product. It's more important to listen to what users have to say and make changes to improve the solution. \n\nPresent several versions of the prototype so that users can compare and express what they like and dislike. Consider using *I Like, I Wish, What If* method for gathering feedback. Get feedback from regular users as well as extreme users with highly opinionated views. Be flexible and improvise during testing sessions. Allow users to contribute ideas. \n\nRecognize that prototyping and testing is an iterative process. Be prepared to do this a few times. \n\n\n### How is design thinking different from user-centred design?\n\nOn the surface, both design thinking and user-centred design (UCD) are focused on the needs of users. They have similar processes and methods. They aim for creative or innovative solutions. To elicit greater empathy among designers, UCD has been more recently called human-centred design (HCD). \n\nHowever, design thinking goes beyond usability. It considers technical feasibility, economic viability, desirability, etc. without losing focus on user needs. While UCD is dominated by usability engineers and focuses on user interfaces, design thinking has a larger scope. 
Design thinking brings more multi-disciplinary perspectives that can suggest innovative solutions to complex problems. While it borrows from UCD methods, it goes beyond the design discipline. \n\nSome see UCD as a framework and design thinking as a methodology that can be applied within that framework. Others see these as complementary: a team can start with design thinking for initial exploration and later shift to UCD for prototyping and implementation. \n\n\n### What are some ways to get more ideas?\n\nDesign thinking is not about applying standard off-the-shelf solutions. It's about solving difficult problems that typically require creative approaches and innovation. The more ideas, the better. Use different techniques such as brainstorming, mind mapping, role plays, storyboarding, etc. \n\nInnovation is not automatic and needs to be fostered. We should create the right mindsets, an open and explorative culture. Designers should combine both logic and imagination. Teams should be cross-disciplinary and collaborative. Work environments must be conducive to innovation. \n\nWhen framing the problem, think about how the challenge can be solved in a certain place or scenario. For example, think about how one of your ideas would function differently in a setting such as a kitchen.\n\nWrite down even ideas that may not work. Further research and prototyping might help refine them. Moreover, during the prototyping and testing steps, current ideas can spark new ideas. \n\n## Milestones\n\nSep \n1962\n\n*The Conference on Systematic and Intuitive Methods in Engineering, Industrial Design, Architecture and Communications* is held in London. It explores design processes and new design methods. Although the birth of design methodology can be traced to Zwicky's *Morphological Method* (1948), it's this conference that recognizes design methodology as a field of academic study. \n\n1966\n\nThe term **Design Science** is introduced. This shows that the predominant approach is to find \"a single rationalised method, based on formal languages and theories\". \n\n1969\n\nHerbert A. Simon, a Nobel Prize laureate and cognitive scientist, mentions the design thinking process in his book *The Sciences of the Artificial* and further contributes ideas that are now known as the principles of design thinking. \n\n1970\n\nThis decade sees some resistance to the adoption of design methodology. Even early pioneers begin to dislike \"the continual attempt to fix the whole of life into a logical framework\". \n\n1973\n\nRittel publishes *The State of the Art in Design Methods*. He argues that the early approaches of the 1960s were simplistic, and a new generation of methodologies is beginning to emerge in the 1970s. Rather than optimize through systematic methods, the **second generation** is about finding a satisfactory solution in which designers partner with clients, customers and users. This approach is probably more relevant to architecture and planning than engineering and industrial design. \n\n1980\n\nThis decade sees the development of **engineering design methodology**. An example is the series of *International Conferences on Engineering Design*. The American Society of Mechanical Engineers also launches a series of conferences on Design Theory and Methodology. \n\nOct \n1982\n\nNigel Cross discusses the problem-solving nature of designers in his seminal paper *Designerly Ways of Knowing*. \n\n1987\n\nPeter Rowe, Director of Urban Design Programs at Harvard, publishes his book *Design Thinking*. 
This explores the underlying structure and focus of inquiry in design thinking. \n\n1991\n\nIDEO, an international design and consulting firm, brings design thinking to the mainstream by developing their own customer-friendly technology.","meta":{"title":"Design Thinking","href":"design-thinking"}} {"text":"# Single Page Application\n\n## Summary\n\n\nA web application broadly consists of two things: data (content) and control (structure, styling, behaviour). In traditional applications, these are spread across multiple pages of HTML, CSS and JS files. Each page is served in HTML with links to suitable CSS/JS files. A Single Page Application (SPA) brings a new programming paradigm for the web.\n\nWith SPA, we have a single HTML page for the entire application. This page along with necessary CSS and JS for the site are loaded when the page is first requested. Subsequently, as the user navigates the app, only relevant data is requested from the server. Other files are already available with the client. The page doesn't reload but the view and HTML DOM are updated. \n\nSPA (along with PWA) is the modern way to build web applications. SPA enhances user experience. There are frameworks that simplify building SPAs.\n\n## Discussion\n\n### Could you explain the single page application for a beginner?\n\nIn a typical multi-page application, each page is generated as HTML on the server and served to the client browser. Each page has its own URL that's used by the client to request that page.\n\nWhen a user navigates from one page to another, the entire page loads. However, it's common for all pages to share many UI components: sidebar, header, footer, navigation menu, login/logout UI, and more. It's therefore wasteful to download these common elements with every page request. In terms of user experience, moving from one page to another might be annoying. Current page might lose UI interaction as user waits for another page to load.\n\nIn SPA, there's a single URL. When a link is clicked, relevant content is downloaded and specific UI components are updated to render that content. User experience improves because user stays with and can interact with the current page while the new content is fetched from the server. When an update happens, there's no transition to another page. Parts of the current page are updated with new content. \n\n\n### How does the lifecycle of an SPA request/response compare against a traditional multi-page app?\n\nIn multi-page apps, each request is for a specific page or document. Server looks at the URL and serves the corresponding page or document. The entire app is really a collection of pages. \n\nIn SPA, the first client request loads the app and all its relevant assets. These could be HTML plus JS/CSS files. If the app is complex, this initial bundle of files could be large. Therefore, the first view of the app can take some time to appear. During this phase, a loader image may be shown to the user. \n\nSubsequently, when the user navigates within the SPA, an API is called to fetch new data. The server responds with only the data, typically in JSON format. The browser receives this data and updates the app view. User sees this new information without a page reload. The app stays in the same page. Only the view changes by updating some components of the page. \n\nSPAs are well-suited when we wish to build rich interactive UI with lots of client-side behaviour. 
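To illustrate this lifecycle from the server's point of view, here's a minimal, hypothetical sketch using Flask (chosen only as an example framework; any web stack works the same way): the first request returns the single HTML shell, and subsequent navigation hits a JSON endpoint so the client updates the view without a page reload. The route names, bundle path and data are invented.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# First request: serve the single HTML shell that bootstraps the SPA.
# The script tag would load the bundled JS/CSS produced by your build tool.
SHELL = """<!doctype html>
<html><body>
  <div id="app">Loading...</div>
  <script src="/static/bundle.js"></script>
</body></html>"""

@app.route("/")
def index():
    return SHELL

# Subsequent navigation: the client fetches only data, not a new page.
@app.route("/api/articles")
def articles():
    return jsonify([{"id": 1, "title": "First post"},
                    {"id": 2, "title": "Second post"}])

if __name__ == "__main__":
    app.run(debug=True)  # client-side JS renders the JSON into the page
```

The split is visible in the routes themselves: one endpoint returns markup exactly once, every other endpoint returns data.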
\n\n\n### Which are the different SPA architectures?\n\nApplication content might be stored in files or databases. It can be dynamic (news sites) or contextual (user specific). Therefore, the application has to transform this content into HTML so that users can read them in a web browser. This transformation process is called *rendering*. From this perspective, we note the following SPA architectures: \n\n + **Client-Side Rendering**: When browser requests the site, server responds quickly with a basic HTML page. This is linked to CSS/JS files. While these files are loading, user sees a loader image. Once data loads, JavaScript on the browser executes to complete the view and DOM. Slow client devices can spoil user experience.\n + **Server-Side Rendering**: HTML page is generated on the fly at the server. Users therefore see the content quickly without any loader image. At the browser, once events are attached to the DOM, app is ready for user interaction.\n + **Static Site Generators**: HTML pages are pre-generated and stored at the server. This means that the server can respond immediately. Better still, the page can be served by a CDN. This is the fastest approach. This approach is not suitable for dynamic content.\n\n### What are the benefits of an SPA?\n\nWith SPA, applications load faster and use less bandwidth. User experience is seamless, similar to a native app. Users don't have to watch slow page reloads. Developers can build feature-rich applications such as content-editing apps. On mobile devices, the experience is richer: clicks can be replaced with scrolling and amazing transitions. With browsers providing many developer tools, SPAs are also easy to debug on the client side. \n\nSPA optimizes bandwidth usage. Main resources (HTML/CSS/JS) are downloaded only once and reused. Subsequently, only data is downloaded. In addition, SPAs can cache data, thereby saving bandwidth. Caching also enables the application to work offline. \n\n\n### What are some criticisms or disadvantages of an SPA?\n\nAmong the disadvantages of SPA is **SEO**. SPA has a single URL and all routing happens via JavaScript. More recently, Google is able to crawl and index JS files. In general, use multi-page apps if SEO is important. Adopt SPA for SaaS platforms, social networks or closed communities where SEO doesn't matter. \n\nSPA breaks **browser navigation**. Browser's back button will go to previous page rather than previous app view. This can be overcome with the *HTML5 History API*. \n\nSPA could lead to **security issues**. Cross-site scripting attacks are possible. If developers are not careful, sensitive data could be part of the initial data download. Since all this data is not necessarily displayed on the UI, it can give developers a false sense of security. Developers could also unknowingly provide access to privileged functionality at the client side. \n\nSPA needs **client-side processing** and therefore may not work well on old browsers or slow devices. It won't work if users turn off JavaScript in their browsers. SPAs can be hard to maintain due to reliance on many third-party libraries. \n\nIt's worth reading Adam Silver's article on the many disadvantages of SPAs. \n\n\n### What are some best practices when converting a traditional app to SPA?\n\nAn SPA has to implement many things that come by default in traditional apps: browsing history, routing, deep linking to particular views. Therefore, **select a framework** that facilitates these. 
Select a framework with a good ecosystem and a modular structure. It must be flexible and performant for even complex UI designs. \n\nAfter the initial page loads, subsequent data is loaded by making API calls. Building an SPA implies a **well-defined API**. Involve both frontend and backend engineers while creating this API. In one approach, serve static files separately from the data that's handled by API endpoints. \n\nDefine clearly which parts of the UI are dynamic. This helps to organize project modules. Structure the project to enable **reusable components**. \n\nDue to its high reliance on JavaScript, invest in **build tools** for better dependency management. Webpack is a good choice. A build process can do code compilation (via Babel), file bundling and minification. \n\nWhen converting to an SPA, don't take an all-out approach. **Migrate incrementally**, perhaps one page at a time. \n\n\n### How do I test and measure performance of an SPA?\n\nTesting tools Selenium, Cypress and Puppeteer can also be used to measure app performance. WebPageTest is an online tool that's easier to use. Compared to multi-page apps, there's more effort to fill forms or navigate across views. \n\nApplication performance on the client side can be monitored via Navigation Timing API and Resource Timing API. But these fail to capture JavaScript execution times. To address this, User Timing API can be used. LinkedIn took this approach and improved the performance of their SPA by 20%. Among the techniques they used are lazy rendering (defer rendering outside viewport) and lazy data fetching. \n\nAt Holiday Extras, their app took 23 seconds to load on a good 3G connection. To reduce this, they adopted code splitting to defer loading of non-critical libraries. CSS was also split into three parts loaded at different stages: critical, body, onload. They moved from JS rendering to HTML rendering, and then started serving static HTML from Cloudfront CDN. They did real user monitoring (RUM). Among the tools they used were React, EJS, Webpack, and Speed Curve. \n\n\n### Could you mention some popular websites or web apps that are SPAs?\n\nFacebook, Google Maps, Gmail, Twitter, Google Drive, and GitHub are some examples of websites built as SPAs. \n\nFor example, in Gmail we can read mails, delete mails, compose and send mails without leaving the page. It's the same with Google Maps in which new locations are loaded and displayed in a seamless manner. In Grammarly, writers get suggestions and corrections as they compose their content. All this is powered by HTML5 and AJAX to build responsive apps. \n\nTrello is another example of SPA. The card layout, overlays, and user interactions are all done without any page reloads. \n\n\n### Which are some tools and frameworks to help me create an SPA?\n\nThe three main frameworks for building SPAs are React, Angular and Vue on the client side, and Node.js on the server side. All these are based on JavaScript. Other JavaScript frameworks include Meteor, Backbone, Ember, Polymer, Knockout and Aurelia. \n\nDevelopers can choose the right framework by comparing how each implements or supports UI, routing, components, data binding, usability, scalability, performance, and testability. For example, while Ember comes with routing, React doesn't; but many modules for React support routing. React supports reusable components. React supports one-way data binding whereas Angular supports two-way data binding. 
Ember and Meteor are opinionated whereas React and Angular are less so and more flexible. \n\n.NET/C# developers can consider using Blazor. Blazor can work both at client side and server side. It runs in a web browser due to WebAssembly. \n\nDesign tools support traditional multi-page sites. Adobe Experience Manager Sites is a tool that allows designers to create or edit SPAs. It supports drag-and-drop editing, out-of-the-box components and responsive web design. \n\n\n### How does an SPA differ from PWA?\n\nPWA uses standard web technologies to deliver mobile native app-like experience. They were meant to make responsive web apps feel more native on mobile platforms. PWA enables the app to work offline, push notifications and access device hardware. Unlike SPA, PWA use service workers, web app manifest and HTTPS. \n\nPWA load almost instantly since service workers run in a separate thread from the UI. SPAs need to pre-fetch assets at the start and therefore there's always an initial loading screen. SPAs can also use service workers but PWA do it better. In terms of accessibility, PWA are better than SPAs. SPAs might be suited for data-intensive sites that are not necessarily visually stunning. \n\nBut PWA are not so different from SPA. Both offer app-like user experience. Many PWA are built with the same frameworks that are used to build SPA. In fact, an app might initially be developed as an SPA. Later, additional features such as caching, manifest icons and loading screens could be added. These make an SPA more like a PWA. \n\n## Milestones\n\n1995\n\nIn the mid-1990s, rich interactions on web browsers become possible due to two different technologies: **Java Applets** and **Macromedia Flash**. Browsers are merely proxies for these technologies that have to be explicitly installed as browser plugins. With these technologies, all content is either loaded upfront or loaded on demand as the view changes. No page reloads are necessary. In this sense, these are ancestors of modern SPAs. \n\n2005\n\nJesse James Garrett publishes a paper titled *Ajax: A New Approach to Web Applications*. This describes a novel way to design web applications. AJAX, that expands to **Asynchronous Javascript + XML**, makes asynchronous requests in the background while the user continues to interact with the UI in the foreground. Once the server responds with XML (or JSON or any other format) data, the browser updates the view. AJAX uses the `XMLHTTPRequest` API. While this was around since the early 2000s, Garrett's paper popularizes this approach. \n\n2008\n\nWith the launch of GitHub, many JavaScript libraries and frameworks are invented and shared via GitHub. These become the building blocks on which true SPAs would later be built. \n\nSep \n2010\n\nTwitter releases a new version of its app with client-side rendering using JavaScript. Initial page load becomes slow. Due to diversity of client devices and browsers, user experience becomes inconsistent. In 2012, Twitter updates the app towards server-side rendering and defers all JS execution until the content is rendered on browser. They also organize the code as CommonJS modules and do lazy loading. These changes reduce the initial page load to a fifth. \n\nMay \n2016\n\nGoogle builds an app for its Google I/O event. Google engineers call this both an SPA and a PWA. With an App Engine backend, the app uses web components, Web Animations API, material design, Polymer and Firebase. During the event the app brings more user engagement than the native app. 
We might say that the app started as a SPA to create a PWA. In general, it's better to plan for a PWA from the outset rather than re-engineer an SPA at a later point. \n\nFeb \n2019\n\nGoogle engineers compare different SPA architectures in terms of performance. One of these is called **rehydration** which combines both server-side and client-side renderings. This has the drawback that content loads quickly but not immediately interactive, thus frustrating the user. \n\nMay \n2019\n\nWith the rise of edge computing, Section describes in a blog post how a Nuxt.js app (based on Vue.js) can be deployed at the edge. The app is housed within a Node.js module deployed at the edge. This SPA uses server-side rendering.","meta":{"title":"Single Page Application","href":"single-page-application"}} {"text":"# Document Object Model\n\n## Summary\n\n\nDocument Object Model (DOM) is the object-oriented representation of an HTML or XML document. It defines a platform-neutral programming interface for accessing various components of a webpage, so that JavaScript programs can change document structure, style, and content programmatically. \n\nIt generates a hierarchical model of the HTML or XML document in memory. Programmers can access/manipulate tags, IDs, classes, attributes and elements using commands or methods provided by the document object. It's a logical structure because DOM doesn't specify any relationship between objects. \n\nTypically you use DOM API when documents can fit into memory. For very large documents, streaming APIs such as Simple API for XML (SAX) may be used.\n\nThe W3C DOM and WHATWG DOM are standards implemented in most modern browsers. However, many browsers extend these standards. Web applications must keep in view the DOM standard used for maintaining interoperability across browsers.\n\n## Discussion\n\n### What are the different components of a DOM?\n\nPurpose of DOM is to mirror HTML/XML documents as an in-memory representation. It's composed of: \n\n + Set of objects/elements\n + Hierarchical structure to combine objects\n + An interface to access/modify objectsDOM lists the required interface objects, with supported methods and fields. DOM-compliant browsers are responsible to supply concrete implementation in a particular language (mostly JavaScript).\n\nSome HTML DOM objects, functions & attributes: \n\n + **Node** - Each tree node is a Node object. Different types of nodes inherit from the basic `Node` interface.\n + **Document** - Root of the DOM tree is the HTMLDocument node. Usually available directly from JavaScript as document or window. Gives access to properties associated with a webpage such as URL, stylesheets, title, or characterSet. The field `document.documentElement` represents the child node of type `HTMLElement` and corresponds to `` element.\n + **Attr** – An attribute in an `HTMLElement` object providing the ability to access and set an attribute. Has name and value fields.\n + **Text** — A leaf node containing text inside a markup element. If there is no markup inside, text is contained in a single `Text` object (only child of the element).\n\n### Can you show with an example how a web page gets converted into its DOM?\n\nThe simplest way to see the DOM generated for any webpage is using \"Inspect\" option within your browser menu. DOM element navigation window that opens allows you to scroll through the element tree on the page. You can also alter some element values and styles – text, font, colours. 
Event listeners associated with each elements are also listed. \n\nThe document is the root node of the DOM tree and offers many useful properties and methods. `document.getElementById(str)` gives you the element with `str` as id (or name). It returns a reference to the DOM tree node representing the desired element. Referring to the figure, `document.getElementById('div1')` will return the first \"div\" child node of the \"body\" node.\n\nWe can also see that \"html\" node has two direct children, \"head\" and \"body\". This example also shows three leaf nodes containing only text. These are one \"title\" and two \"p\" tags.\n\nCorresponding CSS and JavaScript files referenced from HTML code can also be accessed through DOM objects. \n\n\n### How is JavaScript used to manipulate the DOM of a web page?\n\nThe ability to manipulate webpages dynamically using client-side programming is the basic purpose behind defining a DOM. This is achieved using DHTML. DHTML is not a markup language but a technique to make dynamic web pages using client-side programming. For uniform cross-browser support of webpages, DHTML involves three aspects:\n\n + **JavaScript** - for scripting cross-browser compatible code\n + **CSS** - for controlling the style and presentation\n + **DOM** - for a uniform programming interface to access and manipulate the web page as a documentGoogle Chrome, Microsoft Edge, Mozilla Firefox and other browsers support DOM through standard JavaScript. JavaScript programming can be used to manipulate the HTML page rendering, the underlying DOM and the supporting CSS. List of some important DOM related JavaScript functionalities: \n\n + Select, Create, Update and Delete DOM Elements (reference by ID/Name)\n + Style setting of DOM Elements – color, font, size, etc\n + Get/set attributes of Elements\n + Navigating between DOM elements – child, parent, sibling nodes\n + Manipulating the BOM (Browser Object Model) to interact with the browser\n + Event listeners and propagation based on action triggers on DOM elements\n\n### Can DOM be applied to documents other than HTML or XML?\n\nBy definition, DOM is a language-neutral object interface. W3 clearly defines it as an API for valid HTML and well-formed XML documents. Therefore, a DOM can be defined for any XML compliant markup language. The WHATWG community manages the HTML DOM interface. Some Microsoft specific XML extensions define their own DOM. \n\n**Scalable Vector Graphics (SVG)** is an XML-based markup language for describing two-dimensional vector graphics. It defines its own DOM API. \n\n**XAML** is a declarative markup language promoted by Microsoft, used in UI creation of .NET Core apps. When represented as text, XAML files are XML files with `.xaml` extension. By treating XAML as a XAML node stream, XAML readers communicate with XAML writers and enable a program to view/alter the contents of a XAML node stream similar to the XML Document Object Model (DOM) and the `XmlReader` and `XmlWriter` classes. \n\n**Standard Generalized Markup Language (SGML)** is a standard for how to specify a document markup language or tag set. The DOM support for SGML documents is limited to parallel support for XML. While working with SGML documents, the DOM will ignore `IGNORE` marked sections and `RCDATA` sections. \n\n\n### What are the disadvantages of using DOM?\n\nThe biggest problem with DOM is that it is **memory intensive**. While using the DOM interface, the entire HTML/XML is parsed and a DOM tree (of all nodes) is generated and returned. 
Once parsed, the user can navigate the tree to access data in the document nodes. The DOM interface is easy and flexible to use but has the overhead of parsing the entire HTML/XML before you can start using it. So when the document size is large, the memory requirement is high and initial document loading time is also high. For small devices with limited on-board memory, DOM parsing might be an overhead. \n\nSAX (Simple API for XML) is another document parsing technique where the parser doesn’t read in the entire document. Events are triggered when the XML is being parsed. When it encounters a start tag, it triggers the tagStarted event. When the end of the tag is seen, it triggers tagEnded. So it's better in terms of memory efficiency for heavy applications. \n\nIn earlier days, the DOM standard was not uniformly adopted by various browsers, but that incompatibility issue doesn’t exist anymore. \n\n\n### What sort of DOM support is offered by React, Node.js and other JavaScript-based platforms?\n\nEverything in the DOM is a node – document/element/attribute nodes, etc. But if you have a list of 10 items on your webpage and, after some user interaction, need to update one of them, the entire DOM will be re-rendered. This is especially troublesome in Single Page Applications (SPAs).\n\nThe React web UI framework solves this by creating a **virtual DOM**, an in-memory data-structure cache used for selective rendering. Differences between the virtual DOM and the browser's displayed DOM are computed, and only the changed nodes are updated, a process React calls \"reconciliation\". \n\nThe Node.js runtime environment doesn't provide a DOM of its own, but DOM implementations are available for it when we need to work with HTML on the server side. The DOM `Node` interface is an abstract base class upon which many other DOM API objects are based, thus letting those object types be used similarly and often interchangeably. `jsdom` is a pure JavaScript implementation of the WHATWG DOM and HTML Standards for use with Node.js. \n\nIn the AngularJS scripting framework, there are directives for binding application data to attributes of HTML DOM elements. For example, the `ng-disabled` directive binds AngularJS application data to the disabled attribute of HTML elements. \n\n## Milestones\n\n1995\n\nBrendan Eich and Netscape design and release JavaScript, first supported in Netscape Navigator. In subsequent years, JavaScript becomes one of the core technologies of the World Wide Web, alongside HTML and CSS. All major web browsers have a dedicated JavaScript engine to execute it. In 1997, it's standardized as ECMAScript. \n\n1996\n\nJScript is introduced as the Microsoft dialect of the ECMAScript standard. Limited support for user-generated events and modifying HTML documents in the first generation of JavaScript & JScript is called \"DOM Level 0\" or **Legacy DOM**. No independent standard is developed for DOM Level 0, but it's partly described in the specifications for HTML 4. \n\n1997\n\nNetscape and Microsoft release version 4.0 of Netscape Navigator and Internet Explorer respectively. DHTML support is added to enable changes to a loaded HTML document. DHTML requires extensions to Legacy DOM implementations but the two browsers develop them in parallel and remain incompatible. These versions of the DOM later become known as the **Intermediate DOM**. \n\n1998\n\nThe W3C DOM Working Group drafts a standard DOM specification, known as **DOM Level 1**, which becomes a W3C Recommendation in 1998. This is after the standardization of ECMAScript. 
\n\n2001\n\nMicrosoft Internet Explorer version 6 comes out with support for W3C DOM. \n\n2004\n\nMozilla comes out with its *Design Principles for Web Application Technologies*, the consensus opinion of the Mozilla Foundation and Opera Software in the context of standards for Web Applications and Compound Documents. This defines browser code compatibility with HTML, CSS, DOM, and JavaScript. \n\n2005\n\nLarge parts of W3C DOM are well-supported by all the common ECMAScript-enabled browsers including Safari and Gecko-based browsers (like Mozilla, Firefox, SeaMonkey and Camino). \n\n2020\n\nThe HTML DOM living standard is a constantly updated standard maintained by WHATWG.org, with latest updates happening continuously.","meta":{"title":"Document Object Model","href":"document-object-model"}} {"text":"# Open Data\n\n## Summary\n\n\nThe idea of open data is to share data freely with others. Openness also allows others to modify, reuse or redistribute the data. Openness has two facets: legal and technical. **Legal openness** is about applying a suitable open license to the data. **Technical openness** is about removing technical barriers and making it easy to access, read, store or process the data. \n\nBy opening up data, others can unlock value in the form of information and knowledge. For example, in the travel sector, locations, images, prices and reviews are data that can help us plan a holiday. Information is data within a given context. Knowledge personalizes information and helps us make decisions. \n\nMany organizations worldwide promote open data. Open licenses, datasets and tools are available. Governments are releasing open data that citizens can use.\n\n## Discussion\n\n### Could you describe open data?\n\nDatabase is really about structure and organization of data, also known as the database model. This is generally covered by copyright. In the context of open data, we're more concerned with the contents of the database, which we simply call data. \n\nData can mean a single item or an entire collection. Particularly for factual databases, a collection can be protected but not individual items. For example, a protected collection may be about the melting point of various substances but no one can be prevented from stating a particular item, such as element E melts at temperature T. \n\nTo understand the meaning of openness, we can refer to the *Open Definition* that states, \"Open means anyone can freely access, use, modify, and share for any purpose (subject, at most, to requirements that preserve provenance and openness).\" \n\nOpen data should be accessible at a reasonable cost if not free. It should be available in bulk. There shouldn't be restrictions on who or for what purpose they wish to use the data. Tools to use the data should be freely available and not proprietary. Data shouldn't be locked up behind passwords or firewalls. \n\n\n### Where could open data be useful?\n\nOpen data can make governments more transparent. Citizens will have confidence that their money is being spent as budgeted or in implementing the right policies. For example, one activist noted that Canadian citizens used open data to save their government $3.2bn in fraudulent charitable donations. In Brazil, DataViva provides millions of interactive visualizations based on open government data. \n\nNew business opportunities are possible. For example, once Transport for London opened their data, developers used it to build apps. 
Likewise, Thomson Reuters uses open data to provide better services to its customers. OpenStreetMap and Copernicus are examples that enable new GIS applications. \n\nIn research, open data is a part of what is called Open Science. It leads to reproducible research and faster advancements. Open data also enables researchers to revalidate their own findings. \n\nOpen data can be used to protect the environment. Some apps that do this include mWater, Save the Rain and Ecofacts. \n\n\n### Could you mention some sources of open data?\n\nThere are many curated lists on the web for open data. From some of these, we mention a few useful sources of open data by category: \n\n + **General**: DBpedia, Datasets Subreddit, Kaggle, FiveThirtyEight, Microsoft Marco\n + **Government**: Data.gov, Data.gov.uk, European Union Open Data Portal\n + **Economy**: World Bank Open Data, Global Financial Data, International Monetary Fund\n + **Business**: OpenCorporates, Yellowpages, EU-Startups, Glassdoor\n + **Health & Science**: World Health Organization, HealthData.gov, NHS Digital, Open Science Data Cloud, NASA Earth Data, LondonAir\n + **Research**: Google Scholar, Pew Research Center, OpenLibrary Data Dumps, CERN Open Data\n + **Environment**: Climate Data Online, IEA Atlas of Energy\n\n### Which are some organizations working with or for open data?\n\nWe mention a few organizations:\n\n + Open Data Institute: Works with companies and governments to build an open, trustworthy data ecosystem, where people can make better decisions using data and manage any harmful impacts.\n + Open Data Commons: Provides a set of legal tools to publish and use open data. They've published open licenses applicable to data.\n + Open Knowledge Foundation: A worldwide network of people passionate about openness, using advocacy, technology and training to unlock information and enable people to work with it to create and share knowledge. It was briefly called Open Knowledge International. The Open Definition is one of their projects.\n + Open Data Charter: A collaboration between governments and organisations working to open up data based on a shared set of principles.\n\n### Why and what aspects of open data should we standardize?\n\nData is more valuable if we can combine two different datasets to obtain new insights. **Interoperability** is the key. Given diverse systems, tools and data formats, interoperability can be almost impossible. \n\nWithout standards, it becomes more difficult for us to publish, access or share data effectively. Standards also make it easier to repeat processes, compare results and reach a shared understanding. Moreover, we need **open standards** that are available to public and are defined through collaboration and consensus. \n\nOpen standards should define a common data model. The data pipeline should be streamlined. It should be easy to combine data. It should promote common understanding. \n\nOpen Data Institute's page on standards is a useful resource to learn more. A checklist for selecting a suitable standard looks at the licensing, project needs, maintenance of the standard, and guidance on usage. \n\n\n### What sort of licensing should I adopt when opening my data?\n\nReleasing your data without a license creates uncertainty. In some jurisdictions, data lacking explicit permission may be protected by intellectual property rights. It's therefore better to attach a license. 
\n\nAspects that define a license include public domain, attribution, share-alike, non-commercial, database only and no derivatives. \n\nThere are a number of licenses that conform to the Open Definition. *Creative Commons CC0* is an example. It releases the data into the public domain. Anyone can copy, modify and distribute, even for commercial purposes, without asking permission. A similar license is *Open Data Commons Public Domain Dedication and Licence (PDDL)*. Other conformant but less reusable licenses are data licenses from governments (Germany, UK, Canada, Taiwan). UK's Open Government Licence is an example.\n\nTwo other licenses worth looking at come from the Linux Foundation: CDLA-Sharing-1.0 and CDLA-Permissive-1.0, where CDLA refers to Community Data License Agreement. \n\n*Open Data Commons Open Database License (ODC-ODbL)* is seen as a \"viral license\". Any changes you make to the dataset must be released under the same license. \n\n\n### What are some challenges with open data?\n\nPublishers continue to use customized licenses. This makes it hard to reuse data. It makes licenses incompatible across datasets. Instead, they should use standardized open licenses. Ambivalent or redundant clauses cause confusion. Licenses often are not clear about the data to which they apply. Data is often not linked to legal terms. \n\nData is hard to find. Sometimes its location changes. A platform such as CKAN might help. \n\nData could be misinterpreted, which could result in a wrong decision. This creates a fear of accountability and prevents producers from opening their datasets. \n\nQuality of data is another concern. Data should be machine-readable and in raw form. Publishing data in HTML or PDF is not research friendly. For context and interpretation, metadata should be shared. \n\nRaw data is good but also requires advanced technical skills and domain knowledge. We therefore need to build better data literacy. AI algorithms are being used to analyse data but many are black-box models. They also promote data centralization and control, which are at odds with the open data movement. \n\nData infrastructure must be of consistent quality. Data can also be biased by gender or against minority groups. \n\n## Milestones\n\n1942\n\nThe concept of open data starts with Robert King Merton, one of the fathers of the sociology of science. He explains how freely sharing scientific research and results can stimulate growth and innovation. \n\n1994\n\nIn the US, the Government Printing Office (GPO) goes online and opens a few government-related documents. This is an early example of **open government data** at a time when the Internet was becoming popular. Historically and legally, the idea can be traced back to the Freedom of Information Act of 1966. \n\n2005\n\nThe Open Knowledge Foundation creates the **Open Definition**. This is based on established principles from the open source movement for software. This definition is later translated into more than 30 languages. In November 2015, Open Definition 2.1 is released. \n\nFeb \n2006\n\nAt a TED talk, Hans Rosling presents compelling visuals of global trends in health and economics. Using publicly available datasets from different sources, he debunks myths about the developing world. He makes a case for governments and organizations to open up their data. Data must be enabled using design tools. His mantra is to animate and liberate data from closed databases. Data must also be searchable. 
\n\nDec \n2007\n\nThirty individuals meet at Sebastopol, California to discuss open public data. Many of them come from the culture of free and open source software movement. This event can be seen as a convergence of many past US and European efforts in open data. They identify **eight principles**: complete, primary, timely, accessible, machine processable, non-discriminatory, non-proprietary, and license-free. \n\nFeb \n2009\n\nAt a TED talk, Tim Berners-Lee gets people to chant \"Raw data, now.\" He makes reference to Rosling's talk of 2006 and adds that it's important to link data from different sources. He calls this **Linked Data**. He mentions *DBpedia*, which takes data from Wikipedia and connects them up. \n\nMay \n2009\n\nThe US government launches **Data.gov** with 47 datasets. About five years later, it has about 100,000 datasets. \n\n2010\n\nMany governments attempt to open their data to public but there are concerns about privacy. It's only in 2015 that privacy principles become an essential part of discussions on open data. It's become clear that providing raw data is not always possible. Governments must balance potential benefits to public against privacy rights of individuals. \n\n2012\n\nGuillermo Moncecchi of Open Knowledge International writes that beyond transparency, open data is also about building a **public data infrastructure**. While the focus during 2007-2009 was on data transparency, during 2010-2016 focus shifts to public infrastructure. Consider data about street light locations. Citizens can use this data to solve problems on their own. Metrics change from number of published datasets to APIs and reuse. \n\nJun \n2017\n\nOpen Knowledge Foundation publishes the **Global Open Data Index (GODI)**. This shows that only 38% of government data is really open. This is based on Open Definition 2.1. A later update available online shows that only 166 of 1410 datasets (11%) are open. The report gives a number of recommendations to governments and policy makers to make their data more open. \n\nJan \n2018\n\nOpen Data Charter replaces their earlier ambitious call of \"open by default\" with a more practical \"publish with purpose\". Rather than getting governments to open up as much data as possible quickly, the intent is to get them to take small steps in that direction. Governments can open datasets with a clear view of tangible benefits to citizens. \n\nFeb \n2018\n\nUsing open data exposed by Here.com via an API, Tjukanov uses visualization to show average traffic congestion in many of the world's largest cities. Interesting patterns emerge, plotted within a radius of 50km from the city center. He uses QGIS, an open source GIS platform. This is an example of the value that open data plus open source can unlock.\n\nJan \n2020\n\nUsing open data collated by the Institute for Government from data.gov.uk, Peter Cook produces interesting **organograms**, which are visualizations of organization structures. He does this for many UK government departments. Clicking on individual data points shows name, job title, salary range and other details. This sort of data has been available (though patchy) since March 2011.","meta":{"title":"Open Data","href":"open-data"}} {"text":"# Remote Pair Programming\n\n## Summary\n\n\nPair programming is a practice in which developers work in pairs. When the pair is not sitting next to each other, we call it **Remote Pair Programming (RPP)**. 
Remote pair programming has the same benefits as pair programming: higher quality software, fewer defects, knowledge sharing, team cohesion, faster delivery, and more. \n\nThe nature of remote working demands the use of suitable tools. Selecting the right set of tools can enable teams to get the best out of pairing. There are plenty of tools in the market today (Sep 2021) for RPP.\n\nWith the arrival of COVID-19 in 2020 and the increased adoption of a work-from-home culture, RPP has become all the more essential. \n\nRPP is also called *Distributed Pair Programming (DPP)*, a term that appears to be popular among researchers.\n\n## Discussion\n\n### How is remote pair programming different from pair programming?\n\nIn pair programming, the pair can point to code with their fingers. In RPP, this is done using the keyboard cursor or the mouse pointer. In some tools, both developers may be able to move their own cursors or mouse pointers. In other tools, there's only one cursor and mouse pointer. The person controlling it becomes the driver. \n\nIn pair programming, both developers are looking at the same monitor. RPP allows more flexibility. The navigator could be checking out other parts of the code while the driver is typing. \n\nIn pair programming, it's possible to share physical artefacts such as printed documents or a whiteboard. In RPP, everything must happen online. It's therefore essential to have tools to perform these activities online. \n\nRPP can be used for diverse use cases beyond coding: mentoring, hiring, live tutorials, etc. These are use cases where participants are likely to be at remote locations.\n\nWe should clarify that sharing code for reviews, making pull requests or using version control isn't RPP. These are asynchronous workflows. RPP happens only when both developers are participating concurrently within the same workspace. \n\n\n### What should I look for when selecting a tool for RPP?\n\nHere are some factors to consider: \n\n + **Installation**: Some tools are hosted in the cloud and are often called *Cloud IDEs*. No local installation is required other than a web browser. Other tools require local installation. Better still are *plugins* that extend popular editors/IDEs with collaborative editing capability.\n + **Cross-Platform**: Some tools may run on Windows but not on Mac or Linux. Plugins are better in this regard. They extend familiar software already available for various platforms.\n + **Editing**: Simultaneous editing by both developers. Copy/paste between systems. One developer can navigate to other parts of the codebase or other applications while the other is editing. Editor/IDE agnostic.\n + **Multimodal**: Bidirectional live audio/video streaming. Chat window. Integrated with editor/IDE.\n + **Usability**: Uncluttered layouts. Awareness about what's shared and the current editing context. Automatic turn-off of notifications, thus providing a distraction-free environment.\n + **Performance**: Minimal lag. Supports high video resolutions. Fall back to lower resolutions on low-bandwidth connections. Visibility into performance metrics.\n + **Integration**: Connect to code repositories (GitHub, GitLab, Bitbucket) and other tools (Jira, Trello).\n + **Others**: Security, cost and customer support are also important. Open source may be important for some teams.\n\n### What tools are available for RPP?\n\nThere are dozens of tools out there. One way to classify tools is as follows: \n\n + **Screen Show**: Only screensharing. 
Before switching roles, we need to push code changes to a shared repository. Videoconferencing tools such as Skype, Google Hangouts, Google Meet and Slack Calls are examples.\n + **Screen Control/Share**: Temporary remote control of your partner's system. Interactions can lag. Zoom, VNC, Join.me, CoScreen, Tuple, TeamViewer and tmux are examples.\n + **Application Share**: True collaborative editing and hence most preferred. Each environment can be personalized. Developers can use different editors or IDEs. A developer can navigate within the codebase without interrupting the partner. Developers can even edit different parts of the code in parallel. Live Share (with Visual Studio and VS Code), CodeTogether, GitLive, Floobits, Drovio, Atom Teletype, and AWS Cloud9 are examples.\n\nAmong the Cloud IDEs are AWS Cloud9, Codenvy, and Replit. For privacy, Codenvy has a self-hosted option. \n\nAmong the plugins are Live Share, Remote Collab, Atom Teletype, CodeTogether, GitLive and Floobits. CodeTogether supports VS Code, IntelliJ and Eclipse. Guests can join from an IDE or browser. GitLive supports VS Code, IntelliJ and Android Studio. Floobits supports Emacs, Sublime Text, Neovim, IntelliJ and Atom. \n\n\n### How do I select the right tool for RPP?\n\nTeletype for Atom suits pair programming's driver-navigator style. Live Share allows more open-ended collaboration, which perhaps is not what you want. However, Live Share might suit the ping-pong style of pairing. In strong-style pairing, we want the navigator to guide the driver step by step. This is best done with only screensharing. \n\nIn Linux, tmux and tmate are popular. These work even on low-bandwidth connections. However, they may not be the best choice for beginners who find it hard to learn the commands. \n\nCloud IDEs may not be optimal for all-day coding. They're also too dependent on the network connection. Plugins such as CodeTogether track changes within IDEs and are therefore not demanding on bandwidth (unlike screensharing). The CodeTogether team also found that allowing multiple developers to edit code is hard to follow. Their design enforces a master controller who can give or take temporary control to others. \n\n\n### Could you share some tips for RPP?\n\nRPP is easiest if the developers have met before in person. If not, have icebreakers at the start of the project. Informal chats or online games can also help in building rapport. Allow for frequent breaks since remote pairs get tired more easily. When remote pairing across time zones, select a time convenient for both. \n\nStart every session with a clear agenda. Tackle one task at a time. Some companies such as GitLab have an internal app to help developers pair up. \n\nBefore pairing for long hours, get the basics right. Use a good headset that mitigates external noise and echo. Use a large monitor or even two monitors. Use a comfortable desk and chair. \n\nMake use of non-verbal cues. Lean forward when you wish to speak. Gesture to draw attention. \n\nFrequently check with your partner about audio/video quality and quickly take corrective action. An audio splitter with two connected headsets allows another colleague to easily join in on any conversation. Always have the audio and video on, even during breaks. This gives the feeling of being connected to the \"office vibe\". \n\n## Milestones\n\n1998\n\n**Extreme Programming (XP)**, as practiced at Chrysler, starts getting wider attention. Pair programming is one of the core practices within XP. \n\n2001\n\nIn the *Agile Manifesto*, Beck et al. 
point out that face-to-face interactions are most effective. In this light, remote pairing would seem destined to fail. This motivates research into adapting XP for distributed teams. Schümmer and Schümmer present *TUKAN* as a \"synchronous distributed team programming environment\". TUKAN includes chat, audio, multi-user code editing and integration with version management. They use the term **distributed pair programming**. \n\n2004\n\nHanks publishes results of an empirical study on RPP. He uses a screensharing application called Virtual Network Computing (VNC). He modifies VNC to support a second cursor that the navigator can use as a pointer. Unlike in earlier tools, the second cursor appears only when required. \n\nJul \n2015\n\nIn a literature survey, da Silva Estácio and Prikladnicki find that most studies have been from a teaching perspective. Only a few studies talk about RPP in a real-world software development setting. They also survey RPP tools. They make recommendations about tool features: shared code repository, support for specific pairing roles, role switching, gesturing, etc. \n\nOct \n2015\n\nTsompanoudi et al. modify a previously available Eclipse plugin and use it in an educational setting to help students learn programming collaboratively. Tasks are streamlined using **collaboration scripts** that were studied by other researchers as early as 2007. \n\nJan \n2019\n\n**Tuple** launches its alpha release. It claims to be the \"best pair programming tool on macOS\". The focus of Tuple is performance: low CPU usage, low latency, high-resolution video and no distracting UI components (sometimes called UI chrome). It also exposes performance graphs to the user so that they can take corrective action. \n\n2020\n\nThe COVID-19 pandemic forces teams to work remotely. Teams used to pair programming are now required to do the same remotely. The situation forces teams to better understand the dynamics of pairing remotely. It's also expected that the arrival of 5G will make RPP more reliable and sophisticated. \n\nMar \n2021\n\nPackt Publishing Limited publishes Bolboacă's book titled *Practical Remote Pair Programming*.","meta":{"title":"Remote Pair Programming","href":"remote-pair-programming"}} {"text":"# Continuous Integration\n\n## Summary\n\n\nContinuous Integration (CI) is the practice of routinely integrating code changes into the main branch of a repository, and testing the changes, as early and often as possible. Ideally, developers will integrate their code daily, if not multiple times a day. \n\nMartin Fowler, Chief Scientist at ThoughtWorks, has stated that, \n\n> Continuous Integration doesn't get rid of bugs, but it does make them dramatically easier to find and remove.\n\n## Discussion\n\n### Why do we need Continuous Integration?\n\nIn the past, developers on a team might work in isolation for an extended period of time and only merge their changes to the master branch once their work was completed. This made merging code changes difficult and time consuming, and also resulted in bugs accumulating for a long time without correction. These factors made it harder to deliver updates to customers quickly. \n\nWith CI, each code change can potentially trigger a build-and-test process. Testing becomes an essential part of the build process. Bugs, if any, are highlighted early before they get a chance to grow or become hard to trace. Essentially, CI breaks down the development process into smaller pieces while also employing a repeatable process of build and test. 
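As a rough sketch of that repeatable build-and-test loop (not the workflow of any particular CI product), the job a CI service runs on every commit boils down to something like the following; the repository URL and commands are placeholders for illustration:

```python
import subprocess
import sys

# Placeholder steps; a real CI service reads these from its own configuration format.
STEPS = [
    ["git", "clone", "https://example.com/team/app.git", "workspace"],    # fresh checkout
    ["python", "-m", "pip", "install", "-r", "workspace/requirements.txt"],
    ["python", "-m", "pytest", "workspace/tests"],                        # build-and-test stage
]

for cmd in STEPS:
    print("running:", " ".join(cmd))
    if subprocess.call(cmd) != 0:
        print("build failed at:", " ".join(cmd))
        sys.exit(1)   # a non-zero exit marks the commit as broken

print("build passed: the change is safe to integrate")
```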
\n\n\n### What are the benefits of Continuous Integration?\n\nAmong the many benefits of CI are the following: \n\n + Shorter integration cycles\n + Better visibility of what others are doing, leading to greater communication\n + Issues are caught and resolved early\n + Better use of time for development rather than debugging\n + Early knowledge that your code works along with others' changes\n + Ultimately, enabling the team to release software faster\n\n### How does Continuous Integration work?\n\nWith continuous integration, developers frequently commit to a shared repository using a version control system such as Git. Prior to each commit, developers may choose to run local unit tests on their code as an extra verification layer before integrating. A continuous integration service automatically builds and runs unit tests on the new code changes to immediately surface any errors. \n\nContinuous integration refers to the build and unit testing stages of the software release process. Every revision that is committed triggers an automated build and test. \n\n\n### What are some CI tools and how to choose among them?\n\nThere are many solutions out there. Some of them include Codeship, TravisCI, SemaphoreCI, CircleCI, Jenkins, Bamboo, and TeamCity.\n\nSome factors to consider when selecting a tool include price (commercial or free), features, ease of use, integration (with other tools and frameworks), support (commercial or community) and more. \n\n\n### What are the challenges to Continuous Integration?\n\nTo improve and perfect your CI, you need to overcome three major challenges: \n\n + **No Standalone Fresh Checkout**: The single biggest hurdle to a smooth CI build is ensuring that your application's tests can be run from a fresh code checkout (e.g. a git clone). This means that all of your app's dependencies are either included in the checkout, or they're specified and can be pulled in by a script in the checkout.\n + **Unreliable Tests**: Now that your app can be set up with a single command, you've built a foundation for effective CI. The next challenge is to ensure that your test results are repeatable and reliable. Intermittent or \"expected\" failures that persist for too long are pernicious. Once the habit of treating failures as intermittent takes hold, legitimate errors often get ignored.\n + **Obscure Build Results**: Once you've produced a reliable test suite, the next challenge is to get results quickly, take appropriate action on them, and distribute information to the people who matter.\n\n### How are Continuous Integration, Continuous Delivery, and Continuous Deployment practices related to one another?\n\nContinuous integration leads to both continuous delivery and continuous deployment. Continuous deployment is like continuous delivery, except that releases happen automatically. \n\nMore specifically, continuous integration requires automated testing to ensure that nothing is broken when new commits are made. Continuous delivery takes this to the next step by automating the release process so that your customers get regular fixes and upgrades.\n\nContinuous delivery still requires manual intervention to initiate the final deployment to production. Continuous deployment automates this last step too. There's no \"Release Day\" as such. Customers see a steady stream of improvements and this enables early feedback. Since releases are small, they're less risky and easier to fix. 
Jez Humble, co-author of the book *Continuous Delivery*, says this about Continuous Deployment: \n\n> Essentially, it is the practice of releasing every good build to users.\n\n## Milestones\n\n1991\n\nGrady Booch first proposes the term **Continuous Integration (CI)**. In 1994, he uses the term in his book *Object-Oriented Analysis and Design with Applications*. \n\n1997\n\nKent Beck and Ron Jeffries invent **Extreme Programming (XP)** while on the Chrysler Comprehensive Compensation System project. Beck publishes about continuous integration in 1998. Extreme Programming embraces the practice of CI. \n\n2001\n\n**CruiseControl**, one of the first open-source CI tools, is released.","meta":{"title":"Continuous Integration","href":"continuous-integration"}} {"text":"# Grammar and Spell Checker\n\n## Summary\n\n\nA well-written article with correct grammar, punctuation and spelling, along with an appropriate tone and style to match the needs of the intended reader or community, is always important. Software tools offer algorithm-based solutions for grammar and spell checking and correction.\n\nClassical rule-based approaches employ a dictionary of words along with a set of rules. Recent neural network-based approaches learn from millions of published articles and offer suggestions for appropriate choice of words and ways to phrase parts of sentences to adjust the tone, style and semantics of the sentence. They can alter suggestions based on the publication domain of the article, like academic, news, etc.\n\nGrammar and spelling correction are tasks that belong to a more general NLP process called **lexical disambiguation**.\n\n## Discussion\n\n### What is a software grammar and spell checker, and what are its general tasks and uses?\n\nA grammar and spell checker is a software tool that checks a written text for grammatical mistakes, appropriate punctuation, misspellings, and issues related to sentence structure. More recently, neural network-based tools also evaluate tone, style, and semantics to ensure that the writing is flawless.\n\nOften such tools offer a visual indication by highlighting or underlining spelling and grammar errors in different colors (often red for spelling and blue for grammar). Upon hovering or clicking on the highlighted parts, they offer appropriately ranked suggestions to correct those errors. Certain tools offer a suggested corrected version, displaying corrections as strikeouts in an appropriate color.\n\nSuch tools are used to improve writing, produce engaging content, and for assessment and training purposes. Several tools also offer style correction to adapt the article for specific domains like academic publications, marketing and advertising, legal, news reporting, etc.\n\nHowever, no tool to date is a perfect substitute for an expert human evaluator. \n\n\n### What are some important terms relevant to a grammar and spell checker?\n\nThe following NLP terms and approaches are relevant to grammar and spell checkers: \n\n + **Part-of-Speech (PoS)** tagging marks words as noun, verb, adverb, etc. based on definition and context.\n + **Named Entity Recognition (NER)** is labeling a sequence of text into predefined categories such as name, location, etc. Labels help determine the context of words around them.\n + **Confusion Set** is a set of probable words that can appear in a certain context, e.g. the set of articles before a noun.\n + **N-Gram** is a sub-sequence of n words or tokens. 
For example, \"The sun is bright\" has these 2-grams: {\"the sun\", \"sun is\", \"is bright\"}.\n + **Parallel Corpus** is a collection of text placed alongside its translation, e.g. text with errors and its corresponding corrected version(s).\n + **Language Model (LM)** determines the probability distribution over a sequence of words. It says how likely a particular sequence of words is.\n + **Machine Translation (MT)** is a software approach to translate one sequence of text into another. In grammar checking, this refers to translating erroneous text into correct text.\n\n### What are the various types of grammar and spelling errors?\n\nWe describe the following types: \n\n + **Sentence Structure**: Parts of speech are organized incorrectly. For example, \"she began to singing\" shows misplaced 'to' or '-ing'. A dependent clause without the main clause, a run-on sentence due to a missing conjunction, or a missing subject are some structural errors.\n + **Syntax Error**: Violation of the rules of grammar. These can be in relation to subject-verb agreement, a wrong/missing article or preposition, a verb tense or verb form error, or a noun number error.\n + **Punctuation Error**: Punctuation marks like comma, semi-colon, period, exclamation, question mark, etc. are missing, unnecessary, or wrongly placed.\n + **Spelling Error**: The word is not in the dictionary.\n + **Semantic Error**: Grammar rules are followed but the sentence doesn't make sense, often due to a wrong choice of words. \"I am going to the library to buy a book\" is an example where 'bookstore' should replace 'library'. Rule-based approaches typically can't handle semantic errors. They require statistical or machine learning approaches, which can also flag other types of errors. Often a combination of approaches leads to a good solution.\n\n### What are classical methods for implementing grammar and spell checkers?\n\nClassical methods of spelling correction match words against a given dictionary, an approach critics call unreliable since it can't detect incorrect use of correctly spelled words and it flags correct words missing from the dictionary, like technical words, acronyms, etc.\n\nGrammar checkers use hand-coded grammar rules on PoS-tagged text to identify correct or incorrect sentences. For instance, the rule `I + Verb (3rd person, singular form)` corresponds to incorrect verb form usage, as in the phrase \"I has a dog.\" These methods provide detailed explanations of flagged errors, making them helpful for learning. However, rule maintenance is tedious and rules lack context. \n\nStatistical approaches validate parts of a sentence (n-grams) against their presence in a corpus. These approaches can flag words used out of context. However, it's challenging to provide detailed explanations. Their effectiveness is limited by the choice of corpora. \n\nThe **noisy channel model** is one statistical approach. An LM based on trigrams and bigrams gives better results than just unigrams. Where rare words are wrongly corrected, using a blacklist of words or a probability threshold can help. \n\n\n### What are Machine Learning-based methods for implementing grammar and spell checkers?\n\nML-based approaches are either Classification (discriminative) or Machine Translation (generative).\n\n**Classification** approaches work with well-defined errors. Each error type (article, preposition, etc.) requires training a separate multi-class classifier. 
For example, a preposition error classifier takes n-grams associated with prepositions in a sentence and outputs a score for every candidate preposition in the confusion set. Contextual corrections also consider features like PoS and NER. A model can be a linear classifier like a Support Vector Machine (SVM), an n-gram LM-based or Naïve Bayes classifier, or even a DNN-based classifier. \n\n**Machine Translation** approaches can be Statistical Machine Translation (SMT) or Neural Machine Translation (NMT). Both of these use parallel corpora to train a sequence-to-sequence model, where text with errors translates to corrected text. NMT uses an encoder-decoder architecture, where an encoder determines a latent vector for a sentence based upon the input word embeddings. The decoder then generates target tokens from the latent vector and relevant surrounding input and output tokens (attention). These benefit from transfer learning and advancements in transformer-based architecture. Editor models reduce training time by outputting edits to input tokens from a reduced confusion set instead of generating target tokens. \n\n\n### How can I train an NMT model for grammar and spell checking?\n\nIn general, NMT requires training an **encoder-decoder model** using cross-entropy as the loss function by comparing the maximum likelihood output to the gold standard correct output. Training a good model requires large parallel corpora and significant compute capacity. Transformers are attention-based deep seq2seq architectures. Pre-trained language models generated by transformer architectures like BERT provide contextual embeddings to find the most likely token given the surrounding tokens, making them useful for flagging contextual errors in an n-gram.\n\n**Transfer learning**, fine-tuning the weights of a transformer on a parallel corpus of incorrect-to-correct examples, makes it suitable for GEC. Pre-processing or pre-training with synthetic data improves the performance and accuracy. A further enhancement is to use separate heads for different types of errors. \n\n**Editor models** are better as they output edit sequences instead of corrected versions. Training and testing of editor models require the generation of edit sequences from source-target parallel texts.\n\n\n### What datasets are available for training and evaluation of grammar and spell check models?\n\nMT or classification models need datasets with annotated errors. NMT requires a large amount of data. \n\n*Lang 8*, the largest available parallel corpus, has 100,051 English entries. *Corpus of Linguistic Acceptability (CoLA)* is a dataset of sentences labeled as either grammatically correct or incorrect. It can be used, for example, to fine-tune a pre-trained model. *GitHub Typo Corpus* is harvested from GitHub and contains errors and their corrections. \n\nBenchmarking data in Standard Generalized Markup Language (SGML) format is available. Sebastian Ruder offers a detailed list of available benchmarking test datasets along with the various models (publications and source code). \n\n**Noise models** use transducers to produce erroneous sentences from correct ones with a specified probability. They induce various error types to generate a larger dataset from a smaller one, such as replacing a word with another from its confusion set, misplacing or removing punctuation, or inducing spelling, tense, noun number, or verb form mistakes. **Round-trip MT**, such as English-German-English translation, can also generate parallel corpora. 
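To make the noise-model idea concrete, here's a small sketch that corrupts a correct sentence to produce a synthetic (erroneous, correct) training pair; the confusion sets, probabilities and example sentence are purely illustrative and not taken from any published system:

```python
import random

# Hypothetical confusion sets; real systems derive them from corpora.
CONFUSION = {
    "in": ["on", "at"],
    "on": ["in", "at"],
    "a": ["an", "the", ""],
    "the": ["a", ""],
}

def add_noise(sentence, p=0.3, seed=None):
    """Return a synthetically corrupted copy of a correct sentence."""
    rng = random.Random(seed)
    noisy = []
    for tok in sentence.split():
        if tok.lower() in CONFUSION and rng.random() < p:
            repl = rng.choice(CONFUSION[tok.lower()])   # confusion-set substitution
            if repl:                                    # an empty string drops the article
                noisy.append(repl)
        elif len(tok) > 3 and rng.random() < p / 3:
            i = rng.randrange(len(tok) - 1)             # induce a spelling error (adjacent swap)
            noisy.append(tok[:i] + tok[i + 1] + tok[i] + tok[i + 2:])
        else:
            noisy.append(tok)
    return " ".join(noisy)

correct = "She left the keys in a drawer near the door"
print(add_noise(correct, seed=7), "->", correct)        # (source, target) pair
```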
**Wikipedia edit sequences** offer millions of consecutive snapshots to serve as source-target pairs. However, only a tiny fraction of those edits are language related.\n\n\n### How do I annotate or evaluate the performance of grammar and spell checkers?\n\nThe ERRor ANnotation Toolkit (ERRANT) enables suggestions with explanations. It automatically annotates parallel English sentences with error type information, thereby standardizing parallel datasets and facilitating detailed error type evaluation. \n\nTraining and evaluation require comparing the output to the target gold standard and giving a numerical measure of effectiveness or loss. Editor models have an advantage as the sequence length of input and output is the same. Unequal sequences need alignment with the insertion of empty tokens.\n\nThe Max-Match (\\(M^2\\)) scorer determines the smallest edit sequence out of the multiple possible ways to arrive at the gold standard using the notion of Levenshtein distance. The evaluation happens by computing precision, recall, and F1 measure between the set of system edits and the set of gold edits for all sentences after aligning the sequences to the same length.\n\nDynamic programming can also align multiple sequences to the gold standard when there is more than one possible correct outcome. \n\n\n### Could you mention some tools or libraries that implement grammar and spell checking?\n\n*GNU Aspell* is a standard utility used in the GNU OS and other UNIX-like operating systems. *Hunspell* is a spell checker that's part of popular software such as LibreOffice, OpenOffice.org, Mozilla Firefox 3 & Thunderbird, Google Chrome, and more. Hunspell itself is based on MySpell. Hunspell can use one or more dictionaries, stemming, morphological analysis, and Unicode text. \n\nPython packages for spell checking include `pyspellchecker`, `textblob` and `autocorrect`. \n\nA search for \"grammar spell\" on GitHub brings up useful dictionaries or code implemented in various languages. There's a converter from British to American English. Spellcheckr is a JavaScript implementation for web frontends. \n\nDeep learning models include Textly-DRF-API and GECwBERT.\n\nMany online services or offline software also exist: WhiteSmoke from 2002, LanguageTool from 2005, Grammarly from 2009, Ginger from 2011, Reverso from 2013, and Trinka from 2020. Trinka focuses on an academic style of writing. Grammarly focuses on suggestions in terms of writing style, clarity, engagement, delivery, etc. \n\n## Milestones\n\n1960\n\nBlair implements a simple spelling corrector using heuristics and a dictionary of correct words. Incorrect spellings are associated with the corrected ones via abbreviations that indicate similarity between the two. Blair notes that this is in some sense a form of pattern recognition. In one experiment, the program successfully corrects 89 of 117 misspelled words. In general, research interest in spell checking and correction begins in the 1960s. \n\n1971\n\nR. E. Gorin writes **Ispell** in PDP-10 assembly. Ispell becomes the main spell-checking program for UNIX. Ispell is also credited with introducing the generalized affix description system. Much later, Geoff Kuenning implements a C++ version with support for many European languages. This is called **International Ispell**. GNU Aspell, MySpell and Hunspell are other programs inspired by Ispell. \n\n1980\n\nIn the 1980s, GEC systems are **syntax-based systems**, such as EPISTLE. 
They determine the syntactic structure of each sentence and the grammatical functions fulfilled by various phrases. They detect several classes of grammatical errors, such as disagreement in number between the subject and the verb.\n\n1990\n\nThis decade focuses on simple **linear classifiers** to flag incorrect article choice and on statistical methods to identify and flag commonly confused words. Confusion can be due to identical-sounding words, typos, etc.\n\n2000\n\nRule-based methods evolve in the 2000s. Rule generation is based on parse trees, designed heuristically or based on linguistic knowledge or statistical analysis of erroneous text. These methods don't generalize to new types of errors. New rules need to be constantly added. \n\n2005\n\nIn the mid-2000s, methods emerge to record and create aligned corpora of pre- and post-editing ESL (English as a Second Language) writing samples. SMTs offer improvement in identifying and correcting writing errors. GEC sees the use of semantic and syntactic features including PoS tags and NER information for determining the applicable correction. Support Vector Machines (SVMs), n-gram LM-based and Naïve Bayes classifiers are used to predict the potential correction.\n\n2010\n\n**DNN-based classifier** approaches are proposed in the 2000s and early 2010s. However, a specific set of error types has to be defined. Typically only well-defined errors can be addressed with these approaches. SMT models learn mappings from source text to target text using a noisy channel model. SMT-based GEC models use parallel corpora of erroneous text and a grammatically correct version of the same text in the same language. Open-source SMT engines are available online and include Moses, Joshua and cdec. \n\n2016\n\n**Neural Machine Translation (NMT)** shows better prospects by capturing some learner errors missed by SMT models. This is because NMT can encode structural patterns from training data and is more likely to capture an unseen error. \n\n2018\n\nWith the advent of the **attention-based transformer architecture** in 2017, its application to GEC gives promising results. \n\n2019\n\nMethods to improve the training data by text augmentation of various types, including cyclic machine translation, emerge. These improve the performance of GEC tools significantly and enable better flagging of style or context-based errors or suggestions. Predicting edits instead of tokens allows the model to pick the output from a smaller confusion set. Thus, editor models lead to faster training and inference of GEC models.","meta":{"title":"Grammar and Spell Checker","href":"grammar-and-spell-checker"}} {"text":"# Question Answering\n\n## Summary\n\n\nSearch engines, and information retrieval systems in general, help us obtain relevant documents for any search query. In reality, people want answers. Question Answering (QA) is about giving a direct answer in the form of a grammatically correct sentence. \n\nQA is a subfield of Computer Science. It's predominantly based on Information Retrieval and Natural Language Processing. Both questions and answers are in natural language. \n\nQA is also related to an NLP subfield called *text summarization*. Where answers are long and descriptive, they're probably summarized from different sources. In this case, QA is also called **focused summarization** or **query-based summarization**. \n\nThere are lots of datasets to train and evaluate QA models. 
By the late 2010s, neural network models have brought state-of-the-art results.\n\n## Discussion\n\n### Which are the broad categories of questions answered by QA systems?\n\n**Factoid** questions are the simplest. An example of this is \"What is the population of the Bahamas?\" Answers are short and factual, often identified by named entities. Variations of factoid questions include single answer, list of answers (such as \"Which are the official languages of Singapore?\"), or yes/no. Questions typically ask what, where, when, which, who, or is.\n\nQA research started with factoid questions. Later, research progressed to questions that sought **descriptive** answers. \"Why is the sky blue?\" requires an explanation. \"What is global warming?\" requires a definition. Questions typically ask why, how or what.\n\n**Closed-domain** questions are about a specific domain such as medicine, environment, baseball, algebra, etc. **Open-domain** questions can be about any domain. Open-domain QA systems use large collections of documents or knowledge bases covering diverse domains. \n\nWhen the system is given a single document to answer a question, we call it **reading comprehension**. If information has to be searched in multiple documents across domains, the term **open-context open-domain QA** has been used. \n\n\n### What are the main approaches or techniques used in question answering?\n\nQA systems rely on external sources from which answers can be determined. Broad approaches are the following: \n\n + **Information Retrieval-based**: Extends the traditional IR pipeline. *Reading comprehension* is applied on each retrieved document to select a suitable named entity, sentence or paragraph. This has also been called *open domain QA*. The web (or CommonCrawl), PubMed and Wikipedia are possible sources.\n + **Knowledge-based**: Facts are stored in knowledge bases. Questions are converted (by semantic parsers) into semantic representations, which are then used to query the knowledge bases. Knowledge could be stored in relational databases or as RDF triples. This has also been called *semantic parsing-based QA*. DBpedia and Freebase are possible knowledge sources.\n + **Hybrid**: IBM's DeepQA is an example that combines both IR and knowledge approaches.\n\n### What are some variations of question answering systems?\n\nWe note the following variations or specializations of QA systems:\n\n + **Visual QA (VQA)**: Input is an image (or video) rather than text. VQA is at the intersection of computer vision and NLP.\n + **Conversational QA**: In dialogue systems, there's a continuity of context. The current question may be incomplete or ambiguous but it can be resolved by looking at past interactions. CoQA and QuAC are two datasets for this purpose.\n + **Compositional QA**: Complex questions are decomposed into smaller parts, each answered individually, and then the final answer is composed. This technique is used in VQA as well.\n + **Domain-Specific QA**: Biomedical QA is a specialized field where both domain patterns and knowledge can be exploited. AQuA is a dataset specific to algebra.\n + **Context-Specific QA**: Social media texts are informal. Models that do well on newswire QA have been shown to do poorly on tweets. 
Community forums (Quora, StackOverflow) provide multi-sentence questions with often long answers that are upvoted or downvoted.\n\n### What are the key challenges faced by question answering systems?\n\nQA systems face two challenges: **question complexity (depth)** and **domain size (breadth)**. Systems are good at either of these but not both. An example of depth is \"What's the cheapest bus to Chichen Itza leaving tomorrow?\" A much simpler question is \"Where is Chichen Itza?\" \n\n**Common sense reasoning** is challenging. For example, 'longest river' requires reverse sorting by length; 'by a margin of' involves some sort of comparison; 'at least' implies a lower cut-off. Temporal or spatial questions require reasoning about time or space relations. \n\n**Lexical gap** means that a concept can be expressed using different words. For example, we're looking for a 'city' but the question asks about a 'venue'. Approaches to solving this include string normalization, query expansion, and entailment. \n\n**Ambiguity** occurs when a word or phrase can have multiple meanings, only one of which is intended in a given context. The correct meaning can be obtained via corpus-based methods (distributional hypothesis) or resource-based methods. \n\nSometimes the answer is **distributed** across different sources. QA systems need to align different knowledge ontologies. An alternative is to decompose the question into simpler queries and combine the answers later. \n\n\n### What are the steps in a typical question answering pipeline?\n\nIn IR-based factoid QA, tokens from the question or the question itself forms the query to the IR system. Sometimes stopwords may be removed, the query rephrased or expanded. From the retrieved documents, relevant sentences or passages are extracted. Named entities, n-gram overlap, question keywords, and keyword proximity are some techniques at this stage. Finally, a suitable answer is picked. We can train classifiers to extract an answer. Features include answer type, matching pattern, number of matching keywords, keyword distance, punctuation location, etc. Neural network models are also common for answer selection. \n\nFor knowledge-based QA, the first step is to invoke a semantic parser to obtain a logical form for querying. Such a parser could be rule-based to extract common relations, or it could be learned via supervised machine learning. More commonly, semi-supervised or unsupervised methods are used based on web content. Such methods help us discover new knowledge relations in unstructured text. Relevant techniques include distant supervision, open information extraction and entity linking. \n\n\n### How are neural networks being used in question answering?\n\nWidespread use of neural networks for NLP started with **distributed representation** for words. A feedforward model learned the representation as it was being trained on a language modelling task. In these representations, semantically similar words will be close to one another. The next development was towards **compositional distributional semantics**, where sentence-level representations are composed from word representations. These were more useful for question answering. \n\nIyyer et al. reduced dependency parse trees to vector representations that were used to train an RNN. Yu et al. used a CNN for answer selection. A common approach to answer selection is to look at the similarity between question and answer in the semantic space. 
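As a toy stand-in for that idea (plain word overlap instead of learned semantic vectors, which is what actual systems compare), candidate answers can be ranked by their similarity to the question. The question and candidate sentences below are invented for illustration:

```python
import math
import re

STOP = {"what", "are", "the", "of", "is", "a", "an", "and", "in", "as", "its"}

def tokens(text):
    return {t for t in re.findall(r"[a-z]+", text.lower()) if t not in STOP}

def similarity(a, b):
    # cosine similarity over binary term vectors
    return len(a & b) / math.sqrt(len(a) * len(b)) if a and b else 0.0

question = tokens("What are the official languages of Singapore?")
candidates = [
    "Singapore has four official languages: English, Malay, Mandarin and Tamil.",
    "Singapore is an island city-state in Southeast Asia.",
    "Singapore uses the Singapore dollar as its currency.",
]
best = max(candidates, key=lambda c: similarity(question, tokens(c)))
print(best)   # the candidate sharing the most question terms is selected
```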
Later models added an attention layer between the question and its candidate answers. Tan et al. evaluated BiLSTMs with attention and CNN. Dynamic Coattention Network (DCN) is also based on attention. Facebook researchers combined a seq2seq model with multitasking. \n\nTransformer architecture has been applied for QA. In fact, QA was one of the tasks to which BERT was fine-tuned (on SQuAD) and evaluated. BERTserini used fine-tuned BERT along with information retrieval from Wikipedia. \n\n\n### What are some useful datasets for training or evaluating question answering models?\n\nDatasets are used for training and evaluating QA systems. Based on the design and makeup, each dataset might evaluate different aspects of the system better.\n\nAmong the well-known datasets are Stanford Question Answering Dataset (SQuAD), Natural Question (NQ), Question Answering in Context (QuAC) and HotpotQA. All four are based on Wikipedia content. Conversational Question Answering (CoQA) is a dataset that's based on Wikipedia plus other sources. Wikipedia often presents data in tables. WikiTableQuestions is a dataset in which answers are in tables rather than freeform text. TyDi QA is a multilingual dataset. TweetQA takes its data from Twitter. \n\nQuestion Answering over Linked Data (QALD) is a series of datasets created from knowledge bases such as DBpedia, MusicBrainz, Drugbank and LinkedSpending. \n\nOther datasets to note are ELI5, ShARC, MS MARCO, NewsQA, CMU Wikipedia Factoid QA, CNN/DailyMail QA, Microsoft WikiQA, Quora Question Pairs, CuratedTREC, WebQuestions, WikiMovies, GeoQuery and ATIS. \n\nPapers With Code lists dozens of datasets along with their respective state-of-the-art models. \n\n## Milestones\n\n1961\n\nMIT researchers implement a program named *Baseball*. It reads a question from a punched card. It references a dictionary of words and idioms to generate a \"specification list\", which is a canonical expression of what the question is asking. Content analysis involves syntactic phrase structures. \n\n1963\n\nBertram Raphael at MIT publishes a memo titled *Operation of a Semantic Question-Answering System*. He describes a QA model that accepts a restricted form of English. Factual information comes from a relational model. Program is written in LISP. Raphael credits LISP's list-processing capability for making the implementation a lot easier. \n\nDec \n1993\n\nDeveloped at MIT, *START* goes online. This is probably the world's first web-based QA system. It can answer questions on places, people, movies, dictionary definitions, etc. \n\nJun \n1997\n\nWith the growth of web, *AskJeeves* is launched as an online QA system. However, it basically does pattern matching against a knowledge base of questions and returns curated answers. If there's no match, it falls back to a web search. In February 2006, the system is rebranded as *Ask*. \n\nNov \n1999\n\nAt the 8th Text REtrieval Conference (TREC-8), a Question Answering track is introduced. This is to foster research in QA. TREC-8 focuses on only open-domain closed-class questions (fact-based short answers). At future TREC events, the QA track continues to produce datasets for training and evaluation.\n\n2002\n\nIt's helpful to identify the type of question being asked. Li and Roth propose a machine learning approach to **question classification**. Such a classification imposes constraints on potential answers. Due to ambiguity, their model allows for multiple classes for a single question. 
For example, \"What do bats eat?\" could belong to three class: food, plant, animal. The features used for learning include words, POS tags, chunks, head chunks, named entities, semantically related words, n-grams, and relations. \n\n2010\n\nAfter about three years of effort, IBM Watson competes at human expert levels in terms precision, confidence and speed at the *Jeopardy!* quiz show. It's *DeepQA* architecture integrates many content sources and NLP techniques. Answer candidates come with confidence measures. They're then scored using supporting evidence. Watson wins *Jeopardy!* in February 2011. \n\nDec \n2014\n\nYu et al. look at the specific task of answer selection. Using **distributed representations**, they look for answers that are semantically similar to the question. This is a departure from a classification approach that uses hand-crafted syntactic and semantic features. They use a bigram model with a convolutional layer and a average pooling layer. These capture syntactic structures and long-term dependencies without relying external parse trees. \n\nJul \n2017\n\nChen et al. use Wikipedia as the knowledge source for open-domain QA. Answers are predicted as text spans. Earlier research typically consider a short piece of already identified text. Since the present approach searches over multiple large documents, they call it \"machine reading at scale\". Called *DrQA*, this system integrates document retrieval and document reading. Bigram features and bag-of-words weighted with TF-IDF are used for retrieval. The reader uses BiLSTM each for the question and passages, with attention between the two. \n\nOct \n2018\n\nResearchers at Google release **BERT** that's trained on 3.3 billion words of unlabelled text. BERT is a pre-trained language model. As a sample task, they fine-tune BERT for question answering. SQuAD v1.1 and v2.0 datasets are used. Question and text containing the answer are concatenated to form the input sequence. Start and end tokens of the answer are predicted using softmax. For questions without answers, start/end tokens point to `[CLS]` token. \n\nJan \n2019\n\nGoogle release *Natural Questions (NQ)* dataset. It has 300K pairs plus 16K questions with answers from five different annotators. Answer comes from a Wikipedia page and the model is required to read the entire page. The questions themselves are based on real, anonymized, aggregated queries from Google Search. Answers can be yes/no, long, long and short, or no answer. \n\n2019\n\nOn SQuAD 2.0 dataset, many implementations start surpassing human performance. Many of these are based on the **transformer neural network architecture** including BERT, RoBERTa, XLNet, and ALBERT. Let's note that SQuAD 2.0 combines 100K questions from SQuAD 1.1 plus 50K unanswerable questions. When there's no answer, models are required to abstain from answering. \n\nJun \n2019\n\nSince datasets are available only for some domains and languages, Lewis et al. propose a method to synthesize questions to train QA models. Passages are randomly selected from documents. Random noun phrases or named entities are picked as answers. \"Fill-in-the-blanks\" questions are generated. Using neural machine translation (NMT), these are converted into natural questions. \n\nFeb \n2020\n\nGoogle Research releases *TyDi QA*, a typologically diverse multilingual dataset. It has 200K question-answer pairs from 11 languages. To avoid shared words in a pair, a human was asked to frame a question when they didn't know the answer. 
Google Search identified a suitable Wikipedia article to answer the question. The person then marked the answer. Researchers expect their model to generalize well to many languages.","meta":{"title":"Question Answering","href":"question-answering"}} {"text":"# Inter-Service Communication for Microservices\n\n## Summary\n\n\nIn a monolithic application, all parts of the app access a shared database. Each part can easily invoke the functionality of another part. In a microservices architecture, an app is composed of many microservices, each potentially managing its own database. What happens if one service requires data or processing from another service? This is not as trivial or efficient as in a monolithic application.\n\nInter-service communication (ISC) is an important consideration when designing a microservices-based application. A badly designed app can result in a lot of communication among the different services, resulting in a *chatty app* or *chatty I/O*. Communication can be reduced by having fewer microservices but this replaces the monolith with smaller monoliths. The goal is therefore to achieve a balance by following good design principles and patterns.\n\n## Discussion\n\n### Why do microservices need to communicate with one another?\n\nIn the traditional approach to building monolithic applications, lot of communication was internal to the app. Such communication was often local, fast and easily manageable. When designing a microservices architecture, we break up the monolithic into independent parts, each of which has a well-defined role. While each microservice can be deployed and scaled independently, none of them deliver the full value of the application. A microservice will often require data managed by another, or require the services of another.\n\nFor example, consider a taxi booking application. Trip management and passenger management are separate microservices but a trip cannot be initiated without some knowledge or authentication of the passenger. Hence these two independent microservices, each doing its specific roles, will still need to communicate. \n\nWhile microservices architecture has brought benefits to build large-scale applications, it has also exposed the communication across microservices. Complexity that was previously hidden is now visible. Dozens or even hundreds of microservices that make up an app must be \"wired together\" properly to make the whole thing work.\n\n\n### What are the different types of inter-service communication for microservices?\n\nBroadly, there are two types: \n\n + **Synchronous**: Client sends a request and waits for a response. Client code execution itself may prefer to receive the response via a callback (thread is not blocked) or wait (thread is blocked). Either way, the communication to the external world is synchronous.\n + **Asynchronous**: Client sends a request and doesn't wait for a response.With synchronous communication protocols, the receiver has to be available to send back the response. From application perspective, synchronous implies a less responsive user experience since we have to wait for the response. If one of the services in a chain of synchronous requests delays its response, the entire call flow gets delayed. \n\nWith asynchronous communication protocols, the request (often called message) is typically sent to a message queue. Even if the receiver is not available, the message will remain in the queue and can be processed at a later time. 
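As a minimal sketch of this asynchronous style (assuming a local RabbitMQ broker and the `pika` client; the queue name and payload tie back to the taxi example above but are invented), a producer drops a message on a queue and returns immediately, while a consumer picks it up whenever it is ready:

```python
import json
import pika  # RabbitMQ client library

# Producer: publish a message and move on without waiting for a response.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="trip_requests", durable=True)  # queue survives broker restarts

message = {"passenger_id": 42, "pickup": "Airport", "drop": "Downtown"}
channel.basic_publish(
    exchange="",                      # default exchange routes by queue name
    routing_key="trip_requests",
    body=json.dumps(message),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()

# Consumer (normally a separate service): process messages whenever available.
def handle(ch, method, properties, body):
    trip = json.loads(body)
    print("Processing trip request", trip)
    ch.basic_ack(delivery_tag=method.delivery_tag)

consumer = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
ch = consumer.channel()
ch.queue_declare(queue="trip_requests", durable=True)
ch.basic_consume(queue="trip_requests", on_message_callback=handle)
# ch.start_consuming()  # blocking loop; commented out so the sketch terminates
```
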
Even if a service fails to respond, the original asynchronous call is not affected since it's not waiting for a response. \n\n\n### What protocols and data formats are suitable for inter-service communication for microservices?\n\n**HTTP/HTTPS** is a synchronous protocol. Often service APIs are exposed as REST endpoints. **AMQP** and **MQTT** are examples of asynchronous protocols. To manage the queue, we can use RabbitMQ **message broker**. Instead of a message queue, we can also use an **event bus** for updating data asynchronously. \n\nSynchronous protocols are usually limited to one-to-one interactions. Asynchronous protocols have more options: **one-to-one** (notifications), **one-to-many** (publish/subscribe), or even allow for responses coming back asynchronously. For example, a user sends a tweet on her Twitter account that has many followers. This is an example of one-to-many publish/subscribe model.\n\nThe data that these protocols carry must be formatted in a manner understood by all. **Text-based formats** include JSON and XML. XML in particular is very verbose. Therefore, some implementations may prefer **binary formats**: MessagePack, Thrift, ProtoBuf, Avro. Note that these are well-known and popular formats and using them enables easier integration of microservices. It's also possible (but not preferred) to use a proprietary non-standard format internal to an application.\n\n\n### What design patterns are available for inter-service communication for microservices?\n\nHere are a few patterns to note:\n\n + **Saga Pattern**: A sequence of transactions, each one local to its database. A microservice triggers another via an event or message. If something fails, reverse operations undo the changes.\n + **API Composition**: Since table joins are not possible across databases, a dedicated service (or API gateway) coordinates the \"joins\", which are now at application layer rather than within the database.\n + **Command Query Responsibility Segregation (CQRS)**: Services keep materialized views based on data from multiple services. These views are updated via subscribed events. CQRS separates writes (commands) from the reads (queries).\n + **Event Sourcing**: Rather than store state, we store events. State may be computed from these events as desired. This is often used with CQRS: write events but derive states by replaying the events.\n + **Orchestration**: A central controller or orchestrator coordinates interactions across microservices. API Composition is a form of orchestration.\n + **Choreography**: Each microservice knows what to do when an event occurs, which are posted on an event stream/bus.\n + **Service Mesh**: Push application networking functions down to the infrastructure and not mix them with business logic.\n\n### Could you compare orchestration and choreography?\n\nOrchestration is a centralized approach. Calls are often synchronous: orchestrator calls service A, waits for response, then calls service B, and so on. This is good if service B depends on data from service A. However, if service A is down, service B can't be called. By coupling B with A, we've created a dependency. The orchestrator also becomes a single point of failure. \n\nChoreography enables peer-to-peer interactions without a centralized controller. It's more flexible and scalable than the orchestration approach. It's event-driven architecture applied to microservices. The logic of handling an event is built into the microservice. Choreography is asynchronous and non-blocking. 
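To make the idea concrete, here is a tiny in-process sketch of choreography. The event names and services are hypothetical, and a real deployment would publish to an event stream or broker rather than an in-memory dictionary; the point is only that each service registers its own reaction and there is no central orchestrator.

```python
from collections import defaultdict

# Stand-in for an event bus/stream; real systems use Kafka, RabbitMQ, etc.
subscribers = defaultdict(list)

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    for handler in subscribers[event_type]:
        handler(payload)          # in practice delivery is asynchronous

# Each microservice owns its reaction to an event.
def billing_service(order):
    print("billing: charging for order", order["id"])
    publish("payment_completed", {"order_id": order["id"]})

def shipping_service(payment):
    print("shipping: dispatching order", payment["order_id"])

subscribe("order_placed", billing_service)
subscribe("payment_completed", shipping_service)

publish("order_placed", {"id": 1001})   # triggers billing, then shipping
```
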
The patterns CQRS and Event Sourcing are applicable to choreography. \n\nThere are also hybrid approaches where a service orchestrates a few services while it interacts with others via an event stream. In another approach, an orchestrator emits events for other services and consumes response events asynchronously from the event stream for further processing. \n\nTo conclude, \n\n> Orchestration is about control whereas choreography is about interactions.\n\n\n### How do we handle service API calls that fail?\n\nThe simplest solution is to **retry** after a specified timeout. A maximum number of retries can be attempted. However, if the operation is not idempotent (that is, it changes application state), then retry is not a safe recovery method. \n\nThe other approach is to use a **circuit breaker**. Many failed requests can result in a bottleneck. There's no point sending further requests. This is where we \"open the circuit\" to prevent further requests to a service that's not responding. \n\nWe can also proactively reduce chances of failure by **load balancing** requests. A request must be processed by a service instance and we can select an instance that has less load. Container orchestrators (Kubernetes) or service meshes (Istio) enable this. \n\n\n### Are there any best practices for defining a service API?\n\nMicroservices must be designed to be independent of one another. One approach is to use **Domain-Driven Design**. This talks about understanding the problem space, using design patterns, and refactoring continuously. API should model the domain. It shouldn't leak internal implementations. \n\nAPIs must have well-defined semantics and versioning schemes. A microservice can support multiple versions of an API or you could have a service for each version. Public APIs are usually REST over HTTP. Internal APIs can adopt RPC-style, where remote calls can look like local calls. However, they should be designed right to avoid chattiness. Consider the trade-off between making many I/O calls and retrieving too much data. \n\nSince application state is now distributed across microservices, design for and manage **Eventual Consistency**. \n\nWhile REST calls may use JSON, RPC calls can be more efficient with binary formats enabled by RPC frameworks such as gRPC, Apache Avro and Apache Thrift. To simplify API design and development, use an **Interface Definition Language (IDL)**. This will generate client code, serialization code and API documentation. \n\n\n### What are some anti-patterns to avoid when microservices need to communicate?\n\nSharing a database across many microservices is an anti-pattern since this introduces tighter coupling. A single data model for all microservices is another. Using synchronous protocols across many microservices increases latencies and makes your app brittle to failures. If microservices are not properly defined, this may result in chatty I/O that affects performance and responsiveness. \n\nAn application may depend on hundreds of shared libraries. In the spirit of code reuse, all microservices may be relying on these libraries. This results in another anti-pattern called **distributed monolith**. Reusing code within a domain or service boundary is fine but anything beyond that is coupling. This form of coupling is worse than code duplication. Shared libraries can be considered but not made mandatory in some areas: logging, tracing, routing. 
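Returning to the failure-handling patterns discussed earlier (bounded retries and the circuit breaker), the following is a bare-bones Python sketch of the idea, not a production implementation. The thresholds, the backoff, and the target URL in the usage comment are placeholders, and retries should only be attempted for idempotent operations.

```python
import time

class CircuitBreaker:
    """Sketch of retry-with-limit plus a circuit breaker (illustrative only)."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures    # consecutive failures before opening
        self.reset_timeout = reset_timeout  # seconds before a trial request is allowed
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, retries=2, **kwargs):
        # Fail fast while the circuit is open.
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: request not sent")
            self.opened_at = None   # half-open: allow one trial call
            self.failures = 0
        last_error = None
        for attempt in range(retries + 1):
            try:
                result = func(*args, **kwargs)
                self.failures = 0   # success closes the circuit again
                return result
            except Exception as err:       # only retry idempotent operations
                last_error = err
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.time()   # trip the breaker
                    break
                time.sleep(0.5 * (attempt + 1))    # simple linear backoff
        raise last_error

# Usage sketch (hypothetical endpoint):
#   breaker = CircuitBreaker()
#   breaker.call(requests.get, "http://trips/api/v1/trips/1", timeout=2)
```

In practice, teams rely on libraries such as Hystrix (mentioned below) or on a service mesh rather than hand-rolled code like this.
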
\n\nIt's not required that every event should contain full data, particularly when consumers are going to use only some of it. Consider sending essential data and URLs pointing to additional data. To communicate, consider REST plus its alternatives such as messaging and event sourcing. \n\n\n### What tools can I use to implement inter-service communication for microservices?\n\nAmong the message brokers are RabbitMQ, Kafka, ActiveMQ, and Kestrel. Cloud providers offer their own messaging systems such as Amazon SQS, Google Cloud Pub/Sub, and Firebase Cloud Messaging (FCM). Microsoft Azure offers Event Grid, Event Hubs and Service Bus for messaging. NATS, a CNCF project, is an open source messaging system.\n\nIstio enables service mesh technology. Azure Service Fabric Mesh is an alternative. These rely on Envoy as the networking proxy. Similar proxies include Linkerd and Cilium. Conduit is a service mesh designed for Kubernetes. \n\nNetflix's Conductor can help with orchestration. For logging and monitoring, we have Retrace, Logstash, Graylog, and Jaeger. OpenTracing is an API that enables distributed tracing. Circuit breaker pattern can be implemented with Netflix's Hystrix. \n\nTo define service APIs, Swagger can be used. In Java, REST-based microservices can be created with Spring Boot. \n\n## Milestones\n\n2004\n\nEric Evans publishes *Domain-Driven Design: Tackling Complexity in the Heart of Software*. The book relates directly to object-oriented programming. Only later, it's relevance to microservices would be understood. \n\n2007\n\nMichael Nygard explains the **circuit breaker** pattern in his book *Release It!: Design and Deploy Production-Ready Software*. This helps us build fault tolerant systems. In 2011, Netflix invents the Hystrix framework that includes the circuit breaker pattern. \n\n2009\n\nNetflix embraces API-driven architecture that affects both development and operations. This is today seen as the birth of microservices. \n\n2010\n\nAt the start of this decade, the three-tier model (web tier, app tier, database tier) is found to break under heavy load. Microservices come to the rescue but they also bring problems relating to inter-service communication. Companies introduce libraries that are built into microservices to handle the networking aspects: Google's Stubby, Netflix's Hystrix, Twitter's Finagle. This however introduces coupling. A few years later these evolve to networking proxy and **service mesh**. \n\nJan \n2016\n\nVersion 0.0.7 of **Linkerd** is open sourced on GitHub. It's based on Twitter's Finagle and Netty. This is one of the early beginnings of a service mesh. Likewise, **Istio** version 0.1.0 is released as an alpha version in May 2017.","meta":{"title":"Inter-Service Communication for Microservices","href":"inter-service-communication-for-microservices"}} {"text":"# Technical Debt\n\n## Summary\n\n\nTo write good software, developers have to follow best practices: architecture, design patterns, code structure, naming convention, coding guidelines, test coverage, documentation, etc. In practice, business needs may push developers to release a functioning product as quickly as possible. Developers may violate the best practices, hoping to make improvements at a later time. When this happens we have **technical debt**. \n\nThe goal is not to eliminate technical debt. We accept technical debt in the short term and manage it in the long term. 
If it's ignored, problems will accumulate until demoralized developers, missed deadlines, unhappy customers and increasing costs drive the product out of the market. \n\nThe problems indicated by technical debt are related to many other terms: software maintenance, software evolution, software aging, code decay, code reengineering, design smells, code refactoring, deficit programming, and technical inflation.\n\n## Discussion\n\n### Why do we call technical debt a \"debt\"?\n\nThe debt metaphor comes from finance. It's common for companies to borrow money to grow faster and capture the market quickly. However, this debt has to be eventually repaid with interest. The longer the company delays the repayment, the more it pays in terms of interest. Ward Cunningham applied this debt metaphor to technology, more specifically to software development. \n\nDue to various reasons, developers might not adopt the best approach to build their product. They might release the product even when they don't understand some parts of it. This is the debt they acquire in the hope that they will fix these issues in a future release. Bad code and poor understanding of that code lead to more time and effort to add new features. This is the interest paid on the debt. An hour saved today would require more than an hour tomorrow to fix the problem. \n\nCode refactoring leads to better design that's easy to understand and maintain. However, the longer they postpone this refactoring, the more difficult it becomes to refactor, just as interest payments continue to grow in financial debt. \n\n\n### What's principal and interest with respect to technical debt?\n\nIn finance, when we borrow money, the principal has to be paid along with interest. The longer we delay the payment of principal, the more interest we pay.\n\nIn technical debt, suppose we keep working with poor code, design or documentation. **Interest** is the extra effort we pay to maintain the software. As the software gets more complex with each release, interest keeps going up. \n\nInstead, we could refactor the code incrementally and make it better for future maintenance. **Principal** is therefore the cost of refactoring. Paying the principal today reduces future interest payments (extra effort). \n\n\n### How does technical debt arise in a project?\n\nTechnical debt could arise due to \"business pressure, incorrect design decisions, postponing refactoring indefinitely, updating dependencies or simply the lack of experience of the developer\". Bad practices can lead to technical debt: starting development without proper design, lack of testing, poor documentation, or poor collaboration. \n\nA software product may start with a clean design but incur technical debt as requirements change. Sometimes a software component could undergo a number of incremental changes made by multiple developers. These developers may not fully understand that component or its original purpose. Some call this \"bit rot technical debt\". Coding by copy-paste is a symptom. \n\nSometimes a third-party software is upgraded in an incompatible manner, such as Magento 1.x to 2.0. All websites using Magento 1.x now have a technical debt since there's no simple upgrade path. \n\nConsider a web application. Though Ajax might be the right approach, it would take longer to develop. So developers use frames instead. This is a conscious decision that incurs technical debt. Developers must still implement a clean solution that can be migrated to Ajax easily in a future release. 
\n\n\n### What are the different types of technical debt?\n\nBack in 2009, Martin Fowler identified four types of technical debt: \n\n + **Reckless & Deliberate**: Developers are aware of good design practices but deliberately choose to ignore them and produce messy code. This leads to excessive interest payments and long time to payback the principal.\n + **Reckless & Inadvertent**: Developers are clueless about good design practices and produce messy code.\n + **Prudent & Deliberate**: Developers willing incur debt because they estimate that interest payments are small (rarely touched code). They do a cost-benefit analysis and accept the debt if it can be overcome.\n + **Prudent & Inadvertent**: This is often retrospective in nature. Developers release clean code but later realize that they could have done it differently. This is inadvertent debt. Developers are constantly learning and trying to improve their code. As systems evolve and requirements change, developers may find better designs.From the perspective of interest payment, we could classify technical debt as **eventually repaid** (refactoring on a daily basis, debt tends to zero), **sustainable** (refactoring regularly, debt is constant) or **compound growth** (adding new features but not refactoring, exponential growth of debt). \n\n\n### Could you share some case studies of technical debt?\n\nAccording to a study by Stripe, developers spend nearly half their time fighting with technical debt. They estimated that this amounts to $85 billion in annual cost. \n\nAnother study of large software organizations showed that 25% of development time is the cost of technical debt. Only some used backlogs and static analyzers to manage technical debt. Very few had a systematic process for addressing technical debt. \n\nTwitter's platform was built on Ruby on Rails that was hard to optimize for search performance. Twitter solved this by eventually switching to a Java server, thus paying off its technical debt. \n\nA Canadian company successfully released a product locally. When they expanded to the rest of Canada, they had to cater to 20% of French-speaking Canadians. They quickly solved this by using a global flag and lots of if-else statements in code. Later they got an order from Japan. Had they made their software multilingual earlier, they could have easily updated their software for Japanese or any other language. \n\n\n### What are the consequences of technical debt?\n\nIf not managed, technical debt has long term effects on the product. It becomes difficult to add features or improve the product. More time is spent on paying off the interest. Any change implies higher costs. As product quality deteriorates, system outages or security breaches can lead to lost sales or fines. The cost of a quick release might be poor design, more bugs, volatile performance, and insufficient testing. \n\nTechnical debt has a human cost too. Developers become unhappy. Adding new features becomes a pain. Even when new developers are hired, it takes time to explain what the code does or why it's so messy. New developers could start blaming older developers for the technical debt. Teamwork suffers. \n\nCrippling technical debt might force the team to postpone big changes. They might not adopt latest technologies or upgrade to the latest versions of third-party libraries. Developers might get stuck with outdated frameworks and have no opportunity to upgrade their skills. They may even leave for better opportunities elsewhere. 
\n\nUltimately, these problems can be related to business risk, cost, sales, and employee retention. Technical debt gives developers a language to communicate clearly with business folks. \n\n\n### How can I manage technical debt within my project?\n\nTechnical debt when addressed early requires only some code refactoring. If this is postponed, a more expensive rewrite may be needed. \n\nAs in financial debt, it's better to pay off the principal to save on future interest payments. In other words, developers must **continuously refactor code**. If a new feature takes three days, a developer could take an extra day to refactor. This might simplify the current feature and make the code easier to work with for the future. \n\nOnce code is shipped to customers, developers are reluctant to change it, for the fear of breaking system behaviour. The solution is to add **more tests**. In fact, technical debt implies a more disciplined approach towards refactoring and testing. \n\nDevelopers should follow **good design practices**: reuse rather than duplicate code; design highly cohesive and loosely coupled modules; name variable, classes and methods to reveal intention; document the code; organize code reviews. \n\nUnlike bugs, technical debt is often invisible. **Tools** such as *Designite* can help identify and track debt. Prioritize high-interest debts for refactoring. Motivate and reward developers for refactoring. \n\n## Milestones\n\n1992\n\nWard Cunningham coins the term **Technical Debt** while working on WyCASH+, a financial software written in Smalltalk and employing object-oriented programming. He notes that it's important to revise and rewrite code towards better understanding. He also notes,\n\n> A little debt speeds development so long as it is paid back promptly with a rewrite. … The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt.\n\n2001\n\n**The Agile Manifesto** is signed. In the following years, as the Agile software movement gains adoption, it also brings more visibility to technical debt. Technical debt becomes an essential concept of software engineering. Agile is about faster development and responding quickly to customer needs. This focus on short-term deliveries should not compromise long-term goals. It's in this context that technical debt becomes relevant in Agile. \n\nSep \n2009\n\nRobert C. Martin (commonly called Uncle Bob) makes the point that technical debt is incurred due to real-world constraints. It's risky but could be beneficial. It requires discipline and clean coding. **Messy code** is not technical debt. Messy code is due to laziness and unprofessionalism. \n\nOct \n2009\n\nMartin Fowler expands on Uncle Bob's explanation of technical debt. In the process, he identifies the **four quadrants** of technical debt. He also notes that technical debt is a useful metaphor when communicating with non-technical people. \n\n2016\n\nAt the IEEE 8th International Workshop on Managing Technical Debt, researchers present their findings on **technical debt in embedded systems**. They find that test, architecture and code debts are more common. In embedded systems, runtime aspects are prioritized higher than design aspects. 
When the expected lifetime of a component is more than ten years, its maintainability is more seriously considered.","meta":{"title":"Technical Debt","href":"technical-debt"}} {"text":"# Document Development Life Cycle\n\n## Summary\n\n\nDocumentation is considered a means of communication among software developers and software users. It includes written specifications for software, what it does, how it fulfils the specified details, and how to use it. \n\n**Document Development Life Cycle (DDLC)** is a methodical procedure. It enables the creation of documents in a structured order. It's important while creating a document to improve its precision and understandability for end consumers. \n\nEach member of the content management system must be aware of DDLC. It includes technical writers and other content producers. The content developers can improve the accuracy and quality of the complete documentation. For this, they must follow a systemic process. They need to deliver it on schedule. \n\nDDLC can be divided in the following phases: Analysis and planning, Design, Content development, Proofreading and editing, Publishing, and Maintenance.\n\n## Discussion\n\n### Why is the document development life cycle important?\n\nThe objective of the workflow for user information/ documentation, is to:\n\n + Transform software use cases into designs for online and print information. This includes navigation and classification.\n + Implement the documentation design as an in-depth, accurate, and consistent set of information.\n + Make sure that the data is enough for people to use the product.\n + Generate or distribute the data in a timely and effective manner.It helps extract textual information on software specifications. This includes what it does, how it completes the required tasks, and even user instructions. \n\n\n### What are the stages of DDLC?\n\nTo begin a software product's documentation process, the first stage is researching the target audience's needs and analyzing their pain points. \n\nThe next stage is to design the document and break it down into smaller segments. Then, structure it with the appropriate format and style. \n\nTechnical writers handle the content development stage. This stage requires domain knowledge and technical information. Reviewing and editing are also crucial. Technical writing tools ease collaboration and a transparent review process. \n\nAfter completing the first draft, technical and editorial experts review the document to ensure accuracy and generate feedback. The writers must integrate the suggestions to maintain quality. \n\nTechnical writers take a printout of the document to ensure structure and address any minor issues. The content is published online with added links. \n\nA clear plan for frequent updates is necessary. It requires monitoring with authoring tools. Be responsive and provide users with instant information. \n\n\n### What tools are used for documentation development?\n\nMicrosoft Word will work for a small company with one/two technical writers. This tool won't be enough for a larger company. They'll need to look at other authoring tools like ClickHelp to handle the development life cycle (DDLC). \n\nTechnical writers can use translation tools to write. They can also translate their documentation. This helps to speed up the documentation process. \n\nTools are required for taking screenshots. Pictures in software documentation help achieve better outcomes. They provide more basic yet thorough understanding of the product. 
\n\nSometimes, the best option might be to use diagrams to present information. The major benefit comes from embedding a diagram from a cloud-based diagramming tool into a topic. All the diagrams across all the documents update automatically on updating the source. \n\nInserting videos in online documentation is a recent trend. Documentation becomes clearer and more enjoyable for the readers. Hence, it produces excellent results. \n\n\n### Describe the Spiral model of DDLC and the Waterfall model of DDLC.\n\nThe Waterfall Model is a linear sequential approach to software development, where each stage of the development process must be completed before the next one can begin. It has six steps: Analyse, Design, Write, Revise, Distribute and Maintain. It has an advantage. Once the design is finalised, you don't need to return to the Design and Analyse phases for further iteration. Process optimization is achieved here. \n\nThe Spiral Model is a more flexible and iterative approach that incorporates elements of the Waterfall model and prototyping. It allows for feedback and revision at each phase of development and is based on the idea of continuous improvement. It is divided into four parts: Analyse, Design, Write and Evaluate. The disadvantage is that it is time-consuming. All four stages must be followed every time the document needs to be changed or updated. There are similarities in both process models. Hence, there are challenges for the documentation team in both. \n\n\n### What are the challenges associated with DDLC?\n\nDocument development can be a time-consuming process, and deadlines may be tight, especially in fast-paced industries. \n\nThe document's content might be complex or technical. This would make it challenging to present the information in a way that is easy to understand for the target audience.\n\nKeeping track of document revisions and ensuring that everyone is working on the most up-to-date version can be challenging, particularly when multiple stakeholders are involved.\n\nDocument approval processes may involve many stakeholders with differing opinions. This can result in delays and conflicts.\n\nIt can be challenging to ensure that all documents are consistent in terms of formatting, style, and tone, particularly when multiple authors are involved.\n\nEnsuring that the documents are accessible to all users, including those with disabilities, may require additional effort.\n\nThe document might need to be translated into multiple languages. This can add additional complexity and time to the development process. \n\n\n### What are some practical tips for technical writers?\n\nUnderstand the target audience and their knowledge level. It helps customize the writing style and language accordingly. \n\nAvoid using technical and complex terms as much as possible. Use simple and clear language that is easy to understand.\n\nOrganize your content in a logical and simple manner. Use headings, subheadings, and bullet points to break up the content.\n\nUse visuals such as diagrams, charts, and illustrations. This would help make your content more engaging and easier to understand.\n\nAlways review and edit your work. It helps ensure accuracy and clarity. Get feedback from your peers or subject matter experts to improve your writing.\n\nKeep up with the latest industry trends and technologies. This ensures the writing is relevant and informative.\n\nUse templates and style guides to ensure consistency in your writing style and formatting.\n\nBe open to feedback. 
Be willing to adapt your writing style to meet the needs of your audience.\n\nContinuously improve your writing skills by seeking feedback and learning from others. Stay updated with the latest trends and technologies in your field. \n\n## Milestones\n\n1986\n\nThe American National Standards Institute (ANSI) releases the Standard Generalized Markup Language (SGML). It becomes the basis of several subset markup languages including HTML. \n\n1987\n\nEarly desktop publishing and page layout software begin appearing on writers’ desktops. This includes products like Ventura Publisher, Interleaf, FrameMaker, and Aldus PageMaker. \n\n1992\n\nProEdit is founded in Atlanta, GA. \n\n1999\n\nWriters begin using XML, an “eXtensible Markup Language”. It's evolving from HTML. \n\n2002\n\nThe Sarbanes-Oxley Act of 2002 creates new opportunities for technical writers documenting policies, procedures, and internal controls.","meta":{"title":"Document Development Life Cycle","href":"document-development-life-cycle"}} {"text":"# Dotdot\n\n## Summary\n\n\nDotdot is the universal language of the Internet of Things (IoT), making it possible for devices to work together on any network. Dotdot makes devices interoperable, regardless of vendor or connectivity technologies. Consumers can buy any appliance and expect it to talk to another appliance, so long as both are Dotdot certified.\n\nDotdot was developed by the Zigbee Alliance, an open, global, non-profit organization. Dotdot itself was made possible by the Zigbee Cluster Library (ZCL) defined for the Zigbee stack. Dotdot is more universal. It sits at the application layer but lower layers can be Zigbee, Thread, Wi-Fi, Bluetooth, and more. Zigbee devices could already interoperate among themselves seamlessly. Dotdot extends this to non-Zigbee devices. \n\nOther organizations or communities (Bluetooth, OMG, OCF, GS1, Haystack, Schema.org) are also developing specifications for IoT interoperability at the application layer. These could complement or compete against Dotdot.\n\n## Discussion\n\n### What's the need for Dotdot?\n\nIoT devices connect with various wired or wireless technologies and networking protocols. That's fine so long as devices can understand one another at the application layer. \n\nBack in 2017, a toaster couldn't communicate with a coffee machine. A smart hub probably couldn't control an off-the-shelf smart lock or thermostat. A light switch couldn't turn off or dim a lamp. This is because devices come from different vendors. Their interfaces are either closed, limited or proprietary. For example, devices from Apple, Google, Amazon or Samsung probably need different mobile apps to control them. \n\nThis is a difficult scenario for any systems integrator who has to understand and interface many different technologies. Even at runtime, devices will sacrifice some processing to protocol translation. What we need is a common language at the application layer that all devices can understand. This is exactly what Dotdot provides. \n\nIt's possible to build a cloud platform that understands all devices. The cloud then becomes the convergence point for devices to talk to one another. But for many applications, relying on the cloud is unacceptable due to reliability, complexity and latency. \n\n\n### What are the benefits of using Dotdot?\n\nDotdot has the following benefits: \n\n + Single solution for all markets - A single application layer that works over many networks means a single choice for developers and consumers.
This ends market fragmentation across segments: home, building, industrial, retail, health, energy, and more.\n + Easy - All necessary documents, references and tools are available in a single location and a single certification mark on every certified product and package.\n + Secure - Unique IDs, DTLS sessions, operational certificates and Access Control Lists (ACLs).\n + Global - Built on the open standards and global membership of 400 Zigbee Alliance members. Uses 2.4 GHz and 800-900 MHz bands that are globally available.\n + Proven - Based on the Zigbee Cluster Library (ZCL) that's deployed in over 300 million products for over 10 years.\n + Reliable and robust - Dotdot brings a rich catalog of device interaction models to IP networks. This enables devices to interoperate natively, while being able to interact with similar/complementary devices on a Zigbee network.\n + Interoperable - Certification ensures device-to-device interoperability. Enables and connects multi-vendor ecosystems.\n\n### Where does Dotdot fit within the protocol stack?\n\nDotdot is meant for the application layer. Zigbee Alliance also has a certification program to certify devices that are compliant with the Dotdot specification. At the lower layers, Dotdot can interwork with any network or connectivity protocol. This includes Zigbee, Thread, Wi-Fi, Bluetooth, and more. \n\nWith Dotdot, we can have direct device-to-device communications within a Zigbee network or within a Thread network. When a Zigbee device needs to talk to a Thread device, there will be a gateway to translate between the two networks. However, the devices can understand each other at the application layer, thanks to Dotdot. With Dotdot over Thread, there's no need for a gateway to connect the device to the cloud. Since Thread is IP based, such devices can directly talk to the cloud. \n\n\n### How is Dotdot related to the Zigbee Cluster Library (ZCL)?\n\nZigbee Alliance standardized ZCL so that different devices can interoperate at the application layer. ZCL contains 100+ device types, 2400+ certified products, and has matured over 15 years. The only problem with ZCL is that it works only with the Zigbee stack. This is where Dotdot fits in. \n\nDotdot can be seen as a universal application layer that can work with any underlying stack. As an example, Dotdot interfacing to the Thread stack was shown at CES 2017. In this case, ZCL maps to CoAP Resources and HTTP verbs such as GET, PUT, POST, and DELETE. \n\nDotdot enables Zigbee devices to get connected to the Internet. In fact, Dotdot is considered an alias for **ZCL over IP (ZCLIP)**. It's been said that, \n\n> Dotdot is a standard that allows you to put ZCL on any “rails” other than Zigbee – WiFi, Thread, and so on.\n\nZCL is optimized for constrained devices. Messages are compact, most fitting within a 127-byte 802.15.4 packet. Zigbee Alliance gets a head start by reusing the work done on ZCL for IP networks. \n\n\n### Which are the documents relevant to Dotdot development?\n\nFor mapping ZCL to IP and RESTful interfaces, the following IETF documents are relevant: \n\n + RFC 6690: Constrained RESTful Environments (CoRE) Link Format\n + RFC 7252: Constrained Application Protocol (CoAP)\n + RFC 7049: Concise Binary Object Representation (CBOR)\n\nThe ZCL Specification is an essential reference. It's also worth reading NXP's ZCL User Guide.
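As a hedged illustration of what a ZCL-over-CoAP interaction might look like in code, the sketch below issues a CoAP GET and decodes a CBOR payload using the `aiocoap` and `cbor2` Python libraries. The device address and resource path are invented for the example and are not taken from the ZCLIP specification.

```python
import asyncio
import cbor2                      # RFC 7049 payload encoding
from aiocoap import Context, Message, GET

async def read_attribute():
    # Hypothetical Dotdot-style resource on a Thread/IP device (illustrative path).
    uri = "coap://[fd00::1]/zcl/e/1/6/a/0"
    protocol = await Context.create_client_context()
    request = Message(code=GET, uri=uri)
    response = await protocol.request(request).response   # send GET, await response
    print("Response code:", response.code)
    print("Decoded attribute value:", cbor2.loads(response.payload))

asyncio.run(read_attribute())
```
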
\n\n\n### What exactly is the Dotdot Commissioning App?\n\nThe Dotdot Commissioning App is an app that's mostly based on the Thread Commissioning App developed for commissioning Thread-enabled devices. This app facilitates management and expansion of a Dotdot network. \n\nThe app first discovers Dotdot-compliant devices in a Thread network. It interrogates each device to discover services, clusters and endpoints. It discovers commands supported by each device. Once this is done, the app can send commands and change attributes on devices. \n\nZigbee Alliance members get access to this app. This saves operators the trouble of developing their own apps.\n\n\n### What are the alternatives to Dotdot?\n\nIoTivity is a reference implementation of Open Connectivity Foundation (OCF) specifications. It uses RESTful interfaces. \n\nIPSO Alliance is also involved in defining specifications. \n\nObject Management Group (OMG) defined the Data Distribution Service (DDS) for its Industrial Internet Consortium (IIC). OMG has defined standards for healthcare and retail. \n\nBluetooth Special Interest Group (SIG) concerns itself with application layer interoperability among constrained devices. It does this via Generic Attributes (GATT) profiles and assigned numbers. \n\nGS1 is involved in supply chain standards for data exchange across Retail, Healthcare, and Transport & Logistics industries. Its standards include EPC Information Service (EPCIS), Core Business Vocabulary (CBV), Global Product Classification (GPC), and more. \n\nIETF's Extensible Provisioning Protocol (EPP) is an application-layer, client-server protocol for the provisioning and management of objects stored in a shared central repository. Developed for Internet domains and hosts, it can be extended to IoT. \n\nProject Haystack is an open community looking at semantic data models and web services. \n\nSchema.org manages a common set of semantic schemas. Its current ontology can be extended to IoT. \n\n## Milestones\n\n2007\n\nZigbee Alliance publishes the first release of the **Zigbee Cluster Library (ZCL)**. Revision 6 of this document appears in January 2016. A cluster is a related collection of commands and attributes, which together define an interface to specific functionality. Example clusters include HVAC, lighting, security and safety, and measurement and sensing. ZCL later becomes a starting point for Dotdot.\n\n2013\n\nAt a meeting in Boston, a stack vendor and two door lock manufacturers discuss how best to interface their products. Over a few days, they agree on what eventually becomes the **Door Lock Cluster** within ZCL. This cluster defines how doors can be locked/unlocked or their PIN codes changed. This is just one example of how clusters in ZCL are organized around specific device types. \n\n2017\n\nThough the IoT world has seen a number of protocols at the connectivity layers over the last few years, this is the year when there's serious talk about interoperability at the application layer. Since various devices have to understand one another in terms of types, capabilities and interfaces, it's also called **semantic interoperability**. Many specifications and guidelines are available by the end of the year. \n\nJan \n2017\n\nAt CES, Zigbee Alliance demonstrates how Dotdot can enable devices across Zigbee and IP networks to talk to each other. In particular, they show Thread-based devices that have Dotdot at the application layer.
\n\nFeb \n2017\n\nThread Group releases version 1.1 of the **Thread specification** and a certification program. \n\nDec \n2017\n\nThe Zigbee Alliance and Thread Group announce the availability of the **Dotdot over Thread Specification**. The specification is available to Zigbee Alliance members. By mid-2018, they expect to release Dotdot Commissioning Application and launch a certification program. \n\nJan \n2019\n\nDotdot over Thread certification program is launched. Support for more networks other than Thread is added later in the year.","meta":{"title":"Dotdot","href":"dotdot"}} {"text":"# Bidirectional RNN\n\n## Summary\n\n\nMany applications are sequential in nature. One input follows another in time. Dependencies among these give us important clues as to how they should be processed. Since Recurrent Neural Networks (RNNs) model the flow of time, they're suited for these applications. \n\nRNN has the limitation that it processes inputs in strict temporal order. This means current input has context of previous inputs but not the future. Bidirectional RNN (BRNN) duplicates the RNN processing chain so that inputs are processed in both forward and reverse time order. This allows a BRNN to look at future context as well. \n\nTwo common variants of RNN include GRU and LSTM. LSTM does better than RNN in capturing long-term dependencies. **Bidirectional LSTM (BiLSTM)** in particular is a popular choice in NLP. These variants are also within the scope of this article.\n\n## Discussion\n\n### Could you explain Bidirectional RNN with an example?\n\nConsider the phrase, 'He said, \"Teddy \\_\\_\\_\". From these three opening words it's difficult to conclude if the sentence is about Teddy bears or Teddy Roosevelt. This is because the context that clarifies Teddy comes later. RNNs (including GRUs and LSTMs) are able to obtain the context only in one direction, from the preceding words. They're unable to look ahead into future words. \n\nBidirectional RNNs solve this problem by processing the sequence in both directions. Typically, two separate RNNs are used: one for forward direction and one for reverse direction. This results in a hidden state from each RNN, which are usually concatenated to form a single hidden state. \n\nThe final hidden state goes to a decoder, such as a fully connected network followed by softmax. Depending on the design of the neural network, the output from a BRNN can either be the complete sequence of hidden states or the state from the last time step. If a single hidden state is given to the decoder, it comes from the last states of each RNN. \n\n\n### What are some applications of Bidirectional RNN?\n\nBiLSTM has become a popular architecture for many NLP tasks. An early application of BiLSTM was in the domain of speech recognition. Other applications include sentence classification, sentiment analysis, review generation, or even medical event detection in electronic health records. \n\nBiLSTM has been used for POS tagging and Word Sense Disambiguation (WSD). For Named Entity Recognition (NER), Lample et al. used word representations that captured both character-level characteristics and word-level context. These were fed into a BiLSTM encoder layer. The sequence of hidden states was decoded by a CRF layer. \n\nFor lemmatization, one study used two-layer bidirectional GRUs for the encoder. The decoder was a conditional GRU plus another GRU layer. Another study used a two-layer BiLSTM encoder and a one-layer LSTM decoder. 
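For tagging tasks like POS tagging or NER, a BiLSTM layer is typically sandwiched between an embedding layer and a per-token classifier. The following is a minimal Keras sketch; the vocabulary size, tag count, sequence length and layer widths are arbitrary placeholders, not values from any of the studies cited above.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 10000   # placeholder vocabulary size
NUM_TAGS = 17        # e.g. number of POS or NER labels
MAX_LEN = 50         # padded sentence length

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),                                   # token-id sequences
    layers.Embedding(VOCAB_SIZE, 128),                                # word embeddings
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),     # forward + backward LSTM
    layers.TimeDistributed(layers.Dense(NUM_TAGS, activation="softmax")),  # per-token tag scores
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

By default, Keras concatenates the forward and backward hidden states, which doubles the feature size seen by the per-token classifier.
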
A stack of four BiLSTMs has been used for Semantic Role Labelling (SRL). \n\nIn general, the paradigm of **embed-encode-attend-predict** has become popular in NLP work. The encode part benefits from BiLSTM, which has been shown to capture position-sensitive features. \n\nBeyond NLP, BiLSTM has been applied to image processing applications such as OCR. \n\n\n### What are merge modes in Bidirectional RNN?\n\nMerge mode is about how forward and backward hidden states should be combined before being passed on to the next layer. In Keras package, supported modes are summation, multiplication, concatenation and averaging. The default mode is concatenation and this is what most research papers use. \n\nIn MathWorks, as on December 2019, only concatenation was supported. \n\n\n### What are some limitations of Bidirectional RNN?\n\nOne limitation with BRNN is that the entire sequence must be available before we can make predictions. For some applications such as real-time speech recognition, the entire utterance may not be available and BRNN may not be adequate. \n\nIn the case of language models, the task is to predict the next word given preceding words. BRNN is clearly not suitable since it expects future words as well. Applying BRNN in this application will give poor accuracy. Moreover, BRNN is slower than RNN since results of the forward pass must be available for the backward pass to proceed. Gradients will therefore have a long dependency chain. \n\nLSTMs capture long-term dependencies better than RNN and also solve the exploding/vanishing gradient problem. However, stacking many layers of BiLSTM creates the vanishing gradient problem. Deep neural networks so successful with CNNs are not so successful with BiLSTMs. \n\n## Milestones\n\nNov \n1997\n\nSchuster and Paliwal propose **Bidirectional Recurrent Neural Network (BRNN)** as an extension of the standard RNN. Since the forward and backward RNNs don't interact, they can be trained similar to the standard RNN. On regression and classification experiments they observe better results with BRNN. \n\n2005\n\nFor phoneme classification in speech recognition, Graves and Schmidhuber use **Bidirectional LSTM** and obtain good results. It's based on the insight that humans often understand sounds and words only after hearing the future context. In particular, often we don't require an output immediately upon receiving an input. We can afford to wait for a sequence of inputs and then work on the output. They also show that BRNN takes eight times longer to converge than BiLSTM. \n\nSep \n2016\n\nGoogle replaces its phrase-based translation system with **Neural Machine Translation (NMT)**. It uses a deep LSTM network with 8 encoder and 8 decoder layers. The first layer of encoder is BiLSTM while all others are LSTM. \n\nApr \n2017\n\nPre-trained word embeddings are commonly used in neural networks for NLP. However, they don't capture context. Supervised learning for capturing context uses limited labelled data. To overcome this limitation, Peters et al. use BiLSTM to learn a language model (LM) and initialize a neural network for sequence tagging. This network uses **two-layer bidirectional GRUs**. They experiment with NER and chunking. They find best result when LM embeddings are used at the output of the first layer. \n\nJul \n2017\n\nFor the task of Semantic Role Labelling (SRL), He et al. use an eight-layer network consisting of four BiLSTMs. Their network includes highway connections and transform gates that control inter-layer information flow. 
Output prediction is done by a softmax layer. \n\nFeb \n2018\n\nWhen BRNNs are stacked, they suffer from vanishing gradients and overfitting. Ding et al. propose a **Densely Connected BiLSTM (DC-BiLSTM)** as a solution. This essentially means that a layer's hidden state includes the hidden states of all preceding layers. They show that the proposed architecture can handle up to 20 layers while improving performance over BiLSTM. \n\nJun \n2018\n\nPeters et al. publish details of a language model called **Embeddings from Language Models (ELMo)**. ELMo representations are deep, that is, they're a linear combination of the states of all LSTM layers rather than using only the top layer representation. They show that higher layers capture context-dependent semantics whereas lower layers capture syntax. While their model uses both forward and backward LSTMs, forward LSTM stack is independent of the backward LSTM stack. Representations at each layer of the two stacks are concatenated. For this reason, they use the term **Bidirectional Language Model (BiLM)**.","meta":{"title":"Bidirectional RNN","href":"bidirectional-rnn"}} {"text":"# Word Sense Disambiguation\n\n## Summary\n\n\nMany words have multiple meanings or senses. For example, the word *bass* has at least eight different senses. The correct sense can be established by looking at the context of use. This is easy for humans because we know from experience how the world works. **Word Sense Disambiguation (WSD)** is about enabling computers to do the same. \n\nWSD involves the use of syntax, semantics and word meanings in context. It's therefore a part of **computational lexical semantics**. WSD is considered an AI-complete problem, which means that it's as hard as the most difficult problems in AI. \n\nBoth supervised and unsupervised algorithms are available. The term **Word Sense Induction (WSI)** is sometimes used for unsupervised algorithms. In the 2010s, word embeddings became popular. Such embeddings used with neural network models represent the current state-of-the-art models for WSD.\n\n## Discussion\n\n### Could you explain Word Sense Disambiguation with an example?\n\nConsider sentences \"I can hear bass sounds\" and \"They liked grilled bass\". The meaning or sense of the word 'bass' is low frequency tones or a type of fish respectively. The word alone is not sufficient to determine the correct sense. When we consider the word in the context of surrounding words, the sense becomes clear. Using WSD, the second sentence can be **sense-tagged** as \"They like/ENJOY grilled/COOKED bass/FISH\". \n\nContext comes from the words 'sounds' or 'grilled'. It's also helpful to know that these collocated words are noun and adjective respectively. One comes after 'bass' and the other comes before 'bass'. These syntactic relations give additional information. In general, pre-processing steps such as POS tagging and parsing help WSD. \n\nA difficult example is \"the astronomer married the star\". \n\nThere's no universally agreed senses for a word. Senses can also vary with domains. WSD also relies on knowledge. Without knowledge, it's impossible to determine the correct sense. It's expensive to build knowledge resources. Back in the 1990s, this was seen as the *knowledge acquisition bottleneck*. \n\n\n### Could you mention some applications of WSD?\n\nWSD is usually seen as an \"intermediate task\", as a means to an end. Obtaining the correct word sense is helpful in many NLP applications. Exactly how WSD is used is application specific. 
\n\nHere are some examples: \n\n + **Machine Translation**: An English translation of the French word 'grille' can be railings, bar, grid, scale, schedule, etc. Correct word sense disambiguation is therefore necessary.\n + **Information Retrieval**: When searching for judicial references with the word 'court', we wish to avoid matches pertaining to royalty.\n + **Thematic Analysis**: Themes are identified based on word distribution but we include only words of the relevant sense.\n + **Grammatical Analysis**: In POS tagging or syntactic analysis, WSD is useful. In the French sentence \"L'étagère plie sous les livres\", livres refers to 'books' and not 'pounds'.\n + **Speech Processing**: WSD helps in obtaining the correct phonetization in speech synthesis.\n + **Text Processing**: For inserting diacritics, WSD helps in correcting the French word 'comte' to 'comté'. For case changes, WSD corrects 'HE READ THE TIMES' to 'He read the Times'. To \"Wikify\" online documents, WSD helps.\n\n### What are some essential terms to know about word senses?\n\nConsider the word 'bank'. This can refer to a financial institution or a sloping mound. These two senses of the same word are unrelated but they look and sound the same. We call them **homonyms**. The sense relation is called **homonymy**. Typically, homonyms have different origins and different dictionary entries. \n\nA bank can also refer to the building that houses the financial institution. These senses are semantically related. The sense relation is called **polysemy**. \n\n**Synonyms** are different words with same or nearly same meaning. **Antonyms** are words with opposite meaning. \n\nConsider two words in which one is a subclass of the other, a type-of relation, such as mango and fruit. Mango is a **hyponym** of fruit. Fruit is a **hypernym** of mango. \n\nConsider two words that form a part-whole relation, such as wheel and car. Wheel is a **meronym** of car. Car is a **holonym** of wheel. \n\n\n### Which are the essential elements for doing WSD?\n\nWSD requires two main sources of information: **context** and **knowledge**. Context is established from neighbouring words and the domain of discourse. Sense-tagged corpora provide knowledge, leading to data-driven or corpus-based WSD. Use of lexicons or encyclopaedia lead to knowledge-driven WSD. \n\nWe also need to know possible word senses. These can be **enumerative**, with WordNet being an example. A **generative** model underspecifies senses until context is considered. Rules generate senses. These rules capture regularities in the creation of senses. \n\nIn **lexical sample** task, WSD is applied for a sample of pre-selected words. Supervised ML approach is possible based on hand-labelled corpus. In **all-words** task, WSD is applied for all words, for which supervised ML approach is not practical. Dictionary-based approaches or bootstrapping techniques are more suitable. \n\nA **bag-of-words** approach can be used for context. To preserve word ordering, **collocation** can be used when forming feature vectors. Such a vector might include the word's root form and its POS. Syntactic relations, distance from target and selectional preferences are other approaches. \n\n\n### Could you describe some algorithms for WSD?\n\nA simple supervised approach is to use a **naive Bayes classifier**. We maximize the probability of word sense given a feature vector. The problem is simplified by using Bayes' Rule and assuming features are independent of one another. 
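A toy sketch of this supervised approach can be built with scikit-learn's bag-of-words features and multinomial naive Bayes; the handful of sense-tagged training sentences below are invented purely for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented sense-tagged contexts for the target word 'bass'.
contexts = [
    "he plays bass guitar in a band",         # MUSIC
    "turn up the bass and the treble",        # MUSIC
    "they caught a striped bass in the lake", # FISH
    "grilled bass with lemon for dinner",     # FISH
]
senses = ["MUSIC", "MUSIC", "FISH", "FISH"]

# Bag-of-words context features + naive Bayes sense classifier.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(contexts, senses)

print(clf.predict(["the bass sounds were too loud"]))   # expected: MUSIC
print(clf.predict(["we fished for bass all morning"]))  # expected: FISH
```
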
Another approach is **decision list classifier**. \n\nThe **Corpus Lesk** algorithm uses a sense-tagged corpus. We also use the definition or gloss of each sense from a dictionary. Examples from the corpus and the gloss become the signature of the sense. Then we compute the number of overlapping words between the signature and the context. Inverse Document Frequency (IDF) weighting is applied to discount function words (the, of, etc.). The simplified Lesk algorithm uses only the gloss for signature and doesn't use weights. \n\nFor evaluation, **most frequent sense** is used as a baseline. Frequencies can be taken from a sense-tagged corpus such as SemCor. Lesk algorithm is also a suitable baseline. *Senseval* and *SemEval* have standardized sense evaluation. \n\n\n### How are word embeddings relevant to the problem of WSD?\n\nSchütze (1998) proposed the use of word vectors and context vectors. These are large dimensional vectors and often sparse. To make them practical for computation, Singular Value Decomposition (SVD) reduces the number of dimensions. Within such a vector space model, Latent Semantic Analysis (LSA) helps to determine semantic relations. Word embeddings is a modern alternative. \n\nWord embeddings were proposed by Bengio et al. (2003). These are low-dimensional dense vectors that capture semantic information. However, words with multiple senses are reduced to a single vector. This is not directly useful for WSD. To overcome this, Trask et al. (2015) proposed **sense2vec**, where representations are of senses, not words. Sense2vec improved the accuracy of other NLP tasks such as named entity recognition and neural dependency parsing. \n\nIacobacci et al. (2016) explored the direct use of word embeddings. Using *It Makes Sense (IMS)* framework along with Word2vec, they improved F1 scores on various WSD datasets. Word embeddings of target word and its surrounding words are converted into \"sense vectors\" using various methods: concatenation, average, fractional decay, exponential decay. \n\n\n### What are some neural network approaches to WSD?\n\nNeural network approaches to WSD have become popular in the 2010s. Wiriyathammabhum et al. (2012) applied **Deep Belief Networks (DBN)**. They pre-trained the hidden layers using various knowledge sources, layer by layer. They then used a separate fine tuning step for better discrimination. \n\nWe lose sequential and syntactic information when averaging word vectors. Instead, Yuan et al. (2016) proposed a semi-supervised **LSTM** model with label propagation. To capture contexts from both sides, Kågebäck and Salomonsson (2016) applied **Bidirectional LSTM**. \n\nMany models consider only context. Knowledge sources such as WordNet are ignored. **Gloss-Augmented WSD (GAS)** considers both context and glosses (sense definitions) and uses BiLSTM. \n\nOne attention-based approach is a **encoder-decoder model** with multiple attentions on different linguistic features. Another is **GlossBERT**. It uses BERT, encodes context-gloss pairs of all possible senses, and treats WSD as a sentence-pair classification problem. \n\n\n### What are some resources to do WSD?\n\nA number of datasets and sense-annotated corpora are available to train WSD models: Senseval and SemEval tasks (all-words, lexical sample, WSI), AIDA CoNLL-YAGO, MASC, SemCor, and WebCAGe. Likewise, word sense inventories include WordNet, TWSI, Wiktionary, Wikipedia, FrameNet, OmegaWiki, VerbNet, and more. These are supported by the modular Java framework **DKPro WSD**. 
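For quick experimentation, NLTK bundles WordNet as an enumerative sense inventory together with a simplified Lesk baseline. The snippet below is a minimal sketch; it assumes NLTK is installed and its WordNet data has been downloaded, and the printed sense is simply whatever the gloss-overlap heuristic picks for this context.

```python
# Assumes NLTK is installed and nltk.download('wordnet') has been run.
from nltk.corpus import wordnet as wn
from nltk.wsd import lesk

# WordNet as an enumerative sense inventory: list the candidate senses of 'bass'.
for synset in wn.synsets('bass'):
    print(synset.name(), '-', synset.definition())

# Simplified Lesk baseline: choose the sense whose gloss overlaps most with the context.
context = 'they liked grilled bass'.split()
sense = lesk(context, 'bass')
print('Lesk picks:', sense, '-', sense.definition())
```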
\n\n**UKB** is an open-source toolkit that can be used for knowledge-based WSD. It should be used with optimal default parameters. \n\nIn January 2017, Google released word sense annotations on MASC and SemCor datasets. Senses are from New Oxford American Dictionary (NOAD). NOAD senses are also mapped to WordNet. \n\nACLWiki has curated a list of useful WSD resources. This includes papers, inventories, annotated corpora and software. \n\nRuder captures the current state of the art in WSD with links to relevant papers. Papers With Code also maintains a list of recent papers on WSD.\n\n## Milestones\n\nJul \n1949\n\nWarren Weaver considers the task of using computers to **translate** text from one language to another. He recognizes the importance of context and meaning. He makes references to cryptography and statistical semantic studies as possible approaches to obtaining the correct meaning of a word. Weaver also notes that a word mostly has only one meaning within a particular domain. \n\n1953\n\nOswald and Lawson propose **microglossaries** for machine translation. These are glossaries assembled for a particular domain. Through the 1950s, researchers produce many such domain-specific glossaries to aid machine translation. For example, a microglossary for mathematics would define 'triangle' as a geometric shape and not as a musical instrument. \n\n1955\n\nErwin Reifler defines what he calls *semantic coincidences* between a word and its context. He also notes that **syntactic relations** can be used to disambiguate. For example, the word 'kept' can have an object that's gerund (He kept eating), adjectival phrase (He kept calm), or noun phrase (He kept a record). \n\n1957\n\nMasterman makes use of synonyms, near synonyms and associated words from Roget's Thesaurus. She gives the example of \"flowering plant\". Using the **thesaurus**, we can determine that 'vegetable' is the only common sense for the words 'flowering' and 'plant'. This is therefore the correct sense of the word 'plant' in this context. \n\n1965\n\nMadhu and Lytle propose the use of what they call **Figure of Merit**. This is a probabilistic measure that's useful when grammatical structure alone is unable to disambiguate. They focus on scientific and engineering literature and identify ten groups. The group or context is determined using words with single meaning. Then the most probable meaning of words with multiple meanings is selected given the context. Paradoxically, this is also the time when interest in machine translation declines. \n\n1970\n\nIn the 1970s, some notable approaches include **Semantic Networks** of Quillian and Simmons; **Preferential Semantics** of Wilks; word-based understanding of Riesbeck. Early semantic networks can be traced to the late 1950s. \n\n1986\n\nLesk uses a **machine-readable dictionary (MRD)** for WSD. In general, the 1980s sees large-scale lexical resources (such as WordNet) for automated knowledge extraction. This is also when focus shifts from linguistic theories to empirical methods. \n\n1990\n\nThe use of neural networks had been suggested in the early 1980s but was limited to a few words and hand-coded. Véronis and Ide extend this idea by using a machine-readable Collins English Dictionary. Network is formed using dictionary entries and words used to define them. Word nodes activate sense nodes. Feedback allows competing senses to inhibit one another. \n\n1991\n\nBrown et al. show that it's possible to disambiguate by aligning sentences in two languages. 
A word in one language might translate into different words in another language, each with a unique sense. In 1992, Gale et al. extend this idea by using Canadian Hansards (parliamentary debates) that's available in more than one language. This avoids expensive hand-labelled corpus. \n\n1995\n\nSupervised WSD algorithms have the problem of requiring sense-annotated corpora, which is expensive and laborious to create. Yarowsky proposes an **unsupervised** WSD algorithm. The algorithm uses two useful constraints: one sense per collocation, one sense per discourse. It's a bootstrapping procedure that seeds a small number of sense annotations. The algorithm then determines and iteratively improves on the senses for other occurrences of the word. An example is to disambiguate 'plant', which can be about plant life or a manufacturing plant. \n\n1998\n\nSchütze proposes a vector space approach to WSD via **clustering**. Senses are seen as clusters of similar contexts. A sense of a particular word is the cluster to which it's closest. Since the technique is unsupervised, senses are induced from a corpus. Word vectors are calculated using cooccurrences. Word vectors are sparse but context vectors are dense. The dimensions of both vectors are reduced using Singular Value Decomposition (SVD). \n\n1999\n\nMihalcea and Moldovan make use of **WordNet** for WSD. They rank different senses using WordNet's semantic density for a word-pair and **web mining** for word pair cooccurrences. In 2004, Peter Turney also employs web mining to calculate cooccurrence probabilities that are used to generate semantic features for WSD. \n\n2010\n\nThis decade sees an increasing use of word embeddings and neural network models for WSD. Some of these include Gloss-Augmented WSD (GAS), GlossBERT, and use of BiLSTM. \n\nMar \n2018\n\nRamiro et al. study the evolution of word senses. They note that cognitive efficiency drives this evolution through a process called **nearest-neighbour chaining**. For new word senses, reuse of existing words (polysemy) is more common than new word forms.","meta":{"title":"Word Sense Disambiguation","href":"word-sense-disambiguation"}} {"text":"# Confusion Matrix\n\n## Summary\n\n\nIn statistical classification, we create algorithms or models to predict or classify data into a finite set of classes. Since models are not perfect, some data points will be classified incorrectly. Confusion matrix is basically a tabular summary showing how well the model is performing. \n\nIn one dimension, the matrix takes the actual values. The matrix then maps these to the predicted values in the other dimension. In reality, the matrix is like a histogram. The entries in the matrix are counts. For example, it records how many data points were predicted as \"true\" when they were actually \"false\". \n\nConfusion matrix is useful in both binary classification as well as multiclass classification problems. There are many performance metrics that can be computed from the matrix. Learning these metrics is handy for a statistician or data scientist.\n\n## Discussion\n\n### What are the elements and terminology used in Confusion Matrix?\n\nLet's also consider a concrete example of a pregnancy test. Based on a urine test, we predict if a person is pregnant or not. We assume that the ground truth (pregnant or not) is available to us. We therefore have four possibilities: \n\n + **True Positive (TP)**: We predict a pregnant person is pregnant. 
This is a good prediction.\n + **True Negative (TN)**: We predict a non-pregnant person is not pregnant. This is a good prediction.\n + **False Positive (FP)**: We predict a non-pregnant person is pregnant. This type of error is also called *Type I Error*.\n + **False Negative (FN)**: We predict a pregnant person is not pregnant. This type of error is also called *Type II Error*.\n\nWhen these are arranged in matrix form, it will be apparent that correct predictions are represented along the main diagonal. Incorrect predictions are in the non-diagonal cells. This makes it easy to see where predictions have gone wrong. We may also say that the matrix represents the model's inability to classify correctly, and hence the \"confusion\" in the model. \n\n\n### What metrics are used for evaluating the performance of a prediction model?\n\nPerformance metrics from a confusion matrix are represented in the following equations: \n\n$$Recall\\ or\\ Sensitivity=TP/(TP+FN)=TP/AllPositives\\\\Specificity=TN/(TN+FP)=TN/AllNegatives\\\\Precision=TP/(TP+FP)=TP/PredictedPositives\\\\Prevalence=(TP+FN)/Total=AllPositives/Total\\\\Accuracy=(TP+TN)/Total\\\\Error\\ Rate=(FP+FN)/Total$$\n\nIt's important to understand the significance of these metrics. Accuracy is an overall measure of correct prediction, regardless of the class (positive or negative). The complement of accuracy is error rate or misclassification rate.\n\nHigh recall implies that very few positives are misclassified as negatives. High precision implies that very few negatives are misclassified as positives. There's a trade-off here. If the model is partial towards positives, we'll end up with high recall but low precision. If the model favours negatives, we'll end up with low recall and high precision. \n\nHigh specificity, like high precision, implies that very few negatives are misclassified as positives. If positive represents some disease, specificity is the model's confidence in clearing a person as disease-free. Sensitivity is the model's confidence in diagnosing a person as diseased. \n\nIdeally, recall, specificity, precision and accuracy should all be close to 1. FNR, FPR and error rate should be close to 0.\n\n\n### Could you give a numerical example showing calculations of performance measures of a prediction model?\n\nThis example has 165 samples, with TP=100, TN=50, FP=10 and FN=5. We show the following calculations: \n\n + Recall or True Positive Rate (TPR): TP/(TP+FN) = 100/(100+5) = 0.95\n + False Negative Rate (FNR): 1 - TPR = 0.05\n + Specificity or True Negative Rate (TNR): TN/(TN+FP) = 50/(50+10) = 0.83\n + False Positive Rate (FPR): 1 - TNR = 0.17\n + Precision: TP/(TP+FP) = 100/(100+10) = 0.91\n + Prevalence: (TP+FN)/Total = (100+5)/165 = 0.64\n + Accuracy: (TP+TN)/Total = (100+50)/165 = 0.91\n + Error Rate: (FP+FN)/Total = (10+5)/165 = 0.09\n\n### Why do we need so many performance measures when accuracy can be sufficient?\n\nIf the dataset has 90% positives, then achieving 90% accuracy is easy by predicting only positives. Thus, accuracy is not a sufficient measure when the dataset is imbalanced. Accuracy also doesn't differentiate between Type I (False Positive) and Type II (False Negative) errors. This is where the confusion matrix gives us more useful measures with FPR and FNR; or their complementary measures, Recall and Specificity respectively.\n\nConsider the multiclass problem of iris classification that has three classes: setosa, versicolor and virginica. This has an accuracy of 84% (32/38) but it doesn't tell us where the errors are happening. 
With the confusion matrix, it's easy to see that only versicolor is wrongly classified. The matrix also shows that versicolor is misclassified as virginica and never as setosa. We can also see that Recall is 62% (10/16) for versicolor. \n\nIn fact, when classes are not evenly represented in the data, confusion matrix by itself doesn't give an adequate visual representation. For this reason, we use a **normalized confusion matrix** that takes care of class imbalance. \n\n\n### What are other performance metrics for a classification/prediction problem?\n\n**F-measure** takes a harmonic mean of Recall and Precision, (2\\*Recall\\*Precision)/(Recall+Precision). It's a value closer to the smaller of the two. Applying this to our earlier example, we get F-measure = (2\\*0.95\\*0.91)/(0.95+0.91) = 0.92 \n\nA commonly used graphical measure is the **ROC Curve**. It's generated by plotting the True Positive Rate (y-axis) against the False Positive Rate (x-axis) as we vary the threshold for assigning observations to a given class. \n\nHow often will we be wrong if we always predict the majority class? **Null Error Rate** gives us a measure for this. It's a useful baseline when evaluating a model. In our example, null error rate would be 60/165 = 0.36. If the model always predicted positive, it would be wrong 36% of the time. \n\n**Cohen's Kappa** can be applied to know how well a classifier is performing as opposed to classifying simply by chance. A high Kappa score implies accuracy differs a lot from null error rate. \n\n\n### What's the procedure to make or use a Confusion Matrix?\n\nWe certainly need both the actual values and the predicted values. We can arrange the actual values by rows and the predicted values by columns, although some may swap the two. It's therefore important read the arrangement of the matrix correctly. For each actual value, count the number of predicted values for each class. Fill these counts into the matrix. \n\nThere's no threshold for good accuracy, sensitivity or other measures. They should be interpreted in the context of problem, domain and business.\n\n\n### Could you mention some tools and techniques in relation to the Confusion Matrix?\n\nIn R, package *caret: Classification and Regression Training* can be used to get confusion matrix with all relevant statistical information. The function is `confusionMatrix(data=predicted, reference=expected)`. This plots actuals (called reference) by columns and predictions by rows. \n\nIn Python, package *sklearn.metrics* has an equivalent function, `confusion_matrix(actual, predicted)`. This plots actuals by rows and predictions by columns. Other related and useful functions are `accuracy_score(actual, predicted`) and `classification_report(actual, predicted)`. \n\n## Milestones\n\n1904\n\nMathematician Karl Pearson publishes a paper titled *On the theory of contingency and its relation to association and normal correlation*. Contingency and correlation between two variables can be seen as the genesis of confusion matrix.\n\n1971\n\nJames Townsend publishes a paper titled *Theoretical analysis of an alphabetic confusion matrix*. Uppercase English alphabets are shown to human participants who try to identify them. Alphabets are presented with or without introduced noise. The resulting confusion matrix is of size 26x26. With noise, Townsend finds that 'W' is misidentified as 'V' 37% of the time; 32% of 'Q' are misidentified as 'O'; 'H' is identified correctly only 19% of the time. 
\n\n1998\n\nThe term **Confusion Matrix** becomes popular in the ML community when it appears in a glossary featured in *Special Issue on Applications of Machine Learning and the Knowledge Discovery Process* by Ron Kohavi and Foster Provost. \n\n2011\n\nIn a paper titled *Comparing Multi-class Classifiers: On the Similarity of Confusion Matrices for Predictive Toxicology Applications*, researchers show how to compare predictive models based on their confusion matrices. For lower FNR, they propose regrouping performance measures of multiclass classifiers into a binary classification problem.","meta":{"title":"Confusion Matrix","href":"confusion-matrix"}} {"text":"# Neural Networks for NLP\n\n## Summary\n\n\nThe use of statistics in NLP started in the 1980s and heralded the birth of what we called **Statistical NLP** or **Computational Linguistics**. Since then, many machine learning techniques have been applied to NLP. These include naïve Bayes, k-nearest neighbours, hidden Markov models, conditional random fields, decision trees, random forests, and support vector machines. \n\nThe use of neural networks for NLP did not start until the early 2000s. But by the end of the 2010s, neural networks transformed NLP, enhancing or even replacing earlier techniques. This has been made possible because we now have more data to train neural network models and more powerful computing systems to do so. \n\nIn traditional NLP, features were often hand-crafted, incomplete, and time-consuming to create. Neural networks can learn multilevel features automatically. They also give better results.\n\n## Discussion\n\n### Which are the main innovations in the application of NN to NLP?\n\nTwo main innovations have enabled the use of neural networks in NLP: \n\n + **Word Embeddings**: This enabled us to represent words as real-valued vectors. Instead of having a sparse representation, word embeddings allowed us to represent words in a much smaller dimensional space. We could identify similar words due to their closeness in this vector space, or use analogies to exploit semantic relationships between words.\n + **NN Architectures**: These had evolved in other domains such as computer vision and were adapted to NLP. This started in language modelling, and was later applied to morphology, POS tagging, coreference resolution, parsing, and semantics. From these core areas, neural networks were applied to applications: sentiment analysis, speech recognition, information retrieval/extraction, text classification/generation, summarization, question answering, and machine translation. These architectures are usually not as deep (many hidden layers) as those found in computer vision.\n\n### Which are the NN architectures that have been used for NLP?\n\nEarly language models used feedforward NN or convolutional NN architectures but these didn't capture context very well. Context is how one word occurs in relation to surrounding words in the sentence. To capture context, recurrent NNs were applied. **LSTM**, a variant of RNN, was then used to capture long-distance context. **Bidirectional LSTM (BiLSTM)** improves upon LSTM by looking at word sequences in forward and backward directions. \n\nTypically, the dimensionality of input and output must be known and fixed. This is problematic for machine translation. For example, the best translation of a 10-word English sentence might be a 12-word French sentence. This problem is solved by a **sequence-to-sequence** model that's based on **encoder-decoder** architecture. 
The essence of the encoder is to encode an entire input sequence into a large fixed-dimensional vector, called the **context vector**. The decoder implements a language model conditioned on the input sequence. \n\nTo encode contextual information in a single context vector is difficult. This gave rise to the idea of **attention** where more information is given to decoder. From here, the **transformer** model evolved. \n\n\n### What's been the general trend in NLP research with neural networks?\n\nLanguage modelling has been essential for the progress of NLP. Because of the ready availability of text, it's been easy to train complex models in an unsupervised manner on lots of training data. The intent is to train the model to learn about words and the contexts in which they occur. For example, the model should learn a vector representation of \"bank\" and also discriminate between a river bank and a financial institution. \n\nA **pretrained language model**, first proposed in 2015, can save us expensive training on vast amounts of data. However, such a pretrained model may need some amount of training on domain-specific data. Then the model can be applied to many downstream NLP tasks. This approach is similar to pretrained word embeddings that didn't capture context. \n\nThe use of a pretrained language model in another downstream task is called **transfer learning**, a concept that's also common in computer vision. \n\nIt's expected that transformer model will dominate over RNN. Pretrained models will get better. It'll be easier to fine tune models. Transfer learning will become more important. \n\n\n### Could you share some real-world examples of NN in NLP?\n\nIn 2018, Google introduced Smart Compose in Gmail. A seq2seq model using email subject and previous email body gave good results but failed to meet latency constraints. They finally settled on a hybrid of bag-of-words (BoW) and RNN-LM. Average embeddings are fed to RNN-LM. \n\nAt Amazon, they've used a lightweight version of ELMo to augment Alexa functions. While ELMo uses a stack of BiLSTM, they use a single layer since Alexa transactions are linguistically more uniform. They trained the embeddings in an unsupervised manner, and then trained on two tasks (intent classification and slot tagging) in a supervised manner while only slowly adjusting the embeddings. They also did transfer learning on new tasks. \n\nUber has used NLP to filter tickets related to map data. Using word2vec, they trained word embeddings on one million tickets. This had the limitation that all words are treated equally. They then experimented with WordCNN and LSTM networks. They got best results with word2vec trained on customer tickets and used it with WordCNN. For future work, they suggested character-level (CharCNN) embeddings that are more resilient to typos. \n\n## Milestones\n\n2001\n\nBengio et al. point out the **curse of dimensionality** where the large vocabulary size of natural languages makes computations difficult. They propose a **feedforward neural network** that jointly learns the language model and vector representations of words. They refine their methods in a follow-up paper from 2003. \n\n2008\n\nCollobert and Weston train a language model in an unsupervised manner from Wikipedia data. They use supervised training for both syntactic tasks (POS tagging, chunking, parsing) and semantic tasks (named entity recognition, semantic role labelling, word sense disambiguation). 
To model long-distance dependencies, they use a **Time-Delay Neural Network (TNN)** inspired from CNN. They use multiple layers to move from local features to global features. Moreover, two models can share word embeddings, an approach called **multitask learning**. \n\n2010\n\nMikolov et al. use a **recurrent neural network (RNN)** for language modelling and apply this for speech recognition. They show better results than traditional n-gram models. \n\n2012\n\nDahl et al. combine deep neural network with hidden Markov model (HMM) for large vocabulary speech recognition. \n\n2013\n\nGoing beyond just word embeddings, Kalchbrenner and Blunsom map an entire input sentence to a vector. They use this for machine translation without relying on alignments or phrasal translation units. In another research, **LSTM** is found to capture long-range context and therefore suitable for generating sequences. In general, 2013 is the year when there's research focus on using CNN, RNN/LSTM and recursive NN for NLP. \n\n2014\n\nUsing pretrained word2vec embeddings, Yoon Kim uses CNN for sentence classification. Also in 2014, Sutskever et al. at Google apply **sequence-to-sequence** model to the task of machine translation. They use separate 4-layered LSTMs for encoder and decoder. Reversing the order of source sentences allows LSTM to exploit short-term dependencies and therefore do well on long sentences. Seq2seq models are suited for NLG tasks such as captioning images or describing source code changes. \n\nSep \n2014\n\nBahdanau et al. apply the concept of **attention** to the seq2seq model used in machine translation. This helps the decoder to \"pay attention\" to important parts of the source sentence. It doesn't force the encoder to pack all information into a single context vector. Effectively, the model does a soft alignment of input to output words. \n\nNov \n2015\n\nDai and Le propose a two-step procedure of unsupervised pre-training followed by supervised training for text classification. This **semi-supervised** approach works well. Pre-training helps to initialize the model for supervised training and generalization. More unlabelled data during pre-training is seen to improve supervised learning. In later years, this approach becomes important. \n\nSep \n2016\n\nGoogle replaces its phrase-based translation system with **Neural Machine Translation (NMT)**. This reduces translation errors by 60%. It uses a deep LSTM network with 8 encoder and 8 decoder layers. The first layer of encoder is BiLSTM. The model also uses residual connections among the LSTM layers. \n\nMay \n2017\n\nRecurrent architectures can't be parallelized due to their sequential nature. Gehring et al. therefore propose using CNNs for seq2seq modelling since CNNs can be parallelized and make best use of GPU hardware. The model uses gated linear units, residual connection and attention in each decoder layer. \n\nDec \n2017\n\nVaswani et al. propose the **transformer** model in which they use a seq2seq model without using RNN. The transformer model relies only on **self-attention**. By 2018, the transformer leads to state-of-the-art models such as **OpenAI GPT** and **BERT**. \n\n2018\n\nResearchers at the Allen Institute for Artificial Intelligence introduce **ELMo (Embeddings from Language Models)**. While earlier work derived contextualized word vectors, this was limited to the top LSTM layer. ELMo's word representations use all layers of a bidirectional language model. This allows ELMo to model syntax, semantics and polysemy. 
Such a language model can be pretrained on a large scale and then used for a number of downstream tasks. \n\nFeb \n2019\n\n**OpenAI GPT-2** shows its power in natural language generation. Trained on 8 million websites, it has 1.5 billion parameters. The model is initially not released to the public due to concerns of misuse (such as fake news generation). However, in November 2019, GPT-2 is released.","meta":{"title":"Neural Networks for NLP","href":"neural-networks-for-nlp"}} {"text":"# Probabilistic Neural Network\n\n## Summary\n\n\nA Probabilistic Neural Network (PNN) is a feed-forward neural network in which connections between nodes don't form a cycle. It's a classifier that can estimate the probability density function of a given set of data. PNN estimates the probability of a sample being part of a learned category. Machine learning engineers use PNN for classification and pattern recognition tasks. A PNN is designed to solve classification problems by using a statistical memory-based approach that can be supervised or unsupervised. \n\nThe probabilistic neural net is based on the idea of conventional probability theory, such as Bayesian classification and other estimators for probability density functions, to construct a neural net for classification. The widespread use of PNN originated from the usage of kernel functions for discriminant analysis and pattern recognition.\n\n## Discussion\n\n### What is the architecture of a PNN?\n\nThe PNN architecture has four layers: \n\n + **Input Layer**: \\(p\\) neurons represent the input vector and distribute it to the next layer. \\(p\\) equals the number of input features.\n + **Pattern Layer**: This layer applies the kernel to the input. It organizes the learning set by representing each training vector by a hidden neuron that records the features of this vector. During inference, each neuron calculates the Euclidean distance between the input test vector and the training sample, then applies the radial basis kernel function. In this way, it encodes the PDF centered on each training sample or pattern. For class \\(j\\), we have \\(n\\_j\\) neurons, for \\(j\\) in \\([1,m]\\).\n + **Summation Layer**: This layer computes the average of the output of the pattern units for each class. There's one neuron for each class. Each class neuron is connected to all neurons in the pattern layer of that class.\n + **Output Layer**: This layer selects the maximum value from the summation layer, and the associated class label is determined accordingly.\n\n### Could you explain PNN with a simple example?\n\nConsider the task of classifying the letters O, X, and I. The characters can be in uppercase or lowercase. We consider two features: length and area of each character. Consequently, the training set will have 6 letters `(O,o,X,x,I,i)`. Each training data point will be identified with a `(length, area)` value. For example, `O(0.5,0.7)`, `o(0.2,0.5)`, `X(0.8,0.8)`, `x(0.4,0.5)`, `I(0.6,0.3)` and `i(0.3,0.2)`. \n\nThe input layer of the PNN will have two neurons, one for each feature, that is, one node for length and one for area.\n\nWe have three classes. Each class has two patterns in the pattern layer, one for uppercase and one for lowercase. For example, for class O there are two subtypes (O,o). In total, the pattern layer has six neurons.\n\nThe summation layer will calculate the average value for each pattern type of the pattern layer and output layer will pick the maximum value, thereby determining the suitable class O, X, I. 
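A minimal NumPy sketch of this example is given below. The four layers collapse into a few lines: the pattern layer applies a Gaussian (radial basis) kernel to the Euclidean distance from each training character, the summation layer averages the kernel outputs per class, and the output layer picks the class with the largest average. The smoothing parameter σ = 0.2 and the test point are arbitrary values chosen for illustration.

```python
import numpy as np

# Training patterns from the example: (length, area) for each character, grouped by class.
patterns = {
    'O': np.array([[0.5, 0.7], [0.2, 0.5]]),   # O, o
    'X': np.array([[0.8, 0.8], [0.4, 0.5]]),   # X, x
    'I': np.array([[0.6, 0.3], [0.3, 0.2]]),   # I, i
}
sigma = 0.2  # smoothing parameter; an assumed value for illustration

def classify(x):
    x = np.asarray(x, dtype=float)
    scores = {}
    for label, samples in patterns.items():
        # Pattern layer: Gaussian kernel of the Euclidean distance to each training sample.
        d2 = np.sum((samples - x) ** 2, axis=1)
        kernel = np.exp(-d2 / (2 * sigma ** 2))
        # Summation layer: average the pattern-layer outputs of this class.
        scores[label] = kernel.mean()
    # Output layer: the class with the maximum averaged activation.
    return max(scores, key=scores.get), scores

label, scores = classify([0.55, 0.65])   # a hypothetical test character's (length, area)
print(label, scores)                     # classified as 'O' for this test point
```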
\n\nAn advantage of PNN is that there is no back-propagation training. New pattern units can be added without additional time overhead, since no training is needed; it is automatic.\n\n\n### What are the concepts from which PNN was derived?\n\nThe **Parzen window** density estimation, or the Kernel Density Estimation (KDE), is a non-parametric density estimation technique. It's used to derive a density function \\(f(x)\\). When we have a new training sample \\(x\\) and there's a need to compute the value of the likelihoods, \\(f(x)\\) is used. \\(f(x)\\) takes the sample input data value and returns the density estimate of the given data sample. This doesn't require any knowledge about the underlying distribution and is also used for classification. \n\nParzen windows are seen as a generalization of **k-Nearest Neighbour (KNN)** techniques. Rather than choosing k nearest neighbours of a test point and labelling the test point with the weighted majority of its neighbours' votes, one can consider all points in the voting scheme and assign their weights by using kernel function. \n\nKNN is a non-parametric algorithm based on supervised learning. This is used for classification and regression. The KNN algorithm assumes that similar things exist in close proximity. It considers k nearest neighbours (data points) to predict the class or continuous value for the new data point. \n\n\n### How does PNN work?\n\nThe input layer transmits the characteristics of the sample to the network, specifically the pattern layer. The number of input neurons are the same as dimensions of the sample. \n\nFor the pattern layer, the Euclidean distance between the feature vector of the training sample \\(X\\) and radial center \\(x\\_{ij}\\) realizes matching between the input feature vector and various types in training set. Here, \\(X=[x\\_1, x\\_2, … , x\\_n] \\cdot T\\), for \\(n\\) in \\([1 .. l]\\), and \\(l\\) represents all types of training, \\(d\\) is the dimension of eigenvector, \\(x\\_{ij}\\) is the j-th center of the i-th training sample, and σ is a smoothing factor. The pattern layer also shows \\(m\\) different classes. Among the \\(l\\) neurons, each one belongs to exactly one class.\n\n\\(v\\_i\\) is the output for class \\(i\\) in the summation layer. \\(L\\) is the number of class \\(i\\) neurons. The type corresponding to maximum output in the summation layer is the output type of the output layer, given by\\(Type(v\\_i) = arg max(v\\_i)\\).\n\n\n### How do I select the right smoothing parameter for a PNN?\n\nParticularly when the training dataset is limited, performance of a PNN depends on the right selection of the smoothing parameter σ. Small σ creates a multimodal distribution. Larger σ leads to interpolation between points. Very large σ approaches Gaussian PDF. Intuitively, σ should depend on the density of the samples. \n\nThe simplest technique is to use the standard deviation of training samples for each dimension or feature. Cross-validation (training vs validation datasets) gives better generalization. Clustering is another technique. In **gap-based estimation**, Zhong et al. improved on these techniques by modelling the distances between a training sample and its neighbours. They estimated σ per input feature, noting that estimating σ per feature per class is not as good. \n\n**Genetic algorithms** have been used to estimate σ. In R language, `pnn` package uses a genetic algorithm from `rgenoud` package to estimate σ. 
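To make the cross-validation option concrete, here's a rough sketch of choosing σ by leave-one-out validation over a small candidate grid. The toy dataset, the grid values and the simple PNN scoring function are all placeholder assumptions; practical implementations (such as the genetic-algorithm search mentioned above) are considerably more elaborate.

```python
import numpy as np

def pnn_predict(x, X_train, y_train, sigma):
    """Score each class by the average Gaussian kernel over its training samples."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    kernel = np.exp(-d2 / (2 * sigma ** 2))
    return max(np.unique(y_train), key=lambda c: kernel[y_train == c].mean())

def loo_accuracy(X, y, sigma):
    """Leave-one-out cross-validation accuracy for a candidate smoothing parameter."""
    hits = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        hits += int(pnn_predict(X[i], X[mask], y[mask], sigma) == y[i])
    return hits / len(X)

# Toy data (placeholder): two noisy 2-D clusters, 20 samples per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(1.0, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

candidates = [0.01, 0.05, 0.1, 0.3, 0.5, 1.0]
best = max(candidates, key=lambda s: loo_accuracy(X, y, s))
print({s: round(loo_accuracy(X, y, s), 2) for s in candidates}, 'best sigma:', best)
```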
\n\nKusy and Zajdel studied three techniques from **reinforcement learning**: Q(0)-learning, Q(λ)-learning, and stateless Q-learning. Results were similar to state-of-the-art performance of alternative approaches. \n\n\n### What are some of the variations of the traditional PNN?\n\n**Enhanced PNN (EPNN)** uses Local Decision Circles (LDCs) that enable incorporation of local information and non-homogeneity existing in the training population. The circle has a radius that limits the contribution of the local decision. The two Bayesian rules used by EPNN are: (a) A global rule that estimates the conditional probability of each class, given an input vector of data considering all training data and using the spread parameter. (b) A local rule that estimates the conditional probability of each class, given an input vector of data existing within a decision circle, considering only the training data. \n\n**Competitive PNN (CPNN)** adds novel competitive features to EPNN to utilize data most critical to the classification process. A competitive layer ranks kernels for each class and an optimum fraction of kernels are selected to estimate the class-conditional probability. \n\n**Supervised Learning PNN (SLPNN)** has three kinds of network parameters that are adjusted through training: variable weights representing the importance of input variables, reciprocal of kernel radius representing the effective range of data, and data weights representing the data reliability. \n\n\n### What are some specializations of PNN?\n\nOften researchers attempt to change the PNN's structure so that it becomes more practical to implement the pattern layer even with large number of training samples. Type of data is also a motivation to specialize. We note a few of these. \n\nZaknich et al. applied PNN for **time series analysis**. Their architecture was modified so that current output depends on current input, preceding five inputs and following five inputs. Thus, their input vector had 11 coefficients. Tested on sinusoidal signals, attenuated and then corrupted with noise, the PNN network produced a smoothened output. Performance improved when the model contained more classes. \n\n**Interval PNN** is a PNN that classifies interval data. As is common in practical applications, the training dataset may be accurate but test data contains less precise measurements. This imprecision is handled by the model using intervals. \n\n\n### What are the applications based on PNN?\n\nPNN's main applications include classifying labelled stationary data patterns or patterns with time-varying PDF. In signal processing, PNN considers waveforms as patterns and thereby recognizes specific events and their severity. In one example, PNN was used to recognize 11 types of disturbances in power quality using waveforms of voltage magnitude, frequency, and phase. \n\nPNN is applied to pattern recognition problems such as character/object/face recognition. PNN brings flexibility, straightforward design and minimal training time.\n\nFor text-independent speaker identification, PNN provides good results in matching speaker for each input vector. To increase the success rate, multiple input vectors from each sample are needed. \n\nThe figure shows PNN applied to ship identification. Even with a noisy image has been used as input of neural network, PNN performs well. Covariance matrix of discrete wavelet transform of ship image is used as input. 
\n\nPNN is used for overcoming the computational complexity involved in performing sensor configuration management in a wireless ad-hoc network. \n\n\n### What are the pros and cons of PNN?\n\nIn a PNN, there's no extensive training computation time associated with networks that use back-propagation. Instead, each data pattern is represented with a unit that measures the similarity of the input pattern to the data patterns. PNN learns from the training data instantaneously. With this speed of learning, PNN has the capability to adapt its learning in real time, deleting or adding training data as new conditions arise. Additionally, PNNs are relatively insensitive to outliers and approach Bayes optimal classification as the number of training samples increases. PNNs are guaranteed to converge to an optimal classifier as the size of the representative training set increases.\n\nHowever, PNN has its limitations. Because there's one hidden node for each training instance, more computational resources (storage and time) during inference. Additionally, the performance of the system usually decreases in terms of the classification accuracy and speed with a very big hidden layer. \n\n\n### Which are the main research approaches to improve PNN performance?\n\nTo reduce expensive computational times and storage requirements of a full Parzen window classifier, a weighted-Parzen window classifier is an option. A **clustering** procedure is used to find a set of reference vectors and weights that are used to approximate the Parzen window (kernel estimator) classifier. For clustering, even k-means algorithm can be applied. The basis is that not all the patterns contain original, independent, and discriminating information. Thus, clustering can reduce the number of neurons in the pattern layer. \n\nWhen input samples have too many features, **Principal Component Analysis (PCA)** can be used to reduce the number of features. \n\nFor optimization and alteration of spread parameters in PNNs, in **heteroscedastic PNN** (hetero - different, skedasis - dispersion), the kernels of each class are allowed to have their own spread parameter matrix. \n\nThe issue of data heterogeneity and noisy datasets are addressed by **EPNN**, by implementing Local Decision Circles (LDCs) to modify the spread parameter of each training vector and bi-level optimization to find the optimal value of spread parameter and radius of LDCs. \n\n## Milestones\n\n1962\n\nParzen discusses the problem of estimation of a PDF and the problem of determining the mode of a PDF. He also relates the similarity of the problem of estimating a PDF to the problem of estimating the spectral density function of a stationary time series. While the problem of estimating the mode of a PDF is almost similar to the problem of maximum likelihood estimation of a parameter. \n\n1966\n\nIn an attempt to perform classification for pattern recognition, Specht uses a Bayes strategy to merely transform the problem to one of estimating PDFs for each of the possible categories on the basis of training samples available. This is accomplished with an estimator which a) is shown to be consistent (tends to be identical with the true density in the limit as the number of training samples is increased to infinity and b} can be expressed in terms of a polynomial, the coefficients of which can be computed on a one-pattern-at-a-time basis. 
\n\n1990\n\nSpecht introduces the term Probabilistic Neural Network, for a neural network that replaces the sigmoid activation function often used in neural networks with an exponential function. A PNN can compute nonlinear decision boundaries which approach the Bayes optimal. Architecturally, the neural network is designed to have four layers that can map any input pattern to any number of classifications. This technique offers a tremendous speed advantage for problems in which the incremental adaptation time of back propagation is a significant fraction of the total computation time. \n\n1991\n\nIn order to compensate the flaw of PNN of not being robust with respect to affine transformations of feature space, leading to poor performance on certain data, a weighted PNN (WPNN) is derived. This allows anisotropic Gaussians, i.e. Gaussians whose covariance is not a multiple of identity matrix. \n\n2000\n\nMao et al. propose two improvements to the PNN: select a suitable smoothing parameter using a genetic algorithm and then select a representative set of pattern layer neurons from the training samples using Forward Regression Orthogonal Algorithm. A similar research was published independently by Chen et al. in September 1999. \n\n2007\n\nA modified PNN for brain tissue segmentation with MRI is proposed. Here, covariance matrices are used to replace the singular smoothing factor in the PNN's kernel function, and weighting factors are added in the pattern of summation layer. This weighted PNN (WPNN) classifier can account for partial volume effects that exist commonly in MRI, not only in the final result stage, but also in the modelling process. \n\n2010\n\nAn enhanced and generalized PNN (EPNN) is proposed using local decision circles (LDCs) to overcome the shortcoming of PNN wherein it doesn't consider probable local densities or heterogeneity in training data. Also, EPNN improves PNN's robustness to noise in data. \n\n2011\n\nA Supervised Learning PNN (SLPNN) is proposed with three kinds of network parameters that can be adjusted through training. The SLPNN is slightly more accurate than MLP and much more accurate than PNN. \n\n2016\n\nConsidering that spread has a great influence on PNN's performance, a self-adaptive PNN (SaPNN) is proposed. In this, spread can be self-adaptively adjusted and selected and then the best selected spread is used to guide the SaPNN train and test. This SaPNN has a more accurate prediction and better generalization performance as compared to basic PNN. \n\n2016\n\nA modified PNN (MPNN) is introduced which is an extension of PNN with the weight coefficients introduced between pattern and summation layer of the model. These weights are calculated by using the sensitivity analysis procedure. MPNN improves the prediction ability of the PNN classifier. \n\n2017\n\nA Competitive PNN (CPNN) is presented wherein a competitive layer ranks kernels for each class and an optimum fraction of kernels are selected to estimate the class-conditional probability. Performance percentage of CPNN is found to be greater than or equivalent to that of traditional PNN.","meta":{"title":"Probabilistic Neural Network","href":"probabilistic-neural-network"}} {"text":"# Richardson Maturity Model\n\n## Summary\n\n\nConsider remote web servers as huge repositories of data. Client applications can access them via APIs. It could be public data such as weather forecasts, stock prices, or sports updates. 
Data could be private such as company-specific business information that's accessible to employees, vendors or partners.\n\nREST (REpresentational State Transfer) is a popular architectural style for designing web services to fetch or alter remote data. APIs conforming to the REST framework are considered more mature because they offer ease, flexibility and interoperability. **Richardson Maturity Model (RMM)** is a four-level scale that indicates extent of API conformity to the REST framework. \n\nThe maturity of a service is based on three factors in this model: URI, HTTP Methods and HATEOAS (Hypermedia). If a service employs these technologies, it’s considered more mature. This model covers only API architectural style, not data modeling or other design factors.\n\n## Discussion\n\n### What are the different levels in the Richardson Maturity Model?\n\nRMM has four levels of maturity, from the lowest to highest:\n\n + **Level-0: Swamp of POX**: Least conforming to REST architecture style. Usually exposes just one URI for the entire application. Uses HTTP POST for all actions, even for data fetch. SOAP or XML-RPC-based applications come under this level. POX stands for *Plain Old XML*.\n + **Level-1: Resource-Based Address/URI**: These services employ multiple URIs, unlike in Level 0. However, they use only HTTP POST for all operations. Every resource is identifiable by its own unique URI, including nested resources. This resource-based addressing can be considered the starting point for APIs being RESTful.\n + **Level-2: HTTP Verbs**: APIs at this level fully utilize all HTTP commands or verbs such as GET, POST, PUT, and DELETE. The request body doesn’t carry the operation information at this level. The return codes are also properly used so that clients can check for errors.\n + **Level-3: HyperMedia/HATEOAS**: Most mature level that uses *HATEOAS (Hypertext As The Engine Of Application State)*. It's also known as HyperMedia, which basically consists of resource links and forms. Establishing connections between resources becomes easy as they don’t require human intervention and aids client-driven automation.\n\n### Could you illustrate the RMM levels with an example?\n\nConsider the example of creating, querying, and updating employee data, which is included in the Sample Code section. \n\nAt Level 0, an employee's last name would be used to get employee details. Though just a query, HTTP GET is not used instead of HTTP POST. Response body would need to be searched to obtain the employee details. If there's more than one employee with the same last name, multiple records are returned in proprietary format.\n\nAt Level 1, specificity is improved by querying with employee ID rather than with last name. Each employee is uniquely identifiable. This implies that each employee has a unique URI, and therefore is a uniquely addressable resource in REST.\n\nWhile Level 1 uses only POST, Level 2 improves upon this design by using all the HTTP verbs. Query operations use GET, update operations use PUT and add operations use POST. This also ensures sanctity of employee data. Appropriate error codes are returned at HTTP protocol layer. This implies that we don't need dig into response body to figure out errors.\n\nAt Level 3, response about an employee includes unique hyperlinks to access that employee's records.\n\n\n### What are the characteristics of RMM Level 0?\n\nAPI at this level usually consists of functions passing the server object and the name of the interface to the server object. 
This is sent to the server in XML format using the HTTP post method. The server analyses the request and sends the result to the client, also in XML. If the operation fails, some sort of error message will be in the reply body.\n\nAt Level 0, you can do an actual object transfer between the client and server, not just its representation state (as in REST). Object data can be in standard formats (JSON, YAML, etc.) or in custom formats. If it’s SOAP, the specifications are in an accompanying WSDL file. \n\nHTTP is used in a very primitive manner, just as a tunnelling mechanism for the client’s own remote operations. For that matter, other application layer protocols such as FTP or SMTP can also be used. \n\n\n### What are the characteristics of RMM Level 1?\n\nLevel 1 employs several URIs, each of which leads to a specific resource and acts as an entry point to the server side. This aspect is common with Levels 2 and 3.\n\nRequests are delivered to the specific resource in question, rather than simply exchanging data between each individual service endpoint. While this seems like a small difference, in terms of function it’s a critical change. The identity of specific objects is established and invoked. Only arguments related to its function and form are passed to each object/resource. \n\nFrom this level onwards, it is always HTTP as the application layer protocol. The key non-conformity to REST in this level is disregarding the semantics of HTTP. Only HTTP POST is used for all CRUD (Create, Read, Update, Delete) operations. The return status and data retrieved are all in the body of the HTTP response. \n\n\n### What are the characteristics of RMM Level 2?\n\nThe emphasis in this level is on the correct usage of the various HTTP verbs, especially the use of GET and POST commands. By restricting all fetch operations to GET, an application can guarantee safety and sanctity of the data on the server side. Caching of data at client side for quicker access becomes possible too. \n\nThe model calls this level *HTTP Verbs*. It implies that only HTTP can be used. Actually, REST architectural pattern doesn't mandate the use of HTTP. It's entirely protocol neutral. \n\nAt the server side, the 200 series of HTTP response codes should be used properly to indicate success, and the 400 series to represent different types of error.\n\nTaken together, levels 1 and 2 clarify at a high level (such as in server logs) what resources are being accessed, for what purpose, and what happened.\n\nThe non-conformity here is quite subtle. The API at this level lacks the transparency for the client to navigate the application state without some external knowledge, generally present in supplied documentation. \n\n\n### What are the characteristics of RMM Level 3?\n\nIn order to ensure complete conformance to RESTful architecture, the final hurdle is to provide smooth navigation for the client throughout the application. This is supported through Hypermedia controls.\n\nThe HTTP responses include one or more logical links for the resources that the API accesses. That way, the client doesn’t have to rely on external documentation to interpret the content. Because the responses are self-explanatory, it encourages easy discoverability. This is done using **HATEOAS (Hypertext as the Engine of Application State)**, which forms a dynamic interface between the server and client. \n\nOne obvious benefit is that it allows the server to change its URI scheme without breaking clients. 
As long as the client-facing URI is maintained intact, the server can juggle around its resources and contents internally without impacting the client. Service updates in the application become seamless without service disruption. \n\nIt also allows the server team to inform the client of new features by putting new links in the responses that the client can discover.\n\n\n### What sort of applications can still be designed with Level 0 or Level 1 APIs?\n\nPlenty of online services belong to RMM Level 0 or 1 such as Google Search Service (now deprecated) and Flash-based websites. However, these are slowly being phased out. \n\nIf you are designing a monolithic service that performs only one major function, then probably a Level 0 API would suffice as there is no need for multiple URIs. Level 0 is adequate for a service of short-term purpose that's not meant to be extended or upgraded. Think of an exam result declaration website as an example. Its use is restricted to a few days when the results are out. After that, it would just become inactive and irrelevant. There is just one function that the web service performs: it returns an individual student’s pass/fail state. If a few resources are to be accessed, then Level 1 can be employed.\n\nIn cases where HTTP is not desired as the transport mechanism, then Level 0 APIs written using SOAP can work even with alternatives like FTP or SMTP. \n\n\n### When does it become necessary to have APIs designed at Level 2 or Level 3?\n\nWeb products and solutions are increasingly being deployed using the SaaS distribution model. To support this, design and manufacturing models are also service-oriented (SOA). They are exposed as microservices, a loosely coupled collection of services. \n\nPopular cloud-based subscriptions such as mobile services, streaming web content, office automation products are all deployed in this fashion. Such services are essentially designed with APIs conforming to the Levels 2 and 3 of the model.\n\nApplications choose Level 2 or 3 due to some additional factors – (1) Need to work even with limited bandwidth and computing resources (2) Rely on caching for greater performance (3) Stateless operations \n\nLower level APIs like SOAP can be compared to an envelope. It has content inside, address written outside. Content is obscured from public view. However, higher levels that conform to REST are like a postcard. Entire content is directly visible. Though this aspect brings a lot of transparency and discoverability, there is a security threat perception associated with the data. Applications take care of this aspect using security features such as in the Spring framework. \n\n\n### What are the other models used to judge the design standards of remote application interfaces?\n\nRichardson Maturity Model classifies applications with varying levels of API maturity based on conformance to REST framework.\n\nThere is another model called the **Amundsen Maturity Model**, which classifies APIs based on their data model abstraction. At higher levels of this model, the API is more decoupled from the internal models or implementation details. \n\n## Milestones\n\nMar \n1998\n\nDave Winer introduces **XML-RPC**, a lightweight rival of SOAP. SOAP development is held up due to several disagreements. Hence, XML-RPC comes out first. About this time (February 1998), W3C releases **XML Version 1.0**. \n\nMay \n2000\n\n**Simple Object Access Protocol (SOAP)**, originally developed at Microsoft, becomes a W3C recommendation. 
It marks the beginning for web services between clients and remote web servers. SOAP is widely adopted by most emerging companies like Salesforce, Ebay and Yahoo. \n\n2000\n\nRoy Fielding, unhappy with SOAP implementation comes out with a protocol neutral architectural style called **REST**. This is published as his PhD dissertation at UC Irvine. REST is just a set of design principles at this stage. \n\n2003\n\nAmazon S3, a file hosting service, releases APIs with support for URI and HTTP verbs. But the design doesn't provide resource links. Instead, resources are marked by unique key-value pairs. \n\n2004\n\nAdobe, Netflix and some other major service providers adopt web services design, supporting hypermedia links in their responses. This open API design is greatly appreciated. \n\n2008\n\nAfter analysing hundreds of web service designs, Leonard Richardson comes up with a model that helps to distinguish between good and bad designs. His yardstick is purely based on API maturity. This model is popularly referred to as **Richardson Maturity Model (RMM)**. \n\n2015\n\nNews aggregators and media companies like The Guardian and Wall Street Journal upgrade their web services for better REST conformity. Now their services are compliant with Level 3 of the Richardson Maturity Model, with hypermedia controls. \n\n2016\n\nRodríguez et al. publish results of a study on HTTP traffic from an Italian mobile internet provider's network. They note most APIs conform to RMM Level 2, implying a focus on providing CRUD access to individual resources. Even when some APIs are at higher levels of maturity, there's wide variation in the adoption of best practices. For example, use of trailing forward slash in URI is not recommended. Use of lowercase and hyphens (rather than underscores) in URI is recommended. \n\n2018\n\nAdobe releases Hypermedia driven UI framework for remote web services called Granite UI. Granite UI is an umbrella name for UI projects to build a LEGO® architecture for UI, so that one can build complex web applications with much less code.","meta":{"title":"Richardson Maturity Model","href":"richardson-maturity-model"}} {"text":"# Leaky Abstractions\n\n## Summary\n\n\nIn any large system of many components, it's impossible to keep in view full implementation details of all components. To manage complexity better, it's helpful to abstract away the details of other components when working on a particular component. Each component talks to other components via **interfaces** without worrying about the implementation details of those components. It's for this reason that **abstractions** are used. \n\nSometimes an abstraction is not perfect. When an abstraction fails to hide some of the underlying implementation details, we call this a **leaky abstraction**. In this case, client users of that interface will experience wrong or unsatisfactory behaviour. Clients can mitigate this by considering implementation details behind the interface and changing the way they use the interface.\n\n## Discussion\n\n### Could you explain leaky abstractions with an example?\n\nLet's take the example of hashing that takes plaintext and produces a hash out of it. This problem is so well defined that application programmers rarely need to write their own implementation. Many libraries are available for hashing and their interfaces nicely abstract away the implementations. Programmers rarely need to bother how the hashing is implemented. They can simply treat hashing as a \"black box\". 
Hashing is an example of a good abstraction. \n\nAn example of a leaky abstraction is Axios that wraps the *fetch* JavaScript API in browsers. When there's an HTTP error, Axios will coerce it into JavaScript error. This behaviour is different from *fetch* that treats even HTTP 404 responses as successful responses. Axios behaviour may work for many use cases but it's not the general case. Some applications may not want this behaviour. \n\nConsider a database search. A MySQL query containing `LIKE 'abc%'` is fast but one containing `LIKE '%abc%'` is slow. This is because indices use binary trees in which the latter search is not optimized. Thus, the implementation is exposed and clients have to be aware of this. \n\n\n### Which are the different types of leaky abstractions?\n\nIn some examples of leaky abstractions, we find that **performance** is affected. An MySQL query may run lot slower than expected. An array access may take lot longer than expected. \n\nIn other examples, we find that the **behaviour** is not as expected of the abstraction. An HTTP 404 status code is coerced into a JavaScript error. A database orchestration layer promises support for transactions when in fact it can't achieve this when dealing with multiple SQL and NoSQL databases. A single call to the service layer results in six HTTP calls when in fact the caller expects only one. \n\nAnother variant, or perhaps a related leak, is called **technical leak**. This can be stated as \"it compiles, but doesn't work\". For example, an interface would work only if the methods are called in a certain order. This is a technical leak that's called *temporal coupling*. Another example is initializing an object before using it or closing a database before destroying the object. Technical leaks require developers to learn something about the implementation even when it has nothing to do with business logic. \n\n\n### What does the Law of Leaky Abstractions say?\n\nThis is a phrase coined by Joel Spolsky in 2002. It says, \n\n> All non-trivial abstractions, to some degree, are leaky.\n\nSpolsky gives many examples where leaky abstractions arise. TCP provides higher layers reliable transport and delivery of packets. However, TCP can't do anything if a cable is cut or there's an overloaded hub along the way. The abstraction therefore leaks. \n\nWhen iterating over a 2-D array, performance shouldn't differ if you iterate by rows versus by columns. Ideally, a programmer should not care about how the array is stored in memory. In reality, when virtual memory is involved and page faults happen, some memory accesses may take a lot longer. C++ string class is another example of leaky abstraction. They're not first-class data types. On a string instance `s`, we can do `s + \"bar\"` but when we do `\"foo\" + \"bar\"` we have to recognize that strings are really `char*` underneath. \n\nIn conclusion, abstractions are good when writing code but we still have to learn what's underneath them. High-level languages with abstractions are paradoxically harder to work with since we have to learn what these abstractions are attempting to hide. \n\n\n### How can developers overcome leaky abstractions?\n\nAbstractions reduce complexity but they're not perfect. If an abstraction leaks too much, remove it or create a better one. If you're writing an abstraction, document its limitations. Abstractions are good but having too many adds complexity, as noted by David J. 
Wheeler, \n\n> All problems in computer science can be solved by another level of indirection, except for the problem of too many layers of indirection.\n\nGiven that at least some abstractions will leak, developers could create a **wrapper** around the abstraction. The application is required to call this wrapper rather than the original abstraction. This wrapper would modify the behaviour into what the application expects. \n\nIn a more extreme case, the developer **reimplements** the functionality to suit the application. This is not a good practice since, with the loss of the abstraction, the application becomes more complex. \n\nAnother approach is to **code between the lines**. The developer understands the implementation behind the abstraction (such as how memory is allocated) and contorts the code to suit that implementation. Code becomes more complex, less readable and less portable to other platforms. \n\n## Milestones\n\n1974\n\nThe fact that abstractions leak is recognized in the design of high-level programming languages that tend to abstract low-level details. A quote by Niklaus Wirth is relevant here,\n\n> I found a large number of programs perform poorly because of the language’s tendency to hide “what is going on” with the misguided intention of “not bothering the programmer with details.”\n\n1980\n\nIn the context of programming languages, some decisions taken by language designers are seen to be **pre-emptive**, that is, they constrain developers to use the language in a specific way. For example, a developer needs two triangular arrays but is forced to use two rectangular arrays (more memory) or pack them into a single rectangular array (complex code). We may say that the abstraction provided by the language doesn't suit such specialized use cases. \n\n1992\n\nGregor Kiczales explains leaky abstractions at a workshop. He proposes to divide an abstraction into two parts: one that does abstraction in the traditional way and another that allows clients some control over the implementation. He calls this an **open implementation** supported by meta-level architectures and metaobject protocols. For designing meta-level interfaces he notes four design principles: scope control, conceptual separation, incrementality and robustness. \n\n2002\n\nJoel Spolsky on his blog *Joel on Software* coins and explains the phrase \"Law of Leaky Abstractions\". \n\n2006\n\nRyan Bemrose of Microsoft states a corollary to the Law of Leaky Abstractions, \"An abstraction should not hide or disable that which it abstracts\". He gives an example of an IRC bot that could interface with third-party plugins. The IRC protocol itself is abstracted via an object-oriented interface. It's discovered that plugins that use custom modes can't function properly because such functionality was disabled by the abstraction. Abstractions are useful but we shouldn't attempt to abstract away everything. \n\nApr \n2017\n\nAt the ContainerWorld 2017 conference, one speaker notes that **containers** are also leaky abstractions. Processes running inside containers have to sometimes know about I/O performance, versions, configurations, garbage collection of old images, etc. In one example, it's seen that two containers contending for the same IO are affected.","meta":{"title":"Leaky Abstractions","href":"leaky-abstractions"}} {"text":"# gRPC\n\n## Summary\n\n\nTightly integrated monolithic applications of the past gave way to modular design. 
With the growth of the Internet, different parts of an application could reside on different servers and be accessed over the network using what we call *Remote Procedure Call (RPC)*. With the advent of cloud computing, applications are composed of microservices. These microservices are loosely integrated but their interfaces are precisely defined using APIs. \n\ngRPC is a framework that enables the implementation of highly scalable and performant applications whose parts are distributed over a network. The framework abstracts away the low-level networking details from app developers so that they can focus on the application logic. Developers need not worry about how one part calls a functionality in another remote part. gRPC takes care of this and enables more responsive real-time apps. \n\n**gRPC** is a recursive abbreviation that expands to **gRPC Remote Procedure Call**.\n\n## Discussion\n\n### How has gRPC come about?\n\nSince the 1990s, networking has played an important part in computing infrastructure. In distributed computing, one computer can trigger an action on another remote computer. CORBA, DCOM and Java's Remote Method Invocation (RMI) were used to make this possible. As the web matured in the early 2000s, RPC methodology started relying more and more on HTTP, XML, SOAP and WSDL. The idea was to use standardized technologies for interoperability regardless of languages or platforms. With Web 2.0 and cloud computing, the HTTP/XML/SOAP combination was replaced with HTTP/JSON, which resulted in REST. Cloud applications were composed of microservices and these communicated via REST APIs. \n\nMore recently, smartphones and IoT devices are invoking these microservices. These devices have limited compute power, communication bandwidth or both. In such cases, HTTP/JSON with REST is seen as an inefficient approach. gRPC solves this problem. In fact, back in 2016, within its data centers Google made about 100 billion RPC calls per second. At that scale, Google calling REST APIs would have been inefficient. \n\n\n### What are the essential elements of gRPC?\n\nOne way to summarize gRPC is that, \n\n> It's a lean platform using a lean transport system to deliver lean bits of code.\n\n**gRPC** is the platform. **HTTP/2** is the transport. **Protocol Buffers** deliver lean bits of code. Together, they reduce latency and improve efficiency. \n\ngRPC depends on HTTP/2. HTTP/2 supports bidirectional streaming, flow control and header compression. Moreover, multiple requests can be sent over a single TCP connection. \n\nWhen distributed parts of an application need to communicate with one another, data needs to be serialized. Protocol Buffers is used as the *Interface Definition Language (IDL)* to specify both the service interface and the structure of the payload messages. Thus, the specification is developer-friendly. In addition, data is sent on the wire in an efficient binary format. The encoding and decoding is transparent to the developer and done automatically by the framework.\n\ngRPC is about services and messages, not objects and references. gRPC is language agnostic: developers can use any programming language. gRPC is payload agnostic: the use of Protocol Buffers is not required. gRPC provides authentication out of the box. \n\n\n### What are the benefits of using gRPC?\n\nIn the world of RESTful JSON APIs, there can be problems of version incompatibility: the API has changed at the server, and the client is using an older version. Writing a client library along with authentication is also extra work. 
The use of HTTP/1.1 is also a poor fit for streaming services. Because JSON data is textual, it takes more bandwidth. It also takes more work to encode and decode JSON. \n\nWith gRPC, both client and server code are generated by the compiler. gRPC comes with language bindings for many popular languages. The use of Protocol Buffers solves the efficiency problem of JSON. Bidirectional streaming is possible and authentication is part of the framework. \n\nProtobuf ensures backward compatibility and offers validations and extensibility. Protobuf provides static/strong typing that REST lacks. This minimizes the overhead of encoding. \n\nCompared to other RPC frameworks, gRPC speaks the language of the web (HTTP/2), comes with multiple language bindings, and therefore makes it easier for developers.\n\n\n### What are the different gRPC life cycles?\n\ngRPC life cycles include: \n\n + **Unary**: This is the simplest one. When client calls a server method, server is notified with client metadata for this call, method name and deadline. Server may respond with its own metadata or simply wait for client's request message. Once request is received, server will execute the method, then send response and status code to client.\n + **Server Streaming**: Similar to the unary case, except that server sends a stream of responses.\n + **Client Streaming**: Similar to the unary case, except that the client sends a stream of requests. Server need not wait for all requests before sending a single response.\n + **Bidirectional Streaming**: This starts with a client request but subsequently client and server can read or write in any order. Each stream operates independent of the other. It's really application dependent.gRPC calls can be either synchronous or asynchronous. Since networks are inherently asynchronous, asynchronous calls are recommended so that current thread is not blocked. \n\n\n### Where does gRPC fit within the communication stack?\n\ngRPC is the framework layer that interfaces to the transport layer below and to the application layer above. In fact, gRPC makes it easier for application developers to make RPC calls over the network since the framework abstracts away all the low-level network operations. By using HTTP/2 for transport, gRPC benefits from all the performance enhancements that HTTP/2 offers over HTTP/1.1.\n\nDevelopers can write apps in any language of their choice. To ease the job of calling framework APIs, gRPC provides language bindings in many popular languages. The core of gRPC is implemented in C for performance reasons. These language bindings are therefore wrappers over the low-level C API. \n\nFinally, application code can be fully hand-written but it's possible to generate boilerplate code based on message definitions. For example, if we use protobuf for data serialization, the *protobuf compiler* can be used to generate request/response and client/server classes. The developer can therefore focus on writing her application to invoke the methods of these generated classes. \n\n\n### What are some myths surrounding gRPC?\n\nHere are some myths about gRPC:\n\n + gRPC works with only microservices: gRPC can be used in any distributed app that needs to communicate over the network.\n + gRPC depends on protobuf: Protocol Buffers are used by default but any other suitable data serialization protocol can be used depending on specific app requirements. 
JSON could be useful when data must be human readable or data needs to be directly consumed by a web browser.\n + gRPC will replace REST: This may happen in future but right now both will probably coexist and even interwork. Prefer REST for interoperability and gRPC for performance. It's easy to directly consume REST APIs with tools such as curl, but gRPC tooling may improve to offer similar functionality.\n + gRPC is only for server-side communications: Web browsers can also use gRPC. grpc-web is one project that enables this.\n + gRPC core is implemented in C: C is the default implementation but there are implementations in Go, Java, Swift, Dart and JavaScript.\n\n### If I'm developing a gRPC service, what's the recommended workflow?\n\nFirst you define the service methods and data using an IDL of your choice. Protocol Buffers is one possible choice of IDL. Definitions are stored in `*.proto` files. \n\nThe next step is to invoke the `protoc` compiler to generate code in languages of your choice. Many languages are supported as on June 2018: C++, Java, Python, Go, Ruby, C#, Node.js, Android Java, Objective-C, PHP and Dart. \n\nOn the client side, gRPC functionality is implemented in what's called a *stub*. Your app code can call into methods of the stub along with local objects as arguments. gRPC will translate the calls to gRPC requests to the server along with binary serialized protobuf. \n\n\n### How do I migrate my REST API-based web service to gRPC?\n\nMigrating a service to gRPC overnight might break many clients still using REST APIs. It's therefore recommended to have a transition period in which REST and gRPC services are both operational. Even in the long term, it might a good idea to redirect REST calls to gRPC. \n\nProject grpc-gateway is a way to generate *reverse proxy* code. grpc-gateway works as a plugin to protobuf compiler. The reverse proxy will then translate RESTful JSON API into gRPC method calls.\n\nWithin Google Cloud, transcoding HTTP/JSON to gRPC happens via what is called *Extensible Service Proxy (ESP)*. This is enabled by adding annotations in your `*.proto` files or separately in YAML files. \n\n\n### Who's been using gRPC in the real world?\n\ngRPC has been adopted by CoreOS, Netflix, Square, Cockroach Labs and etcd. Telecom companies Cisco, Juniper, and Arista are using it for streaming telemetry data and network configurations. Docker's containerd exposes functionality via gRPC. \n\nSquare moved from a proprietary RPC framework to gRPC and they have shared their experience on YouTube. Lyft has used gRPC along with Envoy. For some services, they used a Go implementation of gRPC. For others, they used Flask-Python with gevent, which didn't work well with gRPC. Hence, they used Envoy to bridge gRPC with HTTP/1.1 while using protobuf. \n\n\n### Could you share some performance numbers of gRPC?\n\nA benchmark test on Google Cloud using 9 gRPC channels compared gRPC performance versus HTTP1.1/JSON. gRPC achieved 3x throughput using only a fourth of the resources. On a per CPU basis, this was 11x the HTTP1.1/JSON throughput. This improvement is due to protobuf in which data is kept and transmitted in binary without involving any Base64 or JSON encoding/decoding. In addition, HTTP/2 can multiplex requests on a single connection and apply header compression. \n\nAnother test involving 300K requests of key/value stores in *etcd* showed that JSON took 1.6ms, single-client gRPC took 122.4µs and 100-client gRPC took 12.9µs. Memory usage per operation dropped by 44%. 
When comparing single-client and 100-client gRPC cases, the latter ran in one-fifth the time because multiple requests go on a single TCP connection. \n\nWhen sending files over gRPC, it's been shown that plain HTTP/2 is faster because there's no encoding or decoding; it's just raw data transmission. \n\n\n### How does gRPC compare against other RPC frameworks?\n\nWhen comparing different RPC frameworks, gRPC's deserialization can be slow compared to Cap'n Proto. Within gRPC, a C++ implementation is faster than one in Java, Go or Python. It should be noted that performance depends on the message complexity. Another benchmark tests showed that gRPC isn't the fastest. Cap'n Proto, rpclib and Thrift all perform better with Cap'n Proto being the best for deeply nested message structures. This could be because gRPC forgoes zero-copy approach that Cap'n Proto or Flatbuffers use. Instead, gRPC aims to encode data for smaller bandwidth requirements at the expense of extra CPU computation. \n\nThrift and gRPC both support code generation and serialization. However, Thrift doesn't use protobuf or HTTP/2. While REST+JSON API isn't RPC, Swagger is a tool that supports API design and code generation. gRPC alternatives in Java are many.\n\nSome developers might find it useful to study the implementation of different RPC mechanisms. Developer Renato Athaydes claims that gRPC doesn't work well with JVM types and hence provides his own version of RPC based on Protocol Buffers. \n\n## Milestones\n\nJul \n2008\n\nInitial version of **Protocol Buffers** is released. Version 1.0 of Protocol Buffers (internal to Google) inspires the design and implementation of version 2.0, often called *proto2*. Profocol Buffers are also called *protobufs*. \n\nDec \n2014\n\nVersion 3.0-alpha of Protocol Buffers (*proto3*) is released. A stable version 3.0.0 is released in July 2016. This development parallels that of gRPC. \n\nFeb \n2015\n\nGoogle releases and open sources **gRPC** based on its vast experience in building distributed systems. Google states that they're starting to expose most their public services via gRPC endpoints. gRPC uses HTTP/2. Meanwhile, Internet Engineering Steering Group (IESG) approves HTTP/2 as a proposed standard. In May 2015, HTTP/2 is published as RFC 7540. gRPC itself is influenced by Stubby, an earlier RPC framework used by Google internally. \n\nAug \n2016\n\nGoogle releases V1.0 of gRPC with improved usability, interoperability, and performance measurement. This version is now considered stable and production ready. Language support includes C++, Java, Go, Node, Ruby, Python and C# across Linux, Windows, and Mac; Objective-C and Android Java on iOS and Android. \n\nMar \n2017\n\nCloud Native Computing Foundation (CNCF) adds gRPC to its portfolio. This means that CNCF will host and guide the development of gRPC.","meta":{"title":"gRPC","href":"grpc"}} {"text":"# Functional Programming\n\n## Summary\n\n\nFunctional programming (FP) puts emphasis on functions, where functions are closer in spirit to mathematical functions than functions or subroutines in computer science. A function is something that takes inputs and returns outputs without any side effects. A set of inputs given to a function will always return the same set of outputs. This implies that functions do not depend on global variables, shared resources, the thread context or the timing across threads. Absence of side effects is essential to FP. 
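As a minimal sketch of this idea (in JavaScript; the function names are invented for illustration), compare a side-effect-free function with one that depends on hidden state:

```javascript
// Pure: the result depends only on the arguments, so the same inputs
// always give the same output and nothing outside the function is touched.
function netIncome(gross, taxRate) {
  return gross * (1 - taxRate);
}

// Impure: reads and mutates state outside its arguments, so repeated
// calls with the same argument return different results (a side effect).
let runningTotal = 0;
function addToTotal(amount) {
  runningTotal += amount;   // hidden write to external state
  return runningTotal;      // depends on call history, not just inputs
}

console.log(netIncome(1000, 0.25)); // always 750
console.log(addToTotal(100));       // 100
console.log(addToTotal(100));       // 200: same input, different output
```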
\n\nFunctional programming has been around since the 1950s but they got sidelined by imperative languages such as C, C++ and Java. Today, in the context of parallel computing, synchronization across parallel threads can be difficult. FP has come to the rescue. FP enables clean abstractions, component design and interfaces making the program more declarative and shorter.\n\n## Discussion\n\n### What exactly do you mean by side effects?\n\nWhen a function relates to its external world in a hidden manner, then it has side effects. For a function without side effects, its inputs are passed explicitly as function arguments. It's output is via function return. Such a function does not receive inputs via global variables such as message queues, shared buffers, databases or files. Likewise, it does not modify external entities such as writing to a file or making an AJAX call in a web application.\n\nThe goal of FP is to reduce side effects, not completely eliminate them in the entire program. Since programs have to interface to users, write to files or make network calls, they cannot be without side effects. We can however make most of the program functional by writing functions without side effects. It's important to track which parts of the system are impure. Some languages have a type system to facilitate this. \n\nSometimes input and output are independently recognized as side causes and side effects respectively. More commonly, it's sufficient to use the term side effects to refer to both. \n\n\n### What conditions should a programming language satisfy to be considered functional?\n\nA pure FP language should prohibit side effects. FP places importance on expressions rather than statements. In this sense, FP is an instance of declarative programming, which implies that focus is on what is to be computed rather than how to compute it. Objects are immutable: once created, they cannot be changed. The focus is on data manipulated by functions rather than states stored within objects. It has been said that \"immutability and statelessness are core to functional programming.\" \n\nProgram state is tracked as part of function arguments. Thus, function recursion is preferred to looping based on loop variables. Data flow is more important than control flow and this facilitates concurrency. The order of function calls doesn't matter if the data flow is preserved. \n\nFP does not require dynamic typing or static typing for that matter. The use of compile-time AST (Abstract Syntax Tree) macros does not imply FP. \n\n\n### How is FP different from imperative programming?\n\nWith imperative programming, assignment of a variable is a common operation. For example, `x = x + 1` is a valid operation in C, C++ or Java. But this makes no sense from a mathematical perspective; that is, if we consider it as an equation, there's no solution for it. In FP, data is transformed but not mutated. Expressions are important but storing the result is not. Results are passed around to other functions.\n\nImperative programming is about objects and their states. FP is about pure functions and higher-order functions. Complex behaviour in FP can be constructed by composing simpler functions. If OOP is about passing objects, FP is about passing functions. With imperative programming, operations are specified step by step. Loops with loop variables are used to iterate over lists. With FP, the style is declarative. 
Recursion and list processing using iterators are common control structures.\n\nMichael Feathers tweeted in 2010 that, \n\n> OO makes code understandable by encapsulating moving parts. FP makes code understandable by minimizing moving parts.\n\n\n### Can you explain pure functions, first-class functions and higher-order functions?\n\nPure functions conform to the rules of lambda calculus. Programs written in pure FP will be referentially transparent and their correctness can be formally proven. A practical way to define pure functions is that they have no side effects and always produce the same outputs for the same inputs. This is also called determinism. Impure FP languages therefore have elements of FP such as higher-order functions but may allow mutation of variables or even access to data outside the scope of functions. Side effects could be a language feature or due to monads. Closures cannot be considered as pure functions. \n\nFirst-class functions can be assigned to variables, passed as arguments to other functions and can be returned from other functions. Functions in FP are first-class functions. Higher-order functions are functions that accept a function as an argument and/or return a function as output. Thus, first-class functions are about the status that functions have in the language whereas higher-order functions are explicitly those that accept/return functions as input/output respectively. They eliminate loops. Code can be refactored to avoid repetitive code. Closures are examples of higher-order functions.\n\n\n### Can you point out some concepts/features of FP?\n\nWe may note the following:\n\n + Closure: This is a data structure with a code pointer and an environment of bound variables, implemented as anonymous functions. Alternatively, \"a closure is a function's scope that's kept alive by a reference to that function.\"\n + Referential transparency: This is a property of pure functions that always return the same outputs for the same inputs. Another way of stating this is that an object can be replaced by its value since objects are immutable. Equational reasoning can be applied to code for either provability or optimization.\n + Currying: A function with multiple arguments is transformed into a chain of functions, each taking a single argument. Currying is useful to adapt a function to a form that some other function or interface expects. Currying is sometimes confused with partial functions.\n + Composability: When functions perform specific tasks, we can use them as building blocks to compose more complex functionality. While composability is a concept, currying and pipelining may be considered as implementations of it.\n\n### What techniques do FP languages use?\n\nWe may note the following:\n\n + Tail call optimization: With recursion, FP usually supports tail call optimization that reuses the same stack frame for a sequence of recursive calls. Without this, a program can run out of stack memory when recursion descends many levels deep.\n + Pattern matching: A concise way to match a value or type and thereby avoid complex control structures.\n + Lazy evaluation: Code will be executed only when required. Haskell does this well. Graph reduction is a technique to implement lazy evaluation.\n + Partial functions: They wrap a function by fixing some arguments to values and retaining the rest as arguments. In fact, partial functions simplify implementation of closures.\n + Pipelining: Pass the output of one function as input to another. 
Many function calls can be chained in a sequence in this way. Pipelining makes possible code reuse, unit testing and parallel execution.\n + Continuation: This enforces execution in a specified order, which is particularly useful when lazy evaluation would skip execution.\n\n### Why or when should we use FP?\n\nWith multicore, multi-processor or networked systems, concurrency is becoming a challenge. With no side effects, FP is an approach that eliminates shared variables, critical sections and deadlocks. Erlang implements the Actor concurrency model, meaning that messages are passed between threads rather than sharing variables. \n\nSince there are no side effects (and side causes), FP programs are easier to test and debug. Unit tests need to worry about only function arguments. Peeking into the stack is enough to debug issues. Hot code deployment is possible with FP since object instances holding states don't really exist in FP. Readability of code generally improves with FP. Code is concise since the focus is on what rather than how.\n\nFor mission-critical systems, program correctness can be proven formally. Likewise, compilers can transform code into more optimized forms. These are possible due to the mathematical basis on which FP is founded. Due to lazy evaluation, it's possible to optimize code, abstract control structures and implement infinite data structures. \n\n\n### Where is FP used in the real world?\n\nEricsson uses Erlang in telecommunication switches. AT&T uses Haskell for network security. The Akamai content delivery network uses Clojure while Twitter uses Scala. Elm is a pure FP language that compiles to JavaScript. It is used by NoRedInk. Computer aided design (CAD) software often supports AutoLISP. OCaml has been used for financial analysis and research. Scala is widely used for data science. Clojure is being used in a wide range of applications from finance to retail. Hughes shows examples of using FP for numerical problems as well as gaming. \n\nMany uses of Haskell in industry are well documented. An annual conference named Commercial Uses of Functional Programming (CUFP) has been going on since as early as 2004 to track and share use cases and best practices. As an example, the uses of Caml in industry were presented in 2007. \n\n\n### Which programming languages are considered as functional?\n\nWhile LISP is one of the earliest FP languages, the following (non-exhaustive list) are among those being used commercially: Haskell, F#, Clojure, Scala, OCaml, SML, Erlang. Wikipedia gives lists of FP languages that are considered pure and impure. Excel may be considered an FP language. \n\nIt's likely that object-oriented programming (OOP) will coexist with FP. For complex applications requiring concurrency, FP may be preferred. F# and Scala combine elements of FP and OOP. Some have even used the term \"object-functional programming\" to mark this trend. \n\nMany popular languages, while not designed as FP, can be used in part as FP. Examples include Java, Python, and R. Sometimes third-party libraries or packages enable FP, such as Swiftz for Swift, Underscore.js for JavaScript and Fn.py for Python. Immutable.js in JavaScript enables data immutability.\n\n\n### Why did FP take so long to get popular?\n\nFP started with ideas rooted in mathematics, logic and formalism. This appealed to academics who approached it theoretically. It didn't appeal to the programming community who considered functions more as code blocks for partitioning behaviour and reusing code. 
To them, imperative programming (including OOP) was more practical. It was also believed that FP, with all its limitations, could not be used to build big complex programs. FP was also believed to be slower. \n\nSome have argued that FP involves abstraction. This can be difficult for programmers used to coding with loops and assignments. \n\n\n### How is FP different from lambda calculus?\n\nFP may be regarded as a practical implementation of lambda calculus. Lambda calculus was a tool to study the problem of computability. It is independent of physical constraints. FP is application of some ideas of lambda calculus. \n\n## Milestones\n\n1936\n\nAlonzo Church invents lambda calculus at Princeton University while formally investigating ways to program machines. This is part a bigger effort at Princeton where others including Alan Turing, John von Neumann and Kurt Gödel attempt to formalise mathematics and computing. \n\n1956\n\nIPL is created at Carnegie Institute of Technology and could be considered as the first FP language (in assembly), though it has some imperative language features. \n\n1958\n\nMIT professor John McCarthy invents List Processing Language (LISP) as the first implementation of Church's lambda calculus. Common Lisp and Scheme are two popular dialects of LISP. The influence of lambda calculus on Lisp has been challenged. \n\n1973\n\nProgrammers at MIT's Artificial Intelligence Lab build a Lisp machine. Thus Lisp got its own native hardware rather than running on von Neumann architecture. \n\n1973\n\nRobin Milner at the University of Edinburgh creates ML (Meta Language). OCaml and Standard ML (SML) are popular dialects of ML.\n\n1977\n\nJohn Backus publishes a paper describing the proper use of functions and expressions without assignments and storage. Some call this function-level programming but the paper influences FP.","meta":{"title":"Functional Programming","href":"functional-programming"}} {"text":"# Web Storage\n\n## Summary\n\n\nClient-server interactions on the web are stateless. This means that a client request is independent of all previous requests. However, many applications require state to be maintained across requests. Examples are shopping on e-commerce sites or filling a form after login authentication. HTTP cookies sent with every request allow us to maintain state. Moreover, server may store session data, often in a database. \n\nWeb Storage is a client-side feature that allows web apps to store data or state on the client/browser without involving the server. Unlike HTTP cookies, storage is managed at the JavaScript layer.\n\nEach web storage area is bound to an origin, which is a triplet of domain, protocol and port. Thus, one domain can't access the storage of another. \n\nWeb Storage shouldn't be confused with cloud storage, which is about storing data on cloud computing platforms.\n\n## Discussion\n\n### Why should I use Web Storage when I can use cookies instead?\n\nCookies have been used to create client-side state but it has it's limitations. Cookies can be created only on the server and then exchanged with the client. Client must include the cookies with every request to the server. The amount of cookie storage is limited to 4KB. Cookies also are not persistent. They have an expiry time. \n\nW3C Recommendation for Web Storage notes an example in which a user could have the same site open in two windows. With a cookie implementation, the cookie would \"leak\" from one window to the other, with the user ending up with duplicate flight bookings. 
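To make the contrast concrete, here's a rough sketch (plain JavaScript, with an invented cookie name) of why a cookie "leaks" across windows while per-tab storage does not; `sessionStorage` is introduced below.

```javascript
// Assume two tabs are open on the same airline site (same origin).

// A cookie such as "pendingFlight=BA123", once set by the server, is
// visible in every window/tab on that origin and is sent with every
// request; this is the "leak" behind the duplicate-booking example.
console.log('seen by all tabs:', document.cookie);

// sessionStorage (covered below) is scoped to a single tab, so a flight
// selected in one tab never bleeds into another.
sessionStorage.setItem('pendingFlight', 'BA123');
console.log('seen only by this tab:', sessionStorage.getItem('pendingFlight'));
```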
\n\nWith web storage, data can be stored on the client without any support from the server. This is useful for building Single Page Applications (SPAs). Unlike cookies, few megabytes can be stored. Two different types of storage areas are available: local and session. With the former, storage is persistent even if the browser is closed and reopened. \n\nSome browsers allow 10MB of web storage per origin while others allow 5MB. \n\n\n### In Web Storage, how is browser local storage different from session storage?\n\nLocal storage has no expiry. Even if the browser is closed, the data will persist. Session storage is limited to the current tab. It will be lost when the tab is closed. \n\nWith session storage on an e-commerce site, it's not possible to accidentally make double purchases just because the same site is open on two different tabs. Using local storage for this use case is not a good idea since the storage is accessible to all tabs opened to that site. \n\n\n### Could you share real-world examples of sites using Web Storage?\n\nBy 2011, many websites had started to use web storage. A Twitter search tool called Snapbird used it to remember the last person's Twitter stream that the user had searched from that browser. The assumption is that user is likely to continue the same search at a later time, thus saving some effort in retyping the name. Webbie Madness and CSS Lint are two other examples that use local storage to remember user's selection of checkboxes. \n\nWeb storage is not used to store sensitive content such as passwords, JSON Web Tokens or credit card information. However, one developer used it to store the hashed ID of an authenticated user. This hash is later used to give access to protected pages. For example, it can be used to maintain a shopping cart for an e-commerce site. \n\nIn social websites, relying on the server for data can involve complex technologies such server-side caching, distributed databases and DNS load balancers. Instead, web storage can be used to locally store recent chat messages, system notifications and news feeds. \n\n\n### What are the basics of managing Web Storage?\n\nThe API for using web storage is simple. No libraries need to be installed. Web storage can be used with plain JavaScript code. The storage is just key-value pairs with values being only strings. In case more complex data types need to be stored, such as JavaScript objects or arrays, they need to be serialized into strings. \n\nTo store `username` for example, we can use the syntax `localStorage.username = 'jsmith'` or `localStorage.setItem('username', 'jsmith')`. To retrieve data, use `localStorage.getItem('username')`. Use `localStorage.length` to know how many items are stored. A key can be accessed by its integer position `n` by calling `localStorage.key(n)`. To delete a key-value pair, call `localStorage.removeItem('username')`. Call `localStorage.clear()` to delete all pairs. \n\nThe syntax described above for `localStorage` is also applicable for `sessionStorage`. \n\nWhen either local or session storage is modified, `storage` event is fired. The event will contain key, old value and new value. \n\nAn app will fail if it depends on web storage but not supported in a particular browser. The solution is to reliably ascertain browser support and have fallback code. \n\n\n### What are some common criticisms of Web Storage?\n\nWeb Storage is limited to storing strings. Although more complex data types can be serialized and stored as strings, this is seen as an \"ugly hack\". 
In terms of size, few megabytes may not be enough for data-intensive apps or apps that need to work offline. Moreover, it's not easy to know when storage is at its limit. \n\nProviding app functionality that's highly dependent on web storage can be problematic. Users might delete the data just as they tend to clear cookies. \n\nThe API is synchronous, which implies data loading can block rendering on the main thread. This would be worse if files and images are stored. Likewise, for background processing, web workers can't be used since they can't access web storage. \n\nWeb Storage has security issues. Any JavaScript code on that page can access the storage. Cross-site scripting attacks are possible. Malicious JavaScript code can read the storage and send it to another domain of its choice. This is unlikely if the entire code was built in-house. However, it's common for apps to use third-party code, which could have been compromised.\n\n\n### Could you share some resources for working with Web Storage?\n\nBeginners can look at a demo of Web Storage as well as details of the storage event that's triggered by each change to the storage.\n\nThere are wrappers around Web Storage. *Lockr* allows developers to store non-string values as well. *Store.js* is similar to Lockr but with fallback should the browser lack support for web storage. *Crypt.io* (previously called secStore.js) adds a security layer. *Barn* provides a Redis-like interface. For asynchronous calls, *localForage* combines the easy-to-use Web Storage API with IndexedDB. A quick search on NPM will reveal many more packages related to local storage.\n\nPersistJS is a cross-browser alternative to client-side storage.\n\nIf your app uses web storage, it's easy to unit test your app even without access to browser's web storage. Web Storage can be *mocked* and one developer shows how to do this in Angular. \n\n\n### Which are the alternatives to Web Storage?\n\nFor sensitive data such as a session ID, it's better to use signed cookies. When using cookies, use the `httpOnly` cookie flag and set `SameSite=strict` and `secure=true`. For most other sensitive data, use server-side storage. \n\nFor non-string and non-sensitive data, use IndexedDB. This gives standard database features including primary keys, indexing, and transactions. For offline applications, we could use a combination of IndexedDB and Cache API that's often used by service workers. In fact, it's possible to create a polyfill to override default Web Storage API to store in IndexedDB and store asynchronously. \n\nWeb SQL used to be an alternative but W3C stopped working on this due to lack of implementations. \n\nApplication Cache is a text file to tell browsers what resources to cache. This could be useful for offline viewing of the site. It's more of a caching mechanism than client-side storage. \n\n## Milestones\n\nApr \n2009\n\nW3C releases the first working draft of **Web Storage**. This draft defines both `localStorage` and `sessionStorage`. In addition, it defines an interface to store in databases. \n\nJun \n2013\n\nW3C publishes **Web Storage** as a W3C Recommendation. Database storage defined in the first draft is deleted in the Recommendation. Both `localStorage` and `sessionStorage` are referred to as *IDL attributes*, following Web IDL (Interface Definition Language) that's used to describe interfaces web browsers are required to implement. 
The Recommendation also points out issues of user privacy and sensitive data, and how user agents (browsers) can mitigate the risks. \n\nApr \n2016\n\nW3C publishes **Web Storage (Second Edition)** as a W3C Recommendation. \n\nApr \n2019\n\nWeb storage is slow due to its synchronous nature. On the other hand, IndexedDB is asynchronous but has a more complex API. As a compromise, **KV Storage** is proposed. It's a key-value storage like web storage but with IndexedDB as the underlying storage. It becomes available with the release of Chrome 74 on an experimental basis. The trial ends with Chrome 76. In future, IndexedDB may come with a simpler API.","meta":{"title":"Web Storage","href":"web-storage"}} {"text":"# Arduino Programming\n\n## Summary\n\n\nProgramming Arduino boards is done via a language called **Wiring**, which itself is based on Processing and C/C++ languages. We can think of Wiring as providing a simpler abstraction on top of the more complex C/C++ syntax. \n\nHardware can be programmed directly using a USB connection. Arduino IDE is open source and it supports all the Arduino variants. It's also possible to program the hardware from within a web browser after installing a plugin. \n\nEvery Arduino board will have constraints of memory, GPIO pins, and so on. Developers should be aware of these and optimize the code accordingly.\n\n## Discussion\n\n### Which are the basic programming constructs used in Arduino?\n\nEvery Arduino sketch must have two basic constructs: \n\n + `setup()`: This is called once when the program starts, that is, when the board is powered up or is reset. This is the place to initialize interfaces, such as GPIO pin modes or baud rate for serial communication.\n + `loop()`: This is called repeatedly and is the place for the main code. This is equivalent to `while (1) {…}` C code.For digital IO, we can use `digitalRead()` and `digitalWrite()`. Since a GPIO pin can be either input or output, it should first be setup using `pinMode()`. For analogue IO, we can use `analogRead()` and `analogWrite()`. \n\nLike in C/C++, control structures in Arduino include if-else, switch-case, for loops, do-while, break and continue. \n\nData types common in C language are available but there's also `String` class for easier manipulation of strings. `Serial` and `Stream` are also useful classes. \n\nThere are built-in math functions, timing functions, random number generation, bit-level operation, and interrupt processing. All of these are documented in the Arduino Reference.\n\n\n### What are the main features of the Arduino IDE?\n\nArduino programs are called *sketches*. Arduino IDE is more than a code editor for a sketch. It does syntax highlighting on the code. It includes a number of simple examples to learn programming. Examples are accessible via File→Examples menu. From the IDE, we can verify the sketch and upload the binary to the target hardware. Any errors in this process are shown to the user. \n\nThe IDE can target all hardware variants. This selection is done via Tools→Board menu. What's more, we can use the *Boards Manager* to install newer boards or update to newer versions of board libraries. Likewise, developers have easy access to hundreds of third-party libraries or install custom libraries from a Zip file. This is done via menu Sketch→Include-Library. \n\nTo monitor serial communications, there's Serial Monitor and Serial Plotter. If IoT data is being sent, Serial Plotter's visualization makes it easy to see what's happening. 
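As a small example tying together `setup()`, `loop()` and the Serial Plotter described above, the sketch below streams readings from a hypothetical sensor wired to pin A0; the pin choice and baud rate are assumptions, not requirements.

```cpp
const int SENSOR_PIN = A0;       // assumed wiring: any analogue pin works

void setup() {
  Serial.begin(9600);            // must match the baud rate chosen in the IDE
}

void loop() {
  int reading = analogRead(SENSOR_PIN);  // 0 to 1023 on a 10-bit ADC
  Serial.println(reading);               // one value per line for the plotter
  delay(100);                            // roughly ten samples per second
}
```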
\n\nWe also mention that there are alternatives to the Arduino IDE for programming Arduino boards. Some names are PlatformIO, Eclipse Arduino and Atmel Studio. \n\n\n### How can I make the best use of limited memory resources on the Arduino?\n\nArduino Uno has only 2KB of SRAM. This means that not a lot of variables can be held in memory, particularly long strings. If SRAM is required for processing data, try offloading the processing to the cloud or another computer/device. If the range of your data can fit within a byte, use `byte` data type rather than two bytes of `int`. Use `String.reserve()` to avoid memory fragmentation. Prefer local to global variables. \n\nUsing `PROGMEM` keyword as part of data declaration, we can tell the compiler to store data in Flash. Using the macro `pgm_read_byte_near()`, we can read a byte from Flash. Similar macros exist for other data types: word, dword, float and ptr. If a string is defined inline, then the `F()` is useful to store it in Flash. \n\nLet's note that the `const` keyword has read-only semantics, particularly useful for passing function arguments. It's not meant to tell the compiler where to store the data. \n\n\n### What exactly is Software Serial and why do we need it?\n\nSerial communications can be controlled using `Serial` class. This relies on the underlying UART hardware that's part of the microcontroller. Arduino Uno has one serial interface on pins 0 and 1. Due and Mega have four serial interfaces. What if we want another serial interface on the Uno? This is where Software Serial becomes useful.\n\nSerial communications can be implemented on any pair of digital pins using software. We don't have to write low-level code to do this since third-party libraries are available. SoftwareSerial is available in Arduino IDE by default. The main disadvantage is that if multiple software serial interfaces are used, only one can receive data at a time. Speeds are also limited compared to UART. AltSoftSerial is an alternative but it has other limitations. \n\nEither UART or software, serial interface operates at either 5V or 3.3V depending on the Arduino board. Don't connect them directly to RS232 serial port that operates at +-12V. Use a USB-to-serial adaptor in between. \n\n\n### How do I program the Arduino for analogue input?\n\nFor analogue input, there's `analogRead()` function. Arduino Uno uses a 10-bit *Analogue-to-Digital Converter (ADC)*. Since Uno runs on 5V, this implies that the ADC resolution is 5V/1024 = 4.88mV. It's possible to improve the resolution by sacrificing range. For example, calling `analogReference(INTERNAL)` sets the ADC reference to 1.1V and improves resolution to 1.1V/1024 = 1.07mV. Uno WiFi Rev2 board has a default reference of 0.55V. \n\nDue, Zero and MKR Family boards use 3.3V reference and 12-bit ADC. Thus, their resolution is at 0.806mV. However, they default to 10-bit ADC and this can be changed using `analogReadResolution()`. It's interesting that this function can be used to specify a lower resolution (least significant bits are discarded) or a higher resolution (extra bits are zero-padded). \n\nAn external reference via the AREF pin can also be used and selected using `analogReference(EXTERNAL)`. It's important to call this before calling `analogRead()`. The input to AREF must also respect the allowed voltage range. 
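To make the resolution arithmetic above concrete, here's a minimal Uno-style sketch, assuming the measured signal never exceeds 1.1V, that selects the internal reference and converts raw counts to millivolts.

```cpp
void setup() {
  Serial.begin(9600);
  analogReference(INTERNAL);     // 1.1V internal reference on the Uno
  analogRead(A0);                // discard the first reading after switching
}

void loop() {
  int raw = analogRead(A0);              // 0 to 1023
  float mV = raw * (1100.0 / 1024.0);    // about 1.07mV per step, as above
  Serial.println(mV);
  delay(500);
}
```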
\n\n\n### How do I program the Arduino for analogue output?\n\nOn most Arduino boards, analogue output is not actually a continuous signal but rather a stream of digital pulses called *Pulse Width Modulation (PWM)*. MKR and Zero boards are capable of true analogue on pin DAC0. Due can do this on pins DAC0 and DAC1. \n\nFor both PWM and true analogue, the relevant API to call is `analogWrite()`. Arduino Uno has PWM on pins 3, 10 and 11 at 490Hz and pins 5 and 6 at 980Hz. On Due, twelve pins can do PWM at 1000Hz. \n\nPWM is generated from timers. For example, ATmega328P used in the Uno has three timers. These can be used in one of two modes: Fast PWM or Phase-Correct PWM. However, these are low-level details not exposed via `analogWrite()`. \n\nCalled **Software PWM**, it's possible to generate PWM waveforms on digital pins. This approach is called *bit banging*. With this, we can also control the frequency of the pulses. The problem with Software PWM is that interrupts will affect timing, resulting in jitter. Another problem is that processor is dedicated to generating PWM and can't be doing anything else. \n\n\n### How do I program the Arduino for I2C and SPI interfaces?\n\nBoth I2C and SPI are synchronous protocols that rely on a clock line. In both cases, libraries are available: `Wire` for I2C and `SPI` for SPI. In addition, other specialized libraries might wrap these to further simplify programming. An example of this is Adafruit TMP007 library for the I2C-based infrared thermopile sensor. These libraries abstract away the actual pins used by these interfaces. \n\nI2C is also called Two-Wire-Interface (TWI). For I2C, the Wire library uses 7-bit slave addressing and 32 byte buffer. Pullup resistors are required for lines SDA and SCL. Uno has one I2C interface while Due has two. \n\nWith SPI interfacing, microcontroller is typically the master. We need to know some things about the slave: bit order, mode, and maximum clock speed. These must be configured using `SPI.beginTransaction(SPISettings(…))`. On Uno, pins 10-13 make up the SPI interface. MOSI, MISO and SCK are also available on the ICSP header. \n\n\n### How can I manage power consumption of my Arduino application?\n\nManaging power consumption is critical for battery-powered applications. When there's no activity, Arduino must go into sleep to save energy. The built-in function `delay()` gives some savings but more can be obtained by using the LowPower library. The problem with both approaches is that the system can't respond to interrupts while asleep. Sleep time is also limited to maximum 8 seconds. \n\nA better approach is to use the built-in `sleep_cpu()` along with interrupts. On Uno, hardware interrupts are available on pins 2 and 3. Depending on the application, interrupts can come from a sensor or an RTC module. \n\nYou can also slow down the clock to as low as 62.5kHz by setting the CLKPR register. An external crystal can be used for other clock values. This will need changes to bootloader and optionally to the Arduino IDE. **Optiboot** can be used to create a custom bootloader. \n\nOther hardware changes can reduce power. Replace the onboard voltage regulator with a more efficient DC-DC buck converter. Better still, make a custom board just right for your application. \n\n\n### Could you name some popular third-party libraries for the Arduino?\n\nRegistered Arduino libraries are listed online and also available from within the IDE. Libraries are organized by category, license and processor architecture. 
In June 2019, among the most starred or forked libraries were ArduinoJson, WiFiManager, FastLED, Blynk, IRremote, PubSubClient, Adafruit NeoPXL8, Adafruit NeoPixel, and MFRC522. \n\nDuring Jan-Mar 2019, these libraries recorded the most downloads: Adafruit Circuit Playground, DHT sensor library, ArduinoJson, Servo, SD, Adafruit GFX Library, Adafruit NeoPixel, LiquidCrystal I2C, MFRC522, and Blynk. \n\nAmong the top contributors are Adafruit Industries, SparkFun Electronics, Seeed Studio, Arduino Libraries, and STM32duino. \n\nA curated list of libraries is available from Lembed. Another list appears on the Arduino Playground site. Read a tutorial to create your own library.\n\n## Milestones\n\n2003\n\nHernando Barragán creates the **Wiring** platform so that designers and artists can approach electronics and programming more easily. Wiring abstracts away the hardware pins with API calls such as `pinMode`, `digitalRead`, `analogWrite`, `delay`, etc. In fact, the syntax is defined before any hardware implementation. \n\nAug \n2005\n\nThe Arduino IDE and language library are released for the first time with the name **Arduino 0001**. The main sketch is compiled as C code but from Arduino 0004 (April 2006) it's compiled as C++ code. \n\nNov \n2011\n\n**Arduino 1.0** is released. Sketches now use the extension `*.ino` rather than `*.pde` used for Processing files. This version introduces the use of the `F()` macro. This macro was invented about a year earlier by Paul Stoffregen, who created the Teensy derivative of Arduino. \n\nOct \n2012\n\nWith the release of **Arduino 1.5 Beta**, the IDE now supports both AVR 8-bit and ARM 32-bit. This is also the year when the first 32-bit Arduino is released with Arduino Due. \n\nMar \n2019\n\nThere are more than 2,150 libraries registered with the Arduino Library Manager. These can be downloaded and installed directly from within the IDE. There are also over 7,000 libraries in the wild. Since v1.0.5, the latter can be installed from a Zip file. Meanwhile, **Arduino 1.8.9** is released with support for ARM64 boards such as NVIDIA Jetson and Raspberry Pi 3.","meta":{"title":"Arduino Programming","href":"arduino-programming"}} {"text":"# Entity Linking\n\n## Summary\n\n\nFinding knowledge is one of the most popular tasks of Internet users. In most cases, query results are a mix of pages containing different entities that share the same name. \n\nEntity linking is the process of connecting entity mentions in text to their knowledgebase counterparts. Information extraction, information retrieval, and knowledgebase population are some applications of entity linking. This task, however, is difficult due to entity ambiguity and name variants. Because a large number of web applications produce knowledgebase data, entity linking has become a major research topic.\n\nIn many retrieval systems, a user would simply enter an entity or concept name, and search results would be clustered by the various entities/concepts that share that name. Including additional details in the indexed records is one way to implement such a framework.\n\n## Discussion\n\n### What are the main steps in the entity linking process?\n\nEntity linking, in most cases, involves three sub-tasks that are executed in this specific order:\n\n + **Information Extraction**: Extract information from unstructured data.\n + **Named Entity Recognition (NER)**: Individuals, places, organizations, and other real-world objects are examples of named entities. 
NER recognizes and classifies named entity occurrences in text into pre-defined categories. The task of assigning a tag to each word in a sentence is modeled as NER.\n + **Named Entity Linking (NEL)**: Each entity identified by NER will be assigned a unique identity by NEL. NEL then attempts to link each entity to its description in a knowledgebase. The knowledgebase to be used depends on the program, but we may use Wikipedia-derived knowledgebases for open-domain text, such as Wikidata, DBpedia, or YAGO. **Wikification** refers to the process of connecting entities to Wikipedia.\n\nEntity linking can be end-to-end, involving both recognition and disambiguation. If gold standard named entities are available at the input, entity linking does only disambiguation.\n\n\n### What are the main issues with entity linking?\n\nThere are two main issues with entity linking:\n\n + A phrase or word can refer to multiple entities in the knowledgebase. For example, the entity *Japan* could mean Japan (national football team), Japan (country), Japan (Band), etc.\n + A single entity from the knowledgebase can have multiple names, and a single name can represent entities that belong to distinct concepts. Take the example of *bass* as shown in the figure. The image on the left represents bass in the context of sound and music. The image on the right represents bass as a fish.\n\nThe challenge for entity linking is to figure out the correct context to resolve these ambiguities and link each entity to the most suitable entries in the knowledgebase.\n\n\n### Which are the main entity recognition paradigms?\n\nFor Named Entity Recognition and Classification (NERC), we have the following machine learning approaches:\n\n + **Supervised Learning**: Relies on distinctive features that separate positive and negative examples. From earlier handcrafted rules, supervised learning has evolved to rule-based systems or sequence labeling algorithms automatically inferred from a set of training examples. Currently a popular approach, it has many variants: Hidden Markov Models (HMM), Decision Trees, Maximum Entropy Models, Support Vector Machines (SVM), and Conditional Random Fields (CRF). To solve entity ambiguity, Milne and Witten used Wikipedia entities as training data. Other approaches also collected training data based on unambiguous synonyms.\n + **Semi-Supervised Learning**: Also called \"weakly supervised\" learning, its most common approach is *bootstrapping*. The learning process begins with a small amount of supervision, such as a collection of seeds. From these few samples, the model learns contextual clues that are then applied to the rest of the data in the next iteration. With many iterations, the model sees more and more examples to learn from. Some semi-supervised approaches are known to rival baseline supervised approaches.\n + **Unsupervised Learning**: Not very common, but one possible approach is to exploit semantic relations present in the data.\n\n### How can we use knowledge graphs for the entity linking task?\n\nModern entity linking systems use broad knowledge graphs built from knowledgebases like Wikipedia instead of textual features generated from input documents or text corpora. These systems extract complex features that take advantage of the knowledge graph topology or exploit multi-step relations between entities that would otherwise go undetected by simple text analysis. 
Furthermore, developing multilingual entity linking systems based on natural language processing (NLP) is inherently difficult, as it necessitates either broad text corpora, which are often lacking for many languages, or hand-crafted grammar rules, which vary greatly between languages.\n\nIn the previous work of Han et al. proposed a graph-based collective entity linking method to model global topical interdependence (rather than pairwise interdependence) among different entity linking decisions in a single document. They first proposed Referent Graph, a graph-based representation that could model both textual context similarity and global topical interdependence between entity linking decisions as its graph structure. Then, using a strictly collective inference algorithm over the Referent Graph, they were able to jointly infer mapping entities for all entity mentions in the same paper. \n\n\n### What are the main steps in implementing an entity linking system?\n\nAn entity linking system will need the following:\n\n + **Recognize**: Recognize the entities that are mentioned in the context of text. In this module, for each entity mention m ∈ M, the entity linking system aims to filter out irrelevant entities in the knowledgebase and retrieve a candidate entity set Em which contains possible entities that entity mention m may refer to.\n + **Rank**: Rank each candidate. In most cases, the size of the candidate entity set Em is larger than one. Researchers leverage different kinds of evidence to rank the candidate entities in Em. They try to find the entity e ∈ Em which is the most likely link for mention m.\n + **Link**: Link the recognized entities to the categorized entities in the knowledge graph.New facts are created and digitally expressed on the web as the world evolves. For semantic web and knowledge management strategies, automatically populating and enriching existing knowledgebases with newly derived facts has become a key problem. Entity linking is inherently regarded as a critical subtask in the population of a knowledgebase. Entity linking help a knowledgebase to grow. \n\n\n### What datasets are available for entity linking?\n\n**YAGO** is a high-coverage, high-quality open-domain information base that combines Wikipedia and WordNet. It's similar to Wikipedia by size but uses WordNet's clean taxonomy of concepts. YAGO currently contains over 10 million entities (such as individuals, organizations, places, and so on) and 120 million information about these entities, including the Is-A hierarchy (such as form and subclass of relations) as well as non-taxonomic relations. YAGO includes means-relation the relates strings to entities. For example, \"Harry\" denotes Harry Potter. Hoffart et al. used YAGO relations to create candidate entities. \n\n**DBpedia** is a multilingual knowledgebase built by extracting structured data from Wikipedia, such as categorization details, geo-coordinates, and links to external web pages. English DBpedia contains 4 million entities. Furthermore, it adapts to Wikipedia's changes automatically.\n\n**Freebase** is a broad online knowledgebase that's primarily generated collaboratively by its users. Non-programmers can edit the structured data in Freebase using a user interface. Freebase is a database that compiles information from a variety of sites, including Wikipedia. There are currently over 43 million entities and 2.4 billion facts about them in the database. 
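Putting the Recognize-Rank-Link steps described above together with a toy candidate dictionary (a stand-in for a real knowledgebase such as Wikidata, DBpedia or YAGO), here is a minimal Python sketch of the core idea. The entries, identifiers and the context-overlap scoring are illustrative assumptions, not any particular system's method.

```python
# Toy knowledgebase: surface form -> candidate entity set Em.
# Identifiers and context terms are illustrative, not real KB ids.
CANDIDATES = {
    "Japan": [
        {"id": "kb:Japan_(country)", "context": {"country", "tokyo", "asia", "island"}},
        {"id": "kb:Japan_(band)", "context": {"band", "music", "album", "synth"}},
        {"id": "kb:Japan_(football_team)", "context": {"football", "team", "match", "fifa"}},
    ]
}

def link(mention, document_tokens):
    """Recognize: look up the candidate entity set Em for the mention;
    Rank: score each candidate by overlap with the document's context words;
    Link: return the best-scoring candidate (None if nothing matches)."""
    candidates = CANDIDATES.get(mention, [])
    if not candidates:
        return None
    return max(candidates, key=lambda e: len(e["context"] & document_tokens))

doc_context = {"the", "band", "released", "a", "new", "album"}
print(link("Japan", doc_context)["id"])   # -> kb:Japan_(band)
```

Real systems replace the simple overlap score with ranking models that combine entity priors, textual similarity and graph-based coherence, as discussed above.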
\n\n\n### What evaluation metrics are suited for entity linking?\n\nWhere only disambiguation is done, we have: \n\n + **Micro-Precision**: Fraction of correctly disambiguated named entities in the full corpus.\n + **Macro-Precision**: Fraction of correctly disambiguated named entities, averaged by document.Where both entity recognition and disambiguation are done, we have: \n\n + **Gerbil Micro-F1 – Strong Matching**: InKB micro F1 score for correctly linked and disambiguated mentions in the full corpus as computed using the Gerbil platform. InKB means only mentions with valid KB entities are used for evaluation.\n + **Gerbil Macro-F1 – Strong Matching**: InKB macro F1 score for correctly linked and disambiguated mentions in the full corpus as computed using the Gerbil platform. InKB means only mentions with valid KB entities are used for evaluation.\n\n### What's the current state-of-the-art (SOTA) in entity linking?\n\nMulang et al. 2020 is the current Sota for ConLL-AIDA dataset. \n\nRaiman is the current SOTA in cross-lingual entity linking for WikiDisamb30 and TAC KBP 2010 datasets. They construct a type system, and use it to constrain the outputs of a neural network to respect the symbolic structure. They achieve this by reformulating the design problem into a mixed integer problem: create a type system and subsequently train a neural network with it. They propose a 2-step algorithm: 1) heuristic search or stochastic optimization over discrete variables that define a type system informed by an Oracle and a Learnability heuristic, 2) gradient descent to fit classifier parameters. \n\nThey apply DeepType to the problem of entity linking on three standard datasets (WikiDisamb30, CoNLL (YAGO), TAC KBP 2010) and find that it outperforms all existing solutions by a wide margin, including approaches that rely on a human-designed type system or recent deep learning-based entity embeddings. Explicitly using symbolic information lets it integrate new entities without retraining. \n\n## Milestones\n\n2006\n\nBunescu and Paşca propose using Wikipedia for Named Entity Disambiguation (NED). Their model makes use of Wikipedia's redirect pages, disambiguation pages, categories and hyperlinks. They apply some rules to construct a dictionary of named entities from Wikipedia. For context-article similarity they use cosine similarity with vectors formed from TF-IDF of words in the vocabulary. As similar work is due to Cucerzan (2007). \n\nNov \n2007\n\nMihalcea and Csomai propose **Wikify!**, a system that recognizes key phrases or concepts and link them to suitable Wikipedia pages. The two main tasks in the process are keyword extraction and word sense disambiguation (WSD). They adopt unsupervised keyword extraction that involves candidate extraction and ranking. For WSD, they evaluate a knowledge-based approach (inspired by Lesk algorithm) and a data-driven approach (using Naive Bayes classifier). \n\n2008\n\nGiven two phrases, their **semantic relatedness** is usually computed using external knowledge sources. Instead, Milne and Witten propose using both incoming and outgoing links in Wikipedia pages to measure semantic relatedness. This is useful for WSD and hence for entity linking as well. To determine relatedness, the authors compare each potential candidate to the document's surrounding background, which is created by the other candidates. The use of Wikipedia for semantic relatedness was previously studied by Strube and Ponzetto (2006) and Gabrilovich and Markovitch (2007). 
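Since this milestone turns entirely on link statistics, a short sketch may help. The function below implements a commonly cited link-based relatedness formula in the spirit of Milne and Witten's work: it compares the sets of articles that link to two pages and normalizes by the total number of articles. The exact formula and the toy numbers are assumptions for illustration, not a reproduction of their implementation.

```python
import math

def link_distance(in_links_a, in_links_b, total_articles):
    """Distance between two pages computed only from the sets of articles
    that link to them; smaller means more related. 1 - distance is often
    used as a relatedness score."""
    common = in_links_a & in_links_b
    if not common:
        return 1.0  # no shared in-links: treat the pages as unrelated
    a, b, w = len(in_links_a), len(in_links_b), total_articles
    return ((math.log(max(a, b)) - math.log(len(common)))
            / (math.log(w) - math.log(min(a, b))))

# Toy in-link sets identified by arbitrary article ids; numbers are made up.
cat_in_links = {1, 2, 3, 4, 5}
dog_in_links = {3, 4, 5, 6, 7, 8}
print(round(link_distance(cat_in_links, dog_in_links, total_articles=10_000), 3))
```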
\n\n2011\n\nRatinov et al. propose the **GLOW** system, which is an approximation of joint disambiguation using mutual information for semantic relatedness. The researchers extract additional named entity mentions and noun phrases from the document that were previously used as link anchor texts in Wikipedia to emphasize the coherence among candidates. Candidates are then retrieved by querying an anchor-title index that maps each link goal in Wikipedia to its various link anchor texts and vice versa, augmenting the given query mentions with this collection. \n\n2012\n\nShen et al. present **LINDEN**, a framework that uses YAGO to connect named entity mentions. They consider coherence among potential candidate entities. They consider semantic similarity of candidates to the forms in the YAGO ontology. This function assumes that candidate senses are organized into a tree structure of categories. They also consider global coherence of candidates for document mentions, where one candidate's global coherence is equal to the average Semantic Role Labelling (SRL) of all candidates. \n\n2016\n\nTsai and Roth consider the problem of linking entity mentions in non-English language text to the English Wikipedia. They address this using **multilingual embeddings** of titles and words. Their system doesn't handle the case of an English Wikipedia entry that doesn't have an equivalent entry in a foreign language. They also release a Wikipedia dataset across 12 languages.","meta":{"title":"Entity Linking","href":"entity-linking"}} {"text":"# Non-Destructive Testing\n\n## Summary\n\n\n**Non-Destructive Testing (NDT)** is used in various industries to evaluate the properties of a material, component or system without damaging the test subject. NDT is also known by the terms *Non-Destructive Examination (NDE)*, *Non-Destructive Inspection (NDI)*, and *Non-Destructive Evaluation (NDE)*. \n\nIn testing, troubleshooting, and research, NDT is a successful strategy that saves both time and money. NDT plays a crucial role in everyday life and is essential for safety and reliability. Aircraft, spacecraft, motor vehicles, pipelines, bridges, railways, power plants, and oil platforms are just some examples tested with NDT. \n\nNDT has many techniques. Eddy-current, magnetic-particle, liquid penetrant, radiographic, ultrasonic, leak, acoustic emission, and visual testing are most commonly used. \n\nNDT is frequently used in forensics, mechanical engineering, petroleum engineering, electrical engineering, civil engineering, systems engineering, aeronautics, medicine, and art.\n\n## Discussion\n\n### What's the difference between destructive and non-destructive testing?\n\n**Destructive testing** is simply testing the destroys or damages the subject under test. There are many material properties that can only be evaluated by applying physical force or load. Examples include tensile strength, elongation, and hardness. Destructive tests to evaluate these include tensile test, bending test, fracture test, flattening test, hardness test, shear test, and impact test. \n\nOn the other hand, NDT doesn't damage the test subject. This means that the sample can be used in the field provided the tests pass. NDT can be used to detect defects or discontinuities. It can detect degradation of properties after long use. Surface or internal cracking, poor welding, impact damages, delamination, density, porosity and pitting some problems that NDT can detect. 
\n\nWhile destructive testing is often more reliable, NDT offers a safer, faster, cost-effective and less wasteful alternative. Thus, NDT complements but doesn't replace destructive testing. \n\n\n### Why are NDT checks necessary for a material?\n\nAll materials, products, and equipment have standard design requirements and estimated life. Sometimes products get through production, fabrication, or delivery with undetected defects. These may cause catastrophic failures. Such catastrophes can be costly and can even terminate projects. NDT can catch these problems before catastrophes occur. NDT saves lives and property. It helps companies adhere to regulations and standards. \n\nIn addition to security, NDT is used to assure the effectiveness and longevity of equipment. It's useful for asset integrity management, which leads to increased productivity and profitability for businesses. For example, in the 2020 port blast in Beirut, many buildings within the blast radius are still standing but could be structurally unsafe. Rather than demolish them, NDT is being used to assess extent of damage and repair them where possible. \n\nNDT can help engineers choose the right materials or treatments for each application. NDT can validate product design and suggest enhancements. It also validates manufacturing processes. \n\nIn conclusion, NDT brings safety, regulatory compliance, failure prevention, quality assurance, cost efficiency and reliability. \n\n\n### Which industries are using NDT checks?\n\nSeveral industries are using NDT: \n\n + **Aerospace & Defence**: Airframe structures are tested for wear and tear, and operation under extreme conditions. Ultrasonic method is widely used to reveal even the smallest defects. Users include Boeing, Airbus, GE Aviation, Hindustan Aeronautics Ltd., etc.\n + **Oil & Gas**: Internal structures are inspected for welds, cracks, voids and other structural defects. Ultrasonic and radiographic methods are commonly used. Users include Indian Oil Corporation, Bharat Petroleum, Reliance Petroleum Limited, ONGC, etc.\n + **Biomedical & Medical Devices**: Ultrasounds and x-rays are widely used. Vendors include Medtronic plc, Johnson and Johnson, Abbott Laboratories, etc.\n + **Civil & Heavy Construction**: Bigger structures imply bigger stresses and a greater need for NDT. NDT is applied to buildings, bridges and dams. Users include L&T Engineering & Construction Division, Tata Projects Ltd, etc.\n + **Metals & Mining**: Metals are the foundations on which many industries function. NDT validates material properties and quality. Users include BHP Group Ltd, Jiangxi Copper Co. Ltd, etc.Other industries using NDT include power generation, petrochemicals, automotive, and maritime. \n\n\n### How to select a suitable NDT method for a certain material?\n\nThe selection of NDT method is based on material and defect type: \n\n + **Visual Testing**: Used for analyzing surfaces, examining the condition of mating surfaces, and checking for leaks.\n + **Liquid Penetrant Testing**: Detects surface-breaking defects such as hairline cracks, surface porosity, leaks in new products, and fatigue cracks.\n + **Magnetic Particle Inspection**: Mostly identifies surface and near-surface faults or cracks in ferromagnetic materials. It detects seams, porosity, and small tight cracks.\n + **Eddy Current Testing**: Uses electromagnetic induction to find defects in conductive materials. 
Detects electrical conductivity, permeability, cracks, seams, alloy content, heat treatment variations, wall or coating thickness.\n + **Ultrasonic Testing**: Uses ultrasonic waves in the range 0.1-50 MHz to pick up cracks and variations in thickness. Applicable for concrete, wood, composites, metals, alloys, and welds.\n + **Radiographic Testing**: Electromagnetic radiation penetrates materials and exposes defects on radiation-sensitive film. It mainly uses X-rays and Gamma-rays. It detects corrosion, geometry variation, density changes, and misaligned parts. It's a good test for inspecting weld interiors and picks out cracks, porosity, inclusions, voids, and lack of fusion.\n\n### What are the best practices or precautions for NDT?\n\nNDT itself poses a risk to testing professionals who are therefore required to follow many safety precautions. Always use trained and experienced professionals. They must know what testing method to use, how to use it, and how to correctly interpret the results. Suitable NDT software can help overcome examiner fatigue and loss of concentration. \n\nWhere ultraviolet radiation, ionizing radiation, or X-rays are used, operators should wear personal protective equipment, and use suitable filters and lenses. Even when more benign techniques such as ultrasonic or eddy current testing are employed, the testing environment itself can pose a risk. For example, changing probes without shutting down the system can cause sparks and explosions. Testing environments must be clean and free of clutter. Compressed gases (sulphur hexafluoride, acetylene, and nitrous oxide) commonly found at NDT facilities must be handled properly. \n\nFor magnetic particle testing, operator should use a local exhaust or at least wear respiratory protective equipment. For penetrant inspection, avoid skin contact, and keep away from food, drinks and smoking materials. For radiography, magneto-inductive and eddy current testing, cordon off the area so that personnel don't unintentionally expose themselves to radiation. \n\n\n### What are some real-world catastrophes that could've been prevented with NDT?\n\nIn 2009, one of the pressure vessels at NDK Crystal in Belvidere, Illinois violently ruptured. Later investigations confirmed that the company had failed to do NDT on the inside of the vessel that had been damaged due to corrosive chemicals. The inside wall had experienced stress corrosive cracking. \n\nThe Columbia Space Shuttle disaster of 2003 killed seven astronauts onboard. It was caused by a falling piece of foam that then damaged the left wing. Following an investigation, it was deemed that more extensive NDT on the foam and wings could have prevented this disaster. Another recommendation was to develop new NDT techniques to complement destructive testing methods. \n\nDuring the 1940s, the U.S. mass produced 2710 Liberty ships, some completed within five days. But 12 ships broke in half, later attributed to tiny fractures in the steel. Modern NDT techniques could have ascertained the quality of the steel. \n\n\n### What certifications are available for NDT professionals?\n\n**ISO 9712:2021** is the main standard for third-party qualification and certification of NDT personnel. Methods included in its scope are acoustic emission, eddy current, leak, magnetic, penetrant, radiographic, strain gauge, thermographic, ultrasonic and visual. It's an evolution of two earlier standards: EN 473 and ISO 9712. ISO 9712 has been adopted in the U.S. 
as ANSI/ASNT CP-106.\n\n**EN ISO 20807** establishes a system for the qualification of personnel who perform NDT applications of a limited, repetitive or automated nature. **ISO TS 11774** is a performance-based qualification. It's applicable for safety critical applications where even third-party certification (ISO 9712:2021) may not suffice. \n\nFor employer-based certification, we have **ANSI/ASNT CP-189**. Particular to aerospace, we have AIA NAS 410:2014 and EN 4179:2017. \n\nCertification under these standards includes training, work experience under supervision, and passing a written and practical examination conducted by the independent certification authority.\n\nFor a complete list of all ISO NDT standards covering requirements on testing equipment, visit ISO's 19.100: Non-destructive testing.\n\n## Milestones\n\n1868\n\nEnglishman S.H. Saxby first recorded NDT in his journal, \"Engineering,\" a method of detecting cracks in gun barrels using magnetic inductions. \n\n1895\n\nAfter the discovery of radiography Wilhelm Conrad Röntgen described the properties of radiography a type of electromagnetic radiation. \n\n1912\n\nThe Englishman Richardson claimed the identification of icebergs by ultrasound in his patent after titanic sink.\r Until 1912, the discovery of X-rays was mostly used in medical and dentistry fields. \n\n1920\n\nWilliam Hoke discovered the magnetic particles can be used to locate the defects using magnetism. \n\n1929\n\nRussian Sokolov studied the use of ultrasonic waves in detecting metal objects. Victor de Forest and Foster Doane used the ultrasonic waves in real industrial applications. \n\n1930\n\nRichard Seifert developed X-ray technology so that it is more powerful and precise than before. \n\n1932\n\nGiraudi an Italian built a magnetic particle crack detector named as \"Metalloscopio.\" \n\n1940\n\nFirestone developed pulsed ultrasonic testing using a pulse echo testing, and fluorescent or visible dye is added to the oil used to penetrate test objects. \n\n1950\n\nThe Schmidt Hammer (also known as \"Swiss Hammer\") is invented. The instrument uses the world's first patented non-destructive testing method for concrete. Also, J. Kaiser introduces acoustic emission as an NDT method.\n\n2008\n\nNDT in Aerospace Conference was established DGZfP and Fraunhofer IIS hosted the first international congress in Bavaria, Germany.\n\n2020\n\nIndian Society for Non-destructive Testing (ISNT) Accreditation Certification from NABCB for Qualification and Certification of NDT Personnel as per ISO 9712:2012.","meta":{"title":"Non-Destructive Testing","href":"non-destructive-testing"}} {"text":"# CSS Specificity\n\n## Summary\n\n\nAssume an element on a web document is targeted by two different CSS selectors. Each selector applies different styling to that element. The selector with the higher *specificity* will take effect. CSS defines clear rules to determine the specificity of CSS selectors. \n\nA good understanding of the rules of CSS specificity can enable developers write better selectors and troubleshoot problems in their stylesheets. Otherwise, developers may end up misusing `!important` in their CSS declarations, thus making code more difficult to maintain in the long run.\n\n## Discussion\n\n### What are the rules to determine CSS specificity?\n\nA CSS selector will typically use ID, class, element/tag name or a combination of these. In addition, some selectors will use element attributes, pseudo-classes and pseudo-elements. 
Specificity is computed by counting each of these and ranking them. IDs have more importance than classes, pseudo-classes and attributes; the latter have more importance than elements and pseudo-elements. Inline style, specified as attribute `style=\"…\"`, has highest importance. The counts are concatenated to arrive at the final specificity. A selector with a higher number to the left implies higher specificity. \n\nIf two selectors have the same specificity, one that appears later in the stylesheet is applied. Universal selector `*` and combinators (`~`, `>`, `+`) are ignored. Pseudo-class `:not()` is not counted but selectors inside it are counted. \n\nAlthough specificity is calculated for selectors, styles are applied based on individual property-value declarations. Regardless of specificity, declarations with `!important` have highest importance. If conflicting declarations contain `!important`, higher specificity wins. \n\nIf a selector has no declaration for a specific property, inherited value (if available) or initial value is applied. \n\n\n### Could you explain CSS specificity with an example?\n\nGiven an ID, three classes and two elements, specificity is 0-1-3-0. This has higher specificity than a selector with just five classes, where specificity is 0-0-5-0.\n\nConsider HTML content `
<p class=\"foo\" id=\"bar\">Lorem ipsum.</p>
`. Consider CSS rules `p { color: red; }`, `.foo { color: green; }` and `#bar { color: blue; }`. All three selectors target the paragraph but the last selector has highest specificity. Hence, we'll see blue-coloured text. This can be understood by calculating the specificity: \n\n + `p`: one element: 0-0-0-1\n + `.foo`: one class: 0-0-1-0\n + `#bar`: one ID: 0-1-0-0Since 0-1-0-0 > 0-0-1-0 > 0-0-0-0, `#bar` selector has the highest specificity. \n\nIf we have `p { color: red !important; }`, we'll have red-coloured text. Specificity is ignored. \n\nSuppose we introduce inline styling, `
<p style=\"color: …\" class=\"foo\" id=\"bar\">Lorem ipsum.</p>
`. This will take precedence, unless there's an earlier declaration with `!important`. \n\nSuppose we have two classes `
<p class=\"foo hoo\">Lorem ipsum.</p>
` styled with `.hoo { color: yellow; }`. Specificity is the same for both `.foo` and `.hoo`. If `.hoo` appears later in the stylesheet, we'll have yellow-coloured text. When specificity is same, order matters. \n\n\n### How is specificity affected by cascading order?\n\nConsider HTML content `
<p class=\"foo\" id=\"bar\">Lorem ipsum.</p>
`. Suppose the author defines `.foo { color: green; }` and the user defines `#bar { color: blue; }`. User-defined styles are typical for accessibility reasons. The latter has higher specificity but the former declaration is used; that is, text is rendered in green. To understand this, we need to understand the concept of **origin**. \n\nCSS styles can come from different origins: user (reader of the document), user agent (browser), or author (web developer). The standard defines precedence of the origin. This is applied first before specificity is considered. The order also considers declarations that include `!important`. \n\nPrecedence in descending order is Transition declarations, Important user agent declarations, Important user declarations, Important author declarations, Animation declarations, Normal author declarations, Normal user declarations, and Normal user agent declarations. \n\n\n### What are some examples of CSS specificity calculation?\n\nHere we share a few examples: \n\n + `ul#nav li.active a`: `#nav` is the ID, `.active` is the class, and three elements `ul`, `li` and `a` are used. Specificity 0-1-1-3.\n + `body.ie7 .col_3 h2 ~ h2`: Two classes `ie7` and `.col_3`, and three elements `body`, `h2` and `h2` are used. `~` is not counted. Specificity 0-0-2-3.\n + `
  • `: Has inline style attribute. Specificity 1-0-0-0.\n + `ul > li ul li ol li:first-letter`: Apart from six elements, `:first-letter` is a pseudo-element. `>` is not counted. Specificity 0-0-0-7.\n + `li:nth-child(2):hover`: Both `:nth-child(2)` and `:hover` are pseudo-classes. Specificity 0-0-2-1.\n + `.bar1.bar2.bar3.bar4`: This is a combined selector with four classes. Specificity 0-0-4-0.\n\n### What are some tips for managing CSS specificity?\n\nUse IDs to increase specificity. For example, `a#foo` has higher specificity compared to `a[id=\"foo\"]`: 0-1-0-1 > 0-0-1-1.\n\nWhen two selectors have same specificity, the selector defined second takes precedence. The use of `!important` overrides specificity. If two declarations have `!important`, the second one wins. In any case, avoid the use of `!important`. \n\nFor link styling, define CSS rules in the order of Link, Visited, Hover, Active (LVHA). \n\nIn terms of online resources, developers can consult handy cheat sheets on CSS specificity at Stuff & Nonsense or at Standardista. There's also an online calculator.\n\n\n### I find CSS specificity confusing. Is there a simpler way?\n\nOne approach is to adopt a naming convention such as **Block-Element-Modifier (BEM)**. By defining CSS classes for design components (called blocks), scope of a class is \"limited\" to that block. \n\nIn BEM, selectors use only classes. IDs and elements are avoided in selectors. Combined selectors (of the form `.foo.bar`) are avoided. Nested selectors (of the form `.foo .bar`) are allowed but discouraged in BEM. Since selectors are just classes, it's easier to determine specificity. Selectors can be reused more easily. \n\n**CSS Modules** offers a modern way to restrict the scope of CSS declarations. Via tooling, this automatically renames the selectors. This can be configured to adopt BEM naming approach. For example, a menu component is styled with `.Menu_item--3FNtb`. If it appears within a header, the style changes to `.Header_item--1NKCj`. Although both have same specificity, the latter occurs later in the stylesheet. \n\nOne alternative to BEM is Enduring CSS (ECSS). Partly inspired by BEM, ECSS promotes the use of single class selectors. IDs and tag names are not used in selectors. Nested selectors are allowed. \n\nOther alternatives include Atomic CSS (ACSS), Object-Oriented CSS (OOCSS), Scalable and Modular Architecture for CSS (SMACSS). \n\n\n### Is CSS specificity still relevant in modern JavaScript frameworks?\n\nAmong the well-known JavaScript frameworks are Angular, React and Vue. All of these offer ways to style documents in a modular fashion. This is in contrast to the default global scope of CSS selectors and declarations. In Angular and React, a **styled component** would have declarations that are applicable only to that component. In Vue, **scoped CSS** achieves the same effect. \n\nThis idea of combining JS and CSS has been formalized under the term **CSS-in-JS**. However, one disadvantage is that we're combining HTML structure and styling into a component file. Although this isolates one component from another, it also makes styles harder to reuse across components. \n\nAlthough CSS declarations are restricted to their components, specificity can't be ignored. Specificity still applies within the component but a lot easier to manage.\n\n## Milestones\n\nDec \n1996\n\nW3C publishes **CSS1** as a W3C Recommendation. This clarifies the precedence when conflicting rules target the same element. 
It also explains how to calculate the specificity of selectors. \n\n2000\n\nThere's no exact date when developers recognize the importance of CSS specificity. Probably around 2000 (plus or minus a few years), as websites and stylesheets start to grow in complexity, developers have a hard time debugging and maintaining their code. It's at this point that they begin to learn and understand CSS specificity.\n\nAug \n2002\n\nW3C publishes the first draft of **CSS2.1**, which is a revision of CSS2 from 1997. This clarifies the **cascading order**. In descending order, it's user important, author important, author normal, user normal and user agent declarations. In CSS1, precedence in descending order was author, reader and user agent declarations. \n\nJan \n2006\n\nDeveloper Keegan Street open sources a JavaScript module that, given a selector, calculates and return its specificity. This is a command-line tool. For convenience, an online version of this calculator is available.\n\nJan \n2013\n\nIn W3C Working Draft titled *CSS Cascading and Inheritance Level 3*, cascading order is updated. This document becomes a W3C Candidate Recommendation in April 2015.","meta":{"title":"CSS Specificity","href":"css-specificity"}} {"text":"# Naming Conventions\n\n## Summary\n\n\nIn general, code is written once but read multiple times, by others in the project team or even those from other teams. Readability is therefore important. Readability is nothing more than figuring out what the code does in less time. \n\nAmong the many best practices of coding, is the way variables, functions, classes and even files are named in a project. A common naming convention that everyone agrees to follow must be accompanied by consistent usage. This will result in developers, reviewers and project managers communicate effectively with respect to what the code does. \n\nWhile there are well-established naming conventions, there's no single one that fits all scenarios. Each programming language recommends its own convention. Each project or organization may define it's own convention.\n\n## Discussion\n\n### Why should we have a naming convention and what are its advantages?\n\nNaming conventions are probably not important if the code is written by a single developer, who's also the sole maintainer. However, typical real-world projects are developed and maintained by teams of developers. Particularly in the context of open source projects, code is shared, updated and merged by many individuals across organizations and geographies. Naming conventions are therefore important.\n\nNaming conventions result in improvements in terms of \"four Cs\": communication, code integration, consistency and clarity. The idea is that \"code should explain itself\". At code reviews, we can focus on important design decisions or program flow rather than argue about naming. \n\nNaming conventions lead to predictability and discoverability. A common naming convention, coupled with a consistent project structure, makes it easier to find files in a project. \n\nIn short, naming convention is so important that Phil Karlton is said to have said, \n\n> There are only two hard things in Computer Science: cache invalidation and naming things.\n\n\n### What are some common naming conventions used in programming?\n\nAmong the common ones are the following:\n\n + **Camel Case**: First letter of every word is capitalized with no spaces or symbols between words. Examples: `UserAccount`, `FedEx`, `WordPerfect`. 
A variation common in programming is to start with a lower case: `iPad`, `eBay`, `fileName`, `userAccount`. Microsoft uses the term Camel Case to refer strictly to this variation.\n + **Pascal Case**: Popularized by Pascal programming language, this is a subset of Camel Case where the word starts with uppercase. Thus, `UserAccount` is in Pascal Case but not `userAccount`.\n + **Snake Case**: Words within phrases or compound words are separated with an underscore. Examples: `first_name`, `error_message`, `account_balance`.\n + **Kebab Case**: Like Snake Case, but using hyphens instead. Examples: `first-name`, `main-section`.\n + **Screaming Case**: This refers to names in uppercase. Examples: `TAXRATE`, `TAX_RATE`.\n + **Hungarian Notation**: Names start with a lowercase prefix to indicate intention. Rest of the name is in Pascal Case. It comes in two variants: (a) *Systems Hungarian*, where prefix indicates data type; (b) *Apps Hungarian*, where prefix indicates logical purpose. Examples: `strFirstName`, `arrUserNames` for Systems; `rwPosition`, `pchName` for Apps.\n\n### What are the categories of naming conventions?\n\nIt's common to categorize a naming convention as one of these:\n\n + **Typographical**: This relates to the use of letter case and symbols such as underscore, dot and hyphen.\n + **Grammatical**: This relates to the semantics or the purpose. For example, classes should be nouns or noun phrases to identify the entity; methods and functions should be verbs or verb phrases to identify action performed; annotations can be any part of speech; interfaces should be adjectives.Grammatical conventions are less important for variable names or instance properties. They are more important for classes, interfaces and methods that are often exposed as APIs. \n\n\n### What's the typical scope of a naming convention?\n\nNaming convention is applicable to constants, variables, functions, modules, packages and files. In object-oriented languages, it's applicable to classes, objects, methods and instance variables. \n\nWith regard to scope, global names may have a different convention compared to local names; such as, Pascal Case for globals: `Optind` rather than `optind` in gawk. Private or protected attributes may be named differently: `_secret` or `__secret` rather than `secret`. Some may want to distinguish local variables from method arguments using prefixes. Python's PEP8 doesn't give a naming convention to tell apart class attributes from instance attributes but other languages or projects might define such a convention. \n\nIn general, use nouns for classes, verbs for functions, and names that show purpose for variables, attributes and arguments. Avoid (Systems) Hungarian notation in dynamically typed languages since data types can change. Use lower Camel Case for variables and methods. Use Pascal Case for classes and their constructors. Use Screaming Case for constants. \n\n\n### What word separators are used in naming conventions and who's using what?\n\nWith Camel Case and Pascal Case, there are no word separators. Readability is achieved by capitalizing the first letter of each word. Languages adopting this include Pascal, Modula, Java and .NET. \n\nWith Snake Case, underscore is the separator. This practice is common in Python, Ruby, C/C++ standard libraries, and WordPress. \n\nWith Kebab Case, hyphen is the separator. Languages using this include COBOL, Lisp, Perl 6, and CSS. Since hyphen in many languages is used for subtraction, Kebab Case is less common than Snake Case. 
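As a quick illustration of these separator and capitalization styles (a throwaway sketch, not tied to any particular style guide), the following Python snippet renders one identifier phrase in several of the conventions just described:

```python
def to_cases(words):
    """Render a list of lowercase words in several naming conventions."""
    return {
        "snake_case": "_".join(words),
        "kebab-case": "-".join(words),
        "SCREAMING_CASE": "_".join(words).upper(),
        "PascalCase": "".join(w.capitalize() for w in words),
        "camelCase": words[0] + "".join(w.capitalize() for w in words[1:]),
    }

for style, name in to_cases(["user", "account", "balance"]).items():
    print(f"{style:15} {name}")
```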
\n\nIt's common for a language to adopt different case styles for different contexts. For example, a PHP convention named PSR-1 adopts the following: `PascalCase` for class names, `UPPER_SNAKE_CASE` for class variables and `camelCase` for method names. \n\n\n### Where can I find published naming conventions for different languages?\n\nPython defines its naming convention as part of PEP8. Beginners can read a summary. \n\nRust's conventions are summarized in its official documentation. It uses Camel Case for type constructs and Snake Case for value constructs.\n\nGoogle offers it's own naming convention for Java and for R.\n\nKotlin follows Java's conventions and they're summarized in its documentation. \n\n.NET naming conventions are available online. \n\nResources in Android projects are defined in XML but without namespaces. This makes it difficult to find stuff. One suggested XML naming convention solves this problem. \n\nFor user interface design and names in CSS, BEM Methodology is worth studying. \n\n\n### Are there exceptions to following a common naming convention?\n\nFollowing a common naming convention is beneficial. However, it may be okay to relax the rules in some cases. When a name is highly localized, lacks business context or used within a few lines of code, it may be acceptable to use a short name (`fi` rather than `CustAcctFileInfo`). \n\nWhen a project is written in multiple languages, it's not possible to have a single naming convention. For each language, adopt the naming convention prevalent in that language. Another example is when you're using a third-party library. Follow their convention for consistency and readability. \n\nUltimately, naming conventions should not be enforced blindly. We should be sensitive to the context of use. This is partly because IDEs come to the aid of developers. They can highlight the name in all places it occurs. The practice of prefixing member and static variables with `m` and `s` respectively is also not in favour since IDEs colour them differently. \n\n\n### Could you give some tips to create a naming convention?\n\nWithout being exhaustive here are some things to consider:\n\n + Reveal intentions: `fileName` is better `f`; `maxPugs` is better than `pugs`.\n + Make distinctions: `moneyInDollars` is better than `money`.\n + Put the distinguishing aspect first: `dollarMoney` and `rupeeMoney` are better than `moneyInDollars` and `moneyInRupees`.\n + Easy to pronounce: `timeStamp` is better than `ts`.\n + Verbs for functions: `getName()` and `isPosted()` are good; `hasWeight()` or `isMale()` when boolean values are returned; `toDollars()` for conversions.\n + One word, one concept: `fetch`, `retrieve`, `get` all imply the same thing: use one of them consistently.\n + Relate to business context: `AddCustomer` is better than `IncrementCounter`.\n + Use shortforms judiciously: `PremiumCust` may be used over `PremiumCustomer` to emphasize \"Premium\"; but `fn` is not a good substitute for `fileName`.\n + Describe content rather than storage: `user_info` is better than `user_list`.\n + Plurals for containers: `fruitNames` is better than `fruit` for an array of fruit names.\n + Describe content rather than presentational aspects: in CSS, for example, `main-section` is better than `middle-left-and-then-a-little-lower` as identifier name.\n\n\n## Milestones\n\n1813\n\nAn early use of Camel Case, more formally called **medial capitals**, starts in chemistry. Swedish chemist Jacob Berzelius invents it to represent chemical formulae. 
For example, Sodium Chloride is easier to read as `NaCl` than `Nacl`. \n\n1960\n\nThe use of **Snake Case** can be traced to the late 1960s. In the 1970s, it's used in C language, while Pascal uses what is later called **Pascal Case**. However, the name Snake Case itself is coined in the early 2000s. \n\n1970\n\nThe 1970s is when **Camel Case** started becoming common in the world of computer programming. However, the name itself is coined years later (in 1995) and is attributed to Newton Love. \n\n1980\n\nIn the 1980s, Charles Simonyi, a Hungarian working at Microsoft, invents the notation of using lowercase prefixes to names to indicate what the name referred to. Thus is born (Apps) **Hungarian Notation**. Unfortunately, some mistake Simonyi's idea and use prefixes to indicate data types. This is born Systems Hungarian. While Apps Hungarian is useful, Systems Hungarian is not, particularly in type safe languages. When Microsoft start working on .NET in the late 1990s, they recommend not using Hungarian notation. \n\nDec \n2012\n\nR language doesn't officially specify a naming convention. Therefore multiple conventions exist. An analysis of 2668 R packages downloaded from CRAN shows a lack of consistency. In fact, 28% of packages use three or more conventions.","meta":{"title":"Naming Conventions","href":"naming-conventions"}} {"text":"# Search Engine Optimization\n\n## Summary\n\n\nUsers online use search engines to find information or resources. If you're the owner of a website, you always want to bring more visitors to your site. If search engines give better visibility to your site in response to relevant search queries, this would translate to more visitors. But how do you tell search engines that your site is better than others? This is where **Search Engine Optimization (SEO)** plays an important part. \n\nSEO is the process of applying a set of techniques so that search engines rank your site or page higher, without you paying them to do so. We could optimize on content, organization, design, performance, etc. To know what to optimize is not obvious and requires a good understanding of the many factors that search engines use to rank web pages.\n\n## Discussion\n\n### What are some important factors that search engines use for ranking web pages?\n\nAs of December 2018, it's recognized that Google uses more than 200 ranking factors. These can be categorized into domain related, site level or page level factors. For example, a keyword appearing in domain name can boost ranking. At site level, content uniqueness, content freshness, site trust rank, site architecture, use of HTTPS, and usability are some factors. At page level, use of keywords in title tag, content length, topic coverage, and loading speed are some factors. \n\nSearch results that get more clicks may be ranked better in future. Bounce rate may be used to adjust ranking. Repeat visits to a page and higher dwell time (time spent on a page) may boost ranking. Likewise, pages with lots of comments may get a boost. \n\nSearch engines may also apply algorithmic rules. For example, priority may be given to recently published pages due to freshness. For diversity, results may include different interpretations of the keyword. User's search history is used to give contextual results. Geo-targeted matches may be given priority for some queries. 
\n\n\n### If there's only one important SEO technique, what would it be?\n\nIn the words of Google SEO Starter Guide 2017, \n\n> Creating compelling and useful content will likely influence your website more than any of the other factors discussed here.\n\nContent comes first. It should be unique and of high quality. It should add value to those who visit your site. When it's backed by great design, we can achieve great user experience across devices. This means that aiming for higher search ranking is not the primary goal. The focus should be on content and user experience. Every other SEO technique should not stray from this focus. \n\nApart from this, SEO is about keywords, links, relevance, reputation and trust. \n\n\n### What's the meaning of link building for SEO?\n\nLink building is the process of getting more links to your own webpages. Incoming links, also called **backlinks**, is an important SEO signal that leads to better ranking. Total number of links, number of linking root domains, number of links from separate C-class IPs are all important. Age and authority of pages or domains linking to your site are also important. The anchoring text used for a backlink is also relevant. \n\nThe recommended approach is to earn backlinks rather than buy them. Earning backlinks is a slow process but this builds your site's reputation in a natural way for better long-term results. To earn backlinks, create unique high-quality content, promote your content, get positive reviews from influencers, and partner with relevant sites without resorting to link scheming. \n\n**Internal link building** is something on which you have greater control. Build internal links based on keyword research and information architecture of your site. Map each page to keywords and user intent. Use this to create an SEO wireframe to help you build internal links. Use navigation bars, menus and breadcrumbs to link to key pages. \n\n\n### How can I make use of social signals for better SEO?\n\nSocial signals are basically user engagement on social media platforms. These include likes, shares, comments, and so on. \n\nSocial signals are important for ranking in Bing search engine. Google Search has said that social signals don't directly influence ranking. However, research has shown good correlation between ranking and social signals. This is because social signals amplify other SEO factors that in turn affect ranking. \n\nIt therefore makes sense to optimize your social presence to indirectly affect ranking. Your profile and branding should be consistent across platforms. Link your profiles to your site. Post regularly across platforms. Post per day about 15 tweets, 1 LinkedIn post, 1 Facebook post, and 2 Instagram posts. Use viral headlines and images in posts. Use hashtags, which are essentially keywords. Explicitly ask for shares. On your web pages, use social media plugins to make sharing easier. For best possible engagement, share only your best posts. \n\n\n### What should web designers keep in mind for better SEO?\n\nDesigners should **design for SEO** rather than just user experience (UX). One approach is to make good use of interactions (such as mouseover or mouse click) and expandable elements. For example, page can load with a minimalistic view but via interactions give more information that's also crawlable. \n\nImage-based banners and call-to-action elements are not crawlable. Image's `alt` attribute is plain text and of limited length. 
Instead, go for crawlable **text-based design** using webfonts, HTML and CSS, along with schema markup. \n\nDesign should be responsive. More than just fitting to different screen sizes, aim for consistent UX across devices. For example, have a fluid design that maintains proportions. Google's Mobile-Friendly Test can inform if your site's well designed for mobiles. \n\nUse **header tags** to organize content both for visitors and search engines. Using header tags in sidebars, header and footer is not a good SEO practice. Pop-ups and banners are penalized in mobile SEO. Due to voice interfaces, prioritize your design to be heard rather than seen. \n\n\n### What is meant by black hat SEO versus white hat SEO?\n\n**White Hat SEO** is when an SEO professional doesn't deviate from the rules defined by search engines. Getting results via white hat SEO takes time but they also last longer. Sites using only white hat SEO will not be penalized by search engines since they play by the rules. \n\n**Black Hat SEO** is when an SEO professional try to game the system to get better rankings. Search engine guidelines and rules are flouted. Black hat techniques may give quick results but there's a risk of getting penalized. However, we should point out that breaking search engine rules is not illegal. \n\nBuying links is a black hat technique. Automatically following someone who follows you on social media is another one. Cloaking is a black hat practice of showing one version of a web page to search crawlers and another version to normal visitors. Another one is keyword stuffing where irrelevant keywords are forced into a page. \n\nIn practice, no site is 100% white hat. When white hat SEO professionals try to acquire links, they're crossing into \"gray hat\" territory. \n\n\n### Are there SEO techniques that I should avoid?\n\nGoogle has shared some techniques to avoid: automatically generated content, scraped content, pages with little original content, cloaking, sneaky redirects, hidden texts or links, doorway pages, using irrelevant keywords, abusing rich snippets markup, sending automated queries to Google, etc. Too many ads on a page lowers the ranking. \n\nBing has shared a similar list to avoid: link buying or spamming, social media schemes, meta refresh redirects, duplicated content, keyword stuffing and misleading markup. \n\nDon't optimize on a single keyword since search engines focus on search intent. Don't design for just desktops. Make your design mobile friendly, fast, and responsive. Don't focus on quantity by duplicating or copying content; quality of your content is more important. Don't put important content in formats that most likely won't be crawled, such as, PDF files or images. Worse still, using `noindex` in meta tags or robots.txt file wrongly can tell crawlers to ignore your content. \n\nFinally, don't think that SEO is a one-time task. Keep adding new content to your site. If you change your site's structure, properly redirect old URLs. Stay updated on new SEO trends. 
\n\n\n### What are some common myths about SEO?\n\nHere are some SEO myths and clarifications of the same:\n\n + Boost your rankings overnight by hiring an SEO agency: improving rankings takes time.\n + Guest blogging is obsolete: avoid spamming blogs and publish quality content.\n + SEO is a one-time effort: it's a continuous process.\n + Link building is dangerous: grow links naturally without being manipulative.\n + CTR is no longer relevant: they still influence ranking but clicks from bots will be penalized.\n + Keywords and keyword research are dead: they're important but think in terms of context and search intent; keyword ratio is useless.\n + Keyword-optimized anchor text is bad: it's bad if it's overoptimized; hence aim for diversity (natural, brand, URL, generic).\n + Paid rankings will improve organic rankings: they're in fact treated separately.\n + XML sitemap improves rankings: it doesn't but search engines will index the site faster.\n + Meta tags are irrelevant: they don't affect rankings but they help search engines understand your site better.\n + H1 tags are important for rankings: not really but use them to organize your content and help users navigate more easily.\n\n### What are some tools that help with SEO?\n\nGoogle Search Console is an essential tool. Formerly called Google Webmaster Tools, it provides rankings and traffic reports for keywords and pages. \n\nFor backlink analysis, AHREFs and Majestic are useful. With these, we get to know who's linking to our site or to sites of our competitors. With this information, we can plan for link building. Similar tools include Buzzsumo, FollowerWonk, and Little Bird. \n\nFor keyword research, use Google's Keyword Planner, although this is useful only for paid search. For organic search, consider using Moz Keyword Explorer tool and SEMRush’s Keyword Magic Tool. Google Trends is also useful for competitive analysis on keywords. \n\nSince performance is an SEO factor, use Google PageSpeed Insights, Pingdom, or WebPageTest to know areas where performance can be improved. For local SEO, use Moz Local or Whitespark. \n\nSEO platforms bring together many tools for analyzing and optimizing our site. Moz, BrightEdge, Searchmetrics, Linkdex, and SEO PowerSuite are some examples. \n\nTo know if SEO is giving better results, use Google Analytics. \n\n## Milestones\n\n1990\n\nThe history of SEO is naturally tied to the history of **search engines**. What's probably the world's first search engine, Archie is released in 1990. The World Wide Web Wanderer, later called Wandex, is released in 1993 as one of the first web crawlers. By 1994, many search engines are operational including Alta Vista, Infoseek, Lycos, and Yahoo. Google itself is founded in 1998 but it started in 1996 with the name BackRub. \n\n1997\n\nDanny Sullivan launches *Search Engine Watch*, a site for news on search industry and tips for better ranking. \n\n1998\n\nFor sponsored links and paid search, Goto.com is launched. Advertisers bid on the site to rank higher than organic results. In fact, organic results were bad since search engines didn't succeed against black hat practices. The alternative was paid search or getting listed on popular directories such as Yahoo and DMOZ. \n\n1999\n\nThe first conference focused on search marketing, called *Search Engine Strategies*, takes place. \n\n2000\n\n**Google Toolbar** becomes available within Internet Explorer browser. This allows SEO practitioners to see their PageRank score. 
Meanwhile, some folks start sharing SEO related information at a London pub. This later evolves into *Pubcon*, a regular search conference. \n\nNov \n2003\n\nWith Google leading the search market, it releases the Florida update. Sites that were earlier ranked higher due to **keyword stuffing** (a black hat practice), lose their ranking. The Florida update gives better ranking for quality content and authentic backlinks. This partly solves the problem created earlier by Blogger.com and Google's AdSense that content creators gamed to make a quick buck. \n\n2004\n\nGoogle starts looking at local search intent and thus is born **local SEO**. This year is also when Google personalizes results based on search history and interests. \n\nJan \n2005\n\nGoogle, Yahoo and MSN jointly create the `nofollow` attribute to combat spammy links and comments. \n\n2011\n\nWith Google's Panda update (delivered over many months), the search engine penalizes **content farms** created solely for driving search engine results. This affected sites having scraped and unoriginal content. Panda also penalized pages with high ad-to-content ratios. \n\n2012\n\nWith the Penguin update of Google Search, sites with overoptimized anchor text, or spammy hyperlinks are penalized. \n\n2013\n\nWith the Hummingbird update of Google Search, search is based on keywords but results are more about **search intent**. This update affects 90% of searches worldwide. Design for user intent among information (to know), location (to go), action (to do), or shopping (to buy). To help search engines figure out intent, let each page have a singular purpose. \n\n2015\n\nIn April, Google Search gets a **mobile-friendly** update named Mobilegeddon. In October, Google releases an AI-powered search named *RankBrain*. It's initially used to interpret 15% of searches that Google has never seen before. In later years, RankBrain plays a more central role in all of Google searches.","meta":{"title":"Search Engine Optimization","href":"search-engine-optimization"}} {"text":"# Word Embedding\n\n## Summary\n\n\nWord embedding is simply a vector representation of a word, with the vector containing real numbers. Since languages typically contain at least tens of thousands of words, simple binary word vectors can become impractical due to high number of dimensions. Word embeddings solve this problem by providing dense representations of words in a low-dimensional vector space.\n\nSince mid-2010s, word embeddings have being applied to neural network-based NLP tasks. Among the well-known embeddings are word2vec (Google), GloVe (Stanford) and FastText (Facebook).\n\n## Discussion\n\n### Why do we need word embeddings?\n\nConsider an example where we have to encode \"the dog DET\", which is about 'dog', its previous word 'the' and whose part of speech is determiner (DET). If we represent every word and every part of speech in its own dimension, we would require a high-dimensional vector since our vocabulary will have lots of words. The vector will mostly be zeros except in three places that represent 'dog', 'the' and 'DET'. Called *One-Hot Encoding*, this a **sparse** representation. \n\nInstead, word embeddings give a **dense** representation in a lower-dimensional space. Each entity gets a unique representation in this vector space. As shown in the figure, both words have six dimensions each and the part of speech has four dimensions. The entire vector representation is now only 16 dimensions. This makes it practical for further processing. 
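A small sketch with made-up sizes can make the contrast concrete: a one-hot vector is as wide as the vocabulary and almost entirely zeros, while a dense embedding is a short row looked up from an embedding matrix (randomly initialized here, learned in practice).

```python
import numpy as np

vocab = ["the", "dog", "cat", "barks"]      # toy vocabulary
V, d = len(vocab), 3                        # vocabulary size, embedding size

def one_hot(word):
    """Sparse representation: one dimension per vocabulary word."""
    vec = np.zeros(V)
    vec[vocab.index(word)] = 1.0
    return vec

# Dense representation: each word is a short row of an embedding matrix.
# The matrix is random here; in practice its values are learned.
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(V, d))

def embed(word):
    return embedding_matrix[vocab.index(word)]

print(one_hot("dog"))   # length-V vector, mostly zeros: [0. 1. 0. 0.]
print(embed("dog"))     # length-d dense vector of real numbers
```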
\n\nMore importantly, word embeddings capture similarities. For example, even if the word 'cat' is not seen during training, it's embedding would be similar to that of 'dog'. Likewise, different tenses of the same verb are correlated. \n\n\n### Is word embedding related to distributional semantics?\n\nYes. The term \"word embedding\" has been popularized by the deep learning community. In computational linguistics, the more preferred term is **Distributional Semantic Model (DSM)**, which comes from the theory of distributional semantics. Other equivalent terms include distributed representation, semantic vector space or word space. \n\nEssentially, words are not represented as a single number or symbol. Rather, the representation is distributed in a vector space of many dimensions. The notion of semantics emerges because two words that are close to each other in the vector space are somehow semantically related. Similar words form clusters in the vector space. \n\n\n### How can we extract semantic relationships captured within word embeddings?\n\nWord embeddings are produced in an unsupervised manner. We don't inform the model anything about syntactic or semantic relationships among words. Yet, word embeddings seem to capture these relationships. For example, country names and their capital cities form a relationship. Relations due to gender or verb tense of words are other examples. \n\nTo see this in practice, consider the following vector equations: \n\n$$king\\,–\\,man\\,+\\,woman = queen\\\\Paris\\,–\\,France\\,+\\,Germany = Berlin$$\n\nThe relationship between 'king' and 'man' is same as that between 'queen' and 'woman'. This is captured in the vector space. This means that given the word vectors for Paris, France and Germany, we can find the capital of Germany. The term **word analogy** is often used to refer to this phenomenon. \n\n\n### What are some applications of word embeddings?\n\nWord embeddings have become useful in many downstream NLP tasks. Word embeddings along with neural networks have been applied successfully for text classification, thereby improving customer service, spam detection, and document classification. Machine translations have improved. Analyzing survey responses or verbatim comments from customers are specific examples. \n\nWord embeddings help in adapting a model from one domain to another, such as from legal documents to news articles. In general, this is called **domain adaptation** that's useful for machine translation and transfer learning. In addition, pretrained word vectors can be adapted to domains where large training datasets are not available. \n\nIn recommendation systems, such as suggesting a playlist of songs, word embeddings can figure out what songs go well together in a particular context. \n\nIn search and information retrieval applications, word embeddings have been shown to be insensitive to spelling errors and exact keyword matches are not required. Even for words not seen during training, machine learning models work well provided such words are in the vector space. Thus, word embeddings are being preferred over older approaches such as TF-IDF or bag-of-words. \n\n\n### What traits do we expect from a good word embedding?\n\nDifferent models give different word embeddings. A good representation should aim for the following: \n\n + **Non-conflation**: A word can occur in different contexts giving rise to variants (tense, plural, etc.). 
Embedding should represent these differences and not conflate them.\n + **Unambiguous**: All meanings of the word should be represented. For example, for the word 'bow', the difference between \"the bow of a ship\" and \"bow and arrows\" should be represented.\n + **Multifaced**: Words have multiple facets: phonetic, morphological, syntactic, etc. Representation should change when tense changes or a prefix is added.\n + **Reliable**: During training, word vectors are randomly initialized. This will lead to different word embeddings from the same dataset. In any case, the final output should be reliable and show consistent performance.\n + **Good Geometry**: There should be a good spread of words in the vector space. In general, rare words should cluster around frequent words. Frequent unrelated words should be spread out.\n\n### Which are the well-known word embeddings?\n\nOne of the first word embeddings is the **Neural Network Language Model (NNLM)** in which word embeddings are learnt jointly with the language model. Embeddings can also be learnt using Latent Semantic Analysis (LSA) or Latent Dirichlet Allocation (LDA). \n\nNNLM has high complexity due to non-linear hidden layers. A tradeoff is to first learn the word vectors using a neural network with a single hidden layer, which is then used to train the NNLM. Other log-linear models are **Continuous Bag-of-Words (CBOW)** and **Continuous Skip-gram**. An improved version of the latter is Skip-gram with Negative Sampling (SGNS). These are part of the *word2vec* implementation. \n\nCBOW and Skip-gram models use only local information. **Global Vectors (GloVe)** is an approach that considers global statistical information as well. Word-to-word co-occurrence counts are used. GloVe combines LSA and word2vec. \n\nRare words can be poorly estimated. **FastText** overcomes this by using subword information. \n\nOther models include ngram2vec and dict2vec. **Embeddings from Language Models (ELMo)** is a representation that captures sentence level information. Based on ELMo, BERT and OpenAI GPT are two pretrained models for other NLP tasks that have been proven effective. \n\n\n### What's the role of the embedding layer and the softmax layer?\n\nThe general architecture of word embeddings using neural networks involves the following: \n\n + **Embedding Layer**: Generates word embeddings from an index vector and a word embedding matrix.\n + **Hidden Layers**: These produce intermediate representations of the input. LSTMs could be used here. In word2vec, there are no hidden layers.\n + **Softmax Layer**: The final layer that gives the distribution over all words in the vocabulary. This is the most computationally expensive layer and much work has gone into simplifying this. Two broad categories are softmax-based approaches and sampling-based approaches.\n\n### What's the process for generating word embeddings?\n\nWe can use a neural network on a supervised task to learn word embeddings. The embeddings are weights that are tuned to minimize the loss on the task. For example, given 50K words from a collection of movie reviews, we might obtain a 100-dimensional embedding to predict sentiment. Words signifying positive sentiment will be closer in the vector space. Since embeddings are tuned for a task, selecting the right task is important. \n\nWord embeddings can be learnt from a standalone model and then applied to different tasks. Or it could be learnt jointly with a task-specific model. 
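As a rough sketch of the jointly-learnt case, the snippet below wires an embedding layer into a tiny sentiment classifier using Keras; the vocabulary size, embedding width and layer choices are illustrative assumptions rather than a recommended recipe:

```python
import tensorflow as tf

vocab_size = 50_000      # e.g. words drawn from a movie-review corpus
embedding_dim = 100      # each word becomes a 100-dimensional vector

model = tf.keras.Sequential([
    # Trainable weight matrix of shape (vocab_size, embedding_dim)
    tf.keras.layers.Embedding(vocab_size, embedding_dim),
    # Average the word vectors of a review into one review vector
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # positive vs negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# model.fit(review_word_ids, labels, epochs=5)    # integer-encoded reviews
# After training, the learned embeddings can be pulled out:
# embeddings = model.layers[0].get_weights()[0]   # shape (50000, 100)
```

Because the loss here is sentiment prediction, words that signal positive reviews end up near each other in the learned space, which echoes the point above about selecting the right task.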
For good embeddings, we would need to train on millions or even billions of words. An easier approach is to use pretrained word embeddings (word2vec or GloVe). They can be used \"as is\" if they suit the task at hand. If not, they can be updated while training your own model. \n\nIn biomedical NLP, it was noted that bigger corpora don't necessarily result in better embeddings. Sometimes intrinsic and extrinsic evaluation methods don't agree well. Hyperparameters that we can tune include negative sampling size, context window size, and vector dimension. Gains plateau at about 200 dimensions. \n\n\n### Could you share some practical tips for applying word embeddings?\n\nPredictive neural network models (word2vec) and count-based distributional semantic models (GloVe) are different means to achieve the same goal. There's no qualitative difference. Word2vec has proven to be robust across a range of semantic tasks. \n\nFor syntactic tasks such as named entity recognition or POS tagging, a small dimensionality is adequate. For semantic tasks, higher dimensionality may prove more effective. It's also been noted that pretrained embeddings give better results. It's been commented that 8 dimensions might suffice for small datasets and as many as 1024 dimensions for large datasets. \n\nIn 2018, selecting the optimal dimensionality was still considered an open problem. With too few dimensions, embeddings are not expressive. With too many, embeddings are overfitted and the model becomes complex. Commonly, 300 dimensions are used. \n\n\n### What are some challenges with word embeddings?\n\nModels such as ELMo and BERT capture surrounding context within word embeddings. However, word embeddings don't capture \"context of situation\" the way linguist J.R. Firth defined it in the 1960s. To achieve true NLU, we would have to combine the statistical approach of word embeddings with the older linguistic approach. \n\nMore generally, it's been said that deep learning isn't sample efficient. Perhaps we need something better than deep learning to tackle language with compositional properties. \n\nWord embeddings don't capture phrases and sentences. For example, it would be misleading to combine word vectors to represent \"Boston Globe\". Embeddings for \"good\" and \"bad\" might be similar, causing problems for sentiment analysis. \n\nWord embeddings don't capture some linguistic traits. For example, vectors for 'house' and 'home' may be similar but vectors of 'like' and 'love' are not. In general, when a word has multiple meanings, called homographs or polysemy, its vector is an average value. One solution is to consider both the word and its part of speech. Inflections also cause problems. For example, 'find' and 'locate' are close to each other but not 'found' and 'located'. Lemmatization can help before training the word vectors. \n\n\n### What software tools are available for word embeddings?\n\nBoth word2vec and GloVe implementations are available online. In some frameworks such as Spark MLlib or DL4J, word2vec is readily available. \n\nSome frameworks that support word embeddings are S-Space and SemanticVectors (Java); Gensim, PyDSM and DISSECT (Python). \n\nDeeplearning4j provides the `SequenceVectors` class, an abstraction above word vectors. This allows us to extract features from any data that can be described as a sequence, be it transactions, proteins or social media profiles. \n\nA tutorial explaining word embeddings in TensorFlow is available.\n\nYou can also download pretrained word embeddings.
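For example, Gensim's downloader can fetch pretrained GloVe vectors in a couple of lines; this sketch assumes the `gensim` package, network access, and that the `glove-wiki-gigaword-100` model name is available in the gensim-data catalogue:

```python
import gensim.downloader as api

# Downloads the 100-dimensional GloVe vectors on first use (roughly 100+ MB)
wv = api.load("glove-wiki-gigaword-100")

print(wv["bank"][:5])                   # first few components of a word vector
print(wv.most_similar("dog", topn=3))   # nearest neighbours in the vector space

# The word-analogy trick discussed earlier: king - man + woman ≈ queen
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```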
Note that many use lemmatization while learning word embeddings. \n\n## Milestones\n\n1950\n\nIn contrast to the formal linguistics of Noam Chomsky, researchers in the 1950s explore the idea that context can be useful for linguistic representation. This is based on structuralist linguistics. **Distributional Hypothesis** by Zellig Harris states that word meanings are associated with context. Another linguist John Firth states,\n\n> You shall know a word by the company it keeps!\n\n1960\n\nEarly attempts are made in the 1960s to construct features to represent semantic similarities. Hand-crafted features are used. Charles Osgood's semantic differentials is an example. \n\n1990\n\nDeerwester et al. note that words and documents in which they occur have a semantic structure. They exploit this structure for information retrieval to match documents based on concepts rather than keywords. They map words to documents as a matrix with word counts, giving a sparse representation. They attempt at the most 100 dimensions and employ the technique of Singular Value Decomposition (SVD). They coin the term **Latent Semantic Indexing (LSI)**. \n\n2003\n\nBengio et al. propose a language model based on neural networks, though they're not the first ones to do so. They use a feed-forward NN with one hidden layer. Words are represented as feature vectors. Model learns vectors and joint probability function of word sequences. However, they don't use the term \"word embeddings\". Instead, they use the term **distributed representation** of words. Note that here we're interested in similar words whereas LSI is about similar documents due to its application to information retrieval. \n\n2008\n\nCollobert and Weston show the usefulness of pretrained word embeddings. Using such word embeddings, they show that a number of downstream NLP tasks can be learned by a neural network. They consider both syntactic tasks (POS tagging, chunking, parsing) and semantic tasks (named entity recognition, semantic role labelling, word sense disambiguation). In their approach, a word is decomposed into features and then converted to vectors using lookup tables. \n\n2013\n\nAt Google, Mikolov et al. develop **word2vec** that helps in learning standalone word embeddings from a text corpus. Efficiency comes from removing the hidden layer and approximating the objective. Word2vec enabled large-scale training. Embeddings from the skip-gram model is shown to give state-of-the-art results for sentence completion, analogy and sentiment analysis. \n\n2014\n\nStanford researchers release **GloVe** word embedding. This has vectors of 25-300 dimensions learned from up to 840 billion tokens. \n\n2018\n\nConneau et al. apply word embeddings to language translation by aligning monolingual word embedding spaces in an unsupervised way. They don't require parallel corpora or character-level information. This can therefore benefit low-resource languages. They achieve better results as compared to supervised methods. 
Earlier in 2016, Sebastian Ruder published a useful survey of many cross-lingual word embeddings.","meta":{"title":"Word Embedding","href":"word-embedding"}} {"text":"# Open Source Hardware\n\n## Summary\n\n\nMany developments have contributed to the interest and growth of Open Source Hardware (OSHW): free and open source software (FOSS), 3D printing, crowdfunding, the maker movement, and Moore's Law reaching its limits.\n\nWhile OSHW is commonly thought to include electronics and mechanical designs, OSHW today has a much broader reach including fashion, furniture, musical instruments, farm machinery, bio-engineering, and more. \n\nOSHW is not a standard, nor is there a single organization tasked with leading the OSHW movement. However, the Open Source Hardware Association (OSHWA) hopes to become the hub of this movement. There's also the Open Source Hardware Definition, which forms the basis for defining licenses for OSHW.\n\n## Discussion\n\n### What's the historical context for open source hardware?\n\nIn the ham radio community sharing knowledge was a common practice. Then in the 1970s, computers were shipped as kits with schematics included. This resulted in a hacking culture among computer enthusiasts. This culture was about tinkering, experimenting, sharing, and collaborating. \n\nFOSS that started in the 1980s contributed to OSHW, although open hardware need not be about either electronics or software. The web of the 1990s made it easier to share designs and best practices. By early 2010s, OSHW became more widely known due to the following reasons:\n\n + **3D Printing**: This brought down prototyping and production costs. It made design iterations easier and faster.\n + **Maker Movement**: Started in the mid-2000s, this established magazines, platforms, and fairs/exhibitions for people to come together, collaborate and co-create. Maker labs brought essential tools under roof, gave people affordable access to these tools, and thereby democratized production.\n + **Crowdfunding**: Those with good ideas could get upfront funding from potential users of the product without depending on traditional investment routes or financial institutions.\n + **Moore's Law**: With the Law reaching its limits, there's a need to create application-specific silicon and open designs can keep costs down.\n\n### What aspects of hardware can be open sourced?\n\nOpen source hardware is about sharing the design files of hardware. This may include architecture/design drawings, schematics, PCB layout, bill of materials, HDL code, production/assembly instructions, and anything else that can enable others to replicate the hardware. \n\nThere's no use claiming open design if the file format is proprietary and can be opened only with closed tools. Thus, the definition of open source can be expanded to include design file formats and/or access to tools or software to manipulate the files. Along with original proprietary files, intermediate files in open formats should be made open. Examples of file formats include PDFs of circuit schematics, Gerbers for circuit board layouts, and IGES or STL files for mechanical objects. OpenCAD, KiCAD and Eagle are examples of open tools. \n\nIn the spirit of open source, users should be able to study the design, modify it for their specific needs, or distribute it. They also have the freedom to make and sell hardware based on the design. \n\n\n### Won't people misuse my designs and get rich at my expense?\n\nMaybe but not necessarily. 
Firstly, just because you adopted OSHW licensing, doesn't mean you cannot also sell your products commercially and be successful in the process. Even if others have access to your design, there's cost and effort involved in sourcing, manufacturing, assembling, testing, distributing and providing technical support. If you establish your own brand value, provide a quality product at a good price, it's difficult for others to compete. \n\nThe advantage of being open is that you benefit from community support and contributions. You establish an ecosystem around your product. While commercial vendors offer free reference designs, OSHW is a community effort. An open source design is likely to be more robust. It enables faster prototyping and it's continuously improved via multiple contributors. \n\nIn conclusion, the case for keeping your hardware design proprietary is weaker than for software. Investment towards manufacturing and distribution is still a barrier to entry. Hardware can also be differentiated based on the firmware that need not necessarily be open.\n\n\n### What business models are possible with OSHW?\n\nYour design may be open and freely shared but you may charge for products made from open designs. Technical support, maintenance and upgrades can be part of your services model and customers will be willing to pay for them. \n\nYou could create innovative services around open designs. You could then set up an online marketplace, aggregate and analyze data, etc. Openness becomes an incentive for customers because they have the freedom to tinker and customize. You can provide them training, resources, and additional services. \n\nYour product and its open design should be a market enabler. You can build an entire platform based on your open design. You can partner with others for peripherals that may or may not be open. In any case, a healthy ecosystem based on your open design will likely lead to better adoption and sales of your core products. This has been the case with Arduino and add-on boards called *shields*; and Raspberry Pi and add-on boards called *hats*.\n\n\n### What's the recommended licensing for OSHW?\n\nIn the days before any OSHW licenses were defined, people simply used FOSS licenses for CAD drawings or firmware. To call something OSHW, it should be completely open without restrictions. Any license that prevents commercial use is not compatible with OSHW. \n\n\"Creative\" works are protected by copyright and \"useful\" or functional works are protected by patents. Thus, copyrights don't apply to hardware. If hardware is not patented, anyone can copy, modify or build upon the hardware. But copying or modifying hardware is lot easier if the design files are available. Thus, when we talk about open licensing or copyleft for hardware, we are referring to the design files and related documentation. \n\nIn the world of software, there are plenty of licenses with different degrees of openness. Among the copyleft (share-alike or viral) licenses are GPL, CC BY-SA, CERN Open Hardware License (OHL) and TAPR Open Hardware License (OHL). Permissive licenses that allow for closed derivatives include FreeBSD, MIT, CC Attribution, and Solderpad Hardware License. \n\n\n### What are some examples of OSHW projects?\n\nWell-known examples that use CC BY-SA include Arduino, mBed HDK, BeagleBoard, Particule (formerly Spark), and Tessel. mangOH is an example that uses Creative Commons Attribution license. 
\n\nBack in 2013, some successful OSHW projects included Arduino, Raspberry Pi, OpenROV (remote-operated underwater robot), DIY Drones, LittleBits, Makerbot Replicator 2, Lasersaur, Robo3D, and Console II. \n\nNoteworthy projects of 2016 included the Global Village Construction Set (fabricate industrial machines), Open Source Beehives (bee home and sensor kits for tracking), AKER garden kits, WikiHouse (building system), FarmBot (CNC farming machine), OpenDesk (make furniture), OSVehicle, RepRap (3D printer), OpenKnit (digital knitting), Defense Distributed (3D firearms), APM:Copter, and Open Hand Project (robotic prosthetic hands).\n\nSome OSHW boards include Arduino Due, Freescale Freedom, Microchip ChipKIT Uno32, and Beaglebone Black. The Mouser website also lists dozens of other boards. Olimex offers OSHW boards including Linux-based OLinuXino boards. \n\nAt the chip level, RISC-V offers an open architecture from which customized SoCs can be designed. Others include lm32, mor1kx, and blocks from the OpenCores project. There's talk of even building an open source supercomputer. \n\n\n### What are some online resources for OSHW?\n\nDesign files, particularly for 3D printing, can be downloaded from Thingiverse, MyMiniFactory, Pinshape, and Cults. Thingiverse was launched in 2008 and it encourages folks to modify and re-upload designs to the site. Other sources are Hackster.io, Hackaday, Open Electronics and the Open Circuits Institute. \n\nFor crowdfunding, try CrowdSupply, Kickstarter, Goteo and Tindie. Adafruit and SparkFun, while selling proprietary products, also promote OSHW. Olimex, Pandaboard.org, and SolderCore are suppliers of OSHW that are also available from Mouser. \n\nIf you're open sourcing your own product, you may release the design files on your own website, on Thingiverse or similar sites. If you wish to make it open during the design and development stages, GitHub or its alternatives can be a place to share. Prusa Mendel and Mendel90 are just two examples of projects that have received lots of community contributions, what in tech speak are called \"pull requests\". \n\nElement14 has an online community forum for discussions on OSHW.\n\n## Milestones\n\nMar \n1975\n\nAt a garage in Menlo Park, California, some computer hobbyists have the first meeting of their newly formed **Homebrew Computer Club**. At such a meeting, Steve Wozniak gets inspired to build his own computer and share the blueprints with others. This leads to the Apple-1. However, Steve Jobs convinces Wozniak to sell the Apple-1 rather than share the designs freely, thus marking the birth of Apple in 1977. \n\n1997\n\nBruce Perens launches the **Open Hardware Certification Program** to allow vendors to self-certify their products as open. This implies availability of documentation so that anyone could write their own device drivers. \n\n1998\n\nMozilla releases the source code of the Netscape browser suite. Not wanting to call it free software (in the spirit of Richard Stallman's Free Software Foundation), the term **Open Source** is coined. The **Open Source Initiative (OSI)** is also formed by Eric Raymond and Bruce Perens. At this point, it's all about software, not hardware. \n\n2003\n\nHernando Barragán creates **Wiring** so that designers and artists can approach electronics and programming more easily. This work leads to the creation of **Arduino** in 2005. \n\n2005\n\nThe birth of the modern maker movement starts with the launch of **Make: magazine**.
In August, **Instructables** launches as an online platform to share step-by-step instructions to make something. \n\n2006\n\nDale Dougherty organizes the first **Maker Faire** for makers to showcase their creations. In October, as an open access DIY workshop, **TechShop** opens in Menlo Park, California. \n\n2010\n\nAt a open hardware workshop in March, some folks define the Open Source Hardware Definition 0.1. In July, v0.3 is made public. These are based on the definition of open source (from 1998). As on June 2019, Open Source Hardware (OSHW) Definition 1.0 is current. Open-Source Hardware Definition is not itself a license but OSHW licenses are written so as to be compatible with the definition. In September, the first **Open Hardware Summit** is organized, in New York City. Since then it has become an annual event. \n\n2011\n\nThe original gear logo of OSHW is selected via a community contest. A modified version of the winning logo is announced at the Open Hardware Summit. In July, the **CERN Open Hardware License (OHL)** is announced. To facilitate collaboration and sharing, CERN had already set up the Open Hardware Repository in January 2009. \n\nFeb \n2012\n\nThe first **Raspberry Pi Model B** is released as a credit-card-sized single-board computer (SBC) retailing at only $35. The idea is to make computers affordable, accessible and fun to a new generation of programmers. Within two years, 2.5 million units are sold. By 2018, 22 million are sold worldwide. Also in 2012, the **Open Source Hardware Association (OSHWA)** is formed. Certification of OSHW is also done by the Association.\n\nJul \n2018\n\nThe U.S. DARPA announces funding of $1.5 billion over five years for what it calls the Electronic Resurgence Initiative (ERI). This includes the **POSH (Posh Open Source Hardware)** project meant to create a Linux-based platform for the design and verification of open source hardware IP blocks for SoCs. \n\nMar \n2019\n\nEsperanto, Google, SiFive, and Western Digital get together to form the **CHIPS Alliance**. The purpose is to foster open-source chip designs. The alliance is committed to RISC-V architecture but wishes to encourage more such open designs.","meta":{"title":"Open Source Hardware","href":"open-source-hardware"}} {"text":"# Slim Framework\n\n## Summary\n\n\nSlim is a minimalistic PHP framework for implementing APIs. At its core, we can see it simply as a \"routing library\". It receives HTTP requests, routes them to the correct callback functions, and forwards the HTTP responses. By being minimal, Slim is fast and efficient. Unlike fullstack frameworks, developers don't end up installing lots of features that they may never use. \n\nSlim uses the concept of **middleware**. Each middleware can implement something specific. By chaining together many such middleware, complex web applications can be built. Many third-party middleware are available that implement authentication, security, view templating for generating HTML responses, caching, and more. \n\nSlim is open source and maintained by a community of developers. It can be used on its own (server-side rendering) or used alongside frontend frameworks such as React or Vue (client-side rendering).\n\n## Discussion\n\n### Why should I prefer Slim over so many other PHP frameworks?\n\nSlim is a good choice if you're looking for \"performance, efficiency, full control and flexibility\". Slim supports standard interfaces such as PSR-7 and PSR-15. This prevents vendor lock-in. 
Slim framework plays well with good software practices including SOLID principles, design patterns, security principles and dependency injection. \n\nSlim's easier to learn than fullstack frameworks such Laravel or Symfony. Fullstack frameworks provide many things out of the box: authentication, security, database integration, template engines, session management, and more. With Slim, these are outside the core framework. They're available as middleware from third-party vendors. Developers can install them via Composer only if and when their app needs them. \n\nSlim is suited for building modern RESTful APIs and services where data is exchanged in JSON or XML formats. Complex apps can also be built since the core framework can be extended with middleware. Complete web pages can be rendered on the server side with templating engines such as Twig-View or PHP-View. This brings Slim into the domain of MVC frameworks. Developers have used Slim backend with Next.js frontend. SPAs with Slim and Vue.js are possible. \n\n\n### How's the performance of Slim?\n\nIn a performance study from 2016, Slim v2.6 ranked fifth in terms of throughput (requests per second), peak memory usage and execution time. Phalcon and Ice were the best. In another study from 2017, Slim 3.0 ranked third behind Phalcon and CodeIgniter. It performed better than microframeworks Silex and Lumen. \n\nSlim's routing is based on FastRoute, which optimizes the parsing of regular expressions via chunking. \n\n\n### What are the key features of Slim?\n\nThe key features of Slim include these: \n\n + **Routing**: HTTP requests are efficiently routed to mapped callbacks. Routes are matched using regular expressions that can also capture and pass parameters into the callbacks. A callback is usually a function or a method of a class.\n + **Middleware**: Built-in or third-party middleware allow specific handling of requests and responses. Developers can also write custom middleware. The middleware approach is modular and flexible: middleware can be reordered or disabled if needed.\n + **PSR-7 Support**: HTTP messages interfaces conform to PSR-7 specifications. Slim provides an implementation of this interface but developers are free to adopt any alternative implementation.\n + **Dependency Injection**: This makes the app more testable or configurable. The interface is PSR-11 compliant.\n\n### Could you share details of how Slim middleware work?\n\nIn Slim, middleware can be visualized as concentric circles. Middleware are called from the outermost to the innermost circles. At the center, the application executes and routing takes place. Subsequently, middleware are again processed but in innermost-to-outermost circles. Another useful visualization is the LIFO stack. \n\nSuppose we have two middleware: authentication and caching. Authentication middleware is executed first. If authentication fails, app skips further processing. If authentication succeeds, then caching middleware is executed. If response exists in the cache, app uses the cache and skips the main callback processing. If cache doesn't exist, the main callback is executed that generates the response. Then caching middleware is called to store the response in the cache if necessary. Finally authentication middleware is called to complete or modify the response if needed. \n\nSlim middleware deal with PSR-7 request/response objects and PSR-15 request handler object. Middleware can be added to the app object and thus potentially called with every request. 
Or it can be added to only specific routes, in which case such middleware are called after routing is done. In both cases, the outermost middleware is added last. \n\n\n### Which are the packaged middleware in Slim?\n\nThe default Slim installation comes packaged with a few middleware:\n\n + **Routing**: Implements routing based on *FastRoute* but developers can choose to use a different implementation.\n + **Error Handling**: Useful during development but also in production where errors can be logged for later analysis.\n + **Method Overriding**: Processes the `X-Http-Method-Override` HTTP header. For example, a `POST` request with an `X-HTTP-Method-Override:PUT` header can be treated as a `PUT` request.\n + **Output Buffering**: Appends to the current response body by default but this can be changed to \"prepend\" mode.\n + **Body Parsing**: When implementing APIs, the response is often in JSON or XML. This middleware handles these formats.\n + **Content Length**: Appends the `Content-Length` header to the response.\n\n### What's the application lifecycle in Slim?\n\nA Slim app is first instantiated with the call to `Slim\\App()`. It's common to do this via the factory method `AppFactory::create()`. For each of the HTTP methods (such as GET, POST, DELETE, etc.) routes can be defined. Then middleware are added. Finally, the application is executed by calling the app object's `run()` method. \n\nApp middleware execute in outermost-to-innermost order. When execution reaches the innermost layer, the application dispatches the HTTP request to its matching route object. If this route has its own middleware, they're executed in the same outermost-to-innermost order. Then control passes from the innermost to the outermost middleware. Finally, the application serializes and returns the HTTP response. \n\n\n### How can I create routes in Slim?\n\nThe Slim framework allows us to define routes for various HTTP methods: GET, POST, PUT, DELETE, OPTIONS, PATCH. It's possible to define a route for all methods using the `$app->any()` method or for multiple methods such as in `$app->map(['GET', 'POST'])`. \n\nEach route method accepts a **pattern** that's used to match against the URI of the HTTP request. **Placeholders** within the pattern are delimited using `{}`. Moreover, placeholders can be regular expressions. Optional parts of the pattern are delimited using `[]`. For instance, `$app->get('/users[/{id:[0-9]+}]')` would match \"/users\" (since the `id` part is optional) and \"/users/23\" (since `id` is a valid positive integer). \n\nEach route method also accepts a **callback**, which is a PHP callable with three arguments: Request, Response and Arguments. The values captured by placeholders are passed as arguments. In the above example, `id` is accessed as `$args['id']`. The callback can be specified as an anonymous function, a method defined in a class, a class that implements the `__invoke()` method, etc. \n\nRoutes can be grouped to make code more maintainable. Thus, URIs `/users/{id:[0-9]+}` and `/users/{id:[0-9]+}/reset-password` can be grouped into `/users/{id:[0-9]+}`. \n\n\n### What are some online resources to get started with Slim?\n\nBeginners can start with the official Slim documentation. This introduces Slim with a simple example and then goes into the details. For the latest updates, visit the Slim blog.
To get help from the community, head to the discussion forum.\n\nFor step-by-step guides to developing APIs or apps with Slim, see the Slim Walkthrough and Rob Allen's Building Modern APIs with Slim Framework.\n\nFor ideas on how to organize a complex Slim-based app, see an example filesystem layout (slide 18) . Read about architecting a layered structure. A Slim v4 starter project is also available.\n\nCurated list of PSR-15 middleware are available at psr15-middlewares and awesome-psr15-middlewares.\n\nThose who wish to contribute the framework itself, can find Slim's codebase on GitHub.\n\n## Milestones\n\nSep \n2010\n\nJosh Lockhart invents the Slim framework with a focus on simplicity and rapid application development. He feels that alternatives such as Symfony, Cake and CodeIgniter have steeper learning curves and too big for his needs. \n\nSep \n2011\n\nThe Slim blog illustrates a simple \"Hello World\" example with the Slim framework. App is initialized with the call `$app = new Slim()`. Routes are configured using `$app->get()`, `$app->post()`, and so on. Application execution is trigger with the call `$app->run()`. All of this can be placed in the file *index.php*. Via Apache's *.htaccess* file, requests can be directed to *index.php*. \n\nApr \n2012\n\n**Slim v1.6.0** is released. Framework architecture is based on the Rack Protocol. Application-wide **middleware** is introduced. Request/response interfaces, session handling, cookie handling and logging are improved. It's also announced that Slim can now be installed via Composer. \n\nSep \n2012\n\n**Slim v2** is released. It requires at least PHP 5.3. It's PSR-2 compliant and uses **namespaces**. \n\nJul \n2013\n\n**Slim v2.3.0** is released in a backward compatible manner. Environment, request, response, view and log objects can be accessed as public properties of the application instance. Route groups are added. Headers and cookies can be easily accessed in Request objects and easily set in Response objects. \n\nDec \n2015\n\n**Slim v3.0.0** is released. New features include support for dependency injection and PSR-7. Methods expect interfaces rather than concrete implementations. By November 2019, this evolves to Slim v3.12.3. \n\nAug \n2019\n\n**Slim v4.0.0** is released. This release requires PHP 7.1 or newer. It supports PSR-15 compliant middleware. Framework is decoupled from built-in PSR-7 implementation. Thus developers can use alternatives such as Zend, Nyholm and Guzzle. Likewise, routing, error handling and response emitting implementations are decoupled from the framework core. Routing is via `Slim\\Middleware\\RoutingMiddleware`. App factory is introduced. By March 2022, this evolves to Slim v4.10.0.","meta":{"title":"Slim Framework","href":"slim-framework"}} {"text":"# Information Security Principles\n\n## Summary\n\n\nInformation can be private or public, personal or generic, valuable or commonplace, online or offline. Like any other asset, it has to be protected. This is more important online where hackers can steal or misuse information remotely even without any physical access to where that information resides.\n\nIn line with evolving technology, data security practices have evolved from high-level principles into more detailed set of practices and checklists. In practice, there's no single list of principles that everyone agrees on. 
Many lists exist, each one customized for its context.\n\n## Discussion\n\n### Which are the three main information security principles?\n\nThe three main security principles include: \n\n + **Confidentiality**: Protect against unauthorized access to information.\n + **Integrity**: Protect against unauthorized modification of information. Even if an adversary can't read your data, they can either corrupt it or selectively modify it to cause further damage later on.\n + **Availability**: Protect against denial of access to information. Even if an adversary can't access or modify your data, they can prevent you from accessing it or using it. For example, they can destroy or congest communication lines, or bring down the data server.These principles have also been called security goals, objectives, properties or pillars. More commonly, they are known as the **CIA Triad**.\n\nSecurity practitioners consider these principles important but vague. This is because they're about the \"what\" but not the \"how\". They have to be translated into clear practices based on context. They have been applied to IT infrastructure, cloud systems, IoT systems, web/mobile apps, databases, and so on. Actual practices may differ but can be related to the CIA triad.\n\n\n### What are some variations of CIA?\n\nIt's been said that the CIA Triad is focused on technology and ignores the human element. The **Parkerian Hexad** therefore addresses the human element with three more principles: \n\n + **Possession/Control**: It's possible to possess or control information without breaching confidentiality.\n + **Authenticity**: This is about proof of identity. We should have an assurance that the information is from a trusted source.\n + **Utility**: Information may be available but is it in a usable state or form?Another variation is the **McCumber Cube**. It includes the CIA Triad but also adds three states of information (transmission, storage, processing) and three security measures (training, policy, technology). \n\nOther published security principles have come from OECD, NIST, ISO, COBIT, Mozilla, and OWASP. \n\n\n### What are some means of achieving the CIA security goals?\n\nAuthorization, authentication and the use of cryptography are some techniques to achieve the CIA security goals. These have been sometimes called **Security Mechanisms**. These mechanisms are designed to protect assets and mitigate risks. However, they may have vulnerabilities that threats will attempt to exploit. \n\nConfidentiality is often achieved via encryption. Hackers in possession of encrypted data can't read it without the requisite decryption keys. File permissions and access control lists also ensure confidentiality. For integrity, a hash of the original data can be used but this hash must itself be provided securely. Alternatively, digital certificates that use public-key cryptography can be used. For availability, there should be redundancy built into the system. Backups should be in place to restore services quickly. Systems should have recent security updates. Provide sufficient bandwidth to avoid bottlenecks. \n\nPeople must be trained to use strong passwords, recognize possible threats and get familiar with social engineering methods. \n\n\n### What are some common approaches to enhancing information security?\n\nComplex systems are hard to secure. Keep the design simple. This also **minimizes the attack surface**. For example, a search box is vulnerable to SQL injections but a better search UI will remove this risk. 
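The point above is about shrinking the attack surface through better UI design; a complementary and widely used safeguard is to parameterize the query itself. Here's a minimal sketch with Python's built-in `sqlite3` module, where the table and data are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (name TEXT, email TEXT)")
cur.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

search_term = "alice' OR '1'='1"   # a typical injection attempt

# Vulnerable pattern: user input concatenated directly into the SQL string
# cur.execute("SELECT * FROM users WHERE name = '" + search_term + "'")

# Safer: a parameterized query treats the input as data, never as SQL
cur.execute("SELECT * FROM users WHERE name = ?", (search_term,))
print(cur.fetchall())   # [] -- the injection attempt matches no rows
```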
Use **secure defaults** such as preventing trivial passwords. Give users or programs the **least privilege** needed to perform their function. When failures occur, ensure they're handled with correct privileges. \n\nApply **defence in depth**. This means that multiple levels of control are better than a single one. Security at the application layer alone is not enough. Secure server access, network communications, wireless access, the user interface, and so on. Don't trust third-party services. Have a clear **separation of duties** to prevent fraud. For example, admin users shouldn't be allowed to log in to the frontend with the same privileges and make purchases on behalf of others. \n\n**Avoid security by obscurity**. This means that we shouldn't rely on hidden secrets. For example, even if source code is leaked or encryption algorithms are known, the system should remain secure. \n\nPrefer **decentralized systems** with replication to centralized ones. \n\n\n### Could you mention some threats or attacks by which hackers can compromise the security principles?\n\nSniffing data communications, particularly when the traffic is not encrypted, is an example of a breach of confidentiality. ARP spoofing is an example of sending false ARP messages so that traffic is directed to the wrong computer. Phishing is a breach of integrity since the hacker's website tricks a visitor into thinking it's the genuine website.\n\nRepeatedly sending a request to a service will overload the server. The server will become progressively slower to respond to requests and may even crash. This Denial-of-Service (DoS) attack makes the service unavailable. \n\nFor databases, SQL injection is a big threat allowing hackers access to sensitive data or extra privileges. Buffer overflow vulnerabilities can be exploited to modify data. DoS attacks are possible with databases and their servers. \n\nIn any case, record all transactions and events. This leads to better detection of intrusions and future prevention. Have a good recovery plan. Perform frequent security tests to discover vulnerabilities. \n\n## Milestones\n\n1950\n\nInformation Security or InfoSec doesn't exist in the 1950s or even in the 1960s. Security is all about physically securing access to expensive machines. Reliability of computers is the main concern. As hardware and software become standardized and cheaper, it's only in the 1970s that there's a shift from computer security towards information security. \n\n1970\n\nIn the early years of the ARPANET, the US Department of Defense commissions a study that's published by the Rand Corporation as *Security Controls for Computer Systems*. It identifies many potential threats and possible security measures. The task force was chaired by Willis H. Ware. In time, this report becomes influential and is known as the Ware Report. \n\n1972\n\nJames P. Anderson authors *Computer Security Technology Planning Study* for the USAF. This is published in two volumes. In time, this comes to be called the Anderson Report. \n\n1973\n\nMultics was a timesharing operating system that started in 1965 as an MIT research project. In the summer of 1973, researchers at MIT look at the security aspects of Multics running on a Honeywell 6180 computer system. They come up with broad security design principles. They categorize these into three categories with due credit to J. Anderson: unauthorized release, unauthorized modification, unauthorized denial. \n\n1980\n\nPrior to the 1980s, security was influenced by the defence sector.
In the 1980s focus shifts from **Confidentiality** to commercial concerns such as costs and business risks. Among these is the idea of **Integrity** since it's important for banks and businesses that data is not modified by unauthorized entities. \n\n1988\n\n**Morris Worm** becomes the first DoS attack on the Internet. Thus, **Availability** is recognized as an essential aspect of information security. \n\n1989\n\nIn the *JSC-NASA Information Security Plan* document we find the use of the term **CIA Triad**. However, the term could have been coined as early as 1986. \n\n1998\n\nTo complement InfoSec, **Information Assurance (IA)** emerges as a discipline. This is more about securing information systems rather than information alone. With the growth of networks and Internet, **Non-Repudiation** and **Authentication** become important concerns. Non-repudiation means that parties can't deny having sent or received a piece of information. \n\n2001\n\nNIST publishes Underlying Technical Models for Information Technology Security. It identifies five security objectives: Availability, Integrity, Confidentiality, Accountability and Assurance. It points out that these are interdependent. For example, if confidentiality is compromised (eg. superuser password), then integrity is likely to be lost as well. \n\n2002\n\nDonn B. Parker expands on the CIA Triad by adding three more items: authenticity, possession or control, and utility. Parker also states that it's best to understand these six principles in pairs: confidentiality and possession, integrity and authenticity, and availability and utility. In time, these six principles have come to be called **Parkerian Hexad**.","meta":{"title":"Information Security Principles","href":"information-security-principles"}} {"text":"# Apache Spark\n\n## Summary\n\n\nWhen processing large amounts of data, it's common to distribute and parallelize the workload across a cluster of machines. Apache Spark is a framework that sits between the applications above and the cluster of resources below. Spark doesn't manage the low-level storage and compute resources directly. Instead, it makes use of other frameworks such as Mesos or Hadoop. In fact, Apache Spark is described as \"a unified analytics engine for large-scale data processing\". \n\nApplications written in many popular languages can make use of Spark. Meanwhile, support for more languages is coming. Since Spark comes with many useful libraries, different types of processing are possible in a single application. Spark is being popularly used for Machine Learning workloads.\n\nSpark is open source and is managed by Apache Software Foundation.\n\n## Discussion\n\n### In which application scenarios is Spark useful?\n\nHadoop MapReduce is great for batch processing where typically data is read from disk, processed and written back to disk. But MapReduce is inefficient for multi-pass applications that read more than once. Performance drops due to data duplication, serialization and disk I/O. Apache Spark was created to solve this and is useful for the following: \n\n + **Iterative Algorithms**: This includes machine learning algorithms that by nature process data through many iterations.\n + **Interactive Data Mining**: Data is loaded into RAM once and then repeatedly queried. Interactive or ad-hoc analysis often includes visualizations. The language for querying the data must also be expressive.\n + **Streaming Applications**: For real-time analysis, data must be analyzed as they come into the system. 
There's a need to maintain aggregate state over time.Spark can be used in gaming, e-commerce, finance, advertising, and more. Many of these involve real-time analysis and unstructured data sources. Uber used Spark for feature aggregation in its ML data pipeline. Spark can help in scaling data pipelines for genomics. One researcher analyzed NASA server logs (~300MB) using database-like queries and regular expressions via Spark. \n\n\n### What makes Spark better than Hadoop MapReduce?\n\nLike MapReduce, Spark is scalable, distributed and fault tolerant, but it's also more efficient and easy to use. While MapReduce reads and writes to disk between tasks, Spark does in-memory caching and thereby improves performance. \n\nSpark does this by providing a data abstraction called **Resilient Distributed Dataset (RDD)**. Interfacing to RDD is enhanced with *DataFrame* and *Dataset* APIs. This is just one example of Spark's better usability via rich APIs and a functional programming model. \n\nMapReduce jobs are executed as JVM processes but Spark is able to execute multiple tasks inside a JVM process. Another advantage of Spark is that each JVM process lives for entire duration of the application, unlike in MapReduce where the process exits once execution completes. This means that new tasks submitted to a Spark cluster can start much faster. There's better CPU utilization. The tradeoff is that resource management is coarse grained but cluster managers can overcome this (such as YARN's container resizing). \n\nHowever, MapReduce may still be relevant for linear processing of huge datasets that can't fit into memory, particularly for join operations. \n\n\n### What's the architecture of Apache Spark?\n\nA Spark application runs in a distributed manner on a cluster of nodes. It consists of two parts: the main program called **driver program**, and **executors** or processes on worker nodes where actual execution happen. Driver program contains the *SparkContext* object for coordinating the application. \n\nA **worker node** is any node that runs application code. Such code could be JAR or Python files, for example. Application code available in SparkContext is passed to executors. SparkContext also schedules and sends tasks to executors. Each application get its own executors. This cleanly isolates one application from another. Multiple tasks can run within an executor due to multi-threading. \n\nSince the driver program schedules tasks, it should be close to the worker nodes, preferably on the same local area network. It should also be network addressable from worker nodes and listen for incoming connections. \n\nDriver program connects to a **cluster manager** that's responsible for managing resource allocation. The cluster manager could be anything: Spark's default manager, Apache Mesos, Hadoop YARN, Kubernetes, etc. The manager needs to only acquire executor processes and these communicate with one another. \n\n\n### What are some essential terms to know in Apache Spark?\n\nHere are some essential terms:\n\n + **Task**: A unit of work sent to one executor.\n + **Job**: Parallel computation involving multiple spawned tasks for some actions such as `save` or `collect`.\n + **Stage**: A smaller set of tasks of a job, useful when one stage depends on another.\n + **RDD**: A fault-tolerant collection of elements that can be processed in parallel.\n + **Partition**: A smaller chunk into which an RDD is divided. A task is launched per partition. Thus, more partitions imply greater parallelism. 
Having 2-4 partitions per CPU is typical.\n + **Transformation**: An operation performed on an RDD. Since RDDs are immutable, transformations on an RDD result in a new RDD.\n + **Action**: An operation on an RDD that returns a result to the driver program.\n\n### What's the Spark software stack?\n\nThe Spark Core is the main programming abstraction. It gives APIs (in Java, Scala, Python, SQL, and R) to access and manipulate RDDs. To ease development, Spark comes with component libraries including Spark SQL/DataFrame/Dataset, Spark Streaming, MLlib and GraphX. Each of these serves specific application requirements but they all rely on Spark Core's unified API. This approach of a modular core plus useful components makes Spark attractive to developers. \n\nUser applications sit on top of these components. The Spark Shell is an example app that facilitates interactive analysis. \n\nSpark Core itself doesn't manage cluster resources. This is done by a cluster manager. Spark comes with a standalone cluster manager but we are free to choose alternatives such as Hadoop YARN, Mesos, Kubernetes, etc. Spark Core also doesn't deal with disk storage, for which we can use Hadoop HDFS, Cassandra, HBase, S3, etc. For example, in one deployment we could choose to use YARN along with HDFS while the computing is managed via Spark Core. \n\n## Milestones\n\n2009\n\nResearchers at UC Berkeley's AMPLab create a cluster management framework called **Mesos**. To showcase how easy it is to build something on top of Mesos, they look at the limitations of MapReduce and Hadoop. These are good at batch processing but not at iterative or interactive processing, such as machine learning or interactive querying. To overcome these limitations, they build **Spark**. Spark is written in Scala and exposes a functional programming interface. \n\n2010\n\nSpark is open sourced. Its creators publish the first paper on Spark, titled Spark: Cluster Computing with Working Sets. \n\n2013\n\nSpark is incubated under the **Apache Software Foundation**. In February 2014, it becomes a top-level Apache project. \n\n2015\n\nWith the Spark 1.4 release, there's support for both Python 2 and 3. However, in 2019 it's announced that Python 2 support will be deprecated in the next major release. \n\nJul \n2016\n\nSpark 2.0.0 is released. To enable optimization, the DataFrame API was introduced in v1.3. The Dataset API introduced in v1.6 enabled compile-time checks. From v2.0, **Dataset** presents a single abstraction although language bindings that don't have type checks (Python, R) will internally use DataFrame, which is an alias for Dataset[Row]. \n\n2018\n\nIn February, Spark 2.3.0 is released. With this release, native Spark jobs can be managed by Kubernetes. Data source API V2 improves over V1. Meanwhile, a ranking of distributed computing packages for data science shows Apache Spark at the top, followed by Apache Hadoop. In the enterprise, **Apache Spark MLlib** is the most adopted for ML and Big Data analytics. This is followed by TensorFlow. \n\nApr \n2019\n\nFor using Spark from C# and F#, **.NET for Apache Spark** is launched as a preview project. Direction will come from the .NET Foundation.
This effort replaces and deprecates Mobius.","meta":{"title":"Apache Spark","href":"apache-spark"}} {"text":"# Profiling Python Code\n\n## Summary\n\n\nPython documentation defines a *profile* as \"a set of statistics that describes how often and for how long various parts of the program executed.\" In addition to measuring time, profiling can also tell us about memory usage. \n\nThe idea of profiling code is to identify bottlenecks in performance. It may be tempting to guess where the bottlenecks could be but profiling is more objective and quantitative. Profiling is a necessary step before attempting to optimize any program. Profiling can lead to restructuring code, implementing better algorithms, releasing unused memory, caching computational results, improving data handling, etc.\n\n## Discussion\n\n### What are the Python packages or modules that help with profiling?\n\nPython's standard distribution includes profiling modules by default: `cProfile` and `profile`. `profile` is a pure Python module that adds significant overhead. Hence `cProfile` is preferred, which is implemented in C. Results that come out from this module can be formatted into reports using the `pstats` module. \n\n`cProfile` adds a reasonable overhead. Nonetheless, it must be used only for profiling and not for benchmarking. **Benchmarking** compares performance between different implementations on possibly different platforms. Because of the overhead introduced by `cProfile`, to benchmark C code and Python code, use `%time` magic command in IPython instead. Alternatively, we can use the `timeit` module. \n\nTo trace memory leaks, `tracemalloc` can be used. \n\nOther useful modules to install include `line_profiler`, `memory_profiler` and `psutil`. The module `pprofile` takes longer to profile but gives more detailed information than `line_profiler`. For memory profiling, `heapy` and `meliea` may also help. `PyCounters` can be useful in production. \n\n\n### Are there visualization tools to simplify the analysis of profiler output?\n\n`vprof` is a profiler that displays the output as a webpage. It launches a Node.js server, implying that Node.js must be installed. On the webpage, you can hover over elements to get more information interactively. \n\nThe output of cProfile can be saved to a file and then converted using `pyprof2calltree`. This converted file can then be opened with KCachegrind (for Linux) and QCachegrind (for MAC and Windows). Likewise, gprof2dot and RunSnakeRun are alternative graphical viewers. \n\n*PyCallGraph* profiles and outputs the statistics in a format that can be opened by Graphviz, a graph visualization software. \n\n*nylas-perftools* adds instrumentation around code, profile it and export the results in JSON format. This can be imported into Chrome Developer Tools to visualize the timeline of execution. This can also be used in production since the app stack is only sampled periodically. \n\n\n### What are the types of profilers?\n\nThree types of profilers exist: \n\n + **Event-based**: These collect data when an event occurs. Such events may be entry/exit of functions, load/unload of classes, allocation/release of memory, exceptions, etc. `cProfile` is an example.\n + **Instrumented**: The application profiles itself by modifying the code or with built-in support from compiler. Using decorators to profile code is an example of this. 
However, instrumented code is not required in Python because the interpreter provides hooks to do profiling.\n + **Statistical**: Data is collected periodically and therefore what the profiler sees is a sample. While less accurate, this has low overhead. Examples of this include Intel® VTune™ Amplifier, Nylas Perftools and Pyflame. In contrast, we can say that `cProfile`, `line_profiler` and `memory_profiler` are *Deterministic Profilers*. In general, deterministic profiling adds significant overhead whereas statistical profiling has low overhead since profiling is done by sampling. For this reason, the latter can also be used in production. We should note that the profiling overhead in Python is mostly due to the interpreter: overhead due to deterministic profiling is not that expensive. \n\nWe can also classify profilers as **function profilers**, **line profilers** or **memory profilers**.\n\n\n### What are the IPython magic commands that help with profiling?\n\nIPython has the following magic commands for profiling: \n\n + `%time`: Shows how long one or more lines of code take to run.\n + `%timeit`: Like `%time` but gives an average from multiple runs. Option `-n` can be used to specify the number of runs. Depending on how long the program takes, the number of runs is limited automatically. This is unlike the `timeit` module.\n + `%prun`: Shows time taken by each function.\n + `%lprun`: Shows time taken line by line. Functions to profile can be specified with the `-f` option.\n + `%mprun`: Shows how much memory is used.\n + `%memit`: Like `%mprun` but gives an average from multiple runs, which can be specified with the `-r` option. Commands `%time` and `%timeit` are available by default. Commands `%lprun`, `%mprun` and `%memit` are available via the modules `line-profiler`, `memory-profiler` and `psutil`. But to use them within IPython as magic commands, mapping must be done via IPython extension files. Or use the `%load_ext` magic command. \n\nWhen timing multiple lines of code, use `%%timeit` instead of `%timeit`. This is available for `%prun` as well. \n\n\n### How to interpret the output of cProfile?\n\nThe output of cProfile can be ordered using the `-s` option. In the accompanying image, we see that the results are ordered by cumulative time in descending order. Everything starts at the module level from where the `main()` function is called. We can also see that most of this is from `api.py:request()`, `sessions.py:request()` and `sessions.py:send()`. However, we are unable to tell if these are part of the same call flow or parallel flows. A graphical viewer will be more useful.\n\nThe column \"tottime\" is the time spent within a function without considering time spent in functions called within it. We also see that the `readline()` function from `socket.py` is called many times. It takes 1.571 seconds of the 4.481 seconds of cumulative time. However, the total time within this function is 0. This means that we need to optimize functions that are called by `readline()`. We suspect it might be the read operation of the SSL socket. The entry \"4/3\" indicates that `sessions.py:send()` is called 3 times plus 1 recursive call. \n\n\n### What should I look for when analyzing any profiling output?\n\nYou should identify which function (or lines of code) is taking most of the execution time. You should identify which function is getting called most often. Look for call stacks that you didn't expect. Know the difference between *total time* spent in a function and *cumulative time*.
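As a minimal, self-contained illustration of producing and sorting such output programmatically (the profiled function here is a made-up example):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    """Deliberately naive work to give the profiler something to measure."""
    total = 0
    for i in range(n):
        total += sum(range(i % 100))
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(50_000)
profiler.disable()

# Sort by cumulative time and show the top 10 entries
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```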
The latter enables direct comparison of recursive implementations against iterative ones. \n\nWith respect to memory usage, see if there's a memory leak.\n\n\n### Could you share some tips for optimizing profiled code?\n\nKeep in mind that speeding up a function 100 times is irrelevant if that function takes up only a few percent of the program's total execution. This is the essence of profiling. Optimize only what matters. \n\nIf profiling shows that I/O is a bottleneck, threading can help. If your code uses a lot of regex, try to replace these with string equivalents, which will run faster. To avoid repeat computations, earlier results can be cached. This is called *memoization*. Where applicable, this can help in improving performance. If there are no obvious places to optimize the code, you can also consider an alternative runtime such as PyPy or moving critical parts of the code into Cython or C and calling the same from Python. Likewise, vectorize some operations using NumPy. Consider moving stuff from inner loops. Remove logging or make them conditional. If a particular function is called many times, code could be restructured.\n\nWhether during profiling or after optimizing your code, don't ignore your unit tests. They must continue to pass. Also, keep in mind that optimizing code can compromise on readability and maintainability. \n\n\n### Are there Python IDEs that come with profilers?\n\nWith IPython, Jupyter and Anaconda's Spyder, `%time` and `%timeit` can be used by default. For line profiling or memory profiling, the necessary modules need to be installed before invoking them. \n\nPyCharm Community Edition does not support profiling but the professional version supports it. In fact, cProfile, yappi and vmprof are three profilers that are supported. vmprof is a statistical profiler. \n\nVisual Studio comes with a built-in profiler. However, this works only for CPython and not for IronPython for which .NET profiler should be used. \n\n\n### What's the decorator approach to profiling Python code?\n\nIt's possible to decorate a function for profiling purpose. We can timestamp the entries and exits and thereby calculate time spent within the function. The advantage of this approach is that there's no dependence on any profiling tool. Overhead can be kept low. We can choose to profile only specific parts of our program and ignore core modules and third-party modules. We'll also have better control of how results will be exported. \n\n\n### What's Pyflame and why may I want to use it?\n\nPyflame is a profiler for Python that takes snapshots of the Python call stack. From these, it generates flame graphs. Pyflame is based on Linux's `ptrace`. Hence it can't be used on Windows systems. Pyflame overcomes the limitations of cProfile, which adds overhead and doesn't give a full stack trace. Pyflame also works with code not instrumented for profiling. \n\nBecause Pyflame is statistical in nature, meaning that it doesn't profile every since function call, it can also be used in production. \n\n## Milestones\n\nApr \n1992\n\nIn CPython implementation, first code commit is made for `profile` module. \n\nJun \n1994\n\nIn CPython implementation, first code commit is made for `pstats` module. \n\nFeb \n2006\n\nIn CPython implementation, first code commit is made for `cProfile` module. \n\nSep \n2008\n\nInitial code commit for `line_profiler` is done by Robert Kern. Version 2.1 of this module is released in December 2017. 
\n\nSep \n2016\n\nFirst version of Pyflame is released by the engineering team at Uber.","meta":{"title":"Profiling Python Code","href":"profiling-python-code"}} {"text":"# Transformer Neural Network Architecture\n\n## Summary\n\n\nGiven a word sequence, we recognize that some words within it are more closely related with one another than others. This gives rise to the concept of **self-attention** in which a given word \"attends to\" other words in the sequence. Essentially, attention is about representing context by giving weights to word relations. \n\nTransformer is a neural network architecture that makes use of self-attention. It replaces earlier approaches of LSTMs or CNNs that used attention between encoder and decoder. Transformer showed that a feed-forward network used with self-attention is sufficient. \n\nInfluential language models such BERT and GPT-2 are based on the transformer architecture. By 2019, transformer architecture became an active area of research and application. While initially created for NLP, it's being used in other domains where problems can be cast as sequence modelling.\n\n## Discussion\n\n### How is the transformer network better than CNNs, RNNs or LSTMs?\n\nWords in a sentence come one after another. The context of the current word is established by the words surrounding it. RNNs are suited to model such a time-sequential structure. But an RNN has trouble remembering long sequences. LSTM is an RNN variant that does better in this regard. CNN architectures WaveNet, ByteNet and ConvS2S have also been used for sequence-to-sequence learning. \n\nMoreover, RNNs and LSTMs consider only words that have gone before (although there's bidirectional LSTMs). Self-attention models the context by looking at words before and after the current word. For instance, the word \"bank\" in sentence \"I arrived at the bank after crossing the river\" doesn't refer to a financial institution. Transformer can figure out this meaning because it looks at subsequent words as well. \n\nThe sequential nature of RNNs implies that tasks can't be parallelized on GPUs and TPUs. Transformer's encoder self-attention can be parallelized. While CNNs are less sequential, complexity still grows logarithmically. It's worse for RNNs where complexity grows linearly. With transformers, the number of sequential operations is constant. \n\n\n### What's the architecture of the transformer?\n\nThe transformer of Vaswani et al. basically follows the encoder-decoder model with attention passed from encoder to decoder. Both encoder and decoder stack multiple identical layers. Each encoder layer uses self-attention to represent context. Each decoder layer also uses self-attention in two sub-layers. While the encoder's self-attention uses both left and right context, the lower sub-layer of decoder masks out the future positions while predicting the current position. \n\nIn each layer we find some common elements. Residual connections are made. These are added and normalized with connections flowing via the self-attention sub-layers. There are no recurrent networks, only a fully connected feed-forward network. \n\nAt the input, source and target sequences are represented as embeddings. These are enhanced with positional encodings. At the output, a linear layer is followed with softmax. \n\nThe transformer's encoder can work on the input sequence in parallel but the decoder is auto-regressive. Each output is influenced by previous output symbols. Output symbols are generated one at a time. 
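The stacked encoder-decoder structure described above can be sketched in a few lines of PyTorch, which ships a ready-made `nn.Transformer` module in recent releases. The sizes below (model dimension 512, 8 heads, 6 layers per stack) mirror the base configuration of Vaswani et al.; the random tensors merely stand in for embedded source and target sequences, so treat this as an illustrative sketch rather than a training setup.

```python
import torch
import torch.nn as nn

# Base-model sizes from Vaswani et al.: d_model=512, 8 heads, 6 layers per stack
model = nn.Transformer(d_model=512, nhead=8,
                       num_encoder_layers=6, num_decoder_layers=6,
                       batch_first=True)

src = torch.rand(2, 10, 512)  # 2 embedded source sequences of length 10
tgt = torch.rand(2, 7, 512)   # 2 embedded target sequences of length 7

# The decoder mask hides future positions, enforcing auto-regressive generation
tgt_mask = model.generate_square_subsequent_mask(7)

out = model(src, tgt, tgt_mask=tgt_mask)
print(out.shape)  # torch.Size([2, 7, 512])
```

Note that `nn.Transformer` implements only the encoder and decoder stacks; the input embeddings, positional encodings and the final linear-plus-softmax layer described above must be added around it.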
\n\n\n### How is self-attention computed in a transformer network?\n\nEvery word is projected on to three vectors: query, key and value. Respective weight matrices \\(W\\) to do this projection are learned during training. Suppose we're calculating the attention on a particular word. A dot-product operation of its query vector with the key vector of each word is calculated. Dot-product attention is scaled with \\(1/\\sqrt d\\_k\\) to compensate large dot-product values. The value vectors are weighted with weights from the dot product and then summed. \n\nFor better results, **multi-head attention** is used. Each head learns a different attention distribution, similar to having multiple filters in CNN. For example, if the model dimension is 512, instead of a large single attention layer, we use 8 parallel attention layers, each operating in 64 dimensions. Output from the layers are concatenated to derive the final attention. Mathematically, we have the following: \n\n$$MultiHead(Q,K,V) = Concat(head\\_1,...,head\\_h)W^O\\\\head\\_i = Attention(QW^{Q}\\_i, KW^{K}\\_i, VW^{V}\\_i)\\\\Attention(Q,K,V) = softmax(\\frac{QK^T}{\\sqrt d\\_k})V$$\n\nThe original transformer of Vaswani et al. uses self-attention within encoder and decoder, but also transfers attention from encoder to decoder as is common in traditional sequence-to-sequence models. \n\n\n### How does the transformer network capture the position of words?\n\nIn RNNs, the sequential structure accounts for position. In CNNs, positions are considered within the kernel size. In transformers, self-attention ignores the position of tokens within the sequence. To overcome this limitation, transformers explicitly add **positional encodings**. These are added to the input or output embeddings before the sum goes into the first attention layer. \n\nPositional encodings can either be learned or fixed. In the latter case, Vaswani et al. used sine and cosine functions for even and odd positions respectively. They also used different frequencies for different positions to make it easier for the model to learn the positions: \n\n$$PE\\_{(pos,2i)}=sin(pos/10000^{2i/d\\_{model}})\\\\PE\\_{(pos,2i+1)}=cos(pos/10000^{2i/d\\_{model}})$$\n\nWhile Vaswani et al. (2017) considered absolute positions, Shaw et al. (2018) looked at the distance between tokens in a sequence, that is, relative positioning. They showed that this leads to better results for machine translation with the trade-off of 7% decrease in steps per second. \n\n\n### Could you share some applications of the transformer network?\n\nIn October 2019, Google announced the use of BERT for 10% of its English language search. Search will attempt to understand queries the way users tend to ask them in a natural way. This is opposed to parsing the query as a bunch of keywords. Thus, phrases such as \"to\" or \"for someone\" are important for meaning and BERT picks up these. \n\nWe can use transformers to generate synthetic text. Starting from a small prompt, GPT-2 model is able to generate long sequences and paragraphs of text that are realistic and coherent. This text also adapts to the style of the input. \n\nFor correcting grammar, transformers provide competitive baseline performance. For sequence generation, Insertion Transformer and Levenshtein Transformer have been proposed. \n\nTransformers have been used beyond NLP, such as for image generation where self-attention is restricted to local neighbourhoods. Music Transformer applied self-attention to generate long pieces of music. 
While the original transformer used absolute positions, the music transformer used relative attention, allowing the model to create music in a consistent style. \n\n\n### Which are the well-known transformer networks?\n\n**BERT** is an encoder-only transformer. It's the first deeply bidirectional model, meaning that it uses both left and right contexts in all layers. BERT showed that as a pretrained language model it can be fine-tuned easily to obtain state-of-the-art models for many specific tasks. BERT has inspired many variants: RoBERTa, XLNet, MT-DNN, SpanBERT, VisualBERT, K-BERT, HUBERT, and more. Some variants attempt to compress the model: TinyBERT, ALERT, DistilBERT, and more. \n\nThe other competitive model is **GPT-2**. Unlike BERT, GPT-2 is not bidirectional and is a decoder-only transformer. However, the training includes both unsupervised pretraining and supervised fine-tuning. The training objective combines both of these to improve generalization and convergence. This approach of training on specific tasks is also seen in **MT-DNN**. \n\nGPT-2 is auto-regressive. Each output token is generated one by one. Once a token is generated, it's added to the input sequence. BERT is not auto-regressive but instead uses context from both sides. XLNet is auto-regressive while also using context from both sides. \n\n\n### What are some variations of the transformer network?\n\nCompared to the original transformer of Vaswani et al., we note the following variations:\n\n + **Transformer-XL**: Overcomes the limitation of fixed-length context. It makes use of segment-level recurrence and relative positional encoding.\n + **DS-Init & MAtt**: Stacking many layers is problematic due to vanishing gradients. Therefore, depth-scaled initialization and merged attention sublayer are proposed.\n + **Average Attention Network (AAN)**: With the original transformer, decoder's self-attention is slow due to its auto-regressive nature. Speed is improved by replacing self-attention with an averaging layer followed by a gating layer.\n + **Dialogue Transformer**: Conversation that has multiple overlapping topics can be picked out. Self-attention is over the dialogue sequence turns.\n + **Tensor-Product Transformer**: Uses novel TP-Attention to explicitly encode relations and applies it to math problem solving.\n + **Tree Transformer**: Puts a constraint on the encoder to follow tree structures that are more intuitive to humans. This also helps us learn grammatical structures from unlabelled data.\n + **Tensorized Transformer**: Multi-head attention is difficult to deploy in a resource-limited setting. Hence, multi-linear attention with Block-Term Tensor Decomposition (BTD) is proposed.\n\n### For a developer, what resources are out there to learn transformer networks?\n\nTo get a feel of transformers in action, you can try out Talk to Transformer, which is based on the full-sized GPT-2.\n\nHuggingFace provides implementation of many transformer architectures in both TensorFlow and PyTorch. You can also convert them to CoreML models for iOS devices. Package spaCy also interfaces to HuggingFace. \n\nTensorFlow code and pretrained models for BERT are available. There's also code for Transformer-XL, MT-DNN and GPT-2.\n\nTensorFlow has provided an implementation for machine translation. Lilian Weng's implementation of the transformer is worth studying. Samuel Lynn-Evans has shared his implementation with explanations. 
The Annotated Transformer is another useful resource to learn the concepts along with the code.\n\n## Milestones\n\n2014\n\nSutskever et al. at Google apply **sequence-to-sequence** model to the task of machine translation, that is, a sequence of words in source language is translated to a sequence of words in target language. They use an **encoder-decoder** architecture that has separate 4-layered LSTMs for encoder and decoder. The encoder produces a fixed-length context vector, which is used to initialize the decoder. The main limitation is that the context vector is unable to adequately represent long sentences. \n\n2015\n\nBahdanau et al. apply the concept of **attention** to the seq2seq model used in machine translation. This helps the decoder to \"pay attention\" to important parts of the source sentence. Encoder is a bidirectional RNN. Unlike the seq2seq model of Sutskever et al., which uses only the encoder's last hidden state, attention mechanism uses all hidden states of encoder and decoder to generate the context vector. It also aligns the input and output sequences, with alignment score parameterized by a feed-forward network. \n\nJun \n2017\n\nVaswani et al. propose the **transformer** model in which they use a seq2seq model without RNN. The transformer model relies only on **self-attention**, although they're not the first to use self-attention. Self-attention is about attending to different tokens of the sequence. \n\nJan \n2018\n\nFor multi-document summarization, Liu et al. propose a **decoder-only transformer** architecture that can attend to sequences longer than what encoder-decoder architecture is capable of. Input and output sequences are combined into single sequence and used to train the decoder. During inference, output is generated auto-regressively. They also propose variations of attention to handle longer sequences. \n\nJun \n2018\n\nOpenAI publishes **Generative Pre-trained Transformer (GPT)**. It's inspired by unsupervised pre-training and transformer architecture. The transformer is trained on large amount of data without supervision. It's then fine-tuned on smaller task-specific datasets with supervision. Pre-training involves a standard language modelling and uses Liu et al.'s decoder-only transformer. In February 2019, OpenAI announces an improved model named **GPT-2**. Compared to GPT, GPT-2 is trained on 10x the data and has 10x parameters. \n\nOct \n2018\n\nGoogle open sources **Bidirectional Encoder Representations from Transformers (BERT)**, which is a pre-trained language model. It's deeply bidirectional and unsupervised. It improves state-of-the-art in many NLP tasks. It's trained on two tasks: (a) Masked Language Model (MLM), predicting some words that are masked in a sequence; (b) Next Sentence Prediction (NSP), binary classification that predicts if the next sentence follows the current sentence. \n\nJan \n2019\n\nCombining Multi-Task Learning (MTL) and pretrained language model, Liu et al. propose **Multi-Task Deep Neural Network (MT-DNN)**. Lower layers of the architecture are shared across tasks and use BERT. Higher layers do task-specific training. They show that this approach outperforms BERT in many tasks even without fine-tuning. \n\nJan \n2019\n\nJust as AutoML has been used in computer vision, Google researchers use an evolution-based neural architecture search (NAS) to discover what they call **Evolved Transformer (ET)**. It performs better than the original transformer of Vaswani et al. 
It's seen that ET is a hybrid, combining the best of self-attention and wide convolution.","meta":{"title":"Transformer Neural Network Architecture","href":"transformer-neural-network-architecture"}} {"text":"# Microservices Observability\n\n## Summary\n\n\nWith traditional monolithic applications, troubleshooting problems was easier compared to the more recent microservices architecture. Log files captured information for debugging and analysis. Monitoring highlighted problems via static dashboards and alerts. The application itself was generally well understood. \n\nWith microservices, there are more moving parts. The system is dynamic and heterogeneous. Its parts are loosely coupled and transient. The system could fail in many different and unpredictable ways. It's necessary to monitor the application, the network and the infrastructure. It's in this context that observability (aka *o11y*) becomes important. \n\nCan we understand the inner workings of the system? Can we correlate the outputs with the inputs? Can we explain unexpected behaviour? Can data help us achieve business goals? A system that answers these questions effectively is said to be **observable**.\n\n## Discussion\n\n### How does observability differ from monitoring?\n\nIT systems have been monitoring performance since the early 2000s. They sample and aggregate important data points such as system downtime, response time and memory usage. These are compared against expectations. Anomalies and violations are thrown up as alerts for support teams. Monitoring focuses on overall system health and business KPIs. \n\nObservability goes deeper with the aim of capturing system behaviour in greater detail and context. An application composed of many interacting microservices can fail in unpredictable ways. So it's not sufficient to focus on specific failures or scenarios. While monitoring deals with known unknowns, observability deals with unknown unknowns. Data analysis is an essential aspect of observability. Failures are traced back to root causes. To put it simply, \n\n> Monitoring tells you **when** something is wrong. Observability lets you ask **why**.\n\nObservability doesn't replace monitoring. Monitoring is essential to observability. We may even view observability as a superset of monitoring. \n\n\n### What are the main benefits of observability?\n\nObservability leads to better visibility into the system dynamics. We can identify bottlenecks and optimize workflows. It creates a culture of innovation, improves operational efficiency and enables data-driven business decisions. DevSecOps teams can use the insights from observability to build more secure applications. \n\nBecause observability helps us understand the system better, engineers can more confidently update and release software. Release cycles can be shorter. Quality can be improved. Problems that we didn't know existed in the system (\"unknown unknowns\") become visible. We can find and fix issues earlier in the software development phase. When combined with AIOps, issues can be solved automatically without human intervention. \n\nFor any organization moving from monoliths to microservices, observability helps in this transition. It can help them better manage their cloud-native applications. \n\n\n### What are the key pillars of observability?\n\nObservability has three key pillars: \n\n + **Logs**: Capture individual events. Logs are granular, timestamped and immutable. When things go wrong, logs are invaluable in determining the root cause. 
A good practice is to log in structured formats such as JSON. Structured logging enables easy visualization, search and analysis.\n + **Metrics**: Data aggregated over time. Monitoring solutions rely on metrics. Uptime, CPU utilization, system load, memory usage, throughput, response time and error rate are some examples of metrics.\n + **Traces**: Sequence of calls triggered by an event. In the world of microservices, a failure can be traced back to an offending microservice or an API call.**Events** and **exceptions** can be seen as special cases of logs. Some identify **dependencies** as another pillar. This captures how components depend on one another. \n\nHistorically (in 2016), engineers at Twitter identified these four pillars: monitoring, alerting/visualization, distributed systems tracing infrastructure, and log aggregation/analytics. \n\n\n### What's the typical pipeline for microservices observability?\n\nA logger produces logs within each service instance. A centralized collector collects the logs. Data is then pre-processed and stored to ease later analysis. Pre-processing might include cleaning, formatting, sampling and aggregating (for metrics). Analysts may run ad hoc queries to explore and visualize data, and to solve problems. \n\nEach logger might be part of the main service or deployed as a sidecar container within the same Kubernetes pod of the main container. More generally, an agent pull logs from services or log files and then pushes them to the collector. Alternatively, log collection is agent-less, that is, services themselves push logs to the collector. The latter is easier to deploy but requires developers to integrate logging functionality into the service code. \n\nDifferent types of analysis are possible: timeline analysis, service dependency analysis, aggregation analysis, root cause analysis and anomaly detection. Analysis is enabled by the use of statistics, rules and visualization. \n\n\n### What design patterns are available for microservices observability?\n\nAmong the design patterns are: \n\n + **Application Metrics**: Gives a complete picture of the application. Includes infrastructure, application and end user metrics.\n + **Audit Logging**: Record all user or service account actions. Needed for regulatory compliance. Event sourcing is one approach to capturing audit logs. Likewise, changes to deployment ought to be logged.\n + **Distributed Tracing**: Helps with performance profiling and root cause analysis. A sequence of API calls triggered by a user request is a trace. A trace is composed of spans. Each span records a unit of work within the trace. For example, user request, API gateway processing, service processing and database access are all spans within a trace. Distributed tracing requires context or correlation IDs passed from one service to the next.\n + **Health Check**: An unhealthy service is one that's running but has problems. Health check API can assess current system health. It can trigger alerts, recovery, service restarts, etc.\n + **Exception Tracking**: Includes error messages, error codes and stack traces.\n + **Log Aggregation**: Logs from various microservices are sent to a centralized log server.\n\n### As a developer, should I care about observability?\n\nObservability is not just the concern of operations. Observability must be considered at the design stage itself. System should be designed to be testable. It should be incrementally deployable with support for rollback. System should collect detailed runtime data. 
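As a small illustration of the structured-logging practice mentioned above, the sketch below emits one JSON object per log line using Python's standard `logging` module. The service name, field names and values are made up for the example; real services would more likely use an established logging library or an OpenTelemetry SDK than a hand-rolled formatter.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object."""
    def format(self, record):
        payload = {
            "ts": self.formatTime(record),
            "level": record.levelname,
            "service": "orders-service",   # illustrative service name
            "message": record.getMessage(),
        }
        # High-cardinality context (hosts, status codes, trace IDs) rides along
        payload.update(getattr(record, "ctx", {}))
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("orders")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.error("upstream call failed",
             extra={"ctx": {"status": 502, "host": "foobar.com", "trace_id": "abc123"}})
```

Each field attached here (status, host, trace ID) is exactly the kind of high-cardinality data discussed next.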
\n\nLogs should be designed to support data with high cardinality and high dimensionality. This makes it possible to query the log data effectively to bring out patterns and root causes. A useful query such as \"list all 502 errors in the last 20 minutes from host foobar.com\" becomes possible. \n\nDevelopers must acknowledge that complex systems are unpredictable and failures are inevitable. They should proactively instrument their code, rather than rely on automatic instrumentation. Even controlled testing in production may be permitted. Developers should consider the operational semantics and dependencies of their software. For example, developers should understand how services start/shutdown, their static/dynamic configurations, service discovery, concurrency, etc. \n\n\n### What are the best practices when implementing observability?\n\nMonitor essential metrics such as latency, traffic, error rate and saturation. Collect metrics along with contextual metadata. Configure and prioritize alerts correctly. For example, an alert for every 500 status code is a bad thing. Create specialized dashboard rather than overwhelming everyone with all the data. \n\nRules that detect incidents should be simple, predictable, reliable and actionable. Monitoring systems can become complex over time. They could end up collecting lots of metrics that are never used or complex rules that are never triggered. Data should lead to actionable insights. Data should create effective feedback loops. \n\nInvesting in observability tools matters, but tools alone will not solve problems. Teams must come with a mindset towards making their systems observable, from design to deployment. \n\nAn observability framework can serve as a reference for implementations. To be effective, observability must be guided by business goals such as Service Level Objectives (SLOs). Look at observability from the perspective of end users (response times, failed purchases) and backend applications (slow queries, container restarts). \n\n\n### What tools are available for observability?\n\nMany frameworks (eg. ASP.NET and Spring Boot) have libraries or plugins that implement OpenTelemetry. Pick one that supports W3C Trace Context. Select tools that support automation and are easy to maintain. \n\nSome have proposed observability stacks that combine analysis, logging and visualization: ELK (Elasticsearch, Logstash, and Kibana), EFK (Elasticsearch, Fluentd, and Kibana), or PLG (Promtail, Loki, Grafana). \n\nFor logging from microservices we have Splunk, Loggly, Sumologic, Logstash, Beats, Fluent Bit, and more.\n\nTime-series data can be stored in Atlas, InfluxDB and Prometheus. Databases (such as Oracle) provide exporters and make data accessible to the observability stack. Prometheus helps with monitoring containers. Jaeger offers a visual representation of traces and dependencies. Dashboards can be created with Grafana. \n\nA few others include Chronosphere, Cisco AppDynamics, Datadog, Dynatrace, Honeycomb, IBM Instana, Lumigo, New Relic, Sensu, Sentry, SolarWinds, Serverless360, and Zipkin. \n\nCloud providers offer managed services for observability. For example, AWS has managed services for Grafana, Prometheus and OpenTelemetry (Java SDK + collector implementation). It also offers its proprietary AWS X-Ray and Amazon CloudWatch. \n\n\n### What are some challenges with observability?\n\nA New Relic survey from 2022 showed that observability is far from mature. Many problems are still detected manually. Organizations use too many tools. 
Some tools require developers to instrument their code with calls to tracing libraries. Some tools can monitor microservices but not monoliths. Some tools are SaaS only. Some tools sample data while others don't. \n\nLarge volumes of data can be overwhelming. Many tools may not scale well when data has high cardinality. Reducing the cardinality will make the system less observable. Designers must therefore consider this tradeoff until better tools become available. \n\nWhere service meshes are used, observability is harder due to increased variety of services, data volume and complex request paths. Often systems collect all metrics and later determine the relevant ones to analyze. It would be better to determine upfront the most relevant metrics and collect only those. ViperProbe can do this. \n\n## Milestones\n\n1960\n\nRudolf E. Kálmán coins the term **observability** in the domain of control theory. It's \"a measure of how well internal states of a system can be inferred from knowledge of its external outputs.\" Five decades later, this term gets repurposed towards distributed software systems. In the latter case, observability is the ability to understand and explain any system state, no matter how unusual or unexpected it may be. \n\n1988\n\nAt IETF, RFC 1067 titled *A Simple Network Management Protocol* is published. In subsequent years, this becomes the essential protocol for monitoring network infrastructure via **metrics**. Time-series databases and dashboards are later born out of metrics. But engineers reach for low-level tools (`strace`, `tcpdump`) when metrics fall short for debugging complex systems. \n\n1999\n\nEngineers at Sun Microsystems use the term \"observability\" as something that enables application performance monitoring and capacity planning. At this time, there are no microservices. Hence this definition of observability is rather narrow and closer to the modern use of the term \"monitoring\". \n\n2011\n\nThe term **microservices** gets discussed at a workshop of software architects near Venice, to describe a common architectural style that several of them were exploring at the time. By 2015, microservices become mainstream as per a survey by NGINX. \n\n2013\n\nTwitter publishes on its engineering blog an article titled *Observability at Twitter*. The article describes the elements of observability as practised at Twitter as they moved from a monolithic architecture to a microservices architecture. The article uses the terms \"observability team\" and \"observability stack\". Beyond just traditional monitoring, the stack includes time-series database, dashboards and ability to run ad hoc queries. This extension of traditional monitoring has been called **whitebox monitoring**. \n\n2015\n\nBy mid-2010s, the term \"observability\" becomes more common. By 2018, it's commonly used in conferences and blog posts. \n\n2019\n\n**OpenTelemetry** is formed by merging two earlier projects named OpenTracing and OpenCensus. It's an incubation project at CNCF. It collects logs, traces and metrics from services, and sends them to analysis tools. It integrates easily with popular frameworks and libraries. \n\nNov \n2021\n\nW3C publishes *Trace Context* as a W3C Recommendation. This standardizes new HTTP header names and formats that allow context information to be propagated across HTTP clients and servers in any distributed system. It's worth noting that OpenTelemetry supports W3C Trace Context format. 
\n\n2023\n\nGigaOm publishes a research finding on the current state of the market in the cloud observability space. The study considers supported deployment models: SaaS, on-premise or hybrid. It considers many features: dashboards, user interaction performance monitoring, predictive analysis, microservices detection, etc. It's likely that no single platform is the best fit for every use case.","meta":{"title":"Microservices Observability","href":"microservices-observability"}} {"text":"# End-to-End Principle\n\n## Summary\n\n\nWhen a function has to be supported in a networked system, the designer often asks if it should be implemented at the end systems; or should it be implemented within the communication subsystem that interconnects all the end systems. The end-to-end argument or principle states that it's proper to implement the function in the end systems. The communication system itself may provide a partial implementation but only as a performance enhancement. \n\nThe architecture and growth of the Internet was shaped by the end-to-end principle. It allowed us to keep the Internet simple and add features quickly to end systems. The principle enabled innovation. \n\nMore recently, the principle has been criticized or violated to support features such as network caching, differentiated services, or deep packet inspection.\n\n## Discussion\n\n### Could you explain the end-to-end principle?\n\nSuppose we need to transfer a file from computer A to computer B across a network. Let's assume that the network guarantees correct delivery of the file by way of checksums, retransmissions, and deduplication of packets. Thus, our hypothetical network is full of features but also complex. The problem is that despite such a smart network, the file transfer can still go wrong. The file could get corrupted on B during transfer from buffer to the file system. This implies that end computers still have to do the final checks even if the network has already done them. \n\nThis is the essence of the end-to-end (E2E) argument. A communication system may do some things for performance reasons but it can't achieve correctness. For reasons of efficiency and performance, the communication system may implement some features at minimal cost but should avoid trying to achieve high levels of reliability. Reliability and correctness must be left to end systems. \n\nIn addition, applications may not need features implemented in the communication system. An \"open\" system would give more control to end systems. \n\n\n### How has the end-to-end principle benefited the Internet?\n\nComplexity impedes scaling due to higher OPEX and CAPEX. Thus, the end-to-end principle leads to the **simplicity principle**. IP layer is simple, giving Internet it's hourglass-shaped architecture. \n\nAt the network layer we have IP as the dominant protocol. At higher layers, we have many protocols for supporting diverse applications. At lower layers, we have many protocols suited to different physical networks. IP can be said to \"hide the detailed differences among these various technologies, and present a uniform service interface to the applications above\". \n\nIP itself is simple and general. It's supported by all routers within the network. Application-level functions are kept at endpoints. Application developers could therefore innovate without any special support from the network, with some calling it the **generative Internet**. The principle has been credited with making Internet a success. 
\n\nResearch has also shown that future architectures for the Internet are likely to evolve into the hourglass shape. \n\n\n### Does the end-to-end principle prohibit the network from maintaining state?\n\nEnd-to-end applications can survive partial network failures. This is because the network maintains only coarse-grained states, while endpoints maintain the main states. State can be destroyed only when the endpoint itself is destroyed, called **fate sharing**. Fate of endpoints doesn't depend on the network. \n\nRouting, QoS guarantees, and header compression are some examples where the network may maintain state. However, this state is self-healing. It can be worked out even if network topology or activity changes. State maintained within the network must be minimal. Loss of this state should at most result in temporary loss of service. \n\nState maintained in the network may be called *soft state* while that maintain at endpoints, and required for the proper functioning of the endpoints, may be called *hard state*. \n\n\n### Besides the Internet, where else has the end-to-end principle been applied?\n\nOn the Internet, end-to-end principle has been applied to reliable delivery, deduplication, in-order delivery, reputation maintenance, security, and fault tolerance. For security, both authenticity and encryption of messages are best done at endpoints. Doing these within the network will not only compromise security but also complicate key management. \n\nIt's application to file transfer is well known. Checksum should be validated only after successful storage to disk. Another example is the EtherType field in Ethernet frame. Ethernet doesn't interpret this field. To do so would mean that all higher layer protocols would pay the price for the special few. \n\nIn computer architecture, RISC favours simple instructions over the complex ones of CISC. In CISC, a designer may include complex instructions but it's hard for her to anticipate client requirements. Clients may end up with their own specific implementations. \n\nA case has been made to apply end-to-end principle for data commons, that is, sharing and organizing data for a discipline. Applications can decide how to import, export or analyze data. The core system would only define global identifiers, basic metadata, authenticated access and a configurable data model. \n\n\n### What are the essential aspects of end-to-end connectivity?\n\nRFC 2775 (published in 2000) mentions three aspects: \n\n + **E2E Argument**: This is as described by Saltzer et al. in 1981, and what is now called the end-to-end principle.\n + **E2E Performance**: This concerns both the network and the end systems. Research in this area has suggested some improvements to TCP plus optimized queuing and discard mechanisms in routers. However, this won't help other transport protocols that don't behave like TCP in response to congestion.\n + **E2E Address Transparency**: A single logical address space was deemed adequate for the early Internet of the 1970s. Packets could flow end to end unaltered and without change of source or destination addresses. RFC 2101 of 1997 analyzed this aspect and concluded that address transparency is no longer maintained in present day Internet. An example of this is Network Address Translation (NAT). 
Applications that assume address transparency are likely to fail unpredictably.\n\n### Are there instances where the end-to-end principle has been violated?\n\nConsider a client-server architecture involving a database write operation at the server. A \"smart\" server can return an immediate confirmation to the client even though it hasn't completed the database write operation. If the write fails, the server has to do retries, effectively taking up responsibilities of the client. It gets worse if the server itself fails, since client thinks that the database write actually happened. \n\nBufferbloat was an interesting problem the Internet had in the late 2010s. Because routers could afford larger buffers, they started accepting higher load. They didn't drop packets until later when their larger buffers started filling up. Therefore, endpoints didn't backoff early enough. This resulted in a slower internet. \n\nHTTP caching, link-level encryption, SOAP 2.0, Network Address Translation (NAT) and firewalls are other counter-examples. \n\nExamples where the network gets involved are traffic management, capacity reservation, packet segmentation/reassembly, and multicast routing. But these shouldn't be seen as violating the principle. Likewise, cloud computing doesn't violate the principle. Cloud infrastructure is not part of the communication system. It's actually an endpoint. \n\n\n### How is end-to-end principle relevant to the net neutrality debate?\n\nNet neutrality is about creating a level-playing field for everyone, big or small. It ensures that big companies can't pay for preferential treatment of their content. The network sees and treats all content alike. Without net neutrality, a few companies that own or control online platforms or communication infrastructure become all too powerful. Power therefore moves from the end consumers to the network controlled by a few. This goes against the end-to-end principle. \n\nTim Wu coined the termed \"network neutrality\" back in 2002, when he noticed that broadband providers were blocking certain types of services. Network providers might promise services such as blocking spam, viruses or even advertisements. Most users would rather do these on their end systems rather than lose control. \n\nDeep Packet Inspection (DPI) is used for QoS, security and even surveillance. It's at odds with net neutrality. With DPI, intermediate nodes look into packet headers and payload. Yet, the end-to-end principle doesn't actually prohibit this. \n\n\n### What are the common criticisms of the end-to-end principle?\n\nOne criticism is that the original paper by Saltzer et al. never properly understood the true nature of packet switching, which is stochastic. The paper also confused moving packets through the network (statistical) with non-functional aspects such as confidentiality (computational). It would have been better to model timely arrival of packets to enable successful computation. \n\nThe end-to-end principle never gave end users freedom. Network infrastructure has always been built and controlled for commercial reasons by those who had the means. Therefore, to protect user interests, discussions must involve everyone in the industry. \n\nBack in 2001, researchers noted new applications and scenarios that end-to-end principle didn't address very well: untrusted endpoints, video streaming, ISP service differentiation, third-parties, and difficulty in configuring home network devices. All of these could benefit with some intelligence in the network. 
\n\nIn Service-Oriented Architecture (SOA), implementing stuff end-to-end would be too costly. A hop-by-hop approach would be better. Ultimately, the principle shouldn't be applied blindly. In the early days of the Internet, when bandwidth was scarce, HTTP caching made sense even when it violated the end-to-end principle, even when it made HTTP a considerably more complex protocol. \n\n## Milestones\n\n1950\n\nIn the 1950s, for reading and writing files to magnetic tapes, engineers attempt to design a reliable tape subsystem. They fail to create such a system. Ultimately, applications take care of checks and recovery. An example of this from the 1970s is the Multics file system. Although there's low-level error detection and correction, they don't replace high-level checks. \n\n1973\n\nFrenchman Louis Pouzin designs and develops CYCLADES, a packet switching network. It becomes the first network in which hosts are responsible for reliable delivery of packets. Networks transport datagrams without delivery guarantees. Even the term *datagram* is coined by Pouzin. CYCLADES inspires the first version TCP. Meanwhile, D.K. Branstad makes the end-to-end argument with reference to encryption. \n\n1978\n\nWhat was originally TCP, is split into two parts: TCP and IP. Thus, layered architecture is applied and the functions of each layer becomes more well defined. On January 1st, 1983, TCP/IP becomes the standard protocol for ARPAnet that by now connects 500 sites. \n\nApr \n1981\n\nJ.H. Saltzer, D.P. Reed and D.D. Clark at the MIT Laboratory for Computer Science present a conference paper titled *End-to-end Arguments in System Design*. Given a distributed system, the paper gives guidance on where to place protocol functions. For example, end systems should perform recovery, encryption and deduplication. Low-level parts of the network could support them only as performance enhancements. Phil Karn, a well-known Internet contributor, comments years later that this is \"the most important network paper ever written\". \n\n1996\n\nThe IETF publishes RFC 1958 titled *Architectural Principles of the Internet*, with reference to the end-to-end principle of Saltzer et al. In 2002, this RFC is updated by RFC 3439. Other IETF documents relevant to this discussion are RFC 2775 (2000), and RFC 3724 (2004). \n\nMay \n1997\n\nDavid S. Isenberg, an employee of AT&T, writes an essay titled *The Rise of the Stupid Network*. He notes that telephone networks were built on the assumption of scarce bandwidth, circuit-switching, and voice-dominated calls. This has led to the creation of **Intelligent Network (IN)**, where the network took on more features. Given the rise of the Internet, Isenberg argues that the time has come for telephone networks to become stupid and allow endpoints to do intelligent things. Tell networks, \"Deliver the Bits, Stupid\".","meta":{"title":"End-to-End Principle","href":"end-to-end-principle"}} {"text":"# Redis Streams\n\n## Summary\n\n\nRedis has data types that could be used for events or message sequences but with different tradeoffs. Sorted sets are memory hungry. Clients can't block for new messages. It's also not a good choice for time series data since entries can be moved around. Lists don't offer fan-out: a message is delivered to a single client. List entries don't have fixed identifiers. For 1-to-n workloads, there's Pub/Sub but this is a \"fire-and-forget\" mechanism. Sometimes we wish to keep history, make range queries, or re-fetch messages after a reconnection. 
Pub/Sub lacks these properties. \n\nRedis Streams addresses these limitations. The Stream data type can be seen as similar to logging, except that Stream is an abstraction that's more performant due to logical offsets. It's built using radix trees and listpacks, making it space efficient while also permitting random access by IDs.\n\n## Discussion\n\n### What are some use cases for Redis Streams?\n\nRedis Streams is useful for building chat systems, message brokers, queuing systems, event sourcing, etc. Any system that needs to implement unified logging can use Streams. Queuing apps such as Celery and Sidekiq could use Streams. Slack-style chat apps with history can use Streams. \n\nFor IoT applications, Streams can run on end devices. This is essentially time-series data that Streams timestamps for sequential ordering. Each IoT device will store data temporarily and asynchronously push it to the cloud via Streams. \n\nWhile we could use Pub/Sub along with lists and hashes to persist data, Stream is a better data type that's designed to be more performant. Also, if we use Pub/Sub and the Redis server is restarted, then all clients have to resubscribe to the channel. \n\nSince Streams supports blocking, clients need not poll for new data. Blocking enables real-time applications, that is, clients can act on new messages as soon as possible. \n\n\n### Which are the new commands introduced by Redis Streams?\n\nAll commands of Redis Streams are documented online. We briefly mention them: \n\n + **Adding**: `XADD` is the only command for adding data to a stream. Each entry has a unique ID that enables ordering.\n + **Reading**: `XREAD` and `XRANGE` read items in the order determined by the IDs. `XREVRANGE` returns items in reverse order. `XREAD` can read from multiple streams and can be called in a blocking manner.\n + **Deleting**: `XDEL` and `XTRIM` can remove data from the stream.\n + **Grouping**: `XGROUP` is for managing consumer groups. `XREADGROUP` is a special version of `XREAD` with support for consumer groups. `XACK`, `XCLAIM` and `XPENDING` are other commands associated with consumer groups.\n + **Information**: `XINFO` shows details of streams and consumer groups. `XLEN` gives the number of entries in a stream.\n\n### What are the main features of Redis Streams?\n\nStreams is a first-class citizen of Redis. It benefits from the usual Redis capabilities of persistency, replication and clustering. It's stored in-memory and under a single key. \n\nThe main features of Streams are: \n\n + **Asynchronous**: Producers and consumers need not be simultaneously connected to the stream. Consumers can subscribe to streams (push) or read periodically (pull).\n + **Blocking**: Consumers need not keep polling for new messages.\n + **Capped Streams**: Streams can be truncated, keeping only the N most recent messages.\n + **At-Least-Once Delivery**: This makes the system robust.\n + **Counter**: Every pending message has a counter of delivery attempts. We can use this for dead letter queuing.\n + **Deletion**: While events and logs don't usually have deletion as a feature, Streams supports this efficiently. Deletion allows us to address privacy or regulatory concerns.\n + **Persistent**: Unlike Pub/Sub, messages are persistent. Since history is saved, a consumer can look at previous messages.\n + **Lookback Queries**: This helps consumers analyse past data, such as obtaining temperature readings in a particular 10-second window.\n + **Scale-Out Options**: Via consumer groups, we can easily scale out. 
Consumers can share the load of processing a fast-incoming data stream.\n\n### Could you explain consumer groups in Redis?\n\nA consumer group allows consumers of that group to share the task of consuming messages from a stream. Thus, a message in a stream can be consumed by only one consumer in that consumer group. This relieves the burden on a consumer to process all messages. \n\nCommand `XGROUP` creates a consumer group. A consumer is added to a group the first time it calls `XREADGROUP`. A consumer always has to identify itself with a unique consumer name. \n\nA stream can have multiple consumer groups. Each consumer group tracks the ID of the last consumed message. This ID is shared by all consumers of the group. Once a consumer reads a message, it's ID is added to a *Pending Entries List (PEL)*. The consumer must *acknowledge* that it has processed the message, using `XACK` command. Once acknowledged, the pending list is updated. Another consumer can *claim* a pending message using `XCLAIM` command and begin processing it. This helps in recovering from failures. However, a consumer can choose to use the `NOACK` subcommand of `XREADGROUP` if high reliability is not important. \n\n\n### Could you share more details about IDs in Redis Streams?\n\nEntries within a stream are ordered using IDs. Each ID has two parts separated by hyphen: UNIX millisecond timestamp followed by sequence number to distinguish entries added at the same millisecond time. Each part is a 64-bit number. For example, `1526919030474-55` is a valid ID. \n\nIDs are autogenerated when `XADD` command is called. However, a client can specify its own ID but it should be an ID greater than all other IDs in the stream. \n\nIncomplete IDs are when the second part is omitted. With `XRANGE`, Redis will fill in a suitable second part for us. With `XREAD`, the second part is always `-0`. \n\nSome IDs are special:\n\n + `$`: Used with `XREAD` to block for new messages, ignoring messages already in the stream.\n + `-` & `+`: Used with `XRANGE`, to specify minimum and maximum IDs possible within the stream. For example, the following command will return every entry in the stream: `XRANGE mystream - +`\n + `>`: Used with `XREADGROUP`, to get new messages (never delivered to other clients). If this command uses any other ID, it has the effect of returning pending entries of that client.\n\n### In what technical aspects does Redis Streams differ from other Redis data types?\n\nUnlike other Redis blocking commands that specify timeouts in seconds, commands `XREAD` and `XREADGROUP` specify timeouts in milliseconds. Another difference is that when blocking on list pop operations, the first client will be served when new data arrives. With Stream `XREAD` command, every client blocking on the stream will get the new data. \n\nWhen an aggregate data type is emptied, its key is automatically destroyed. This is not the case with Stream data type. The reason for this is to preserve the state associated with consumer groups. Stream is not deleted even if there are no consumer groups but this behaviour may be changed in future versions. \n\n\n### How does Redis Streams compare against Kafka?\n\nApache Kafka is a well-known alternative for Redis Streams. In fact, some features of Streams such as consumer groups have been inspired by Kafka. \n\nHowever, Kafka is said to be difficult to configure and expensive to operate on typical public clouds. Streams is therefore a better option for small, inexpensive apps. 
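Tying together the consumer-group commands described above, here is a minimal sketch using the `redis-py` client (one of several clients with Streams support); the stream, group and consumer names are invented for the example.

```python
import redis

r = redis.Redis(decode_responses=True)

# Producer: XADD auto-generates an ID of the form <ms-timestamp>-<sequence>
r.xadd("sensor:readings", {"temp": "19.7"})

# Create the consumer group once; ignore the error if it already exists
try:
    r.xgroup_create("sensor:readings", "analytics", id="0", mkstream=True)
except redis.ResponseError:
    pass

# Consumer: read only new messages (ID '>') on behalf of the group, then acknowledge
entries = r.xreadgroup("analytics", "consumer-1", {"sensor:readings": ">"},
                       count=10, block=5000)
for stream, messages in entries:
    for msg_id, fields in messages:
        print(msg_id, fields)                           # process the message
        r.xack("sensor:readings", "analytics", msg_id)  # remove it from the PEL
```

If `consumer-1` dies before acknowledging, the message stays in the Pending Entries List, where another consumer can find it with `XPENDING` and take it over with `XCLAIM`.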
\n\n\n### Could you share some performance numbers on Redis Streams?\n\nIn one test on a two-core machine with multiple producers and consumers, messages were generated at 10K per second. With `COUNT 10000` given to `XREADGROUP` command, every iteration processed 10K messages. It was seen that 99.9% requests had a latency of less than 2 ms. Real-world performance is expected to be better than this. \n\nWhen compared against traditional Pub/Sub messaging, Streams gives 100x better throughput. It's able to handle more than 1 million operations per second. If Pub/Sub messages are persisted to network storage, latency is about 5 ms. Streams has less than 1 ms latency. \n\n\n### Could you share some developer tips for using Redis Streams?\n\nThere are dozens of Redis clients in various languages. Many of these have support for Streams.\n\nUse `XREAD` for 1-to-1 or 1-to-n messaging. Use `XRANGE` for windowing-based stream processing. Within a consumer group, if a client fails temporarily, it can reread messages from a specific ID. For permanent failures, other clients can claim pending messages. \n\nFor real-time streaming analytics, one suggestion is to pair up Redis Streams with Apache Spark. The latter has the feature Structured Streaming that pairs up nicely with Streams. To scale out, multiple Spark jobs can belong to a single consumer group. Since Streams is persistent, even if a Spark job restarts, it won't miss any data since it will start consuming from where it left off. \n\n## Milestones\n\nSep \n2017\n\nSalvatore Sanfilippo, creator of Redis, gives a demo of Redis Streams and explains the API. In October, he blogs about it. He explains that the idea occurred much earlier. He tinkered with implementing a generalization of sorted sets and lists but was not happy with the results. When Redis 4.0 came out with support for modules, Timothy Downs created a data type for logging transactions. Sanfilippo used this as an inspiration to create Redis Streams. \n\nMay \n2018\n\nThe first release candidate RC1 of Redis 5.0 is released. This supports **Stream** data type. \n\nJul \n2018\n\nBeta version of Redis Enterprise Software (RS) 5.3 is released. This is based on Redis 5.0 RC3 with support for Stream data type. \n\nOct \n2018\n\nRedis 5.0.0 is released. \n\nMay \n2019\n\nAt the Redis Conference 2019, Dan Pipe-Mazo talks about Atom, a microservices SDK powered by Redis Streams. Microservices interact with one another using Streams.","meta":{"title":"Redis Streams","href":"redis-streams"}} {"text":"# Machine Learning Model\n\n## Summary\n\n\nIn traditional programming, a function or program reads a set of input data, processes them and outputs the results. Machine Learning (ML) takes a different approach. Lots of input data and corresponding outputs are given. ML employs an algorithm to learn from this dataset and outputs a \"function\". This function or program is what we call an **ML Model**. \n\nEssentially, the model encapsulates a relationship or pattern that maps the input to the output. The model learns this automatically without being explicitly programmed with fixed rules or patterns. The model can then be given unseen data for which it predicts the output. \n\nML models come in different shapes and formats. Model metadata and evaluation metrics can help compare different models.\n\n## Discussion\n\n### Could you explain ML models with some examples?\n\nConsider a function that reads Celsius value and outputs Fahrenheit value. This implements a simple mathematical formula. 
In ML, once the model is trained on the dataset, the formula is implicit in the model. It can read new Celsius values and give correct Fahrenheit values. \n\nLet's say we're trying to estimate house prices based on attributes. It may be that houses with more than two bedrooms fall into a higher price bracket. Areas 8500 sq.ft. and 11500 sq.ft are important thresholds at which prices tend to jump. Rather than encode these rules into a function, we can build a ML model to learn these rules implicitly. \n\nIn another dataset, there are three species of irises. Each iris sample has four attributes: sepal length/width, petal length/width. An ML model can be trained to recognize three distinct clusters based on these attributes. All flowers belonging to a cluster are of the same species. \n\nIn all these examples, ML saves us the trouble of writing functions to predict the output. Instead, we train an ML model to implicitly learn the function.\n\n\n### What are the essentials that help an ML model learn?\n\nThere are many types (aka shapes/structures/architectures) of ML models. Typically, this **structure** is not selected automatically. The data scientist pre-selects the structure. Given data, the model learns within the confines of the chosen structure. We may say that the model is fine-tuning the parameters of its structure as it sees more and more data. \n\nThe model learns in iterations. Initially, it will make poor predictions, that is, predicted output deviate from actual output. As it sees more data, it gets better. Prediction error is quantified by a **cost/loss function**. Every model needs such a function to know how well it's learning and when to stop learning. \n\nThe next essential aspect of model training is the **optimizer**. It tells the model how to adjust its parameters with each iteration. Essentially, the optimizer attempts to minimize the loss function. \n\nIf results are poor, the data scientist may modify or even select a different structure. She may pre-process the input differently or focus on certain aspects of the input, called features. These decisions could be based on experience or analysis of wrong predictions.\n\n\n### What possible structures, loss functions and optimizers are available to train an ML model?\n\nClassical ML offers many possible model structures. For example, Scikit-Learn has model structures for regression, classification and clustering problems. Some of these include linear regression, logistic regression, Support Vector Machine (SVM), Stochastic Gradient Descent (SGD), nearest neighbour, Guassian process, Naive Bayes, decision tree, ensemble methods, k-Means, and more.\n\nFor building neural networks, many architectures are possible: Feed-Forward Neural Network (FFNN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Gated Recurrent Unit (GRU), Long Short Term Memory (LSTM), Autoencoder, Attention Network, and many more. In code, these can be built using building blocks such as convolution, pooling, padding, normalization, dropout, linear transforms, non-linear activations, and more. \n\nTensorFlow supports many loss functions: BinaryCrossentropy, CategoricalCrossentropy, CosineSimilarity, KLDivergence, MeanAbsoluteError, MeanSquaredError, Poisson, SquaredHinge, and more. Among the optimizers are Adadelta, Adagrad, Adam, Adamax, Ftrl, Nadam, RMSprop, and SGD. 
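To make the structure, loss function and optimizer concrete, here is a small Keras sketch for the Celsius-to-Fahrenheit example mentioned earlier. The data points, learning rate and epoch count are illustrative choices, not prescriptions.

```python
import numpy as np
import tensorflow as tf

# A few Celsius inputs and their known Fahrenheit outputs
celsius = np.array([[-40], [-10], [0], [8], [15], [22], [38]], dtype=float)
fahrenheit = np.array([[-40], [14], [32], [46.4], [59], [71.6], [100.4]], dtype=float)

# Structure: a single dense unit, i.e. a linear model y = w*x + b
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(units=1),
])

# Loss function and optimizer picked from the lists above
model.compile(loss=tf.keras.losses.MeanSquaredError(),
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.1))

model.fit(celsius, fahrenheit, epochs=500, verbose=0)

print(model.predict(np.array([[100.0]])))  # approaches 212, since F = 1.8*C + 32
```

After training, the layer's weight and bias approximate 1.8 and 32, which is the formula the model has learned implicitly.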
\n\n\n### What exactly is saved in an ML model?\n\nML frameworks typically support different ways to save the model: \n\n + **Only Weights**: Weights or parameters represent the model's current state. During training, we may wish to save checkpoints. A checkpoint is a snapshot of the model's current state. A checkpoint includes model weights, optimizer state, current epoch and training loss. For inference, we can create a fresh model and load the weights of a fully trained model.\n + **Only Architecture**: Specifies the model's structure. If it's a neural network, this would be details of each layer and how they're connected. Data scientists can share model architecture this way, with each one training the model to suit their needs.\n + **Complete Model**: This includes model architecture, the weights, optimizer state, and a set of losses and metrics. In PyTorch, this is less flexible since serialized data is bound to specific classes and directory structure.In Keras, when saving only weights or the complete model, `*.tf` and `*.h5` file formats are applicable. YAML or JSON can be used to save only the architecture. \n\n\n### Which are the formats in which ML models are saved?\n\n**Open Neural Network Exchange (ONNX)** is open format that enables interoperability. A model in ONNX can be used with various frameworks, tools, runtimes and compilers. ONNX also makes it easier to access hardware optimizations. \n\nA number of ML frameworks are out there, each saving models in its own format. TensorFlow saves models as protocol buffer files with `*.pb` extension. PyTorch saves models with `*.pt` extension. Keras saves in HDF5 format with `*.h5` extension. An older XML-based format supported by Scikit-Learn is Predictive Model Markup Language (PMML). SparkML uses MLeap format and files are packaged into a `*.zip` file. Apple's Core ML framework uses `*.mlmodel` file format. \n\nIn Python, Scikit-Learn adopts pickled Python objects with `*.pkl` extension. Joblib with `*.joblib` extension is an alternative that's faster than Pickle for large NumPy arrays. If XGBoost is used, then a model can be saved in `*.bst`, `*.joblib` or `*.pkl` formats. \n\nWith some formats, it's possible to save not just models but also pipelines composed of multiple models. Scikit-Learn is an example that can export pipelines in Joblib, Pickle, or PMML formats. \n\n\n### What metadata could be useful along with an ML model?\n\nData scientists conduct multiple experiments to arrive at a suitable model. Without metadata and proper management of such metadata, it becomes difficult to reproduce the results and deploy the model into production. ML metadata also enables us to do auditing, compare models, understand provenance of artefacts, identify reusable steps for model building, and warn if data distribution in production deviates from training. \n\nTo facilitate these, metadata should include model type, types of features, pre-processing steps, hyperparameters, metrics, performance of training/test/validation steps, number of iterations, if early stopping was enabled, training time, and more. \n\nA saved model (also called exported or serialized model), will need to be deserialized when doing predictions. Often, the versions of packages or even the runtime will need to be the same as those during serialization. Some recommend saving a reference to an immutable version of training data, version of source code that trained the model, versions of libraries and their dependencies, and the cross-validation score. 
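A minimal sketch of what this might look like in practice, pairing a serialized model with a metadata record (the file names, the chosen metadata fields and the iris dataset are illustrative assumptions):

```python
import json, sys
import joblib, sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Serialize the model itself...
joblib.dump(model, "model.joblib")

# ...and record metadata needed to reproduce, compare or audit it later.
metadata = {
    "model_type": "LogisticRegression",
    "hyperparameters": model.get_params(),
    "cv_score": cross_val_score(model, X, y, cv=5).mean(),
    "python_version": sys.version,
    "sklearn_version": sklearn.__version__,
    "training_data_ref": "iris v1 (immutable snapshot)",   # illustrative reference
}
with open("model_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2, default=str)
```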
For reproducible results across platform architectures, it's a good idea to deploy models within containers, such as Docker. \n\n\n### Which are some useful tools when working with ML models?\n\nThere are tools to **visualize** an ML model. Examples include *Netron* and *VisualDL*. These display the model's computational graph. We can see data samples, histograms of tensors, precision-recall curves, ROC curves, and more. These can help us optimize the model better. \n\nSince ONNX format aids interoperability, there are **converters** that can convert from other formats to ONNX. One such tool is *ONNXMLTools* that supports many formats. It's also a wrapper for other converters such as keras2onnx, tf2onnx and skl2onnx. ONNX GitHub code repository lists many more converters. Many formats can be converted to Apple Core ML's format using *Core ML Tools*. For Android, `tf.lite.TFLiteConverter` converts a Keras model to TFLite. \n\nSometimes converters are not required. For example, PyTorch can natively export to ONNX. \n\nONNX models themselves can be simplified and there are **optimizers** to do this. *ONNX Optimizer* is one tool. *ONNX Simplifier* is another, built using ONNX Optimizer. It basically looks at the whole graph and replaces redundant operators with their constant outputs. There's a ready-to-use online version of ONNX Simplifier. \n\n## Milestones\n\n1952\n\nAt IBM, Arthur Samuel writes the first learning program. Applied to the game of checkers, the program is able to learn from mistakes and improve its gameplay with each new game. In 1959, Samuel popularizes the term **Machine Learning** in a paper titled *Some Studies in Machine Learning Using the Game of Checkers*. \n\n1986\n\nRumelhart et al. publish the method of **backpropagation** and show how it can be used to optimize the weights of neurons in artificial neural networks. This kindles renewed interest in neural networks. Although backpropagation was invented in the 1960s and developed by Paul Werbos in 1974, it was ignored back then due to the general lack of interest in AI. \n\n1990\n\nIn this decade, ML shifts from a knowledge-driven to a **data-driven approach**. With the increasing use of statistics and neural networks, ML tackles practical problems rather than lofty goals of AI. Also during the 1990s, Support Vector Machine (SVM) emerges as an important ML technique. \n\n2006\n\nHinton et al. publish a paper showing how a network of many layers can be trained by smartly initializing the weights. This paper is later seen as the start of **Deep Learning** movement, which is characterized by many layers, lots of training data, parallelized hardware and scalable algorithms. Subsequently, many DL frameworks are released, particularly in 2015. \n\nJun \n2016\n\nVartak et al. propose *ModelDB*, a system for **ML model management**. Data scientists can use this to compare, explore or analyze models and pipelines. The system also manages metadata, quality metrics, and even training and test data. In general, from the mid-2000s we see interest in ML model management and platforms. Examples include *Data Version Control (DVC)* (2017), Kubeflow (2018), ArangoML Pipeline (2019), and *TensorFlow Extended (TFX)* (2019 public release). \n\nSep \n2017\n\nMicrosoft and Facebook come together to announce **Open Neural Network Exchange (ONNX)**. This is proposed as a common format for ML models. 
With ONNX, we obtain framework interoperability (developers can move their models across frameworks) and shared optimizations (hardware vendors and others can target ONNX for optimizations). \n\nJul \n2019\n\nWhile there are tools to convert from other formats to ONNX, one ML expert notes some limitations. For example, *ATen* operators in PyTorch are not supported in ONNX. This operator is not standardized in ONNX. However, it's possible to still export to ONNX by updating PyTorch source code, which is something only advanced users are likely to do. \n\nMar \n2020\n\nIn an image classification task, a performance comparison of ONNX format with PyTorch format shows that ONNX is faster during inference. Improvements are higher at lower batch sizes. On another task, ONNX shows as much as 600% improvement over Scikit-Learn. Further improvements could be obtained by tuning ONNX for specific hardware.","meta":{"title":"Machine Learning Model","href":"machine-learning-model"}} {"text":"# MIOTY\n\n## Summary\n\n\nMIOTY is a software-based LPWAN protocol that's targeted at IoT applications. In particular, many industrial IoT applications demand high reliability, scalability, power efficiency and mobility. High data rate is not really needed. MIOTY meets these requirements while also being an ETSI standard, TS 103 357. In addition, the MIOTY Alliance promotes the technology towards better interoperability among vendors of both endpoints and base stations. \n\nMIOTY is often written as mioty®. Fraunhofer-Gesellschaft owns the registered trademark. Patents owned by Fraunhofer-Gesellschaft and Diehl Metering GmbH are licensed via Sisvel International.\n\n## Discussion\n\n### Given many other LPWAN protocols, why do we need MIOTY?\n\nLPWAN protocols are many. Some operate in unlicensed spectra: LoRa, Sigfox, and ZigBee. Others (from the cellular world) operate in licensed spectra: LTE-M, EC-GSM and NB-IoT. ZigBee, LoRa and NB-IoT offer about 250kbps. LTE-M offers 1Mbps. Sigfox offers 100bps. \n\nIn license-free spectra, where many technologies coexist, interference causes packet loss. Cellular technologies have higher power consumption. ZigBee has a mesh topology. Though ZigBee's relay nodes extend coverage, they have lower power efficiency. Some protocols are not standardized, leading to vendor lock-in or interoperability issues. \n\nWhile these protocols have their niche applications, there are some applications that need low data rate, higher power efficiency and long range. Packet loss should be very low even in the face of interference. A single base station should be able to handle thousands of endpoints. A star topology, rather than a mesh topology, is more suited to meet these requirements. This is the space that MIOTY addresses. \n\n\n### What are the use cases for MIOTY?\n\nMIOTY is being seen as a \"low-throughput tech for last mile industrial communications\". In smart grids, gas and water meters can use MIOTY. In agriculture, soil sensors and irrigation controls can use MIOTY. In smart factories and buildings, asset tracking can benefit from MIOTY. Remote sensors need long range that MIOTY provides. MIOTY can handle assets moving up to 120kph, thus making it suitable for fleet management or vehicle-to-infrastructure communications. \n\nMIOTY is designed to support **massive IoT**, where 100,000+ endpoints can be supported by a single base station handling 1.5 million messages per day. This requirement is common in smart metering in dense urban deployments or monitoring in smart factories. 
\n\nMIOTY Alliance website shares news on recent applications. In November 2021, a project was initiated to automate and digitize RAG Austria's oil fields using Diehl Metering's MIOTY gateway. In January 2022, sensors in a MIOTY network detected in Germany the pressure wave triggered by an underwater eruption near Tonga. \n\n\n### What are the main technical details of MIOTY?\n\nMIOTY is based on a transmission technique called **Telegram Splitting Multiple Access (TSMA)**. A telegram or packet is split into smaller packets. These are sent slowly over a longer time period. For better resilience against interference, frequency hopping is used. Data is spread across 24 uplink or 18 downlink radio bursts. System uses 24 frequencies plus 1 for Sync-burst. \n\nMIOTY operates in the sub-GHz range in license-free bands 868 MHz (Europe) and 915 MHz (US). Symbol rate is 2,380,371 symbols per second. Standard carrier spacing is 2,380,371Hz and channel bandwidth is 100kHz. Range is about 5km (non-LOS) and 20km (LOS). Even if 50% of the sub-packets are lost, the original information can be recovered. \n\nCoherent MSK or GMSK demodulation is used. Receiver sensitivity is at -139dBm. Data rate is about 400bps. 10 bytes of application data can be sent in about 400ms. At the endpoint, duty cycle can be as low as 0.1% with a power consumption of 17.8μWh per message. This means that batteries could last for 20+ years. Messages are encrypted and integrity protected. \n\n\n### Which are the different MIOTY device classes?\n\nMIOTY defines three device classes: \n\n + **Class Z**: Unidirectional and uplink only. For monitoring applications. Very high energy efficiency.\n + **Class A**: Bidirectional so that endpoints can be configured via downlink unicast messages. Communication is initiated by endpoints.\n + **Class B**: Bidirectional. Suits low latency applications. Enables control of actuators at endpoints. Supports both unicast and broadcast messages. In broadcast mode, base station sends a periodic beacon signal. This indicates the timeslots when an endpoint needs to receive.For the TS-UNB protocol family, the ETSI standard mentions classes Z and A but not class B. \n\n\n### What's the architecture of a typical MIOTY-based network?\n\nThe MIOTY system is one realization of a **Low Throughput Network (LTN)** standardized by ETSI. MIOTY is defined in the standard as TS-UNB and it specifies the air interface between endpoints and base station. In the standard, this interface is called *Interface A*. \n\nOne or more endpoints connect via the air interface to a base station (aka access point). The system can have multiple base stations but only one Service Center (SC). SC forwards/aggregates/deduplicates data, authenticates and configures endpoints, manages base stations, and coordinates roaming to other SCs. In practice, some of these functions are done by a separate entity called the Application Center. Application Center may use various protocols (MQTT, REST, COAP) to interface to applications. In fact, the ETSI standard identifies Registration Authority (RA) that stores identifiers and credentials of endpoints. In MIOTY, these RA functions are part of the Application Center. Finally, an IoT platform handles visualization and analytics. \n\n\n### What's the current MIOTY ecosystem?\n\nThe MIOTY Alliance was formed in 2019. By 2022, it acquired 10 full members and many associated members. Members include chipset vendors, hardware manufacturers, software stack providers and application solution providers. 
It's goal is \"to enable the most accessible, robust and efficient Massive IoT connectivity solution on the market\". \n\nMIOTY chipsets are being made by Radiocrafts, Silicon Labs, Texas Instruments, STMicroelectronics (STM32WL SoC series), etc. TI's CC1310 wireless MCU includes RF transceiver and Arm® Cortex®-M3 MCU. More powerful ones include CC1312R and CC1352R, the latter capable of BLE connectivity as well. \n\nGateways are available from BehrTech (called MYTHINGS), Swissphone, Deihl Metering, WEPTECH AVA and AST-X. Some of these vendors also offer endpoint hardware plus development kits. Others such as Radiocrafts and Sentinum offer only endpoints. ResIOT plans to offer a complete MIOTY network solution that includes base station, Service Center, and Application Center. \n\nFor the software stack, STMicroelectronics has partnered with Stackforce, which offers a multi-stack solution supporting MIOTY, LoRaWAN, Wireless M-Bus and Sigfox. BehrTech and Swissphone have their own stacks.\n\n## Milestones\n\nSep \n2011\n\nFraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. files a German patent that introduces the concept of **telegram splitting**. A packet or telegram is split into smaller sub-packets that are sent over a much longer time period. In addition, sub-packets may be duplicated and sent on multiple frequencies (frequency hopping). The Fraunhofer Institute started this research in 2009. \n\nDec \n2015\n\nFraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. apply for a German **trademark on MIOTY**. International and U.K. trademarks are filed in May 2016. Subsequently, these applications are approved and registered. \n\nJun \n2018\n\nETSI standardizes the telegram splitting method of transmission in **TS 103 357: Protocols for radio interface A**. It's named Telegram Splitting Ultra Narrow Band (TS-UNB). It's the protocol used in MIOTY. \n\nJul \n2018\n\nBehrTech claims to be the first to offer a MIOTY solution that's compliant with the ETSI standard. In March 2021, in partnership with WEPTECH, BehrTech claims first low-cost MIOTY gateway. \n\nNov \n2019\n\n**MIOTY Alliance** is formed. It's founding members include Fraunhofer Institute for Integrated Circuits ISS, Texas Instruments, Diehl Metering, Diehl Connectivity Solutions, ifm, Ragsol, Stackforce and Wika. In February 2020, the Alliance is announced officially at the Embedded World 2020 in Nuremberg. The Alliance aims to provide an open, standardized and interoperable ecosystem for MIOTY that can suit industrial IoT and smart city use cases. By mid-2022, the Alliance has 10 full members and 25 associate members. \n\nApr \n2020\n\nIt's announced that Sisvel International will manage the licensing of MIOTY. \n\nMay \n2021\n\nBernhard, Schlicht and Kilian of the Fraunhofer Institute for Integrated Circuits IIS are awarded the **Joseph von Fraunhofer Prize 2021** for their invention and development of MIOTY. In other news, **MIOTY Class B** is released to complement earlier classes Z and A. \n\nJul \n2021\n\nFraunhofer IIS completes a successful test of MIOTY with a **GEO satellite** at a distance of about 38,000 km. This is in the S band at about 2 GHz. This test shows that MIOTY can be used in scenarios that are not dependent on any terrestrial infrastructure. \n\nOct \n2021\n\nAt the Mobile Breakthrough Awards, BehrTech wins **Industrial IoT Solution of the Year** award for its MIOTY-BLE dual stack. The solution uses BLE 5.2 and CC1352R wireless microcontroller from Texas Instruments. 
Their dual stack was first announced in May 2021. \n\nMay \n2022\n\nAt Fraunhofer IIS, researchers achieve up to **3.5 million telegrams per day per base station**. This is due to improved algorithms that can now run on ARM processors. For developers, Fraunhofer IIS also offers a MIOTY evaluation kit.","meta":{"title":"MIOTY","href":"mioty"}} {"text":"# CSS Selectors\n\n## Summary\n\n\nCascading Style Sheets (CSS) are commonly used to style web content. To style a particular element on a HTML/XHTML page, we have to first select the element. This is done using CSS Selectors. A selector essentially selects or targets one or more elements for styling.\n\nCSS3 specifications standardized by W3C defines a wide variety of selectors. A selector can target a single specific element or target multiple elements. In a good web design, CSS selectors are written to balance clarity, portability, stylesheet size, ease of update, and performance.\n\n## Discussion\n\n### Which are the commonly used CSS selectors?\n\nA web page is essentially a set of HTML/XHTML tags organized into a hierarchy. It's common to mark elements with identifiers and/or class names. It's therefore common to use the following as CSS selectors: \n\n + **By Tag Name**: For instance, a paragraph can be selected with `p`, such as in `p { color: #444; }` .\n + **By Identifier**: An HTML element can have the `id` attribute that must be unique across the document. Thus, identifier can be used to target a specific element. For example, given the HTML snippet `
<div id=\"footer\">`, the relevant selector is `#footer`.\n + **By Class Name**: The `class` attribute is also common for HTML elements. An element can have one or more classes. Multiple elements can belong to a class. By using a class-based CSS selector, we can target multiple elements. For example, given an HTML element with attribute `class=\"pincode\"`, the relevant selector is `.pincode`. The above three selector syntaxes can be combined into a single selector. Suppose we wish to target list items of class `icon` inside the footer, we can write `#footer li.icon`. \n\n\n### How can HTML attributes be used in CSS selectors?\n\nHTML elements can have attributes and these can be used in CSS selectors. The simplest syntax is to select by **presence of the attribute**. For example, `a[title]` targets all anchor elements with `title` attribute. A more refined selection is to match the **value of the attribute**. This comes in three variants (also explained in the figure): \n\n + `[attr=value]`: exact match: `div[class=\"alert\"]` will match `<div class=\"alert\">` but not `<div class=\"alert info\">`.\n + `[attr~=value]`: word match: `div[class~=\"alert\"]` will match `<div class=\"alert info\">` but not `<div class=\"alert-info\">`.\n + `[attr|=value]`: exact or with hyphen match: `div[class|=\"alert\"]` will match `<div class=\"alert\">`, `<div class=\"alert-info\">` and `<div class=\"alert-danger\">` but not `<div class=\"alertinfo\">` or `<div class=\"info alert\">`. It's possible to match substrings of attribute values. These take the form `[attr^=value]`, `[attr$=value]` and `[attr*=value]` to match start, end and any part of the value string respectively. \n\nHTML element names and attribute names are case insensitive but attribute values are case sensitive. To match value strings in a case-insensitive manner, include the `i` flag. For example, `div[class=\"alert\"]` will not match `<div class=\"Alert\">
    ` but `div[class=\"alert\" i]` will. \n\n\n### What's the difference between pseudo-classes and pseudo-elements?\n\nPseudo-classes and pseudo-elements give additional capability to CSS selectors. Neither appears in the document source nor modifies the document tree. \n\n**Pseudo-classes** are applied to elements due to their position (eg. last child) or state (eg. read only). Elements can gain or lose pseudo-classes as the user interacts with the document. The syntax is `:`, such as in `:enabled`, `:hover`, `:visited`, `:read-only`, `:last-child`, or `:nth-of-type()`. \n\nWhereas pseudo-classes represent additional information about an element, **pseudo-elements** represent elements not explicitly present in the document tree. A pseudo-element is always bound to another element (called *originating element*) in the document. It can't exist independently. The syntax is `::`, such as in `::before`, `::after`, `::placeholder`, `::first-line`, or `::first-letter`. \n\nIn CSS1 and CSS2, both pseudo-classes and pseudo-elements used single colon syntax for `::before`, `::after`, `::first-line`, and `::first-letter`. Thus, browsers today accept either syntax for these special cases. \n\nWhere user actions are involved, the two can be combined. For example, `::first-line:hover` matches if the first line is hovered whereas `:hover::first-line` matches the first line of any originating element (such as second line of the paragraph) that's hovered. \n\n\n### Which are the CSS selectors for targeting descendants of an element?\n\nTo select **descendants** use multiple selectors separated by spaces. For example, `p a` targets all links within paragraphs; links outside paragraphs are not selected. Selector `.agenda ul a` targets all links within unordered lists within elements of `agenda` class.\n\nA more restrictive syntax selects only **immediate children** of an element, not grandchildren, great-grandchildren, etc. For example, `ul > a` is incorrect since children of `ul` are `li` elements. The correct selector would be `ul > li > a`. Another example is a list containing other lists. Selector `ul > li` selects only first level list items whereas `ul li` selects list items at every nested level. \n\nPseudo-classes provide further control for child selection. Some of these are `:first-child`, `:last-child`, `:only-child`, `:nth-child`, and `:nth-last-child` (nth child from the end). Children are one-indexed. To target every third list item from fourth item, use `ul > li:nth-child(3n+4)`. To target exactly the fourth item, use `ul > li:nth-child(4)`. To target the last but one item, use `ul > li:nth-last-child(2)`. To target last three items, use `ul > li:nth-child(-n+3)` \n\n\n### Which are the CSS selectors for targeting siblings of an element?\n\nAssume a DOM structure in which elements `h1 p p ul p` are all children of the same parent and therefore siblings in that order. CSS has two ways to select siblings by combining two selectors: \n\n + **Adjacent Sibling**: For example, `h1 + p` means that we target `p` if it's the immediate sibling following `h1`. Though `ul` is a sibling of `h1`, it's not the immediate sibling. Hence, `h1 + ul` will not select `ul`.\n + **General Sibling**: For example, `h1 ~ p` will select all three `p` elements that are all siblings of `h1`. The selector `p ~ h1` will not select `h1` because the way CSS works, we're targeting `h1` siblings that *follow* a `p` element.Combinator `+` targets only one sibling whereas `~` targets multiple siblings. 
With `~`, it's possible to obtain a specific sibling by using pseudo-classes such as `:first-of-type`, `:last-of-type`, `:nth-of-type()` or `:nth-last-of-type()`. For example, `h1 ~ p:nth-last-of-type(2)` will select the second last `p` sibling of `h1`. \n\n\n### What are quantity selectors in CSS?\n\nGiven a number of children, it's possible to select one or more of them depending on the number of children. Quantity selectors help us do this. Essentially, they are pseudo-classes. Multiple pseudo-classes can be used together. We share a few examples. \n\nTo select a list item if it's the only item in the list, use `li:only-child`. This logic can be reversed by doing `li:not(:only-child)`. \n\nTo select the first child if there are exactly six children, use `li:nth-last-child(6):first-child`. This selects the sixth child from the end. This can be the first child only if there are exactly six children. To select children 2-6 if there are exactly six children, use the general sibling combinator, as in `li:nth-last-child(6):first-child ~ li`. \n\nIf there are six or more children, and we wish to ignore the last 5 children, use `li:nth-last-child(n+6)`. To select only the last 5 children given that there are six or more children, use `li:nth-last-child(n+6) ~ li`. \n\n\n### Which are some uncommon CSS selectors?\n\nSome CSS selectors are either not widely known or commonly used, but could be useful in some cases:\n\n + **Universal**: It's inefficient to select all elements by using the *universal selector* `*`. However, it could be used to select all children of an element, such as `.footer > *`.\n + **Empty**: Pseudo-class `:empty` targets elements with no content. HTML comments are not considered. Even if an element has only a single space, it's considered non-empty.\n + **Multiple Classes**: `p.last-section.section` targets paragraphs that belong to both classes `section` and `last-section`.\n + **Selector List**: Same CSS styles can be applied to multiple selectors, which are separated by commas. For example, `em, i {...}` styles `em` and `i` tags alike.\n + **Inverse Selection**: Specify a selector and target the inverse set. For example, `:not(p)` selects all elements that are not paragraphs. Another example is `.nav > div:not(:first-child)` to select all (expect the first) `div` children of `.nav`.\n\n### Could you share some best practices for using CSS selectors?\n\nTo keep content separate from styling, avoid inline CSS styles. Put styles in stylesheets. An exception is when emailing HTML content since many mail clients ignore stylesheets. \n\nWrite legible CSS rulesets. Put each declaration on a separate line. Be careful about spaces. For example, selectors `#header.callout` (header of callout class) and `#header .callout` (callout descendants of header) target different elements. \n\nAvoid repetitive rules. If many selectors contain the same style declaration, abstract that into a separate class-based ruleset. \n\nMove animations to the end of stylesheets so that browsers load and render basic styling first. \n\nCSS allows the use of `!important` in the value of a property. This gives precedence to the declaration, overriding even inline styles. **CSS specificity** determines selector precedence should multiple selectors target the same element. Since `!important` breaks this precedence order, it can make stylesheets hard to maintain and debug. \n\nLong selectors resulting in high specificity, including use of identifiers, can break cascading and prevent efficient reuse of styles. 
Keep selectors to two or three levels deep. For example, replace `#header #intro h1.big a.normal` with `.big .normal`. \n\n\n### What are some performance considerations when designing CSS selectors?\n\nSelecting by identifier is fast. Selecting by class, tag, child, descendant, or attribute exact match are fast in that order. These results may vary across browsers. In fact, difference in speed between identifier and class selectors is negligible. \n\nWebKit developer Dave Hyatt noted back in 2008 that it's best to avoid using sibling, descendant and child selectors for better performance. \n\nThe way browsers parse CSS selectors is different from the way web designers write them. Browsers read selectors in **right-to-left order**. For example, given `#social a` browsers will look at all `a` tags, move up the DOM tree and retain only those that are within `#social`. This insight can help us write more performant selectors. The rightmost selector is called the **key selector**. \n\nThe above example can be made more performant by adding a new class `social-link` to relevant anchor tags and then using the selector `#social .social-link`. In fact, this selector is *overqualified*. It can be simplified to just `.social-link`. A poor selector is `div:nth-of-type(3) ul:last-child li:nth-of-type(odd) *`. It's four levels deep and its key selector selects all elements of the DOM. \n\n## Milestones\n\nAug \n1999\n\nW3C publishes *CSS3 module: W3C selectors*. In November 2018, W3C publishes this as *Selectors Level 3* W3C Recommendation. \n\nAug \n2006\n\nTo simplify DOM tree traversal and manipulation John Resig releases a new library named **jQuery**. In JavaScript code, this allows us to select elements using CSS selectors, thus avoiding many lines of code that call `getElementById()` or `getElementsByClassName()`. jQuery itself is partly inspired by *cssQuery* (September 2005). An earlier effort to use CSS selectors in JavaScript is `getElementsBySelector()` (March 2003). \n\nOct \n2007\n\nIn a W3C Working Draft, new API methods `querySelector()` and `querySelectorAll()` are introduced. Browsers/clients must implement them for both `DocumentSelector` and `ElementSelector` interfaces. Traditionally, DOM elements were selected using methods `getElementById()`, `getElementsByClassName()`, `getElementsByTagName()`, `getElementsByName()`, etc. These are cumbersome. The new API methods allow direct use of CSS selectors. This becomes *Selectors API Level 1* W3C Recommendation in February 2013. \n\nMar \n2009\n\nIn a study on CSS performance, it's noted that optimizing selectors matters only if a webpage has many thousands of DOM elements. For example, a Facebook page (in 2009) has 2882 CSS rules and 1966 DOM elements, a scale easily handled by browsers. Among a number of popular browsers, IE7 gives best performance. IE7 hits a performance limit only when a page has about 18K child/descendant rules. \n\nSep \n2011\n\nW3C publishes *Selectors Level 4* as a Working Draft. As on May 2020, it's still a Working Draft, with the latest version from November 2018. Among the new pseudo-classes are `:is()`, `:has()`, `:where()`, `:target-within`, `:focus-visible`, and `:focus-within`. Temporal pseudo-classes are `:current`, `:past` and `:future`. There are new pseudo-classes for input states and value checking. Grid pseudo-classes include `:nth-col()` and `:nth-last-col()`. 
\n\nJun \n2014\n\nHeydon Pickering proposes at a CSS conference a peculiar CSS selector of three characters that looks like an \"owl's vacant stare\". He calls it the **Lobotomized Owl**. This selector is `* + *`. It selects all elements that follow other elements. For example, this is useful when adding margin between two siblings without margin above the first element or below the last element. The alternative, `:not(:first-child):not(:root)` has high specificity and can't be easily overridden. The Lobotomized Owl has zero specificity.","meta":{"title":"CSS Selectors","href":"css-selectors"}} {"text":"# Tools to Manage Technical Debt\n\n## Summary\n\n\nIn an ideal world, developers follow best practices and implement the best possible solution. In practice, this is rarely the case. Technical debt is often accepted to satisfy immediate business needs. However, this debt must be managed in the long term. Except for the most trivial projects, it's difficult to manage technical debt through manual effort. Fortunately, many tools exist for this purpose.\n\nEngineering systems are diverse because of their choice of hardware, software programming language, development methodology, system architecture, test plan, and so on. As a result, technical debt is also rich in variety: architecture to code, documentation to testing. Tools to address them are also varied. A project might need to adopt more than one tool to manage its technical debt.\n\n## Discussion\n\n### What features are expected of tools that manage technical debt?\n\nGood technical debt management tools are expected to have these traits: \n\n + **Polyglot**: Tools need to analyze the source code. As such, they must support popular programming languages (Java, JavaScript, C/C++, C#, Python, PHP, etc.). This support could be offered as plugins to the main product.\n + **Analysis**: Analysis must be from different perspectives including maintainability, reliability, portability and efficiency. Approaches can include static code analysis, SCM analysis, and test coverage. Tool should record historical data so that trends can be observed. Tool should show time needed to fix the debts.\n + **Reporting**: Dashboard should show a high-level summary and project status. Dashboard should be customizable since engineers and business folks are interested in different levels of detail. Visualizations should be clear.\n + **Deployment**: The tool can be hosted on-premise or in the cloud. For multiuser access, it should be a web application.\n + **Flexibility**: It should be possible to override default values and configure the tool to suit the project. For example, thresholds to detect duplicated code must be configurable.\n + **Logging**: We may wish to get insights into the analysis or troubleshoot issues. Tool should therefore collect logs for future study or audit.\n\n### Could you mention some tools to manage technical debt?\n\nSome tools only report code metrics, facilitate code reviews or support only a few languages. Others support multiple languages, cover multiple debt factors, perform risk analysis, visualize debt in different ways, and offer a useful dashboard summary. Among the latter are SonarQube, Squore, and Kiuwan. \n\nBliss, SonarQube, Checkstyle, and Closure Compiler are some tools that help with static code analysis. Designite can detect design smells, such as a class doing things that are not part of its core responsibility (poor cohesion). Jira Software and Hansoft are examples that identify but don't measure technical debt. 
Jacoco captures test debt. \n\nAmong other tools are CAST Application Intelligence Platform, Teamscale, SIG Software Analysis Toolkit, Google CodePro Analytix, Eclipse Metrics, Rational AppScan, CodeXpert, Redmine, Ndepend (Visual Studio plugin), DebtFlag (Eclipse plugin), CLIO, CodeVizard, and FindBugs. \n\nIn general, there are more tools for code/test debts than for design/architecture debts. \n\n\n### How do tools quantify technical debt?\n\nTools should show metrics that are simple, clear, correct and objective. They should help in decision making. There are many metrics to quantify debt with respect to design, coding, defects, testing, documentation, and other aspects of product development. Tools should therefore measure code duplication, code complexity, test coverage, dependency cycles and coupling, lack of documentation, programming rules violations, and more. \n\nIdeally, a few numbers that express overall product quality will be easier to understand at a high level. Expressing technical debt as in units of **person-days** points to the effort needed to remove the debt. This is a good measure but it doesn't indicate product quality. Therefore, we can also express **technical debt as a ratio**. Generally, debt that's higher than 10% needs to be addressed quickly. \n\nTool should also capture trends. For example, a declining trend in test coverage might suggest that with each iteration the team is adding more features that it's capable of testing. \n\nIn general, effort estimates are hard to get right. The suggested approach is to remove technical debt gradually (pay off the principal) with every release. \n\n\n### What are some approaches that tools use to manage technical debt?\n\nGiven various aspects of managing technical debt, we note the following approaches: \n\n + **Identify**: Code analysis, dependency analysis, checklists; compare against an optimal solution.\n + **Measure**: Use mathematical models, source code metrics, manual estimation by experts; estimate based on debt type; use operational metrics; compare against an optimal solution.\n + **Prioritize**: Cost-benefit analysis; repay items with high remediation cost or interest payments; select a set of items that maximize returns (portfolio approach).\n + **Monitor**: Warn when thresholds are reached; track debt dependencies; regular measurements; monitor with respect to defined quality attributes; plot and visualize trends.\n + **Document**: Record each item in detail with identifier, type, description, principal and interest estimates; probability of interest payment; correlation with other items.\n + **Communicate**: Summarize in dashboards; record in backlog and schedule with each development cycle; list or visualize items.\n + **Prevent**: Improve processes; evaluate potential debts during architecture/design phases; create a culture that minimizes debt.\n + **Repay**: Refactor, rewrite, reengineer (new features); automate routine tasks; fix bugs; improve fault tolerance in areas of debt.\n\n### What are some shortcomings of tools that manage technical debt?\n\nTools may differ in the way they quantify technical debt. The calculated metrics may be incomplete or even inaccurate. For example, a tool may fail to capture architectural debt, such as when code violates layered architecture. \n\nThere's no clear consensus on the dimensions or boundaries of technical debt. For example, some may consider known defects as technical debt. When tools report false positives, dealing with these can be tedious. 
\n\nMany tools calculate the principal (cost of refactoring) but not the interest (extra cost due to technical debt). The latter is more difficult to quantify. However, it should be noted that the SQALE method has models to estimate both **remediation cost (principal)** and **non-remediation cost (interest)**. \n\n## Milestones\n\n1992\n\nWard Cunningham coins the term **Technical Debt** while working on WyCASH+, a financial software written in Smalltalk. \n\n2001\n\n**The Agile Manifesto** is signed. In the following years, the concept of technical debt is more widely adopted with the growth of the Agile movement. \n\nFeb \n2007\n\nThe first lines of code are written for the **Sonar platform**. In November 2008, SonarSource is founded to accelerate development and adoption of the open source Sonar platform. By mid-2010, SonarSource gets regular downloads. In April 2013, SonarSource gets a commercial edition and is renamed to SonarQube. \n\nSep \n2009\n\nIn a white paper, Letouzey and Coq introduce the **SQALE (Software Quality Assessment Based on Lifecycle Expectations) Analysis Model**. They note that in qualimetry, any measure of software quality should be precise, objective and sensitive. The SQALE model provides normalized measures and aggregated indices. Since software is a hierarchy of artefacts, the model standardizes aggregation rules that lead to remediation indices. Since SQALE is open source and royalty free, by 2016, it's implemented by many tools. \n\n2014\n\nMultiple studies are published in literature that attempt to understand technical debt and relate it to software quality. A couple of these studies use SonarQube and its plugins for analysis. \n\nMar \n2015\n\nLi et al. bring together different studies and summarize the capabilities of 29 different technical debt management tools. They map them to various attributes: identification, measurement, prioritization, monitoring, repayment, documentation, communication and prevention. Most are able to identify or measure technical debt. In addition, they capture what sort of technical debts these tools capture: code, design (code smells or violations), testing, requirements, architecture, and documentation. \n\nApr \n2015\n\nIn a blog post, Atlassian recognizes the importance of technical debt. The note that their **Jira** software can now list and track technical debt, including when a debt was created and when it's expected to be resolved. \n\nNov \n2018\n\n**DeepSource** is released. It's a source code analysis tool that can integrate with GitHub. Test coverage, documentation debt, security, and dependency debt are some things it can capture. \n\nOct \n2019\n\nOswal studies the support for technical debt in three tools: SonarQube, PMD and Code Analytix. He notes that no single tool gives a holistic view and therefore he uses multiple tools for this analysis. The study looks at the Core Java 8 project plus a web application (Java/JavaScript/HTML/XML). The study addresses software reliability, maintainability and security.","meta":{"title":"Tools to Manage Technical Debt","href":"tools-to-manage-technical-debt"}} {"text":"# Apache Beam\n\n## Summary\n\n\nWith the rise of Big Data, many frameworks have emerged to process that data. These are either for batch processing, stream processing or both. Examples include Apache Hadoop MapReduce, Apache Spark, Apache Storm, and Apache Flink. \n\nThe problem for developers is that once a better framework arrives, they have to rewrite their code. 
Mature old code gets replaced with immature new code. They may even have to maintain two codebases and pipelines, one for batch processing and another for stream processing. \n\nApache Beam aims to solve this problem by offering a **unified programming model** for both batch and streaming workloads that can run on any distributed processing backend execution engine. Beam offers SDKs in a few languages. It supports a number of backend execution engines.\n\n## Discussion\n\n### Why do I need Apache Beam when there are already so many data processing frameworks?\n\nHaving many processing frameworks is part of the problem. Developers have to write and maintain multiple pipelines to work with different frameworks. When a better framework comes along, there's significant effort involved in adopting it. Apache Beam solves this by enabling and reusing a single pipeline across multiple runtimes. \n\nThe benefit of Apache Beam is therefore both in development and operations. Developers can focus on their pipelines and less about the runtime. Pipelines become **portable**. Therefore there's no lock-in to a particular runtime. Beam SDKs allow developers to quickly integrate a pipeline into their applications. \n\nThe **Beam Model** offers powerful semantics for developers to think about data processing at higher level of abstractions. Concepts such as windowing, ordering, triggering and accumulation are part of the Beam model. \n\nBeam has auto-scaling. It looks at current progress to dynamically reassign work to idle workers or scaling up/down the number of workers. \n\nSince Apache Beam is open source, support for more languages (by SDK writers) or runtimes (by runner writers) can be added by the community. \n\n\n### What are the use cases served by Apache Beam?\n\nApache Beam is suitable for any task that can be parallelized by breaking down the data into smaller parts, each part running independently. Beam supports a wide variety of use cases. The simplest ones are perhaps Extract, Transform, Load (ETL) tasks that are typically used to move data across systems or formats. \n\nBeam supports batch as well as streaming workloads. In fact, the name *Beam* signifies a combination of \"Batch\" and \"Stream\". It therefore presents a unified model and API to define parallel processing pipelines for both types of workloads. \n\nApplications that use multiple streaming frameworks (such as Apache Spark and Apache Flink) can adopt Beam to simplify the codebase. A single data pipeline written in Beam can address both execution runtimes. \n\nBeam can be used for scientific computations. For example, Landsat data (satellite images) can be processed in parallel and Beam can be used for this use case. \n\nIoT applications often require real-time stream processing where Beam can be used. Another use case is computing scores for users in a mobile gaming app. \n\n\n### Which are the programming languages and runners supported by Apache Beam?\n\nApache Beam started with a Java SDK. By 2020, it supported Java, Go, Python2 and Python3. Scio is a Scala API for Apache Beam. \n\nAmong the main runners supported are Dataflow, Apache Flink, Apache Samza, Apache Spark and Twister2. Others include Apache Hadoop MapReduce, JStorm, IBM Streams, Apache Nemo, and Hazelcast Jet. Refer to the Beam Capability Matrix for more details. \n\nJava SDK supports the main runners but other SDKs support only some of them. This is because the runners themselves are written in Java, which makes support for non-Java SDKs non-trivial. 
Beam's **portability framework** aims to improve this situation and enable full interoperability. This framework would define data structures and protocols that can match any language to any runner. \n\n**Direct Runner** is useful during development and testing for execution on your local machine. Direct Runner checks if your pipeline conforms to the Beam model. This brings greater confidence that the pipeline will run correctly on various runners. \n\nBeam's portability framework comes with **Universal Local Runner (ULR)**. This complements the Direct Runner. \n\n\n### What are the essential programming abstractions in Apache Beam?\n\nBeam provides the following abstractions for data processing: \n\n + `Pipeline`: Encapsulates the entire task including reading input data, transforming data and writing output. Pipelines are created with options using `PipelineOptionsFactory` that returns a `PipelineOptions` object. Options can specify for example location of data, runner to use or runner-specific configuration. A pipeline can be *linear* or *branching*.\n + `PCollection`: Represents the data. Every step of a pipeline inputs and outputs `PCollection` objects. The data can be *bounded* (eg. read from a file) or *unbounded* (eg. streamed from a continuous source).\n + `PTransform`: Represents an operation on the data. Inputs are one or more `PCollection` objects. Outputs are zero or more `PCollection` objects. A transform doesn't modify the input collection. I/O transforms read and write to external storage. Core Beam transforms include `ParDo`, `GroupByKey`, `CoGroupByKey`, `Combine`, `Flatten`, and `Partition`. Built-in I/O transforms can connect to files, filesystems (eg. Hadoop, Amazon S3), messaging systems (eg. Kafka, MQTT), databases (eg. Elasticsearch, MongoDb), and more.\n\n### What's the Beam execution model?\n\nCreating a pipeline doesn't imply immediate execution. The designated pipeline runner will construct a **workflow graph** of the pipeline. Such a graph connects collections via transforms. Then the graph is submitted to the distributed processing backend for execution. Execution happens asynchronously. However, some runners such as Dataflow support blocking execution. \n\n\n### What are some essential concepts of the Beam model?\n\nData often has two associated times: **event time**, when the event actually occurred, and **processing time**, when the event was observed in the system. Typically, processing time lags event time. This is called *skew* and it's highly variable. \n\nBounded or unbounded data are grouped into **windows** by either event time or processing time. Windows themselves can be fixed, sliding or dynamic such as based on user sessions. Processing can also be time-agnostic by chopping unbounded data into a sequence of bounded data. \n\nData can arrive out of order and with unpredictable delays. There's no way of knowing if all data applicable to an event-time window have arrived. Beam overcomes this by tracking **watermarks**, which gives a notion of data completeness. When a watermark is reached, the results are materialized. \n\nWe can also materialize early results (before watermark is reached) or late results (data arriving after the watermark) using **triggers**. This allows us to refine results over time. Finally, **accumulation** tells how to combine multiple results of the same window. \n\n\n### Could you point to useful developer resources to learn Apache Beam?\n\nApache Beam's official website contains quick start guides and documentation. 
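Before turning to those resources, a minimal pipeline gives a feel for the abstractions described above. This sketch uses the Python SDK with the Direct Runner; the word-count task and the step labels are illustrative:

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# A tiny word-count pipeline executed locally on the Direct Runner.
opts = PipelineOptions(runner="DirectRunner")
with beam.Pipeline(options=opts) as p:
    (p
     | "Create" >> beam.Create(["to be or not to be"])   # bounded PCollection
     | "Split"  >> beam.FlatMap(str.split)               # PTransform: one word per element
     | "Pair"   >> beam.Map(lambda w: (w, 1))
     | "Count"  >> beam.CombinePerKey(sum)               # core Combine transform
     | "Print"  >> beam.Map(print))
```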
The Overview page is a good place to start. There's an example to try out Apache Beam on Colab.\n\nThe Programming Guide is an essential read for developers who wish to use Beam SDKs and create data processing pipelines. \n\nVisit the Learning Resources page for links to useful resources. On GitHub, there's a curated list of Beam resources and a few code samples.\n\nDevelopers who wish to contribute to the Beam project should read the Contribution Guide. The code is on GitHub. The codebase also includes useful examples. For example, Python examples are in folder path `sdks/python/apache_beam/examples` and Go examples in `sdks/go/examples`. \n\n## Milestones\n\nJul \n2014\n\nJay Kreps questions the need to maintain and execute parallel pipelines, one for batch processing (eg. using Apache Hadoop MapReduce) and one for stream processing (eg. using Apache Storm). The batch pipeline gives exact results and allows data reprocessing. The streaming pipeline has low-latency and gives approximate results. This has been called **Lambda Architecture**. Kreps instead proposes, \"a language or framework that abstracts over both the real-time and batch framework.\" \n\nMay \n2015\n\nAt a Big Data conference in London, Google engineer Tyler Akidau talks about streaming systems. He introduces some of those currently used at Google: MillWheel, Google Flume, and Cloud Dataflow. In fact, Cloud Dataflow is based on Google Flume. These recent developments bring greater maturity to streaming systems, which so far have remained less mature compared to batch processing systems. \n\n2016\n\nIn February, Google's Dataflow is accepted by Apache Software Foundation as an Incubator Project. It's named **Apache Beam**. The open-sourced code includes Dataflow Java SDK, which already supports four runners. There's plan to build a Python SDK. Google Cloud Dataflow will continue as a managed service executing on the Google Cloud Platform. Beam logo is also released in February. \n\nJan \n2017\n\nApache Beam graduates from being an incubator project to a top-level Apache project. During the incubation period (2016), the code was refactored and documentation was improved in an extensible vendor-neutral manner. \n\nMay \n2017\n\n**Beam 2.0** is released. This is the first stable release of Beam under the Apache brand. It's said that Beam is at this point \"truly portable, truly engine agnostic, truly ready for use.\" \n\nJun \n2018\n\nWith the release of Beam 2.5.0, **Go SDK** is now supported. Go pipelines run on Dataflow runner. \n\nJul \n2020\n\n**Beam 2.23.0** is released. This release supports Twister2 runner and Python 3.8. It removes support for runners Gearpump and Apex.","meta":{"title":"Apache Beam","href":"apache-beam"}} {"text":"# Packet Forwarding Control Protocol\n\n## Summary\n\n\nPacket Forwarding Control Protocol (PFCP) is a protocol used for communicating between control plane (CP) and user plane (UP) functions in 4G (Release 14 onwards) and 5G networks. In these networks, the concept of **Control and User Plane Separation (CUPS)** separates the two planes. However, there's a need for these two planes to communicate across the newly defined interfaces. For this purpose, PFCP was defined.\n\nPFCP sits on top on UDP/IP. It functions only in the control plane. A control plane node uses PFCP to associate with one or more user plane nodes and subsequently configure PDU sessions for the user plane. PFCP consists of node-related or session-related messages. Messages and IEs are encoded in TLV format. 
\n\nOpen source implementations of PFCP are available.\n\n## Discussion\n\n### Where does PFCP fit in?\n\nIn 4G EPC, PFCP is used on the **Sx interfaces**. In particular, these are Sxa (SGW-C and SGW-U), Sxb (PGW-C and PGW-U) and Sxc (TDF-C and TDF-U). Due to CUPS, traditional Serving Gateway (SGW) and PDN Gateway (PGW) were split into SGW-C and SGW-U, and PGW-C and PGW-U respectively. PFCP was defined to help these split entities to communicate. The Traffic Detection Function (TDF) is an optional function and it was also split. Its parts also use PFCP. \n\nIt's also possible to combine SGW-C and PGW-C into a single control plane node. Likewise, SGW-U and PGW-U can be combined. These combined entities can communicate using PFCP. \n\nIn 5G, CUPS was applied from the outset, that is, in Release 15. The equivalent control plane and user plane nodes are Session Management Function (SMF) and User Plane Function (UPF). These are connected via the **N4 interface**. PFCP is used on this interface. \n\nIn some scenarios, user plane data packets may be sent on these interfaces between CP and UP functions. However, such packets don't use PFCP. They're sent using GTP-U over UDP/IP. \n\n\n### Which are the main procedures in PFCP?\n\nThe figure shows the list of PFCP messages, which follow the procedures that PFCP needs to perform: \n\n + **Node-related**: Heartbeat (to know if the peer node is alive); Load Control (UP sends CP its current load); Overload Control (UP informs CP that it's overloaded); Association Setup/Update/Release (relate to associating UP with CP); Packet Flow Description (PFD) Management (provision PFDs to UP); Node Report (UP reports non-session-related information to CP).\n + **Session-related**: Session Establishment/Modification/Release (configure rules in the UP to handle packets on a per-session basis); Session Report (UP reports session-specific information to CP).PFCP runs on top of UDP, which doesn't guarantee packet delivery. For this reason, PFCP **retransmits** every request for which a response is not received within a defined timeout. Timeout value and number of retransmissions are implementation specific. A request-response pair share the same sequence number. \n\nFor efficiency, many session-related messages targeting the same peer PFCP entity can be **bundled** together. If bundled messages remain unacknowledged, they may be retransmitted individually. Bundling is not applicable for node-related messages. \n\n\n### What's meant by PFCP Association?\n\nBefore a UP function can carry packets, it must first be configured by a CP function. It's for this purpose that PFCP exists. However, even before CP configures UP for a session, it must first select a suitable UP. This is exactly what the association procedure achieves. A CP associates with a UP before establishing sessions on the UP. Association allows CP to subsequently use the resources of the UP. \n\nThe association setup procedure may be initiated by either CP or UP. Support for UP-initiated association is optional. Either way, CP and UP exchange their support for PFCP optional features during the association procedure. \n\nEither CP or UP may initiate the association update procedure. Only CP may initiate the association release procedure. When UP receives a release request message, it shall delete all PFCP sessions and then delete its association with the CP. \n\nA CP node may be associated with many UP nodes and vice versa. However, a CP-UP pair shall have only one association. 
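As a rough sketch of what the heartbeat procedure and request retransmission look like on the wire, the following packs a node-related Heartbeat Request by hand. The message type (1) and Recovery Time Stamp IE type (96) are values recalled from TS 29.244 and should be verified against the spec; the retry count and timeout are arbitrary since the standard leaves them implementation specific:

```python
import socket, struct, time

PFCP_PORT = 8805                 # assigned by IANA for PFCP over UDP
MSG_HEARTBEAT_REQUEST = 1        # message type per TS 29.244 (assumed)
IE_RECOVERY_TIME_STAMP = 96      # IE type per TS 29.244 (assumed)
NTP_OFFSET = 2208988800          # seconds between 1900 and the Unix epoch

def heartbeat_request(seq):
    # Recovery Time Stamp IE in TLV form: type (2 bytes), length (2), value (4).
    ie = struct.pack("!HHI", IE_RECOVERY_TIME_STAMP, 4, int(time.time()) + NTP_OFFSET)
    # Node-related message: no SEID, so after the 4 mandatory octets come
    # the 3-byte sequence number and a spare octet, then the IEs.
    tail = struct.pack("!3Bx", (seq >> 16) & 0xFF, (seq >> 8) & 0xFF, seq & 0xFF) + ie
    # Flags 0x20 = version 1 with the S (SEID present) bit cleared.
    # The length field excludes the first four header octets.
    return struct.pack("!BBH", 0x20, MSG_HEARTBEAT_REQUEST, len(tail)) + tail

def send_with_retransmission(peer_ip, seq, retries=3, timeout=2.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    msg = heartbeat_request(seq)
    for _ in range(retries):
        sock.sendto(msg, (peer_ip, PFCP_PORT))
        try:
            return sock.recvfrom(2048)   # expect a Heartbeat Response with the same sequence number
        except socket.timeout:
            continue                     # no response: retransmit the same request
    return None
```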
\n\n\n### What's the packet forwarding model in PFCP?\n\nThe CP function configures an associated UP function with Packet Detection Rules (PDRs) for each session. Each PDR contains Packet Detection Information (PDI). A PDI can include UE IP address, SDF filter, QFI, Ethernet packet filter, etc. Each PDR also has associated rules that specify how a packet should be treated, that is, if or how should the packet be forwarded, buffered, dropped, etc. No two PDRs shall have the same PDIs. \n\nWhen a data packet comes into the UP function, first task is to identify the packet's PFCP session using provisioned PDRs. Then UP identifies all matching PDRs of the session. PDR with the highest precedence is selected. Finally, the associated rules of this PDR are applied on the packet. \n\nPackets unmatched by any PDRs are dropped. However, CP may configure UP with a separate session with a PDR containing wildcarded fields. Such a PDR shall have lowest precedence. Such packets may be configured to be dropped or forwarded to CP. \n\n\n### Which are the rules signalled by PFCP?\n\nA PFCP session is configured with the following: \n\n + **Packet Detection Rule (PDR)**: PDIs are used to identify the PFCP session.\n + **Forwarding Action Rule (FAR)**: Apply Action IE informs UP function if packets are to be forwarded, duplicated, dropped or buffered.\n + **QoS Enforcement Rule (QER)**: QoS enforcement includes gating control, QoS control, DL flow level marking, service class indicator marking and reflective QoS.\n + **Usage Reporting Rule (URR)**: UP function sends traffic reports to CP function. Reports may be volume-based, time-based or both.\n + **Buffer Action Rule (BAR)**: Specifies buffering parameters for FARs whose Apply Action IEs indicate buffering.\n + **Multi-Access Rule (MAR)**: Applicable for multi-access PDU sessions if UP function supports the Access Traffic Steering, Switching, Splitting (ATSSS) feature.FAR, QER, URR and MAR are association with PDRs of the same PFCP session. Only PDR and FAR are mandatory for a session. These rules are configured during session establishment or modification procedures. However, to reduce signalling load, the UP function may be asked to activate pre-defined PDRs. This is permitted for a UP function that has pre-defined FAR, QER and/or URR. \n\n\n### How are messages and IEs encoded in PFCP?\n\nPFCP messages and IEs are encoded using **TLV format**. TS 29.244 defines message formats in section 7.2 and IE formats in section 8.1. A PFCP message consists of a PFCP header and zero or more IEs. \n\nThe header is of variable length with flags in the first byte signalling the presence/absence of fields. Session-related messages contain a Session Endpoint Identifier (SEID) that's omitted in node-related messages. \n\nMessage type is signalled with a single byte. Message length takes up two bytes. The length value excludes the first four bytes (which are mandatory) of the PFCP header. \n\nFor each IE, the IE type is signalled with two bytes. IE length takes up two bytes. The length value excludes the first four bytes of the IE. Such an encoding enables forward compatibility. Some IEs are extendable. A legacy receiving entity should ignore extra bytes that it doesn't understand. \n\nSome IEs can contain other IEs. These are called **Grouped IEs**. Messages carry these grouped IEs. \n\n\n### Are there open source implementations of PFCP?\n\nThe Open5GS project has C language-based implementation of PFCP. Open5GS is a well-known project with 1300+ stars on its codebase. 
\n\nLess well-known implementations include free5GC's PFCP and go-pfcp, both in Go language. A C++ implementation is OpenAirInterface Software Alliance's PFCP. It's associated openair-uml project gives useful class, sequence and activity diagrams. \n\nThe Open Mobile Evolved Core (OMEC) project has Go implementations of UPF, SMF and PFCP simulator. The simulator simulates 4G SGW-C or 5G SMF. It can be used for UPF testing. \n\n\n### Which are the patents concerning PFCP?\n\nWe note a few patents pertaining to PFCP (though possibly not standardized by 3GPP):\n\n + WO2021009166A1 (2021): CP configures UP to enable Network Address Translation (NAT) for one or more Service Data Flows (SDFs). An IE for NAT can be part of FAR.\n + 20210297535 (2021): 5G specifications allow UPF to report only traffic volume to SMF. It's foreseen that data analytics may become more important in 5G (NWDAF). By enhancing Reporting Rule, we can report latency, jitter, QoE, etc.\n + WO/2021/254645 (2021): Layer-specific volume reporting would give operators flexibility to charge based on application payload only or include L3/L4 headers.\n + EP3697171B1 (2020): UP-initiated Association Release procedure could be useful when UP functions are shut down for maintenance, prior to which usage reports need to be sent to CP. Correct reporting is needed for policy control and charging.\n + 20200059992 (2020): Using Deep Packet Inspection (DPI), a UP determines if a PDR associated with a PFCP session context needs to be modified or a new PDR created. UP then initiates this change towards CP.\n + US20210281664A1 (2021): This talks about negotiating and compressing PFCP messages on the N4 interface.\n\n\n## Milestones\n\nJul \n2015\n\nAt the SA WG2 Meeting #S2-110, a study item titled *Feasibility Study on Control and User Plane Separation of EPC nodes* is proposed. This is subsequently approved in September at the 3GPP TSG SA Meeting #69. \n\nJun \n2016\n\n3GPP publishes **TR 23.714** titled *Study on control and user plane separation of EPC nodes* as part of Release 14. The document identifies the different issues and possible solutions. The selected solution is a functional split of user plane and control plane functions, including the case of combined SGW/PGW. \n\nSep \n2016\n\n3GPP publishes **TS 23.214** that specifies the architectural changes to support CUPS in LTE EPC. This is for Release 14. This evolves for other releases: V15.0.0 (Sep 2017), V16.0.0 (Jun 2019) and V17.0.0 (Jun 2021). \n\nMar \n2017\n\n3GPP publishes **TS 29.244**, v1.0.0, titled *Interface between the Control Plane and the User Plane nodes*. This is the main document giving details of PFCP. In June, this is approved for Release 14 as v14.0.0. \n\nMay \n2017\n\nAt the IANA (Internet Assigned Numbers Authority), **port number 8805** is assigned to PFCP over UDP. This is destination port number for a PFCP request message. The source port may be any locally assigned number. \n\nDec \n2017\n\nAs part of 5G Release 15, 3GPP publishes PFCP document TS 29.244, v15.0.0. \n\nJun \n2019\n\nAs part of 5G Release 16, 3GPP publishes PFCP document TS 29.244, v16.0.0. This update includes Enhanced PFCP Association Release, Deferred PDR activation/deactivation, Activation/Deactivation of pre-defined PDRs, Multi-Access Action Rule, ATSSS support, and more. With v16.1.0 (September 2019), PFCP messages bundling is introduced. \n\nMar \n2021\n\nAs part of 5G Release 17, 3GPP publishes PFCP document TS 29.244, v17.0.0. 
This includes support of Radius, Diameter or DHCP communication via UPF, and partial failure handling over the N4 interface. With v17.1.0 (June 2021) there's support for multi-access PDU sessions and per QoS flow performance measurement. \n\nMar \n2022\n\n3GPP publishes **TR 29.820**, v17.0.0, titled *Study on Best Practice of PFCP*. These best practices lead to better interoperability such as when SMF is deployed centrally at the operator side while UPF is deployed locally at the customer side. Over the N16a interface, I-SMF and SMF have to be interoperable since PFCP IEs are sent over this interface.","meta":{"title":"Packet Forwarding Control Protocol","href":"packet-forwarding-control-protocol"}} {"text":"# CAP Theorem\n\n## Summary\n\n\nA well-design cloud-based application often stores its data across multiple servers. For faster response, data is often stored closer to clients in that geography. Due to the distributed nature of this system, it's impossible to design a perfect system. The network may be unreliable or slow at times. Therefore, there are trade-offs to be made. CAP Theorem gives system designers a method to think through and evaluate the trade-offs at the design stage.\n\nThe three parts of the CAP Theorem are **Consistency**, **Availability**, and **Partition Tolerance**. The theorem states that it's impossible to guarantee all three in a distributed data store. We can meet any two of them but not all three.\n\nOver the years, designers have misinterpreted the CAP Theorem. To reflect read-world scenarios, modifications to the theorem have been proposed.\n\n## Discussion\n\n### What's the definition of CAP Theorem?\n\nA formal definition of CAP Theorem is, \"It is impossible in the asynchronous network model to implement a read/write data object that guarantees the following properties: availability, atomic consistency, in all fair executions (including those in which messages are lost)\". \n\nA simplified definition states that, \n\n> In a network subject to communication failures, it is impossible for any web service to implement an atomic read/write shared memory that guarantees a response to every request.\n\nThe word \"atomic\" used above means that although it's a distributed system, requests are modelled as if they are executing on a single node. This gives us an easy model for consistency. \n\n\n### Could you explain the CAP Theorem?\n\nThe parts of the CAP Theorem can be understood as follows: \n\n + **Consistency**: When a request is made, the server returns the right response. What is \"right\" depends on the service. For example, reading a value from a database might mean that the most recent write to that value should be returned.\n + **Availability**: A request always receives a response from the server. No constraint is placed on how quickly the response must be received.\n + **Partition Tolerance**: The underlying network is not reliable and servers may get partitioned into non-communicating groups. Despite this, the service should continue to work as desired.As an example, consider two nodes G1 and G2 that have been partitioned. A client changes a value from v0 to v1 on G1. However, the same value is not updated on G2 due to the partition. Hence, when G2 is queried it returns the old value v0. Thus, the service is available but not consistent. \n\nSometimes the terms *safety* (consistency) and *liveness* (availability) are used in the generalized sense. Safety means \"nothing bad ever happens\". Liveness means \"eventually something good happens\". 
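The G1/G2 example can be made concrete with a toy simulation. This is only a sketch of the trade-off, not a real replication protocol; the `preferConsistency` flag is an illustrative device showing the alternative of refusing to answer during a partition, that is, giving up availability instead of consistency.

```javascript
// Toy model of two replicas, G1 and G2, with a network partition flag.
class Replica {
  constructor(name) { this.name = name; this.value = 'v0'; }
}

const G1 = new Replica('G1');
const G2 = new Replica('G2');
let partitioned = false;

function write(replica, value) {
  replica.value = value;
  if (!partitioned) {
    (replica === G1 ? G2 : G1).value = value; // normal operation: replicate to the peer
  }
}

function read(replica, preferConsistency = false) {
  if (partitioned && preferConsistency) {
    // Consistency-first choice: refuse to answer rather than risk a stale value.
    throw new Error(`${replica.name} is unavailable during the partition`);
  }
  return replica.value; // availability-first choice: answer, possibly with a stale value
}

partitioned = true;
write(G1, 'v1');            // the update reaches G1 only
console.log(read(G2));      // 'v0' -> available but not consistent
try { read(G2, true); } catch (e) { console.log(e.message); } // consistent but not available
```

The last three lines capture the essence of the theorem: while the partition lasts, a response from G2 is either stale or absent.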
\n\n\n### What's the implication of the CAP Theorem when designing distributed systems?\n\nWhen CAP Theorem was proposed, the understanding was that system designers had three options: \n\n + **CA Systems**: Sacrifice partition tolerance. Single-site or cluster databases using two-phase commit are examples.\n + **CP Systems**: Sacrifice availability. If there's a partition, for consistency we make the service unavailable: return a timeout error or lock operations.\n + **AP Systems**: Sacrifice consistency. If there's a partition, we continue accepting requests but reconcile them later (writes) or return stale values (reads).In practice, we deal with network partitions at least some of the time. The choice is really between consistency and availability. For databases, consistency can be achieved by enabling reads after completing writes on several nodes. Availability can be achieved by replicating data across nodes. In fact, permanent partitions are rare. So the choice is temporary. \n\nDesigners don't have to give up one of the three to build a distributed system. In fact, it's possible to have all three under normal network conditions. There's trade-off only when the network is partitioned. It's also helpful to think probabilistically. We can design a CA system if probability of a partition is far less than that of other systemic failures. \n\n\n### Could you share real-world applications of the CAP Theorem?\n\nDatabases that follow ACID (Atomicity, Consistency, Isolation, Durability) give priority to consistency. However, NoSQL distributed databases prefer availability over consistency since availability is often part of commercial service guarantees. So caching and logging were used for *eventual consistency*. This leads to what we call BASE (Basically Available, Soft-state, Eventually consistent). As examples, Zookeeper prefers consistency while Amazon's Dynamo prefers availability. \n\nMaintaining consistency over a wide area network increases latency. Therefore, Yahoo's PNUTS system is inconsistent because it maintains remote copies asynchronously. A particular user's data is partitioned locally and accessed with low latency. Facebook's prefers to update a non-partitioned master copy. User has a more local but potentially stale copy, until it gets updated. \n\nA web browser can go offline if it loses connection to the server. The web app can fall back to on-client persistent storage. Hence, availability is preferred over consistency to sustain long partitions. Likewise, Akamai's web caching offers best effort consistency with high level of availability. \n\nIn Google, primary partition usually resides within one datacenter, where both availability and consistency can be maintained. Outside this partition, service becomes unavailable. \n\n\n### What are some criticisms of the CAP Theorem?\n\nAlthough the Theorem doesn't specify an upper bound on response time for availability, in practice, there's exists a timeout. CAP Theorem ignores **latency**, which is an important consideration in practice. Timeouts are often implemented in services. During a partition, if we cancel a request, we maintain consistency but forfeit availability. In fact, latency can be seen as another word for availability. \n\nIn NoSQL distributed databases, CAP Theorem has led to the belief that eventual consistency provides better availability than strong consistency. Some believe this is an outdated notion. It's better to factor in sensitivity to network delays. \n\nCAP Theorem suggests a binary decision. 
In reality, it's a continuum. There are different degrees of consistency implemented via \"read your writes, monotonic reads and causal consistency\". \n\n## Milestones\n\n1985\n\nOriginally presented in 1983, researchers Fisher, Lynch and Paterson (FLP) show that distributed consensus is impossible in a fault-tolerant manner in an asynchronous system. Distributed consensus is related to the problem of atomic storage addressed by CAP Theorem. \n\n1996\n\nBefore CAP Theorem is formalized, researchers have been working on similar ideas. One example is the paper titled *Trading Consistency for Availability in Distributed Systems* by two researchers at Cornell University. \n\n2000\n\nEric Brewer presents the CAP Theorem at the 19th Annual ACM Symposium on Principles of Distributed Computing (PODC). It's early history can be traced to 1998 and first published in 1999. Brewer points out that distributed computing has unduly focused on computation and not on data. \n\n2002\n\nMIT researchers Seth Gilbert and Nancy Lynch offer a formal proof of the CAP Theorem in their paper titled *Brewer's Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services*. In asynchronous systems, the impossibility result is strong. In partially synchronous systems, we can achieve a practical compromise between consistency and availability. \n\n2010\n\nDaniel Abadi proposes the **PACELC Theorem** as an alternative to the CAP Theorem. When data is replicated, there's a trade-off between latency and consistency. PACELC makes this explicit: during partitions (P), trade-off is AC; else, trade-off is LC. Default versions of Dynamo, Cassandra, and Riak are PA/EL systems. VoltDB/H-Store and Megastore are PC/EC. MongoDB is PA/EC.","meta":{"title":"CAP Theorem","href":"cap-theorem"}} {"text":"# Microservices\n\n## Summary\n\n\nMicroservice Architecture describes an approach where an application is developed using a collection of loosely coupled services. Previously, applications were based on centralized multi-tier architecture. This worked well in the age of mainframes and desktops. But with cloud computing and mobile devices, backend must be available at all times for a wide range of devices. Bug fixes and features must be delivered quickly without downtime or deploying the entire application. \n\nMicroservices are independently deployable, communicating through web APIs or messaging queues to respond to incoming events. They work together to deliver various capabilities, such as user interface frontend, recommendation, logistics, billing, etc.\n\nMicroservices are commonly run inside containers. Containers simplify deployment of microservices but microservices can run even without containers.\n\n## Discussion\n\n### What are Microservices?\n\nA microservice is an **autonomous independent service** that encapsulates a business scenario. It **contains the code and state**. Usually a microservice even contains its own data store. This makes it independently versionable, scalable and deployable. A microservice is loosely coupled and **interacts** with other microservices through well-defined interfaces using protocols like http. They remain **consistent and available** in the presence of failure. A microservice is independently releasable. Each microservice can be scaled on its own without having to scale the entire app. 
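As a minimal illustration of such an autonomous service, the sketch below exposes a single HTTP endpoint and keeps no state between requests, so any number of identical instances could serve the same traffic. The service name, route and port are made up for illustration; a real microservice would add its own data store, logging and health checks.

```javascript
// A tiny stateless "recommendation" microservice using only Node.js built-ins.
import { createServer } from 'node:http';

const server = createServer((req, res) => {
  if (req.method === 'GET' && req.url.startsWith('/recommendations/')) {
    const userId = req.url.split('/')[2];
    // Stateless: nothing is remembered between requests, so any instance can answer.
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ userId, items: ['item-42', 'item-7'] }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(8080, () => console.log('recommendation service listening on :8080'));
```

Other services, say billing or logistics, would interact with this one only through its HTTP interface, never through its internal code or data store.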
\n\n\n### What are the types of Microservices?\n\nBroadly, there are two types of microservices: \n\n + **Stateless**: Has either no state or it can be retrieved from an external store (cache/database). It can be scaled out without affecting the state. There can be N instances. Examples: web frontends, protocol gateways, etc. A stateless service is not a cache or a database. It has frequently accessed metadata, no instance affinity and loss of a node is non-evident.\n + **Stateful**: Maintains a hard, authoritative state. For large hyper-scale application the state is kept close to compute. N consistent copies achieved through replication and local persistence. Example: database, documents, workflow, user profile, shopping cart, etc. A stateful service consists of databases and caches and the loss of a node is a notable event. It is sometimes a custom app that holds large amounts of data.As a variation, one author has identified three types: stateless (compute), persistence (storage), aggregation (choreography). Aggregation microservices depend on other microservices, thereby have network and disk I/O dependence. \n\n\n### What are the types of microservices in a layered architecture?\n\nWhen we look at microservices as a layered architecture, we can identify the following types: \n\n + **Core/Atomic services**: Fine-grained self-contained services. No external service dependencies. Mostly business logic. Often no network calls.\n + **Composite/Integration Services**: Business functionality composed from multiple core services. Include business logic and network calls. Implement routing, transformations, orchestration, resiliency and stability patterns. Often interface to legacy or proprietary systems.\n + **API/Edge Services**: A selected set of integration services and core services offered as managed APIs for consumers. Implement routing, API versioning, API security and throttling.\n\n### How are microservices different from APIs?\n\nAPIs are not microservices and microservices are not the implementation of an API. API is an interface. A microservice is a component. The term \"micro\" refers to the component and not the granularity of the exposed interfaces. Microservices can be used to expose one or more APIs. However, not all microservice components expose APIs. \n\n\n### What's the relationship between Microservices Architecture and Service Oriented Architecture (SOA)?\n\nSOA and Microservice architecture relate to different scopes. While SOA relates to enterprise service exposure, microservice architecture relates to application architecture. Both try to achieve many of the same things (creation of business functions as isolated components) but at a different scale. They differ in maintainability, granularity, agility, etc. SOA is a very broad term and microservices is a subset of that usage. Netflix noted that microservices is \"fine-grained SOA\". Microservices have been recognized as \"SOA done right\". \n\nSome microservices principles are really different to SOA: \n\n + Reuse is not the goal: Reuse of common components is discouraged due to the dependencies it creates. Reuse by copy is preferred.\n + Synchronous is bad: Making synchronous calls such as API or web services creates real-time dependencies. Messaging is used whenever possible between microservices.\n + Service Discovery at run time: Components are assumed to be volatile. 
So it's often the clients’ responsibility to find and even load balance across instances.\n + Data Duplication is embraced: Techniques such as event sourcing result in multiple independent \"views\" of the data, ensuring that microservices are truly decoupled.\n\n### What are the important principles to keep in mind while designing a Microservices Architecture?\n\nBroadly speaking the following principles are good to know while designing a microservices architecture: Modelling around a Business Domain, Culture of Automation, Hide Implementation Details, Highly Observable, Decentralise all things, Isolate Failure, Consumer First, Deploy Independently. \n\n\n### What are the advantages of using a microservices architecture?\n\nAdvantages span both development and operations. Briefly we note the following advantages: Build and operate service at scale, Improved resource utilisation to reduce costs, Fault isolation, Continuous Innovation, Small Focused Teams, Use of any language or framework. \n\nScalability comes because of the modularity that microservices enable. Because of containers, microservices have become easier deploy in various environments. Because of the isolation of services, there's also a security advantage: an attack on one service will not affect others. \n\nMicroservices are about code and development. Containers are about deployment. Containers enable microservices. Containers offer isolated environments, thus making them an ideal choice for deploying microservices. However, it's possible to deploy microservices without containers. For example, microservices can be deployed independently on Amazon EC2. Each microservice can be in its own auto-scaling group and scaled independently. \n\n\n### Why should I probably not adopt microservices for my app?\n\nSince application is distributed across microservices, such distributed systems are harder to manage and monitor. Operational complexity increases as the number microservices and their instances increases, particularly when they are dynamically created and destroyed. Network calls can be slow and even fail at times. Being distributed, maintaining strong consistency is hard and application has to ensure eventual consistency. \n\nMicroservices require more time for planning and partitioning the app. They should be designed with failure in mind. When you are building a Minimum Viable Product (MVP) or experimenting to assess what works or can add value to business, then a monolithic approach is faster and easier. Adopt microservices only when you need to scale after your idea and its business value is proven. \n\nIf you're managing a legacy application, the effort to migrate to microservices might involve considerable cost. The technology stack may be considerably larger than a monolithic app. Applications have a great dependence on networking and its performance. Microservices can be tested independently but to test the entire application is more difficult. \n\n## Milestones\n\n1970\n\nAlthough not directly influencing the birth of microservices, it's been noted that from an architectural/philosophical perspective similar ideas existed in the 1970s: Carl Hewitt's Actor Model; or pipes in UNIX that connect many small programs, each of which did something specific rather than having a large complex program that did many things. \n\n1990\n\nThis is the decade when **Service-Oriented Architecture (SOA)** starts to gain wider adoption. The term itself is first described by Gartner in 1996. 
The idea is to use services as isolated and independent building blocks that collectively make up an application. Services are loosely bound together by a standard communications framework. \n\n2000\n\nTo enable applications to communicate with one another over a network, W3C standardizes SOAP. Over the next few years, WSDL and UDDI gets standardized as well. XML is the preferred format of communication. Through this decade, **Web Services**, a web-based implementation of SOA, becomes popular over proprietary methods such as CORBA or DCOM. \n\n2009\n\nNetflix redefines the way it develops apps and manages operations by relying completely on APIs. This later develops into what we today call microservices. \n\nMay \n2011\n\nThe term **microservices** gets discussed at a workshop of software architects near Venice, to describe a common architectural style that several of them were exploring at the time. \n\n2012\n\n**REST** and **JSON** become the de facto standard to consume backend data. These would later prove to be foundational for inter-service communication when building microservices-based apps.\n\nMar \n2013\n\nClosely related to microservices, **Docker**, a computer program that performs operating-system-level virtualization (containerization), gets open sourced. \n\nJun \n2014\n\n**Kubernetes**, a container orchestration system for automating deployment, scaling and managing containerized applications, gets open sourced by Google. \n\nNov \n2015\n\nAn app developer survey from NGINX shows that microservices are entering mainstream with about two-thirds either using or investigating them. Adoption is higher with medium and small companies.","meta":{"title":"Microservices","href":"microservices"}} {"text":"# Promises\n\n## Summary\n\n\nIn asynchronous programming, it's common practice to use callbacks so that the main thread can continue with other processing rather than wait for current function to complete. When that function completes, a relevant callback function is called. Writing and maintaining asynchronous code can be difficult. Promises offer a syntax that enables better code structure and flow. \n\nA promise is an object that's returned immediately when an asynchronous call is made even when such a call has not completed. This object is constructed and returned \"with the promise\" that its contents will be filled in at a later point when the asynchronous call completes. Formally, \n\n> A promise represents the eventual result of an asynchronous operation.\n\nPromises have become popular since mid-2010s in the world of JavaScript and web development, although promises are not exclusive to JavaScript.\n\n## Discussion\n\n### Why do we need promises?\n\nAsynchronous code often ends up with deeply nested callbacks. Programmers often call this *callback hell* or *pyramid of doom*. This happens because in async code we can't return values because such values are not yet ready. Likewise, we can't throw exceptions because there's no one to catch them. \n\nPromises solve this by flattening the program flow. Because each asynchronous call immediately returns with a promise object, this object can then be used to specify the callback functions for both success and failure cases. In particular, `then` method of a promise object allows us to specify the callbacks. This method also returns another promise object, thus facilitating a chain or composition of promises.\n\nAsynchronous code written with promises is closer in spirit to how we write synchronous code. 
In synchronous code, we are used to `return`, `throw` and `catch` statements. This functionality was lost in the world of asynchronous callbacks. Essentially, \n\n> The point of promises is to give us back functional composition and error bubbling in the async world.\n\n\n### What are the essentials of a promise?\n\nA promise is a first-class object, meaning that it can be copied, passed as arguments or returned from functions. Moreover, a promise is returned even before the result of an async call is ready. This allows us to call methods of the object (such as `then` method) while the async call is still in progress.\n\nCallbacks can be specified via the `then` method. Two things are possible when the `then` method is called:\n\n + If the async call has already completed (or it could even be a synchronous call), the promise object will invoke the relevant callback immediately.\n + If the async call is still pending, the promise object will register the callback but call it later.In either case, `then` method returns a new promise object and this is important to enable chaining. Multiple callbacks can be added by calling `then` multiple times. \n\nPromises also simplify the handling of exceptions. Anytime an exception is thrown it can be handled by a single `catch` method. In fact, the `catch` method is a simplification of `then` method for exception handling.\n\n\n### What are the states and rules of transition of a promise object?\n\nA promise can be in one of three states: \n\n + **pending**: This is the initial state.\n + **fulfilled**: This is entered if the execution succeeds. The promise is fulfilled with a *value*.\n + **rejected**: This is entered if execution fails. Rejection comes with a *reason*.A promise that's either fulfilled or rejected is said to be *settled*. This is an apt term since a promise that's settled is an immutable object. It's value (when fulfilled) or reason (when rejected) cannot change. Immutability is important so that consumers of the promise are prevented from introducing side effects. \n\nA promise is said to be *resolved* if it's either settled or \"locked in\" to the state of another promise. Once resolved, any attempt to resolve it again will have no effect on the promise. An unresolved promise, a promise that's not resolved, is in pending state. \n\n\n### What are the methods of a promise object?\n\nThe Promises/A+ standard specifies `then` method to access the eventual value or reason. For interoperability, no other method needs to be specified.\n\nHowever, it's common for other standards or implementations to have other methods. For example, ECMAScript 2015 specifies: \n\n + Methods `all`, `race`, `reject` and `resolve` of `Promise`\n + Methods `then`, `catch` and `finally` of a `Promise.prototype`Among the non-standard methods are `Promise.denodeify`, `Promise.prototype.done`, `Promise.prototype.finally` and `Promise.prototype.nodeify`. GuzzleHttp's Promise implementation in PHP provides methods `otherwise`, `wait`, `getState` and `cancel`. Bluebird adds useful methods such as `map`, `some`, `any`, `filter`, `reduce`, `each` and `props`. 
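A short sketch ties these methods together. `fetchUser` and `fetchOrders` below are hypothetical stand-ins for real asynchronous calls; the point is the chain built from the standard `then`, `catch` and `finally` methods, and how a chain also gives strictly sequential execution.

```javascript
// Hypothetical async functions standing in for real I/O.
function fetchUser(id) {
  return Promise.resolve({ id, name: 'Alice' });
}
function fetchOrders(user) {
  return user.name
    ? Promise.resolve(['order-1', 'order-2'])
    : Promise.reject(new Error('unknown user'));
}

fetchUser(7)
  .then(user => fetchOrders(user))       // each then returns a new promise, enabling the chain
  .then(orders => console.log(orders))   // ['order-1', 'order-2']
  .catch(err => console.error('failed:', err.message)) // any rejection above bubbles down to here
  .finally(() => console.log('done'));

// Sequential execution is just a longer chain; here reduce builds it from an array of ids.
[1, 2, 3].reduce(
  (chain, id) => chain.then(() => fetchUser(id)).then(u => console.log(u.id)),
  Promise.resolve()
);
```

Note that the flat chain replaces what would otherwise be several levels of nested callbacks, and a single `catch` handles errors thrown anywhere above it.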
\n\n\n### Could you share some details of the then method?\n\nA promise must have a `then` method that accepts two arguments and returns another promise: `q = p.then(onFulfilled, onRejected)`, where the following hold true for promises `p` and `q`: \n\n + Arguments `onFulfilled` and `onRejected` are both optional\n + If `onFulfilled` is a function, it's called once when `p` is fulfilled\n + If `onRejected` is a function, it's called once when `p` is rejected\n + Promise `q` is resolved with value `x` if either function `onFulfilled` or `onRejected` returns `x`\n + Promise `q` is rejected with reason `e` if either function `onFulfilled` or `onRejected` throws an exception `e`\n + If `onFulfilled` is not a function, `q` will be fulfilled with the value of `p` when fulfilled\n + If `onRejected` is not a function, `q` will be rejected with the reason of `p` when rejectedFor interoperability, non-promises can be treated as promises if they provide the `then` method, for example, using duck typing. Such an object is called a *thenable*. \n\n\n### Can you give some use cases where promises might simplify my code?\n\nPromises can enable us to sequentially call a bunch of async functions. We can handle all errors in a single code block if we wish to do so. We can trigger multiple async calls and do further processing only when all of them have completed; or exit if any one of them throws an exception; or handle the first one that completes and ignore the rest. Promises enable us to retry async calls more easily. \n\n\n### What are some tools for working with promises?\n\nIf your browser doesn't support promises, a polyfill will be needed. One option is to use promise-polyfill. Another polyfill is called es6-promise.\n\nBluebird is a JS library for promises. It can be used in Node.js and browsers. Using \"promisification\", it can convert old API into promise-aware API. Another useful library is Q. Alternatives include promise, lie, when and RSVP. A comparison of these promise libraries by size and speed is available. \n\nMocha, Chai and Sinon.JS are three suggested tools for testing asynchronous program flow. Mocha and Chai in combination work nicely for testing promises. \n\n\n### Can you give some tips for developers coding promises?\n\nHere are some tips, for beginners in particular: \n\n + It's possible to write promise-based code in the manner of callback-style nested code. This is bad practice. Instead use a promise chain.\n + Always handle errors using either `then(null, onRejected)` or `catch()`.\n + Avoid using the old-style `deferred` pattern that used to be common with jQuery and AngularJS.\n + Always return a value or throw an exception from inside `then` and `catch` methods. This is also handy for converting synchronous code into promisey code.\n + `Promise.resolve()` can wrap errors that occur in synchronous code. Be sure to use a `catch` to handle all errors.\n + Note that `then(onFulfilled).catch(onRejected)` is different from `then(onFulfilled, onRejected)`. The latter code will not catch exceptions that may occur inside `onFulfilled` function.\n\n### I have a bunch of promises I wish to call sequentially. Can I use a for loop?\n\nThis is an interesting case where each promise invokes (presumably) some asynchronous code and yet we wish to wait for each asynchronous call to complete before we start the next promise. A promise executes as soon as it's created. If you have a bunch of promises to be called in sequence, any loop construct such as `forEach` will not work. 
Instead, use a chain of promises, that is, each promise is constructed within the `then` method of the previously fulfilled promise. \n\n## Milestones\n\n1976\n\nThe concept of promises is born in the domain of parallel computing. It's called by different names: futures, promises, eventuals. \n\nJun \n1988\n\nMIT researchers present a paper that defines and explains promises. They describe how promises can aid asynchronous programming in distributed systems, including sequences of async calls. It references an async communication mechanism called *call-streams* and illustrates the concepts in Argus programming language. \n\nMar \n2009\n\nDiscussion starts within a CommonJS Google Group for specifying an API for promises. An initial version of *Promises/A* is published in February 2010. \n\nDec \n2012\n\n*Promises/A+* specification version 1.0 is released. Versions 1.1 and 1.1.1 are subsequently released in 2013 and 2014 respectively. Promises/A+ is based on the earlier Promises/A but has some omissions, clarifications and additions. Specifically, progress handling and interactive promises are omitted. The focus has been to arrive at a minimal API for interoperability. For interoperability, the specification details the behaviour of the `then` method of a promise object. The specification doesn't talk about how to create, fulfill or reject promises. \n\nJun \n2015\n\nPromise is introduced into ECMA Script 2015, 6th Edition.","meta":{"title":"Promises","href":"promises"}} {"text":"# JavaScript\n\n## Summary\n\n\nJavaScript (JS) is a programming language and a core technology of the World Wide Web (WWW). It complements HTML and CSS. HTML provides structured static content on the web. CSS does the styling on the content including animations. JavaScript enables rich interactions and dynamic content. \n\nJS started for the web and executed within web browsers. With the coming of Node.js in 2009, it became possible to use JS for server-side execution. It's now possible to build entire web apps with JS for both frontend and backend code. Today, the JS ecosystem is rich and mature with many libraries, frameworks, package managers, bundlers, transpilers, runtimes, engines and IDEs.\n\nTechnically, JavaScript is a loosely typed, multi-paradigm, interpreted high-level language that conforms to the **ECMAScript** specifications.\n\n## Discussion\n\n### What's unique about JavaScript that one would want to learn it?\n\nJavaScript is the language of web. It's one of the three core technologies of the World Wide Web along with HTML and CSS. JS enables dynamic effects and interactions on webpages. Hence it's become an essential part of web applications. Static web pages are almost obsolete nowadays.\n\nWith the advent of NodeJS, JS was quickly adopted for server-side scripting. JS can be used for both frontend (client-side) and backend (server-side) programming of web apps. A developer therefore needs to learn only one language. This is particularly important when there's a demand for fullstack (frontend + backend) developers.\n\nJavaScript is no longer limited to just client-side code. It's being used for both client-side and server-side code, in mobile devices, for IoT applications, desktop applications and more. 
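As a small illustration of one language on both sides, the sketch below starts a tiny Node.js server and then consumes it with the same JavaScript syntax a browser would use (via `fetch`, built into modern browsers and Node.js 18+). The port and response body are arbitrary.

```javascript
// Server side: a minimal JSON endpoint using only Node.js built-ins.
import { createServer } from 'node:http';

const server = createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ greeting: 'hello from JavaScript on the server' }));
});

server.listen(3000, () => {
  // "Client" side: identical language and syntax consume the endpoint.
  fetch('http://localhost:3000')
    .then(response => response.json())
    .then(data => {
      console.log(data.greeting);
      server.close();
    });
});
```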
\n\nThe importance of JavaScript is also highlighted by this quote from Eric Elliott, \n\n> Software is eating the world, the web is eating software, and JavaScript rules the web.\n\n\n### How does JavaScript work on the web?\n\nJavaScript code can be embedded into HTML files or stored in separate `*.js` files that can be referenced from HTML files. When browsers encounter the JS code, they will parse it and execute it on the fly. JavaScript is an **interpreted language**, meaning that it's not compiled in advance to machine code. \n\nBy mid-2000s, **AJAX** became a common approach to use JS more effectively for richer user experience. With AJAX, user interactivity is continued while new content is fetched asynchronously in the background. \n\nTraditionally, content on the web was served as individual HTML pages and each page had some interactivity enabled via JS and AJAX. In the late 2010s, there was a shift towards building the entire app in JavaScript and updating the contents via AJAX. Such an app is called a **Single Page Application (SPA)**. \n\nSince we can now run JS on both client and server, this gives us the flexibility to render part of the web page on server and let the client complete the rest. This approach leads to what we call **Isomorphic or Universal Application**. \n\n\n### Is JavaScript related to Java, VBScript, JScript and ECMAScript?\n\nJava and JavaScript are two different programming languages. The name *JavaScript* itself was coined in 1995 because of a partnership between Netscape Communications (that invented JavaScript) and Sun Microsystems (that invented Java). Java syntax was introduced into JavaScript. While Java was reserved for enterprise apps, JavaScript was positioned as a web companion to Java. About the same time, Java was also made available within web browsers as **applets**. Today, Java and JavaScript are both popular languages.\n\nWhen Microsoft got into web technology in the 1990s, its Internet Explorer (IE) browser needed an equivalent to what Netscape and Sun had with JavaScript and Java. JScript and VBScript are therefore Microsoft scripting languages. JScript is said to be very similar to JavaScript and enabled dynamic content on IE. \n\nThis fragmentation meant that a web page written for Netscape would not work well on IE, and vice versa. There was no standard and JavaScript was evolving far too quickly. In this context, **ECMAScript** was born in 1997 as a standard for JavaScript. \n\n\n### Why is JavaScript called a multi-paradigm language?\n\nJavaScript is an **object-oriented** language. It has objects, with properties and methods. Functions are first-class objects that can be passed into other functions, returned from functions, bound to variables or even thrown as exceptions. However, object inheritance is not done in the classical manner of C++ or Java. It uses **prototypal inheritance**, inspired by Self language. This makes the language flexible. A prototype can be cloned and modified to make a new prototype without affecting all child instances. It's also possible to do **classical inheritance** in JavaScript. \n\nThough being object-oriented, there are no classes in JavaScript but constructors are available. Object systems can be built using either inheritance or aggregation. Variables and methods can be private, public or privileged to the object. \n\nEric Elliott explains why he regards *prototypal inheritance* and *functional programming* as two pillars of JavaScript. 
*Closures* provide encapsulation and the means to avoid side effects. Thus, understanding closures is important for **functional programming** in JavaScript. JavaScript can also be used in an **imperative style** using `if-else` conditions, `for` loops, and module-level variables. \n\nThus, JavaScript is multi-paradigm because it's object-oriented, functional and imperative.\n\n\n### What are some important features of JavaScript that I should learn?\n\nBeginners should first learn the basics of JS, in addition of HTML and CSS. The Modern JavaScript Tutorial is great place to start learning.\n\nSome useful things to learn include effective console logging, destructuring, template literals, and spread syntax. Explicit loops can be avoided by taking the functional programming approach with `reduce`, `map` and `filter` methods. For asynchronous programming, you should learn about Promises and adopt the modern `async/await` syntax. \n\nGet a good understanding of closures and partial application. Use the arrow syntax of ES6 for more compact and readable code. Other useful ES6 features to learn are `const`, multiline strings, default parameters, and module import/export. \n\n\n### What's the difference between a JS engine and a JS runtime?\n\nA **JavaScript Engine** parses JS code and converts it to a form that the machine can execute. It contains the memory heap and the call stack. Examples include Google's V8 Engine (Chrome), SpiderMonkey (Firefox), JScript (IE), and Chakra (Microsoft Edge). \n\nA **JavaScript Runtime** is an environment in which JS runs. Web browsers are environments. Node.js provides a runtime for server-side execution. In a typical web app, the program may access the DOM, make AJAX calls, and call Web APIs. These are not part of the language. They are provided by the browser as part of the JS runtime. \n\nAnother important distinction is that the engine does **synchronous** processing. It can process only one function at a time via the call stack. On the other hand, the runtime can maintain a number of items in the callback queue. A new item can be added anytime to the queue and processed later when the call stack is free. Thus, the runtime enables **asynchronous** processing. \n\n\n### As a beginner, what should I know about the JavaScript ecosystem?\n\nConsider the following:\n\n + **Frameworks**: These simplify development by providing a structure, including design patterns such as MVC. Examples include React, Angular, Vue, Backbone and Ember.\n + **Libraries**: Many developers release their code as JS packages/modules/libraries so that others can reuse them. In December 2018, about 836,000 libraries were available on NPM.\n + **Package Managers**: Developers use them to install and manage packages they require for their projects. Examples include NPM and Yarn. Packages are downloaded from known repositories on the web.\n + **Linters**: These catch errors or bad practices early. Examples include Prettier and ESLint.\n + **Module Bundlers**: A project may have many dependent JS packages. Bundlers combine them into far fewer files so that these can be delivered more efficiently to browsers and other runtimes. Examples include Webpack and Browserify.\n + **Task Runners**: These enable automated development workflows such as minifying or concatenating files. Examples include Gulp and Grunt.\n + **Transpilers**: Developers can write code in other languages (CoffeeScript, TypeScript) and then convert them into JavaScript. 
Another use case is to write in modern ES6 syntax and transpile to syntax supported by older browsers. Examples include Babel.\n\n### What are some concerns that developers have about JavaScript?\n\nDevelopers coming from languages such as C++ and Java, find JavaScript annoying in many ways: automatic semicolon insertion, automatic type coercion (loose typing), lack of block scoping, lack of classes, unusual (prototypal) inheritance. Another quirk is type mismatches between objects that are otherwise similar: `\"Hello world\"` of type \"string\" versus `new String(\"Hello world\")` of type \"object\". Some of these are attributed to the hurried development of JavaScript. \n\nBecause there are many JS frameworks, libraries and tools, the choice of which one to adopt can be daunting. In one example, it was shown that Vue.js was hardcoded to rely on `yarn` and this broke the workflow because the developer had only installed `npm`. It's also possible in the JS world to have a small project of just few lines of code download lots of dependencies. An average web app has over 1000 modules. \n\nBecause the language was designed in a hurry, it has design errors. Developers can write bad JS code. This prompted Douglas Crockford to write his now famous book titled *JavaScript: The Good Parts*.\n\n## Milestones\n\n1993\n\nIn the early days, the history of JavaScript is somewhat tied to the history of web browsers. Among the early browsers are WorldWideWeb, ViolaWWW, Erwise and Midas. In 1993, **NCSA Mosaic** appears on the scene. It supports multiple networking protocols and is able to display images inline. It becomes popular and people begin to see the potential of the web. There's no JavaScript at this point and content shown on browsers are all static.\n\n1994\n\nMarc Andreessen, one of the creators of Mosaic, has the vision that the web should be lot more dynamic with animations and interactions. He starts **Netscape Communications**. \n\nMay \n1995\n\nAt Netscape, the idea is to invent a dynamic scripting language for the browser that would have simple syntax and appeal to non-programmers. Brendan Eich is contracted by Netscape to develop \"Scheme for the browser\", Scheme being a language with simple syntax, dynamic and powerful. Thus is born **Mocha**, which becomes part of Netscape Communicator browser in May 1995. \n\nNov \n1995\n\nWith competition from Microsoft and its Internet Explorer, Netscape partners with Sun Microsystems to make Java available in the browser. At the same time, they recognize that Java is a beast not suited for the browser. Together they announce **JavaScript**, which they describe as \"an open, cross-platform object-scripting language designed for creating and customizing applications on the Net.\" To ride on the hype surrounding Java, JavaScript is positioned as a companion to Java, even based on Java. JS becomes available in beta version of Netscape Navigator 2.0. Mocha becomes JS but less Scheme-like and more Java-like. \n\n1996\n\nMicrosoft introduces **JScript**, it's alternative to JavaScript. It's included in Internet Explorer 3.0. JScript can also be used for server-side scripting in Internet Information Services (IIS) web server. Server-side scripting is also possible with JavaScript in Netscape Enterprise Server. However, only in 2009 does Node.js make server-side scripting with JS popular.\n\nJun \n1997\n\nECMA organization publishes **ECMAScript**, documented as the ECMA-262 standard. 
Web pages that conform to ECMAScript are guaranteed to work on all browsers, provided browsers support the standard. JavaScript conforms to ECMA, while providing additional features. ActionScript and JScript are other well-known implementations of ECMAScript. \n\n2005\n\nJesse James Garrett of Adaptive Path mentions a new term, **AJAX**, which expands to *Asynchronous JavaScript + XML*. AJAX eliminates the need to reload an entire webpage to update small details. It gives developers more flexibility to make their web apps more responsive and interactive. Many projects in Google already use AJAX. AJAX goes on to change the way web interactions are built with JS.\n\n2009\n\nAs a standard library, **CommonJS** is founded, mainly for JS development outside the browser. This include the use of JS for web server, desktop and command line applications. This is also the year when **NodeJS** is released so that JS can be used for server-side scripting. \n\nJun \n2015\n\nAs a standard, **ECMAScript 2015** (ES2015 or ES6) is released. ES6 is seen as a major update to the core language specifications, which hadn't changed since ES5 of 2009. ES6 is expected to make JS coding easier, although it may take a while for browsers to support it fully. \n\n2017\n\nJavaScript has highest number of pull requests on GitHub, an open site for sharing code, particularly on open source projects. A pull request is a way for developers to submit their contributions to a project. \n\n2018\n\nWith more than 9.7 million developers using JavaScript worldwide, JS is declared as the most popular programming language. This is based on a survey of 21,700 developers from 169 countries. JS has 2.4 million more developers than the next popular language. However, some predict that JS may decline due to **WebAssembly**. For interactions on the web, JS has been the only language of choice so far. With WebAssembly, developers can potentially migrate to other languages. \n\nMar \n2018\n\nTensorFlow project releases **TensorFlow.js** to train and run Machine Learning (ML) models within a web browser. This becomes an easy path for JS developers to get started with ML. This is just one example to illustrate the versatility of JS.","meta":{"title":"JavaScript","href":"javascript"}} {"text":"# 5G Core RESTful APIs\n\n## Summary\n\n\nThe 5G Core separates the control plane from the user plane. The control plane is a designed as a set of Network Functions (NFs). Each NF exposes one or more services. Each service offers one or more operations, implemented as RESTful APIs over HTTPS. For this reason, NFs are said to use Service-Based Interfaces (SBIs). \n\nDisaggregating the 5G Core as independent (but interworking) NFs gives Communication Service Providers (CSPs) the ability to mix and match NFs from various software vendors. Benefits include automation, cost reduction, customization, and less vendor lock-in. \n\n3GPP has defined these APIs in conformance with the OpenAPI Specification. APIs have been published as YAML files. Using OpenAPI tools, developers can autogenerate client/server code from these YAML files. Such autogenerated code can become a starting point for implementing 5G Core.\n\n## Discussion\n\n### Could you give an overview of 5G Core NFs?\n\nThere are dozens of NFs (and their services) for which APIs have been defined. The figure highlights that Service-Based Architecture (SBA) and Service-Based Interfaces (SBIs) are defined only for the control plane. SBA is based on web technologies. 
Services communicate via HTTP/2 over TLS over TCP/IP, and exchange data in JSON format. \n\nNon-SBA interfaces use telecom-specific protocols: NAS-MM (on N1), NGAP over SCTP (on N2), GTP-U over UDP (on N3), PFCP over UDP (on N4), and so on. \n\nAn NF Service Producer exposes its services that are then invoked by an authorized NF Service Consumer. Producers and consumers can communicate directly or indirectly via Service Communication Proxy (SCP). A consumer can discover a producer via local configuration, via NRF (direct communication) or via SCP (indirect communication). \n\nEach service is self-contained, reusable and independent of another service, even if they're part of the same NF. Services may share some context data. A system procedure is basically invoking a sequence of NF services. \n\n\n### Where can I obtain the 5G Core APIs?\n\nA descriptive overview of NFs is in section 6 of **TS 23.501**. Section 7 lists the services of each NF. In Release 17 (Oct 2023), there are a total of 28 NFs containing 102 services. These are listed in the figure.\n\nSection 5 of **TS 23.502** details all operations of each service. An operation can take the form of request/response or subscribe/notify. The document also gives the known consumers of each operation. Section 4 describes system procedures that make use of these operations. \n\nStage 3 specifications (including API definitions in YAML) of each NF is the **TS 29.5xx series** of documents. An easier way to obtain the API definitions is via the 5G APIs repository at 3GPP Forge. The README file in this repository includes links to view/edit/validate/test each API via Swagger Editor UI. \n\n\n### Which are the REST principles adopted by 5G SBA?\n\nCode on Demand and HATEOAS are two REST principles currently not used in 5G SBA. Otherwise, the following have been adopted: \n\n + **Resource, Serialization and Representation**: Resources/objects are accessed via unique URIs. They're serialized as JSON documents.\n + **Client/Server**: Clients are service consumers while servers are service producers. The same NF can take on different roles in different contexts. For example, when registering with NRF, SMF is the client and NRF is the server. When creating a session, SMF is the server and AMF is the client.\n + **Stateless**: Servers don't maintain any state about clients. Clients include all necessary information with each request. This allows for load balancing, scalability and resilience in a distributed environment. UDSF offers a centralized storage and thereby facilitates stateless implementations. If an NF instance goes down, another instance can obtain context data (such as Service Profile used at SMF) from UDSF. UDR and UDSF can't be stateless.\n + **Cacheable**: Servers indicate if clients can cache the information.\n + **Layered System**: Clients need not be aware exactly which server provides a service. NRF allows this decoupling of clients and servers.\n + **HTTP Methods**: GET, POST, PUT, PATCH and DELETE are used.\n\n### What communication types does 5GC SBA adopt?\n\nFor CRUD operations, **Request/Response** communication is used. An NF consumer requests. An NF producer responds. POST method is used on a parent resource to create a new child resource. The latter's URI is returned in the response. Another way to create a resource is to use PUT method on the resource itself. In this case, the NF consumer selects the resource identifier and URI. Resource can be read with GET method accompanied with necessary query parameters. 
Resource updates can be done with PUT (full update) or PATCH (partial update). Resource deletion is via DELETE method. \n\nThere's also **Subscribe/Notify** communication. An NF consumer subscribes by providing a callback URI and an optional filter. An NF producer notifies the consumer when data changes and matches the filter. Subscriptions are created with POST, updated with PUT or PATCH, and deleted with DELETE. Notifications use POST. Subscriptions can be of explicit or implicit. For example, UDR is implicitly subscribed to provisioned subscriber data at the UDR. Another example is an AMF instance that creates an SM Context for a PDU Session. That instance is implicitly subscribed to SM Context Status at the SMF. \n\n\n### What's the structure of URIs used by 5GC SBA?\n\nThe figure shows the structure and an example of the `nf-instances` resource from NRF NFManagement API. API Root, API Name and API Version are collectively called API URI. API URI doesn't include a trailing forward slash. Variable `nfInstanceID` is delimited with `{}`. In the example, this is replaced with UUID to indicate the specific instance. \n\nAPI Version follows semantic versioning of the form `vx.y.z` where `x`, `y` and `z` are positive integers. Only the major version (`x` part) is indicated in the URI. A suffix starting with hyphen indicates pre-release version (eg. `1.0.0-alpha1`). A suffix starting with plus sign indicates build metadata after release freeze (eg. `3.0.1+orange.2020-09`). An NF consumer can use NRF to discover supported versions of NF producer instances. \n\n\n### What are some challenges in implementing 5GC?\n\nMulti-vendor deployments are complex with respect to interoperability, provisioning, dependencies, Operations Support Systems (OSSs), etc. AI-driven approaches may help avoid manual work. Engineers need to skill themselves in system integration. Operators remain undecided if they should get into system integration themselves or adopt a pre-packaged solution from a consortium. \n\n3GPP permits custom operations using POST method. It also permits custom RPC-style API operations. However, these will adversely impact interoperability.\n\nA service mesh such as Istio isn't aligned with the 3GPP approach of using NRF for service discovery or UPF selection by SMF. However, 3GPP does suggest using a service mesh for SCP. \n\nOpenAPI is meant to simplifying implementation due to readily available validators, code generators and other tools. The open source OpenAPI Generator is able to generate code in many languages. When run against 5G specifications, it shows its limitations. As on December 2023, the project has close to 4,000 open issues. \n\n## Milestones\n\n2000\n\nFielding describes **Representational State Transfer (REST)** in his PhD dissertation. He proposes REST as a guide to enhancing HTTP, not building web APIs per se. He notes that any architectural style (such as REST) is not a silver bullet to solve all types of problems. REST itself is meant for \"large-grain hypermedia data transfer\". \n\nJun \n2018\n\n3GPP publishes **TS 29.501, v15.0.0**, titled *Principles and Guidelines for Services Definition; Stage 3*. In v15.1.0 (September 2018), security requirements of API design and JSON structures in query parameters are specified. Subsequent versions include v16.0.0 (June 2019) and v17.0.0 (December 2020). \n\nAug \n2020\n\n5G Core APIs specified in YAML files are hosted at 3GPP Forge. While this is a GitLab environment, GitLab tools and processes can't fully be used. 
This is because changes can happen only via 3GPP-mandated Change Requests (CRs). \n\nMay \n2023\n\nAt a **multi-vendor trial**, Enea demonstrates fast network slicing. A network slice can be instantiated quickly. Software comes from Enea, Oracle and Casa Systems, with Enea supplying UDM, UDR and AUSF. Hardware includes equipment from HPE, Intel and Nokia. However, achieving this in a commercial network is not expected to be straightforward.","meta":{"title":"5G Core RESTful APIs","href":"5g-core-restful-apis"}} {"text":"# Cryptography\n\n## Summary\n\n\n**Cryptography** is a set of techniques for encrypting data using specified algorithms so that the data is unreadable to third parties unless it's decrypted, using predefined procedures, by the intended recipient. Messages between the sender and receiver pass through a medium that may be attacked, allowing information to be stolen. So the sender encrypts and the receiver decrypts. In the presence of third parties, cryptography is the practice of secure communication. **Data encryption** is known for its ability to keep information safe from prying eyes. It uses an encryption key to convert data from one form, called plaintext, to another, called ciphertext. Modern cryptography is largely based on mathematical theory and computer science practice.\n\n## Discussion\n\n### What is the purpose of cryptography?\n\n**Authentication** is a system's ability to verify the sender's identity. The sender and recipient can verify each other's identities as well as the information's origin and destination. \n\n**Confidentiality** means that transmitted information can be accessed only by authorised parties and no one else. Anyone for whom the information was not intended will be unable to comprehend it. \n\n**Integrity** means that only authorised parties are allowed to modify transmitted data. Information cannot be tampered with, in storage or in transit between the sender and the intended recipient, without the tampering being detected. \n\n**Non-repudiation** is the guarantee that no one can deny the validity of something. The creator/sender of the information cannot later deny their intent in creating or transmitting the material. \n\n**Access Control** ensures that the information is accessible only to the parties who have been given permission. \n\n\n### What are some well-known applications of cryptography?\n\n**Protected Communication:** Encrypting communications between ourselves and another system is the most common application of cryptography, and one we all use on a regular basis. A web browser and a web server are two examples, as are an email client and an email server. Modern switching networks make interception more difficult. \n\n**Storage Integrity:** The emergence of computer viruses has led to the adoption of cryptographic checksums for data storage. A checksum is generated and compared against expectations, just as it is for transmission integrity. Compared to stored data, transmitted information is typically available for a shorter time, covers smaller volumes of data, and is retrieved at a slower rate. \n\n**Electronic Money:** Today, there are patents around the world that allow electronic information to substitute for cash in individual financial transactions. Cryptography is used in such systems to keep monetary assets secure in electronic form.
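The first two applications above can be sketched with Node.js's built-in `crypto` module: a symmetric cipher (AES) for protected communication and a SHA-256 checksum for storage integrity. This is an illustration only; key generation, exchange and storage are deliberately simplified.

```javascript
import { createCipheriv, createDecipheriv, createHash, randomBytes } from 'node:crypto';

const key = randomBytes(32); // 256-bit key shared by sender and recipient
const iv = randomBytes(16);  // initialisation vector, sent along with the ciphertext

// Protected communication: the sender turns plaintext into ciphertext...
const cipher = createCipheriv('aes-256-cbc', key, iv);
const ciphertext = Buffer.concat([cipher.update('pay Bob 10 units', 'utf8'), cipher.final()]);

// ...and the recipient reverses it with the same key and IV.
const decipher = createDecipheriv('aes-256-cbc', key, iv);
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
console.log(plaintext); // pay Bob 10 units

// Storage integrity: store a checksum with the data, recompute it later to detect tampering.
const checksum = createHash('sha256').update(ciphertext).digest('hex');
console.log(checksum.length); // 64 hex characters
```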
\n\n\n### Which are some commonly used terms in cryptography?\n\n**Plaintext** refers to any communication or data that has to be protected for various reasons. \n\n**Ciphertext** is the unreadable form of data that is generated at the end of the encryption process. \n\n**Encryption** refers to the process of encoding a message with the use of a key. The readable text is turned into illegible text in this way. \n\n**Decryption** is the process of deciphering an encoded communication with the use of a key. It's the inverse of the encryption process. \n\nA **key** is a parameter that dictates the final output of a cryptographic process. The length of the key is important to the strength of the encryption. \n\n\n### What techniques can be used for encrypting/decrypting data?\n\nIn **Symmetric Encryption**, the same cryptographic key is used to encrypt the plaintext and to decrypt the ciphertext. It's the simpler of the two approaches: one shared key handles both operations. Symmetric-key ciphers are of two types: stream ciphers and block ciphers. Stream ciphers encrypt the digits of a message one at a time (typically bytes). Block ciphers split the plaintext into fixed-size blocks and encrypt each block as a single unit. \n\n**Asymmetric Encryption** uses a pair of mathematically related keys, one public and one private, to encrypt and decrypt information. Because two keys are involved, it's also known as **public-key cryptography**. \n\nIn **encryption with public keys**, messages are encrypted using the recipient's public key. Only the holder of the matching private key can decrypt them; anyone without that private key cannot read the message. \n\nA **digital signature** is created with the sender's private key and can be verified by anyone holding the corresponding public key, assuring authenticity and integrity. \n\n\n### Could you share examples of cryptographic algorithms?\n\n**Data Encryption Standard (DES)**, developed by IBM in the 1970s and approved for commercial usage by the National Bureau of Standards (NBS) in 1977, uses a 56-bit key and 16 rounds to work on 64-bit blocks. \n\n**Advanced Encryption Standard (AES)** is a fast and safe algorithm based on the Rijndael cipher, designed by Vincent Rijmen and Joan Daemen and submitted in 1998. The underlying Rijndael design supports variable key and block lengths of 128, 192 or 256 bits; the AES standard uses those key lengths but fixes the block length at 128 bits. \n\n**Rivest Cipher (RC)** was created by Ronald Rivest, and named after him. RC1, RC2, RC3, RC4, RC5, and RC6 are available.\n\nBruce Schneier created **Blowfish** in 1993 and published it in 1994. It uses 16 rounds, with a 64-bit block size and a key length of 32 to 448 bits. Blowfish is considered a replacement for DES as it is substantially faster than DES with good key strength.\n\n**RSA** was named after its inventors Ron Rivest, Adi Shamir and Leonard Adleman, who introduced it in 1977. It employs a variable-size key and encryption block. It provides increased security and convenience, and is a form of public-key encryption.
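To make the symmetric technique described above concrete, here is a minimal sketch using the Fernet recipe from the third-party `cryptography` package (an assumption: that package is installed; any symmetric cipher library would illustrate the same point). A single shared key both encrypts and decrypts.

```python
# Assumes: pip install cryptography
from cryptography.fernet import Fernet

# One shared secret key serves both operations (symmetric encryption).
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"transfer $100 to account 42"
ciphertext = cipher.encrypt(plaintext)   # unreadable without the key
recovered = cipher.decrypt(ciphertext)   # inverse operation, same key

assert recovered == plaintext
```

In the asymmetric case, the encrypt step would instead use the recipient's public key and the decrypt step the matching private key; the same package exposes RSA primitives for that purpose.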
\n\n\n### How do I evaluate cryptographic algorithms?\n\nEach encryption algorithm has strengths and weaknesses that affect its performance. Algorithms are commonly compared on the following parameters:\n\n**Encryption time** is measured in milliseconds and depends on the length of the data block and the key. The shorter the encryption time, the better the algorithm's performance. \n\n**Decryption time** is the amount of time it takes to recover the original text from ciphertext, also measured in milliseconds. The shorter the decryption time, the better the algorithm's performance. \n\n**Memory used** should be minimal because it affects system cost. \n\n**Throughput** is calculated by dividing the total encrypted data size by the total encryption time. Higher throughput implies lower power consumption. \n\n**Avalanche effect** measures how much the ciphertext changes when the plaintext is changed slightly. A strong cipher produces a dramatically different ciphertext for even a small change in the plaintext. \n\n**Entropy** is a statistical measure of the randomness and uncertainty of the output. \n\n**The number of bits required for optimal encoding** defines the bandwidth required for transmission. When an encrypted character or bit is encoded with fewer bits, it uses less storage and bandwidth, directly impacting the system's cost. \n\n\n### What is the role of computational and energy costs in implementing cryptography?\n\nNetworks are evolving towards a ubiquitous model in which heterogeneous devices are interconnected anytime and anywhere. Cryptographic algorithms are essential building blocks of network security solutions. However, the computational and energy constraints of network devices make implementing such algorithms difficult in practice. Studies have therefore examined the cost of running symmetric and asymmetric cryptographic algorithms, hash chain functions, elliptic curve cryptography, and pairing-based cryptography on constrained personal devices, and compared the results with the cost of basic operating system functions. These studies reveal that, while cryptographic operations consume considerable power and must be time limited, they are not the primary limiting factor in a device's battery autonomy. \n\nTechnological advances have led to the spread of portable personal computers and the emergence of new forms of networks. Security solutions are required to protect the heterogeneous ubiquitous networks formed by small and constrained devices, which are being explored or are already in use. The nature of ubiquitous networks requires safeguards at all layers, from networking operations to collaborative enforcement protocols and privacy-protecting mechanisms. \n\n## Milestones\n\n1932\n\nPolish cryptographer Marian Rejewski discovers how Enigma works. \n\n1939\n\nPoland shares the information on how Enigma works with the French and British intelligence services, allowing cryptographers like Alan Turing to figure out how to crack the key, which changes daily. It proves crucial to the Allies' World War II victory. \n\n1945\n\nClaude E. Shannon of Bell Labs writes a classified report called \"A mathematical theory of cryptography.\" It's the starting point of modern cryptography. For centuries, governments have controlled secret codes: applied to diplomacy, employed in wars, and used in espionage. But with modern technologies, the use of codes by individuals is exploding. \n\n1976\n\nWhitfield Diffie and Martin Hellman publish a research paper, New Directions in Cryptography, on what would come to be known as the Diffie-Hellman key exchange. For the first time, the code key is no longer pre-arranged; instead, a pair of keys (one public, one private, but mathematically linked) is created for every correspondent.
\n\n1977\n\nRSA public key encryption is invented. \n\n1978\n\nRobert McEliece invents the McEliece cryptosystem, the first asymmetric encryption algorithm to use randomization in the encryption process. \n\n2000\n\nThe Advanced Encryption Standard (AES), a symmetric-key cipher in which sender and receiver must share the same secret key, is selected to replace DES through a competition open to the public. Today, AES is available royalty-free worldwide and is approved for use in classified US government information. PKI (Public Key Infrastructure) is a generic term used to define solutions for creating and managing public-key encryption. It is used by browsers on the Internet but also by public and private organizations to secure communications. \n\n2001\n\nThe Belgian Rijndael algorithm is selected as the U.S. Advanced Encryption Standard (AES) after a five-year public search process by the National Institute of Standards and Technology (NIST). \n\n2004\n\nThe first commercial quantum cryptography system becomes available from id Quantique. \n\n2005\n\nElliptic-curve cryptography (ECC) is an advanced public-key cryptography scheme that allows shorter encryption keys. Elliptic curve cryptosystems are more challenging to break than RSA and Diffie-Hellman. \n\n2007\n\nUsers swamp Digg.com with copies of a 128-bit key to the AACS system used to protect HD DVD and Blu-ray video discs. The user revolt is a response to Digg's decision, subsequently reversed, to remove the keys, per demands from the motion picture industry that cited the U.S. DMCA anti-circumvention provisions. NIST hash function competition announced. \n\n2013\n\nEdward Snowden discloses a vast trove of classified documents from NSA. Dual\\_EC\\_DRBG is discovered to have an NSA backdoor. NSA publishes Simon and Speck lightweight block ciphers.","meta":{"title":"Cryptography","href":"cryptography"}} {"text":"# Xamarin\n\n## Summary\n\n\nXamarin is a cross-platform application development framework. It allows you to develop a mobile app using C# and reuse most of the codebase across multiple platforms including Android, iOS and Windows. Among the advantages of Xamarin are native user interfaces, native API access, native performance and developer productivity. \n\nXamarin itself leverages the .NET development platform that comes with the C# language, its compilers, base libraries, editors and tools. Xamarin extends .NET so that developers can build mobile apps and target multiple platforms.\n\n## Discussion\n\n### When should I use Xamarin?\n\nThere are many hybrid and cross-platform frameworks to develop apps across multiple platforms, but these require skills in JavaScript and are best suited for Web developers. For them, Apache Cordova is one possible framework but do keep in mind that this is a hybrid approach that will not give native performance. \n\nWhere performance is desired along with faster time to market across multiple mobile platforms, Xamarin should be preferred. Xamarin is best suited for developers coming from .NET, C# or Java backgrounds. \n\nAs a note, Xamarin should not be confused with **.NET Core**, though both are cross platform and open source. Xamarin is for cross-platform mobile (though it can also target macOS) whereas .NET Core is for creating cross-platform web apps, microservices, libraries and console apps that can run on Windows, Linux or macOS. \n\n\n### Why should I use Xamarin?\n\nXamarin helps to expedite native mobile app development targeting multiple platforms. 
It's been said that for informational apps, 85% of code can be reused across Android and iOS. For more intensive apps, code reuse of 50% is possible. This code reuse comes from using shared C# app logic across platforms. \n\nUser interface code itself is not shared by Xamarin but that's where **Xamarin.Forms** comes in. Using **Xamarin.Forms** 96% code reuse has been reported. \n\nFor developers, this means that you get to build native Android, iOS and Windows Phone apps concurrently without having to build them one after another or having multiple teams with multiple skillsets and tools.\n\n**Xamarin.Android** and **Xamarin.iOS** provide further customization possibilities for developers who are looking at tweaking the app's look and feel achieved with Xamarin.Forms. There are several other benefits like seamless API integration, easy collaboration and sharing, etc. \n\n\n### Could you describe some useful Xamarin-specific terms that developers should know?\n\nThe term **managed code** refers to code that's managed by .NET Framework Common Language Runtime. In the case of Xamarin, this is the Mono Runtime. In contrast, **native code** is code that runs natively on the platform. Managed code could be in C# or F#. Java/Android code are native to Android; or Objective-C code is native to iOS/MAC. \n\nDuring compilation, C# or F# code is converted to **Microsoft Intermediate Language (MSIL)**. In Android and Mac platforms, MSIL is compiled to native at runtime using **just-in-time (JIT)** compilation. In iOS, due to security restrictions, this is not allowed. Hence **ahead-of-time (AOT)** compilation is performed. \n\n\n### What's the architecture of Xamarin?\n\nArchitecture is described in the documentation in three places:\n\n + Xamarin.Android Architecture\n + Xamarin.Mac Architecture\n + Xamarin.iOS ArchitectureIn general, C# code and .NET APIs sit on top of Mono runtime, which in turn sits on top of the OS kernel. Architecture allows for managed code to invoke native APIs via bindings. In Android, **Managed Callable Wrappers (MCW)** are used when managed code needs to invoke Android code; **Android Callable Wrappers (ACW)** are used when Android runtime needs to invoke managed code. There's also the **Java Native Interface (JNI)** for one-off use of unbound Java types and members. \n\n\n### How much does it cost to use Xamarin?\n\nXamarin was initially available for a license. After it was acquired by Microsoft in 2016, it's now bundled with the Visual Studio suite of tools for free. While Visual Studio is not completely free, there is a Community Edition, which is free for eligible companies and developers.\n\nThe .NET platform itself open source and Xamarin in part of it. This means there are no fees or licensing costs, even for commercial use. \n\n\n### What are the pre-requisites for Xamarin app development?\n\nTo develop the app, one should have good programming expertise in C#, assuming that the RESTful APIs required for the application are already available. A Windows PC will be required for development on Android and Windows platforms. iOS development is done on Windows PC but to build the app, a Mac will be required to be connected in the same network as per Apple's requirements. \n\n\n### How does Xamarin compare against other cross-platform frameworks?\n\nAmong the cross-platform mobile frameworks are React Native, Ionic 2, Flutter, NativeScript, and Xamarin. Ionic 2, React Native and NativeScript rely on web technologies (HTML/CSS/JavaScript/TypeScript). 
Ionic 2 can suffer from performance problems. While Ionic 2 uses WebView (basically HTML wrapped in a native app), React Native and NativeScript use native UI components. Ionic 2 can also allow access to native API but will require the use of Apache Cordova. Xamarin uses native UI components and offers good performance. \n\nFor development, Xamarin lacks automatic restarting and hot/cold swapping. In this aspect, React Native is one of the easiest for developer productivity. All frameworks allow native code bindings but this process is easiest on Xamarin. Xamarin offers full one-to-one bindings to native libraries whereas in React Native or Ionic, support is partial and done via an abstraction layer. \n\nA blog post from AltexSoft compares the performance of Xamarin apps against native apps. It concludes that performance drop due to Xamarin is acceptable for most cases but the use of Xamarin.Forms is not recommended when apps are CPU intensive. \n\n\n### What are some criticisms of Xamarin?\n\nIt's been said that Xamarin support for latest platform updates (iOS/Android) are usually delayed. While iOS and Android developers can tap into an active ecosystem plus use open source technologies, similar following is limited in Xamarin. Xamarin is also not suited for apps that are heavy for graphics, where code reuse will be limited. For that matter, Xamarin developers still need to learn platform-specific languages for the UI, unless they also adopt Xamarin.Forms for their apps. \n\nXamarin apps are also larger in size when compared to equivalent native apps. This is because the app has to include the Mono runtime and associated components. A simple \"hello world\" Android app written in Xamarin requires about 16 MB. \n\n\n### Where can I learn more about Xamarin?\n\nXamarin has excellent documentation and code samples at their website. Specific official resources from Microsoft include:\n\n + Documentation\n + Courses, tutorials and videos\n + Community forums\n + The Xamarin Show on MSDN\n + Xamarin tutorials on Microsoft Learn\n + Visual Studio tools for XamarinThere used to be Xamarin University to train and certify professional developers. Since June 2019, this is now part of Microsoft Learn platform. \n\n## Milestones\n\n1999\n\nMiguel de Icaza and Nat Friedman start **Helix Code** for GNOME desktop application development for the Linux platform. In 2001, this is renamed to **Ximian**, which is acquired in 2003 by Novell. \n\n2000\n\nMicrosoft releases .NET Framework and Visual Studio .NET along with introducing a new language named C#. \n\n2001\n\n**Mono** project is launched for supporting .NET applications on non-Windows platforms. Open source and based on .NET Framework, Mono enables developers to build cross-platform applications. \n\nMay \n2011\n\n**Xamarin** as a company is incorporated with the idea of building commercial .NET offerings for iOS and Android. In July, Novell gives Xamarin the rights to use MonoTouch and Mono for Android. \n\n2016\n\nMicrosoft acquires Xamarin and releases Xamarin as part of Visual Studio suite of tools.","meta":{"title":"Xamarin","href":"xamarin"}} {"text":"# Natural Language Toolkit\n\n## Summary\n\n\nNatural Language Toolkit (NLTK) is a Python package to perform natural language processing (NLP). It was created mainly as a tool for learning NLP via a hands-on approach. It was not designed to be used in production. 
\n\nThe growth of unstructured data via social media, online reviews, blogs, and voice-based human-computer interaction are some reasons why NLP has become important in the late 2010s. NLTK is a useful toolkit for many of these NLP applications. \n\nNLTK is composed of sub-packages and modules. A typical processing pipeline will call modules in sequence. Python data structures are passed from one module to another. Beyond the algorithms, NLTK gives quick access to many text corpora and datasets.\n\n## Discussion\n\n### Which are the fundamental NLP tasks that can be performed using NLTK?\n\nNLTK can be used in wide range of applications for NLP. For basic understanding, let's try to analyze a paragraph using NLTK. It can be pre-processed using sentence segmentation, removing stopwords, removing punctuation and special symbols, and word tokenization. After pre-processing the corpus, it can be analyzed sentence-wise using parts of speech (POS) to extract nouns and adjectives. Subsequent tasks can include named entity recognition (NER), coreference resolution, constituency parsing and dependency parsing. The goal is to find insights and context about the corpus. \n\nFurther downstream tasks, more pertaining to application areas, could be emotion detection, sentiment analysis or text summarization. Tasks such as text classification and topic modeling typically require large amounts of text for better results.\n\n\n### Which are the modules available in NLTK?\n\nNLTK's architecture is modular. Functionality is organized into sub-packages and modules. NLTK is used for its simplicity, consistency and extensibility of its modules and functions. It's better explained in the tabular list of modules. \n\nA complete module index is available as part of NLTK documentation.\n\n\n### How is NLTK package split into sub-packages and modules?\n\nNLTK is divided into different sub-packages and modules for text analysis using various methods. Figure depicts an example of `text` sub-package and the modules within it. Each module fulfils a specific function. \n\n\n### Which are the natural languages supported in NLTK?\n\nLanguages supported by NLTK depends on the task being implemented. For **stemming**, we have RSLPStemmer (Portuguese), ISRIStemmer (Arabic), and SnowballStemmer (Danish, Dutch, English, Finnish, French, German, Hungarian, Italian, Norwegian, Portuguese, Romanian, Russian, Spanish, Swedish). \n\nFor **sentence tokenization**, PunktSentenceTokenizer is capable of multilingual processing. \n\n**Stopwords** are also available in multiple languages. After importing `stopwords`, we can obtain a list of languages by running `print(stopwords.fileids())`. \n\nAlthough most **taggers** in NLTK support only English, `nltk.tag.stanford` module allows us to use StanfordPOSTagger, which has multilingual support. This is only an interface to the Stanford tagger, which must be running on the machine. \n\n\n### What datasets are available in NLTK for practice?\n\nNLTK corpus is a natural dump for all kinds of NLP datasets that can be used for practice or maybe combined for generating models. For example, to import the Inaugural Address address, the statement to execute is `from nltk.corpus import inaugural`. \n\nOut of dozens of corpora, some popular ones are Brown, Name Genders, Penn Treebank, and Inaugural Address. NLTK makes it easy to read a corpus via the package `nltk.corpus`. This package has a reader object for each corpus. 
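As a quick illustration of the corpus readers described above, the sketch below loads the Inaugural Address corpus and the stopword lists, then counts the most frequent content words in one address. It assumes the corpora have been fetched with `nltk.download`; the exact output depends on the installed NLTK data.

```python
import nltk
from nltk.corpus import inaugural, stopwords

# Corpora ship separately from the nltk package; download them once.
nltk.download("inaugural")
nltk.download("stopwords")

print(inaugural.fileids()[:3])   # first few documents, e.g. '1789-Washington.txt'
print(stopwords.fileids())       # languages with stopword lists

stop = set(stopwords.words("english"))
words = [w.lower() for w in inaugural.words("1789-Washington.txt")
         if w.isalpha() and w.lower() not in stop]
print(nltk.FreqDist(words).most_common(5))
```

The same pattern works for the other bundled corpora such as Brown or Penn Treebank, each exposed through its own reader object in `nltk.corpus`.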
\n\n\n### What are the disadvantages or limitations of NLTK?\n\nIt's been mentioned that NLTK is \"a complicated solution with a harsh learning curve and a maze of internal limitations\". \n\nFor sentence tokenization, NLTK doesn't apply semantic analysis. Unlike *Gensim*, NLTK lacks neural network models or word embeddings. \n\nNLTK is slow, whereas *spaCy* is said to be the fastest alternative. In fact, since NLTK was created educational purpose, optimized runtime performance was never a goal. However, it's possible to speed up execution using Python's `multiprocessing` module. \n\nMatthew Honnibal, the creator of *spaCy*, noted that NTLK has lots of modules but very few (tokenization, stemming, visualization) are actually useful. Often NLTK has wrappers to external libraries and this leads to slow execution. The POS tagger was terrible, until Honnibal's averaged perceptron tagger was merged into NLTK in September 2015. \n\nIn general, NLP is evolving so fast that maintainers need to curate often and throw away old things. \n\n\n### For beginners, what are some useful resources to learn NLTK?\n\nThe official website includes documentation, Wiki, and index of all modules. There are Google Groups for users and developers.\n\nFor basic usage of NLTK, you can read a tutorial by Bill Chambers. This also shows some text classification examples using *Scikit-learn*. Another basic tutorial from Harry Howard includes examples from *Pattern* library as well. \n\nOften specific processing is implemented in external libraries. Benjamin Bengfort shows in a blog post how to call CoreNLP from inside NLTK for syntactic parsing. \n\nThere's a handy cheat sheet by murenei. Another one from 2017 is published at Northwestern University.\n\nA list of recommended NLTK books appears on BookAuthority. You can start by reading Natural Language Processing with Python (Bird et al. 2009). Those who wish to learn via videos can look up a playlist of 21 videos from sentdex. \n\n## Milestones\n\nJul \n2001\n\nThe first downloadable version of NLTK appears on SourceForge. Created at the University of Pennsylvania, the aim is to have a set of open source software, tutorials and problem sets to aid the teaching of computational linguistics. Before NLTK, a project might require students to learn multiple programming languages and toolkits. Lack of visualizations also made it difficult to have class demonstrations. NLTK is meant to solve these problems. \n\nJul \n2005\n\n**NLTK-Lite 0.1** is released. Steven Bird, one of the creators of NLTK, explains that NLTK 1.4 introduced Python's dictionary-based architecture for storing tokens. This created overhead for programmers. With NLTK-Lite, programmers can use simpler data structures. For better performance, iterators are used instead of lists. Taggers use backoff by default. Method names are shorter. Since then, regular releases are made till NLTK-Lite 0.9 in October 2007. NLTK-Lite eventually becomes NLTK. \n\nApr \n2008\n\nTwo NLTK projects are accepted for **Google Summer of Code**: dependency parsing and natural language generation. The dependency parser becomes part of NLTK version 0.9.6 (December 2008). \n\nJun \n2009\n\nBook titled *Natural Language Processing with Python* by Bird et al. is published by O'Reilly Media. Since October 2013, the authors release online revised versions of the book updated for Python 3 and NLTK 3. 
\n\nApr \n2011\n\n**Version 2.0.1rc1** becomes the first release available via GitHub, although till July 2014 releases are also made via SourceForge. \n\nJul \n2013\n\nOver a five-year period from January 2008 to July 2013, NLTK gets more than half a million downloads. This excludes downloads via GitHub. \n\nSep \n2014\n\n**NLTK 3.0.0** is released, making this the first stable release supporting Python 3. Alpha release, version 3.0a0 (alpha), supporting Python 3 can be traced to January 2013.","meta":{"title":"Natural Language Toolkit","href":"natural-language-toolkit"}} {"text":"# 5G NR PHY\n\n## Summary\n\n\n5G NR PHY is designed to meet the main 5G use cases, namely eMBB, mMTC and URLLC. While it's an evolution of LTE PHY, there are many aspects that are unique to 5G NR PHY.\n\nPHY layer sits at the bottom of the 5G NR protocol stack, interfacing to MAC sublayer higher up via transport channels. It provides its services to MAC and is configured by RRC. PHY supports downlink (gNB-to-UE), uplink (UE-to-gNB) and sidelink (UE-to-UE) communications. \n\nSome of the main features include a wide spectrum from sub-GHz bands to mmWave bands, an OFDM-based air interface, scalable numerology, deployments from indoor picocells to outdoor macrocells, FDD and TDD support, flexible and self-contained slot structure, modulation up to 256QAM, polar and LDPC codes, Hybrid-ARQ (HARQ), bandwidth parts, CORESETs, beamforming, and massive MIMO.\n\n## Discussion\n\n### Could you share some technical details of 5G NR PHY?\n\n**5G spectrum** spans a wide range: FR1 (410-7125 MHz) and FR2/mmWave (24250-52600 MHz). **UE bandwidth** per component carrier is of range 5-100MHz (FR1) and 50-400MHz (FR2). Higher bandwidth allocations can be achieved with carrier aggregation. \n\n**Waveform** used in 5G is OFDM with Cyclic Prefix (CP). In uplink, an optional transform precoding of DFT spreading is done before sub-carrier mapping. \n\n**Sub-Carrier Spacing (SCS)** is flexible from 15kHz to 120kHz, with higher values applicable in FR2. **Slot duration** is also flexible from 1ms at 15kHz SCS to 125µs at 120kHz SCS. SCS 240kHz is only for control. **Cyclic prefix** at 60kHz SCS can be normal or extended. \n\nFor **duplexing**, both FDD and TDD are supported in FR1. Only TDD is applicable in FR2. In TDD, DL/UL split can be dynamically adjusted. \n\n\n### What modulation schemes and channel coding are supported in 5G NR PHY?\n\n**Modulation schemes** supported are QPSK, 16QAM, 64QAM and 256QAM. For DFT-s-OFDM in uplink, 5G NR introduces π/2-BPSK for better power efficiency at lower data rates, necessary for mMTC services. DFT spreading in uplink helps coverage-limited scenarios. \n\n**Channel coding** is based on Low Density Parity Check (LDPC) code, applied on transport blocks. A large TB is segmented into multiple equal code blocks and LDPC coding is applied on the code blocks. Polar code is used for BCH, DCI and UCI. In addition, block code is used for UCI. \n\n\n### Which are the main physical radio resources in 5G NR?\n\nTime and frequency are the two main resources. **Time** is organized into OFDM symbols, slots, subframes and frames. **Frequency** is organized into sub-carriers as needed for OFDM with SCS determined by numerology. Unlike LTE, both have flexible configurations due to 5G's scalable numerology.\n\n5G NR defines the following: \n\n + **Resource Element (RE)**: The smallest unit of resource. 
It's one sub-carrier for one OFDM symbol duration.\n + **Resource Block (RB)**: 12 consecutive sub-carriers in the frequency domain. It's not defined for the time domain. *Common Resource Blocks* are numbered from zero for each SCS. A UE is configured one or more bandwidth parts. *Bandwidth part* is a contiguous set of common RBs. *Physical Resource Blocks (PRBs)* are numbered from zero within the bandwidth part. Thus, a UE uses PRBs for actual communication.\n + **Resource Grid**: A combination of subcarriers and OFDM symbols. Defined for each numerology, carrier and antenna port. One set of resource grids is defined for downlink, uplink and sidelink each.\n + **Resource Element Group (REG)**: One PRB and one symbol.\n + **Control Resource Set (CORESET)**: Multiple PRBs with 1, 2 or 3 symbols.\n\n### What's the frame and slot structure in 5G NR?\n\nA frame is 10ms. A subframe is 1ms that's divided into slots. Slot duration depends on numerology. At 15kHz, a subframe has a single slot. At 30kHz, a subframe has two slots, each slot being 500µs. Likewise, we have slot durations 250µs@60kHz, 125µs@120kHz and 62.5µs@240kHz. \n\nA slot has 14 OFDM symbols but only 12 symbols at 60kHz when using extended cyclic prefix. At higher SCS, symbols and slots are shorter. 5G also permits mini-slot transmissions of 2, 4 and 7 symbols. \n\nBecause the different numerologies are of the form \\(2^µ\\), they can coexist. Regardless of numerology, symbols and slots are time aligned. Services with different requirements of bandwidth and latency can be multiplexed on the same frequency. \n\nTDD slot structure is **self-contained**. It allows for fast and flexible TDD switching. DL control, DL data, guard period and UL control are in the same slot. Thus, DL data and its acknowledgment can happen in the same slot. This is also possible for UL data. Symbol allocation to DL or UL can be switched every slot. \n\n\n### What are the different physical channels used in 5G NR PHY?\n\nWe note the following physical channels (with transport channel in parenthesis): \n\n + **Downlink**: PBCH (BCH), PDSCH (DL-SCH, PCH), PDCCH\n + **Uplink**: PRACH (RACH), PUSCH (UL-SCH), PUCCH\n + **Sidelink**: PSBCH (SL-BCH), PSSCH (SL-SCH), PSCCH, PSFCHPDCCH, PUCCH, PSCCH and PSFCH are standalone physical channels, that is, they're not mapped to transport channels. PDCCH has Slot Format Indicator (SFI) and Downlink Control Information (DCI) fields. The latter informs scheduling for PDSCH and PUSCH. PUCCH carries Uplink Control Information (UCI). UCI carries channel reports, HARQ-ACK and scheduling request. \n\n\n### What are the different signals used in 5G NR PHY?\n\nPHY has a few signals for the following purposes: \n\n + **Synchronization**: Primary Synchronization Signal (PSS) and Secondary Synchronization Signal (SSS). These are transmitted along with PBCH.\n + **Acquisition and Channel Estimation**: Demodulation Reference Signal (DM-RS) in downlink and uplink. Sounding Reference Signal (SRS) in uplink when PUCCH and PUSCH are not scheduled.\n + **Positioning**: Positioning Reference Signal (PRS) in downlink. SRS in uplink.\n + **Phase Tracking**: Phase Tracking Reference Signal (PT-RS) for PDSCH and PUSCH. 
Helps combat path delay spread and Doppler spread.\n + **Beam Management**: Channel State Information Reference Signal (CSI-RS) in downlink towards a connected UE.\n\n### What are the main functions of 5G NR PHY?\n\nThe main functions include error detection on the transport channel and indication to higher layers; FEC encoding/decoding of the transport channel; HARQ soft-combining; rate matching of the coded transport channel to physical channels; mapping of the coded transport channel onto physical channels; power weighting of physical channels; modulation and demodulation of physical channels; frequency and time synchronisation; radio characteristics measurements and indication to higher layers; MIMO antenna processing; RF processing. \n\nConsider the PHY model for DL-SCH. At gNB, Transport Blocks (TBs) arrive from MAC on transport channels. PHY adds CRC to each TB, perhaps involving code block segmentation. Channel coding and rate matching are performed, including for HARQ retransmissions. Data modulation is next, followed by mapping to physical resources. Finally, there's antenna mapping before transmission. At UE, the reverse process happens with CRC used for error detection and indication to MAC. \n\nIn the figure, blue boxes are configurable by higher layers. For other channels, some steps may be hardcoded in the specification. For example, all steps are fixed for BCH; coding and rate matching are fixed for PCH. \n\n\n### What are the main procedures in 5G NR PHY?\n\nThe main procedures include Cell search; Power control; Uplink synchronisation and Uplink timing control; Random access related procedures; HARQ related procedures; Beam management and CSI related procedures; Sidelink related procedures; Channel access procedures. \n\nWe describe a few:\n\n + **Cell Search**: UE acquires time and frequency synchronization for a cell and detects the Cell ID. UE receives PSS, SSS and PBCH.\n + **Uplink Power Control**: For PUSCH, PUCCH, SRS and PRACH transmissions. When Dual Connectivity (DC) is active, UE is configured with maximum power for both MCG and SCG.\n + **Random Access**: Of either Type-1 or Type-2. Involves preamble transmission on PRACH (and PUSCH MsgA in Type-2), random access response (RAR) reception on PDCCH/PDSCH, PUSCH transmission based on RAR UL grant (fallback in Type-2), and PDSCH contention resolution.\n + **PDSCH Reception**: Typically first requires decoding DCI from PDCCH.\n + **PUSCH Transmission**: Scheduled dynamically by UL grant in DCI or semi-statically by Type-1 or Type-2 grants configured by RRC.\n + **CSI Measurements & Reporting**: Periodic, semi-persistent or aperiodic. Reports are sent on PUCCH or PUSCH and triggered by DCI when applicable.\n\n### Which are the main 5G NR PHY specifications?\n\nFor a high-level overview of 5G NR PHY, PHY section of **TS 38.300** is worth reading. This specification gives an overall description of NR and NG-RAN. \n\nA general description of 5G NR PHY layer is found in **TS 38.201**. Beginners can start with this document. 
For more details into PHY, the following are useful: \n\n + **TS 38.202**: Services and functions provided by PHY, downlink/uplink/sidelink models.\n + **TS 38.211**: Physical channels and modulation, frame structure, PHY resources, modulation mapping, OFDM signal generation, scrambling, modulation, up-conversion, layer mapping, precoding.\n + **TS 38.212**: Multiplexing and channel coding, rate matching, transport channels, control information.\n + **TS 38.213**: Physical layer procedures for control, synchronization procedures, uplink power control, random access procedure, UE procedure for reporting and receiving control information.\n + **TS 38.214**: Physical layer procedures for data, power control, procedures related to physical shared channels.\n + **TS 38.215**: Physical layer measurements, UE and NG-RAN measurement capabilities.To learn about RF requirements including operating bands, channel bandwidth and transmitter/receiver characteristics, documents to read are **TS 38.101** in two parts (for FR1 and FR2) for the UE, and **TS 38.104** for base station. \n\n## Milestones\n\nDec \n2017\n\n3GPP approves the first specifications for 5G, called \"early drop\" of Release 15. \n\nDec \n2019\n\nFirst updates to specifications towards Release 16 are published. Some of the new features are Remote Interference Management (RIM), two-step RACH procedure, NR-based access to unlicensed spectrum, Integrated Access and Backhaul (IAB), V2X, eURLLC, NR positioning, MIMO enhancements, Dynamic Spectrum Sharing (DSS) enhancements, Multi-RAT DC/CA, NR-DC and cross-carrier scheduling with different numerologies. \n\nJul \n2020\n\n3GPP finalizes **Release 16** specifications.","meta":{"title":"5G NR PHY","href":"5g-nr-phy"}} {"text":"# Ethical AI\n\n## Summary\n\n\nWe're living in a world where machines and algorithms are increasingly giving recommendations, tagging content, generating reviews and even taking decisions. A number of ethical questions arise. Can we trust machines to do this right thing? Was my loan application processed in a fair manner? Why was my application rejected? \n\nThen there are questions about AI replacing humans in the workplace. With widespread use of personal data, we're worried about privacy and data theft. What happens to us if AI agents acquire human-like cognitive abilities? \n\nEthical AI addresses these issues by building systems that are safe, fair, transparent, accountable and auditable. To practice ethical AI, it's been said that, \n\n> We are not at the point where you can simply download a tool to do the job. Data ethics is still new and it requires critical thinking.\n\n## Discussion\n\n### Are there real-world examples that highlight the need for ethical AI?\n\nIn March 2016, Microsoft released a chatbot named Tay. Tay could interact and learn from real users on social platforms. Soon Tay was exposed to nasty tweets and ended up becoming mean and racist on her own. \n\nA study by ProPublica found that software used to predict future criminals is biased against blacks. Risk scoring done by the software is seen by judges and directly influences sentences. White defendants were often mislabelled as low risk compared to black defendants. Only 20% of those predicted to commit violent crimes went on to do so. \n\nIn March 2018, an Uber self-driving vehicle killed a pedestrian. Who is responsible: the distracted driver, the pedestrian, Uber, developers who wrote the code, or a sensor manufacturer? 
It's unrealistic to expect AI systems to be perfect but determining liability isn't trivial. \n\nMore recently, it was found that Facebook's ad delivery algorithms discriminate based on race and gender even when ads are targeted to a broad audience. Influencing factors include user profile, past behaviour and even the ad content. \n\n\n### What are the common ethical concerns raised by AI?\n\nEthical AI has the following concerns: \n\n + **Bias**: AI systems can be biased because they're designed to look for patterns in data and favour those patterns.\n + **Liability**: AI systems can't be perfect. When mistakes are made, who's responsible?\n + **Security**: As AI systems advance, how do we stop bad actors from weaponizing them? What happens if robots can fight and drones can attack?\n + **Human Interaction**: There's already a decline in person-to-person interactions. Are we sacrificing humanity's social aspect?\n + **Employment**: Repetitive, predictable jobs that can be automated will be automated. Those replaced have to retrain themselves in areas where robots can't come in easily, such as, creative or critical thinking.\n + **Wealth Inequality**: Companies rich enough to invest in AI will get richer by reducing cost and being more efficient.\n + **Power & Control**: Big companies that use AI can control and manipulate how society thinks and acts.\n + **Robot Rights**: If AI systems develop consciousness and emotions, should we give them rights? Can we punish AI and make them suffer?\n + **Singularity**: What happens if AI surpasses human intelligence? Will they turn against us to defend themselves?\n\n### How can AI be biased when it's based on trusted data?\n\nEven if data is trusted, it may not be a fair representation of the population. Dataset may be skewed in a number of ways including race, gender, education, or wealth. When algorithms are exposed to this data, they acquire the bias that's present in them. \n\nBias in AI comes from human biases since we are the ones building the algorithms and creating/selecting data. Minorities, often in low-income groups, lack access to technology. As a result, AI systems are not trained on such data. One infamous example is Google tagging photo of a black woman as \"gorilla\". \n\nThere are many ways in which human biases creep into AI systems. Selection bias, interaction bias and latent bias are some examples. \n\nAI system that's trained primarily on American or European faces will not work well when applied on Asian or African faces. In one example, a Nikon S630 camera hinted to the user that perhaps she has blinked when in reality Chinese people have eyes that are squinty when they smile. In another example, a happy tribal woman was wrongly classified to have \"disgusted\" emotion. \n\n\n### What can we do to make AI more ethical?\n\nWe need to design our systems to avoid biases. Check that proxy metrics don't introduce bias. We can minimize bias by having diversity in teams, data, and validation. \n\nFor transparency, data must be marked with metadata detailing the source, intended use, usage rights, etc. Data must be managed well leading to algorithms that are better traceable, reproducible and fixable. Context is key to transparency and explainability. \n\nWhen AI systems fail, have a plan to fix them in production. We need to monitor them. Let the purpose be clearly defined and bounded. AI systems should not be allowed to explore unintended pathways. \n\nWhat we need is a holistic approach. 
It's not just about technology and tools, but also about leaderships, standards and rules. Microsoft's Satya Nadella has said ethical AI also means how humans interact with AI. We must have empathy, education, creativity, judgement and accountability. \n\nAccenture has suggested setting up AI advisory bodies. Have discussions. Publish guidelines. Engage with stakeholders. The AI Ethics Lab is looking into integrating ethics and AI right at the R&D phase. \n\n\n### Are there published guidelines or code of conduct for practising ethical AI?\n\nThe Institute for Ethical AI & ML has identified 8 ML principles for responsible AI: human augmentation, bias evaluation, explainability by justification, reproducible operations, displacement strategy, practical accuracy, trust by privacy, and security risks. They also identify a 4-phase strategy: by principle, by process, by standards, by regulation. They've also formed the *Ethical ML Network*. \n\nThe Future of Life Institute has created a interactive map addressing validation, security, control, foundations, verification, ethics and governance. \n\nIn 2018, GE Healthcare published a set of AI principles. Google published its own guidelines of what AI applications should and shouldn't do. Microsoft released ten guidelines for developing conversational AI. One of these guidelines says that developers are accountable until bots become truly autonomous. \n\n## Milestones\n\n2014\n\nAmazon starts using AI for filtering resumes and hiring top talent. But in 2015, it's discovered that the algorithm is biased towards male candidates since it was trained on resumes submitted to Amazon over a 10-year period. This dataset had mostly male applicants since they dominated the tech industry. The algorithm just reinforced the bias. The algorithm is found to have other problems as well and is discontinued by Amazon. \n\n2016\n\nThis is the year when people start talking about ethics and AI. Interest in ethical AI grows through 2017 and 2018. \n\nMay \n2018\n\nMicrosoft appoints Tim O'Brien to the role of **AI Ethicist**, a new full-time position. The role involves AI ethics advocacy and evangelism. In reality, O'Brien clarifies that the role includes IoT, analytics, and AR/VR, since real-world solutions are hybrids of multiple technologies. \n\nDec \n2018\n\nMaria de Kleijn-Lloyd of Elsevier finds from her research that only 0.4% of published work on AI deals with ethics. She comments that there's lot of discourse on ethical AI but not much in terms of research and rigorous inquiry. \n\nApr \n2019\n\nGoogle disbands Advanced Technology External Advisory Council (ATEAC), which was supposed to advise on ethical AI. This happens due to controversy over ATEAC membership. This example shows that avoiding human bias and discrimination is not easy. \n\nJan \n2020\n\nFjeld et al. note that many organizations working with AI have published their own guidelines. They study thirty-six such documents and identify eight main themes. Moreover, recent documents tend to cover all the eight themes.","meta":{"title":"Ethical AI","href":"ethical-ai"}} {"text":"# Wi-Fi\n\n## Summary\n\n\nWi-Fi is a technology for wireless local area networking with devices based on the IEEE 802.11 standards. User Equipment (laptop/mobile) uses a wireless adapter to translate data into a radio signal and transmit that signal using an antenna. At the receiving end, a wireless router converts radio waves back into data and then sends it to the Internet using a physical connection. 
Wi-Fi networks either operate in infrastructure mode or ad hoc mode.\n\nWi-Fi networks typically operates in unlicensed 2.4, 5 and 60 GHz radio bands. Data rates up to 20 Gbps are possible in the 60 GHz band. Range of a Wi-Fi network varies anywhere from a few metres (point-to-multipoint) to many kilometres (point-to-point with directional antennas).\n\n## Discussion\n\n### What are the roles of IEEE and Wi-Fi Alliance in Wi-Fi Technology?\n\nIEEE 802.11 is the Working Group of Institute of Electrical and Electronics Engineers (IEEE) that deals with Local Area Networks (LANs), and its main role is to develop technical specifications for WLAN implementation. \n\nThe Wi-Fi Alliance was formed to ensure interoperability testing and certification for the rapidly emerging 802.11 world. This gives consumers the confidence a device from one vendor will work with another from another vendor, as long as they are Wi-Fi certified. It developed Wi-Fi Protected Access (WPA) in response to the poorer security of WEP. While, IEEE standards have technology-centric names, Wi-Fi Alliance has come up with more consumer-friendly naming. For example, IEEE 802.11ax is named Wi-Fi 6. \n\n\n### What's the difference between WiFi and WLAN?\n\nWLAN (Wireless Local Area Network) is a LAN to which a user (Station) can connect through a wireless connection. However, Wi-Fi is a type of WLAN that adheres to IEEE 802.11x specifications. \n\n\n### What are the different existing 802.11x Standards?\n\n\n| 802.11 protocol | Frequency (GHz) | Bandwidth (MHz) | Data Rate (Mbit/s) | Description |\n| --- | --- | --- | --- | --- |\n| 802.11a | 5 | 20 | 54 | Uses data link layer protocol and frame format as the original standard, but an OFDM-based air interface. |\n| 802.11b | 2.4 | 22 | 11 | Uses same media access method defined in the original IEEE standard. |\n| 802.11g | 2.4 | 20 | 54 | Uses OFDM-based transmission and operates at physical layer. |\n| 802.11n | 2.4/5 | 20, 40 | 600 | Provides multiple-input multiple-output antennas. |\n| 802.11ac | 5 | 20, 40, 80, 160 | 3467 | Release incrementally as Wave 1 and Wave 2. More spatial streams, higher-order modulation and the addition of multi-user MIMO. |\n| 802.11ad | 60 | 2106 | 6757 | An amendment that defines a new physical layer to operate in the 60 GHz millimeter wave spectrum. |\n| 802.11ax | 2.4/5 | 20, 40, 80, 80+80 | 9608 | Successor to 802.11ac meant to increase the efficiency of WLAN networks. |\n| 802.11aj | 45/60 | | | A rebranding of 802.11ad for China. |\n| 802.11ay | 60 | 8000 | 20000 | Extension of the existing 11ad, aimed at extending the throughput, range and use-cases. |\n\n\n### Which are the types of Wi-Fi products available in market?\n\nWi-Fi products with a number of features are getting released on a regular basis. Here's a short list of Wi-Fi product types: \n\n + **Wi-Fi Access Point (AP)** - Used to connect other devices in Wi-Fi Infrastructure mode. 
All User Equipment will get access to Internet via Access Point.\n + **Wi-Fi Analyzer** - To Test and diagnose wireless performance issues such as throughput, connectivity, device conflict and single multipath.\n + **Wi-Fi Autodoc** - Autodoc is foremost software to generate a comprehensive report from firewall configuration files.\n + **Wi-Fi Adapters** - Adapters permit various devices to connect with cable-less media to perform various type of external or internal interconnects as PC cards, USB, PCI etc.\n + **Wi-Fi Bar Code Scanner** - WiFi bar code scanner continues their workflow in retail and intended to read stock keeping unit by providing efficiency and simplicity.\n\n### Could you explain infrastructure and ad hoc modes of operation?\n\nInfrastructure mode is suitable for any permanent network that's intended to cover a wide area. Ad hoc mode is suitable for a temporary network where the devices are close to each other.\n\nIn infrastructure mode, Wi-Fi devices on this network communicate through single access point, which is generally called wireless router. For example, two laptops placed next to each other might connect to the same AP. They don't communicate directly. Instead, they’re communicating indirectly through the wireless access point. Infrastructure mode requires a central access point that all devices connect to. \n\nAd-hoc mode is also known as **peer-to-peer mode**. Ad-hoc networks don’t require a centralized access point. Instead, devices on the wireless network connect directly to each other. \n\n\n### Is Wi-Fi a viable technology for IoT applications?\n\nFor IoT, wireless technologies commonly proposed include RFID, LoRa, Sigfox, NB-IoT, LTE-M, IEEE 802.15.4, BLE and Bluetooth Mesh. Wi-Fi is not suitable for battery-operated devices due to its higher power consumption. Where a power outlet is available, Wi-Fi can be used in smart homes, home appliances, digital signages, and security cameras. Wi-Fi 6 might cater to connected cars and retail IoT. \n\nThe high data rates and low latency offered by Wi-Fi 5 and 6 make them suitable for vehicular services and applications heavy on media such as security cameras. \n\nFor low-power long-range applications, **IEEE 802.11ah**, aka Wi-Fi HaLow, is the most suitable standard. It operates in sub-1 GHz band with a range of 1 km. It supports short bursty data transmission and scheduled sleep/wakeup. It's ideal for smart building applications (lighting, HVAC) and smart city applications (parking meters or garages). \n\n**IEEE 802.11p** is for vehicular applications. It aligns with FCC's Dedicated Short-Range Communications (DSRC). Applications seek to improve road safety and traffic management. It competes with LTE-V2V. \n\n\n### Could you share a list of top WLAN solution providers?\n\nIn 2019, some well-known WLAN solution providers included Aerohive Networks, Mojo Networks, Aruba Networks, Cisco Meraki, Ruckus Wireless, Datto Networking, Ubiquiti Networks, Mist Systems, Purple, Edgecore Networks, Cloud4Wi, and Eleven. The best of them provide cloud management, including the use of ML/AI. \n\nA report from IDC showed that in Q1-2019, about 47% of the enterprise market is with Cisco. This is followed by Aruba, Ubiquiti and Ruckus. \n\n## Milestones\n\n1990\n\nNCR Corporation creates *WaveLAN* as a wireless alternative to Ethernet and Token Ring computer networking. In 1991, AT&T acquires NCR. The same year WaveLAN becomes the starting point for the standardization of Wi-Fi. 
\n\n1996\n\nAustralian agency CSIRO's WLAN and its method of recovering data in multipath environments is granted a US patent. It's only in 1999 that the patent goes into the standard IEEE 802.11a. The technology is made available to implementers via non-exclusive licenses. In 2005, CSIRO files first worldwide family litigation. In 2012, it files suits against US carriers. Patent expires in 2013. \n\nJun \n1997\n\nIEEE publishes the first version of Wi-Fi standard, called **IEEE 802.11-1997**. It supports 2 Mbps in the 2.4 GHz band. \n\n1999\n\nSome companies come together to form a global non-profit association to promote and facilitate Wi-Fi adoption and interworking, regardless of brand. This association is initially called *Wireless Ethernet Compatibility Alliance*. In 2000, it's renamed to **Wi-Fi Alliance**. The Alliance also announces **Wi-Fi®** as the formal name for the wireless technology. The term Wi-Fi was in commercial use as early as August 1999. It was a name coined by Interbrand who also designed the Wi-Fi logo. The Alliance announces a certification programme and the first certified devices come out in 2000. \n\nSep \n1999\n\nIEEE publishes two amendments, **IEEE 802.11a** (only 5 GHz band, 54 Mbps max) and **IEEE 802.11b** (only 2.4 GHz band, 11 Mbps max). Although 802.11a offers 54 Mbps, 802.11b offers better range, uses the same modulation as the original standard and leads to dropping prices due to wider adoption. However, in terms data rates Wi-Fi remains far slower than its wired counterparts, Fast Ethernet (100 Mbps, 1995) and Gigabit Ethernet (1Gbps, 1998). \n\nJun \n2003\n\nIEEE publishes the **IEEE 802.11g** standard that provides 54 Mbps data rate although it uses the same 2.4 GHz band as 802.11b. Thus, 802.11g devices can work with 802.11b devices. It uses OFDM as the modulation just as 802.11a. Soon dual-band 802.11a/b products become tri-band 802.11a/b/g. \n\n2004\n\nTVs and smartphones get Wi-Fi certified and are launched in the market. **WPA2** is released to provide higher security. For the first time, Wi-Fi is offered to passengers on a commercial flight. \n\nJan \n2008\n\nNASA installs the first Wi-Fi device on the International Space Station. Two Netgear RangeMax 802.11b/g APs are installed, each giving 240 Mbps. In May 2016, the Wi-Fi network is extended to outside the space station. In 2019, Wi-Fi is integrated into a space suit, takes a space walk and streams HD video. \n\nOct \n2009\n\nIEEE publishes the **IEEE 802.11n** standard. Further amendments appear in later years: 802.11ac (December 2013) and 802.11ax (September 2019). 802.11ax can be seen as an evolution of 802.11ac. \n\nDec \n2012\n\nIEEE publishes the **IEEE 802.11ad** standard that allows operation in the 60 GHz band. It's derived from a WiGig specification completed by *Wireless Gigabit Alliance* in 2009. Since 2010, this alliance has been cooperating with Wi-Fi Alliance to promote WiGig. However, it's only in 2016 that Wi-Fi Alliance starts certifying WiGig products. The delay is mainly because vendors are reluctant to adopt a technology that has little infrastructure support. \n\nOct \n2018\n\nIn an effort to simplify naming, Wi-Fi Alliance introduces **consumer-friendly generation names**. For example, 802.11ax is also known as Wi-Fi 6; 802.11ac as Wi-Fi 5; and 802.11n as Wi-Fi 4. In addition, UI visuals are defined to indicate which Wi-Fi standard is currently in use. Meanwhile in 2018, Wi-Fi certifications reach 45,000 and **WPA3** is released for higher security. 
\n\n2019\n\nThe 30 billionth Wi-Fi device is shipped. Wi-Fi adoption is accelerating given that the 10 billionth device was shipped in 2014 and the 20 billionth device was shipped only in 2017.","meta":{"title":"Wi-Fi","href":"wi-fi"}} {"text":"# Web Exploitation\n\n## Summary\n\n\nWebsites are significantly more complex today than in the early 1990s when they mostly served static HTML content. Web applications often serve dynamic content, use databases, and rely on third-party web services. The application server itself is being built from many components, which may come from diverse sources. Servers authenticate users before logging them into the system. They also authorize users to access restricted resources or data. Often applications handle sensitive user data that need to be protected. \n\nGiven this complexity, it's not easy to deploy and maintain web applications in a secure way. No application is perfect. Hackers are always on the lookout to discover and exploit vulnerabilities. This article discusses web exploitations and offers tips to improve the security of web applications.\n\n## Discussion\n\n### What aspects of an application are vulnerable to web exploitation?\n\nA web application typically involves a web server, an application server, application middleware, internal or third-party web services, a database, and so on. Any of these components could be attacked. \n\nAn attack could be as simple as slowing down the server by making lots of HTTP requests. More serious attacks would involve installing a virus on the server or stealing sensitive data. Defacing the site by modifying site content, or deleting code or data are just as serious but more easily visible. Another attack is to run cryptocurrency miners on server infrastructure. \n\nWeb clients/browsers and servers communicate via protocols such as HTTP, HTTPS, FTP, etc. Vulnerabilities in how these protocols are used could be exploited. Protocols are located at different layers of the protocol stack. Although web exploits happen at the application layer (layer 7), it can impact other layers via packet flooding (data link layer) or SYN flooding (network layer). However, web exploits at the application layer are becoming more common than network layer attacks on web servers. \n\n\n### Which are main types of web vulnerabilities and their related exploits?\n\nWeb exploits involve one or more of the following: \n\n + **Injection**: This results from accepting untrusted input without proper validation. Examples include SQL injection, LDAP injection and HTTP header injection.\n + **Misconfiguration**: This happens when processes are manual and settings are not correctly maintained.\n + **Cross-Site Scripting**: Via user input, server accepts untrusted JavaScript code. When server returns this in response, browser will execute it.\n + **Outdated Software**: With the increasing use of open source and third-party software packages, it's important to keep these updated. Outdated software can be exploited, especially when the vulnerabilities are public.\n + **Authentication & Authorization**: URL may expose session ID. Password may be unencrypted. If timeouts are not correctly implemented, session hijacking is possible. Unauthorized resources can be accessed even when UI doesn't expose them.\n + **Direct Object References**: By poor design or coding error, direct references are exposed to clients. For example, a GET request to `download.php?file=secret.txt` may bypass authorization and allow direct download of a protected file. 
Another example is to directly reset admin password.\n + **Data Exposure**: Sensitive data is stored in an unencrypted form, or exposed in cookies or URLs. Client-server communicate on a non-HTTPS connection.\n\n### What are some exploits pertaining to HTTP headers?\n\nOne technique is to send an arbitrary domain name or port number in the **Host header** and see how the server responds. Duplicating the Host header or formatting it differently can reveal vulnerabilities. These can be exploited to reset passwords, bypass authentication or poison web cache. \n\nSuppose your architecture uses a load balancer or reverse proxy, followed by a backend server. Each component might process the HTTP header in subtly different ways. This can lead to **HTTP request smuggling** attacks in which an extra malicious request is smuggled via the main request. \n\nNewlines are used to separate HTTP header from body. With **CRLF injection**, characters `\\r\\n` are forced into header fields via their URL encoded form `%0D%0A`. This can be used to modify HTTP access logs to cover up earlier attacks, enable CORS or deliver XSS payload. \n\nTo improve web security, HTTP has headers related to Cross-Origin Resource Sharing (CORS). Some of these are Access-Control-Allow-Origin, Access-Control-Expose-Headers, and Access-Control-Max-Age. More security-specific headers include Cross-Origin-Resource-Policy, Content-Security-Policy, Strict-Transport-Security, X-Content-Type-Options, X-Frame-Options, and X-XSS-Protection. \n\n\n### What are some server-side web exploits?\n\nHTTP DoS attack works by making large number of HTTP requests to the web server. It may also involve crawling the site to learn which pages require more server processing. The attack then involves requesting resource-intensive pages. If requests involve database calls, the site slows down visibly affecting user experience. Another way to slow down the server is to set a high Content-Length header value, send fewer bytes and make the server wait for the rest. \n\nRemote code execution is possible if server is using outdated software. Once server is infiltrated, arbitrary code can be executed to steal data or install malware. Insecure deserialization is a vulnerability that can lead to remote code execution. \n\nImproper authentication and authorization can allow attackers access protected server resources or sensitive data. Passwords could be reset via URL query strings. \n\nDatabase could be corrupted via SQL injection. This can happen when input data is not validated. \n\nIn any case, it's important that server software is kept up to date. Review and validate configuration. Collect sufficient logs and constantly monitor so that any breach is detected as soon as possible. \n\n\n### What is Cross-Site Scripting?\n\nCross-Site Scripting (XSS) is when a user visits site A and her browser executes malicious script that then contacts site B. Malicious code is contained within `` tag. The third method (not preferred) is to inline the styles with each element using the `style` attribute. It's possible to combine all these approaches for a single webpage. Due to its cascading nature, CSS figures out the final style to apply for each element. \n\nCSS styles have global scope, meaning that a selector can target any element of the DOM. The early 2010s saw a trend towards Single Page Applications (SPAs). Frameworks such as Augular, React and Vue emerged. About mid-2010s, **CSS-in-JS** emerged as a new way to scope styles to specific page components. 
CSS rules were defined within JavaScript code of each component. \n\n\n### Which are the key features of CSS?\n\nCSS has lots of features and we note a few of them:\n\n + **Animations**: Visual effects can be created for better user engagement, for example, on mouse hover. Using *CSS 3D Transforms*, a 360-degree view of a product can be displayed.\n + **Calculations**: Simple calculations to automatically size an element can be done with `calc()`.\n + **Custom Properties**: Defined at root or component level, browser will substitute all instances with its value. For example, it's useful for theming.\n + **Gradients**: Large images can be replaced with *CSS Gradients* that allow for smooth transitions across colours.\n + **Image Filters**: Background images, colours and gradients can be combined for creating visual effects. Images can be clipped or converted to grayscale.\n + **Layouts**: Beyond the use of tables, diverse layouts can be created with *CSS Grid* and *CSS Flexbox*.\n + **Media Queries**: These are useful for creating responsive designs. System-wide preference for dark mode can be fulfilled.\n\n### Besides HTML, where else is CSS useful?\n\nEven in the early days of CSS, it was recognized that stylesheets could apply to markup languages other than HTML. \n\nCSS has been used with HTML, XML and XHTML. While CSS is common in web browsers many other software also use CSS. PDF generators parse HTML/XML+CSS to generate styled PDF documents. E-books, such as the popular EPUB format, are styled using CSS. Style languages have adopted CSS for their custom requirements: Qt Style Sheets and JavaFX Style Sheets for UI widgets, MapCSS for maps. \n\nOf interest to developers, the popular jQuery library has the method `css()` to set and get any CSS property value of an element. For test automation, Selenium allows developers to select DOM elements using CSS selectors as an alternative to XPath selectors. \n\n\n### Could you share some developer resources for working with CSS?\n\nVisit W3C site for a description of all CSS specifications or search for CSS standards and drafts. W3C also provides a free online CSS Validation Service to validate your stylesheets.\n\nFor a showcase on CSS capabilities, visit CSS Zen Garden. CSS-Tricks is a popular blog focused on CSS for developers. CSS Reference includes every CSS property and rendering of the same at different values. A similar site is DevDocs.\n\nThe CSS page at Mozilla's MDN Web Docs is a good place for beginners. The site has tutorials and shares a cookbook of common layout patterns. \n\nNan Jeon has shared a handy cheatsheet on CSS selectors. Each example includes HTML structure and infographic. Make A Website Hub has published another useful CSS cheat sheet. \n\n## Milestones\n\n1980\n\nIn the 1980s, stylesheets are created as part of Standard Generalized Markup Language (SGML). These are named Document Style Semantics and Specification Language (DSSL) and Formatting Output Specification Instance (FOSI). Though considered for the Web a decade later, they have a limitation: they can't combine styles from different sources. \n\nDec \n1990\n\nTim Berners-Lee has a working web server and a browser on his NeXT computer at CERN. There's no CSS at this point but Berners-Lee recognizes that it's good to keep document structure (content) separate from its layout (styling). His browser has the means to select style sheets. He doesn't publish the details, leaving it to browsers to control styling. 
Indeed, when Pei-Yuan Wei creates the ViolaWWW browser in 1991, it comes with its own stylesheet language. \n\nOct \n1994\n\nHåkon Wium Lie releases a draft titled *Cascading HTML Style Sheets*. This gets discussed, along with alternative proposals, at the Mosaic and the Web Conference in Chicago in November. Lie's proposal has the advantage of being designed for the Web. Styles can be defined by the author, the reader, the browser or even based on device display capabilities. Lie's proposal could combine or \"cascade\" styles from different sources. \n\nAug \n1996\n\nMicrosoft's Internet Explorer (IE) becomes the first commercial browser to support CSS. IE has good support for font, colour, text and background properties but poor support for the box model. When IE6 comes out in 2001, it has much better support for CSS and a browser market share of 80%. \n\nDec \n1996\n\n**CSS Level 1** or **CSS1** is released as a W3C Recommendation. CSS1 allows styling of fonts, colours, background, text, and lists. Box properties include width, height, margin, padding, and border. A revised version of CSS1 is published in April 2008. In September 2018, W3C officially supersedes this by more recent Recommendations. \n\nMay \n1998\n\n**CSS Level 2** or **CSS2** is released as a W3C Recommendation. It's now possible to position elements and style page layout using tables. To target media types and devices, `@media` rule is introduced. CSS2 expands the syntax for selectors. \n\nOct \n1998\n\nEach browser renders CSS1 differently due to non-conformance to the specifications. Todd Fahrner creates the \"acid test\", a CSS1-styled document that a browser must render exact to the pixel. This is based on the work done by Eric Meyer and others who developed a CSS test suite. In later years, more acid tests are defined. These are available at acidtests.org.\n\nJun \n1999\n\nThe first drafts for **CSS Level 3** or **CSS3** are published by W3C. In fact, work on CSS3 started just after the release of CSS2. Unlike, CSS1 and CSS2, CSS3 takes a modular approach. There's no single document. \n\nMay \n2003\n\nAcross browsers, support for CSS is inconsistent. Web designers prefer to avoid CSS and use HTML tables instead. Web designer Dave Shea sets out to change this by showcasing good CSS designs. He launches the website *CSS Zen Garden* with simple HTML content that could be styled differently by changing only the CSS. He showcases five of his own examples. A little later, he allows public to submit their own CSS designs. \n\nMay \n2011\n\nCSS3 is a collection of separate documents. It becomes difficult to release these documents due to interdependencies. It's therefore proposed that CSS3 will be based only on CSS2.1. Each document will have its own levelling. When CSS2.1 becomes a Recommendation, stable modules of CSS3 will also become Recommendations on their own. This modularization doctrine is documented in the *CSS Snapshot 2007*, which is meant for implementors to know the current state of standardization. \n\nJun \n2011\n\n**CSS Level 2 Revision 1** or **CSS2.1** is released as a W3C Recommendation. CSS2.1 fixes some errors present in CSS2, removes poorly supported features and adds features already supported by browsers. It comes after more than a decade of edits, often changing status between Working Draft and Candidate Recommendation. Subsequently, for CSS3, CSS Color Level 3, Selectors Level 3, and CSS Namespaces are published as Recommendations. 
\n\nApr \n2020\n\nCSS now has more than a hundred documents at different levels. It may no longer make sense to use the terms \"CSS3\" or \"CSS4\". CSS continues to evolve with more capabilities.","meta":{"title":"Cascading Style Sheets","href":"cascading-style-sheets"}} {"text":"# Test-Driven Development\n\n## Summary\n\n\nIn traditional software development (such as the Waterfall Model), the order of work is design, code and test. Test-Driven Development (TDD) reverses this order to test, code and design/refactor. In other words, writing tests becomes the starting point. For this reason, TDD is sometimes called *Test-First Programming*. \n\nIt's common in traditional workflows to have lengthy design and implementation phases. Once code is \"frozen\", testing begins. TDD instead recommends small incremental cycles. This leads to frequent feedback and immediate course correction. Continuous Integration (CI) and Shift Left are practices or techniques that are aligned with TDD. \n\nTDD has its roots in Agile methodology and eXtreme Programming (XP). While TDD brings many benefits, it may not suit all projects. Project managers must access the context and apply it accordingly.\n\n## Discussion\n\n### Which are the main steps in TDD?\n\nThe TDD cycle or process has three distinct steps: \n\n + **Test**: Developer writes a unit test first. Since the corresponding feature is not yet implemented in the application code, the test should fail.\n + **Code**: Developer writes the code with the goal of quickly passing the test. Other existing tests should also pass to confirm that nothing is broken.\n + **Refactor**: Design is implicit in the preceding step. Since developer might have written the code quickly, this step is an opportunity to improve the design. Since tests are in place, developer can confidently refactor and improve the design.The three-step process is also called **Red-Green-Refactor**, where red implies a failing test and green implies a passing test. \n\nSometimes the TDD cycle is described in five steps: understand the requirements first, execute the three steps of test-code-refactor, and finally repeat the process. \n\n\n### What's the essence of TDD?\n\nWhile testing is an essential aspect of TDD, TDD is not about testing. Tests are used as a means towards clean code that's less buggy. TDD is a way of developing software. It's not about how to write or execute tests. It's tests driving implementation. For this reason, it's been said, \n\n> Only ever write code to fix a failing test.\n\nWhile testing first is certainly important, studies have shown that the real benefits come from **incremental development**. Developers must work on small features and in short cycles of test, code and refactor. To work on large and complex features that take many days or weeks is the wrong way of doing TDD. \n\nTDD gives developers quick **feedback** on whether the code works as desired. Developers can refactor code more confidently since failing tests can catch problems early on. A related aspect is that tests are written by developers, not by a separate QA team. \n\n\n### What are the benefits of TDD?\n\nTests in TDD are derived from requirements. Tests therefore specify precisely what needs to be built and nothing more. This helps us avoid shipping a wrong product or an overengineered product. Tests are \"living documentation\", helping us understand both the requirements and the code. \n\nTDD can help developers avoid \"paralysis by analysis\". 
By breaking down the requirements into smaller parts, incremental and consistent progress can be achieved. These parts become less coupled and system design becomes less of a monolith. TDD's incremental nature helps the creative process since the developer focuses on one small part at a time. \n\nTests become a safety net. Developers can fearlessly experiment or refactor. They can continuously improve the design or implementation. \n\nTDD creates unit tests that become useful within a CI/CD pipeline. Tests can be made to execute at every build or code commit. When tests fail, developers notice it immediately. This avoids costly integration effort later on. \n\nTDD encourages developers to write more tests. With more tests, debugging time reduces. There are fewer defects. Code becomes more cohesive and less coupled. \n\n\n### What techniques are there for writing tests in TDD?\n\nStart with small tests. For example, a feature or bug fix may involve many building blocks. Test each of those building blocks before attempting an end-to-end feature test. Each small test may run in isolation or within an end-to-end workflow containing stubs for the other blocks. \n\nUnit tests should be automated and they should run fast. They shouldn't depend on one another. Tests that are hard to initialize, have many dependencies, cover many scenarios or show little reuse are code smells. Too many tests per class or too many mocks are also code smells. Test code should follow SOLID design principles. \n\nUnit tests shouldn't create side effects such as calling an external API. Instead, mock external dependencies. This makes tests deterministic and repeatable. \n\nHave a good naming convention for test names. When such a test fails, its name will immediately suggest what aspect of the software isn't working. \n\nTests may assert states of objects, values in databases or return values. Alternatively, tests may assert messages that are exchanged between two blocks of code. Whether state-based testing or interaction-based testing, adopt what makes sense for the project. \n\n\n### What are some best practices for TDD?\n\nManagement and developers must first commit to TDD. This commitment can help overcome old habits and migrate from test-last workflows. Avoid partial adoption where only some developers in the team use TDD. \n\nTDD doesn't imply that we don't need a QA team. While TDD covers unit testing, the QA team can look at integration and system testing. In fact, TDD may not be the best way to test for concurrency and security. \n\nCode refactoring can be about changing structure or improving design. Refactor in a controlled manner and in small steps. Refactor to change internal structure without affecting external behaviour. Moving from one design pattern to another is an example of refactoring. \n\n\n### What are some criticisms of TDD?\n\nTDD requires initial effort. Since developers don't write application code until later, progress can be slow. TDD involves extra effort due to refactoring. \n\nEven 15 years after the birth of TDD, studies have failed to observe its claimed benefits. There's no strong evidence that TDD improves code quality and productivity. Lack of testing skills among developers is a limitation. \n\nOne study found that testing first doesn't contribute to the benefits of TDD. The main contributing factor is TDD's incremental process. \n\nOften requirements at the start of a project are vague, even from the client's perspective. They evolve as the project progresses.
By following TDD, tests will need to be rewritten often as requirements evolve. This is extra effort. TDD advocates that developers write tests. The developer could make the same mistake in both application code and test code. This defeats the purpose of testing. \n\nBy focusing on unit tests, TDD compromises system-level design. It leads to complex interactions, indirections, conceptual overheads, command patterns, and more. Hard-to-unit-test code is not necessarily bad design. In fact, integration tests are better than unit tests for controllers under the MVC pattern. System tests are better for views. \n\n\n### What are some variations of TDD?\n\nTDD unit tests verify isolated pieces of code. But do these parts satisfy high-level requirements? This issue is addressed by **Acceptance TDD (ATDD)**. Tests are derived from specification and requirements. ATDD works at the system level whereas TDD works at the implementation step of each feature. ATDD improves external quality whereas TDD improves internal quality. \n\n**Behaviour-Driven Development (BDD)** is derived from TDD. It shifts the focus from testing to behaviour, requirements and design. BDD could be seen as TDD done right. BDD has been called by other names including Story TDD (STDD), executable acceptance testing, and specification by example. \n\nIn the world of microservices, **Contract-Driven Development (CDD)** can help test interfaces from the perspective of both consumers and providers of service APIs. This mitigates the problem of finding problems later during integration testing. An earlier form of CDD was called Agile Specification-Driven Development that combined TDD and Design by Contract. \n\nDomain-Driven Design (DDD) can work with TDD for experimentation and iterative design. One suggested approach is to think outside in (BDD), view the big picture (DDD) and then think inside out (TDD). \n\n\n### What tools are available to practice TDD?\n\nMany programming languages and IDEs support TDD. There are frameworks that support unit testing, mocking, end-to-end testing, or acceptance testing. Other frameworks support variations of TDD such as BDD. The figure shows a selection of these. \n\nConsider a Node.js project as an example. We note some useful tools: *Node Version Manager (NVM)* for versioning, *Jest* for unit testing, *ESLint* for linting, *Prettier* for formatting, and *lint-staged* for linting on staged files via the pre-commit Git Hook. \n\nSome IDEs can generate stub methods or modify method arguments based on the usage in test cases. \n\n## Milestones\n\n1957\n\nMcCracken writes in his book *Digital Computer Programming* that tests may be written before coding. Moreover, it's advisable that such tests are written by the customers rather than by the programmers themselves. This helps to bring out misunderstandings and logical errors. \n\n1960\n\nIn the early 1960s, programmers at NASA working on the Mercury Space Program write test cases on punched cards before writing the program code. They work on half-day iterations doing test-first micro-increment cycles. Their approach is top-down with stubs. Engineers on this project were doing **incremental development** as early as 1957. \n\n1994\n\nKent Beck codes the first version of *SUnit* test framework for Smalltalk. About a year later he demos TDD to Ward Cunningham at the OOPSLA conference. \n\n1998\n\nKent Beck coins the term **TDD** in his book *Extreme Programming Explained: Embrace Change*. 
A point to note is that TDD has always been a part of Extreme Programming although only now it's being named TDD. \n\n2002\n\nKent Beck publishes a book titled *Test Driven Development: By Example*. He describes TDD as \"a proven set of techniques that encourage simple designs and test suites that inspire confidence.\" The goal is \"clean code that works.\" He also notes that the idea of writing tests first is not new. Years later he notes that he \"rediscovered\" TDD rather than invented it. \n\nMar \n2003\n\nAn early study of TDD shows that it produces code that passes 18% more black box test cases compared to the Waterfall model. TDD tends to produce more tests. However, developers took 16% more time. \n\nSep \n2003\n\nDan North starts working on *JBehave* as a replacement for *JUnit*. He introduces a vocabulary around behaviours rather than tests. Tests should actually describe behaviours. This becomes the starting point for **Behaviour-Driven Development (BDD)**. Inspired by DDD, BDD introduces (in 2004) a common language to describe user stories and acceptance criteria. \n\n2004\n\n**Story TDD (STDD)** is born as an XP practice. It brings together developers, testers and customers to discuss the requirements before any code is written. Chunks of functionality are grouped into stories. Stories are detailed. Tests are written for them. Thus, everyone arrives at a common understanding of what's to be built. In 2010, Park and Maurer review the literature on STDD. \n\n2008\n\nIn a blog post, Grenning presents a useful visualization of how TDD saves time and cost. He compares it to the traditional approach of coding first and testing later, something he calls \"Debug Later Programming\". He makes the point that bugs in code are unavoidable and therefore testing is essential. When a test fails in TDD, we usually know the problem since only small changes have been made. Feedback is immediate. This avoids long debugging sessions. \n\n2010\n\nIn a survey of Agile practitioners, 53% claim to use TDD. At an Agile webinar, 50% claim to use TDD. A Forrester survey shows only 3.4% adoption of TDD among IT professionals. However, TDD was not well-defined in this survey. It was listed alongside Scrum and XP when in fact TDD is a practice that can used within these methodologies. Likewise, results from other studies during this period must be analysed in the context of how TDD was defined or interpreted. \n\n2014\n\nMäkinen and Münch analyze current literature and learn that TDD reduces defects, makes code more maintainable and improves external quality. However, it doesn't seem to improve productivity or internal code quality. Meanwhile, Martin Fowler, Kent Beck, and David Heinemeier Hansson engage in a series of discussions asking \"Is TDD Dead?\" They address TDD's limitations and how not to do TDD.","meta":{"title":"Test-Driven Development","href":"test-driven-development"}} {"text":"# IEEE 802.11ad\n\n## Summary\n\n\nIn scenarios where we desire throughput in excess of 1 Gbps and low latency, such as in home or office environments, we still use wires. At least, there wasn't a suitable Wi-Fi protocol to cater to these scenarios until IEEE 802.11ad was conceived.\n\nIEEE 802.11ad is a protocol for very high data rates (about 8 Gbps) for short range communication (about 1-10 meters) at the 60 GHz unlicensed band. Because of its 60 GHz operation band, 802.11ad complements but does not interoperate at the PHY layer with 802.11ac at 5 GHz band. 
This standard is also called **Directional Multi-Gigabit (DMG)**. Commercially, the term **WiGig (Wireless Gigabit)** is common. \n\nVendor support for IEEE 802.11ad has been growing since 2016. This standard has an evolution path towards IEEE 802.11ay, which also operates in the 60 GHz band.\n\n## Discussion\n\n### What are the typical use cases for 802.11ad?\n\nWe are surrounded by gadgets at home, office and even in our daily commutes. Content is also media rich and there's a need to stream HD videos or play interactive games. Wires are one option but going wireless would be lot more convenient. Think about connecting your laptop to a projector; or connecting Blu-ray player to HDTV; or sharing content from one phone to another; or playing a virtual reality game with wireless controls. \n\nIEEE 802.11ad enables us to get rid of wires in homes and offices. It can be used for wireless docking, display, entertainment, instant file transfers, HD media streaming, AR/VR apps, and more. In general, applications that require high bandwidth (> 1 Gbps) and low latency (~ 10 us) can benefit from 802.11ad. With 802.11ad, it becomes possible to connect without wires consumer electronics devices, handheld devices and personal computers. \n\nThis standard is ideal for short range line-of-sight (LOS) connections, although non-LOS is possible due to multiple antennas. It could also see usage in public Wi-Fi infrastructure and small cells backhaul. \n\n\n### What frequency bands are used by 802.11ad?\n\nPreviously, in the 802.11-2016 standard, 802.11ad had four channels in the range 57-66 GHz. Today, IEEE 802.11ad operates in the 57-70 GHz frequency range. Six channels are available with each having a nominal channel bandwidth of 2.16 GHz. Channel 2 (59.40-61.56 GHz) is available in all regions and is considered as the default channel. \n\nChannel bandwidth of 2.16 GHz is a lot of spectrum, thus allowing 802.11ad to offer multi-gigabit speeds. Comparatively, the most that 802.11ac Wave2 can offer is 160 MHz via channel bonding. \n\n\n### What are some technical details or parameters of 802.11ad?\n\nAt the PHY layer, 802.11ad has three modes: Control, Single Carrier (SC), and OFDM. Later, OFDM mode was made obsolete. There's also the optional Low-power Single Carrier mode for mobile devices. \n\n802.11ad does not support spatial multiplexing such as MIMO. It supports a single spatial stream on a single channel. However, it supports **beamforming** for spatial separation and directional operation. Up to 32 antennas are possible. 2 Gbps at 100 feet LOS is possible. \n\nA variety of modulation and coding schemes (MCS) are available. MCS0 is for Control. MCS1-12 with extensions are for SC mode. MCS25-31 are for Low-power SC mode. MCS13-24 are for OFDM mode. Data rates vary from 385 Mbps to 8085 Mbps. The maximum rate is achieved in MCS12.6 using π/2-64QAM and Low Density Parity Code (LDPC) at rate 7/8. Maximum rate for Low-power SC is at MCS31 giving 2503 Mbps. \n\nCompared to 802.11ac, 802.11ad is more power efficient. It has five times lower power consumption per bit. \n\nThe MAC frame consists of preamble, header, data, and optional training for beamforming. **Golay Sequences** are extensively used in the preamble. \n\n\n### Could you explain Fast Session Transfer?\n\n**Fast Session Transfer (FST)** is a MAC layer feature of 802.11ad that allows for multiband operation. 
While 802.11ad is incompatible with older standards at the PHY layer due to operation in 60 GHz, interoperability is possible at the MAC layer. FST is managed by a Common Upper MAC sublayer that sits on top of Lower MAC sublayer containing band-specific implementation. \n\nWith FST, we can have seamless transfer of sessions from 60 GHz band to other bands, and vice versa. For example, if a better range is desired then the session may be transferred from 60 GHz to 5 GHz while sacrificing throughput. A session transfer that involves a different MAC address may be slower than when same MAC client and address is used for all bands. Some devices may be capable of supporting multiple bands at the same time. TP-Link's AD7200 offers such a multiband operation. \n\nA related concept is called **band steering** where an access point presents a single SSID for clients across all bands. This is supported by some D-Link routers but not by TP-Link's AD7200. \n\n\n### Who are currently supplying 802.11ad chipsets?\n\nChipsets are available from Broadcom, Intel, Qualcomm Atheros, Wilocity, Tensorcom, Peraso, Lattice Semiconductor, MediaTek, Nitero, and others. Wikipedia gives a list of specific chips from these vendors.\n\nAt CES 2013, Wilocity was one of the first to give a prototype demo of the technology based on its chips. The Wilocity chip was also used in the Dell Latitude 6430u Ultrabook. In July 2014, Wilocity was acquired by Qualcomm. Qualcomm stated that their future chips will be tri-band: 2.4 GHz, 5 GHz and 60 GHz. \n\n\n### Commercially, what 802.11ad products are currently available?\n\nWireless routers and access points are available from Netgear, Acelink, TP-Link, IgniteNet and Asus. In January 2016, Acer released TravelMate P648 notebook with 802.11ad support. Asus announced a 802.11ad smartphone back in September 2017. \n\nTP-Link's Talon AD7200 claims a theoretical speed of 7200 Mbps via multiband operation but a speed test over a distance of couple of meters gave 868 Mbps downlink and 280 Mbps uplink. \n\n\n### What are the alternatives to 802.11ad?\n\nIEEE 802.11ad is not the only protocol for multi-gigabit wireless.\n\nThere's SiBeam's **WirelessHD** (aka **UltraGig**), also operating in the 60 GHz band. Back in 2010 when 802.11ad was being standardized, SiBeam was already shipping its chips for integration into consumer products. WirelessHD is designed for video, as high as 28 Gbps. This means that uncompressed 1080p FullHD video can be transmitted. WirelessHD specs were released in January 2008. However, WirelessHD website shows no news after 2013.\n\nThere's also **Wireless Home Digital Interface (WHDI)**, which operates in the 5 GHz band. It's purpose is to deliver interactive HD video from any device to any display, with quality equivalent to wired HDMI cable. \n\nLet's not forget **Miracast**, which runs over 802.11n or 802.11ac in the 5 GHz band. With Miracast, for example, we can stream content from a smartphone to a TV. \n\n\n### How is 802.11ad related to Media Agnostic USB?\n\nIn September 2013, Wi-Fi Alliance transferred its work on \"WiGig Serial Extension Specification\" to USB-IF (USB Implementers Forum). USB-IF will use this as a starting point for standardising **Media Agnostic USB (MA-USB)**. An alternative standard called **Wireless USB** operates in the range 3.1-10.6 GHz. MA-USB is agnostic of the underlying technology. Data could be transferred on Wi-Fi 2.4/5 GHz or WiGig 60 GHz. 
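As a small worked example of the channel plan described earlier, the sketch below derives the six 802.11ad channel centre frequencies from just two stated facts: the 2.16 GHz nominal channel bandwidth and the 59.40-61.56 GHz span of channel 2. It's only an arithmetic illustration; regional availability of individual channels isn't modelled.

```python
# Reconstruct the 802.11ad channel plan from the 2.16 GHz nominal channel
# bandwidth and the stated span of channel 2 (59.40-61.56 GHz).
CHANNEL_BW_GHZ = 2.16
CH2_CENTRE_GHZ = (59.40 + 61.56) / 2  # 60.48 GHz

for ch in range(1, 7):
    centre = CH2_CENTRE_GHZ + (ch - 2) * CHANNEL_BW_GHZ
    low, high = centre - CHANNEL_BW_GHZ / 2, centre + CHANNEL_BW_GHZ / 2
    print(f"Channel {ch}: {low:.2f}-{high:.2f} GHz (centre {centre:.2f} GHz)")
```

Running this lists channels 1 to 6 spanning roughly 57.24-70.20 GHz, consistent with the 57-70 GHz range mentioned above.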
\n\n## Milestones\n\nJan \n2009\n\nAt the IEEE, the VHT Study Group starts looking into **Very High Throughput (VHT)** at 60 GHz. Both 802.11ac and 802.11ad come under the scope of VHT. \n\nMay \n2009\n\nAbout 15 technology companies come together to form **Wireless Gigabit (WiGig) Alliance**, an organization tasked with defining a wireless specification at the 60 GHz band. \n\nDec \n2009\n\nVersion 1.0 of the WiGig specification is released. It supports data rates up to 7 Gbps. WiGig 1.0 announced\n\nMay \n2010\n\nWi-Fi Alliance and WiGig Alliance enter into a partnership. This enables Wi-Fi products in the 60 GHz band. Wi-Fi Alliance commits to studying the WiGig specs and perhaps certify for it. \n\nDec \n2012\n\nAs an amendment to the overall IEEE 802.11, **IEEE 802.11ad-2012** is published with the title \"Enhancements for Very High Throughput in the 60 GHz Band\". It contains changes to both PHY and MAC layers. This is subsequently amended in March 2014. \n\nMar \n2013\n\nAfter two years of collaboration, Wi-Fi Alliance and Wireless Gigabit Alliance merge. Further work on WiGig, including product certification, will be taken up by Wi-Fi Alliance. \n\nJan \n2016\n\nAt CES 2016, TP-Link demos its WiGig router, Talon AD7200. At 60 GHz, it claims 4.6 Gbps raw data rate. It also supports a/b/g/n/ac standards where 802.11ad is not available. It has eight antennas for beamforming. Inside, it uses two Qualcomm Atheros chips, one for older standards and another for 802.11ad. Also at CES 2016, Acer shows off its TravelMate P648 with 802.11ad support. \n\nMar \n2016\n\nOFDM mode has interoperability issues with Single Carrier (SC) mode. As a result, OFDM mode is made obsolete. In future, 802.11ay may design a proper OFDM PHY at 60 GHz. This change goes into IEEE 802.11-2016 standard, released in December 2016. \n\nSep \n2017\n\nAsus ZenFone 4 Pro becomes the world's first smartphone to support WiGig. It's not clear if the device is certified by the Wi-Fi Alliance. It uses Snapdragon 835 Mobile Platform.","meta":{"title":"IEEE 802.11ad","href":"ieee-802-11ad"}} {"text":"# TIP OpenWiFi\n\n## Summary\n\n\nOpenWiFi is an community-developed open source platform to lower the cost of developing and operating Wi-Fi networks, and to accelerate the innovation of services. Traditionally, every service provider or OEM would build the Wi-Fi stack from scratch for specific hardware. OpenWiFi provides an open implementation of the stack along with certified hardware on which the stack can run. It provides open APIs that enable device management from the cloud. \n\nLaunched in 2021, OpenWiFi is an initiative of the Open Converged Wireless (OCW) software project group under the Telecom Infra Project (TIP). This follows from other TIP initiatives such as OpenRAN and Open Optical & Packet Transport (OOPT). \n\nWith 16 billion+ Wi-Fi devices in use, Wi-Fi is projected to grow further, particularly when 6 GHz band is being approved for unlicensed use. Wi-Fi complements 5G. Given this context, OpenWiFi has generated much interest in the industry.\n\n## Discussion\n\n### Given open source projects such as OpenWrt and DD-WRT, why do we need OpenWiFi?\n\nOpenWrt and DD-WRT are based on WRT54G router firmware open sourced by Linksys back in 2003. Since then, the open source community has maintained and evolved such firmware to support new devices and Wi-Fi features as the need arose. Thus, these codebases grew without a top-down approach. 
There's no certification program to ensure that the firmware runs on a particular router hardware. Moreover, there are plenty of variants and alternatives to OpenWrt, leading to fragmentation. \n\nUnder TIP, OpenWiFi is a coordinated approach to openness within the Wi-Fi ecosystem. It uses OpenWrt. It brings all stakeholders including corporations, industry alliances and open source communities. A single codebase can be used on any compliant whitebox hardware. A standardized open API allows access points to be managed from the cloud. Application developers can use the API to build innovative services on the cloud and target any device running on the OpenWiFi stack. \n\nTIP OpenWiFi should not be confused with OpenWiFi Labs Private Limited, OpenWiFi (an R&D prototype) or openwifi (an SDR implementation).\n\n\n### What are the benefits of OpenWiFi?\n\nOpenWiFi's open codebase and compliance testing means that Wi-Fi solution providers don't have to build everything from scratch. OpenWiFi therefore enables them to **lower R&D costs** without compromising on latest features or developments in Wi-Fi. With lower barriers to entry, new service providers can come into the market. \n\nRather than focusing their efforts on \"Wi-Fi plumbing\", enterprises can focus on **service innovation**. In other words, they can focus on innovative applications rather than the Wi-Fi stack itself. \n\nOpenWiFi also enables multi-vendor selection of cloud controllers and access points. There's no vendor lock-in. Service providers get **choice and flexibility**. \n\nDue to economies of scale, OpenWiFi is expected to bring down CAPEX and OPEX, leading to **lower Total Cost of Ownership (TCO)** compared to current proprietary solutions. \n\n\n### What is the OpenWiFi stack?\n\nThe two main layers of the OpenWiFi stack are: \n\n + **Access Point Firmware**: Among its features are support for multiple topologies, multiple authentication standards, Passpoint®, and Zero Touch Provisioning. Wi-Fi standards supported include Wi-Fi 4/5/6 or IEEE 802.11n/ac/ax. Its Network Operating System (NOS) features include embedded captive portal, airtime fairness, local provisioning over SSID, SSID rate limiting, inter-AP communication, and many more.\n + **Cloud Controller SDK**: Among its features are Zero Touch Provisioning, firmware management, RESTful Northbound Interface (NBI), data model-driven API, template-based device provisioning with RADIUS profile management, and advanced RF control with RRM. It also enables remote troubleshooting and service assurance.\n\n### Who are the players involved in OpenWiFi?\n\nIn May 2021, OpenWiFi included more than 100 participants including services providers, Original Equipment Manufacturers (OEMs), Original Design Manufacturers (ODMs), Independent Software Vendors (ISVs), system integrators, silicon vendors and industry organizations. These names include Qualcomm, MediaTek, TP-Link, Edgecore Networks, T-Mobile, Vodafone, NetExperience, Facebook, and many more. \n\nODMs will offer whitebox access points that are OpenWiFi compatible and address various use cases. OEMs will bring together whitebox hardware and OpenWiFi stack to offer commercial solutions. OEMs are expected to offer innovative services that can be accessed via the OpenWiFi cloud controller. Finally, enterprises and service providers will take what OEMs provide to deploy Wi-Fi networks for specific use cases. 
\n\nTIP's Open Converged Wireless software project group that manages OpenWiFi has noted that it will collaborate and reuse software components from other groups including Wi-Fi Alliance, Wireless Broadband Alliance (WBA), and OpenWRT. An example of this is WBA's OpenRoaming standard that OpenWiFi has adopted. \n\n\n### How do I get started with OpenWiFi?\n\nThe easiest way to start is to purchase a certified TIP OpenWiFi AP and partner with cloud providers who can simplify the deployment. To create custom builds, the OpenWiFi source code is available under TIP's GitHub account. However, use of the code requires OCW membership. \n\nOpenWiFi's official documentation mentions high-level features of the Access Point (AP) and Cloud Software Development Kit (SDK). As on June 2021, the documentation was pretty basic and incomplete, possibly because the project was still very new.\n\nTo contribute to the project, an individual or company needs to become a TIP member. \n\n## Milestones\n\nFeb \n2021\n\nTIP Board of Directors approve **Open Converged Wireless (OCW)** as a software project group under Telecom Infra Project (TIP). It's described as \"a dedicated open source community for the design, development & testing of converged wireless connectivity systems software for Wi-Fi, switching, and other wireless technologies.\" \n\nMay \n2021\n\nTIP announces the launch of **OpenWiFi Release 1.0**. This disaggregated Wi-Fi system is community-developed and open source. At the same time, TIP kicks off lab and field trials of Wi-Fi connectivity solutions based on TIP OpenWiFi. The launch event sees many sponsors and dozens of participants including service providers, software OEMs and hardware ODMs. It's reported that 10+ service providers are trialing OpenWiFi, 5+ ODMs have shipped OpenWiFi-compatible whitebox access points, and 8+ Wi-Fi OEMs are using OpenWiFi to build commercial solutions. \n\nJun \n2021\n\nNetExperience announces general availability of its platform that can manage TIP OpenWiFi-compatible access points. The platform is available as licensed software or SaaS. The platform has been tested against access points from TP-Link, EdgeCore and CIG. NetExperience claims to be the \"first full Wi-Fi management platform for OpenWiFi\" to come to market.","meta":{"title":"TIP OpenWiFi","href":"tip-openwifi"}} {"text":"# Wi-Fi Direct\n\n## Summary\n\n\nWi-Fi-Direct is a standard developed by Wi-Fi Alliance so that Wi-Fi devices can connect and communicate with each other directly without using an access point (AP). This is also known as Wi-Fi P2P (Wi-Fi Point-to-Point). This is very useful for direct file transfer, internet sharing, or connecting to a printer. Wi-Fi Direct can communicate with one or more devices simultaneously at typical Wi-Fi speeds.\n\nWi-Fi Direct supports IEEE 802.11 a/b/g/n/ standards.\n\n## Discussion\n\n### Can you explain how Wi-Fi Direct works?\n\nWi-Fi Direct works in two steps: \n\n + **Device Discovery**: A device first broadcasts a probe request message asking for MAC ID from all nearby devices. This stage is similar to the scanning phase in regular Wi-Fi. All devices that hear this message respond with a unicast message to the sender. Devices alternate between sending out and listening for probe requests, and subsequently their responses.\n + **Service Discovery**: Now the sender sends a unicast service request message to each device. 
The receiving devices respond with unicast messages containing service details.\n\nIn terms of the states or phases of Wi-Fi Direct, devices go through scan phase (scan the channels), find phase (which includes service discovery), formation phase (group is formed and includes WPS provisioning), and operational phase. Whereas discovery happens on Social channels 1, 6 and 11 in the 2.4 GHz band, operation can be in either 2.4 or 5 GHz bands. \n\n\n### How many devices can Wi-Fi Direct connect?\n\nA Wi-Fi Direct-certified network can be one-to-one, or one-to-many. The number of devices in a Wi-Fi Direct-certified group network is expected to be smaller than the number supported by traditional standalone access points intended for consumer use. Connection to multiple other devices is an optional feature that will not be supported in all Wi-Fi Direct-certified devices; some devices will only make 1:1 connections. \n\n\n### Can you explain the architecture of Wi-Fi Direct?\n\nAs Wi-Fi Direct does not require any AP, the device itself has the capability to function like an AP. Wi-Fi Direct devices, aka P2P Devices, communicate by establishing P2P Groups, which are functionally equivalent to traditional Wi-Fi infrastructure networks. The device implementing AP-like functionality in the P2P Group is referred to as the *P2P Group Owner (P2P GO)*, and devices acting as clients are known as *P2P Clients*. P2P GO is sometimes referred to as *Soft AP*. \n\nWi-Fi Direct devices can function in 1:1 and 1:n configurations. The topology in the diagram illustrates both scenarios. In the 1:1 scenario, a Wi-Fi Direct device prints files at a printer. In the 1:n scenario, a laptop shares files with three other Wi-Fi devices. \n\n\n### Is it possible for Group Owner to connect to Wi-Fi AP?\n\nOnly the P2P GO is allowed to cross-connect the devices in its P2P group to an external network. Wi-Fi Direct does not allow transferring the role of P2P GO within the group. If the P2P GO leaves the P2P group, the group is torn down and has to be re-established. \n\n\n### Can you explain how group formation happens?\n\nGroup formation procedure involves two phases: \n\n + **Determination of P2P Group Owner**: Two P2P devices negotiate for the role of P2P Group Owner based on desire/capabilities. P2P GO role is established at formation or at an application level.\n + **Provisioning of P2P Group**: P2P group session is established using appropriate credentials. Wi-Fi Simple Configuration is used to exchange credentials.\n\nThere are three ways of group formation: Standard, Autonomous and Persistent. In **Standard**, P2P devices discover each other and negotiate who will act as P2P GO. In **Persistent**, devices recall if they've had a previous connection with persistent flag set. If so, GO Negotiation Phase (3-way handshake) is replaced with Invitation Phase (2-way handshake). WPS Provisioning is simplified since stored network credentials are reused. \n\n\n### Can you explain how service discovery happens?\n\nThe Service Discovery process enables Wi-Fi Direct devices to discover each other and the services they support before connecting. For example, a Wi-Fi Direct device could see all compatible devices in the area and then narrow down the list to only devices that allow printing before displaying a list of nearby Wi-Fi Direct-enabled printers.
Before establishment of a P2P Group, P2P Devices can exchange queries to discover the set of available services and, based on this, decide whether to continue the group formation or not.\n\n*Generic Advertisement Service (GAS)* as specified by 802.11u is used. GAS is a layer 2 query and response protocol implemented through the use of public action frames, that allows two non-associated 802.11 devices to exchange queries belonging to a higher layer protocol (e.g. a service discovery protocol).\n\n\n### How is security implemented in Wi-Fi Direct?\n\nWi-Fi Direct devices are required to implement Wi-Fi Protected Setup (WPS) to support a secure connection with minimal user intervention. WPS is based on WPA-2 security and uses AES-CCMP as ciphering, and a randomly generated Pre-Shared Key (PSK) for mutual authentication. \n\n\n### Could you explain Wi-Fi Direct's Power Saving Mode?\n\nWi-Fi Direct defines two new power saving mechanisms: \n\n + **Opportunistic Power Save Protocol**: The Opportunistic Power Save protocol (OPS) allows a P2P Group Owner to opportunistically save power when all its associated clients are sleeping. This protocol has a low implementation complexity but, given the fact that the P2P Group Owner can only save power when all its clients are sleeping, the power savings that can be achieved by the P2P Group Owner are limited.\n + **Notice of Absence Protocol**: The Notice of Absence (NoA) protocol allows a P2P GO to announce time intervals, referred to as *absence periods*, where P2P Clients are not allowed to access the channel, regardless of whether they are in power save or inactive mode. In this way, a P2P GO can autonomously decide to power down its radio to save energy.\n\n### What's the speed of Wi-Fi Direct?\n\nA Wi-Fi Direct-certified device supports typical Wi-Fi speeds, which can be as high as 250 Mbps. Even at lower speeds, Wi-Fi provides plenty of throughput for transferring multimedia content with ease. The performance of a particular group of Wi-Fi Direct devices depends on whether the devices are 802.11 a, b, g, or n, as well as the particular characteristics of the devices and the physical environment. \n\n\n### Can you relate Wi-Fi Direct to Miracast?\n\nMiracast is a Wi-Fi display certification program announced by Wi-Fi Alliance for seamlessly transferring video between devices. The intersection of wireless connectivity and streamed audio/video content can be termed as *Miracast*. This solution enables seamless mirroring of entire displays across devices or sharing of any type of content that a source could display using Wi-Fi Direct. \n\n\n### Why is the design of Wi-Fi Direct based on infrastructure mode and not ad hoc mode?\n\nMost commercial devices are for infrastructure mode and not ad hoc mode. For easier migration and interworking with legacy devices, Wi-Fi Direct is based on infrastructure mode. Wi-Fi Direct works on the principles similar to that of Wi-Fi AP. Hence, the P2P GO is sometimes also called *Soft AP*. .\n\n\n### Is it sufficient for only one device to be Wi-Fi Direct-certified to form a group?\n\nYes. Only the P2P GO needs to be Wi-Fi Direct-certified. Other group members may be legacy Wi-Fi stations that operate in infrastructure mode. Wi-Fi Direct essentially embeds a software access point for the group owner. The soft AP provides a version of Wi-Fi Protected Setup with its push-button or PIN-based setup. Thus, group members see only an AP. 
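To make Group Owner selection more concrete, here's a minimal Python sketch of the negotiation outcome. It assumes the GO Intent scheme of the Wi-Fi P2P specification (each device sends an intent value from 0 to 15 plus a tie-breaker bit), a detail not spelled out above, and it models only the decision rule, not the full 3-way handshake or WPS provisioning.

```python
def negotiate_group_owner(intent_a: int, intent_b: int, tie_breaker_a: bool) -> str:
    """Return which device ('A' or 'B') becomes the P2P Group Owner.

    Simplified model assuming GO Intent values 0-15 and a tie-breaker bit;
    this is an illustrative sketch, not a protocol implementation.
    """
    if not (0 <= intent_a <= 15 and 0 <= intent_b <= 15):
        raise ValueError("GO Intent must be in the range 0-15")
    if intent_a == intent_b == 15:
        # Both devices insist on the GO role: the negotiation fails.
        return "negotiation failed"
    if intent_a != intent_b:
        return "A" if intent_a > intent_b else "B"
    # Equal intents: device A's tie-breaker bit decides.
    return "A" if tie_breaker_a else "B"


# Example: a mains-powered laptop sharing its connection vs. a phone.
print(negotiate_group_owner(intent_a=14, intent_b=3, tie_breaker_a=True))  # A
```

In practice, a device that intends to cross-connect the group to an external network would typically advertise a higher intent than a battery-powered client.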
\n\n\n### What are the pros and cons of Wi-Fi Direct?\n\nWi-Fi Direct can connect multiple devices without a Wi-Fi AP. Wi-Fi Direct is portable. No Internet connection is required to transfer files between two devices. Only one device needs to be Wi-Fi Direct-certified and it can assume the role of the group owner. \n\nWi-Fi Direct has some limitations. Not all vendors support it. Connection has to be re-established with other devices every time to form the group. As soon as GO leaves the group, all connections are broken and service stops. The group is formed as a star topology with the GO at the center. Group members cannot talk to each other directly. \n\nProtected by WPA2, Wi-Fi Direct is secure, but applications could use it wrongly and thereby compromise overall security. Devices connect using Wi-Fi Protected Setup (WPS) but implementations that use WPS PIN method of setup are insecure. An analysis from April 2020 of some popular Android applications that use Wi-Fi Direct found 17 security issues. These applications (SHAREit, Xender, Xiaomi Mi Drop, Files by Google, Zapya and SuperBeam) often compromised security in favour of usability. \n\n## Milestones\n\nOct \n2010\n\nWi-Fi Alliance launches Wi-Fi Direct certification program. Within the same month, Atheros, Broadcom, Intel, Ralink and Realtek announce their first certified products. A month later, *Popular Science* magazine recognizes Wi-Fi Direct as one of the best tech innovations of the year. \n\nOct \n2011\n\nGoogle announces Wi-Fi Direct support in Android 4.0. \n\nMar \n2016\n\nSony, LG, Philips and X.VISION implement Wi-Fi Direct on some of their televisions. \n\nJun \n2017\n\nWith the growing adoption of Software Defined Networking (SDN), Poularakis et al. explore how SDN can be applied to wireless mobile networks. Traditional SDNs are centralized in the network whereas mobile networks require a distributed architecture. They propose a hybrid architecture. In one example, they should how Wi-Fi Direct can also be part of the network. A Wi-Fi Direct smartphone also becomes a virtual switch due to *Open vSwitch* and uses *OpenFlow* protocol.","meta":{"title":"Wi-Fi Direct","href":"wi-fi-direct"}} {"text":"# SQL Injection\n\n## Summary\n\n\nSQL (Structured Query Language) is a language used to create, update and access data in a database. By carefully crafting SQL commands, a hacker can intentionally cause the application to fail, delete data, steal data or gain unauthorized access. This is what we call SQL injection or **SQL Injection Attack (SQLIA)**. SQL itself is a highly flexible language, which creates opportunities for hackers to acquire sensitive information such as credit card details, user passwords or user IDs. .\n\nThere are various methods and tools to attack an SQL database. This article gives an overview of SQL injection attacks and what hackers aim to achieve. We also cover the types and techniques of SQL injection attacks. We note some possible preventive measures towards more secure database-driven websites.\n\n## Discussion\n\n### Why do we need to worry about SQL injection?\n\nCWE (Common Weakness Enumeration) is a system where all weaknesses and vulnerabilities are categorised concerning a particular system architecture, code, hardware or software. **CWE-89: SQL Injection** notes that any data-rich software is vulnerable to attack when user information is stored in an SQL-based database. An attack can lead to loss of data confidentiality, access control and data integrity. 
\n\nMany widely adopted databases are based on SQL including Oracle, MySQL, Microsoft SQL Server, PostgreSQL and SQLite. In April 2022, these were in the first, second, third, fourth and tenth positions respectively. SQLIA techniques may slightly differ across databases but all are vulnerable if not properly secured. \n\nIn 2021, the Open Web Security Project (OWASP) tested 94% of applications under study for injection attacks. They found that 19% had at least one of instance of a vulnerability type (called incidence rate). Injection category was the third-most serious risk and includes 33 CWEs. On a ten-point scale, 7+ values for both exploit and impact shows the threat of SQLIA. In the years 2004/2007/2010/2013/2017, injection attacks were in positions 6/2/1/1/1 respectively. \n\n\n### Could you introduce SQL injection with a few examples?\n\nThe figure illustrates a few examples, each affecting the application in a different way. In all cases, inputs submitted by hackers are directly used to form queries without proper validation.\n\nThe first example possibly represents a user sign-up form where the user is expected to enter first name and last name. Instead, a hacker enters first name that includes a single quote. Since single quote is normally used by developers to enclose values, the extra quote in input causes execution error. The application will not respond correctly to the user. \n\nIn the second example, the hacker submits the string `name' OR 'a'='a` for item name. In the resulting command, `'a'='a'` will match and return all records to the hacker. This is a serious data breach. \n\nIn the third example, using semi-colons, we execute multiple commands, called batched queries. Comment syntax `--` is used to ignore characters after the delete command. This attack results in a serious data loss. \n\n\n### What's the nature of SQL injection attacks?\n\nSQLIA can take many forms: identifying injectable parameters, performing database fingerprinting, determining database schema (table or column names, datatypes), adding/modifying/extracting data, performing denial of service (locking or dropping rows/tables/databases), evading detection, bypassing authentication, executing remote commands, performing privilege escalation, etc. In general, a hacker tries to gather information about the database, uses that information to handcraft attacking commands, and finally execute those commands. \n\nThese attacks can be placed into four categories: \n\n + **SQL Manipulation**: Modify SQL statements possibly by changing the `WHERE` clause.\n + **Code Injection**: Insert extra SQL statements after the exploitable SQL statement. This is possible on databases that support multiple SQL requests. Semi-colons typically separate multiple queries. `UNION` statements can be used provided the hacker knows the exact number columns and their data types returned by the exploited statement.\n + **Function Call Injection**: Insert function calls within the exploited SQL statement. This can not only manipulate data but also make operating system calls.\n + **Buffer Overflows**: A product of using a function call injection. Hacker exploits a weakness often found on unpatched servers.\n\n### What are the different types of SQL injection attacks?\n\nDifferent types of SQLIA have been documented: \n\n + **Boolean-based**: Aka tautology attack. 
The use of `OR 2 = 2` within the `WHERE` clause makes other conditions in the clause redundant.\n + **UNION-based**: The `UNION` statements combines the results of separate queries. `' UNION SELECT username, password FROM users--` is an example injection. Where multiple columns have be retrieved within fewer columns, columns can be concatenated.\n + **Error-based**: Error messages can give information about database type. For example, a query that fails due to a `SUBSTRING` function call suggests that it's probably an Oracle database where the equivalent function is named `SUBSTR`.\n + **Batch Queries**: Aka piggyback attack. For example on SQL-Server 2005, `; INSERT INTO users VALUES (‘Abubakar’, ‘1234’);#` ends the original statement, appends another statement and comments out any trailing characters with `#` character.\n + **LIKE-based**: Similar to tautology attack but based on pattern matching using `LIKE` and wildcard operator `%`. An example is `OR username LIKE 'S%'`.\n + **Stored Procedure**: Hacker executes built-in stored procedures using malicious SQL code injections.\n\n### What's a blind SQL injection?\n\nA hacker typically submits an SQL query and gathers information from the response. Information can come back in the same channel (in-band) or in a separate channel such as email (out-of-band). But in blind SQL injection (aka inferential attack) the hacker obtains useful information even when no data or error message is returned to the hacker. The hacker does this by observing the behaviour. Blind injections can be Boolean-based or time-based. \n\nWith **Boolean-based attacks**, the hacker sees that true and false evaluations result in different behaviour. He then exploits this, perhaps over many queries, to eventually learn sensitive information such as the administrator's password. \n\nWith **time-based attacks**, the hacker can insert time delays into SQL command execution. These delays can be conditional or unconditional. Due to synchronous processing, these delays affect the HTTP response to the client. Based on the query, the response time gives the hacker clues to what's going on. For example, an observed delay can confirm that an injected condition was true; or adding `SLEEP(1)` to the `WHERE` clause can inform the number of matching records. \n\n\n### What's a second-order SQL injection?\n\nSecond-order SQL injection occurs when the injected inputs don't do immediate damage. They're simply stored in the database as data. When later retrieved, they end up causing malicious executions. Second-order injections are in a way sophisticated because they bypass prevention techniques that're employed when inputs are validated. However, similar stringent validation may be lacking when data is retrieved from the database. \n\nThe figure shows an example in which a hacker creates a user account with name that hides within it an update command. The system may at best check for length, valid characters and uniqueness of username. But at a later time, when the hacker logs in with this handcrafted username, it executes the update command and changes the administrator password. This effectively gives the hacker much more power to cause serious damage. \n\nSecond-order injections can be used to create tables or functions that can be exploited later when those constructs are triggered. Shared search criteria, website statistics and customer services are some app features that may be exploited to achieve second-order injections. 
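To make the contrast concrete, here's a minimal Python sketch using the standard `sqlite3` module; the table and values are invented for illustration. It shows how a tautology payload like the ones above returns every row when the query is built by string concatenation, but nothing when the input is bound as a parameter.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

user_input = "x' OR 'a'='a"  # tautology payload, as in the examples above

# Vulnerable: raw input concatenated into the query returns every row.
unsafe = "SELECT username FROM users WHERE username = '" + user_input + "'"
print(conn.execute(unsafe).fetchall())               # [('alice',), ('bob',)]

# Safer: the input is bound as a parameter, so the quote and OR are just data.
safe = "SELECT username FROM users WHERE username = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # []
```

Parameter binding is the same idea behind the prepared statements and `PREPARE` approach recommended later in this article.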
\n\n\n### What evasion techniques do hackers use for advanced SQLIA?\n\nThere are Intrusion Detection Systems (IDSs) and tools to detect and protect against common injections. Hackers have therefore evolved techniques to achieve more sophisticated attacks that basic systems may not detect: \n\n + **Encoding**: Instead of using plain ASCII characters, injections are URL-encoded. Alternatively, UTF-8 or hexadecimal encoding may be used. For example, `5' OR '5'='5` is encoded as `%35%27%20%4F%52%20%27%35%27%3D%27%35`; or `admin` is encoded as `char(97,100,109,105,110)`.\n + **White-spacing**: Whitespaces are not significant in SQL. Hackers therefore omit or insert extra whitespaces that may include line feeds, carriage returns and tabs. For example, `OR '5'='5'`, `OR '5' = '5'` and `OR'5'='5'` are equivalent.\n + **Comment**: Using multiline comments, attackers can subvert detection techniques that attempt to match signatures. For example, a `SELECT` query can be written as `SELE/**/CT` or `/**/ SELECT/**/`.\n + **Capitalization**: Where the database is configured to be case-insensitive, `AND` is the same as `anD`.\n + **Variation**: Techniques in this group include concatenation, variables, and conversions. For example, `'UN'||'ION'` is the same as `UNION`. A statement is split into multiple variables and combined later for execution.\n\n### Which tools help in detection and prevention of SQLIA?\n\nThe figure shows tools to either detect or prevent SQLIA, and their handling of different attack types. A 2017 study noted that SQLCheck, SQLIPA, DB IDS, AMNESIA, SQLDOM and WebSSARI all achieve better than 90% protection. \n\nSQLrand is a proxy server between the web server and the database server. It receives randomized SQL from the application and submits standard SQL to the database. All user inputs are treated as non-keywords. AMNESIA does static analysis of application code, constructs a model of valid queries and validates queries at runtime. WebSSARI is another tool that combines static analysis and runtime inspection. \n\nSQLDOM creates strongly-typed classes based on the database schema. These classes are used to generate SQL statements. It's an object model as it's used with the help of object-oriented languages. SQLCheck focuses on grammar and policy that are created and stored on the web server. User inputs are augmented with metacharacters and then passed to the checker for validation. \n\n\n### What should developers do to protect against SQLIA?\n\nWhile a production system may employ tools to detect and prevent SQLIA, there are some best practices that developers can adopt to mitigate the possibility of SQLIA in the first place. Give users only limited database privileges. Application code probably never needs to drop or truncate tables. Configure the web server properly so that verbose error messages are not shown to users. \n\nNever trust user inputs. Don't rely only on client-side validation since it can be bypassed with curl, Postman or similar tools. Do server-side validation. Adopt safe coding practices. Put quotes around input strings. Replace a single quote with two single quotes. Where numbers are expected, check that indeed only digits are present. \n\nEven better is to avoid using raw SQL statements formed from raw input strings. Use custom stored procedures and pass user inputs as parameters into these procedures. Many SQL databases support the `PREPARE` statement; pass user inputs as parameters to such a statement. 
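As an illustrative sketch only, assuming a Node.js application using the better-sqlite3 package and a hypothetical `users` table, a parameterized query keeps user input out of the SQL text entirely:

```typescript
// Illustrative sketch: assumes the better-sqlite3 package and a hypothetical
// `users` table. Placeholders (?) keep user input as data, so a value such as
// `name' OR 'a'='a` cannot alter the query structure.
import Database from 'better-sqlite3';

const db = new Database('app.db');

// Unsafe: string concatenation lets the input rewrite the query.
function findUserUnsafe(name: string) {
  return db.prepare(`SELECT id, username FROM users WHERE username = '${name}'`).all();
}

// Safe: the statement is prepared once; the value is bound separately.
function findUserSafe(name: string) {
  return db.prepare('SELECT id, username FROM users WHERE username = ?').all(name);
}

console.log(findUserSafe("name' OR 'a'='a"));  // returns [] instead of every row
```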
This avoids the unsafe practice of concatenating raw inputs to form queries. \n\nRead OWASP SQL Injection Prevention Cheat Sheet and OWASP Query Parameterization Cheat Sheet. \n\n## Milestones\n\nDec \n1998\n\nJeff Forristal (alias rain.forest.puppy) writes about SQL injection in the Phrack magazine. He notes that MS SQL Server 6.5 allows execution of **batch commands** and gives many examples to exploit this. He concludes,\n\n> Don't assume user's input is ok for SQL queries.\n\n2003\n\nAs more and more web applications emerge, there's a need to find loopholes that may endanger the user's security. In this context, *WAVES (Web Application Vulnerability and Error Scanner)* is first introduced. WAVES is a project developed by Hunag et al. to create a platform to assess web application security based on vulnerabilities such as SQL injection and cross-site scripting. \n\n2008\n\nA hacker obtains access to 100 million debit and credit cards by hacking into the Heartland Payment System. As a result, Heartland pays around $140 million in fines and penalties. \n\n2011\n\nMultiple attacks targeting Sony through Sony Pictures, Sony Music Japan and Sony Playstation Network occur. Personal information of 100 million users is leaked, causing a loss of $171 million. The attacks on Sony Pictures and Sony Music Japan used SQL injection. \n\nMar \n2011\n\nAs many as 47 MySQL databases on Apache web server get hacked. Hackers publicly post usernames, database schemas and passwords that they gathered using **blind SQL injection**. The breach includes hashed passwords of administrative accounts of webmaster, WordPress, root and forums. \n\n2013\n\n**OWASP Top 10** is published stating how injection attacks take top spot among all types of attacks. This top spot is maintained in a subsequent survey in 2017. The report notes that injection attacks are easy while the impact is severe. \n\n2017\n\nIn a survey paper, Alwan and Younis mention two recent types of SQLIA called Fast Flux SQLIA and Compounded SQLIA. The latter is a combination of many attack techniques including DDoS attack, DNS hijacking, Cross-Site Scripting (XSS), and more. \n\nJun \n2019\n\nLewis of NCC Group shares a proof-of-concept of using **Machine Learning to detect SQLIA** vulnerabilities. They use real-world vulnerability data (though limited in size) and limit their study to only MySQL. They use SVM-Multiclass. In 2021, Erdodi et al. propose using Reinforcement Learning instead with Markov decision process as the model for SQLIA. These are just two examples to illustrate the use of machine learning in SQLIA.\n\n2020\n\nAbikoye et al. use the **Knuth-Morris-Pratt (KMP) string matching algorithm** to detect and prevent SQLIA. Based on syntactic analysis of various types of attacks, parse trees are designed. KMP algorithm is then used to compare user inputs against the known attack patterns.","meta":{"title":"SQL Injection","href":"sql-injection"}} {"text":"# 5G UE Measurements and Reporting\n\n## Summary\n\n\nMeasurements are essential to determine the health of any cellular system given the current configuration. Measurements help the UE and the network make decisions so that resources are managed better and ultimately quality of service is achieved. Measurements are done by both UE and the network, although this article focuses only on measurements performed by a 5G UE. \n\nTypically, a UE measures downlink signals while network measures uplink signals. However, it's possible for a UE to measure uplink signals sent by other UEs. 
\n\nRRC manages measurement configuration. Most measurements are executed by Layer 1, although some may be at Layer 2. RRC does filtering on Layer 1 measurements. If filtered measurements meet reporting criteria, they're reported to the network. Some measurements are reported by Layer 1 directly to the network.\n\nWhile UE measures many different signals, the main ones are based on **SSB** and **CSI-RS**.\n\n## Discussion\n\n### What are some acronyms pertaining to 5G UE measurements?\n\nFor convenience we use these acronyms related to measurements in this article: Channel State Information (CSI), Demodulation Reference Signal (DMRS), Reference Signal (RS), Reference Signal Received Power (RSRP), Reference Signal Received Power per Branch (RSRPB), Reference Signal Received Quality (RSRQ), Received Signal Strength Indicator (RSSI), Signal-to-Noise and Interference Ratio (SINR), Synchronization Signal (SS) and Synchronization Signal Block (SSB). \n\n\n### With respect to RRC states, what measurements are done by a 5G UE?\n\nAn essential UE procedure in RRC\\_IDLE is cell selection. In RRC\\_IDLE and RRC\\_INACTIVE, the UE can also do cell reselection. For both these procedures, UE measures RSRP and RSRQ of a cell. \n\nFor cell reselection, if supported and enabled, UE may do **Relaxed Measurements**. This is typically useful when the UE has low mobility or not at the cell edge. \n\nIf configured, a UE in RRC\\_IDLE or RRC\\_INACTIVE may collect measurements and report them later in RRC\\_CONNECTED. This is called **Logged Measurements**. It's related to a feature called *Minimization of Drive Test (MDT)*. \n\nIn RRC\\_CONNECTED, the UE is configured via dedicated signalling to perform intra-frequency or inter-frequency NR measurements, or inter-RAT measurements for E-UTRA or UTRA-FDD frequencies. The network uses these to decide on carrier aggregation, dual connectivity or handover. \n\n\n### What are some basic facts about 5G UE measurements?\n\nFor downlink channel sounding, 5G NR uses two main downlink signals that a UE measures:\n\n + **SSB**: Transmitted with a low duty cycle on a limited bandwidth compared to LTE's Cell-Specific Reference Signals (CRS) that's sent on the entire channel bandwidth. SSB measurements are used to determine path loss and average channel quality.\n + **CSI-RS**: Used for tracking rapidly changing channel conditions to support mobility and beam management.In general, measurements in FR1 are from UE's antenna connector. In FR2, measurements are based on the combined signal from antenna elements mapped to a given receive branch. If UE is using receiver diversity, it reports the maximum value. \n\nAlthough a UE can be configured for measurements early on, measurement reports can be sent to the network only after **AS security activation**. \n\n\n### Which are the main quantities measured by a 5G UE?\n\nWe describe the main UE measurements involving SS:\n\n + **SS-RSRP**: Average power of resource elements carrying secondary synchronization signals. In addition, resource elements of PBCH-DMRS and CSI-RS can be included.\n + **SS-RSRPB**: Similar to SS-RSRP but for each antenna connector (FR1) or for each receive branch (FR2). Measurements can include PBCH-DMRS but not CSI-RS.\n + **SS-RSRQ**: Given N resource blocks within the measurement bandwidth, this is \\(N \\cdot SS{\\text-}RSRP/RSSI\\_{NR\\,Carrier}\\). Both SS-RSRP and NR carrier RSSI are measured over the same resource blocks.\n + **SS-SINR**: This is SS-RSRP over average noise-plus-interference power. 
The latter is measured based on RRC configuration or over the same resource elements as SS-RSRP measurement.Equivalent measurements pertaining to CSI-RS are CSI-RSRP, CSI-RSRQ and CSI-SINR. \n\n**RSSI** is average power over certain OFDM symbols in a measurement bandwidth corresponding to channel bandwidth. It includes co-channel serving and non-serving cells, adjacent channel interference, thermal noise, etc. \n\n\n### Which are some feature-specific measurements done by a 5G UE?\n\nMeasurements for inter-RAT include: \n\n + IEEE 802.11 WLAN RSSI: for handovers to Wi-Fi\n + Reference Signal Time Difference (RSTD) for E-UTRA: relative timing difference between an E-UTRA cell and the E-UTRA reference cell\n + E-UTRA RSRP, RSRQ and RS-SINR\n + UTRA FDD CPICH RSCP, UTRA FDD carrier RSSI and UTRA FDD CPICH Ec/NoMeasurements for MR-DC include: \n\n + SFN and Frame Timing Difference (SFTD): Measured between PCell and PSCell.Measurements for sidelink channels include: \n\n + Sidelink RSSI\n + Sidelink Channel Occupancy Ratio (SL CR)\n + Sidelink Channel Busy Ratio (SL CBR)\n + PSBCH-RSRP, PSSCH-RSRP and PSCCH-RSRPMeasurements for UE positioning include: \n\n + Timing between a E-UTRA cell and a GNSS-specific reference time\n + DL PRS-RSRP measured on Positioning Reference Signal (PRS)\n + DL Reference Signal Time Difference (RSTD) measures the relative time difference between a Transmission Point (TP) and the reference TP\n + UE Rx–Tx time difference measured per TP\n + SS Reference Signal Antenna Relative Phase (SS-RSARP)\n\n### What are Cross Link Interference (CLI) measurements?\n\nCLI is a problem in TDD when a base station receiving in the uplink is facing interference from another base station transmitting in the downlink. It can happen across network operators due to out-of-band emissions. CLI can be mitigated by time-synchronization across base stations, that is, they share a common clock, phase reference and frame structure. \n\nEven within the same operator network, CLI can occur since cell neighbours may be using different TDD DL/UL patterns. CLI can be mitigated by gNBs coordinating their configuration over Xn and F1 interfaces. \n\nIf capable, a 5G UE measures and reports CLI-RSSI. These reports can also include SRS-RSRP measurements on Sounding Reference Signal (SRS), which are uplink signals coming from other UEs. This quantifies interference on downlink due to nearby uplink transmissions. Both CLI-RSSI and SRS-RSRP are measured within the active DL BWP. These are applicable only for RRC\\_CONNECTED intra-frequency in TDD mode. \n\nIn EN-DC and NGEN-DC, Secondary Node (SN) configures CLI measurements. In NE-DC, Master Node (MN) does it. In NR-DC, both MN and SN can do this. \n\n\n### What's the difference between 5G NR L1 and L3 measurements?\n\nRSRP, RSRQ, SINR, and RSSI are quantities measured at L1, not L3. We use the term \"L3 measurement\" to imply that L3 does filtering on the values and does the final reporting. Filtering is done to remove the effect of fast fading and ignore short-term variations. Though L1 may collect measurements more often, L3 might report them at a larger configured periodicity. Thus, L3 takes a longer-term view of channel conditions. \n\nTo avoid ping-pong behaviour and unnecessary reporting, L3 manages event-based reporting. It evaluates reporting criteria to decide if a report needs to be sent. Apart from thresholds, such criteria include hysteresis. \n\nBut L1 reports some measurements to quickly react to changing channel conditions. 
Beam management is an example. These are referred to as L1-RSRP and L1-SINR. L1 reports are part of Channel State Information (CSI). \n\nCSI-RS measurements and L1 reporting can be periodic, semi-periodic (activated/deactivated by MAC signalling) or aperiodic (triggered by DCI signalling). L1 reporting is on PUCCH (periodic, semi-periodic) or PUSCH (aperiodic, semi-periodic). \n\n\n### What L2 measurements are reported by a 5G UE to the network?\n\nAt Layer 2, there's measurement and reporting of **UL PDCP delay** for packets during the reporting period. Delay is reported at a granularity of 0.1ms and per Data Radio Bearer (DRB). At most one measurement identity per cell group has this quantity configured for reporting. The corresponding measurement object is ignored. \n\nThe delay is in fact queuing delay. It's the time taken to obtain an uplink grant from the time the packet enters PDCP from the upper SAP. It's up to the gNB to convert these per-DRB reports to delays at the level of QoS flows. \n\nFor completeness, we note that many more L2 measurements are done on the network side. These are described in the TS 38.314 specification. \n\n\n### What's in a typical 5G NR measurement configuration?\n\nA 5G UE is given the following measurement details: \n\n + **Measurement Objects**: Specifies what is to be measured. For NR and inter-RAT E-UTRA measurements, this may include cell-specific offsets, blacklisted cells to be ignored and whitelisted cells to consider for measurements.\n + **Reporting Configuration**: Specifies how reporting should be done. This could be periodic or event-triggered.\n + **Measurement ID**: Identifies how to report measurements of a specific object. This is a many-to-many mapping: a measurement object could have multiple reporting configurations, a reporting configuration could apply to multiple objects. A unique ID is used for each object-to-report-config association. When the UE sends a MeasurementReport message, a single ID and related measurements are included in the message.\n + **Quantity Configuration**: Specifies parameters for layer 3 filtering of measurements. Reporting criteria are evaluated only after filtering. The formula used is \\(F\\_n = (1 - a) \\cdot F\\_{n-1} + a \\cdot M\\_n\\), where \\(M\\) is the latest measurement, \\(F\\) is the filtered measurement, and \\(a\\) is based on the configured filter coefficient.\n + **Measurement Gaps**: Periods that the UE may use to perform measurements.\n\n### Which are the event-triggered measurements reported by 5G UE RRC?\n\nEvents are triggered based on thresholds, hysteresis and sometimes offsets. The RRC specification defines the following: \n\n + Event A1: Serving becomes better than threshold\n + Event A2: Serving becomes worse than threshold\n + Event A3: Neighbour becomes offset better than SpCell\n + Event A4: Neighbour becomes better than threshold\n + Event A5: SpCell becomes worse than threshold1 and neighbour becomes better than threshold2\n + Event A6: Neighbour becomes offset better than SCell\n + Event B1: Inter RAT neighbour becomes better than threshold\n + Event B2: PCell becomes worse than threshold1 and inter RAT neighbour becomes better than threshold2\n + Event I1: Interference becomes higher than threshold\n + Event C1: The NR sidelink channel busy ratio is above a threshold\n + Event C2: The NR sidelink channel busy ratio is below a threshold\n\nOnly the serving cell is relevant for events A1/A2. 
For other events, consider only whitelisted cells if enabled; else consider any neighbour cell detected based on the measurement object configuration provided it's not in the blacklist. \n\nEvents B1/B2 relate to inter-RAT. They're used in EN-DC deployments as well where they're reported via E-UTRA RRC signalling. \n\n\n### How do I interpret events used in event-triggered measurement reporting?\n\nA1 is typically used to **cancel a handover procedure** since the UE has re-established good coverage on the serving cell. With A2, UE has poor coverage on serving cell. Since neighbour cell measurements are not available with A2, network can initiate a **blind handover**; or provide UE configuration to perform neighbour cell measurements (eg. A3 or A5). \n\nEvents A3 and A6 contain offsets specific to a neighbour cell. Both involve relative measurements, that is, comparing one cell with another. A3 may lead to intra- or inter-frequency handover away from the Special Cell (SpCell). A6 is relevant to carrier aggregation, where a Secondary Cell (SCell) is configured. A6 may not result in a handover but instead **reconfiguration of cell groups** (MCG or SCG). \n\nA4 could trigger a handover but the decision is not based on radio conditions on the serving cell. It could be for other reasons such as **load balancing**. B1 is similar for the inter-RAT case. \n\nA5 can be seen as a combination of A2 and A4, leading to intra- or inter-frequency handover. B2 is similar for the **inter-RAT** case. \n\n\n### Which are the 3GPP specifications relevant to 5G UE measurements?\n\nWe note the following specifications: \n\n + **TS 37.340**: Multi-connectivity overall description: Stage-2. Specifies measurement model for multi-connectivity operation involving E-UTRA and NR.\n + **TS 38.133**: Requirements for support of radio resource management. Specifies measurement requirements (including performance requirements), procedures and UE measurement capabilities.\n + **TS 38.215**: Physical layer measurements. Specifies measurement quantities.\n + **TS 38.300**: NR and NG-RAN overall description: Stage-2. Specifies measurement model.\n + **TS 38.304**: User Equipment (UE) procedures in idle mode and in RRC Inactive state. Specifies cell reselection measurement rules and relaxed measurements.\n + **TS 38.314**: Layer 2 measurements. Specifies mostly network requirements but section 4.3 is for UE.\n + **TS 38.331**: Radio Resource Control (RRC): protocol specification. Specifies UE reporting of measurements.\n + **TS 38.533**: User Equipment (UE) conformance specification: Radio Resource Management (RRM).Specification *TS 28.552: 5G performance measurements* is mostly about network-side measurements including network slicing. We mention it here for completeness. \n\n## Milestones\n\nDec \n2017\n\n3GPP publishes Release 15 \"early drop\". In 5G RRC specification TS 38.331, version 15.0.0, measurement configurations specified include measurement objects, measurement identifies, reporting, gaps, L3 filtering, and events A1-A6. Measurements supported include SS-RSRP, SS-RSRQ, SS-CINR, CSI-RSRP, CSI-RSRQ, CSI-CINR, RSSI and more. \n\nDec \n2019\n\nIn TS 38.133, version 16.2.0, these are introduced: CLI measurements, UMTS inter-RAT measurements, SRVCC-related measurements, and more. \n\nJul \n2020\n\n3GPP publishes Release 16 specifications. 
In 5G RRC specification TS 38.331, version 16.3.1, has a number of additions: measurement configuration for RRC\\_IDLE and RRC\\_INACTIVE; CLI-RSSI and SRS-RSRP reporting; sidelink and UTRA-FDD reporting; UL PDCP delay reporting; IEs `UE-MeasurementsAvailable` and `needForGapsInfoNR` as part of RRCReconfigurationComplete and RRCResumeComplete messages; and UE positioning measurements.","meta":{"title":"5G UE Measurements and Reporting","href":"5g-ue-measurements-and-reporting"}} {"text":"# Types of Blockchains\n\n## Summary\n\n\nHistorically, Blockchain started as a public permissionless technology when it was used for powering Bitcoin. Since then, other types of blockchains have been created. These can be categorized as a combination of public/private and permissionless/permissioned. Each type fits a specific set of use cases. When choosing a particular type, we have to be aware the tradeoffs.\n\nIn general, public/permissionless blockchains are open, decentralized and slow. Private/permissioned blockchains are closed and centralized, either partially or completely. They're also more efficient. It's also important to note that for some use cases, traditional databases may suffice instead of a blockchain.\n\n## Discussion\n\n### Could you describe the different types of blockchains?\n\nBroadly, there are two types of blockchains: \n\n + **Permissionless**: Anyone can join the network. They can read/write/verify transactions. The system is open. There's no central authority. This system makes sense when no one wants to use a trusted third party (TTP). Trust is therefore established among peers via an agreed consensus mechanism. While transactions can be read by anyone, it's also possible to hide sensitive information if so desired.\n + **Permissioned**: A central authority grants permissions to only some folks to read/write/verify transactions. Since write access is given to a trusted few, consensus is achieved in a simpler and more efficient way. Public read access may be allowed.Some classify blockchains as public, private and permissioned. In a private blockchain, controlling power is with only one organization. In a permissioned blockchain, controlling power is given to a few selected entities. Thus, no single entity can tamper the system on their own. These are also called federated or consortium blockchains. They are a compromise between the openness of public blockchain and the closed control of private blockchains. \n\n\n### How does one go about selecting a suitable blockchain type?\n\nBlockchain is useful in applications where multiple entities write to a shared database, these entities don't trust one another and don't want to use a trusted third party intermediary. If entities are unknown or wish anonymity, then a permissionless blockchain is desired. Otherwise, go for a permissioned blockchain. \n\nBlockchain is also useful when multiple copies of a ledger are maintained. In this case, blockchain enables real-time reconciliation without have a third-party trusted intermediary. \n\nA public permissioned blockchain is one in which some trusted entities write to the chain but public is allowed to verify. For example, a consumer might want to verify the source of the fish she buys but only those involved in the supply chain have permissions to write to the chain. In some applications, such as Cryptologic, confidential transaction data is hashed before added to the public blockchain. \n\nA private permissioned blockchain can be used when control rests with a single trusted entity. 
If multiple organizations are involved, then a consortium blockchain is preferred. \n\n\n### What are the advantages of using a permissioned blockchain?\n\nA permissioned blockchain is similar to a permissionless one except for an additional access control layer. This layer controls who can participate in the consensus mechanism, and who can create transactions or smart contracts. \n\nA permissioned blockchain gives the following advantages: \n\n + **Performance**: Excessive redundant computation of permissionless blockchains is avoided. Each node will perform only those computations relevant to its application.\n + **Governance**: Enables transparent governance within the consortium. Also, innovation and evolution of the network can be easier and faster than in permissionless blockchains.\n + **Cost**: It's cost effective since there's no need to do spam control such as dealing with infinite loops in smart contracts.\n + **Security**: It has the same level of security as permissionless blockchains: \"non-predictive distribution of power over block creation among nodes unlikely to collude.\" In addition, an access control layer is built into the network by design.\n\n### Can you name some examples of the different types of blockchains?\n\nBitcoin and Ethereum are well-known examples of public blockchains but Ethereum can also be used to create a private blockchain. OpenChain enables private blockchains. Chain supports permissioned blockchains suited for financial applications. Patientory is a permissioned blockchain for electronic health records. Ripple is a permissioned blockchain. \n\nBitcoin Cash, Zilliqa and Cypherium are permissionless blockchains. Universa and Oracle Network are permissioned blockchains. \n\nSome platforms can be configured to manage more than one type of blockchain. For example, MultiChain and HydraChain can be used for private or permissioned blockchains. Hyperledger can be used for private or public blockchains. Hyperledger Fabric and R3 Corda are for private or permissioned blockchains. \n\nWilliam El Kaim has curated a useful list of blockchains, blockchain platforms and applications.\n\n\n### What's the point of private blockchains where immutability can be compromised?\n\nIt's true that since a private blockchain is controlled by a single entity or organization, it can be easily tampered with. It's therefore argued that private blockchains are no better than shared databases. If trust and robustness are already guaranteed, one could simply use a database. Moreover, databases have long supported code execution (for example, via stored procedures) that's similar to what blockchain calls smart contracts. \n\nHowever, others argue that the use of cryptography and Merkle trees prevents invalid transactions from getting added to the chain. With shared databases, a hack on a single entity will corrupt the database for everyone. This isn't possible with private blockchains when a consensus algorithm such as Juno is used. \n\n## Milestones\n\nJan \n2009\n\nBitcoin is launched using blockchain technology. It's the first application of a public blockchain. \n\n2012\n\nRipple, a permissioned blockchain, is launched. \n\n2013\n\nEthereum, an open permissionless blockchain, is described in a white paper by Vitalik Buterin, a programmer involved with Bitcoin Magazine. \n\nJul \n2015\n\nInitial release of Ethereum, an open-source, public, blockchain-based distributed computing platform and operating system featuring smart contract (scripting) functionality. 
\n\nDec \n2015\n\nLinux Foundation announces the creation of the Hyperledger Project, a permissioned open-source blockchain. \n\n2016\n\nThis is the year when people start showing greater interest in private plus permissioned blockchains than in public blockchains. This increased interest continues through 2018, as shown by Google Trends data collected in March 2018.","meta":{"title":"Types of Blockchains","href":"types-of-blockchains"}} {"text":"# Lemmatization\n\n## Summary\n\n\nConsider the words 'am', 'are', and 'is'. These come from the same root word 'be'. Likewise, 'dinner' and 'dinners' can be reduced to 'dinner'. Variations of a word are called **wordforms** or **surface forms**. It's often complex to handle all such variations in software. By reducing these wordforms to a common root, we simplify the input. The root form is called the **lemma**. An algorithm or program that determines lemmas from wordforms is called a **lemmatizer**. \n\nFor example, the Oxford English Dictionary of 1989 has about 615K lemmas as an upper bound. Shakespeare's works have about 880K words, 29K wordforms, and 18K lemmas. \n\nLemmatization involves word morphology, which is the study of word forms. Typically, we identify the morphological tags of a word before selecting the lemma.\n\n## Discussion\n\n### Why do we need to find the lemma of a word?\n\nMany NLP tasks can benefit from lemmatization. For instance, topic modelling looks at word distribution in a document. By normalizing words to a common form, we get better results. In word embeddings, that is, representing words as real-valued vectors, removing inflected wordforms can improve downstream NLP tasks. \n\nFor information retrieval (IR), lemmatization helps with query expansion so that suitable matches are returned even if there's not an exact word match. In document clustering, it's useful to reduce the number of tokens. It also helps in machine translation. \n\nUltimately, the decision to use lemmas is application dependent. We should use lemmas only if they show better performance. \n\n\n### What are the challenges with lemmatization?\n\nOut-of-vocabulary (OOV) words are a challenge. For example, WordNet, which is used by the NLTK package for lemmatization, doesn't have the word 'carmaking'. The lemmatizer therefore doesn't relate this to 'carmaker'. \n\nIt's difficult to construct rules for irregular word inflections. The word 'bring' might look like an inflected form of 'to bre' but it's not. Even more challenging is a word such as 'gehört' in German. It's a participle of 'hören' (to hear) or of 'gehören' (to belong). Both are valid but only the context of usage can help us derive the correct lemma. It's for these reasons that neural network approaches to learning rules are preferred over hand-crafted rules. \n\nWhen content comes from the Internet or social media, it's impractical to use a predefined dictionary. This is another reason for a neural network approach with an open vocabulary. \n\nEven with neural networks, some inflected forms might never occur in training, such as 'forbade', the past tense of 'forbid'. Many training corpora come from newspaper texts where verbs in the second person are rare. This can impact lemmatization of such forms. \n\n\n### How is lemmatization different from stemming?\n\nGiven a wordform, stemming is a simpler way to get to its root form. Stemming simply removes prefixes and suffixes. Lemmatization on the other hand does morphological analysis, uses dictionaries and often requires part of speech information. 
Thus, lemmatization is a more complex process. \n\nStems need not be dictionary words but lemmas always are. Another way to say this is that \"a lemma is the base form of all its inflectional forms, whereas a stem isn't\". \n\nWordforms are either inflectional (change of tense, singular/plural) or derivational (change of part of speech or meaning). Lemmatization usually collapses inflectional forms whereas stemming does this for derivational forms. \n\nStemming may suffice for many use cases in English. For morphologically complex languages such as Arabic, lemmatization is essential. \n\nThere are two types of problems with stemming that lemmatization can solve: \n\n + Two wordforms with different lemmas may stem to the same result. Eg. 'universal' and 'university' result in same stem 'univers'.\n + Two wordforms of same lemma may end as two different stems. Eg. 'good' and 'better' have the same lemma 'good'.\n\n### Which are the available models for lemmatization?\n\nThe classical approach is to use a **Finite State Transducer (FST)**. FST models encode vocabulary and string rewrite rules. Where there are multiple encoding rules, there's ambiguity. We can think of FST as reading the surface form from an input tape and writing the lexical form to an output tape. There could be intermediate tapes for spelling changes, etc. \n\nOne well-known tool is Helsinki Finite State Toolkit (HFST) that makes use of other open source tools such as SFST and OpenFST. \n\nChrupała formalized lemmatization in 2006 by treating it as a string-to-string transduction task. Given a word w, we get its morphological attributes m. To obtain the lemma l, we calculate the probability P(l|w,m). This uses features based on (l,w,m). It then trains a **Maximum-Entropy Markov Model (MEMM)**, one each for POS tags and lemmas. Müller improved on this by using **Conditional Random Fields (CRFs)** for jointly learning tags and lemmas. \n\nOne researcher combined the best of stemming and lemmatization. \n\n\n### Which are the neural network approaches to lemmatization?\n\nA well-known model is the **Sequence-to-Sequence (seq2seq)** neural network. Words and their lemmas are processed character by character. Input can include POS tags. Every input is represented using word embeddings. \n\nTo deal with lemma ambiguity, we need to make use of the context. **Bidirectional LSTM** networks, that are based on RNNs, are able to do this. They take in a sequence of words to produce context-sensitive vectors. Then the lemmatizer uses automatically generated rules (pretrained by another neural network) to arrive at the lemma. However, such ambiguity is so rare that seq2seq architecture may be more efficient. **Encoder-decoder** architecture using GRU is another approach to handle unseen or ambiguous words. \n\nTurku NLP based on NN provides one of the state-of-the-art lemmatizers. Other good ones are UDPipe Future and Stanford NLP, although the latter performs poorly for low-resource languages, for which CUNI x-ling excels. \n\n\n### Could you mention some tools that can do lemmatization?\n\nIn Python, NLTK has `WordNetLemmatizer` class to determine lemmas. It includes the option to pass the part of speech to help us obtain the correct lemmas. Other Python-based lemmatizers are in packages spaCy, TextBlob, Pattern and GenSim. \n\nStanford's LemmaProcessor is another Python-based lemmatizer. It allows us to select a seq2seq model, a dictionary model or a trivial identity model. For Chinese, a dictionary model is adequate. 
In Vietnamese, the lemma is identical to the original word. Hence, the identity model will suffice.\n\nTreeTagger does POS tagging and also gives lemma information. It supports 20 natural languages. Another multilingual framework is GATE DictLemmatizer, which is based on HFST and word-lemma dictionaries available from Wiktionary. We can use wiktextract to download and process Wiktionary data dumps. \n\nLemmaGen is an open source multilingual platform with implementations or bindings in C++, C# and Python. In .NET, there's LemmaGenerator. There's a Java implementation of Morpha.\n\n## Milestones\n\n1968\n\nIt's in the 1960s that morphological analysis is formalized. Chomsky and Halle show that an ordered sequence of rewrite rules converts abstract phonological forms to surface forms through intermediate representations. \n\n1972\n\nDouglas C. Johnson shows that pairs of input/output can be modelled by **finite state transducers**. However, this result is overlooked and rediscovered later in 1981 by Ronald M. Kaplan and Martin Kay. \n\n1983\n\nThus far, rules have been applied in a cascade. Kimmo Koskenniemi invents **two-level morphology**, where rules can be applied in parallel. Rules are seen as symbol-by-symbol constraints. Lexical lookup and morphological analysis are done in tandem. It's only in 1985 that the first two-level rules compiler is invented. \n\n1997\n\nKarttunen et al. show how we can **compile regular expressions** to create finite state transducers. The use of finite state transducers for morphological analysis and generation is well known but their application in other areas of NLP is not well known. The authors show how to use them for date parsing, date validation and tokenization. \n\n2000\n\nIn a problem related to lemmatization, Minnen et al. at the University of Sussex show how to **generate words** in English based on the lemma, POS tag and inflection form to be generated. They write high-level descriptions or rules as regular expressions, which *Flex* compiles into finite-state automata. \n\n2006\n\nGrzegorz Chrupała publishes *Simple data-driven context-sensitive lemmatization*. Lemmatization is modelled as a classification problem where the algorithm chooses one of many \"edit trees\" that can transform a word to its lemma. Such trees are induced from wordform-lemma pairs. This work leads to a PhD dissertation in 2008 and the system is named *Morfette*. \n\n2015\n\nMany NLP systems take a pipeline approach. They do tagging followed by lemmatization, since POS tags can help the lemmatizer disambiguate. But there's a mutual dependency between tagging and lemmatization. Müller et al. present a system called *Lemming* that jointly does POS tagging and lemmatization using Conditional Random Fields (CRFs). It can also analyze OOV words. This work sets a new baseline for lemmatization on six languages. \n\n2016\n\nAt the SIGMORPHON Shared Task, it's noted that various neural **sequence-to-sequence** models give the best results. In 2018, a seq2seq model is used along with a novel context representation. This model, used within TurkuNLP in the CoNLL-18 Shared Task, gives the best performance on lemmatization. \n\n2018\n\nBergmanis and Goldwater use an **encoder-decoder** NN architecture in a lemmatizer they name *Lematus*. Both encoder and decoder are 2-layer Gated Recurrent Units (GRUs). They compare its performance against context-free systems. They use character contexts of each form to be lemmatized. Thus, fewer training resources are needed. 
They note that context-free systems may be adequate if a language has many unseen words but few ambiguous words. \n\n2019\n\nMalaviya et al. use a NN model to jointly learn morphological tags and lemmas. They use a encoder-decoder model with hard attention mechanism. In particular, they use a 2-layer LSTM for morphological tagging. For the lemmatizer, they use 2-layer BiLSTM encoder and 1-layer LSTM decoder. They compare their results with other state-of-the-art models: Lematus, UDPipe, Lemming, and Morfette.","meta":{"title":"Lemmatization","href":"lemmatization"}} {"text":"# 5G Deployment Options\n\n## Summary\n\n\nThough 5G has been standardized, it has a number of options. Two network operators can deploy 5G in very different ways. This choice of option depends on the spectrum licensed to an operator, the geographic area they serve (terrain and user density), capabilities of the equipment they use, and business factors (cashflow and decision making). \n\n3GPP has defined options covering both 4G and 5G technologies with respect to Radio Access Network (RAN) and Core Network (CN). These options can guide operators as they migrate from current 4G deployments to 5G deployments.\n\nIt's generally expected that operators would first deploy 5G NR, let 4G RAN and 5G NR coexist, and finally deploy 5G Core. This implies that 4G+5G handsets would come out first and they would connect to both 4G eNB and 5G gNB.\n\n## Discussion\n\n### What are the broad challenges in deploying a 5G network?\n\nIdeally, an operator acquires 5G licenses, invests in 5G equipment for both RAN and CN, and deploys the network for full coverage. The operator then asks subscribers to switch to 5G. After a short transition period, the old 4G network is retired.\n\nIn reality, subscribers may be slow to migrate since they have to invest in 5G-capable handsets. The operator's 5G subscription plans may be costlier. The 5G network may have poor coverage in many areas where it's been deployed in only the mmWave band. \n\nAn incumbent operator has most likely invested heavily on 4G licenses and equipment. Their current 4G licenses may be in spectrum bands not supported by 5G. Perhaps the equipment they use can't easily be upgraded to 5G.\n\nIt's also possible that the government has delayed the auctioning of 5G spectrum. Operators don't want to wait. They may want to offer 5G services on 4G licensed spectrum. Even with 5G licenses, they would want 4G to coexist with 5G and steadily migrate to 5G as more 5G subscribers are added.\n\n\n### Which are the main 5G deployment options?\n\nIn LTE, both RAN and CN had to use LTE standards. 5G gives more flexibility. For example, 4G RAN can be combined with 5G Core or 5G NR can be combined with 4G EPC. This gives rise to two broad deployment scenarios: \n\n + **Standalone (SA)**: Uses only one radio access technology, either LTE radio or 5G NR. Both control and user planes go through the same RAN element. Deployment and network management is perhaps simpler for operators. Inter-RAT handover is needed for service continuity. Under SA, we have option 1 (EPC + 4G eNB), option 2 (5GC + 5G gNB), and option 5 (5GC + 4G ng-eNB).\n + **Non-Standalone (NSA)**: Multiple radio access technologies are combined. Control plane goes through what's called the master node whereas data plane is split across the master node and a secondary node. There's tight interworking between 4G RAN and 5G NR. 
Under NSA, we have option 3 (EPC + 4G eNB master + 5G en-gNB secondary), option 4 (5GC + 5G gNB master + 4G ng-eNB secondary), and option 7 (5GC + 4g ng-eNB master + 5g gNB secondary).\n\n### What are the differences across options 3, 3a and 3x?\n\nIn all three options, control plane is between EPC and eNB via S1-C interface, and eNB and gNB via X2-C interface. There's no direct signalling traffic between EPC and gNB. The differences are in how user plane traffic is routed. Below we describe downlink but it applies to uplink as well. \n\nIn option 3, user plane traffic is from EPC to eNB where **PDCP sublayer splits** the traffic so that some traffic is sent to gNB across the X2-U interface. In option 3a, EPC has a direct S1-U interface to gNB. In this option **EPC splits** the traffic. Option 3x is a hybrid: EPC splits the traffic for eNB and gNB, and/or gNB PDCP sublayer sends some traffic to eNB. For example, eMBB services use 5G NR whereas VoLTE uses LTE radio. \n\ngNB connected to EPC via S1-U is more specifically called **en-gNB**. It's part of E-UTRA-NR Dual Connectivity (EN-DC). The interface between en-gNB and eNB is also called **Xx interface**. \n\nSimilar variations of routing user plane traffic involving 5G Core give rise to options 4a, 7a and 7x. \n\n\n### Could you compare possible migration paths from 4G to 5G?\n\nSince options 1 and 3 use EPC, they can't support many 5G use cases. It's been noted that, \n\n> There's no real 5G without 5G Core.\n\nHowever, option 3 enables faster time-to-market since core network can be upgraded later. Due to 5G NR, users can experience better throughput. However, with this increased traffic, EPC may become a bottleneck. Traffic flow is split at the EPC. \n\nFrom NSA option 3, the operator can migrate to NSA option 7 or SA option 5. With both these options, 5GC enables all 5G services. eNB and en-gNB are upgraded to ng-eNB and gNB respectively to connect to 5GC. Option 3 will continue to support UEs that can't talk to 5GC. \n\nIt's also possible to add SA option 2 to complement NSA option 3. An alternative path is to deploy SA option 2 from the outset, thus immediately enabling all 5G use cases. However, operator has to acquire and deploy 5G equipment. Ideally, NR coverage is achieved on a wide area. Otherwise, frequent inter-RAT handover to SA option 1 or NSA option 3 may occur. \n\n\n### What scenarios can benefit from 5G deployment options 4, 5 and 7?\n\nOptions 4, 5 and 7 enable operators to continue using legacy 4G equipment while connecting to 5GC. With the higher bandwidths of options 4 and 7, all 5G use cases are possible.\n\nMigration paths 3→5, 3→7, 3→4+2, 3→4, 7→4, 5→4, 1→4, 1→7, and 4→2 have been suggested. \n\nFor some, **option 5** is not attractive. Legacy 4G UEs have to be replaced. eNB has to be upgraded substantially. Lots of interoperability testing are needed. If UE moves out of 5G NR coverage (option 2), traditional MBB/voice use cases can be supported simply with option 1 and inter-RAT handover. In the long term, improving option 2 coverage is a better path. **Option 7** depends on option 5 and inherits the same problems. \n\n**Option 4** is an extension of option 2. Using dual connectivity, LTE radio is added to 5G NR anchor. This improves coverage and bandwidth. However, it requires upgrade to eNB, gNB and UE, along with necessary interoperability testing. Instead, it would be better to focus on improving option 2 coverage. 
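For quick reference, the option-to-node mapping described above can be captured in a small lookup structure. This is only an illustrative sketch in TypeScript, not 3GPP notation; it simply restates the combinations already listed in the text:

```typescript
// Illustrative summary of the deployment options discussed above.
type DeploymentOption = {
  core: 'EPC' | '5GC';
  master: 'eNB' | 'gNB' | 'ng-eNB';
  secondary?: 'en-gNB' | 'gNB' | 'ng-eNB';  // present only for NSA options
};

const options: Record<string, DeploymentOption> = {
  '1': { core: 'EPC', master: 'eNB' },
  '2': { core: '5GC', master: 'gNB' },
  '3': { core: 'EPC', master: 'eNB', secondary: 'en-gNB' },
  '4': { core: '5GC', master: 'gNB', secondary: 'ng-eNB' },
  '5': { core: '5GC', master: 'ng-eNB' },
  '7': { core: '5GC', master: 'ng-eNB', secondary: 'gNB' },
};

// Example: list the standalone options (no secondary node).
const standalone = Object.entries(options)
  .filter(([, o]) => !o.secondary)
  .map(([n]) => n);
console.log(standalone);  // ['1', '2', '5']
```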
\n\n\n### Could you highlight the differences among eNB, gNB, ng-eNB and en-gNB?\n\n3G's *NodeB (NB)* has become *evolved NodeB (eNB)* in 4G. In 5G, this has evolved to *next generation NodeB (gNB)*. ng-eNB and en-gNB are variations of eNB and gNB respectively, depending on CN. \n\nWe compare the different RAN elements: \n\n + **eNB**: A 4G network element. Connects to a 4G UE and EPC. This relates to options 1 and 3.\n + **gNB**: A newly introduced 5G network element. Connects to a 5G UE and 5G Core. This relates to options 2, 4 and 7.\n + **en-gNB**: Sits in a 4G RAN and connects a 5G UE to EPC. Both 4G and 5G radio resources are active using dual connectivity. eNB is the master node while en-gNB is the secondary node. \"en\" refers to E-UTRA New Radio. This relates to option 3.\n + **ng-eNB**: Connects to 5G Core but serves a 5G UE over 4G radio. \"ng\" refers to Next Generation. This relates to options 4, 5 and 7. There's dual connectivity in option 4 (gNB is master, ng-eNB is secondary) and option 7 (ng-eNB is master, gNB is secondary).\n\n### How do 5G deployment options map to spectrum bands?\n\nCurrent 4G deployments might be in sub-1GHz and 1-3GHz bands. 5G NR is then deployed in mmWave band and possibly in mid-bands 3.5-8GHz. This brings higher throughput/capacity/density and lower latency. At the same time, 4G RAN ensures good wide-area coverage and serves subscribers who haven't migrated to 5G. With Dual Connectivity (DC), high-band NR downlink can be combined with low-band 4G uplink. More throughput can be achieved via inter-band Carrier Aggregation (CA). This is **option 3**. \n\nWhen the operator activates 5G Core, **option 2** comes into play. Initially, this will be limited to 5G NR in mmWave band and mid-bands 3.5-8GHz for Fixed Wireless Access (FWA) and industrial deployments. At a later time, when 4G spectra are re-farmed, or via spectrum sharing, option 2 can be deployed to wider areas. When a UE moves out of option 2 5G NR coverage, it triggers intersystem handover to EPC, either in option 3 or 1. \n\nEven when 5G is widely deployed, for Massive Machine Type Communications (mMTC), NB-IoT and LTE-M will be used in **option 1**. \n\n\n### What's a possible migration path from LTE EPC to 5G Core?\n\nAs an example, we describe Samsung's offering. They claim that their LTE EPC is already virtualized and provides Control and User Plane Separation (CUPS). For 5G NSA, EPC software is upgraded for dual connectivity. For 5G SA, EPC *Network Elements (NEs)* become 5GC *Network Functions (NFs)*. Specifically, GW-C, GW-U, HSS and PCRP are upgraded to SMF, UPF, UDM and PCF respectively. New NFs AMF, NRF, NSSF, NEF and UDF are introduced. LTE's MME functionality goes into AMF, SMF and AUMF. The final deployment is a **common core** that covers LTE, 5G and Wi-Fi. \n\n5G Core is a Service-Based Architecture (SBA). NFs will be **virtualized** in the cloud and implemented using microservices and containers. LTE's stateful NEs that store UE state will be replaced with **stateless** NFs. Increasingly, cloud native architecture will be used with lightweight containers. There will be centralized orchestration of containers, network slicing, centralized operation, and centralized analytics. Overall, network control, monitoring and optimization will be **automated**. This will be the bigger impact of moving from LTE EPC to 5G Core. \n\n## Milestones\n\nDec \n2017\n\n3GPP publishes Release 15 \"early drop\". This includes **NSA option 3**. Corrections to this option are made in June 2018. 
\n\nApr \n2018\n\nGSMA's report shares Korea Telecom's 4G to 5G migration plan. Early deployments are likely to be NSA option 3 with 5G NR only in 28GHz mmWave band. As 5GC gets introduced, multi-RAT interworking would become important. 5G NR coverage would improve with 3.5GHz band. NSA option 7 would be used, with eLTE being the anchor. Finally, EPC would be retired. \n\nJun \n2018\n\n3GPP publishes Release 15 \"main drop\". This includes **SA option 2** and **SA option 5**. \n\nSep \n2018\n\nA white paper by Nokia recommends either option 3 or option 2 for initial 5G rollout. The report also identifies **option 3X** in which high bandwidth traffic flows are routed to 5G gNB to avoid overloading 4G eNB. LTE user plane (SGW/PGW) would require performance improvements and a distributed architecture. \n\nMar \n2019\n\n3GPP publishes Release 15 \"late drop\". This includes **SA option 4** and **SA option 7**.","meta":{"title":"5G Deployment Options","href":"5g-deployment-options"}} {"text":"# React Hooks\n\n## Summary\n\n\nReact is based on components. The logic to manage component state and update the user interface is encapsulated with the component. But what if we wish to reuse stateful logic across components, such as for logging or authentication? React allows this via mixins, higher-order components and render props. However, these complicate the component hierarchy. React Hooks provide a cleaner and more elegant way of doing this. \n\nReact's function components are stateless and without side effects. React Hooks combines the simpler syntax of function components but allows for states and side effects. \n\nClass components continue to be available in React. Hooks can coexist with class components, although they can't be used inside a class. In fact, React concepts such as props, state, context, refs and lifecycle are still relevant in Hooks.\n\n## Discussion\n\n### What are the advantages of using React Hooks?\n\nReact Hooks solves the problems of class components. With Hooks, we can reuse stateful logic without complicating component hierarchy. \n\nClass components may have code repetition. For example, code may be repeated in lifecycle events `componentDidMount` and `componentDidUpdate`. Custom Hooks enable easy code reuse. \n\nDevelopers coming from other languages have a hard time understanding the `this` reference. Class components tend to be verbose. They don't minify well. Hot reloading is unreliable. Hooks offer developers a way to more effectively use React's features. \n\nCode is harder to organize logically in class components. For example, fetching data and setting up event listeners are doing different things but they're probably mixed together in the same lifecycle method. Fetching data and its clean up is also likely to be split between `componentDidMount` and `componentWillUnmount`. With Hooks, we can organize logical units of code into functions. For this reason, Dan Abramov noted, \n\n> Hooks apply the React philosophy (explicit data flow and composition) *inside* a component, rather than just *between* the components.\n\n\n### Could you explain React Hooks with an example?\n\nThe figure shows an example that uses two types of Hooks: `useState` and `useEffect`. We note that Hooks are used within a function component, not a class component. Instead of the `render()` method found in class components, we simply return the component at the end, whose syntax is in JSX format. We no longer use the `this` variable that was common in class components. 
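A sketch of the kind of component the figure describes is shown below, written here as a TypeScript (TSX) function component; the component name is illustrative and the code is a reconstruction, not the exact figure:

```tsx
import { useState, useEffect } from 'react';

// Illustrative reconstruction of the described example.
function ClickCounter() {
  // useState returns the current state and a function to update it.
  const [count, setCount] = useState(0);

  // useEffect runs after the component is rendered or re-rendered.
  useEffect(() => {
    document.title = `Clicked ${count} times`;
  });

  // No render() method and no `this`: the JSX is simply returned.
  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}

export default ClickCounter;
```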
\n\nThis component maintains a state called `count` that can be set with a function named `setCount()`. Both these are returned by `useState`. `setCount()` is triggered when the button is clicked. This updates the state, which in turn triggers a re-rendering of the component. \n\nAfter the component is rendered or re-rendered, the `useEffect` Hook is triggered. This simply updates the document title in this example. \n\nFinally, we observe that Hooks are called in response to UI events, DOM rendering, etc. Therefore, they're not independent of React lifecycle methods. React Hooks \"hook into\" React state and lifecycle features from function components. \n\n\n### Which are the Hooks that React provides?\n\nAmong the basic Hooks are `useState`, `useEffect`, and `useContext`. Additional Hooks include `useReducer`, `useCallback`, `useMemo`, `useRef`, `useImperativeHandle`, `useLayoutEffect`, and `useDebugValue`. We briefly describe these (refer to the Hooks API for more details): \n\n + `useState`: Initializes and manages component state.\n + `useEffect`: Called after DOM is rendered to the screen. Side effects such as logging or timers can be executed here.\n + `useContext`: Accepts a context object and returns its current context value based on the nearest ancestor's value prop.\n + `useReducer`: Better than `useState` for managing complex state logic.\n + `useCallback`: Given a callback and its dependencies, obtain a memoized callback. Useful to prevent unnecessary rerenders.\n + `useMemo`: Optimize expensive computations by returning a memoized value. Recomputes happen only when dependencies change.\n + `useRef`: Create a container to hold a mutable value. Useful for accessing child elements imperatively.\n + `useImperativeHandle`: Customize instance value that's exposed to parent components. However, prefer to avoid imperative code.\n + `useLayoutEffect`: Like `useEffect` but called synchronously before browser \"paints\" the screen. Useful to avoid flicker when an effect manipulates DOM.\n + `useDebugValue`: Display a label for custom Hooks in React DevTools. Useful when such Hooks are part of shared libraries.\n\n### What's the React Hooks lifecycle?\n\nFunction components with Hooks can be mounted, updated and unmounted. Each of these go through the render phase and commit phase. During the latter the DOM is updated. \n\nDifferent Hooks are called at different phases of the React Hooks lifecycle. `useMemo` is called during the render phase when the component is mounted. `useState`, `useReducer`, `useContext` and `useCallback` are called during the render phase when component state is updated. Render phase is not the place for executing side effects. \n\n`useLayoutEffect` and `useEffect` are called after React has updated the DOM but there's an important difference. `useLayoutEffect` is called synchronously. It will block the browser before browser has had a chance to \"paint\" the screen. `useEffect` is called after browser has painted the screen. Generally, `useEffect` is preferred unless the Hook makes visual updates to the DOM. \n\nBoth `useLayoutEffect` and `useEffect` are also called when the component is unmounted, provided these Hooks return a callback function. \n\n\n### Could you describe the State Hook?\n\nThe most basic Hook is `useState`. This takes an initial value and returns a stateful value plus a function to update it. When the component first renders, returned value is same as initial value. 
Next time the component renders, the current state is available, that is, it's not initialized again. This is what React Hooks enables. For expensive computation, a function can be passed into `useState`. \n\nFor updating state, a value can be passed to the update function, `fn(val)`. If current value is needed for the update, a function can be passed, `fn((c) => c+1)`. Where state contains sub-values such as `{name: 'john', age: 22}`, `useState` doesn't merge the values. We need to use the spread syntax, `{...prevState, ...updatedValues}`. \n\n\n### Could you describe the Effect Hook?\n\nHook `useEffect` allows us to perform side effects, such as fetching data or updating DOM. Having these side effects inside the main body of the function component (used during the render phase) can result in UI bugs and inconsistencies. Hence, use this Hook to implement side effects. \n\nThis Hook enables imperative programming, as a feature that complements React's purely functional programming approach. It accepts a function as an argument and optionally a second argument. Without the second argument, the Hook is called after every render. If the second argument is an empty array, the Hook is called only once. If array is non-empty, the Hook is called conditionally only when those dependencies change. \n\nThe Hook can also return a function, which is called when the component unmounts. \n\nHaving this Hook inside the component allows it to access component state. This state is never stale since with each render we recreate the function wired to the Hook. \n\n\n### What are the rules of Hooks?\n\nReact Hooks have only two essential rules that developers need to follow. Hooks should be called only at the top level within the component. Don't call them within loops, conditions or nested functions. This is because React Hooks expects that Hooks are always called in the same order whenever the component renders. \n\nThe second rule is that don't call Hooks from regular JavaScript functions. Call them only from React function components or from custom Hooks. This ensures that all stateful logic is visible within the component's source code. \n\nTo ensure that components meet these rules, developers can install the *React Hooks ESLint Plugin*, aka `eslint-plugin-react-hooks` on NPM. The plugin will warn developers when these rules are violated. When we create the default React app, this plugin is installed by default.\n\n\n### Can class components interoperate with React Hooks?\n\nReact Hooks don't replace class components. An application can use class components, function components and Hooks. In fact, these approaches can be used in the same DOM tree. However, at code level, it's not possible to use React Hooks within a class. \n\nDevelopers can adopt Hooks for new components. Legacy components could be migrated to Hooks incrementally. Particularly for render props and higher-order components, Hooks offer a simpler approach and reduce nesting of DOM elements. \n\nTo relate to component lifecycle methods, we note the following: \n\n + `constructor`: Not required in Hooks. Initialization can be done with `useState`.\n + `getDerivedStateFromProps`: Update component while it's being rendered.\n + `shouldComponentUpdate`: Wrap a function component within `React.memo`, a higher-order component. This is not exactly a Hook. 
Compares props (not states) in a shallow manner.\n + `render`: Function component body is returned.\n + `componentDidMount`, `componentDidUpdate`, `componentWillUnmount`: `useEffect` Hook can implement all three.\n + `getSnapshotBeforeUpdate`, `componentDidCatch`, `getDerivedStateFromError`: No equivalent in Hooks.\n\n### What are some criticisms of React Hooks?\n\nFor someone used to class components, React Hooks is something new to learn. Learning fatigue is common among JavaScript developers. The new knowledge is React-specific and can't be reused anywhere else. \n\nReact Hooks can't be used with class components. This means that legacy code needs to be migrated, which presents the risk of introducing bugs. Developers also can't opt out, despite what the documentation claims. There's a good chance that the ecosystem and dependencies will move to React Hooks. When that happens, our own projects will have no choice but to adopt it. \n\nSome may find that the rules of React Hooks are rather limiting. Reading and understanding component code and its control flow can be hard when a component uses many types of Hooks. Control flow is determined by component lifecycle and not the ordering in the source code. \n\nWhere dependencies are passed to a Hook, dependency comparison works as intended for primitive types but not for arrays or other objects. Comparison is by reference and developers can't customize this. \n\n\n### Where can I learn more about React Hooks?\n\nThe section on React Hooks in the React official documentation is a good place to start. The same documentation also has the Hooks API Reference. There's a section on building custom Hooks. \n\nThose who like to learn from examples can visit the useHooks website. Some examples include show/hide UI elements, subscribe to Firestore data, handle asynchronous calls, render a component based on the current state of user authentication, sync state to local storage, and more. \n\nXebia has published a useful React Hooks cheat sheet. This includes class-style and Hooks-style code for comparison. Emmanuel's cheat sheet is also worth reading.\n\nGunawardhana has shared a list of ten useful React Hooks libraries. Mostly, these are custom Hooks built on top of the built-in React Hooks. \n\nAryan shares one approach to testing React Hooks. \n\n## Milestones\n\nMay \n2013\n\nFacebook **open sources React** at JSConf US. An earlier prototype can be traced to 2011; it was then developed within Facebook during 2012. \n\nFeb \n2019\n\nReact v16.8.0 is released. This introduces **React Hooks**. In the change log, it's described as \"a way to use state and other React features without writing a class.\" This change affects React DOM, React DOM Server, React Test Renderer and React Shallow Renderer. All of these must be at least version 16.8 to use React Hooks. This release also includes the **React Hooks ESLint Plugin** for linting. \n\nMar \n2019\n\nReact Hooks becomes available in **React Native v0.59**. \n\nAug \n2019\n\nReact v16.9.0 is released. The React Hooks ESLint Plugin is updated to treat the use of Hooks at module level (outside a function component) as a violation. \n\nOct \n2020\n\nReact v17.0.0 is released. As an experimental Hook, `unstable_useOpaqueIdentifier` is added.","meta":{"title":"React Hooks","href":"react-hooks"}} {"text":"# Antivirus Software\n\n## Summary\n\n\nMalicious software, or malware, refers to programs that can harm computer systems and steal personal information, causing severe damage such as financial loss. 
These malware can enter through various ways including as an email attachment, on a USB drive, or user visiting infected websites or clicking malicious links. \n\nAntivirus software is a program that protects computer systems from malicious software. Antivirus software needs privileged access to the system to function properly. Many antivirus software provide real-time threat protection against harmful applications. It's also possible to manually perform security scans. \n\nIt's advisable to use antivirus programs on desktops. They can be optional in mobile phones if users don't download any app from unofficial sources or click suspicious links.\n\n## Discussion\n\n### What are the features of an antivirus software?\n\nBelow are some common features of an antivirus software:\n\n + **Real-time scanning**: Antivirus software that do security checks while using the system are real-time scanning software.\n + **Quarantine**: An isolated place where harmful files are kept.\n + **Bank mode**: A feature that provides a clean and safe virtual environment within the real desktop environment.\n + **Ransomware Shield**: Secures personal photos, documents, and other files from being modified, deleted, or encrypted by ransomware attacks.\n + **Remote Access Shield**: Protects the computer from remote and unauthorized access.\n + **Sandbox**: A sandbox is a system in which suspicious files are tested and analysed inside a virtual machine to detect malware.\n + **Hack alerts**: Warns if sensitive data is leaked on the Dark Web and other online sources.\n + **Password protection**: Protects passwords stored in web browsers.\n + **Webcam shield**: Restricts applications and malware from gaining unauthorized access to the webcam.\n + **Masking IP address**: Hide the IP address while browsing internet.\n + **Freemium vs Paid versions**: Basic features with free versions. More features with paid versions.\n\n### What are the types of antivirus programs?\n\n**Standalone antivirus software** are specialized to scan and remove certain viruses. They can easily be carried from one place to another in a USB drive. These type of antivirus programs do not provide real-time protection. Windows defender offline, Metascan client and Microsoft scanner are some examples. \n\n**Security software suites** provide more security features than antivirus software. They not only scan and remove the viruses but also protect from other malicious attacks. They provide complete protection to the user's computer. They include extra security features such as anti-spyware, firewall, site authentication and parental controls. Bitdefender Total Security, Norton 360 Deluxe, and Kaspersky Total Security are some of its examples. \n\n**Cloud-based antivirus software** analyse the files on the cloud instead of on the user systems. They have two parts: (a) client side that's installed on user systems; (b) cloud side that captures the data gathered from client side and runs the scans. The latter saves a lot of memory and resources on user computers. \n\n\n### What's the architecture of antivirus software?\n\nAn antivirus software consists of four main components namely Anti-virus Manager, Anti-virus Engine, Malware Signature Database and Anti-virus Driver. \n\nThe Anti-virus Manager handles overall management functions. It's responsible for executing malware scans, reviewing the results of malware restoration, raising alarms, and setting a periodic scan policy for malware. It updates the Malware Signature DB and the Anti-virus Engine. 
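\n\nTo make this division of labour concrete, below is a minimal, hypothetical sketch of hash-based signature matching in Python. Real engines match byte patterns, rules and heuristics rather than whole-file hashes, so treat it only as an illustration of the Manager, Engine and Signature DB roles; the database entry and paths are placeholders.\n\n```python\nimport hashlib\nfrom pathlib import Path\n\n# Hypothetical signature database: hashes of known malware samples.\nSIGNATURE_DB = {\n    '0' * 64,  # placeholder hash of a known sample\n}\n\ndef scan_file(path: Path) -> bool:\n    # Engine role: flag a file whose hash matches a known signature.\n    digest = hashlib.sha256(path.read_bytes()).hexdigest()\n    return digest in SIGNATURE_DB\n\ndef scan_tree(root: Path) -> list:\n    # Manager role: walk a directory tree and collect flagged files.\n    return [p for p in root.rglob('*') if p.is_file() and scan_file(p)]\n```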
\n\nThe Anti-virus Engine is responsible for handling malware in the system. It analyses the file information and marks it safe or harmful based on the scanning. \n\nIn the Malware Signature Database, signatures of malware are kept. The antivirus engine evaluates the suspicious file by comparing its characteristics to these signatures. \n\nThe Anti-virus Driver is a real-time malware entry monitoring function that revokes access to malware-infected user files. It checks the input and output of the file and determines the exact file path to check for malware. \n\n\n### Which are major virus detection methods used by antivirus software?\n\nThere are mainly two virus detection techniques used by antivirus programs namely Heuristic-based detection and Signature-based detection. \n\nIn **signature-based detection**, the antivirus software scans the files and compares it with the predefined malicious code present in its database. The signature could include malicious network attack behaviour, content of email subject lines, file hashes, known byte sequences or malicious domains. If a file matches any malicious code, it flags it as virus and takes necessary actions. The antivirus can't detect any new malware or even any updated variant of malware that is not written in definition file. With rapid growth in variants of malware, this detection method becomes ineffective in long run. \n\nIn **heuristic-based detection** the antivirus software uses data mining and machine learning techniques to learn the behaviour of an executable file. For instance, it may detect commands to deliver payloads disguised within a Trojan horse virus or those used to distribute a worm virus. It's able to identify new or altered variants of malware even in the absence of updated malware code in the database. It's commonly used in combination with signature-based detection. \n\n\n### What are other virus detection techniques used by antivirus software?\n\n**Behavioural-based detection** analyses every single line of code in the file before its execution. If it finds the objective of the file suspicious or harmful (such as taking access to any critical or irrelevant files, processes, or internal services) it flags it as malware. Degree of potential danger is calculated by the antivirus software and required actions are taken afterwards. Some examples of malicious behaviours include any attempt to discover a sandbox environment, disabling security controls, installing rootkits, and registering for autostart. \n\n**Sandbox** is a system for malware detection that runs any suspicious object in a virtual machine (VM). If the object performs malicious actions in VM, the sandbox registers it as malware. VMs are isolated from the real system. \n\n**Cloud-based detection** identifies malware by collecting data from host computers and analysing it on the provider's infrastructure, instead of performing the analysis locally on the system. It's done by capturing relevant details about the file and the context of its execution and providing them to the cloud engine for processing. \n\n\n### What are the differences between server, desktop and mobile antivirus software?\n\nMost antivirus software are built for **desktops**, specially Windows as it has the largest user base. Many antivirus software vendors offer their product at different price ranges, starting with free version which offer only basic protection. 
These free versions often don't protect against malicious attachments in emails, suspicious website links and other types of cyber attacks. Microsoft offers basic free antivirus software with Windows, known as Windows Defender. \n\n**Mobile antivirus** software differs from the desktop kind as it has less access and control over the host device. Mobile antivirus software can't access OS files, filter websites, run in-memory scans or real-time protection engines. Android users need antivirus software more than iOS users as iOS devices have better security features. \n\nA good **server antivirus** software detects a virus before it infects the system. It also stops data transfer between the server and external drives. It defends information on servers from every kind of malicious application. It provides security for different server subsystems such as email, firewall and proxy. \n\n\n### What are the challenges faced with antivirus software?\n\nOne of the biggest challenges with antivirus software is that it slows down the computer. Antivirus software loads each time the system boots up, which makes booting slower. Without a powerful processor and a good amount of memory, antivirus software can slow down systems considerably. \n\nNo antivirus software provides 100% security from all kinds of cyber threats. Inexperienced users might fall into the false assumption of being completely secure. If the antivirus uses heuristic detection, it might flag harmless websites and software as malicious. \n\nAntivirus software runs at the kernel level of the operating system. It has privileged access to all the core files of the OS and system. Hence it becomes a potential target for cyber attacks. The US National Security Agency (NSA) and the UK Government Communications Headquarters (GCHQ) intelligence agencies have been exploiting anti-virus software to spy on users. In fact, it's been noted that popular software such as Acrobat Reader, Microsoft Word or Google Chrome are harder to exploit than 90% of the anti-virus products. \n\n\n### Are there any alternatives to antivirus software?\n\nBelow are some alternatives to antivirus software:\n\n + **Network firewall**: Network firewalls stop unknown programs and processes from accessing the system but unlike antivirus programs, they don't identify or remove any virus present in the system. They provide safety from malware coming from other websites on a network.\n + **Cloud antivirus**: Cloud antivirus offers features like other antivirus programs but relies on the cloud to process malware. It's faster than usual antivirus software as it does most of its computing on the cloud. Due to an up-to-date database of malware and viruses, it can find new versions of malware. Panda Cloud Antivirus and Immunet are examples of cloud antivirus.\n + **Online scanning**: Some antivirus vendors provide websites that offer free scanning of the system. These can be used without downloading heavy software on local systems.\n + **Specialized tools**: Virus removal tools are helpful in removing stubborn/sticky viruses. Avast Free Anti-Malware, AVG Free Malware Removal Tools and Avira AntiVir Removal Tool are some examples of such tools.\n\n\n## Milestones\n\n1971\n\nThe first known virus, named the Creeper Virus, is discovered. It infects DEC PDP-10 mainframe computers running the TENEX operating system. Ray Tomlinson writes a program to delete the Creeper virus. This program is known as **The Reaper**. 
Some people consider the Reaper to be the first antivirus program, but the Reaper is itself a virus made to remove the Creeper Virus. \n\n1987\n\nBernd Fix is credited with creating the **first antivirus product**. However, there are competing claims for it. John McAfee establishes the McAfee company. Peter Paško, Rudolf Hrubý, and Miroslav Trnka create the first version of NOD antivirus. The first two heuristic antivirus utilities are released: Flushot Plus by Ross Greenberg and Anti4us by Erwin Lanting. \n\n2000\n\nRainer Link and Howard Fuhs start the first **open-source antivirus engine** called OpenAntivirus Project. \n\n2001\n\nTomasz Kojm releases the first version of ClamAV, the first ever open-source antivirus engine to be commercialised. \n\n2005\n\nAV-TEST (a Germany-based independent organization) reports that there are 333,425 unique malware samples in their database. In 2007 alone, AV-TEST reports 5,490,960 new unique malware samples. \n\n2008\n\nMcAfee Labs adds the industry-first **cloud-based anti-malware** functionality to VirusScan (an antivirus) under the name Artemis. \n\n2010\n\nA faulty update on the AVG anti-virus suite damages 64-bit versions of Windows 7, rendering them unable to boot due to an endless boot loop. This highlights the danger of antivirus programs when not properly managed, since they have privileged access.\n\n2011\n\nMicrosoft Security Essentials (MSE) removes the Google Chrome web browser, a rival to Microsoft's own Internet Explorer. MSE flags Chrome as a Zbot banking trojan. In 2017, Google Play Protect on the Moto G4 flags a Bluetooth system app as malware, causing Bluetooth functionality to become disabled for all apps. In 2022, Microsoft Defender flags all Chromium-based web browsers and Electron-based apps (WhatsApp, Discord, Spotify) as a severe threat. These are some examples of **false positives**.","meta":{"title":"Antivirus Software","href":"antivirus-software"}} {"text":"# Knowledge Distillation\n\n## Summary\n\n\nDeep learning is being used in a plethora of applications ranging from Computer Vision and Digital Assistants to Healthcare and Finance. The popularity of the fields of Machine Learning and Deep Learning can be attributed to the high accuracy of the obtained results, which is largely due to averaging the predictions of an ensemble of thousands of models. However, such computationally intensive models cannot be deployed on mobile devices or FPGAs for instant use. These devices have constraints on resources like limited memory and input/output ports.\n\nOne way of mitigating this problem is to use Knowledge Distillation. We train an ensemble of models or a complex model ('teacher') on the data. We then train a lighter model ('student') with the help of the complex model. The less-intensive student model can then be deployed on FPGAs.\n\n## Discussion\n\n### What is a teacher-student network?\n\nThe best Machine Learning models are those that average the predictions of an ensemble of thousands of models. While deploying on hardware devices like FPGAs, however, problems ensue. FPGAs have a limited number of I/O ports, which forces developers to drastically reduce the number of inputs and outputs at each layer of their network.\n\nTo alleviate this problem, we use two networks - a teacher and a student. Essentially, we train a bulky ensemble of models (teacher) and use a smaller, lighter model (student) for testing, prediction and deployment. The student is trained to mimic the prediction capabilities of the teacher. 
How we go about doing this constitutes the crux of Knowledge Distillation.\n\nIn other words, the ensemble is simply a function that maps input to output. Transferring the knowledge in this function to the student network is knowledge distillation. \n\n\n### What is dark knowledge and softmax temperature?\n\nIn classification problems, neural networks output *logits* that are computed for each class. A *softmax layer* \"normalizes\" these logits \\(z\_i\\) into probabilities \\(q\_i\\). For a softer distribution, logits are 'softened' or divided by a constant value, called the **temperature** \\(T\\): \n\n$$q\_i = \frac{exp(z\_i/T)}{\sum\_j exp(z\_j/T)}$$\n\nWhen the temperature is 1, the probabilities obtained are said to be unsoftened. Hinton et al. note that, in general, the temperature depends on the number of units in the hidden layer of a network. For example, when the number of units in the hidden layer was 300, temperatures above 8 worked well, whereas when the number of units was 30, temperatures in the range of 2.5-4 worked best. The higher the temperature, the softer the probabilities. \n\nConsider a classification problem with four classes, `[cow, dog, cat, car]`. If we have an image of a dog, unsoftened hard targets would be `[0, 1, 0, 0]`. This doesn't tell much about what the ensemble has learned. By softening, we may get `[0.05, 0.3, 0.2, 0.005]`. It's clear that the predicted probability of a cow is 10 times that of a car. It's this 'dark' knowledge that needs to be distilled from the teacher network to the student. \n\n\n### How could I implement knowledge distillation?\n\nBuciluǎ et al. designed the first methods of model compression. Later, Hinton et al. showed the means of distilling the knowledge from an ensemble of models into a single, lighter model.\n\nFor example, in image classification, the student would be trained on the class probabilities, or logits, output by the teacher. The logits represent a similarity metric over the classes and help in training good classifiers. Extracting this form of 'dark knowledge' from the teacher network and passing it on to the student is called **distillation**.\n\nKariya's Medium article provides a simple implementation of Hinton's paper. He touches upon dark knowledge and proceeds to build a simple CNN-based network on the MNIST dataset, showing how the teacher-trained student performed better than a standalone student. \n\nImplementing knowledge distillation can be a resource-intensive task. It requires the training of the student model on the teacher's logits, in addition to training the teacher model.\n\nWhile training the student, care should be taken to avoid the vanishing gradient problem, which can occur if the learning rate of the student is too high.\n\n\n### How about performance?\n\nThe objective of distilling the knowledge from an ensemble of models into a single, lightweight model is to ease the processes of deployment and testing. It is of paramount importance that accuracy not be compromised in trying to achieve this objective.\n\nIn the original paper authored by Hinton et al., the performance of the student network after knowledge distillation improved, when compared with a standalone student network. Both networks were trained on the MNIST dataset of images. 
The accuracies of the various models have been tabulated.\n\nAs is obvious from the table, the best results are obtained from the bulky ensemble of models and their student alternatives must be used only in case of constrained resources.\n\n\n### What are the challenges with Knowledge Distillation?\n\nKD is limited to classification tasks that use softmax layer. Sometimes the assumptions are too strict, such as in FitNets where student models may not suit constrained deployment environments. Other approaches to model compression may therefore be preferred over KD. \n\nHowever, KD continues to be a promising area of research. In 2017, it was adapted for multiclass object detection. In 2018, KD was applied to construct specialized student models for visual question answering. Also in 2018, Guo et al. improved the robustness of student network so that it resists perturbations. \n\nIn some domains such as healthcare, DNNs are not preferred. Decision trees are preferred since their predictions can be more easily interpreted. KD has been used to distil DNN into decision tree and thereby provide good performance and interpretability. \n\n## Milestones\n\n1989\n\nHanson and Pratt propose **network pruning** using biased weight decay. They call their pruned networks *minimal networks*. In the early 1990s, other pruning methods such as optimal brain damage and optimal brain surgeon are proposed. These are early approaches to compress a neural network model. Knowledge distillation as an alternative is invented about two decades later.\n\n2006\n\nBuciluǎ et al. publish a paper titled *Model Compression*. They present a method for “compressing” large, complex ensembles into smaller, faster models, usually without significant loss in performance. They use the ensemble to label large unlabelled datasets. They then use this labelled data to train a single model that performs as well as the ensemble. Because it's not easy to obtain large sets of unlabelled data, they develop an algorithm called *MUNGE* to generate pseudo data. This work is limited to shallow networks. \n\n2014\n\nBa and Caruana propose the idea of **teacher-student** learning method. They show that shallow models can perform as well as deep models. A complex teacher network (either a deep network or an ensemble) is trained. Instead of using the softmax output, the logits are used to train the shallow student network. Thus, the student network benefits from what the teacher network has learned without losing information via the softmax layer. Some call this **softened softmax**. \n\n2014\n\nHinton et al. introduce the idea of passing on the 'dark' knowledge from an ensemble of models into a lighter, deployable model. In a paper published in March 2015, they explain that they're \"distilling knowledge\" from the complex model. The core idea is that models should generalize well to new data rather than optimize on training data. Instead of using logits, they use **distillation**, in which the softmax is used with a higher temperature, also called \"soft targets\". They note that using logits is a special case of distillation. \n\n2015\n\n**FitNet** aims to produce a student network that's thinner than teacher network while being of similar depth. In addition to the teacher's distilled knowledge of the final softmax layer, Fitnets also make use of intermediate-level hints from the hidden layers. Yim et al. propose a variation of this in 2017 by distilling knowledge from the inner product of features of two layers. \n\n2018\n\nFurlanello et al. 
show that student models parameterized similarly to teacher models outperform the latter. They call these **Born-Again Networks (BANs)** where model compression is not the goal. Students are trained to predict correct labels plus match the teacher's output distribution (knowledge distillation). \n\n2019\n\nResearchers at the Indian Institute of Science, Bangalore, propose **Zero-Shot Knowledge Distillation (ZSKD)** in which they don't use the teacher's training dataset or a transfer dataset for distillation. Instead, they synthesize pseudo data from the teacher's model parameters. They call this **Data Impressions (DI)**. This is then used as a transfer dataset to perform distillation. Another research group, with the aim of reducing training, shows that just 1% of the training data can be adequate. \n\n2019\n\nPark et al. look at the mutual relationships among data samples and transfer this knowledge to the student network. Called **Relational Knowledge Distillation (RKD)**, this departs from the conventional approach of looking at individual samples. Liu et al. propose something similar, calling it **Instance Relationship Graph (IRG)**. Attention network is another approach to distil relationships. \n\nSep \n2019\n\nYuan et al. note that conventional KD can be reversed; that is, the teacher can also learn from the student. Another observation is that a poorly-trained teacher can improve the student. They see KD not just as similarity across categories but also as a regularization of soft targets. With this understanding, they propose **Teacher-free Knowledge Distillation (Tf-KD)** in which a student model learns from itself.","meta":{"title":"Knowledge Distillation","href":"knowledge-distillation"}} {"text":"# REST API to GraphQL Migration\n\n## Summary\n\n\nWhile GraphQL has its limitations, there are many advantages that make it worthwhile to migrate existing REST APIs to GraphQL. To reap these benefits, it's important to think in terms of graphs rather than endpoints. \n\nThe migration itself can be done in one go for small projects. An incremental approach is preferred for complex projects handled by large teams. For most projects, it's expected that current REST APIs will coexist with new GraphQL APIs. This is because clients may not migrate to new app versions or update their code immediately.\n\nIn general, migration will involve building the **GraphQL schema** and writing the **resolvers** for that schema.\n\n## Discussion\n\n### What's a possible migration path from REST to GraphQL?\n\nLet's assume that we're using a REST API and a React client (a). We can start by keeping the REST endpoints but wrapping them with a **GraphQL gateway** (b). We need to define the GraphQL schema. The React client will now call a single GraphQL endpoint. Resolvers will use REST endpoints to fetch the data. \n\nA natural evolution from here is to implement the **GraphQL server** (c). We can remove the gateway. The React client will directly call the new GraphQL API. This has direct access to the database layer and therefore lower latency compared to the gateway architecture. REST endpoints may coexist to support legacy clients.\n\nA React client does imperative data fetching and might do manual client-side caching. By migrating to **Apollo Client**, we can do declarative data fetching and zero-config caching. By using Apollo Link REST, we don't even need changes at the server side. This could be an easy way to get started. The final goal is to have a GraphQL API and Apollo Client (d). 
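\n\nAs an illustration of the gateway in step (b), here's a minimal sketch whose resolvers call legacy REST endpoints. It assumes the Python Graphene library and a hypothetical REST API; the same idea can be written with Apollo Server in JavaScript.\n\n```python\nimport graphene\nimport requests\n\nREST_BASE = 'https://api.example.com'  # hypothetical legacy REST API\n\nclass Post(graphene.ObjectType):\n    id = graphene.ID()\n    title = graphene.String()\n\nclass User(graphene.ObjectType):\n    id = graphene.ID()\n    name = graphene.String()\n    posts = graphene.List(Post)\n\n    def resolve_posts(parent, info):\n        # Resolver delegates to a legacy REST endpoint.\n        r = requests.get(f'{REST_BASE}/users/{parent.id}/posts')\n        return [Post(**p) for p in r.json()]\n\nclass Query(graphene.ObjectType):\n    user = graphene.Field(User, id=graphene.ID(required=True))\n\n    def resolve_user(root, info, id):\n        r = requests.get(f'{REST_BASE}/users/{id}')\n        return User(**r.json())  # assumes the payload fields match the schema\n\nschema = graphene.Schema(query=Query)\nresult = schema.execute('{ user(id: 1) { name posts { title } } }')\n```\n\nOnce all clients call the gateway, the resolvers can be rewritten against the database directly, which is step (c).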
\n\n\n### To design for GraphQL, what sort of mindset should I have?\n\nOne study found that reducing the number of API calls by migrating to GraphQL is not a trivial exercise. Suppose the current code is organized to consume small amounts of data returned by specific REST endpoints. Refactoring this into fewer endpoints that return large graph structures requires a graph-based mindset.\n\nDevelopers have to stop thinking in terms of endpoints and start thinking in terms of graphs. It's important to understand this before designing the GraphQL schema. A typical endpoint-based approach is to ask, \"What pages are there and what data does each of them need?\" With a graph-based approach, we should instead focus on the data and its relationships, and seek to expose these to clients. \n\nConsider a blogging site. Let's think about exposing user information. Relationships from the user might include posts, comments, liked articles, groups, etc. Thus, we are building a graph around the user. Relationships are bidirectional. When focusing on a comment, we should also link it back to the user. \n\nIn short, \n\n> Think in graphs, not endpoints. Describe the data, not the view.\n\n\n### How is TypeScript relevant to GraphQL?\n\nUnlike vanilla JavaScript, TypeScript gives type safety at the client side. GraphQL gives type safety at the server side. \n\nSuppose we have a client that uses only JavaScript. If this client makes a query to the GraphQL server in which the types don't match, the error will be caught at the server only at runtime. Instead, if the client code had been in TypeScript, such a mismatch would have been caught during the build process. We can deploy our code with a lot more confidence. \n\nA good approach is to define the GraphQL schema and then use tools to auto-generate TypeScript types for the front-end. Ideally, we could extend this to the database layer so that from the browser to the database, types are consistent. \n\nIn general, we should strive for type consistency systemwide. At Airbnb, they defined a Thrift IDL that was then compiled into a GraphQL schema. Moreover, they then automatically generated TypeScript types from the GraphQL schema. \n\n\n### Should I implement caching for my GraphQL service?\n\nA single client query can trigger multiple resolvers. GraphQL resolvers run as independent units. This has the benefit that even if some of them fail, the ones that succeed return the requested data to the client. One problem is that across resolvers there could be many duplicate calls to REST endpoints. \n\nIt's to solve this problem that a caching layer is helpful. Resolvers need not worry about optimizing their calls to REST endpoints. At Airbnb, engineers approached this problem via what they call a *data-centric schema* and a *data hydration layer*. \n\nAt another level, we can cache the client request-response. Since GraphQL doesn't support HTTP-level caching, we need an alternative. It's possible to use application-level caching, such as Spring Boot caching. Database queries can also be optimized via caching. \n\n\n### Could you share some case studies on migrating to GraphQL?\n\nAt Netflix, the Marketing Technology team adopted GraphQL to solve their network bandwidth bottlenecks. Their use cases were complex and building custom REST endpoints for each page was cumbersome. GraphQL matched their graph-oriented data structures. They could roll out features a lot faster. A GraphQL middle layer reduced the number of direct calls from the browser. 
Instead, server-to-server calls within the data centre gave an 8x performance boost. A payload of 10MB reduced to 200KB. \n\nAt Airbnb, they reconciled legacy presentation services with GraphQL by having Thrift/GraphQL translators per service. These services were then exposed to clients via a GraphQL gateway service. \n\nAt PayPal, a user checkout involved many round trips. Reducing round trips with REST meant overfetching data. They also found that UI developers spent less time building the UI and more time figuring out where and how to fetch data. After moving to GraphQL, data performance and developer productivity improved. \n\nInterested readers can head to the official GraphQL website that shares a number of GraphQL case studies.\n\n\n### Are there tools that help with GraphQL migration?\n\nGraphiQL is an in-browser IDE. Via GraphiQL, you can look at the schema, make queries and look at the responses. GraphQL Editor, GraphQL Playground and GraphQL Voyager are other useful tools. \n\nApollo Server and Apollo Client provide server and client-side implementations of GraphQL. Apollo Client can integrate with React, Angular or Vue frameworks. As an alternative to Apollo Client, there's Relay to act as a bridge between GraphQL and React. \n\nThere are many tools that can read your REST API files and export a GraphQL schema. Apimatic's API Transformer is one such tool. Input files can be in various formats such as OpenAPI, RAML, or API Blueprint. This tool can also migrate SOAP users by translating WSDL files. \n\nSome clients may want to use only REST. To avoid maintaining two codebases, we can have a GraphQL schema and generate REST APIs from it. Sofa is a Node.js package that does this. \n\n\n### How do I map REST features to their equivalent GraphQL features?\n\nUsing REST, a client would make an endpoint call, parse the response and make further calls to get all relevant data. This is the N+1 problem. This can be solved using the nested API design but this leads to overfetching. With a well-designed GraphQL API, clients can, and should, request only the relevant data with a single API call. Moreover, we should make use of **connections**. A connection encapsulates the queried node and other *nodes* to which it's connected by *edges*. This comes directly from the underlying graph-based data abstraction. \n\nIn REST, we paginate with a query string such as `?limit=1&page=2`. **Cursor-based pagination** is the GraphQL equivalent. It's typically used with connections using arguments `first`, `last`, `offset` or `after`. \n\nIn REST, POST, PUT or DELETE requests modify data. In GraphQL, clients can instead use **mutations**. We could define mutations to create, update or delete items. \n\nGraphQL prefers serving a versionless API and avoids breaking changes as the API evolves. Clients only request what they need and understand. In REST, the practice has been to serve different versions. \n\n\n### Could you share some tips and best practices for moving to GraphQL?\n\nGraphQL should be a thin layer. Tasks such as authentication, caching or database queries should happen below the GraphQL service layer. By doing this, the application will be more resilient to changes in platform or services. Likewise, name your fields in a self-documenting manner. Once clients start using them, it becomes hard to change them later. \n\nDon't force clients to replace all their REST endpoint calls with GraphQL. Instead, allow them to adopt GraphQL incrementally. 
Shopify documentation shows how responses from REST endpoints can include GraphQL IDs that can be used for subsequent GraphQL queries. \n\nResolvers should be thin and fetch data asynchronously. Fetch data at field level and de-duplicate requests using a library such as DataLoader. \n\nServer logs help in debugging GraphQL calls. To debug within the browser, include logs into the response payload when enabled by a flag. \n\nWith GraphQL, we often fetch partial objects that can't be directly used in methods that need full objects. Use duck typing to get around this. \n\n## Milestones\n\nJul \n2015\n\nFacebook open sources GraphQL, which is also released as a draft specification. The development of GraphQL within Facebook can be traced back to 2012. \n\nJun \n2019\n\nBrito et al. compare the performance of GraphQL against REST using seven open source clients to call GitHub and arXiv APIs. They also make typical queries noted in recent papers presented at engineering conferences. They find that GraphQL results in 94% less number of fields fetched and 99% less number of bytes. These are median values. However, there's no significant reduction in the number of API calls. \n\nOct \n2019\n\nAirbnb engineer Brie Bunge shares Airbnb's migration journey to GraphQL. They adopted an incremental approach, making sure the app is always shippable and regression-free. As pre-requisites, they set up GraphQL in the backend and adopted TypeScript in the frontend. Migration itself had five stages. The approach was to first replace REST with GraphQL. Refactoring and optimizations were done later.","meta":{"title":"REST API to GraphQL Migration","href":"rest-api-to-graphql-migration"}} {"text":"# Regex Engines\n\n## Summary\n\n\nA regular expression describes a search pattern that can be applied on textual data to find matches. A regex is typically compiled to a form that can be executed efficiently on a computer. The actual search operation is performed by the regex engine, which makes use of the compiled regex.\n\nTo write good regexes, it's helpful for programmers to know how these engines work. There are a few types of engines, often implemented as a Finite Automaton. In fact, regexes are related to Automata Theory and Regular Languages. \n\nThe syntax and semantics of regexes have been standardized by IEEE as POSIX BRE and ERE. However, there are many non-standard variants. Often, the differences are subtle. Programmers who design regexes must be aware of the variant being used by the engine.\n\n## Discussion\n\n### Is my regex executed directly by a regex engine?\n\nA regex engine receives two inputs: the regex pattern plus the input string. Programmers specify regexes as strings. While an engine could be designed to work with strings directly, there's a better and more efficient way.\n\nIt's been shown that for every regex there's an equivalent **Finite State Machine (FSM)** or **Finite Automaton (FA)**. In other words, the regex can be modelled as a finite set of states and transitions among these states based on inputs received at a state. Therefore, the job of a compiler is to take the original regex string and compile it into a finite automaton, which can be more easily executed by the engine. \n\nIn some implementations, a preprocessor may be invoked before the compiler. This substitutes macros or character classes, such as, replacing `\\p{Alpha}` with `[a-zA-Z]`. A preprocessor also does locale-specific substitutions. \n\nLet's note that there's no standard definition of what's a regex engine. 
Some may consider parsing, compiling and execution as part of the engine. \n\n\n### Could you explain how automata theory is applied to regexes?\n\nThere are two types of automata: \n\n + **Deterministic Finite Automaton (DFA)**: Given a state and an input, there's a well-defined output state.\n + **Non-Deterministic Finite Automaton (NFA)**: Given a state and an input, there could be multiple possible output states. A variant of NFA is **ε-NFA**, where a state transition can happen even without an input.\n\nIt's been proven that every DFA can be converted to an NFA, and vice versa. In the accompanying figure, all three automata are equivalent and represent the regex `abb*a` (or `ab+a`). For NFA, when 'b' is received in state q1, the automaton can remain in q1 or move to q2. Thus, it's non-deterministic. For ε-NFA, the automaton can move from q2 to q1 even without an input. It's been said, \n\n> Regular expressions can be thought of as a user-friendly alternative to finite automata for describing patterns in text.\n\n\n### What are the different types of regex engines?\n\nRegex engines could be implemented as DFA or NFA. However, in simpler language, a regex engine can be classified as follows:\n\n + **Text-directed**: The engine attempts all paths of the regex before moving to the next character of the input. Thus, this engine doesn't backtrack. Since all paths are attempted at once, it will return the longest match. For example, `(Set|SetValue)` on input \"SetValue\" will match \"SetValue\".\n + **Regex-directed**: If the engine fails at a position, it backtracks to attempt an alternative path. Paths are attempted in left-to-right order. Thus, it returns the leftmost match even if there's a longer match in another path later. For example, `(Set|SetValue)` on input \"SetValue\" will match \"Set\".\n\nMost modern engines are regex-directed because this is the only way to implement useful features such as lazy quantifiers and backreferences; and atomic grouping and possessive quantifiers that give extra control to backtracking. \n\nToday's regexes are feature rich and can't always be implemented efficiently as an automaton. Lookaround assertions and backreferences are hard to implement as an NFA. Most regex engines use **recursive backtracking** instead. \n\n\n### How do the different regex engines compare?\n\nWith recursive backtracking, pathological regexes result in lots of backtracking and searching through alternative paths. The time complexity grows exponentially. Thompson's NFA (or its equivalent DFA) is more efficient and maintains linear-time complexity. A compromise is to use the Thompson algorithm and backtrack only when needed for backreferences. GNU's awk and grep tools use DFA normally and switch to NFA when backreferences are used. \n\nRuby uses non-recursive backtracking but this too grows exponentially for pathological regexes. Ruby's engine is called *Oniguruma*. \n\nDFA is more efficient than NFA since the automaton is in only one state at any given time. A traditional NFA tries every path before failing. A POSIX NFA tries every path to find the longest match. A text-directed DFA spends more time and memory analyzing and compiling the regex but this could be optimized to compile on the fly. \n\nIn terms of code size, the NFA regex in *ed* (1979) was about 350 lines of C code. Henry Spencer's 1986 implementation was 1900 lines and his 1992 POSIX DFA was 4500 lines. 
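\n\nPython's `re` module is a regex-directed (backtracking) engine, so the behaviour described above is easy to demonstrate:\n\n```python\nimport re\n\n# A regex-directed engine reports the leftmost alternative that matches,\n# not the longest one.\nprint(re.match(r'Set|SetValue', 'SetValue').group())   # Set\n\n# Reordering the alternatives yields the longer match; a text-directed\n# (DFA-style) engine would return it either way.\nprint(re.match(r'SetValue|Set', 'SetValue').group())   # SetValue\n\n# Nested quantifiers trigger exponential backtracking in a backtracking\n# engine, while a Thompson NFA/DFA stays linear:\n# re.match(r'(a+)+$', 'a' * 30 + 'b')   # uncomment and watch it crawl\n```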
\n\n\n### What are the essential rules that a regex engine follows?\n\nA regex engine executes the regex one character at a time in **left-to-right order**. This input string itself is parsed one character at a time, in left-to-right order. Once a character is matched, it's said to be **consumed** from the input, and the engine moves to the next input character.\n\nThe engine is by default **greedy**. When quantifiers (such as `* + ? {m,n}`) are used, it will try to match as many characters from the input string as possible. The engine is also **eager**, reporting the first match it finds. \n\nIf the regex doesn't match a character in input, it does two things. It will backtrack to an earlier greedy operation and see if a less greedy match will result in a match. Otherwise, the engine will move to the next character and attempt to match the regex all over again at this position. Either way, the engine always knows its **current position** within the regex. If the regex specifies alternatives, if one search path fails, the engine will backtrack to match the next alternative. Therefore, the engine also stores **backtracking positions**. \n\n\n### Could you explain the concepts \"greedy\" and \"eager\" with examples?\n\nTake regex `a.*o` on input \"cat dog mouse\". Even though \"at do\" is a valid match, since the engine is greedy, the match it gives is \"at dog mo\". To avoid this greedy behaviour, we can use a lazy or non-greedy match: `a.*?o` will match \"at do\". Ungreedy match in some flavours such as PCRE can be specified using 'U' flag. \n\nA non-greedy `\\w{2,3}?` on input \"abc\" will match \"ab\" rather than \"abc\". Suppose the regex is `\\w{2,3}?$`, then the match is \"abc\" and not \"bc\", even though the regex is non-greedy. This is because the engine is eager to report the first match it finds. Thus, it first matches \"ab\", then sees that `$` doesn't match \"c\". At this point, the engine will not backtrack for position \"b\". It will remain at \"a\" and compare the third character due to `{2,3}`. Thus, it finds \"abc\" as match. It eagerly reports this match. \n\nAnother example is `(self)?(selfish)?` applied on input \"selfish\". Because of eagerness, engine will report \"self\" as the match. However, a text-directed engine will report \"selfish\". \n\n\n### Could you explain backtracking with an example?\n\nLet's take the regex `/-\\d+$/g` and input string \"212-244-7688\". The regex engine will match `-\\d+` to \"-244\" but when it sees `$` it declares no match. At this point, it will backtrack to the start of the regex and current position in input string will advance from \"-\" to \"2\". In this example, only one backtracking happens. Suppose we apply `/\\d-\\d+$/g` on the same input, we'll have five backtracks as shown in the figure.\n\nEngine can also backtrack part of the way. Let's apply `/A\\d+\\D?./g` to \"A1234\". The engine will match `A\\d+\\D?` to \"A1234\" but when it sees `.` there's no match. It will backtrack to `\\d+` and give up one character so that `A\\d+` now matches only \"A123\". As the engine continues, it will match `.` with \"4\".\n\nAnother example of backtracking is `pic(ket|nic)`. If the string is \"Let's picnic.\", the engine will match `pic` to \"pic\" but will fail the next character match (k vs. n). The engine knows there's an alternative. It will backtrack to end of `pic` and process the second alternative. \n\n## Milestones\n\n1959\n\nMichael Rabin and Dana Scott introduce the concept of non-determinism. 
They show that an NFA can be simulated by a DFA in which each DFA state corresponds to a set of NFA states. \n\n1968\n\nKen Thompson shows in an ACM paper how a regex can be converted to a NFA. He presents an engine that can track alternative paths in the NFA simultaneously. This is now called **Thompson NFA**. For a regex of length m and input of length n, Thompson NFA requires \\(O(mn)\\) time. In comparison, the backtracking regex implementation requires \\(O(2^n)\\) time when there are n alternative paths. An NFA can be built from simple operations (such as concatenation, alternation, looping) on partial NFAs. \n\n1971\n\nUnix (First Edition) appears and *ed* text editor is one of the programs in it. The editor uses regex but it's not Thompson's NFA but recursive backtracking. Other utilities such as grep (1973) follow suit. \n\n1979\n\nUnix (Seventh Edition) includes *egrep*, the first utility to support full regex syntax. It pre-computes the DFA. By 1985, it's able to generate the DFA on the fly. \n\n1985\n\nThe *regexp(3)* library of Unix (Eighth Edition) adapts Thompson's algorithm to extract submatches or capturing groups. This work is credited to Rob Pike and Dave Presotto. This goes unnoticed and is not widely used. However, a year later, Henry Spencer reimplements the library interface from scratch using recursive backtracking. This is later adopted by Perl, PCRE, Python, and others. \n\n1999\n\nHenry Spencer writes an engine for Tcl version 8.1. This is a hybrid engine. It's an NFA supporting lookaround and backreferences. It also return the longest-leftmost match as specified by POSIX. \n\n2007\n\nRuss Cox provides a 400-line C implementation of Thompson's NFA. He shows that for pathological regex, this is a lot faster than common implementations (recursive backtracking) used in many languages including Perl. \n\nMar \n2010\n\nGoogle open sources **RE2** that's based on automata theory, has linear-time complexity and uses a fixed size stack. Because Google uses regexes for customer-facing tools such as Code Search, backtracking and exponential-time complexity of earlier implementations could lead to Denial-of-Service (DoS) attacks. In addition, recursive backtracking can lead to stack overflows. Work on RE2 can be traced to the work on Code Search in 2006.","meta":{"title":"Regex Engines","href":"regex-engines"}} {"text":"# Google Cloud Authentication\n\n## Summary\n\n\nGoogle Cloud supports three main types of credentials by which apps can gain access to APIs and services. These are API keys, OAuth 2.0 client IDs and service accounts. This article gives an overview of these methods. It offers some guidelines on how to choose the right authentication method for an application.\n\nImproper use of these methods or careless management of credentials can lead to a security breach. It can result in identity theft and data theft/loss. This article shares some best practices for better security.\n\n## Discussion\n\n### What are the different authentication methods available in Google Cloud?\n\nAnyone with a Google account can login to Google by supplying a valid username and password. Two-Factor Authentication (2FA) enhances security by adding one more step to the authentication process. In Google, 2FA is called 2-Step Verification. For a better user experience, Google also provides Sign In with Google, One Tap and Automatic sign-in. \n\nOften developers want their apps to have access to Google accounts or services, for the apps themselves or on behalf of end users. 
Google offers two ways to achieve this: \n\n + **User Account**: Represents a developer or administrator. Used when an application needs to access Google Cloud resources on behalf of the user. Managed by Google Accounts. Credentials used include API keys and OAuth 2.0 client credentials.\n + **Service Account**: Represents non-human users. Used when an application needs to access Google Cloud resources on its own without any user intervention. Managed by Google's Identity and Access Management (IAM). Credentials include service account keys. Unlike a user account, a service account doesn't have a user-facing login interface.Google Cloud APIs use OAuth 2.0 protocol to authenticate both user accounts and service accounts. \n\n\n### How do I select a suitable Google Cloud authentication method?\n\nMany Google Cloud APIs allow anonymous access to public data. **API keys** can be used. These identify the application, not the actual user. \n\nIf your app requires access to a user's private data, then **OAuth 2.0 client ID** can be used. In OAuth 2.0 terminology, your application is called the client. The client ID identifies the application. The user is prompted to allow Google share some of his/her data with your application. Your application can now use Google Cloud APIs on behalf of the user. \n\nWhere user authorization is not possible, use a **service account**. For apps running inside Google Cloud, it's automatically created. Otherwise, you manually create a service account. You download the credentials as a JSON file that your app can use. Your app assumes the identity of the service account so that users are not involved. Such server-to-server interactions are sometimes called *Two-Legged OAuth (2LO)*. \n\n\n### Could you share some use cases for each Google Cloud authentication method?\n\nGoogle Cloud has dozens of APIs that can be accessed via API keys or OAuth 2.0 client IDs. Some of these APIs are Google Drive, Google Calendar, GMail, YouTube Data, App Engine Admin, Cloud Datastore, Cloud Speech-to-Text, Cloud Vision, Firebase Management, and many more. \n\nFor example, YouTube Data API can be used read metadata of any video. It's implemented via the `videos` endpoint. Since this is public data, an API key is sufficient. The same API also allows an app to manage videos and playlists that you own. This requires OAuth 2.0 client ID. \n\nManaging your appointments on Google Calendar or your drafts on GMail are further examples where OAuth 2.0 client ID can be used. Web/mobile apps can use OAuth 2.0 client ID for visitors to quickly create accounts on their platforms. \n\nConsider an external application that wants to store nightly data backups to Google Drive. A service account suits this use case. \n\n\n### Could you share some tips related to Google Cloud authentication?\n\nAPI keys, OAuth 2.0 client IDs and service accounts are typically created via a web interface called *Google Cloud Console*. In all cases, the credentials fall within the scope of a project that has an associated billing account. Advanced users can look at the *API Keys API* to create or manage API keys via `gcloud` CLI. `gcloud` can also be used to manage service accounts. \n\nDevelopers can try out APIs using the Google API Explorer. This helps in understanding the APIs before doing it programmatically from application code. While it's possible to implement OAuth 2.0 flows, it's easier to use Google Cloud Client Libraries. 
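\n\nFor instance, a server-to-server job that assumes a service account's identity can be sketched as follows with the google-auth and Cloud Storage client libraries; the key file, project, bucket and object names are placeholders.\n\n```python\nfrom google.oauth2 import service_account\nfrom google.cloud import storage\n\n# Load the downloaded service account key (JSON); no user is involved.\ncreds = service_account.Credentials.from_service_account_file(\n    'backup-writer.json',\n    scopes=['https://www.googleapis.com/auth/devstorage.read_write'],\n)\n\nclient = storage.Client(project='my-project', credentials=creds)\nbucket = client.bucket('nightly-backups')\nbucket.blob('backup-latest.tar.gz').upload_from_filename('backup.tar.gz')\n```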
\n\nWhen using OAuth 2.0 client IDs, code your apps to ask for minimum required permissions. This is done by setting *scopes*. Google maintains a list of available OAuth 2.0 Scopes per API. \n\nFor local development, use `gcloud auth login` and `gcloud auth application-default login` commands. Use short-lived tokens. \n\n\n### What are some best practices when using API keys?\n\nDon't embed API keys directly in code. Store them in environment variables or in files outside your application's version controlled source code. Do a code review to ensure API keys aren't there by mistake. \n\nWhen not used, delete API keys to minimize security attacks. Regenerate API keys periodically. \n\nFor better security, place restrictions on the use of API keys: \n\n + **Source**: Restrictions on who can use the key. Restrict by HTTP referrers, IP addresses, Android apps or iOS apps. HTTP referrers can be whitelisted by subdomains, paths, protocols (HTTP or HTTPS) and even wildcards.\n + **Target**: Restrictions on what services are allowed. For example, restrict the key to only Google Drive API.API keys are vulnerable to man-in-the-middle attacks since they're part of the request. Don't use them when API calls contain user data. \n\nDon't use API keys to identify users. An API key only identifies the project. Don't use API keys for secure authorization. API keys are useful to block anonymous traffic, control volume of API requests, identify usage patterns, or filter logs by the API key. \n\n\n### What are some best practices when using service accounts?\n\nService accounts are both a resource and an identity. As an identity, avoid granting Owner/Editor/Viewer roles to a service account. Grant instead a predefined or custom role. In general, grant only minimum necessary permissions. \n\nUse unique, descriptive names so that it's easier to manage multiple accounts. Delete accounts if they're unused since they present a security risk. Disable accounts before deleting them. \n\nService account keys are either Google Cloud-managed or user-managed. For the latter, follow processes for key storage, distribution, revocation, rotation and recovery. \n\nDon't use service accounts to access data without user consent. Don't use service accounts during development. Instead authenticate using tools such as `gcloud`, `gsutil`, `terraform`, etc. In other words, use your personal credentials in a development environment. \n\nYou can attach applications to an existing service account. On a GKE cluster, use multiple Google Cloud and Kubernetes service accounts and map them via Workload Identity. \n\n## Milestones\n\nApr \n2008\n\nGoogle previews **App Engine**, a platform for developing and hosting web applications in Google-managed data centres. App Engine is therefore the first cloud service offered by Google. It marks the birth of Google Cloud. \n\nOct \n2012\n\nIETF releases **OAuth 2.0** as RFC 6749. Although OAuth 2.0 is designed for authorization and not for user authentication, Google Cloud later uses OAuth 2.0 for both authorization and authentication. \n\nMar \n2011\n\nGoogle launches **Google API Explorer** that allows developers to experiment with Google's APIs before implementing them in their apps. By August 2021, the Explorer supports 230 APIs. API keys and OAuth 2.0 client IDs can be used with the Explorer. \n\nAug \n2021\n\nGoogle launches **Google Identity Services**, a single SDK that brings together different ways of authenticating users. 
It provides developers frictionless flows to onboard users to their platforms. *Sign in with Google* and *One Tap* are two such flows. These use secure tokens rather than passwords. These internally use OAuth 2.0. Developers can use client libraries provided by Google to quickly implement these flows. An alternative is *Firebase Authentication*.","meta":{"title":"Google Cloud Authentication","href":"google-cloud-authentication"}} {"text":"# Natural Language Processing\n\n## Summary\n\n\nIn computer science, languages that humans use to communicate are called \"natural languages\". Examples include English, French, and Spanish. Early computers were designed to solve equations and process numbers. They were not meant to understand natural languages. Computers have their own programming languages (C, Java, Python) and communication protocols (TCP/IP, HTTP, MQTT).\n\nTo instruct computers to perform tasks, we traditionally use a keyboard or a mouse. Why not speak to the computer and let it respond in a natural language? This is one of the aims of Natural Language Processing (NLP). NLP is an essential component of artificial intelligence.\n\nNLP is rooted in the theory of linguistics. Techniques from machine learning and deep neural networks have also been successfully applied to NLP problems. While many practical applications of NLP already exist, NLP has many unsolved problems.\n\n## Discussion\n\n### Why do computers have difficulty with NLP?\n\nComputers have mostly been dealing with **structured data**. This is data that's organized, indexed and referenced, often in databases. In NLP, we often deal with **unstructured data**. Social media posts, news articles, emails, and product reviews are examples of text-based unstructured data. To process such text, NLP has to learn the structure and grammar of the natural language. Importantly, 80% of enterprise data is unstructured. \n\nHuman languages are quite unlike the precise and unambiguous nature of computer languages. Human languages have plenty of complexities such as ambiguous phrases, colloquialisms, metaphors, puns, or sarcasms. The same word or text can have multiple meanings depending on the context. Language evolves with time. Worse still, we communicate imperfectly (spelling, grammar or punctuation errors) but still manage to be understood. These variations, so natural to human communication, are complex for computers.\n\nAmbiguities in natural languages can be classified as lexical, syntactic or referential. \n\nWhen the source of information is speech, more challenges arise: accent, tone, loudness, background noise or context, pronunciation, emotional content, pauses, and so on. \n\n\n### Could you share some examples of the complexities of English?\n\nConsider the sentence, \"One morning I shot an elephant in my pajamas\". The man was in his pajamas but grammatically it's also correct to think that the elephant was wearing his pajamas. Likewise, a person may say, \"Listening to loud music slowly gives me a headache\". Was she listening to music slowly or does the headache develop slowly? \n\nA more confusing example is, \"The complex houses married and single soldiers and their families\". Confusion arises because we may initially interpret \"complex houses\" as an adjective-noun combination. The sentence makes sense only when we see that \"complex\" is a noun and \"houses\" is a verb. NLP addresses this via part-of-speech tagging. 
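\n\nAs a quick illustration (assuming NLTK and its pretrained tagger are installed), a part-of-speech tagger assigns a tag to each token; whether it labels 'houses' as a verb is exactly what separates the two readings:\n\n```python\nimport nltk\n\n# One-time setup: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')\ntokens = nltk.word_tokenize('The complex houses married and single soldiers and their families.')\nprint(nltk.pos_tag(tokens))\n# The garden-path reading disappears only if 'complex' is tagged as a noun (NN)\n# and 'houses' as a verb (VBZ); a weaker tagger may well get this wrong.\n```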
\n\nConsider this one, \"John had a card for Helga, but couldn't deliver it because he was in her way\". Was John in Helga's way? In fact, \"he\" refers to a third person mentioned earlier. NLP calls this coreference resolution. \n\n\"The Kiwis won the match\" is an example that requires context to make sense. New Zealand nationals are referred to as \"Kiwis\", after their national bird. Natural language is full of metaphors like this.\n\n\n### What are some example problems that NLP can solve?\n\nOf the many problems that NLP solves, we describe a few:\n\n + **Sentiment Analysis**: From product reviews or social media messages, the task is to figure out if the sentiment is positive, neutral or negative. This is useful for customer support, engineering and marketing departments.\n + **Machine Translation**: When original content is published in only one language, machine translation can deliver it to a wider readership. Tourists can use machine translation to communicate in a foreign country.\n + **Question Answering**: Given a question, an NLP engine leveraging a vast body of knowledge can provide answers. This can help researchers and journalists. Whitepapers and reports can be written faster.\n + **Text Summarization**: NLP can be tasked to summarize a long essay or an entire book. It can provide a balanced summary of a story published at different websites with different points of view.\n + **Text Classification**: NLP can classify news stories by domain or detect email spam.\n + **Text-to-Speech**: This is an essential aspect of voice assistants. Audiobooks can be created for the visually impaired. Public announcements can be made.\n + **Speech Recognition**: The opposite of text-to-speech, this creates a textual representation of speech.\n\n### Who's been using NLP in the real world, and for what purpose?\n\nFacebook uses machine translation to automatically translate posts and comments. Google Translate processes 100 billion words a day. To connect sellers and buyers across language barriers, eBay is using machine translation. \n\nUsing speech recognition and text-to-speech synthesis, voice assistants such as Amazon Alexa, Apple Siri, Facebook M, Google Assistant, and Microsoft Cortana are enabling human-to-device interaction using natural speech. \n\nAmazon Comprehend offers an NLP API to perform many common NLP tasks. This has been extended by Amazon Comprehend Medical for the healthcare domain. \n\nUber uses NLP for better customer support. Human agents are involved but they are assisted by NLP models that suggest the top three solutions. This has reduced ticket resolution time by over 10%. \n\nPerception offers an NLP-based product to do theme clustering and sentiment analysis. This helps with performance reviews and employee retention while minimizing bias. \n\nFor aircraft maintenance, NLP is used for information retrieval, troubleshooting, writing summary reports, or even directing a mechanic via a voice interface. It's been observed that NLP can classify defects better than humans. \n\n\n### What are the main approaches adopted by NLP?\n\nClassical NLP from the 1950s took the **symbolic approach** rooted in linguistics. Given the rules of syntax and grammar, we could obtain the structure of text. Using logic, we could obtain the meaning. But rules had to be hand-crafted and were often numerous. They didn't handle colloquial text well. Rules worked well for specific use cases but couldn't be generalized. 
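\n\nTo make the symbolic approach concrete, here's a minimal sketch (assuming a Python environment with NLTK installed) of a tiny hand-crafted context-free grammar for the elephant-in-pajamas sentence discussed earlier. A chart parser applies these rules and returns both parse trees, showing how purely rule-based analysis exposes structural ambiguity without resolving it, and hinting at how many rules a broad-coverage grammar would need.\n\n```python\nimport nltk\n\n# A toy, hand-crafted grammar in the spirit of classical rule-based NLP\ngrammar = nltk.CFG.fromstring('''\nS -> NP VP\nPP -> P NP\nNP -> Det N | Det N PP | 'I'\nVP -> V NP | VP PP\nDet -> 'an' | 'my'\nN -> 'elephant' | 'pajamas'\nV -> 'shot'\nP -> 'in'\n''')\n\nparser = nltk.ChartParser(grammar)\ntokens = 'I shot an elephant in my pajamas'.split()\nfor tree in parser.parse(tokens):\n    print(tree)  # two trees: the PP attaches either to the VP or to the NP\n```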
\n\nIn practice, better accuracy was achieved by using a **statistical approach** that began in the 1980s. Rules were learned and they had associated probabilities. Machine Learning (ML) models came in with support vector machines and logistic regression. More recently, Deep Learning (DL) models that employ a neural network of many layers have brought better accuracy. This success is partly due to the more efficient representations given by *word embeddings*. \n\nNLP involves different levels or scope of analysis. **Low-level** analysis is about word tokens and structure. **Mid-level** analysis is about identifying entities, topics, and themes. **High-level** analysis leads to meaning and understanding. Alternatively, some classify text processing into two parts: **shallow parsing or chunking** and **deep parsing**. \n\n\n### How is NLP related to NLU and NLG?\n\nNLP is broadly made of two parts: \n\n + **Natural Language Understanding (NLU)**: This involves converting speech or text into useful representations on which analysis can be performed. The goal is to resolve ambiguities, obtain context and understand the meaning of what's being said. Some say NLP is about text parsing and syntactic processing while NLU is about semantic relationships and meaning. NLU tackles the complexities of language beyond the basic sentence structure.\n + **Natural Language Generation (NLG)**: Given an internal representation, this involves selecting the right words, forming phrases and sentences. Sentences need to ordered so that information is conveyed correctly.NLU is about analysis. NLG is about synthesis. An NLP application may involve one or both. Sentiment analysis and semantic search are examples of NLU. Captioning an image or video is mainly an NLG task since input is not textual. Text summarization and chatbot are applications that involve NLU and NLG.\n\nThere's also **Natural Language Interaction (NLI)** of which Amazon Alexa and Siri are examples. \n\n\n### What's the typical data processing pipeline in NLP?\n\nA typical NLP pipeline consists of text processing, feature extraction and decision making. All these steps could apply classical NLP techniques, machine learning or neural networks. Where ML and NN are used, we would have to train a model from sufficient volume of data before it can be used for prediction and decision making. \n\nIn text processing, the input is just text and the output is a structured representation. This is done by identifying words, phrases, parts of speech, and so on. Since words have variations (go, going, went), it's common to reduce them to a root form with techniques such as **stemming** and **lemmatization**. Common words that don't add value to analysis (the, to, and, etc.) are called *stop words* and these are removed. Punctuations are also removed to simplify analysis. **Named Entity Recognition (NER)** involves identifying entities such as places, names, objects, and so on. **Coreference resolution** tries to resolve pronouns (he, they, it, etc.) to the correct entities. \n\nMore formally, text processing involves analysis of three types: syntax (structure), semantics (meaning), pragmatics (meaning in context). \n\n\n### What are some challenges that NLP needs to solve?\n\nNLU is still an unsolved problem. Systems are as yet incapable of understanding the way humans do. Until then, progress will be limited to better pattern matching. Where NLU is lacking, it affects the success of NLG. \n\nIn the area of chatbots, there's a need to model common sense. 
It's also not clear if models should begin with some understanding or should everything be learned using the technique of reinforcement learning. Computing infrastructure needed to build a full-fledged agent that can learn from its environment is also tremendous. \n\nNot much has been done for low-resource languages where the need for NLP is greater. Africa alone has about 2100 languages. We need to find a way to solve this even if training data is limited. \n\nCurrent systems are unable to reason with large contexts, such as entire books or movie scripts. Supervision with large documents is scarce and expensive. Unsupervised learning has the problem of sample inefficiency. \n\nJust measuring progress is a challenge. We need datasets and evaluation procedures tuned to concrete goals. \n\n\n### Could you mention some of the tools used in NLP?\n\nIn Python, two popular NLP tools are **Natural Language Toolkit (NLTK)** and **SpaCy**. NLTK is supposedly slower and therefore not the best choice for production. TextBlob extends NLTK. Textacy is based on SpaCy and handles pre-processing and post-processing tasks. There's also PyTorch-NLP suited for prototyping and production. AllenNLP and Flair are built on top of PyTorch for developing deep learning NLP models. Intel NLP Architect is an alternative. Gensim is a library that targets topic modelling, document indexing and similarity retrieval. \n\nThere are also tools in other programming languages. In Node.js, we have Retext, Compromise, Natural and Nlp.js. In Java, we have OpenNLP, Stanford CoreNLP and CogCompNLP. The last two have Python bindings as well. There are libraries in R and Scala as well but these haven't been updated for over a year. \n\nFor execution, **Jupyter Notebook** provides an interactive environment. If you don't want to install Jupyter, it's also available as web services. Azure Notebook Service is an example. Via subscriptions, these services allow you to use powerful cloud computing resources.\n\n## Milestones\n\n1948\n\nIn the area of automated translation, a dictionary look-up system developed at Birkbeck College, London can be seen as the first NLP application. In the years following World War II, researchers attempt translating German text to English. Later during the era of Cold War, it's about translating Russian to English. \n\n1957\n\nAmerican linguist Noam Chomsky publishes *Syntactic Structures*. Chomsky revolutionizes the theory of linguistics and goes on to influence NLP a great deal. The invention of Backus-Naur Form notation in 1963 for representing programming language syntax is influenced by Chomsky's work. Another example is the invention of Regular Expressions in 1956 for specifying text search patterns. \n\n1966\n\nIn the U.S., the Automatic Language Processing Advisory Committee (ALPAC) Report is published. It highlights the limited success of machine translation. This results in a lack of funding right up to 1980. Nonetheless, NLP advances in some areas including case grammar and semantic representations. Much of the work till late 1960s is about syntax though some addressed semantic challenges. \n\n1970\n\nIn this decade, NLP is influenced by AI with focus on world knowledge and meaningful representations. Thus, semantics becomes more important. SHRDLU (1973) and LUNAR (1978) are two systems of this period. Into the 1980s, these lead to the adoption of logic for knowledge representation and reasoning. **Prolog** programming language is also invented in 1970 for NLP applications. 
\n\n1980\n\nThis decade sees the growing adoption of Machine Learning and thereby signalling the birth of **statistical NLP**. Annotated bodies of text called *corpora* are used to train ML models to provide the gold standard for evaluation. ML approaches to NLP become prominent through the 1990s, partly inspired by the successful application of Hidden Markov Models to speech recognition. The fact that statistics has brought more success than linguistics is echoed by Fred Jelinek,\n\n> Every time I fire a linguist, the performance of our speech recognition system goes up.\n\n1982\n\nProject Jabberwacky is launched to simulate natural human conversations in the hope of passing the Turing Test. This heralds the beginning of **chatbots**. In October 2003, Jabberwacky wins third place in the Loebner Prize. \n\n1998\n\nThe *FrameNet* project is introduced. This is related to **semantic role modelling**, a form of shallow semantic parsing that's continues to be researched even in 2018. \n\n2001\n\nFor language modelling, the classical N-Gram Model has been used in the past. In 2001, researchers propose the use of a **feed-forward neural network** with vector inputs, now called *word embeddings*. In later years, this leads to the use of RNNs (2010) and LSTMs (2013) for language modelling. \n\n2003\n\n**Latent Dirichlet Allocation (LDA)** is invented and becomes widely used in machine learning. It's now the standard way to do topic modelling. \n\n2013\n\nImprovements to word embeddings along with an efficient implementation in *Word2vec* enable greater adoption of neural networks for NLP. RNNs and LSTMs become obvious choices since they deal with dynamic input sequences so common in NLP. CNNs from computer vision get repurposed for NLP since CNNs are more parallelizable. Recursive Neural Networks attempt to exploit the hierarchical nature of language. \n\nMar \n2016\n\nMicrosoft launches *Tay*, a chatbot on Twitter that would interact with users and get better in conversing. However, Tay is shut down within 16 hours after it learns to talk in racist and abusive language. A few months later Microsoft launches *Zo* chatbot. \n\nSep \n2016\n\nGoogle replaces its phrase-based translation system with **Neural Machine Translation (NMT)** that uses a deep LSTM network with 8 encoder and 8 decoder layers. This reduces translation errors by 60%. This work is based on **sequence-to-sequence learning** proposed in 2014, which later becomes a preferred technique for NLG.","meta":{"title":"Natural Language Processing","href":"natural-language-processing"}} {"text":"# Speech Recognition\n\n## Summary\n\n\nSpeech Recognition is the process by which a computer maps an acoustic speech signal to text. \n\nSpeech Recognition is also known as Automatic Speech Recognition (ASR) or Speech To Text (STT).\n\nSpeech Recognition crossed over to 'Plateau of Productivity' in the Gartner Hype Cycle as of July 2013, which indicates its widespread use and maturity in present times. \n\nIn the longer term, researchers are focusing on teaching computers not just to transcribe acoustic signals but also to understand the words. Automatic speech understanding is when a computer maps an acoustic speech signal to an abstract meaning. 
\n\nIt is a sub-field of computational linguistics (an interdisciplinary field concerned with the statistical or rule-based modeling of natural language) that develops methodologies and technologies that enables the recognition and translation of spoken language into text by computers.\n\n## Discussion\n\n### What are the steps involved in the process of speech recognition?\n\nWe can identify the following main steps: \n\n + Analog-to-Digital Conversion: Speech is usually recorded/available in analog format. Standard sampling techniques/devices are available to convert analog speech to digital using techniques of sampling and quantization. The digital speech is usually a one-dimension vector of speech samples, each of which is an integer.\n + Speech Pre-processing: Recorded speech usually comes with background noise and long sequences of silence. Speech pre-processing involves identification and removal of silence frames and signal processing techniques to reduce/eliminate noise. After pre-processing, the speech is broken down into frames of 20ms each for further steps of feature extraction.\n + Feature Extraction: It is the process of converting speech frames into feature vector which indicates which phoneme/syllable is being spoken.\n + Word Selection: Based on a language model/probability model, the sequence of phonemes/features are converted into the word being spoken.\n\n### Which are the popular feature extraction methods?\n\nWhile there are many feature extraction methods, we note three of them:\n\n + **Relative Spectral Transform-Perceptual Linear Prediction (RASTA-PLP)**: PLP is a way of warping spectra to minimize differences between speakers while preserving important speech information. RASTA applies a band-pass filter to the energy in each frequency subband in order to smooth over short-term noise variations and to remove any constant offset resulting from static spectral coloration in the speech channel e.g. from a telephone line\n + **Linear Predictive Cepstral Coefficients (LPCCs)**: A cepstrum is the result of taking the inverse Fourier transform (IFT) of the logarithm of the estimated spectrum of a signal. The power cepstrum is used in the analysis of human speech.\n + **Mel Frequency Cepstral Coefficients (MFCCs)**: These are derived from a type of cepstral representation of the audio clip (a nonlinear \"spectrum-of-a-spectrum\"). The difference between LPCC & mel-frequency cepstrum is that in the MFCC, the frequency bands are equally spaced on the mel scale, which approximates the human auditory system's response more closely than the linearly-spaced frequency bands used in the normal cepstrum. This frequency warping allows for better representation of sound.\n\n### Which are the traditional Probability Mapping and Selection methods?\n\nA **Hidden Markov Model** is a type of graphical model often used to model temporal data. Hidden Markov Models (HMMs) assume that the data observed is not the actual state of the model, but is instead generated by the underlying hidden (the H in HMM) states. While this would normally make inference difficult, the Markov Property (the first M in HMM) of HMMs makes inference efficient.\n\nThe hidden Markov model can be represented as the simplest dynamic Bayesian network. The mathematics behind the HMM were developed by L. E. 
Baum and coworkers.\n\nBecause of their flexibility and computational efficiency, Hidden Markov Models have found wide application in many different fields like speech recognition, handwriting recognition, and speech synthesis. \n\n\n### How is the accuracy of a speech recognition program validated?\n\n**Word error rate (WER)** is a common metric of the performance of a speech recognition or machine translation system. Generally it is measured on **Switchboard** - a recorded corpus of conversations between humans discussing day-to-day topics. This has been used over two decades to benchmark speech recognition systems. There are other corpora like LibriSpeech (based on public domain audio books) and Mozilla’s Common Voice project.\n\nFor some languages, like Mandarin, the metric is often CER - **Character Error Rate**. There is also Utterance Error Rate. \n\nAn IEEE paper that focussed on ASR and machine translation interactions in a speech translation system showed that BLEU-oriented global optimisation of ASR system parameters improves translation quality by an absolute 1.5% BLEU score, while sacrificing WER compared to the conventional WER-optimised ASR system. Therefore the choice of metrics for ASR optimisation is context and application dependent.\n\n\n### How has speech recognition evolved over the years?\n\nStarting from the 1960s, pattern recognition based approaches made speech recognition practical for applications with limited vocabulary using LPC (Linear Predictive Coding) Coefficient and LPCC (Linear Predictive Cepstral Coefficient) based techniques. The advantage of these techniques was that low resources were required to build the model, which could be used for applications requiring up to about 300 words.\n\nIn the late 1970s, Paul Mermelstein proposed a new feature called MFCCs. This soon became the de-facto approach for feature extraction and helped to tackle applications related to multi-speaker as well as multi-language speech recognition. \n\nIn the 1990s, H. Hermansky came up with the RASTA-PLP approach of feature extraction which could be used for applications requiring very large vocabulary with multiple speakers and multiple languages with good accuracy. \n\n\n### What are the AI based approaches for speech recognition?\n\nIn the 1990s and early 2000s, Deep Learning techniques involving Recurrent Neural Networks were applied to Speech Recognition. In the 2000s, the variant of RNNs using LSTM (Long Short Term Memory) to include long-term memory aspects into the model helped to minimise or avoid the problems of vanishing or exploding gradients when training RNNs. \n\nBaidu Research released DeepSpeech in 2014, achieving a WER of 11.85% using RNNs. They leveraged the “Connectionist Temporal Classification” loss function.\n\nWith DeepSpeech2 in 2015, they achieved a 7x increase in speed using GRUs (Gated Recurrent Units). \n\nDeepSpeech3 was released in 2017. Its authors performed an empirical comparison between three models — CTC, which powered Deep Speech 2; attention-based Seq2Seq models, which powered Listen-Attend-Spell among others; and RNN-Transducer for end-to-end speech recognition. The RNN-Transducer could be thought of as an encoder-decoder model which assumes the alignment between input and output tokens is local and monotonic. 
This makes the RNN-Transducer loss a better fit for speech recognition (especially when online) than attention-based Seq2Seq models by removing extra hacks applied to attentional models to encourage monotonicity.\n\n\n### How is the speed of a speech recognition system measured?\n\nReal Time Factor is a very natural measure of a speech decoding speed that expresses how much the recogniser decodes slower than the user speaks. The latency measures the time between the end of the user speech and the time when a decoder returns the hypothesis, which is the most important speed measure for ASR. \n\nReal-time Factor (RTF): the ratio of the speech recognition response time to the utterance duration. Usually both mean RTF (average over all utterances), and 90th percentile RTF is examined in efficiency analysis. \n\n\n### How is Word Error Rate calculated?\n\nIn October 2016, Microsoft announced its ASR had a WER of 5.9% against the Industry standard Switchboard speech recognition task. This was surpassed by IBM Watson in March 2017 with a WER of 5.5%. In May 2017 Google announced it reached a WER of 4.9%, however google does not benchmark against the Switchboard. \n\nASR systems have seen big improvements in recent years due to more efficient acoustic models that use Deep Neural Networks (DNNs) to determine how well HMM states fit the extracted acoustic features rather than statistical techniques such as Gaussian Mixture Models, which were the preferred method for several years. \n\n\n### Which are the popular APIs that one can use to incorporate automatic speech recognition in an application?\n\nHere's a selection of popular APIs: Bing Speech API, Nuance Speech Kit, Google Cloud Speech API, AWS Transcribe, IBM Watson Speech to Text, Speechmatics, Vocapia Speech to Text API, LBC Listen By Code API, Kaldi, CMU Sphinx. \n\n\n### What are the applications for speech recognition?\n\nApplications of speech recognition are diverse and we note a few: \n\n + Aerospace (e.g. space exploration, spacecraft, etc.) NASA's Mars Polar Lander used speech recognition technology from Sensory, Inc. in the Mars Microphone on the Lander.\n + Automatic subtitling with speech recognition\n + Automatic translation\n + Court reporting (Realtime Speech Writing)\n + eDiscovery (Legal discovery)\n + Education (assisting in learning a second language)\n + Hands-free computing: Speech recognition computer user interface\n + Home automation (Alexa, Google Home etc.)\n + Interactive voice response\n + Medical transcription\n + Mobile telephony, including mobile email\n + Multimodal interaction\n + People with disabilities\n + Pronunciation evaluation in computer-aided language learning applications\n + Robotics\n + Speech-to-text reporter (transcription of speech into text, video captioning, Court reporting )\n + Telematics (e.g. vehicle Navigation Systems)\n + User interface in telephony\n + Transcription (digital speech-to-text)\n + Video games, with Tom Clancy's EndWar and Lifeline as working examples\n + Virtual assistant (e.g. Apple's Siri)\n\n### Which are the available open source ASR toolkits?\n\nHere's a selection of open source ASR toolkits:\n\n + **Microsoft Cognitive Toolkit**, a deep learning system that Microsoft used for its ASR system is available on GitHub through an open source license.\n + The Machine Learning team at Mozilla Research has been working on an open source Automatic Speech Recognition engine modelled after the **Deep Speech** papers published by Baidu. 
It has a WER of 6.5 percent on LibriSpeech’s test-clean set.\n + **Kaldi** is integrated with TensorFlow. Its code is hosted on GitHub with 121 contributors. It originated at a 2009 workshop at Johns Hopkins University. It is designed for local installation.\n + **CMU Sphinx** is from Carnegie Mellon University. Java and C versions exist on GitHub.\n + **HTK** began at Cambridge University in 1989.\n + **Julius** has been in development since 1997 and had its last major release in September of 2016.\n + **ISIP** originated at Mississippi State. It was developed mostly from 1996 to 1999, with its last release in 2011.\n + **VoxForge** - a crowdsourced repository of speech recognition data and trained models.\n\n\n## Milestones\n\n1952\n\nBell Labs researchers - Davis, Biddulph and Balashek - build a system for single-speaker recognition of ten digits. Their system worked by locating the formants in the power spectrum of each utterance. They were essentially realising earlier analyses by Harvey Fletcher and Homer Dudley, both from AT&T Bell Laboratories, that established relationships between sound classes and the signal spectrum. Speech recognition research at Bell Labs was defunded after an open letter by John Robinson Pierce that was critical of the field. \n\n1962\n\nIBM demonstrates its 'Shoebox' machine at the 1962 World's Fair, which could understand 16 words spoken in English. \n\n1968\n\nDabbala Rajagopal **\"Raj\" Reddy** conducts a demonstration of voice control of a robot, large vocabulary connected speech recognition, speaker independent speech recognition and unrestricted vocabulary dictation. *Hearsay-I* is one of the first systems capable of continuous speech recognition. Reddy's work in **continuous speech recognition**, based on dynamic tracking of phonemes, lays the foundation for more than three decades of research at Carnegie Mellon University. Funding by DARPA's Speech Understanding Research (SUR) program was responsible for Carnegie Mellon's \"Harpy\" speech understanding system, which could understand 1011 words. \n\n1968\n\nIn the Soviet Union, Professor Taras Vintsyuk proposes the use of dynamic programming methods for time aligning a pair of speech utterances (generally known as **Dynamic Time Warping (DTW)**). Velichko and Zagoruyko use Vintsyuk's work to advance the use of pattern recognition ideas in speech recognition. The duo build a 200-word recogniser. Professor Victorov develops a system that recognises 1000 words in 1980. \n\n1974\n\n**Threshold Technology** becomes the first commercial speech recognition company. \n\n1975\n\nJames Baker starts working on **HMM-based speech recognition systems**. In 1982 James and Janet Baker (students of Raj Reddy) cofound Dragon Systems, one of the first companies to use Hidden Markov Models in speech recognition. \n\n1991\n\nTony Robinson publishes work on **neural networks** in ASR. In 2012 he founded Speechmatics, offering cloud-based speech recognition services. In 2017 the company announced a breakthrough in accelerated new language modelling. By 1994 Robinson's neural network system was in the top 10 in the world in the DARPA Continuous Speech Evaluation trial, while the other nine were HMMs.\n\n2007\n\nGoogle's first effort at speech recognition comes after hiring some researchers from Nuance. By around 2007, LSTM trained by Connectionist Temporal Classification (CTC) started to outperform traditional speech recognition in certain applications. 
\n\nSep \n2015\n\nGoogle's speech recognition experiences a performance jump of 49% through CTC-trained LSTM. \n\nOct \n2017\n\nBaidu Research releases Deep Speech 3 which enables end-to-end training using a pre-trained language model. Deep Speech 1 was Baidu's PoC, followed by Deep Speech 2 that demonstrated how models generalise well to different languages. \n\nNov \n2017\n\nMozilla open sources speech recognition model - DeepSpeech and voice dataset - Common Voice.","meta":{"title":"Speech Recognition","href":"speech-recognition"}} {"text":"# SQLite\n\n## Summary\n\n\nTraditional databases are often difficult to setup, configure and manage. This includes both relational databases (MySQL, Oracle, PostgreSQL) and NoSQL databases (MongoDB). What if you needed a simple database to manage data locally within your application? What if you don't need to manage remote data across the network? What if you don't have lots of clients writing to the database at the same time? This is where SQLite offers a suitable alternative.\n\nSQLite is an in-process library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. It's open source. The library is compact, using only 600KiB with all features enabled. It can work even in low-memory environments at some expense of performance. It's even faster than direct filesystem I/O. The database is stored as a single file.\n\n## Discussion\n\n### How's SQLite different from traditional databases?\n\nTraditional databases are based on the client-server architecture. Database files are not directly accessed by client applications but via a database server. SQLite takes a different approach. There's no server involved. The entire database–all tables, indices, triggers, views–is stored in a single file. \n\nBy eliminating the server, SQLite eliminates complexity. There's no need for multitasking or inter-process communication support from the OS. SQLite only requires read/write to some storage. Taking backup is just a file-copy operation. The file can be copied to any other platform be it 32-bit, 64-bit, little-endian or big-endian. \n\nWhile SQLite doesn't adopt a client-server architecture, it's common to call applications that read/write to SQLite databases as \"SQLite clients\". \n\n\n### What are the use cases where SQLite is a suitable database?\n\nFor embedded devices, particularly for resource-constrained IoT devices, SQLite is a good fit. It has a small code footprint, uses memory and disk space efficiently, is reliable and requires little maintenance. Because it comes with a command line interface, it can be used for analysis on large datasets. Even in enterprises, SQLite can stand in for traditional databases for testing, for prototyping or as a local cache that can make the application more resilient to network outages. \n\nUsing SQLite directly over a networked file system is not recommended. It can still be used for web applications if managed by a web server. It's suited for small or medium websites that receive less than 100K hits/day. \n\nApplications can use SQLite instead of file access commands such as `fopen`, `fread` and `fwrite`. These commands are often used to manage various file formats such as XML, JSON or CSV, and there's a need to write parsers for these. SQLite removes this extra work. Because SQLite packs data efficiently, it's faster than these commands. It's been noted that, \n\n> SQLite does not compete with client/server databases. 
SQLite competes with fopen().\n\n\n### Where shouldn't I use SQLite?\n\nSQLite is not suitable for large datasets, such as exceeding 128 TiB. It's also not suited for high-volume websites, particularly for write-intensive apps. SQLite supports concurrent reads but writes are serialized. Each write typically locks database access for a few dozen milliseconds. If more concurrency is desired due to many concurrent clients, SQLite is not suitable. \n\nIf a server-side database is desired, traditional databases such as MS SQL, Oracle or MySQL have intrinsic support for multi-core and multi-CPU architectures, user management and stored procedures. \n\n\n### How's the adoption of SQLite?\n\nIt's been claimed that SQLite is the most widely deployed database in the world with over one trillion database instances worldwide. It's present in every Android device, every iOS device, every Mac device, every Windows 10 device, every Firefox/Chrome/Safari browser, every Skype/Dropbox/iTunes client, and most set-top boxes and television sets. \n\nGoing by DB-Engines Ranking that's updated monthly, in January 2019, SQLite was seen in tenth position. \n\nSome users of SQLite include Adobe, Airbus, Apple, Bentley, Bosch, Dropbox, Facebook, General Electric, Google, Intuit, Library of Congress, McAfee, and Microsoft. \n\n\n### What tools are available for working with SQLite?\n\nSQLite comes with a command line interface (CLI) called `sqlite3` to create, modify and query an SQLite database. Another useful CLI tool is `sqlite3_analyzer` that tells about memory usage by tables and indices. \n\nFor those who prefer a graphical user interface (GUI), there are a number of utilities: SQLite Database Browser, SQLite Studio, SQLite Workbench, TablePlus, Adminer, DBeaver, DbVisualizer and FlySpeed SQL Query. Most are free and cross-platform with lots of useful features. DbVisualizer has Free and Pro editions. Navicat for SQLite is a paid product. Adminer is implemented in a single PHP file to be executed by a web server. \n\nTo embed SQLite into your own application, there are dozens of language bindings including C, C++, PHP, Python, Java, Go, MATLAB, and many more. SQLite's inventor noted that it was designed from the beginning to be used with Tcl. Tcl bindings have been present even before version 1.0 and regression tests are written in Tcl. \n\n\n### What's the architecture of SQLite?\n\nSQLite compiles SQL statements into bytecode, which runs on a virtual machine. To generate the bytecode, SQL statements go via the tokenizer (identify tokens), parser (associate meaning to tokens based on context) and finally to the code generator (generate the bytecode). \n\nSQLite itself is implemented in C. SQL functions are implemented as callbacks to C routines. In terms of lines of code, the code generator is the biggest followed by the virtual machine. \n\nDatabase itself is organized as a **B-Tree**. Each table and index has a separate tree but all trees are stored on the same file. Information from disk is accessed in default page size of 4096 bytes. Pager does the reading, writing and caching. It also does rollback, atomic commit and locking of the file. \n\n\n### How's the performance of SQLite?\n\nSQLite is 35% faster than filesystem I/O. When holding 10 KB blobs, an SQLite file uses 20% less disk space. \n\nIn one study, SQL CE 4.0 was compared against SQLite 3.6. SQLite fared slightly better for read operation but for everything else SQL CE was significantly faster. 
However, the hardware was a Dell Workstation with two Intel Xeons and 8 GB of RAM. \n\nWhen compared against Realm, SQLite performed better with inserts but with counts and queries Realm was faster. \n\nIn a Python environment, one study compared `pandas` with `sqlite`. SQLite was used in two variants: file database and in-memory database. SQLite performed better with select, filter and sort operations. Pandas did better with group-by, load and join operations. There was no great performance gain with in-memory SQLite as compared to file-based SQLite. \n\n\n### What are the limits and limitations of SQLite?\n\nLimits are documented on SQLite's official website. We mention a few of them: \n\n + Maximum database size is 128 TiB or 140 TB.\n + Maximum number of tables in a schema is 2147483646.\n + Maximum of 64 tables can be used in a JOIN.\n + Maximum number of rows in a table is \\(2^{64}\\).\n + Maximum number of columns in a table/index/view is 1000 but can be increased to 32767.\n + Maximum string/BLOB length is 1 billion bytes by default but this can be increased to \\(2^{31}-1\\).\n + Maximum length of an SQL statement is 1 million bytes but can be increased to 1073741824.\n\nSQLite implements SQL with some omissions. For example, RIGHT OUTER JOIN and FULL OUTER JOIN are not supported. There's partial support for ALTER TABLE. GRANT and REVOKE commands are not supported. This implies there's no user management support. BOOLEAN and DATETIME are examples of data types missing in SQLite. AUTOINCREMENT doesn't work the same way as in MySQL. SQLite has a flexible type system (string can be assigned to an integer column) but from the viewpoint of static typing this can be seen as a limitation. \n\n## Milestones\n\nAug \n2000\n\nVersion 1.0 of SQLite is released. The earliest public release of alpha code dates back to May 2000. D. Richard Hipp creates SQLite from a need to have a database that didn't need a database server or a database administrator. The syntax and semantics are based on PostgreSQL 6.5. \n\nSep \n2001\n\nVersion 2.0.0 of SQLite is released. While 1.0 was based on GNU Database Manager (gdbm), 2.0.0 uses a custom B-tree implementation. \n\nMay \n2003\n\nSupport for **in-memory database** is added in version 2.8.1. This is useful when performance is important and data persistence is not required. Data is lost when the database connection is closed. \n\nSep \n2004\n\nVersion 3.0.7 of SQLite is released. An alpha release of 3.0.0 was available earlier in June 2004. \n\nJan \n2006\n\nIn version 3.3.0, a **shared-cache** mode is introduced in which multiple connections from the same thread can share the same data and schema cache. This reduces IO access and memory requirements. In version 3.5.0 (September 2007), this is extended to an entire process rather than just a thread. \n\nSep \n2006\n\nPython standard library version 2.5 includes SQLite under the package name `sqlite3`. \n\nOct \n2008\n\nFrom version 3.6.4, SQLite language syntax is described as **syntax diagrams** rather than BNF notation. \n\nDec \n2018\n\nVersion 3.26.0 of SQLite is released.","meta":{"title":"SQLite","href":"sqlite"}} {"text":"# Decision Trees for Machine Learning\n\n## Summary\n\n\nIn machine learning, we use past data to predict a future state. When data is labelled based on a desired attribute, we call it *supervised learning*. There are many algorithms facilitating such learning. **Decision tree** is one such. 
A decision tree is a directed graph where a node corresponds to a test on attributes, a branch represents an outcome of the test and a leaf corresponds to a class label.\n\n## Discussion\n\n### Could you explain with an example how a decision tree is formed?\n\nAssuming sufficient data has been collected, suppose we want to predict if an individual is at risk of high blood pressure. We have two classes, namely, high BP and normal BP. We consider two attributes that may influence BP: being overweight and extent of exercise.\n\nThe first decision node (topmost in the tree) could be either weight or extent of exercise. Either way, we also decide the threshold by which data is split at the decision node. We could further split the branches based on the second attribute. The idea is to arrive at a grouping that clearly splits those at risk of high BP and those who are not. In the figure, we note that being overweight clearly separates individuals at high risk. Among those who are not overweight, the exercise attribute must be used to arrive at a suitable separation. \n\nThe process of choosing the decision nodes is iterative. The leaf node is chosen when the data in the node favours one class over all other classes.\n\n\n### How is information entropy related to decision trees?\n\nEntropy measures the uncertainty present in a random variable X. For example, an unbiased coin can either show head or tail when tossed. It has maximum entropy. If the coin is biased, say towards head, then it has less entropy since we already know that head is more likely to occur. The uncertainty is quantified in the range of 0 (no randomness) to 1 (absolute randomness). If there is no randomness, the variable does not hold any information. \n\nConsider a binary classification problem with only two classes, positive and negative. If all samples are positive or all are negative, then entropy is zero or low. If half of the records are of positive class and half are of negative class, then entropy is one or high. Mathematically, entropy is defined as\n\n$$H(X) = - \\sum\\_{i=1}^n p\\_i(x) \\log\\_2 p\\_i(x)$$\n\n\n### What is Gini Impurity?\n\nGini Impurity (also called *Gini Index*) is an alternative to entropy that helps us choose attributes by which we can split the data. It measures the probability of incorrectly identifying a class. The lower the Gini Index, the better the split. The idea is to lower the uncertainty and therefore classify better.\n\nMathematically, Gini Impurity is defined as\n\n$$Gini(X) = 1 - \\sum\\_{i=1}^n p\\_i(x)^2$$\n\nFor an attribute that splits the data into two subsets, \\(x\\_1\\) and \\(x\\_2\\), with \\(N\\_1\\) and \\(N\\_2\\) as the number of instances and \\(N=N\\_1+N\\_2\\), the Gini Impurity of the split is calculated from\n\n$$Gini(X) = \\frac{N\\_1}{N} Gini(x\\_1) + \\frac{N\\_2}{N} Gini(x\\_2)$$\n\nwhere\n\n$$Gini(x\\_1) = 1 - \\sum\\_{i=1}^n p\\_i(x\\_1)^2$$\n\n\n### What is information gain?\n\nInformation gain is the change in entropy, from before the split to after the split, with respect to the selected attribute. We select that attribute for splitting data if it gives a high information gain. Information gain is also called **Kullback-Leibler Divergence**. \n\nInformation Gain IG(S,A) for a set S is the effective change in entropy after deciding on a particular attribute A. 
Mathematically,\n\n$$IG(S,A)=H(S)-H(S,A)$$\n\n\n### What are the pros and cons of decision trees?\n\nAmong the pros are, \n\n + Decision trees are easily interpretable since they mirror the human decision-making process and are a white-box model.\n + Decision trees can be used for both regression (continuous variables) and classification (categorical variables).\n + Decision trees are non-parametric. This means we make no assumption on linearity or normality of data.\n\nAmong the cons are, \n\n + Small changes to data may alter the tree and therefore the results. Decision trees are thus not robust.\n + When a tree grows large and complex, it tends to overfit. This is overcome by limiting tree depth, boosting or bagging.\n\n\n## Milestones\n\n1963\n\nMorgan and Sonquist propose **Automatic Interaction Detection (AID)**, the first *regression tree* algorithm. This recursively splits data depending on impurity and stops splitting on reaching a certain level of impurity. \n\n1972\n\nMessenger and Mandell propose **Theta Automatic Interaction Detection (THAID)**, the first *classification tree* algorithm. This recursively splits data to maximize cases in the modal category. \n\n1980\n\nKass proposes **Chi-square Automatic Interaction Detection (CHAID)** that splits data into nodes to start with and then merges nodes that have non-significant splits. This is quicker but can be inaccurate. \n\n1984\n\nResearchers improve upon AID and THAID to arrive at **Classification and Regression Trees (CART)**. This is done to improve accuracy by pruning the tree. CART also gives variable importance scores. \n\n1986\n\nQuinlan proposes **Iterative Dichotomiser (ID3)** where he uses *information entropy* to quantify impurity. This is then used to calculate gain ratio for *two-class decisions*. \n\n1993\n\nAs an extension of ID3, Quinlan proposes **C4.5**. This can handle *multiple-class decisions*.","meta":{"title":"Decision Trees for Machine Learning","href":"decision-trees-for-machine-learning"}} {"text":"# Backward Compatibility\n\n## Summary\n\n\nTechnology is rarely static. Technical specifications and products based on them evolve continually. When new versions are released, we can't simply throw away old versions and ask everyone to upgrade. There's a need for new versions to interwork with older versions. This is what we mean by **backward compatibility**. \n\nIndustry bodies that define standards must consider backward compatibility. Within companies, when technologies and products are evolved, product managers, architects and developers must consider backward compatibility. Backward compatibility ensures that customers can continue to use current products and services without being forced to upgrade. Customers can plan a phased migration towards the latest versions.\n\nBackward compatibility is not without its critics. It has the negative effect of prolonging older technologies. Industry may become reluctant to adopt the latest technologies unless they offer significant benefits. There's also an additional cost for designing and maintaining backward-compatible implementations.\n\n## Discussion\n\n### Which technology layers need to consider backward compatibility?\n\nBackward compatibility is applicable at both hardware and software levels. In **hardware**, it's about pin compatibility so that old peripherals can interface with newer versions of the system hardware. 
It can also be about clock synchronization, signal levels or timing so that different hardware components can interwork correctly.\n\nIn **software**, backward compatibility is defined for open or public interfaces of software components or products. This is essential when products are being built from many third-party components. At the **system level**, newer hardware should support older software. \n\nModern software often rely on web APIs for specific functionality. The **API layer** needs to be backward compatible so that calling a newer version of the API doesn't require changes in client software. In general, any communication **protocol** (including for instance ISO, IETF or IEEE standards) must be evolved in a backward-compatible manner.\n\nBackward compatibility applies to the **data layer** as well. Data is stored, encoded and decoded in specific formats and versions. However, as data readers and processing tools evolve, they must be able to read and understand older formats. This also applies to how **database schemas** are evolved. \n\n\n### What are some examples of backward compatibility?\n\nThe Raspberry Pi hardware has evolved through many revisions. The functions of the 26 pins present in the original Model 1A were changed in a backward-compatible manner, that is, Do Not Connect (DNC) pins are mapped to 5V, 3.3V or GND; and other pins are unchanged. This means that peripherals that could connect to Model 1A via the header can also connect to Model 4B. From a software perspective, recent releases of 32-bit Raspberry Pi OS will work on Model 1A as well. \n\nThe success of Palm Pilot is partly attributed to its ability to hot sync with Microsoft Windows and connect to PCs via USB ports. By supporting existing interfaces, Palm Pilot enabled vendors and users adopt it more easily. \n\nMicrosoft Office 2007 introduced a new format to save documents and spreadsheets: `*.docx` and `*.xlsx` that are essentially based on XML. However, Office 2007 can read from and save to `*.doc` and `*.xls` legacy formats. \n\nIn cellular telecom, CDMA2000 1xEV-DO and CDMA2000 1xEV-DV can interwork with CDMA2000 1X and cdmaOne. \n\n\n### What's the difference between backward and forward compatibility?\n\nConsider a client-server example in which different versions of clients and servers operate. If the new server version works with all clients that worked with the old server version, then it's backward compatible. If the new server version will continue to work with all clients in any future update, then it's forward compatible. In the figure, \\(S\\) is backward compatible with \\(S^{-1}\\) and forward compatible with \\(S^{+1}\\). We could say that, \n\n> When a product or interface is said to be forward compatible, it is really a statement of intent that future versions of the product will be designed to be backward compatible with this one.\n\nForward and backward compatibility are also called upward and downward compatibility, though some sources apply the former terms to versions of the same entity and the latter terms to communicating entities. \n\nNew software that can read data in older formats is considered backward compatible. Old software that can read newer formats (by skipping unknown fields) is forward compatible. But it's a matter of perspective. 
We can say that the new format itself is backward compatible.\n\n\n### What are the different types backward compatibility?\n\nSource compatibility means that software compiles even when some dependencies have been updated to a newer version. With binary compatibility, code compiles and also links to newer components. Wire compatibility is applicable to communication protocols. It means that two entities (such as client and server) can communicate even when they're running different protocol versions. Semantic compatibility implies that communicating or interfaced entities process or interpret the messages correctly. \n\nWhen products, protocols or standards evolve, we talk about each version being backward compatible with earlier versions. Another kind of backward compatibility is about new technologies interworking with older ones. When CSS came out, it allowed developers to continue styling concrete elements in HTML documents. \n\n\n### What are the main techniques to achieve backward compatibility?\n\nIt's okay to add new interfaces, new operations to existing interfaces or new optional parameters to existing operations. Anything else runs the risk of breaking backward compatibility. In particular, avoid changing the order of enumeration values, removing or renaming operations, and adding mandatory parameters to existing operations. Don't change field semantics or make input validation more restrictive. \n\nAvoid versioning. Instead prefer to add a new resource or new endpoint. Where not possible, server deployments can run multiple versions in parallel and retire older versions gracefully. Upgraded servers should check for client support before invoking new features. When clients upgrade to newer interface versions, they should support all mandatory features. Communicating entities can negotiate revision levels and ignore features that are not mutually understood. Some call this an **etiquette standard**. \n\nFollowing Postel's Law, clients can be conservative with requests and tolerant of API extensions. However, against Postel's Law, servers should return useful feedback for unknown or invalid inputs. \n\nObey technology-specific rules. For example, message types in Protocol Buffers can be evolved in a backward-compatible manner by following specific rules. \n\n\n### What's the cost of making technology backward compatible?\n\nIn the world of microservices, it's common for a service to maintain multiple versions of the API. Older versions are deprecated but clients continue to use them over a long period of time. This creates a technical debt, particularly for integration tests. \n\nBackward compatibility can come in the way of innovation. The need to continue with older technologies and make them coexist with newer technologies becomes a challenge. The final solution would also be more expensive for both vendors and consumers. Sony's PS3 is an example. It included additional hardware for backward compatibility with PS1 and PS2. Back in the early 1990s, Europeans invented GSM without attempting to make it backward compatible with various standards. This allowed them to apply state-of-the-art technologies. \n\nBackward compatibility can lead to less efficient solutions. Reduced codec efficiency was shown in a study of G.722 when extended for super-wideband. \n\n## Milestones\n\n1950\n\nThe Radio Corporation of America (RCA) introduces the first **backward-compatible colour television system**. In 1952, this is accepted as a standard for broadcast television in the U.S. 
by the National Television Systems Committee (NTSC). Information is composed of luminance (brightness) and chrominance (colour) components. Luminance carries the finer details and is allocated more bandwidth. The chrominance component, which consists of hue and saturation, is ignored by B&W TV receivers. \n\n1982\n\n**The Atari 5200** console is launched. However, it's **not backward compatible**, meaning that Atari 2600 games can't be played on the Atari 5200. Later, Atari gives way to consumer pressure and creates an adaptor so that 2600 games could be played on the 5200. \n\n1998\n\nCircuit City introduces a new DVD rental format called **DIVX** (aka Digital Video Express). Unfortunately, the DIVX format needs a DIVX player since current DVD players don't understand this format. DIVX players themselves are backward compatible: they can play both DIVX and DVD formats. However, backward compatibility actually backfires for DIVX commercially. In 1999, the DVD format continues to be more popular than DIVX. \n\n2003\n\nUnder EU laws, Restriction of Hazardous Substances (RoHS) in electrical and electronic equipment is defined. In July 2011, the **RoHS Directive** comes into force. Among the ten substances is lead. During the early transition period, tin-lead components are assembled with lead-free paste (forward compatibility). During the late transition period, some high reliability applications continue to use conventional tin-lead paste with lead-free components (backward compatibility). \n\n2007\n\n**Microsoft releases Office 2007** that supports newer XML-based file formats. However, it can read and write in older formats and is hence **backward compatible**. While the earlier Office 2003 was not designed to be forward compatible, it's possible to update Office 2003 so that it can read newer formats. \n\n2008\n\nThe first phone using Android OS is launched. Apps for this OS are built using the **Android SDK**, which is **forward compatible** but not backward compatible. Apps built with a specific `compileSdkVersion` (further qualified by `targetSdkVersion`) will run on newer Android versions as well. This is because old APIs are deprecated but never removed. \n\n2015\n\nMicrosoft announces the **Xbox One Backward Compatibility** program. The aim is to run original Xbox and Xbox 360 games on newer Xbox One consoles. By 2019, over 600 Xbox and Xbox 360 games could be played on newer generation hardware including the use of newer hardware features. With Xbox Series X and S, backward compatibility becomes an essential requirement. This also helps in preserving older games. Moreover, legacy games benefit from the latest hardware with features such as Auto HDR and doubled framerate.","meta":{"title":"Backward Compatibility","href":"backward-compatibility"}} {"text":"# IoT Alliances and Consortiums\n\n## Summary\n\n\nAlliances and consortiums are shaping the IoT landscape to reach a level of standardization, maturity and acceptance. They allow a reach which no individual organization can achieve. They give confidence to the market that there's a larger commitment to the goals of the alliance or consortium.\n\nAlliances and consortiums also enable mass adoption of technology. Organizations like IEEE leverage their brand and mass reach to promote generic frameworks which cut across competing technologies. They also help to consolidate an otherwise fragmented marketspace. 
This helps businesses reach economies of scale to make the price of solutions affordable to the common man or enable the industry to deploy systems in a large scale.\n\nWhile many alliances/consortiums existed in 2014, later years saw mergers and collaborations, leading to less market fragmentation.\n\n## Discussion\n\n### What's the role of IoT alliances and consortiums?\n\nAlliances and consortiums provide for the following:\n\n + Help to promote standards and product certifications in a particular area (Examples: RFID Consortium, NFC Forum, Wi-Fi Alliance, Zigbee Alliance, LoRa Alliance )\n + Facilitate joint R&D efforts, influence direction of the industry, promote best practices, and provide testbeds (Example: IIC)\n + Help new entrants or startups to start quickly by providing required licenses, ecosystem or access to standards so that they can concentrate on building their differentiatorsStandards organizations can sometimes be guilty of catering only to those vendors who are on their committees. Alliances and consortiums, being non-profit, help drive requirements for standardization, and subsequent testing and certification in a vendor neutral manner. But when they also create the standards and fail to make them public, the entire industry suffers. \n\n\n### What should we look for when joining a particular IoT alliance or consortium?\n\nWe can evaluate based on the following: \n\n + **Openness**: What are their policies on intellectual property? Are there royalties involved in implementing their standards? Is it a closed group that requires paid membership?\n + **Availability**: Is their work (standards, white papers, guidelines) already available or is it still a work in progress?\n + **Adoption**: Are people in industry already adopting and using their work?Back in 2014, Ian Skerrett evaluated a number of IoT alliances and consortiums based on the above metrics and published a descriptive analysis, which is a good read.\n\nIn 2018, an online survey of 502 participants showed that Eclipse IoT, Apache Foundation, W3C, and IEEE are among the top consortiums that developers considered important in the IoT space. However, we should note that this survey was conducted by Eclipse IoT, AGILE IoT, IEEE and OMA. The survey targeted only members of their communities. \n\n\n### What are the major IoT alliances and consortiums?\n\nAmong the many IoT alliances and consortiums, some important ones are the following:\n\n + Industrial Internet Consortium (IIC)\n + IEEE\n + Open Connectivity Foundation (OCF)\n + Thread Group\n + LoRa AllianceIn one study from early 2018, the following important alliances were listed: OMA, Genivi, IIC, IoT Consortium, TrustedIoTAlliance, OneM2M, AIOTI, and OCF. \n\nSome alliances and consortiums are vertically focused on a particular industry. For example, Thread Group is focused on connected homes; Apple's HealthKit is about health and fitness; EnOcean Alliance is about building automation; Open Automotive Alliance is about connected cars; HART Communication Foundation looks at the industrial IoT space. \n\n\n### How does Industrial Internet Consortium (IIC) contribute to IoT?\n\nThe IIC was founded by AT&T, Cisco, General Electric, IBM, and Intel in March 2014. Though it's parent company is the Object Management Group, IIC itself is not a standardization body. It had around 155 members as of August 2020. 
\n\nIt was formed to bring together industry players — from multinational corporations, small and large technology innovators to academia and governments — to accelerate the development, adoption and widespread use of Industrial Internet technologies. IIC members work to create an ecosystem for insight and thought leadership, interoperability and security via reference architectures, security frameworks and open standards, and real world implementations to vet technologies and drive innovations (called testbeds). \n\nAs per Compass Intelligence, IIC won the *top IoT organization influencer of the year* award for 2018. \n\n\n### What role does IEEE play in IoT?\n\nThe IEEE IoT Initiative was launched in 2014. It aims to help engineering and technology professionals learn, share knowledge and collaborate around IoT. \n\nIEEE has a variety of programs that can help solution providers that are trying to understand IoT, including the IEEE IoT Technical Community, which is made up of members involved in research, application and implementation of IoT. IEEE also heads the IoT Scenarios Program, which is an interactive platform to demonstrate use cases, business models and service descriptions.\n\nIn March 2020, IEEE published an architectural framework for IoT, **IEEE 2413-2019**. IEEE also publishes a number of peer-reviewed IoT papers across its different publications, including *IEEE Internet of Things Journal (IoT-J)* that's dedicated to IoT. IEEE also organizes many conferences and events on IoT around the world, in particular the IEEE World Forum on Internet of Things (WF-IoT). \n\n\n### What are the major developments in the Open Connectivity Foundation (OCF)?\n\nOpen Interconnect Consortium (OIC) in 2016 changed its name to Open Connectivity Foundation (OCF) when Microsoft and Qualcomm joined this group. **The AllSeen Alliance** was developing the **AllJoyn** standard, which was first created by Qualcomm. In October 2016, the AllSeen Alliance and OCF merged together to become the new OCF. The merged group continues working on the open-source IoTivity (OCF) and AllJoyn (AllSeen) projects under the auspices of the Linux Foundation, eventually merging them into a single **IoTivity** standard. \n\nDeveloping interoperability standards is one of the key aims of the OCF, a single IoTivity implementation that offers the best of the previous two standards. Current devices running on either AllJoyn or IoTivity are expected to be interoperable and backward-compatible with the unified IoTivity standard. There are millions of AllJoyn-enabled products on the market. \n\n\n### What's the role of Thread Group among IoT alliances?\n\nThread is mesh network built on open standards and IPv6/6LoWPAN protocols to simply and securely connect products around the house. 6LoWPAN is a power-efficient personal area network protocol with underlying standards of IPv6 and IEEE 802.15.4. Thread is also capable of running for a long period of time from a long-lasting battery. \n\nMembers of the Thread Group include ARM, Nordic, NXP, Nest, Qualcomm, Tridonic, Texas Instruments, Silicon Labs, and many more. Thread Group members get practical resources to help grow the world of connected devices by joining a global ecosystem of technology innovators. The group also offers a certification program based on Thread 1.1 specification. Thread can run multiple applications layers including Dotdot, LWM2M, OCF or Weave. 
\n\nThread Group liaises with other IoT alliances and consortiums like Open Connectivity Foundation and Zigbee Alliance and tries to provide a one-stop shop for the home IoT.\n\n\n### What is special about Lora Alliance group?\n\nThe LoRa Alliance is a fast growing technology alliance. It's a non-profit association of more than 500 member companies, committed to enabling large scale deployment of Low Power Wide Area Networks (LPWAN) IoT through the development and promotion of the **LoRaWAN** open standard. \n\nMembers benefit from a vibrant ecosystem of active contributors offering solutions, products & services, which create new and sustainable business opportunities. \n\nThrough standardisation and the accredited certification scheme the LoRa Alliance delivers the interoperability needed for LPWA networks to scale, making LoRaWAN the premier solution for global LPWAN deployments. The speciality of Lora Alliance is that it operates primarily on the WAN side of the IoT space. \n\n\n### What are the other alliances and consortiums to keep a watch on?\n\nIn 2015, some companies got together to address security aspects of IoT. They defined the Open Trust Protocol (OTrP). It's overseen by the **OTrP Alliance** while collaborating with IETF and Global Platform. The key insight is that system-level root of trust is necessary. Security must be addressed at all levels, from firmware to applications. \n\nTrusted IoT Alliance applied blockchain technology to IoT. In January 2020, this alliance merged into IIC. \n\nFounded in 2008, IPSO Alliance promoted the Internet Protocol for \"smart object\" communications, advocating for IP networked devices in health care, industrial and energy applications. In 2018, it merged with Open Mobile Alliance (OMA) to form OMA SpecWorks. \n\n## Milestones\n\nDec \n2013\n\n**AllSeen Alliance** is formed. Premier level members include Haier, LG Electronics, Panasonic, Qualcomm, Sharp, Silicon Image and TP-LINK. \n\n2014\n\nThis is the year when alliances and consortiums start appearing. The Thread Group, Open Interconnect Consortium (OIC) and Industrial Internet Consortium (IIC) are all started in 2014. IEEE IoT Initiative is also launched. Thread Group opens for membership in October. \n\nMar \n2015\n\nOpen, non-profit **LoRa Alliance** becomes operational. It's mission is to promote global adoption of the LoRaWAN standard. By 2018, it has 500+ members. \n\nNov \n2015\n\nTo accelerate distributed computing, networking and storage for IoT, ARM, Cisco, Dell, Intel, Microsoft and Princeton University Edge Laboratory establish **OpenFog Consortium**. Focus is to build frameworks and architectures for end-to-end scenarios with capabilities pushed to network edges. \n\n2016\n\nIn February, Open Interconnect Consortium (OIC) is renamed into **Open Connectivity Foundation (OCF)**. In October, AllSeen Alliance merges into OCF. OCF will now sponsor both IoTivity and AllJoyn open source projects at The Linux Foundation. \n\nOct \n2017\n\nIndustrial Internet Consortium (IIC) and EdgeX Foundry merge with a mandate \"to align efforts to maximize interoperability, portability, security and privacy for the industrial Internet\". EdgeX Foundry, a Linux-backed, open-source project has been \"building a common interoperability framework to facilitate an ecosystem for IoT edge computing\". \n\nDec \n2019\n\nApple, Google, Amazon and Zigbee come together to form a new working group under the Zigbee Alliance called **Connected Home over IP (CHIP)**. 
It's based on IPv6 and meant to work with any PHY or MAC layer for any application or product. However, CHIP leaves out OCF from the discussions, which might create interoperability issues.","meta":{"title":"IoT Alliances and Consortiums","href":"iot-alliances-and-consortiums"}} {"text":"# Go kit\n\n## Summary\n\n\nBuilding an app based on microservices architecture is not trivial. We would need to implement many specialized parts such as RPC safety, system observability, infrastructure integration, program design, and more. Go language's standard library itself isn't adequate to implement these easily. This is where Go kit becomes useful. \n\nUsing Go kit means that developers can focus on their application rather than grapple with the challenges of building a distributed system. Go kit is positioned as a **toolkit for microservices**. It's not a framework. It doesn't force developers to use everything it provides. It's also lightly opinionated about what components or architecture that developers should use. Go kit is really a library. Developers can choose only what they want. \n\nGo kit is open source and follows the MIT License.\n\n## Discussion\n\n### What's the relevance of Go kit in the context of microservices?\n\nWhen an app is composed of dozens or even hundreds of microservices, there's lot more to be done than just defining and building those services. We need consistent logging across services. Metrics have to be collected both at infrastructure layer and application layer. When a request comes in, we may need to trace how this request moves across multiple services. For robustness, we need rate limiting and circuit breaking. Each service will likely be running multiple instances that are dynamically created and destroyed. Service discovery and load balancing are therefore essential. \n\nWhile Go has a much lower resource footprint compared to Ruby, Scala, Clojure or Node, back in 2014, it didn't have anything for a microservices architecture. Go kit's inventor, Peter Bourgon at SoundCloud, found that teams were choosing Scala and JVM because of better support for \"microservices plumbing\". In fact, SoundCloud adopted Twitter's Finagle. Go didn't have an equivalent to this. Thus was born the idea for Go kit. It's really, \n\n> a set of standards, best practices, and usable components for doing microservices in Go\n\n\n### Which are the key components of Go kit?\n\nGo kit microservices have three layers: Transport, Endpoint, Service. Requests come in at transport layer, move through endpoint layer and arrive at service layer. Responses take the reverse route. \n\nGo kit supports a number of transports. Thus, legacy transports and modern transports can be supported within the same service. Some supported transports include NATS, gRPC, Thrift, HTTP, AMQP and AWS Lambda. \n\nThe primary messaging in Go kit is RPC. An endpoint is an RPC method that maps to a service method. A single endpoint can be exposed via multiple transports. \n\nService layer is where all the business logic resides. This is the concern of application developers who need not know anything about endpoints or transports, which are taken care of by Go kit. A service is modelled as an interface and its implementation contains the business logic. \n\nDue to this separation of concerns, Go kit encourages you to adopt SOLID design principles and clean architecture. In fact, Go kit can be used to build elegant monoliths as well. 
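To make the three layers concrete, here is a minimal sketch modelled on Go kit's well-known stringsvc example. The service, request and response names are illustrative, and exact signatures can vary slightly between Go kit releases, so treat this as a starting point rather than canonical usage. It also wires in a small endpoint middleware of the kind described in the next section.

```go
package main

import (
	"context"
	"encoding/json"
	"errors"
	"log"
	"net/http"
	"os"
	"strings"

	"github.com/go-kit/kit/endpoint"
	httptransport "github.com/go-kit/kit/transport/http"
)

// Service layer: business logic behind an interface.
type StringService interface {
	Uppercase(s string) (string, error)
}

type stringService struct{}

func (stringService) Uppercase(s string) (string, error) {
	if s == "" {
		return "", errors.New("empty string")
	}
	return strings.ToUpper(s), nil
}

// Request/response types carried through the endpoint layer.
type uppercaseRequest struct {
	S string `json:"s"`
}

type uppercaseResponse struct {
	V string `json:"v"`
}

// Endpoint layer: adapts the service method to Go kit's generic endpoint.Endpoint.
func makeUppercaseEndpoint(svc StringService) endpoint.Endpoint {
	return func(ctx context.Context, request interface{}) (interface{}, error) {
		req := request.(uppercaseRequest)
		v, err := svc.Uppercase(req.S)
		if err != nil {
			return nil, err // endpoint-level errors are visible to middleware
		}
		return uppercaseResponse{V: v}, nil
	}
}

// Endpoint middleware: a decorator that takes an endpoint and returns an endpoint.
func loggingMiddleware(logger *log.Logger) endpoint.Middleware {
	return func(next endpoint.Endpoint) endpoint.Endpoint {
		return func(ctx context.Context, request interface{}) (interface{}, error) {
			logger.Println("calling uppercase endpoint")
			defer logger.Println("called uppercase endpoint")
			return next(ctx, request)
		}
	}
}

func main() {
	logger := log.New(os.Stderr, "", log.LstdFlags)
	svc := stringService{}

	var uppercase endpoint.Endpoint
	uppercase = makeUppercaseEndpoint(svc)
	uppercase = loggingMiddleware(logger)(uppercase)

	// Transport layer: expose the endpoint over HTTP with JSON encoding.
	uppercaseHandler := httptransport.NewServer(
		uppercase,
		func(_ context.Context, r *http.Request) (interface{}, error) {
			var req uppercaseRequest
			err := json.NewDecoder(r.Body).Decode(&req)
			return req, err
		},
		func(_ context.Context, w http.ResponseWriter, response interface{}) error {
			return json.NewEncoder(w).Encode(response)
		},
	)

	http.Handle("/uppercase", uppercaseHandler)
	logger.Fatal(http.ListenAndServe(":8080", nil))
}
```

POSTing `{"s": "hello"}` to `/uppercase` should return `{"v":"HELLO"}`. Note how `main` wires the component graph explicitly, in line with the best practices discussed later.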
\n\n\n### What's the concept of middleware in Go kit?\n\nGo kit takes the idea of \"separation of concerns\" further with the use of middleware. Rather than bloat core implementations with lots of functionality, additional functionality is provided through middleware using the **decorator pattern**. Go kit provides middleware for endpoint layer while application developers can write middleware for the service layer. We can also chain multiple middleware. \n\nHere's a sample of middleware: \n\n + Endpoint: Load balancing, safety, operational metrics, circuit breaking, rate limiting.\n + Service: Includes anything that needs knowledge of business domain including app-level logging, analytics, and instrumentation.As can be expected of the decorator pattern, an endpoint middleware accepts an endpoint and returns an endpoint. A service middleware accepts a service and returns a service. \n\nGo kit doesn't force you to use all the middleware the come with it. For example, if you're using Istio that already comes with circuit breaking and rate limiting, then you ignore their equivalent Go kit middleware. \n\n\n### Which are the packages available in Go kit?\n\nWithout being exhaustive, here are some Go kit packages: \n\n + **Authentication**: Basic, casbin, JWT.\n + **Circuit Breaker**: Hystrix, GoBreaker, and HandyBreaker.\n + **Logging**: Provide an interface for structured logging. Recognizes that logs are data. They need context and semantics to be useful for analysis. Supported formats are logfmt and JSON.\n + **Metrics**: Provides uniform interfaces for service instrumentation. Comes with counters, gauges, and histograms. Has adapters for CloudWatch, Prometheus, Graphite, DogStatsD, StatsD, expvar, and more.\n + **Rate Limit**: Uses Go's token bucket implementation.\n + **Service Discovery**: Consul, DNS SRV, etcd, Eureka, ZooKeeper, and more.\n + **Tracing**: OpenCensus, OpenTracing, and Zipkin.\n + **Transport**: AMQP, AWS Lambda, gRPC, HTTP, NATS, Thrift.\n\n### Could you share some best practices for using Go kit?\n\nDon't change your platform or infrastructure to suit Go kit. Remember that Go kit is only lightly opinionated. It will integrate well with whatever platform or infrastructure you're already using. \n\nGo kit adopts domain-driven design and declarative composition. It uses interfaces as contracts. You should too when implementing your services. \n\nEach middleware should have a single responsibility. \n\nOther frameworks might use dependency injection. In Go kit, the preferred approach is to wire up the entire component graph in `func main`. This forces you to pass them explicitly to components via constructors. Avoid using global state. \n\nErrors can be encoded in two ways: as an error field in response struct, or an error return value. For services, prefer the former method. For endpoints, prefer the latter since these are recognized by middleware such as circuit breaker. \n\nIndividual services should not collect or aggregate logs. This is the job of the platform. Your services should write to stdout/stderr. \n\nAccess to databases will likely be in a service implementation. Better still, use an interface to model persistence operations while an implementation wraps the database handle. \n\n\n### What are some criticisms or limitations of Go kit?\n\nIt's been said that Go kit is **too verbose**. Adding an API to a microservice involves a lot of boilerplate code. 
While Go kit nicely separates the business logic, endpoint and transport layers, this abstraction comes at a cost. The code is **harder to understand**. \n\nHowever, developers who don't wish to write boilerplate code can make use of code generators. Two generators are *kitgen* and Kujtim Hoxha's *GoKit CLI*. \n\nInspired by Go kit, the engineering team at Grab created **Grab-Kit**. They felt that Go kit still requires a lot of manual work, something Grab-Kit aims to solve. The idea is to codify best practices. Grab-Kit standardizes APIs, SDKs, error handling, etc. It uses a consistent middleware stack across services, including automatic profiling and execution traces. \n\n## Milestones\n\nFeb \n2015\n\nPeter Bourgon, while working at SoundCloud, makes the first commit of Go kit code. However, the codebase is not versioned at this point. The first tagged version, v0.1.0, happens only in June 2016. \n\nJan \n2018\n\nBased on GitHub stars, Go kit is seen as the most popular toolkit, followed by Micro, Gizmo and Kite. \n\nMar \n2018\n\nVersion 0.7.0 is released. This includes the **kitgen** code generator, **JSON-RPC** transport, and **etcdv3** support in service discovery. \n\nNov \n2018\n\nVersion 0.8.0 is released. This includes **NATS** and **AMQP** transports. \n\nJun \n2019\n\nVersion 0.9.0 is released. This supports **AWS Lambda** as a transport.","meta":{"title":"Go kit","href":"go-kit"}} {"text":"# Time Series Analysis\n\n## Summary\n\n\nTime series data is an ordered sequence of observations of well-defined data items at regular time intervals. Examples include daily exchange rates, bank interest rates, monthly sales, heights of ocean tides, or humidity. Time Series Analysis (TSA) finds hidden patterns and obtains useful insights from time series data. TSA is useful in predicting future values or detecting anomalies across a variety of application areas. \n\nHistorically, TSA was divided into time domain versus frequency domain approaches. The time domain approach used the autocorrelation function whereas the frequency domain approach used the Fourier transform of the autocorrelation function. Likewise, there are also Bayesian and non-Bayesian approaches. Today these differences are of less importance. Analysts use whatever suits the problem. \n\nWhile most methods of TSA are from classical statistics, since the 1990s artificial neural networks have been used. However, these can excel only when sufficient data is available.\n\n## Discussion\n\n### What are the main objectives of time series analysis?\n\nTSA has the following objectives: \n\n + **Describe**: Describe the important features of the time series data. The first step is to plot the data to look for the possible presence of trends, seasonal variations, outliers and turning points.\n + **Model**: Investigate and find out the generating process of the time series.\n + **Predict**: Forecast future values of an observed time series. Applications are in predicting stock prices or product sales.\n\n### What are some applications of time series analysis?\n\nTSA is used in numerous practical fields such as business, economics, finance, science, or engineering. Some typical use cases are Economic Forecasting, Sales Forecasting, Budgetary Analysis, Stock Market Analysis, Yield Projections, Process and Quality Control, Inventory Studies, Workload Projections, Utility Studies, and Census Analysis. \n\nIn TSA, we collect and study past observations of a time series.
We then develop an appropriate model that describes the inherent structure of the series. This model is then used to generate future values for the series, that is, to make forecasts. Time series analysis can be termed as the act of predicting the future by understanding the past. \n\nForecasting is a common need in business and economics. Besides forecasting, TSA is also useful to see how a single event affects the time series. TSA can also help towards quality control by pointing out data points that are deviating too much from the norm. Control and monitoring applications of TSA are more common in science and industry. \n\n\n### What are the main components of time series data?\n\nThere are many factors that result in variations in time series data. The effects of these factors are studied by following four major components: \n\n + **Trends**: A trend exists when there is a long-term increase or decrease in the data. It doesn't have to be linear. Sometimes we will refer to a trend as \"changing direction\" when it goes from an increasing trend to a decreasing trend.\n + **Seasonal**: A seasonal pattern exists when a series is influenced by seasonal factors (quarterly, monthly, half-yearly). Seasonality is always of a fixed and known period.\n + **Cyclic Variation**: A cyclic pattern exists when data exhibits rises and falls that are not of fixed period. The duration of these cycles is more than a year. For example, stock prices cycle between periods of high and low values but there's no set amount of time between those fluctuations.\n + **Irregular**: The variation of observations in a time series which is unusual or unexpected. It's also termed as a *Random Variation* and is usually unpredictable. Floods, fires, revolutions, epidemics, and strikes are some examples.\n\n### What is a stationary series and how important is it?\n\nGiven a series of data points, if the mean and variance of all the data points remain constant with time, then we call it a stationary series. If these vary with time, we call it a non-stationary series. \n\nMost prices (such as stock prices or price of Bitcoins) are not stationary. They are either drifting upward or downward. Non-stationary data are unpredictable and cannot be modeled or forecasted. The results obtained by using non-stationary time series may be spurious in that they may indicate a relationship between two variables where one doesn't exist. In order to receive consistent, reliable results, non-stationary data needs to be transformed into stationary data.\n\n\n### Given a non-stationary series, how can I make it stationary?\n\nThe two most common ways to make a non-stationary time series curve stationary are: \n\n + **Differencing**: In order to make a series stationary, we take a difference between the data points. Suppose the original time series is \\(X\\_1, X\\_2, X\\_3, \\ldots X\\_n\\). Series with difference of degree 1 becomes \\(X\\_2-X\\_1, X\\_3-X\\_2, X\\_4-X\\_3, \\ldots, X\\_n-X\\_{n-1}\\). If this transformation is done only once to a series, we say that the data has been **first differenced**. This process essentially eliminates the trend if the series is growing at a fairly constant rate. If it's growing at an increasing/decreasing rate, we can apply the same procedure and difference the data again. The data would then be **second differenced**.\n + **Transformation**: If the series can't be made stationary, we can try transforming the variables. 
Log transform is probably the most commonly used transformation for a diverging time series. However, it's normally suggested to use transformation only when differencing is not working.\n\n### What are the different models used in Time Series Analysis?\n\nSome commonly used models for TSA are:\n\n + **Auto-Regressive (AR)**: A regression model, such as linear regression, models an output value based on a linear combination of input values. \\(y = \\beta\\_0 + \\beta\\_1x + \\epsilon\\). In TSA, input variables are observations from previous time steps, called *lag variables*. For p=2, where p is the order of the AR model, *AR(p)* is \\( x\\_t = \\beta\\_0 + \\beta\\_1 x\\_{t-1} + \\beta\\_2 x\\_{t-2}\\)\n + **Moving Average (MA)**: This uses past forecast errors in a regression-like model. For q=2, *MA(q)* is \\(x\\_t = \\theta\\_0 + \\theta\\_1 \\epsilon\\_{t-1} + \\theta\\_2 \\epsilon\\_{t-2}\\)\n + **Auto-Regressive Moving Average (ARMA)**: This combines both AR and MA models. *ARMA(p,q)* is \\(\\begin{align}x\\_t = &\\beta\\_0 + \\beta\\_1 x\\_{t-1} + \\beta\\_2 x\\_{t-2} + \\ldots + \\beta\\_p x\\_{t-p} + \\\\ &\\theta\\_0 + \\theta\\_1 \\epsilon\\_{t-1} + \\theta\\_2 \\epsilon\\_{t-2} + \\ldots + \\theta\\_q \\epsilon\\_{t-q} \\end{align}\\)\n + **Auto-Regressive Integrated Moving Average (ARIMA)**: The above models can't handle non-stationary data. *ARIMA(p,d,q)* handles the conversion of non-stationary data to stationary: *I* refers to the use of differencing, *p* is lag order, *d* is degree of differencing, *q* is averaging window size.\n\n### What are autocorrelations in the context of time series analysis?\n\nAutocorrelations are numerical values that indicate how a data series is related to itself over time. It measures how strongly data values separated by a specified number of periods (called the **lag**) are correlated to each other. **Auto-Correlation Function (ACF)** defines autocorrelation for a specific lag. \n\nAutocorrelations may range from +1 to -1. A value close to +1 indicates a high positive correlation while a value close to -1 implies a high negative correlation. These measures are most often evaluated through graphical plots called **correlogram**. A correlogram plots the auto-correlation values against lag. Such a plot helps us choose the order parameters for ARIMA model.\n\nIn addition to suggesting the order of differencing, ACF plots can help in determining the order of MA(q) models. **Partial Auto-Correlation Function (PACF)** correlates a variable with its lags, conditioned on the values in between. PACF plots are useful when determining the order of AR(p) models. \n\n\n### How do I build a time series model?\n\nARMA or ARIMA are standard statistical models for time series forecast and analysis. Along with its development, the authors Box and Jenkins also suggested a process for identifying, estimating, and checking models. This process is now referred to as the **Box-Jenkins (BJ) Method**. It's an iterative approach that consists of the following three steps: \n\n + **Identification**: Involves determining the order (p, d, q) of the model in order to capture the salient dynamic features of the data. 
This mainly leads to use graphical procedures such as time series plot, ACF, PACF, etc.\n + **Estimation**: The estimation procedure involves using the model with p, d and q orders to fit the actual time series and minimize the loss or error term.\n + **Diagnostic checking**: Evaluate the fitted model in the context of the available data and check for areas where the model may be improved.\n\n### How do we handle random variations in data?\n\nWhenever we collect data over some period of time there's some form of random variations. Smoothing is the technique to reduce the effect of such variations and thereby bring out trends and cyclic components. There are two distinct groups of smoothing methods:\n\n**Averaging Methods** \n\n + Moving Average: we forecast the next value by averaging 'p' previous values.\n + Weighted Average: we assign weights to each of the previous observations and then take the average. The sum of all the weights should be equal to 1.**Exponential Smoothing Methods**: It assigns exponentially decreasing weights as the observation get older. In other words, recent observations are given relatively more weight in forecasting than the older observations. There are several varieties of this method: \n\n + Simple exponential smoothing for series with no trend and seasonality: the basic formula for simple exponential smoothing is \\(S\\_{t+1} = \\alpha y\\_t + (1-\\alpha)S\\_t, 0 < \\alpha <=1, t > 0\\)\n + Double exponential smoothing for series with a trend and no seasonality.\n + Triple exponential smoothing for series with both trend and seasonality.\n\n\n## Milestones\n\n1662\n\nJohn Graunt publishes a book titled *Natural and Political Observations … Made upon the Bills of Mortality*. The book contains the number of births and deaths recorded weekly for many years starting from early 17th century. It also includes the probability that a person dies by a certain age. Such tables of life expectancy later become known as **actuarial tables**. This is one of the earliest examples of time series style of thinking applied to medicine. \n\n1861\n\nRobert FitzRoy coins the term \"weather forecast\". Such forecasts start appearing in *The Times* from August 1861. Atmospheric data collected from many parts of England are relayed by telegraph to London, where FitzRoy analyzes the data (along with past data) to make forecasts. His forecasts forewarn sailors of impending storms and directly contribute to reducing shipwrecks. \n\n1887\n\nAugustus D. Waller, a doctor by profession, records what is possibly the first electrocardiogram (ECG). As practical ECG machines arrive in the early 20th century, TSA is applied to estimate the risk of cardiac arrests. In the 1920s, electroencephalogram (EEG) is introduced to measure brain activity. This gives doctors more opportunities to apply TSA. \n\n1927\n\nYule applies **harmonic analysis** and **regression** to determine the periodicity of sunspots. He separates periodicity from superposed fluctuations and disturbances. Yule's work starts the use of statistics in TSA. In general, application of autoregressive models is due to Yule and Walker in the 1920s and 1930s. \n\n1960\n\nMuth establishes a statistical foundation for **Simple Exponential Smoothing (SES)** by showing that it's optimal for a random walk plus noise. Further advances to exponential smoothing happen in 1985: Gardner gives a comprehensive review of the topic; Snyder links SES to innovation state space model, where *innovation* refers to the forecast error. 
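Both first differencing and simple exponential smoothing, as defined in the sections above (d[t] = x[t] - x[t-1] and S[t+1] = alpha*y[t] + (1-alpha)*S[t]), are simple enough to compute directly. The short Go sketch below is purely illustrative: the function names and the choice to seed the smoother with the first observation are ours, not part of any standard library.

```go
package main

import "fmt"

// firstDifference implements degree-1 differencing:
// d[t] = x[t] - x[t-1], which removes a roughly constant trend.
func firstDifference(x []float64) []float64 {
	d := make([]float64, 0, len(x)-1)
	for t := 1; t < len(x); t++ {
		d = append(d, x[t]-x[t-1])
	}
	return d
}

// simpleExponentialSmoothing applies S[t+1] = alpha*y[t] + (1-alpha)*S[t],
// giving exponentially decreasing weights to older observations.
func simpleExponentialSmoothing(y []float64, alpha float64) []float64 {
	s := make([]float64, len(y))
	s[0] = y[0] // one common initialisation: seed with the first observation
	for t := 0; t < len(y)-1; t++ {
		s[t+1] = alpha*y[t] + (1-alpha)*s[t]
	}
	return s
}

func main() {
	series := []float64{112, 118, 132, 129, 121, 135, 148, 148, 136, 119}
	fmt.Println("first difference:", firstDifference(series))
	fmt.Println("smoothed:", simpleExponentialSmoothing(series, 0.3))
}
```

In practice, statistical packages in R or Python provide these transformations together with model fitting and diagnostics.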
\n\n1969\n\nBates and Granger show that by **combining forecasts** from two independent models, we can achieve a lower mean squared error. They also propose how to derive the weights with which the two original forecasts are to be combined. The same year, David Reid publishes his PhD thesis that's probably the first non-trivial study of time series forecast accuracy. \n\n1970\n\nBox and Jenkins publish a book titled *Time Series Analysis: Forecasting and Control*. This work popularizes the ARIMA model with an iterative modelling procedure. Once a suitable model is built, forecasts are conditional expectations of the model using the mean squared error (MSE) criterion. In time, this model is called the **Box-Jenkins Model**. \n\n1978\n\nThrough the 1970s, many statisticians continue to believe that there's a single model waiting to be discovered that can best fit any given time series data. However, empirical evidence shows that an ensemble of models gives better results. These debates cause George Box to famously remark,\n\n> All models are wrong but some are useful\n\n1979\n\nMakridakis and Hibon use 111 time series and compare the performance of many forecasting methods. Their results claim that a combination of simpler methods can outperform a sophisticated method. This causes a stir within the research community. To prove the point, Makridakis and Hibon organize a competition, called the **M-Competition**, starting from 1982: 1001 series (1982), 29 series (1993), 3003 series (2000), 100,000 series (2018), and 42,840 series (2020). \n\n1980\n\nAlthough Kalman filtering was invented in 1960, it's only in the 1980s that statisticians use **state-space parameterization** and **Kalman filtering** for TSA. The recursive form of the filter enables efficient forecasting. An ARIMA model can be put into a state-space model. Similarly, a state-space model suggests an ARIMA model. \n\n1982\n\nRobert Engle develops the **Autoregressive Conditional Heteroskedasticity (ARCH)** model to account for time-varying volatility observed in economic time series data. In 1986, his student Tim Bollerslev develops the Generalized ARCH (GARCH) model. In general, the variance of the error term depends on past error terms and their variances. ARCH and GARCH are non-linear generalizations of the Box-Jenkins model. \n\n1987\n\nEngle and Granger propose **cointegration** as a technique for multivariate TSA. Cointegration is a linear combination of marginally unit-root nonstationary series to yield a stationary series. This becomes a popular method in econometrics due to the long-term relationships between variables that it can capture. An earlier method for multivariate TSA is the **Vector Autoregressive (VAR)** model. \n\n1998\n\nZhang et al. publish a survey of **neural networks** applied to forecasting. They note an early work by Lapedes and Farber (1987) who proposed multi-layer feedforward networks. However, the use of ANNs for forecasting happens mostly in the 1990s. In general, feedforward or recurrent networks are preferred. At most two hidden layers are used. The number of input nodes corresponds to the number of lagged observations needed to discover patterns in data. The number of output nodes corresponds to the forecasting horizon. \n\n2019\n\nSánchez-Sánchez et al. highlight many issues in using neural networks for TSA. There's no clarity on how to select the number of input or hidden neurons. There's no guidance on how best to partition the data into training and validation sets.
It's not clear if data needs to be preprocessed or if seasonal/trend components have to be removed before data goes into the model. In 2018, Hyndman commented that neural networks perform poorly due to insufficient data. This is likely to change as data becomes more easily available.","meta":{"title":"Time Series Analysis","href":"time-series-analysis"}} {"text":"# Types of Regression\n\n## Summary\n\n\nRegression is widely used for prediction or forecasting where given one or more independent variables we try to predict another variable. For example, given advertising expense, we can predict sales. Given a mother's smoking status and the gestation period, we can predict the baby's birth weight. \n\nThere are many types of regression models, one source mentioning as many as 35 different models. An analyst or statistician must select a model that makes sense to the problem. Models differ based on the number of independent variables, type of the dependent variable and how these two are related to each other.\n\nRegression comes from statistics. It's one of many techniques used in machine learning.\n\n## Discussion\n\n### Could you introduce regression?\n\nSuppose there's a dependent or response variable \\(Y\\_i\\) and independent variables or predictors \\(X\\_i\\). The essence of regression is to estimate the function \\(f(X\\_i,\\beta)\\) that's a model of how the dependent variable is related to the predictors. Adding an error term or residual \\(\\epsilon\\_i\\), we get \\(Y\\_i = f(X\\_i,\\beta) + \\epsilon\\_i\\), for scalar \\(Y\\_i\\) and vector \\(X\\_i\\). \n\nThe residual is not seen in data. It's the difference between the observed value \\(Y\\_i\\) and what the model predicts. With the goal of minimizing the residuals, regression estimates model parameters or coefficients \\(\\beta\\) from data. There are many ways to do this and the term **estimation** is used for this process. \n\nRegression modelling also makes important assumptions. The sampled data should represent the population. There are no measurement errors in the predictor values. Residuals have zero mean (when conditioned on \\(X\\_i\\)) and constant variance. Residuals are also uncorrelated with one another. More assumptions are used depending on the model type and estimation technique. \n\nRegression uncovers useful relationships, that is, how predictors are correlated to the response variable. Regression makes no claim that predictors influence or cause the outcome. Correlation should not be confused for causality. \n\n\n### How do you classify the different types of regression?\n\nRegression techniques can be classified in many ways:\n\n + **Number of Predictors**: We can distinguish between *Univariate Regression* and *Multivariate Regression*.\n + **Outcome-Predictors Relationship**: When this is linear, we can apply *Linear Regression* or its many variants. If the relationship is non-linear, we can apply *Polynomial Regression* or *Spline Regression*. More generally, when the relationship is known it's *Parametric Regression*, otherwise it's *Non-parametric Regression*.\n + **Predictor Selection**: With multiple predictors, sometimes not all of them are important. *Best Subsets Regression* or *Stepwise Regression* can find the right subset of predictors. 
We could penalize too many predictors in the model using *Ridge Regression*, *Lasso Regression* or *Elastic Net Regression*.\n + **Correlated Predictors**: If predictors are correlated, one approach is to transform them into fewer predictors by a linear combination of the original predictors. *Principal Component Regression (PCR)* and *Partial Least Squares (PLS) Regression* are two approaches to do this.\n + **Outcome Type**: When predicting categorical data, we can apply *Logistic Regression*. When outcome is a count variable, we can apply *Poisson Regression* or *Negative Binomial Regression*. In fact, a suitable method of regression can be inferred from the distribution of the dependent variable.\n\n### What are the types of linear regression models?\n\n**Simple Regression** involves only one predictor. For example, \\(Y\\_i = \\beta\\_0 + \\beta\\_{1}X\\_{1i} + \\epsilon\\_i\\). \n\nIf we generalize to many predictors, the term **Multiple Linear Regression** is used. Consider a bivariate linear model \\(Y\\_i = \\beta\\_0 + \\beta\\_{1}X\\_{1i} + \\beta\\_{2}X^2\\_{2i} + \\epsilon\\_i\\). Although there's a square term, the model is still linear in terms of the parameters. \n\nTo represent many Multiple Linear Regression models in a compact form we can use the **General Linear Model**. This generalization allows us to work with many dependent variables dependent on the same independent variables. This also incorporates different statistical models including ANOVA, ANCOVA, OLS, t-test and F-test. \n\nThe General Linear Model makes the assumption that \\(Y\\_i ∼ N(X^T\\_i\\beta,\\sigma^2)\\), that is, response variable is normally distributed with a mean that's a linear combination of predictors. A larger class of models is called **Generalized Linear Model (GLM)** that allows \\(Y\\_i\\) to be any distribution of the exponential family of distributions. The General Linear Model is a specialization of the GLM. \n\nIf response is affected by randomness, the **Generalized Linear Mixed Model (GLMM)** can be used. \n\n\n### Could you compare linear and logistic regression?\n\nSince logistic regression deals with categorical outcomes, it predicts the probability of an outcome rather than a continuous value. Predictions should therefore be restricted to the range 0-1. This is done by transforming the linear regression equation to the **logit scale**. This is the natural log of the odds of being in one category versus the other categories. \n\nFor this reason, logistic regression may be seen as a particular case of GLM. Logit is used as the *link function* that relates predictors to the outcome. \n\nLogistic regression shares with linear regression many of the assumptions: independence of errors, linearity (but in the logit scale), absence of multicollinearity among predictors, and lack of influential outliers. \n\nThere are three types of logistic regressions: \n\n + **Binary**: Only two outcomes. Example: predict that a student passes a test. When all predictors are categorial, we call them *logit models* .\n + **Nominal**: More than two outcomes. Also called *Multinominal Logistic Regression*. Example: predict the colour of an iPhone model a customer is likely to buy.\n + **Ordinal**: More than two ordered outcomes. 
Example: predicting a medical condition (good, stable, serious, critical).\n\n### Could you explain parametric versus non-parametric regression?\n\nLinear models and even non-linear models are parametric models since we know (or make an educated guess) about how the outcome relates to predictors. Once the model is fixed, the task is to estimate the parameters \\(\\beta\\) of the model. If we have problems in this estimation, we can revise the model and try again. \n\nNon-parametric regression is more suitable when we have no idea how the outcome relates to the predictors. Usually when the relationship is non-linear, we can adopt non-parametric regression. For example, one study attempting to predict the logarithm of wage from age found that non-parametric regression approaches outperformed simple linear and polynomial regression methods. \n\nParametric models have a finite set of parameters that try to capture everything about observed data. Model complexity is bounded even with unbounded data. Non-parametric models are more flexible because the model gets better as more data is observed. We can view them as having infinite parameters or functions that we attempt to estimate. Artificial neural networks with infinitely many hidden units is equivalent to non-parametric regression. \n\n\n### What are some specialized regression models?\n\nWe note a few of these with brief descriptions:\n\n + **Robust Regression**: This is better suited than linear regression in handling outliers or influential observations. Observations are weighted.\n + **Huber Regression**: To handle outliers better, this optimizes a combination of squared error and absolute error.\n + **Quantile Regression**: Linear regression predicts the mean of the dependent variable. Quantile regression predicts the median. More generally, it predicts the nth quantile. For example, predicting the 25th quantile of a house price means that there's 25% chance that the actual price is below the predicted value.\n + **Functional Sequence Regression**: Sometimes predictors affect the outcome in a time-dependent manner. This model includes the time component. For example, onion weight depends on environmental factors at various stages of the onion's growth.\n + **Regression Tree**: Use a decision tree to split the predictor space at internal nodes. Terminal nodes or leaves represent predictions, which are the mean of data points in each partitioned region.\n\n### Could you share examples to illustrate a few regression methods?\n\nIn a production plant, there's a linear correlation between water consumption and amount of production. Simple regression suffices in this case, giving the fit as `Water = 2273 + 0.0799 Production`. Thus, even without any production, 2273 units of water are consumed. Every unit of production increases water consumption by 0.0799 units. Both predictor and outcome are continuous variables. \n\nAs an example of multiple linear regression, let's predict the birth weight of a baby (continuous variable) based on two predictors: mother is a smoker or non-smoker (categorial variable) and gestation period (continuous variable). We represent non-smokers as 0 and smokers as 1. The regression equation is `Wgt = - 2390 + 143.10 Gest - 244.5 Smoke`. If we plot this, we'll actually see two parallel lines, one for smokers and one for non-smokers. \n\nOne study looked at the number of cigarettes college students smoked per day. They predicted this count from gender, birth order, education level, social/psychological factors, etc. 
The study used poisson regression, negative binomial regression, and many others. \n\n\n### With so many types of regression models, how do I select a suitable one?\n\nTo apply linear regression, the main assumptions must be met: linearity, independence, constant variance and normality. Linearity can be checked via graphical analysis. A plot of residuals versus predicted values can show non-linearity, or use goodness of fit test. Non-linear relations can be made linear using transformations of predictors and/or the outcome. These could be log, square root or power transformations. Try adding transformations of current predictors. Try semi or non-parametric models. \n\nIn practice, linear regression is sensitive to outliers and cross-correlations. Piecewise linear regression, particularly for time series data, is a better approach. Non-parametric regression can be used when there's an unknown non-linear relationship. SVR is an example of non-parametric regression. \n\nWhen overfitting is a problem, use cross validation to evaluate models. Ridge, lasso and elastic net models can help tackle overfitting. They can also handle multicollinearity. Quantile regression is suited to handle outliers. \n\nFor predicting counts, use negative binomial regression if variance is larger than the mean. Poisson regression can be used only if variance equals the mean. \n\n\n### What are some tips to analyze model statistics?\n\nWell-known model performance metrics include R-squared (R2), Root Mean Squared Error (RMSE), Residual Standard Error (RSE) and Mean Absolute Error (MAE). We also have metrics that penalize additional predictors: Adjusted R2, Akaike's Information Criteria (AIC) and Bayesian Information Criteria (BIC) and Mallows Cp. Higher the R2 or Adjusted R2, better the model. For all other metrics, lower value implies a better model. \n\nA high t-statistic implies coefficient is probably non-zero. A low p-value on the t-statistic gives confidence on the estimate. Low coefficients and low p-value for the model as a whole can imply multicollinearity. While t-test is applied to individual coefficients, F-test is applied to the overall model. \n\nTwo models can be compared graphically. For example, the coefficients and their confidence intervals can be plotted and compared visually. \n\n\n### What software packages support regression?\n\nIn R, functions `lm()`, `summary()`, `residuals()` and `predict()` in the `base` package enable linear regression. For GLM, we can use `glm()` function. Use `quantreg` package for quantile regression; `glmnet` for ridge, lasso and elastic net regression; `pls` for principal component regression; `plsdepot` for PLS regression; `e1071` for Support Vector Regression (SVR); `ordinal` for ordinal regression; `MASS` for negative binomial regression; `survival` for cox regression. Other useful packages are `stats`, `car`, `caret`, `sgd`, `BLR`, `Lars`, and `nlme`. \n\nIn Python, `scikit-learn` provides a number of modules and functions for regression. Use module `sklearn.linear_model` for linear regression including logistic, poisson, gamma, huber, ridge, lasso, and elastic net; `sklearn.svm` for SVR; `sklearn.neighbors` for k-nearest neighbours regression; `sklearn.isotonic` for isotonic regression; `metrics` for regression metrics; `sklearn.ensemble` for ensemble methods for regression. \n\n## Milestones\n\n1795\n\nRegression starts with Carl Friedrich Gauss with the **method of least squares**. He doesn't publish the method until much later in 1809. 
In 1805, Adrien-Marie Legendre invents the same approach independently. Legendre uses it to predict the orbits of comets. \n\n1877\n\nFrancis Galton plots in 1877 what may be called the first **regression line**. It concerns the size of sweet-pea seeds. It correlated the size of daughter seeds against that of mother seeds. Such an analysis came about in the course of investigating Darwin's mechanism for heredity. By these experiments, Galton also introduces the concept of \"reversion to the mean\", later called **regression to the mean**. \n\n1915\n\nR.A. Fisher gives the exact sampling distribution of the coefficient of correlation, thus marking the beginning of **multivariate analysis**. Fisher then simplifies it to a form via **z-transformation**. In the early 1920s, he introduces the **F distribution** and **maximum likelihood** method of estimation. \n\n1957\n\nHotelling proposes **Principal Component Regression (PCR)** in an attempt to reduce the number of explanatory variables (predictors) in the regression model. PCR itself is based on Principal Component Analysis (PCA) that was invented independently by Pearson (1901) and Hotelling (1930s). \n\n1960\n\nAlthough the logistic function was invented by Verhulst in the 1830s, it's only in the 1960s that it's applied to regression analysis. D.R. Cox is among the early researchers to do this. Many researchers, including Cox, independently develop **Multinomial Logistic Regression** through the 1960s. \n\n1970\n\nHoerl and Kennard note that least squares estimation is unbiased and this can give poor results if there's multicollinearity among the predictors. To improve the estimation they propose a biased estimation approach that they call **Ridge Regression**. Ridge regression uses standardized variables, that is, outcome and predictors are subtracted by mean and divided by standard deviation. By introducing some bias, variance of the least squares estimator is controlled. \n\n1972\n\nD.R. Cox applies regression to life-table analysis. Among the sampled individuals, he observes either the time to \"failure\" or that the individual is removed from the study (called censoring). Moreover, the distribution of survival times is often skewed. For these reasons, linear regression is not suitable. Cox instead uses a hazard function that incorporates age-specific failure rate. In later years, this approach is simply called **Cox Regression**. \n\n1972\n\nNelder and Wedderburn introduce the **Generalized Linear Model (GLM)**. As examples, they relate GLM to normal, binomial (probit analysis), poisson (contingency tables), and gamma (variance components) distributions. However, it's only in the 1980s that GLM becomes popular due to the work of McCullagh and Nelder. \n\n1978\n\nKoenker and Bassett introduce **Quantile Regression**. This uses weighted least absolute error rather than least squares error common in linear regression. \n\n1981\n\nHuber proposes an estimator that's quadratic in small values and grows linearly for large values. It's later named **Huber Regression**. \n\n1996\n\nTibshirani proposes **Lasso Regression** that uses the least squares estimator but constrains the sum of absolute value of coefficients to a maximum. This forces some coefficients to zero or low values, leading to more interpretable models. This is useful when we start with too many predictors. \n\n2002\n\nDe'ath proposes the **Multivariate Regression Tree (MRT)**. The history of regression trees goes back to the 1960s. 
With the release of CART (Classification and Regression Tree) software in 1984, they became more well known. However, CART is limited to a single response variable. MRT extends CART to multivariate response data. \n\n2005\n\nZou and Hastie propose **Elastic Net Regression**. This combines elements of both ridge regression and lasso regression.","meta":{"title":"Types of Regression","href":"types-of-regression"}} {"text":"# Full Stack Developer\n\n## Summary\n\n\nAny technological solution to a real-world problem consists of several IT components interacting with each other. The entire basket of software platforms, tools, services, and even hardware or networking devices employed in the development of an IT application is called *Technology Stack*. A developer whose skills cover the entire range of the technology stack, both at client and system end is a Full Stack Developer. It's more of a coinage indicating a programmer who is jack of all arts and master of one or two. \n\nThese professionals can easily understand most programming languages and can help to bring the company's minimum viable product into the market quickly. This is especially important for web or mobile app start-ups. Full stack developers, due to their wider system understanding, are able to contribute better to system design in addition to development.\n\n## Discussion\n\n### How did the sudden demand for full stack developers occur?\n\nThe whole narrative of full stack developer emerged from the IT start-up boom. Earlier, large IT service companies or product MNCs were keen on specialists who knew one thing well. These traditional roles were GUI developer, C/Java programmer, database specialist/admin, network engineer, test or automation engineer, and so on. A typical IT application would be an integrated hierarchical solution requiring most of these skill-sets. \n\nBut in start-ups, the team is looking to build a minimum viable product that can showcase their basic idea. This helps them seek funding and then make expansion plans. This has to be achieved with minimum number of developers and limited investment. Time to market is also very short. Hence, the need arose to recruit developers capable of programming in the entire spectrum of technologies.\n\nA 2018 LinkedIn survey listed \"Full Stack Developers\" among the top 10 hard skills that developers need to possess in the IT industry. \n\nWhile full stack developers are presently the trend, it doesn't imply that experts are no longer vital. As the product grows and scales, specialists are required.\n\n\n### What are the technologies that a full stack developer is expected to know?\n\nSkillset of full-stack developers resembles the T-model. They have knowledge across wide-breadth of technologies but in-depth knowledge of a couple of those.\n\nKnowledge of at least one language/platform in each technology layer is a must. 
In the web development context, popular full stack combinations include: \n\n + **LAMP stack**: JavaScript - Linux - Apache - MySQL - PHP\n + **LEMP stack**: JavaScript - Linux - Nginx - MySQL - PHP\n + **MEAN stack**: JavaScript - MongoDB - Express - AngularJS - Node.js\n + **Django stack**: JavaScript - Python - Django - MySQL\n + **Ruby on Rails**: JavaScript - Ruby - SQLite - RailsVendor-specific full stack expertise is also quite prevalent:\n\n + **Microsoft**: .NET (WPF, WCF, WF), Visual Studio, C#, ASP.NET/ MVC/ MVVM, REST/ Web API, Azure, GIT/ SVN, Web (HTML5/CSS3/JS), jQuery / Typescript, Bootstrap / Angular, MS SQL Server\n + **Amazon**: AWS Amplify (Web UI), Amazon Cognito (web browser), Amazon API gateway, AWS Lambda, Amazon DynamoDBHowever, developers must always remember that the technology choice is dependent on what works best for the product under design, not the other way round.\n\n\n### What is a company really asking for while recruiting a full stack developer?\n\nTechnology used in products are never static. Companies may decide to migrate to a new version, platform or vendor that enters the market. Therefore, there is no point in recruiting a full stack developer with rigid set of skills. By asking for full stack developers, companies are actually looking for the following skills and attitude in a candidate:\n\n + At least know one language or platform in each technology function - user Interface, backend processing, middleware, business logic, DB/storage, networking/communication, testing.\n + Self-learning ability to master a new technology quickly.\n + Ability to make a demonstrable product or prototype.\n + Can independently perform debugging and customer support for applications, with quick turnaround time. The bug could be anywhere in the system.\n + Work in teams and relate to the problems faced by developers working on other modules.\n + Understand the big picture, translate the customer requirement into a system design, which would encompass several technologies across layers. A full stack developer will make a good tech lead or system architect.\n + Good at integration testing and system testing from customer perspective.\n\n### Are there multiple types of technology stacks?\n\nThere is no text book definition for what constitutes a technology stack. Any hierarchy of interdependent modules built using different technologies, frameworks and languages is a technology stack.\n\nFor instance, the concept of a protocol stack has existed for decades now - OSI Layers, TCP/IP stack and other communication/control protocols.\n\nA mobile phone device can be an example of a hardware technology stack – body, network processor chip, peripherals, memory, battery, LCD screen all stacked one above the other.\n\nA MEAN stack refers to a stack for building web apps and websites. It consists of MongoDB for database storage, Express.js as the application framework, AngularJS MVC framework, and Node.js as the server-side framework. \n\nThe system design document prepared by a product development team would document the customised technology stack to be used for its own products.\n\nMany large IT companies openly declare what technology stack is used in their development. \n\n\n### How to choose a technology stack for a product/application?\n\nTechnology choices are made only in the product design phase, after the product requirements are finalised. This involves evaluating various technology alternatives to make up the stack. 
The factors considered are:\n\n + **Meets the requirements entirely**: For example, if the requirement is to build a military application, then platforms with highest data security and reliability are chosen. Limited network connectivity, multi-language support, accessibility for the disabled are examples of specialised requirements that influence the choice of stack.\n + **Scalable to support future requirement additions**: Product requirements are always changing based on customer feedback. So the technology choice must support product growth for at least 3-5 years.\n + **Cost considerations**: When budgets are limited, companies tend to prefer open source options. Or if an older project has a pre-purchased software license, the same may continue into the new one. This is not optimal, but happens a lot in the industry.\n + **Skillset of existing workforce**: This goes against the idea of designing for requirements. But very often, due to HR constraints and inability to reskill, companies decide to stick to a particular technology stack.\n\n### Can you list the technology stacks used in popular IT applications?\n\nSome product MNCs and start-ups swear by the efficacy of hiring full stack developers. They openly publicise the technology stack used in their solutions, such as: \n\n + **Facebook**: PHP, React, GraphQL, Memcached, Cassandra, Hadoop, MySQL, Swift, C++, PHP, JavaScript, JSON, HTML, CSS.\n + **Amazon**: Java, Perl, Angular JS, MySQL, Amazon EC2 container service, DynamoDB and a host of other Amazon frameworks.\n + **Google**: Python, Java, Android SDK, Go, C++, Preact, Angular JS, Kubernetes, TensorFlow and a host of other Google frameworks.\n + **Dropbox**: REDIS, C#, .NET, MS SQL Server\n + **StackOverflow**: NGINX, Amazon, MySQL, Python\n + **Airbnb**: Javascript, MySQL, Java, Ruby on Rails\n + **Fitbit**: Node.JS, Javascript, Objective C, JavaHowever many technology experts and recruiters call the idea as a passing fad which breeds superficial programmers, who lack the ability to build deep expertise in anything. Without knowing the nuances of a language, the best implementations are not possible, they claim. \n\n## Milestones\n\n2008\n\nOne of the earliest mentions of the term \"Full Stack Developer\" is in a blog written by Randy Schmidt for Forge38 magazine. It's clearly used in the context of web development. \n\n2010\n\nThe first Google search for the term \"Full Stack Developer\" happens. So it's a fairly recent phenomenon. \n\n2012\n\nFront-end development takes a rapid leap. Full stack developers for web development become a popular choice. Earlier, web browsers were poor at interpreting a lot of JavaScript. Adding complex functionality with JS wasn't always a good idea. As browsers became more powerful, JavaScript became versatile with extensions such as AngularJS and jQuery. \n\n2014\n\nThe number of layers in the stack is steadily increasing. In 2010, a full-stack developer perhaps needed to know PHP, jQuery, HTML, CSS, and FTP to transfer files to the hosting server. Today, a full-stack developer needs a wider spectrum of skills from modular frameworks to CSS pre-processors, from responsive UI design to cloud cluster management. \n\n2020\n\nFacebook announces a new update to their technology stack. They are rebuilding their tech stack for Facebook.com, moving beyond a simple PHP website. Their stack includes React (a declarative JavaScript library for building user interfaces) and Relay (a GraphQL client for React). 
\n\nOct \n2020\n\nThe position \"Full stack developer\" has over 14,000 listings in the US and 6,000 India on the Indeed Job portal.","meta":{"title":"Full Stack Developer","href":"full-stack-developer"}} {"text":"# IEEE 802.11ac\n\n## Summary\n\n\nIEEE 802.11ac is a Wi-Fi standard for Very High Throughput (VHT) applications. It's a significant improvement over the earlier IEEE 802.11n while remaining backward compatible. It's designed only for the 5 GHz band. \n\nIt provides higher data rates, reaching a maximum of 6.9 Gbps. Explicit beamforming and MU-MIMO are two important features of 802.11ac that improves network capacity and efficiency. \n\nIt has a migration path towards IEEE 802.11ax.\n\n## Discussion\n\n### What are the typical use cases for 802.11ac?\n\nVideo consumption is on the rise. A 720p uncompressed video at 60 frames/sec needs 1.3 Gbps. A H.264 lightly compressed video needs 70-200 Mbps. IEEE 802.11n offer a theoretical 600 Mbps but practical rates available for application are lot less.\n\nThe number of devices in the home or office is also increasing. There's a need for multiple devices to connect to the same access point and utilize the channels more efficiently. This is particularly true for bring-your-own-device (BYOD) scenarios where each employee may bring multiple Wi-Fi devices to the office. \n\nIn general, 802.11ac aims to provide high data rates for video streaming, low latency experience and more efficient multiplexing of multiple clients. \n\n\n### What are main features of 802.11ac contributing to higher data rate?\n\nWhile 802.11n offers a maximum data rate of 600 Mbps, 802.11ac can offer 10x data rate due many improvements: \n\n + **Channel Bonding**: Each channel is 20 MHz wide but with 802.11ac we can combine 8 of these to obtain 160 MHz channel for a single client. If available, contiguous channels are easily combined although standard defines 80+80 MHz mode to combine non-contiguous channels.\n + **Higher Modulation**: The use of 256QAM is possible, 4x denser than 64QAM of 802.11n. This means that 4x more bits can be carried per symbol.\n + **Spatial Streams**: Up to 8 spatial streams are possible, although Wave 2 certification covers only 4 spatial streams.The maximum data rate of 6.9 Gbps is obtained when using 160 MHz channels, 256 QAM, eight spatial streams and 400 ns guard interval. A handy reference comparing 802.11n and 802.11ac rates relative to SNR and RSSI is available online.\n\n\n### How is 802.11ac able to achieve better efficiency?\n\nFor better channel utilization and network efficiency, the following are useful:\n\n + **5 GHz Band**: The use of 2.4 GHz band is avoided where interference is higher due to cordless phones, microwaves and other devices. The 5 GHz band is cleaner.\n + **Enhanced RTS/CTS**: To avoid collisions due to the use of wider channels, RTS/CTS mechanism is extended.\n + **A-MPDU**: For higher MAC layer throughput, all MAC frames are sent as Aggregate MAC Protocol Data Unit (A-MPDU), which was introduced in 802.11n for selective use.\n + **MU-MIMO**: With Single-User Multiple-Input and Multiple-Output (SU-MIMO), only one client could send/receive at a particular time. Multi-User Multiple-Input and Multiple-Output (MU-MIMO) is able to multiplex multiple clients at the same time, thus reducing latency and improving overall network efficiency.\n + **Beamforming**: Due to the use of MIMO and multiple antennas, beamforming is possible. Transmission is steered towards each client. 
This is made more efficient via explicit feedback from clients–using Null Data Packet (NDP)–for better channel estimation.\n\n### What are some trade-offs involved in using 802.11ac?\n\nDecision to trade-off one metric with another will depend on real-time network conditions. Moving from 40 MHz to 80 MHz aggregate bandwidth will increase data rate, but since the same power is spread across many more subcarriers, range will reduce. Obtaining a free channel 160 MHz wide is also difficult, especially in enterprise use cases. It's easier with 80+80 MHz mode but this requires twice as many RF chains. \n\nMoving from 64QAM to 256QAM is possible only over short distances (good signal-to-noise ratio) since the constellation is tighter and more sensitive to errors. \n\nBeamforming using Explicit Compressed Feedback (ECFB) gives a precise channel estimate but this feedback comes with a lot of overhead. Beamforming and MU-MIMO become less effective when clients are moving. \n\nIn terms of spatial streams, each stream requires its own antenna and RF chain. Although 8 streams are defined in the standard, often this is impractical in mobile devices since each antenna must be sufficiently spaced. \n\n\n### Could you explain 802.11ac MU-MIMO?\n\nSince 802.11n, SU-MIMO allows routers to send/receive multiple streams of data to/from clients. MU-MIMO is introduced in 802.11ac in the downlink. An access point can send data to multiple clients at the same time. For example, with three clients A, B and C, two streams may be sent to A, one stream to B and one stream to C simultaneously. Clients receiving single streams need not have multiple antennas or RF chains, which is often the case with small devices and smartphones. \n\nWith MU-MIMO, we get better network capacity utilization because a single client with not much to send can be multiplexed with other clients. Because multiple clients can be allowed to receive at the same time, latency also drops. Even non-MIMO clients will benefit since they can access the channel more easily. More device clients can be supported on the Wi-Fi network. \n\nIn 802.11ac, MU-MIMO is available only in the downlink. Another limitation is that access points might fallback to SU-MIMO if they detect that clients are moving, since MU-MIMO may not work well. \n\n\n### What are 802.11ac Wave 1 and Wave 2?\n\nThe idea of identifying two \"waves\" of 802.11ac products was to allow vendors to release their first 802.11ac products into the market quickly. Wave 2 is complex due to MU-MIMO, 160 MHz bandwidth support and four spatial streams. By defining Wave 1 without these features, we can still benefit from having 256QAM and 80 MHz bandwidth support that 802.11n lacked.\n\nFirst Wave 1 products started arriving in 2013. Certification for these also started in mid-2013. From 2014, all new products started supporting Wave 1. Early Wave 2 products arrived in 2015. Certification for Wave 2 started in mid-2016. \n\n## Milestones\n\nSep \n2008\n\nWork on 802.11ac standardization formally commences with the approval of Project Allocation Request. \n\n2012\n\nDraft 2.0 of 802.11ac is released in January. A refined draft 3.0 is released in May. \n\nJun \n2013\n\nWi-Fi Alliance announces certification process for 802.11ac Wave 1 products. \n\nDec \n2013\n\n**IEEE 802.11ac-2013** standard is published. Along with IEEE 802.11ad-2012 that was standardized a year earlier, both are the result of the **Very High Throughput (VHT)** study group. 
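As a back-of-the-envelope check on the 6.9 Gbps peak rate quoted in the discussion above, the figure can be recomputed from the PHY parameters of the published standard. The numbers below are the usual VHT values (468 data subcarriers in a 160 MHz channel, 8 coded bits per subcarrier for 256QAM, 5/6 coding rate, 3.2 µs symbol plus 400 ns short guard interval); treat this as an illustrative sketch rather than a substitute for the rate tables in the standard.

```python
# Rough recomputation of the 802.11ac peak PHY rate (VHT, MCS 9, 160 MHz, 8 streams).
data_subcarriers = 468              # data subcarriers in a 160 MHz VHT channel
bits_per_subcarrier = 8             # 256QAM carries 8 coded bits per subcarrier
coding_rate = 5 / 6                 # highest VHT coding rate
symbol_duration = 3.2e-6 + 0.4e-6   # OFDM symbol plus 400 ns short guard interval
spatial_streams = 8

rate_per_stream = data_subcarriers * bits_per_subcarrier * coding_rate / symbol_duration
total_rate = rate_per_stream * spatial_streams
print(f"{total_rate / 1e9:.2f} Gbps")   # prints about 6.93 Gbps
```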
\n\n2014\n\nQuantenna becomes the first to have a chipset that's capable of 4x4 MU-MIMO but without 160 MHz support. In April, Qualcomm Atheros also starts offering chipsets for MU-MIMO. In general, 802.11ac Wave 2 support arrives in 2014 whereas 2013 saw only 802.11ac Wave 1 support. \n\nJan \n2015\n\nAt CES 2015, more chipsets and products capable of 802.11ac Wave 2 are announced. \n\nMay \n2015\n\nWhat's probably the world's first 802.11ac router with MU-MIMO support, Linksys releases its EA8500. For best results, clients also need to support MU-MIMO. Some existing devices might support it with a firmware upgrade since their underlying chipsets have MU-MIMO support already. An example of this is ASUS RT-AC87U router released in August 2014. \n\nJun \n2016\n\nWi-Fi Alliance announces certification process for 802.11ac Wave 2 products. \n\n2018\n\nCNET identifies some of the best 802.11ac routers. This includes brands Asus, Synology, D-Link and Netgear. Asus has multiple routers highly recommended by CNET.","meta":{"title":"IEEE 802.11ac","href":"ieee-802-11ac"}} {"text":"# Text Clustering\n\n## Summary\n\n\nThe amount of text data being generated in the recent years has exploded exponentially. It's essential for organizations to have a structure in place to mine actionable insights from the text being generated. From social media analytics to risk management and cybercrime protection, dealing with textual data has never been more important.\n\nText clustering is the task of grouping a set of unlabelled texts in such a way that texts in the same cluster are more similar to each other than to those in other clusters. Text clustering algorithms process text and determine if natural clusters (groups) exist in the data.\n\n## Discussion\n\n### What's the principle behind text clustering?\n\nThe big idea is that documents can be represented numerically as vectors of features. The similarity in text can be compared by measuring the distance between these feature vectors. Objects that are near each other should belong to the same cluster. Objects that are far from each other should belong to different clusters. \n\nEssentially, text clustering involves three aspects: \n\n + Selecting a suitable distance measure to identify the proximity of two feature vectors.\n + A criterion function that tells us that we've got the best possible clusters and stop further processing.\n + An algorithm to optimize the criterion function. A greedy algorithm will start with some initial clustering and refine the clusters iteratively.\n\n### What are the use cases of text clustering?\n\nWe note a few use cases:\n\n + **Document Retrieval**: To improve recall, start by adding other documents from the same cluster.\n + **Taxonomy Generation**: Automatically generate hierarchical taxonomies for browsing content.\n + **Fake News Identification**: Detect if a news is genuine or fake.\n + **Language Translation**: Translation of a sentence from one language to another.\n + **Spam Mail Filtering**: Detect unsolicited and unwanted email/messages.\n + **Customer Support Issue Analysis**: Identify commonly reported support issues.\n\n### How is text clustering different from text classification?\n\nClassification is a supervised learning approach that maps an input to an output based on example input-output pairs. 
Clustering is an unsupervised learning approach.\n\n + **Classification**: If the predicted value is a category, such as yes/no or positive/negative, then it's a classification problem in machine learning. The different classes are known in advance. For example, given a sentence, predict whether it's a negative or positive review.\n + **Clustering**: Clustering is the task of partitioning the dataset into groups called clusters. The goal is to split up the data in such a way that points within a single cluster are very similar and points in different clusters are different. It determines grouping among unlabelled data.\n\n### What are the types of clustering?\n\nBroadly, clustering can be divided into two groups:\n\n + **Hard Clustering**: This groups items such that each item is assigned to only one cluster. For example, we want to know if a tweet is expressing a positive or negative sentiment. *k-means* is a hard clustering algorithm.\n + **Soft Clustering**: Sometimes we don't need a binary answer. Soft clustering is about grouping items such that an item can belong to multiple clusters. *Fuzzy C Means (FCM)* is a soft clustering algorithm.\n\n### What are the steps involved in text clustering?\n\nAny text clustering approach broadly involves the following steps:\n\n + **Text pre-processing**: Text can be noisy, hiding information between stop words, inflexions and sparse representations. Pre-processing makes the dataset easier to work with.\n + **Feature Extraction**: One of the commonly used techniques to extract features from textual data is calculating the frequency of words/tokens in the document/corpus.\n + **Clustering**: We can then cluster different text documents based on the features we have generated.\n\n### What are the steps involved in text pre-processing?\n\nBelow are the main components involved in pre-processing.\n\n + **Tokenization**: Tokenization is the process of parsing text data into smaller units (tokens) such as words and phrases.\n + **Transformation**: It converts the text to lowercase, removes all diacritics/accents in the text, and parses HTML tags.\n + **Normalization**: Text normalization is the process of transforming a text into a canonical (root) form. Stemming and lemmatization techniques are used for deriving the root word.\n + **Filtering**: Stop words are common words used in a language, such as 'the', 'a', 'on', 'is', or 'all'. These words do not carry important meaning for text clustering and are usually removed from texts.\n\n### What are the levels of text clustering?\n\nText clustering can be document level, sentence level or word level.\n\n + **Document level**: It serves to regroup documents about the same topic. Document clustering has applications in news articles, emails, search engines, etc.\n + **Sentence level**: It's used to cluster sentences derived from different documents. Tweet analysis is an example.\n + **Word level**: Word clusters are groups of words based on a common theme. The easiest way to build a cluster is by collecting synonyms for a particular word. For example, WordNet is a lexical database for the English language that groups English words into sets of synonyms called *synsets*.\n\n### How do I define or extract textual features for clustering?\n\nIn general, *words* can be used to represent a common class of features. *Word characteristics* are also features. For example, capitalization matters: US versus us, White House versus white house. 
*Part of speech* and *grammatical structure* also add to textual features. *Semantics* can be a textual feature: buy versus purchase. \n\nThe mapping from textual data to real-valued vectors is called feature extraction. One of the simplest techniques to numerically represent text is **Bag of Words (BOW)**. In BOW, we make a list of unique words in the text corpus called vocabulary. Then we can represent each sentence or document as a vector, with each word represented as *1 for presence* and *0 for absence*. \n\nAnother representation is to count the number of times each word appears in a document. The most popular approach is using the **Term Frequency-Inverse Document Frequency (TF-IDF)** technique. \n\nMore recently, word embeddings are being used to map words into feature vectors. A popular model for word embeddings is **word2vec**. \n\n\n### How can I measure similarity in text clustering?\n\nWords can be similar lexically or semantically: \n\n + **Lexical similarity**: Words are similar lexically if they have a similar character sequence. Lexical similarity can be measured using string-based algorithms that operate on string sequences and character composition.\n + **Semantic similarity**: Words are similar semantically if they have the same meaning, are opposite of each other, used in the same way, used in the same context or one is a type of another. Semantic similarity can be measured using corpus-based or knowledge-based algorithms.Some of the metrics for computing similarity between two pieces of text are *Jaccard coefficient*, *cosine similarity* and *Euclidean distance*.\n\n\n### Which are some common text clustering algorithms?\n\nIgnoring neural network models, we can identify different types: \n\n + **Hierarchical**: In the *divisive* approach, we start with one cluster and split that into sub-clusters. Example algorithms include DIANA and MONA. In the *agglomerative* approach, each document starts as its own cluster and then we merge similar ones into bigger clusters. Examples include BIRCH and CURE.\n + **Partitioning**: k-means is a popular algorithm but requires the right choice of *k*. Other examples are ISODATA and PAM.\n + **Density**: Instead of using a distance measure, we form clusters based on how many data points fall within a given radius. DBSCAN is the most well-known algorithm.\n + **Graph**: Some algorithms have made use of knowledge graphs to assess document similarity. This addresses the problem of polysemy (ambiguity) and synonymy (similar meaning).\n + **Probabilistic**: A cluster of words belong to a topic and the task is to identify these topics. Words also have probabilities that they belong to a topic. *Topic Modelling* is a separate NLP task but it's similar to soft clustering. pLSA and LDA are example topic models.\n\n### How can I evaluate the efficiency of a text clustering algorithm?\n\nMeasuring the quality of a clustering algorithm has shown to be as important as the algorithm itself. We can evaluate it in two ways:\n\n + **External quality measure**: External knowledge is required for measuring the external quality. For example, we can conduct surveys of users of the application that includes text clustering.\n + **Internal quality measure**: The evaluation of the clustering is compared only with the result itself, that is, the structure of found clusters and their relations to one another. Two main concepts are compactness and separation. *Compactness* measures how closely data points are grouped in a cluster. 
*Separation* measures how different the found clusters are from each other. More formally, compactness is intra-cluster variance whereas separation is inter-cluster distance.\n\n### What are the common challenges involved in text clustering?\n\nDocument clustering is being studied for many decades. It's far from trivial or a solved problem. The challenges include the following: \n\n + Selecting *appropriate features* of documents that should be used for clustering.\n + Selecting an *appropriate similarity measure* between documents.\n + Selecting an *appropriate clustering method* utilising the above similarity measure.\n + *Implementing the clustering algorithm* in an efficient way that makes it feasible in terms of memory and CPU resources.\n + Finding ways of *assessing the quality* of the performed clustering.\n\n\n## Milestones\n\n1971\n\nText mining research in general relies on a vector space model. Salton first proposes it to model text documents as vectors. Features are considered to be the words in the document collection and feature values come from different term weighting schemes, the most popular of which is the **Term Frequency-Inverse Document Frequency (TF-IDF)**. \n\n1983\n\nMassart et al. in the book *The Interpretation of Analytical Chemical Data by the Use of Cluster Analysis* introduces various clustering methods, including hierarchical and non-hierarchical methods. They show how clustering can be used to interpret large quantities of analytical data. They discuss how clustering is related to other pattern recognition techniques. \n\n1992\n\nCutting et al. adapt partition-based clustering algorithms to cluster documents. Two of the techniques are Buckshot and Fractionation. **Buckshot** selects a small sample of documents to pre-cluster them using a standard clustering algorithm and assigns the rest of the documents to the clusters formed. **Fractionation** finds k centres by initially breaking N documents into N/m buckets of a fixed size m > k. Each cluster is then treated as if it's an individual document and the whole process is repeated until there are only K clusters. \n\n1997\n\nHuang introduces **k-modes**, an extension to the well-known k-means algorithm for clustering numerical data. By defining the mode notion for categorical clusters and introducing an incremental update rule for cluster modes, the algorithm preserves the scaling properties of k-means. Naturally, it also inherits its disadvantages, such as dependence on the seed clusters and the inability to automatically detect the number of clusters. \n\n2008\n\nSun et al. develop a novel hierarchal algorithm for document clustering. They use cluster overlapping phenomenon to design cluster merging criteria. The system computes the overlap rate in order to improve time efficiency.","meta":{"title":"Text Clustering","href":"text-clustering"}} {"text":"# JSON Web Token\n\n## Summary\n\n\nJSON is a data format commonly used in web applications. JSON Web Token (JWT) is a mechanism that brings **security to JSON data**. \n\nJSON grew in adoption from the mid-2000s. This influenced the adoption of JWT. Compared to alternatives such as XML or SAML, app developers found JWT easier to implement and use. JWTs are less verbose and more secure. By the late 2010s, JWTs were widely used in the world of cloud computing and microservices. \n\nJWT is available in two formats: JSON Web Signature (JWS) and JSON Web Encryption (JWE). JWS offers protection against data tampering (integrity). 
JWE prevents others from reading the data (confidentiality). Moreover, developers have a choice of various keys and algorithms to protect JSON data in either of these formats.\n\nIETF has published the main RFCs that cover JWTs. There are also plenty of open source implementations in many languages.\n\n## Discussion\n\n### How does JWT bring security to the web?\n\nConsider an application consisting of many services exposed to clients via APIs. We certainly don't want clients to authenticate with each service. Authentication is done by a specific service or server. Once authenticated successfully, the client should be able to access any of the services without further authentication. This is where JWTs can help. \n\nFor example, in AWS, Amazon Cognito does authentication. An authenticated client is issued a JWT. Whenever the client makes an API request, it presents this token. The API gateway validates the token before allowing the client to access the requested service. Thus, all relevant information is within the JWT. The API gateway need not contact the authentication server to determine if the client should be allowed access. \n\nFor authorized access, privileges can be set within the token. For example, the name-value pair `admin:true` could be set to allow deletion of records and other admin operations. Moreover, such privileges are set when the token is issued. The client or third-party hackers can't tamper with the token. \n\n\n### What are some use cases where JWT can be used?\n\nThe common use case of JWTs is **authorization**. For example, APIs often require an access token and this could be a JWT. Systems implementing Single Sign-On (SSO) can issue JWTs to allow the user to access various services. \n\nAssume a server authenticates a user and issues a single-use short-lived JWT. The user uses this token to download a file from another server. In this example, the JWT temporarily authorizes the user to download a protected resource. In a microservices architecture, JWTs are used to pass authorization across services. OAuth 2.0 access tokens are often JWTs. \n\nJWTs can be used for **authentication**. For example, in OpenID Connect (OIDC), users log in with a JWT. Another example is to authenticate a SOAP request with a JWT rather than a SAML2 assertion. In Oracle Cloud, the API gateway authenticates an API client based on the JWT it receives. Once validated, claims in the token are used to authorize the client. JWTs can be used to authenticate SPAs. \n\nDue to their protection against tampering and snooping, JWTs are a means to **exchange information securely**. \n\n\n### What are the main components of a signed JWT?\n\nA JWT has two essential components: header and payload. In practice, JWTs are signed, that is, they include a signature. This is what we call *JWS*. Thus, the three main components of a signed JWT are: \n\n + **Header**: Specifies the type of token (typically `JWT`) and the algorithm used.\n + **Payload**: The main content of the token that includes a set of claims.\n + **Signature**: This is computed from header and payload to protect the integrity of the token.\n\nHeader and payload are JSON objects. However, these are not transmitted as such. They are Base64-URL encoded, which is similar to Base64 encoding except that characters special to URLs are replaced: `+` becomes `-`, `/` becomes `_`. \n\nThe signature is computed over the signing input `BASE64URL(UTF8(JWS Protected Header)) || '.' || BASE64URL(JWS Payload)` using the algorithm named in the header, and the result is itself Base64-URL encoded. 
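A minimal sketch of this computation, using only the Python standard library and a made-up shared secret, shows how the encoded header, payload and HS256 signature fit together (a real application should rely on a maintained JWT library rather than hand-rolled code):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    """Base64-URL encode without padding, as used in JWS."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")

secret = b"example-shared-secret"                 # hypothetical HS256 key
header = {"alg": "HS256", "typ": "JWT"}
payload = {"sub": "1234567890", "admin": True}    # illustrative claims

# Signing input: BASE64URL(header) || '.' || BASE64URL(payload)
signing_input = (b64url(json.dumps(header, separators=(",", ":")).encode())
                 + b"." + b64url(json.dumps(payload, separators=(",", ":")).encode()))

# HMAC-SHA256 over the signing input, then Base64-URL encoded
signature = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())

print((signing_input + b"." + signature).decode())   # header.payload.signature
```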
\n\nThe JWS is constructed by concatenating header, payload, and signature. Period character separates the fields. We can represent this as `BASE64URL(UTF8(JWS Protected Header)) || '.' || BASE64URL(JWS Payload) || '.' || BASE64URL(JWS Signature)`. This format of JWS is called *JWS Compact Serialization*. There's also *JWS JSON Serialization* that can have multiple signatures. \n\n\n### What are JWT claims and how to specify such claims?\n\nThe payload of a JWT has a set of claims. A claim is a name-value pair. It states a fact about the token and its subject such as username or email address. Claims are not mandatory since JWTs are meant to be compact. It's up to applications to include claims that matter. \n\nSome claim names are **registered** with IANA. Examples include \"iss\" (issuer), \"sub\" (subject), and \"aud\" (audience). Some registered claim names are of datetime type: \"exp\" (token expires at this time), \"nbf\" (token can't be used before this time), and \"iat\" (time when token was issued). For example, `\"exp\":1300819380` says that the token expires at the specified timestamp. \n\nThere are also **public** or **private** claim names. These are application specific and their semantics are agreed between producer and consumer of the token. Public names must be collision-resistant. For example, a name based on domain name or Universally Unique IDentifier (UUID) is unlikely to collide with another public name. \n\n\n### Is it possible to encrypt the payload in a JWT?\n\nA signed token can be read by anyone. The purpose of signature is to prevent hackers from tampering with the header or payload. Any changes to these would result in a different signature, which the attacker can't create without the secret key. Such tampering would cause a failure during signature verification. \n\nIf the intention is to send sensitive payload, signature alone is inadequate. There's a need to encrypt the payload. This is where JWE format becomes relevant. \n\nEncryption of content uses symmetric keys. However, these keys need not be shared in advance. They can be generated dynamically and exchanged within JWE. However, the shared symmetric key is encrypted using asymmetric private-public key pair. \n\nIt's also possible to do both, such as a JWS within a JWE or vice versa. Such as token is called **Nested JWT**. For example, we could create a JWS as usual and encrypt it. This then becomes the encrypted payload of a JWE. A simpler approach is to use only encryption algorithms mentioned in JWA since they also provide integrity protection. \n\n\n### Which are the various algorithms supported by JWT?\n\nBoth symmetric and asymmetric algorithms are supported for signing and encrypting tokens. In fact, this support for a variety of algorithms is perhaps one reason for the wider adoption of JWTs. \n\nFor JWS, at the minimum, an implementation should support HMAC using SHA-256. This uses a shared symmetric key. For the \"alg\" header parameter, its value is \"HS256\". Apart from HMAC for signature, we can have digital signatures using asymmetric keys with RSASSA-PKCS1-v1\\_5, ECDSA, or RSASSA-PSS. Signing is done with the private key. Verification happens with the public key. \n\nFor content encryption in JWE, at the minimum, an implementation should support A128CBC-HS256 and A256CBC-HS512. A128CBC-HS256 does AES encryption in CBC mode with 128-bit IV value, plus HMAC authentication using SHA-256 and truncating HMAC to 128 bits. Encryption key is called Content Encryption Key (CEK). 
\n\nThe CEK itself is encrypted using other algorithms and included in the JWE. Some of these are RSA-OAEP, A128KW, ECDH-ES, ECDH-ES+A128KW, and more. It's also possible to use a shared symmetric key as CEK. \n\n\n### Can I use JWTs as a replacement to session objects?\n\nWith session objects, server maintains the state about each logged-in user, who gets a session ID via a HTTP cookie. Subsequent requests contain the session ID. Server uses it to retrieve the session object and serve the client. Thus, stateless HTTP calls are strung together into a stateful session. \n\nWith JWTs, server doesn't need to store session state. All relevant information is contained in the JWT. This also makes it convenient to deploy on a distributed architecture. One server might issue the JWT. Subsequent client requests could be served by another server via a load balancer. \n\nWhile JWTs seem attractive, a JWT takes more space compared to a session ID. Even without session objects, most client requests will still need access to the database, implying that JWTs don't improve performance. Most web frameworks automatically sign session cookies, implying that signing a JWT isn't really an advantage. JWTs stored on local storage can be less secure. We can't invalidate individual JWTs or deal with stale claims in the token. For these reasons, use of JWTs as an alternative to session IDs is not recommended. \n\n\n### Which are the known vulnerabilities of JWT?\n\nMost attacks on JWT are due to implementations rather than its design. \n\nSome early implementations used the \"alg\" value in header to verify the signature. An attacker could set the value to \"none\"; or change \"RS256\" to \"HS256\" and use the public key as the shared secret key to generate a valid signature. Based on the \"alg\" value, the consumer would skip verification or incorrectly find that the signature is valid. \n\nBrute force attacks on HS256 are simple if the shared secret key is too short. Another possible oversight in implementations is not verifying the claims or not including adequate claims. For example, a token without audience is issued to Org1 but the attacker could present the same token to Org2 can gain access if that username exists in both organizations. \n\nDon't store sensitive information in JWS, that is, in unencrypted form. Don't also assume that encrypted data can't be tampered with. \n\nRFC8725 details many vulnerabilities and best practices to overcome the same. \n\n\n### What are some best practices when using JWT?\n\nPick strong keys. These are often long and created by cryptographic-quality pseudo-random number generators (PRNGs). Don't use human-readable shared secret keys. For federated identity, or when third-party services are involved, it's inconvenient and unsafe to use a shared secret. Instead, use public-private key pair. \n\nDon't rely on the header to select the algorithm for verifying the signature. Use libraries that allow for explicit selection of algorithm. \n\nIt's a good practice to verify applicable claims. For example, verify that token has not expired. In AWS we might verify that the audience claim matches the app client ID created in the Amazon Cognito user pool. Where nested tokens are used, verify signature on all tokens, not just on the outermost token. \n\nUse different validation rules for each token. Avoid key reuse for different tokens. For example, we could use different secret keys for each subsystem. Using \"kid\" claim, we could identify which secret key is used by the token. 
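To make the claim checks recommended above concrete, here is a small sketch. The field names follow the registered claims ("exp", "nbf", "aud"), while `decoded_payload`, the audience value and the error handling are assumptions for illustration; the payload's signature is presumed to have been verified already.

```python
import time

def check_claims(decoded_payload: dict, expected_audience: str) -> None:
    """Reject tokens that are expired, not yet valid, or issued for another audience."""
    now = time.time()
    if decoded_payload.get("exp", 0) <= now:            # missing or past expiry: reject
        raise ValueError("token has expired")
    if decoded_payload.get("nbf", 0) > now:             # token not valid yet
        raise ValueError("token not yet valid")
    if decoded_payload.get("aud") != expected_audience:
        raise ValueError("token issued for a different audience")

# Example: a payload that expires in one hour and targets our API.
check_claims({"exp": time.time() + 3600, "aud": "my-api"}, expected_audience="my-api")
```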
\n\nKeep the lifetime of tokens short, says for a few minutes or hours. In addition, we could include a nonce in the token to prevent replay attacks (which is what OpenID Connect does). \n\n\n### Could you mention some resources concerning JWT?\n\nIETF documents relevant to JWT include RFC7519: JSON Web Token (JWT), RFC7515: JSON Web Signature (JWS), RFC7516: JSON Web Encryption (JWE), RFC7517: JSON Web Key (JWK), RFC7518: JSON Web Algorithms (JWA), and RFC7797: JSON Web Signature (JWS) Unencoded Payload Option. \n\nIANA's JOSA page contains lists of registered header parameter names and algorithm names. \n\nPeyrott's JWT Handbook is worth reading. This book includes JavaScript code with explanations, which is a useful reference for developers. Another useful reference is a JWT cheatsheet published by Pragmatic Web Security.\n\nFor a simpler approach, developers can use third-party JWT libraries. These are available in many languages. Lists of JWT implementations are available at OpenID and at jwt.io. \n\nSite jwt.io offers a debugger to paste a JWT and view its decoded form. Optionally, signature verification is possible if you include the secret key. \n\n## Milestones\n\nApr \n2001\n\nDouglas Crockford and Chip Morningstar send out what is historically the first JSON message. Since JSON is nothing more than plain JavaScript, Crockford himself states that probably this message format was in use as early as 1996. In July 2006, Crockford describes in RFC 4627 the JSON format and its MIME media type `application/json`. \n\n2005\n\nFrom the mid-2000s, the growth of Web 2.0 and the use REST APIs in web apps lead to wider adoption of JSON. The term AJAX itself is coined in 2005 but it's clear than AJAX is not limited to XML: JSON can be used instead. By late 2000s, with the increasing use of JSON on the web, it's recognized that standards are needed to offer security services in JSON format. \n\nDec \n2010\n\nAs an Internet-Draft, **JSON Web Token (JWT)** is published at IETF. This document goes through multiple revisions, with the final draft revision appearing in December 2014. In May 2015, it becomes RFC7519.\n\nSep \n2011\n\nAt IETF, the **Javascript Object Signing and Encryption (JOSE) Working Group** is formed. For better interoperability, the group aims to standardize the mechanism for integrity protection (signature and MAC), encryption, the format of keys and algorithm identifiers. \n\nMay \n2015\n\nIETF publishes **RFC7519: JSON Web Token (JWT)** as a Proposed Standard. Other relevant RFCs are also published the same month: RFC7515: JWS, RFC7516: JWE, RFC7517: JWK, RFC7518: JWA. \n\nJul \n2015\n\nOn Auth0 blog, a new logo for JWT is announced along with a redesigned website at jwt.io. The blog post also notes that interest in JWT has been increasing since 2013. By now, there are 972 JWT-related GitHub repositories and 2600+ threads on StackOverflow. It's also claimed that,\n\n> If you use Android, AWS, Microsoft Azure, Salesforce, or Google then chances are that you are already using JWT.\n\nFeb \n2016\n\nIETF publishes **RFC7797: JSON Web Signature (JWS) Unencoded Payload Option** as a Proposed Standard. Typically, JWT payload is Base64-URL encoded. This document gives the option of skipping this encoding step. For example, a payload `$.02` is sent as it is rather than sending its encoded form of `JC4wMg`. Header parameter \"b64\" controls the use of this option and \"crit\" parameter facilitates backward compatibility. 
\n\nJul \n2019\n\nJWTs stored in local or session storage are vulnerable to XSS attacks. On the other hand, JWTs stored in cookies are vulnerable to CSRF attacks. One blogger proposes to mitigate the risks by storing the signature in a HttpOnly, SameSite, Secure cookie. JWT header and payload are in local storage and transferred in HTTP header as a bearer token. The application server has to assemble the complete JWT from its parts. HttpOnly cookies are inaccessible to JavaScript.","meta":{"title":"JSON Web Token","href":"json-web-token"}} {"text":"# Binary Exponential Backoff\n\n## Summary\n\n\nWhen multiple entities attempt to gain access to a shared resource, only one of them will succeed. Those who fail wait till the resource becomes available and then retry. But if everyone were to retry at the same time, quite possibly none of them will succeed. Moreover, new packets are arriving in addition to those waiting for retries.\n\n**Binary Exponential Backoff (BEB)** is an algorithm to determine how long entities should backoff before they retry. With every unsuccessful attempt, the maximum backoff interval is doubled. BEB prevents congestion and reduces the probability of entities requesting access at the same time, thereby improving system efficiency and capacity utilization.\n\nBEB was initially proposed for computer networking where multiple computers share a single medium or channel. It's most famously used in Ethernet and Wi-Fi networking standards.\n\n## Discussion\n\n### Where is binary exponential backoff useful?\n\nBEB is most useful in distributed systems without centralized control or systems that lack predetermined resource allocation. In such systems, multiple entities attempt to access a shared resource. Because there's no centralized control, whoever manages to grab the resource before anyone else will be allowed to use it. Others have to wait for their turn. \n\nThe problem is that when the resource becomes available, everyone else will attempt to grab it. This results in delays. Entities spend time trying to resolve the confusion. Resource utilization is therefore not optimal. The problem gets worse when many entities (dozens or hundreds) are involved.\n\nBEB is an algorithm that mitigates this problem. BEB is therefore useful in probabilistic systems. It's not useful in deterministic systems where the resource is allocated by a controller, each entity knows its turn and will use the resource at specific times and durations as allocated. \n\n\n### Could you explain how BEB works?\n\nConsider Wi-Fi as an example. Two Wi-Fi stations Sue and Mira want to send data to Arnold. When the stations access the channel at the same time, we say that it's a **collision**. Stations whose packets have just collided will initiate a backoff procedure. Every station maintains a number called **Contention Window (CW)**. The station will choose a random value within this window. This value, which is really the number of idle transmission slots that the station has to wait, is called the **Backoff Period**. During this period, these stations (Sue and Mira) cannot transmit. \n\nThe essence of BEB is that the backoff period is randomly selected within the CW. Each station will potentially have a different waiting time. They can't transmit until the backoff period has passed. Moreover, when another station gains access, backoff timer is paused. It's resumed only when the channel becomes idle again as determined by *Distributed Interframe Space (DIFS)*. 
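The bookkeeping a station performs can be sketched as follows. This is a simplified model with illustrative contention window limits (15 and 1023, values commonly used in Wi-Fi); the doubling rule that the `on_collision` method applies is explained next.

```python
import random

CW_MIN, CW_MAX = 15, 1023   # illustrative contention window limits

class Station:
    def __init__(self):
        self.cw = CW_MIN

    def start_backoff(self) -> int:
        """Pick a random number of idle slots to wait, within the current contention window."""
        return random.randint(0, self.cw)

    def on_collision(self) -> None:
        """Double the contention window, capped at CW_MAX (truncated binary exponential backoff)."""
        self.cw = min(2 * (self.cw + 1) - 1, CW_MAX)

    def on_success(self) -> None:
        """Reset the contention window after a successful transmission."""
        self.cw = CW_MIN

station = Station()
print(station.start_backoff())   # e.g. 7 idle slots before transmitting
station.on_collision()
print(station.cw)                # 31: the window has doubled
```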
\n\nWith every collision, the station will double its CW. This is why the prefix \"binary exponential\" is used. It's common to have minimum and maximum values for CW. \n\n\n### Could you share some facts or details about BEB?\n\nBEB doesn't eliminate collisions. By staggering the channel access due to random backoff, it reduces the *probability of collision*. It's possible that two nodes that collide may backoff the same amount and collide again when they retry. Collision can also happen with nodes that collided long ago and whose backoff just completed. \n\nIt may be argued that randomizing the backoff with every retry is enough to lower the collision probability. Why do we need to double the contention window (CW)? This is because new packets are getting generated and need to be transmitted in addition to collided packets. If CW is not increased, we'll have network congestion with more nodes vying for the channel within the same time. However, doubling the CW is not optimal when network load is low.\n\nOften minimum CW is non-zero, so that retrying nodes backoff at least some amount before retrying. Likewise, there's a maximum CW so that nodes are not starved due to long backoff periods.\n\n\n### What are some well-known applications of BEB?\n\nMedium Access Control (MAC) layer of networking protocols use BEB. For example, both Ethernet and Wi-Fi use Truncated BEB to set the contention window. Actual backoff is selected randomly within the contention window. Due to this randomization, the term *Randomized Exponential Backoff* is sometimes used. \n\nTransmission Control Protocol (TCP) is a protocol that guarantees packet delivery by acknowledging correctly received packets. If acknowledgements are not received, the sender will retransmit the packet. Immediate retransmission can potentially congest the network. Hence, the sender uses BEB before retransmitting. \n\nIn a mobile ad hoc network, routes are discovered when required. It's possible for an attacker to flood the network with Route Request (RREQ) packets. One defence against this is BEB. RREQ packets that are seen too soon (not obeying BEB backoffs) are dropped. \n\nIn network applications, when a request fails due to contention, BEB with jitter is used for retries. Examples include access to AWS DynamoDB, or Google Cloud Storage. \n\nIn general, BEB and its variants are used in wired/wireless networks, transactional memory, lock acquisition, email retransmission, congestion control and many cloud computing scenarios. \n\n\n### What metrics are useful to measure the performance and stability of BEB?\n\nThe following metrics are commonly used:\n\n + **Throughput**: This is the number of packets per second successfully sent over the channel. Algorithm is considered stable if the throughput does not collapse as the offered load goes to infinity. Offered load can be defined as number of nodes waiting to transmit or total packet arrival rate relative to channel capacity.\n + **Delay**: Nodes that experience a collision, backoff and retry later. Delay increases as the channel experiences more packet collisions. Algorithm is considered stable if the delay is bounded.\n + **Call Count**: This is the average number of retries needed to achieve a successful transmission.Other metrics useful during analysis include probability of collision \\(p\\_c\\) and probability that a node will transmit in an arbitrary time slot \\(p\\_t\\). Sometimes BEB is generalized to *exponential backoff*, with a *backoff factor* \\(r\\). 
With BEB, \\(r\\) is set to 2. It's been shown that the optimum backoff factor that maximizes asymptotic throughput is \\(1/(1-e^{-1})\\). \n\n\n### What's the Capture Effect that occurs with BEB?\n\n**Capture Effect** points to a lack of fairness for channel access. Nodes that experience collisions will be in their backoff procedures. New nodes entering the system have a higher chance to capture the channel. These new nodes can therefore transmit long sequences of packets while others are waiting for their backoffs to end. Even if old and new nodes collide, newer nodes will have shorter backoff and will therefore gain access more quickly. \n\nCapture effect has been studied for the Ethernet scenario. It was found that the effect is severe for small number of nodes and improves as more nodes contend for the channel. One proposed solution is to use **Capture Avoidance Binary Exponential Backoff (CABEB)**. \n\nCapture effect is different from **starvation effect**. With starvation, some nodes have little chance to transmit while most are able to transmit. With capture effect, a few nodes occupy the channel most of the time. \n\n\n### What are some variations of BEB?\n\nBy definition, BEB simply doubles the backoff with every collision. So two nodes that collide with their first attempt will most likely collide again since their retries coincide. For this reason, an element of randomness is added to the backoff. This could be termed as *jitter*. \n\nAlternatively, nodes could select a random slot within the contention window as standardized in Ethernet or Wi-Fi. In a modified BEB, each successive slot will be selected with a probability of \\(1/2^i\\) after \\(i\\) collisions. This means that next retry can potentially happen after \\(2^i\\) slots. \n\nWith **Truncated BEB**, a maximum backoff time is defined so that nodes that experience lots of collisions don't end up waiting longer and longer. However, there may be limit to the maximum number of retries. In Wi-Fi, CW is in the range [23, 1023]. \n\nOther variations include continuously listening to the channel and modifying the backoff; tuning the CW based on slot utilization and collision count; increasing the CW with every alternate collision. \n\n## Milestones\n\n1970\n\nNorman Abramson proposes the use of a shared broadcast channel for the ALOHA system. This system would use radio communications to connect computers of the University of Hawaii spread across the islands of Hawaii. The system comes into operation in June 1971. The design of ALOHA didn't define any backoff since it was assumed that both new and retransmitted packets arrive according to a Poisson process. \n\n1973\n\nLeonard Kleinrock and Simon S. Lam propose the first backoff algorithm for multiple access in slotted ALOHA. A uniform random retransmission delay over K slots is proposed. Channel throughput increases with K but when K goes to infinity channel throughput approaches 1/e. In July, Lam shows that with fixed K backoff, slotted ALOHA is unstable. This suggests that K has to be adaptive. \n\n1975\n\nSimon S. Lam and Leonard Kleinrock propose an adaptive backoff algorithm called **Heuristic RCP (Retransmission Control Procedure)**. The idea is to adapt K based on the number of collisions (m) a packet has experienced. If K increases steeply with respect to m, channel saturation won't happen. Binary exponential backoff is a special case of Heuristic RCP where \\(K=2^m\\). \n\n2005\n\nIEEE publishes IEEE 802.11e, an amendment to the 802.11 standard. 
This specifies Quality of Service (QoS) enhancements at the MAC layer. It proposes a feature named **Enhanced Distributed Channel Access (EDCA)**. Traffic is categorized by type into an Access Category (AC). Each AC has its own interframe spacing, and minimum and maximum values for the contention window. This is the way traffic is prioritized towards channel access.","meta":{"title":"Binary Exponential Backoff","href":"binary-exponential-backoff"}} {"text":"# Data Modelling with MongoDB\n\n## Summary\n\n\nThough MongoDB is schema-less, there's an implied structure and hence a data model. \n\nWhen designing a data model, developers should ask what data needs to be stored, what data is likely to be accessed together, how often will a piece of data be accessed, and how fast will data grow and will it be unbounded. Answers to these questions will lead to a data model that's right for each application. \n\nIn designing the data model, there's no format process, algorithms or rules. Based on years of experience, MongoDB practitioners have come up with a set of **design patterns** and how they fit into common use cases. These are only guidelines. Ultimately, the developer has to analyze the application and see what fits best.\n\n## Discussion\n\n### Given that MongoDB is schema-less, why do we need data modelling?\n\nBy being schema-less, we can easily change the structure in which data is stored. Documents in a collection need not conform to a single rigid structure. The trade-off is that by not having a schema and not validating against that schema, software bugs can creep in. When data is stored in many different ways, application complexity increases. Schema is self-documenting and leads to cleaner code. With sophisticated validations, application code can become simpler. \n\nWhile MongoDB is schema-less, in reality, data is seldom completely unstructured. Most data has some implied structure, though MongoDB may not validate that structure. Even when data evolves over time, there's some base structure that stays constant. Starting from MongoDB 3.2, document validation is possible. \n\nHere are some specific areas where schema helps: \n\n + When matching on nested document fields, the order of fields matters.\n + Data with a complicated structure is easier to understand and process given the schema.\n + When using an Object Document Manager (ODM), the ODM can benefit from the schema.\n + Frequent changes to the document structure without a schema can cause performance issues.\n\n### What's the difference between embedded data models and normalized data models?\n\nWhile MongoDB is document-oriented, it stills allows for relations among collections. This is supported by the `$lookup` and `$graphLookup` pipeline operators. When designing a data model, we have to therefore decide if it makes sense to embed information within a document or to keep it in a separate collection. \n\n**Embedded data model** is applicable when there's a \"contains\" relationship. Reads are performant when there's a need to access nested documents along with the main document. Document can also be updated in a single atomic operation. To model one-to-many relationship, an array of documents can be nested. However, MongoDB limits a document size to 16MB and at most 100 levels of nesting. \n\n**Normalized data model** uses object references to model relationships between documents. Such a model avoids data duplication: a document can be referenced by many other documents without duplicating its content. 
Complex many-to-many relationships are easily modelled. Likewise, large hierarchical datasets can be modelled. Data can also be referenced across collections. \n\n\n### How do I define 1-to-1, 1-to-n and n-to-n relationships in MongoDB?\n\nConsider a User document. The person works for only one company. Company field or document is embedded within the User document. This is an example of 1-to-1 relationship. \n\nA person may have multiple addresses, which is a 1-to-n relationship. This is modelled in MongoDB as an array of documents, that is, an array of addresses is embedded into the User document. \n\nAnother 1-to-n example is a Product document that contains many Part documents. There could be dozens of parts. It may be necessary to access each part on its own independent of the product. Hence, we might embed into a Product document an array of ObjectID references to the parts. Parts are stored in a separate collection. \n\nConsider a to-do application that assigns tasks to users. A user can have many tasks and a task can be assigned to many users. This is an example of n-to-n relationship. A User document would embed an array of ObjectID references to tasks. A Task document would embed an array of ObjectID references to users. \n\n\n### Could you share essential tips towards MongoDB schema design?\n\nEmbed documents unless there's a good reason not to. Objects that need to be accessed on their own or high-cardinality arrays are compelling reasons not to embed. \n\nArrays model 1-to-n relationship. If n is few, embed the documents. If n is in the hundreds, embed ObjectID references, called **child referencing**. If n is in the thousands, embed the `1` ObjectID reference from within `n` documents, called **parent referencing**. \n\nDon't shun application-level joins. If data is correctly indexed and results are minimally projected, they're quite efficient. \n\nConsider the **read-to-write ratio**. A high ratio favours denormalization and improves read performance. A field that's updated often is not a good candidate for denormalization since it has to be updated in multiple places. \n\nAnalyze your application and its data access patterns. Structure the data to match the application. \n\n\n### What schema design patterns are available in MongoDB?\n\nWe briefly describe each pattern: \n\n + **Approximation**: Fewer writes and calculations by saving only approximate values.\n + **Attribute**: On large documents, index and query only on a subset of fields.\n + **Bucket**: For streaming data or IoT applications, bucket values to reduce the number of documents. Pre-aggregation (sum, mean) simplifies data access.\n + **Computed**: Avoids repeated computations on reads by doing them at writes or at regular intervals.\n + **Document Versioning**: Allows different versions of documents to coexist.\n + **Extended Reference**: Avoid lots of joins by embedding only frequently accessed fields.\n + **Outlier**: Data model and queries are designed for typical use cases, and not influenced by outliers.\n + **Pre-Allocation**: Reduce memory reallocation and improve performance when document structure is known in advance.\n + **Polymorphic**: Useful when documents are similar but don't have the same structure.\n + **Schema Versioning**: Useful when schema evolves during the application's lifetime. Avoids downtime and technical debt.\n + **Subset**: Useful when only some data is used by application. Smaller dataset will fit into RAM and improve performance.\n + **Tree**: Suited for hierarchical data. 
Application needs to manage updates to the graph.\n\n### Could you share an example showing the use of MongoDB schema design patterns?\n\nThe example in the figure pertains to an e-commerce application. It shows the use of five design patterns among three collections: \n\n + **Schema Versioning**: Every collection includes an integer field `schema` to store the schema version.\n + **Subset**: Items and reviews are stored as separate collections but since `top_reviews` are frequently accessed from items, these are embedded into items documents. Other reviews are rarely accessed. Another example is `staff` information embedded into stores collection.\n + **Computed**: Since `sum_reviews` and `num_reviews` are frequently accessed, these are pre-computed and stored in reviews. Another example are fields `tot_rating` and `num_ratings` in items collection.\n + **Bucket**: Rather than store each review as a separate document, reviews are bucketed into a time window (`start_date` and `end_date`).\n + **Extended Reference**: Fields part of other collections, but frequently accessed, are duplicated for higher read performance and avoid joins. Fields `sold_at` in items and `items_in_stock` in stores are two examples.\n\n### What are some schema design anti-patterns in MongoDB?\n\nWe could have large arrays stored in documents, some of which are unbounded. Storing lots of a data together leads to bloated documents. Storing numerous collections in a database, some of which are unused, is another anti-pattern. \n\nA collection may have many indexes, some of which could be unnecessary. Remove indexes that are rarely used. Remove indexes already covered by another compound index. \n\nAnother anti-pattern is storing information in separate collections although they're often accessed together. The `$lookup` operator is similar to JOIN in relational databases. It's slow and resource intensive. Instead, it's better to denormalize this data. \n\nA case-insensitive query without having a case-insensitive index to cover it is another anti-pattern. We could use `$regex` with the `i` option but this doesn't efficiently utilize case-insensitive indexes. Instead, create a case-insensitive index, that is, index with a collation strength of 1 or 2. \n\nOn MongoDB Atlas, Performance Advisor or Data Explorer can spot anti-patterns and warn developers of the same. \n\n## Milestones\n\nFeb \n2009\n\n**MongoDB 1.0** is released. By August, this version becomes generally available for production environments. \n\nDec \n2015\n\nMongoDB 3.2 is released. **Schema validation** is now possible during updates and inserts. Validation rules can be specified with the `validator` option via the `db.createCollection()` method and `collMod` command. Validation rules follow the same syntax as query expression. This release also adds `$lookup` pipeline stage to join collections. \n\nNov \n2017\n\nMongoDB 3.6 is released with support for **JSON Schema validation**. The `$jsonSchema` operator within a `validator` expression must be used for this purpose. \n\nJun \n2018\n\nMongoDB 4.0 is released with support for multi-document transactions. However, it's useful to note that due to associated performance costs, developers may benefit from schema redesign, such as a denormalized data model. \n\nApr \n2019\n\nOn the MongoDB blog, Coupal and Alger conclude their series of articles titled *Building with Patterns* with a useful summary. Along with a brief description of each data modelling pattern, they note possible use cases of each pattern. 
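To connect the schema validation milestones above to practice, here is a minimal sketch of JSON Schema validation using the `pymongo` driver. The connection string, database, collection and field names are made up for illustration.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumes a local MongoDB instance
db = client["shop"]

# Documents in 'items' must carry a string 'name' and a non-negative double 'price'.
db.create_collection("items", validator={
    "$jsonSchema": {
        "bsonType": "object",
        "required": ["name", "price"],
        "properties": {
            "name": {"bsonType": "string"},
            "price": {"bsonType": "double", "minimum": 0},
        },
    }
})

db.items.insert_one({"name": "pen", "price": 1.5})      # passes validation
db.items.insert_one({"name": "pen", "price": "cheap"})  # rejected: raises a WriteError
```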
\n\nJul \n2021\n\nMongoDB 5.0 is released. When schema validation fails, detailed explanations are shown.","meta":{"title":"Data Modelling with MongoDB","href":"data-modelling-with-mongodb"}} {"text":"# Linear Regression\n\n## Summary\n\n\nLinear regression is a statistical technique used to establish the relationship between variables in a dataset. The equation \\(y = mx + c\\) describes a linear relationship between dependent variable \\(y\\) and independent variable \\(x\\). We may state that \\(y\\) depends on \\(x\\). Given sufficient data, linear regression estimates the values of coefficient \\(m\\) and constant \\(c\\). In a geometric interpretation, \\(m\\) is the slope and \\(c\\) is the intercept. In an alternative notation, these are expressed as \\(β\\_1\\) and \\(β\\_0\\) respectively. \n\nVariable \\(y\\) is also called the response or predicted variable. Variable \\(x\\) is also called the predictor variable. The reason for this is that once the parameters of the model \\(β\\_0\\) and \\(β\\_1\\) are estimated, we can make predictions of \\(y\\) given any value of \\(x\\). \n\nLinear regression is a field of statistics. This article looks at the types, important assumptions and techniques in linear regression.\n\n## Discussion\n\n### Could you explain linear regression with some examples?\n\nLinear regression is frequently used by businesses to understand the link between advertising budget and revenue. In other words, it answers the question \"For every advertising dollar I spend, how much will my revenue increase?\" This can be modelled as \\(Revenue=β\\_0+β\\_1 \\cdot AdSpending\\)\n\n\\(β\\_0\\) represents total expected revenue when ad spending is zero. The coefficient \\(β\\_1\\) represents the average increase in total revenue when ad spending is increased by one unit. When \\(β\\_1<0\\), higher ad spending is associated with lower revenue. When \\(β\\_1\\) is close to zero, ad spending has little effect on revenue. When \\(β\\_1>0\\), higher ad spending leads to higher revenue. The model thus aids decision making: a company may decrease or increase its ad spending based on the value of \\(β\\_1\\). \n\nIn the figure, the red line is the **best-fit straight line**, \\(y=4.187-0.356x\\). The values \\(β\\_0=4.187\\) and \\(β\\_1=-0.356\\) are what regression analysis has estimated from available data. Yield falls by 0.356% for every 1% increase in cultivated area. This model can now be used to make predictions; that is, given an area, we can predict the yield. \n\n\n### What are the main types of linear regression models?\n\nThe main types of linear regression models are:\n\n + **Simple Linear Regression**: This is the most basic type and deals with a single predictor variable. Predicting revenue from ad spending is an example.\n + **Multiple Linear Regression**: Aka *multivariable linear regression*. This is applicable when there are many predictor variables. An example of this is predicting wine prices. This depends on mean growing season temperature, harvest rainfall, winter rainfall, and more.\n + **Hierarchical Linear Model**: Aka *multilevel regression*. Such a model captures the natural hierarchy in predictor variables. Analysis involves a hierarchy of regressions, such as A regressed on B, and B regressed on C. For example, students are nested into classes, classrooms into schools, and schools into districts. 
So a student's test score can be modelled based on overall performance at different levels.\n\n### What's the mathematical notation of a linear regression model?\n\nWe consider the general case of multiple linear regression with \\(k\\) independent variables. The model therefore has to estimate \\(k+1\\) parameters: constant \\(β\\_0\\) and coefficients \\(β\\_j\\;for\\;1\\le j\\le k\\). The mathematical notation of this model is given in the figure. \n\nIn linear algebra, this is expressed as \\(Y = X \\cdot β + ϵ\\), where \\(X\\) is a \\(m\\,\\times\\,k+1\\) matrix, \\(β\\) and \\(Y\\) are m-dimensional vectors, and \\(m\\) being the number of observations or data points. \n\nRegression analysis is simply finding \\(β\\) that minimizes the error term \\(ϵ\\). This leads us to what's called the **normal equation**: \\(β=(X^TX)^{-1} \\cdot X^TY\\). As is apparent, this equation is in a form that solves for model parameters \\(β\\). \n\nIn Machine Learning (ML), it's common to use \\(θ\\) instead of \\(β\\) for the model parameters. \n\n\n### What are some estimation methods in linear regression?\n\nMethods commonly used for estimating the model parameters (also called estimates) are: \n\n + **Ordinary Least Squares (OLS)**: This looks at the sum of the square of the difference between actual observed value and its prediction via the model. The method attempts to minimize this sum.\n + **Method of Moments (MoM)**: This uses moments, which are the expectation of the powers of a random variable. Number of moments to be calculated is equal to the number of unknown parameters. The resulting system of equations is then solved.\n + **Maximum Likelihood Estimate (MLE)**: This seeks to maximize the likelihood function. In other words, we determine the estimates that make the observed values most probable. MLE method is applicable when the probability distribution of the error terms is known.Related to OLS are more sophisticated methods including Weighted Least Squares (WLS) and Generalized Least Squares (GLS). Less common ones are Least Median Squares and Least Trimmed Squares. In fact, the minimization need not be of the squares. We could minimize on Least Absolute Deviations, Huber, Bisquare, etc. \n\n\n### How do I evaluate the performance of a linear regression model?\n\nModels are rarely perfect and there's a need to measure how good a model really is. The differences between predicted values and actual values are called **residuals**. Model evaluation and validating the assumptions can be performed from residuals and this field of study is called **Residual Analysis**. \n\n**Mean Absolute Error (MAE)** and **Mean Squared Error (MSE)** are two ways to quantify the residuals. MAE looks at absolute differences. MSE looks at the square of the differences. Another measure is **Root Mean Squared Error (RMSE)** that's the square root of MSE. RMSE has the same unit as the output variable, making it easier to interpret. \n\nPerhaps the most widely used statistical measure is **R-Squared (R2)**. It quantifies the proportion of the variation explained by the model. Closer it is to 1, better the model explains the data. R2 is also called the **Coefficient of Determination**. \n\n\n### What are the main assumptions when constructing a linear regression model?\n\nIn linear regression, we usually make the following assumptions: \n\n + **Linearity**: The dependent variable Y is related to the independent variables X in a linear way.\n + **Independence**: Observations are independent of one another. 
We could also say that the residuals are independent of Y. In time-series data, observations are not correlated. When data doesn't meet this assumption, we have a problem called *autocorrelation*.\n + **Normality**: Residuals are normally distributed. Equivalently, at a fixed observation X, the dependent variable Y is normally distributed.\n + **Homoscedasticity**: The residuals have the same variance at all predicted or fitted points Y. When data doesn't meet this assumption, we have a problem called *heteroscedasticity*.If one or more of these assumptions is violated, what the model predicts may be incorrect or even deceptive. \n\n\n### How can I determine if linear regression is appropriate for a particular set of data?\n\nScatterplot can help us validate the linearity assumption. For multiple linear regression, 2-D pairwise scatter plots, rotating plots, and dynamic graphs can help. \n\nTo validate the independence assumption, a scatterplot of residuals versus fitted values shouldn't show any pattern. \n\nTo validate the normality assumption, a normal probability plot, a residual histogram or a quantile-quantile plot can be used. \n\nTo validate the homoscedasticity assumption, do a scatterplot of residuals against the fitted values. A cone-shaped pattern implies that the residuals vary more for some predicted values than others, thus invalidating the assumption. There are also lots of statistical tests to check for homoscedasticity: Bartlett's Test, Box's Test, Brown-Forsythe Test, Hartley's Fmax Test, Levene's Test, and Breusch-Pagan Test. \n\nWe also expect predictor variables to be independent of one another. A scatterplot of one independent variable with another can validate this. We can also calculate the correlation coefficients pairwise for all independent variables. Correlation coefficients close to ±1 imply high correlation. Low model coefficients or high Variance Inflation Factor (VIF) indicate correlated variables. \n\n\n### How do autocorrelation, multicollinearity, and heteroscedasticity affect linear regression estimates?\n\n**Autocorrelation** exists when multiple observations of a predictor variable are not independent. This is common in time-series data but could occur in other scenarios such as samples drawn from a cluster or geographic area. Autocorrelation can be detected with the Durbin-Watson test. Due to autocorrelation, OLS estimators will be inefficient. Estimated variance of regression coefficients will be biased and inconsistent. Hypothesis testing will be invalid. R2 will be overestimated. \n\nA correlation between two or more predictor variables is referred to as **multicollinearity**. Though the overall model fit is not affected, multicollinearity can increase the variance of estimates and make them sensitive to model changes. Model becomes harder to interpret. It's harder to determine the precise effect of each predictor. \n\nHeteroscedasticity means unequal spread of the residuals. It's a problem because OLS regression assumes constant spread. Though this doesn't introduce bias to the estimates, it does make them less precise. It produces smaller p-values because OLS regression doesn't detect the increase in the variance of the estimates. Thus, we may wrong conclude that an estimate is statistically significant. \n\n\n### How do we model the interaction of independent variables?\n\nWhen one independent variable has a distinct effect on the outcome based on the values of another independent variable, we call this an **interaction**. 
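As a minimal sketch (all names and numbers below are illustrative), an interaction term is simply an extra column formed by the product of two predictors; ordinary least squares then estimates its coefficient like any other, for example with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.uniform(0, 10, n)          # e.g. a dose
x2 = rng.integers(0, 2, n)          # e.g. a binary covariate
# Synthetic response with a true interaction effect (coefficients are arbitrary)
y = 1.0 + 0.5 * x1 + 2.0 * x2 + 0.8 * x1 * x2 + rng.normal(0, 1, n)

# Design matrix: intercept, x1, x2 and the interaction column x1*x2
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["b0", "b1", "b2", "b3_interaction"], beta.round(2))))
# If b3_interaction is close to zero, the two predictors act (nearly) additively.
```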
\n\nAssume that a cholesterol-lowering medication is being evaluated in a clinical trial. Drug effect is dependent on both dose administered and the patient's sex. Without interaction between dose and sex, effect increases at a fixed slope with respect to the dose regardless of the sex. \n\nWith interaction, we can no longer ask what's the drug's effect since for every unit dose the incremental effect depends on the sex. Dose affects males differently from females and this what interaction is about. We can see in the figure that the slope is steeper for males than for females. We could use two separate linear models, one for each sex. But it's easier to enhance a single model to handle the interaction. Such as model can be written as, \\(Y = β\\_0 + β\\_1 \\cdot dose + β\\_2 \\cdot sex + β\\_3 \\cdot dose \\cdot sex\\). If there's no interaction, \\(β\\_3\\) is zero. \n\n\n### What are random effects, fixed effects and mixed effects models?\n\nWhen model parameters \\(β\\) are random variables, it's called a random effects model. Otherwise, we have a fixed effects model. A model that considers both fixed and random effects is called a mixed effects model. \n\nMathematically, a mixed effects model is written as, \\(Y = X \\cdot β + Z \\cdot u + ϵ\\) where the first term models the fixed effects and the second term models the random effects. \n\nLet's assume a study involving 10 people. Repeated measurements are collected from each person. However, these individuals are only \"random\" samples from a larger population. This sampling is accounted for by the random effects term of the model. \n\nThe random effects term also models a hierarchy of distinct populations (hence it relates to multilevel regression). An example hierarchy is students, schools, and districts. Random effects term models the variations at school and district levels. Perhaps a good definition that clarifies this is, \n\n> Fixed-effect parameters describe the relationships of the covariates to the dependent variable for an entire population, random effects are specific to clusters of subjects within a population.\n\n\n### What software packages are useful for solving linear regression problems?\n\nIn Python, **scikit-learn** is perhaps the most helpful Python for linear regression. In particular, `sklearn.linear_model.LinearRegression` and `sklearn.metrics` are relevant. \n\nIn R, there are many packages with functions to perform linear regression. Package `stats` is most useful. Package `car` can help with ANOVA analysis, residual analysis and testing the assumptions. Package `MASS` enables Generalized Least Squares (GLS) and robust fitting of linear models. Package `caret` streamlines the model training process and includes ML algorithms. Package `glmnet` enables many types of linear regression. `BLR` supports Bayesian linear regression, which is a subset of linear regression. `Lars` supports Lasso regression efficiently. \n\n## Milestones\n\n1875\n\nSir Francis Galton and Karl Pearson reveal that Galton's research on the genetic traits of sweet peas is probably the first example of linear regression. \n\n1894\n\nSir Francis Galton proposes the notion of linear regression for the first time. \n\n1896\n\nPearson's first rigorous discussion of **correlation** and **regression** is published in the *Philosophical Transactions of the Royal Society of London*. Pearson credits Bravais (1846) with discovering the first mathematical formulas for correlation. 
\n\n1922\n\nPearson's theory explains how the **regression slope** is discovered. \n\n1938\n\nPearson develops a theory for **multiple regression**. He also makes novel advances in other areas of statistics such as chi-square. \n\n1981\n\nGhiselli explains a simpler proof for the product-moment approach than Pearson's. \n\n1990\n\nComputations to a complete linear regression is formed.","meta":{"title":"Linear Regression","href":"linear-regression"}} {"text":"# Naive Bayes Classifier\n\n## Summary\n\n\nNaive Bayes is a **probabilistic classifier** that returns the probability of a test point belonging to a class rather than the label of the test point. It's among the most basic Bayesian network models, but when combined with kernel density estimation, it may attain greater levels of accuracy. . This algorithm is applicable for Classification tasks only, unlike many other ML algorithms which can typically perform Regression as well as Classification tasks. \n\nNaive Bayes algorithm is considered naive because the assumptions the algorithm makes are virtually impossible to find in real-life data. It uses conditional probability to calculate a product of individual probabilities of components. This means that the algorithm assumes the presence or absence of a specific feature of a class which is not related to the presence or absence of any other feature (absolute independence of features), given the class variable.\n\n## Discussion\n\n### Could you explain the Naive Bayes classifier with examples?\n\nConsider two groups of insects, grasshoppers and katydids. By studying the antenna lengths from many insect samples, we can discern some patterns and computed probabilities. For examples, given an antenna length of 3 cm, the insect is more likely to be a grasshopper than a katydid. Naive Bayes classifier is a technique to perform such a classification. Antenna length is a feature that's used to classify an insect into one of two classes. \n\nSuppose the antenna length is 5 cm. Probabilities computed from observed samples inform that both classes are equally likely. In this case, classification can be improved by considering more features such as abdomen length. NB classifier assumes that features are independent of one another. \n\nConsider the statement \"Officer Drew arrested me.\" Is Drew male or female? We can answer this by gathering data on the officer: height, eye colour and long/short hair. Then we lookup a police database of all officers and apply NB classifier. This problem uses three independent features and two classes (male or female). \n\n\n### What is Bayes' Theorem and how is it relevant to the NB classifier?\n\n**Bayes theorem** (aka Bayes rule) works on conditional probability. In conditional probability, the occurrence of a particular outcome is conditioned on the outcome of another event occurring. Given two events A and B, Bayes theorem states that,\n\n$$P(A|B) = \\frac{P(A⋂B)}{P(B)} = \\frac{P(A) \\cdot P(B|A)}{P(B)}$$\n\nwhere \\(P(A)\\) and \\(P(B)\\), called marginal probability or prior probability, are the probabilities of events A and B event occurring; where \\(P(A|B)\\), called posterior probability, is the probability of event A occurring given that event B has occurred; where \\(P(B|A)\\), called likelihood probability, is the probability of event B occurring given that event A has occurred; \\(P(A⋂B)\\) is the joint probability of both events occurring. 
\\(P(A|B)\\) and \\(P(B|A)\\) are also called conditional probabilities.\n\nSuppose you have drawn a red card from a deck of playing cards. What's the probability that it's a four? We apply conditional probability. There are 26 possible red cards and two of them are fours. Thus, \\(P(four|red)=2/26=1/13\\). Bayes Theorem allows us to reformulate the problem as follows: \n\n$$P(four|red) = P(four) \\cdot P(red|four) / P(red)\\\\= (4/52 \\cdot 2/4) / (26/52)\\\\= 1/13$$\n\n\n### What are the types of the NB classifier?\n\nscikit-learn implements three naive Bayes variants, based on three different probabilistic distributions: Bernoulli, multinomial, and Gaussian.\n\n**Bernoulli Naive Bayes** \n\nThe predictors in this case are boolean variables, so the only values are 'True' and 'False' (or equivalently 'Yes' and 'No'). We use it when the data has a multivariate Bernoulli distribution. \n\n**Multinomial Naive Bayes** \n\nFeature vectors represent the frequencies with which particular events were generated by a multinomial distribution. This is the event model most commonly used for document classification. For example, to decide whether a document belongs to the 'Legal' or 'Human Resources' category, this technique uses the frequency of the words present as features. \n\n**Gaussian Naive Bayes** \n\nIt is used for numerical/continuous features. The distribution of continuous values is assumed to be Gaussian, and therefore the likelihood probabilities are computed from the Gaussian distribution. \n\n\n### How would you use Naive Bayes classifier for categorical features?\n\nFor a discrete variable with more than two possible outcomes, such as the roll of a die, the categorical distribution is an extension of the Bernoulli distribution. In contrast to the multinomial distribution, the categorical distribution gives the probability of different outcomes for a single drawing rather than multiple drawings. \n\nThe features should be encoded using label encoding techniques, with each category assigned a unique number.\n\nThe smoothed probability of category t of feature i, given class c, is: \n\n\\(P(x\\_i = t | y = c; α) = (N\\_{tic} + α) / (N\\_c + α n\\_i)\\)\n\n\\(N\\_{tic}\\) = Number of times category t appears in the samples of feature \\(x\\_i\\) which belong to class c\n\n\\(N\\_c\\) = Total number of samples with class c\n\n\\(α\\) = Laplace smoothing parameter used to handle the zero frequency problem\n\n\\(n\\_i\\) = Number of available categories of feature i\n\n\n### What is Laplace smoothing in the context of the NB classifier?\n\nLaplace smoothing is a smoothing technique used in Naive Bayes to solve the problem of zero probability. Consider text categorization, where the aim is to determine if a review is positive or negative. Based on the training data, we create a likelihood table. We use the likelihood table values when querying a review, but what if a word in a review was not present in the training dataset? For example, a test query has the form: Query review = x1 x2 x’\n\nLet a test sample have three words, where we assume x1 and x2 are present in the training data but not x’. This is where Laplace smoothing comes into the picture.\n\n\\(P(x’|positive) = (number of reviews with x’ and target\\_outcome=positive + α) / (N + α\\*k)\\)\n\nk denotes the number of dimensions (features) in the data.\n\nN is the number of reviews with target\\_outcome=positive.\n\nα represents the smoothing parameter. 
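As a minimal sketch of the smoothing formula above (the counts below are invented purely for illustration), the smoothed conditional probability can be computed directly from counts:

```python
# Laplace-smoothed category probability for one feature, following the formula above.
def smoothed_prob(count_t_in_class, total_in_class, n_categories, alpha=1.0):
    """P(x_i = t | y = c) with Laplace (additive) smoothing."""
    return (count_t_in_class + alpha) / (total_in_class + alpha * n_categories)

# A category never seen in the positive class: unsmoothed probability would be 0.
print(smoothed_prob(count_t_in_class=0, total_in_class=50, n_categories=3))   # ~0.019
print(smoothed_prob(count_t_in_class=20, total_in_class=50, n_categories=3))  # ~0.396
```

With smoothing, an unseen category contributes a small non-zero probability instead of zeroing out the whole product of likelihoods.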
\n\n\n### Can we use the NB classifier when features are not independent?\n\nThe process of evaluating features based on how useful they are in predicting the target variable is known as feature importance. Naive Bayes classifiers do not provide an intrinsic technique for determining the relevance of features. Naive Bayes algorithms forecast the class with the highest probability by computing the conditional and unconditional probabilities associated with the features. As a result, no coefficients are generated or associated with the features used to train the model. However, there are ways of analysing the model post-hoc, after it has been trained. One of these strategies is Permutation Importance, which has been neatly implemented in scikit-learn. \n\nWhen the data is tabular, **permutation feature importance** is a model inspection technique that can be utilised for any fitted estimator. For a given dataset, the permutation importance function computes the feature importance of estimators. The **n\\_repeats** option specifies how many times a feature is randomly shuffled before returning a sample of feature importances. \n\n\n### What are some applications of the NB classifier?\n\n**Text Classification/Spam Filtering/Sentiment Analysis**: Naive Bayes classifiers, which are commonly employed in text classification (owing to better results in multi-class problems and the independence criterion), have a greater success rate than other techniques. As a result, they are commonly utilised in spam filtering (detecting spam e-mail) and sentiment analysis (in social media analysis, to identify positive and negative customer sentiments). \n\n**Recommendation System**: The Naive Bayes Classifier and Collaborative Filtering work together to create a Recommendation System that employs machine learning and data mining techniques to filter unseen data and forecast whether a user would enjoy a given resource or not. \n\n**Multi-class Prediction**: This algorithm is also well-known for its multi-class prediction capability. We can estimate the likelihood of each of the target variable's classes. \n\n**Real-time Prediction**: Naive Bayes is a fast, eager-learning classifier. As a result, it can be used to make real-time predictions. \n\n\n### How is the NB classifier related to logistic regression?\n\nGiven input features \\(X\\), both NB classifier and logistic regression predict an output class, that is, output \\(Y\\) is categorical. Logistic regression directly estimates \\(P(Y|X)\\) whereas NB classifier applies the Bayes theorem and estimates \\(P(Y)\\) and \\(P(X|Y)\\). As such, we call logistic regression a **discriminative classifier** and NB a **generative classifier**. \n\nIt's been observed that on small training datasets, NB classifier does better than logistic regression. If more training samples are available, logistic regression does better. While logistic regression has a lower asymptotic error, NB classifier may converge faster to its higher asymptotic error. \n\nIt's known that the Gaussian Naive Bayes (GNB) classifier is closely related to logistic regression. Parameters of one model can be expressed in terms of the other. Moreover, asymptotically both converge to the same classifier when GNB assumptions hold. When the assumptions don't hold, such as dependence among features, logistic regression does better because it adjusts its parameters to give a better fit. 
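As a rough sketch of this comparison (the synthetic dataset and settings are purely illustrative, and results will vary), both classifiers can be fit with scikit-learn and scored on a held-out set as the training size grows:

```python
# Compare a generative classifier (Gaussian NB) with a discriminative one
# (logistic regression) at increasing training set sizes.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (20, 100, 1000):   # growing training set sizes
    nb = GaussianNB().fit(X_train[:n], y_train[:n])
    lr = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(n, round(nb.score(X_test, y_test), 3), round(lr.score(X_test, y_test), 3))
```

One would typically expect Gaussian NB to be competitive at small training sizes, with logistic regression catching up or overtaking as more samples are used.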
\n\n\n### What are some disadvantages and advantages of the NB classifier?\n\n**Advantages**: Naive Bayes is simple to implement. The conditional probabilities are simple to compute. The probabilities can be computed directly; there's no need for iterations. As a result, this approach is useful in situations where training speed is critical. If the conditional independence assumption actually holds, the results can be excellent. This algorithm predicts classes faster than many other classification algorithms. \n\n**Disadvantages**: The premise of independent predictors is the main limitation of Naive Bayes. Naive Bayes implicitly assumes that all attributes are independent of one another. In practice, it is very hard to obtain a set of predictors that are totally independent. If a categorical variable in the test data set has a category that was not observed in the training data set, the model will assign a 0 (zero) probability and will be unable to predict. This is commonly referred to as the zero frequency problem. We can use a smoothing technique to remedy this. Laplace estimation is one of the most basic smoothing techniques. \n\n## Milestones\n\n1763\n\nThe Royal Society publishes a paper on probability by Thomas Bayes after his death in 1761. It's titled *Essay Towards Solving a Problem in the Doctrine of Chances* and details what would later become famous as the **Bayes inference**. The basic idea is to revise predictions based on new evidence. Decades later (early 19th century), Pierre-Simon Laplace develops and popularizes Bayesian probability. \n\n1940\n\nBayesian approach is applied during the Second World War. It sees a revival in the years after the war. Earlier, Bayesian approach had been criticized. The frequentist approach developed by R.A. Fisher had been favoured since the mid-1920s. \n\n1960\n\nMaron and Kuhns apply Bayes' Theorem to the task of **Information Retrieval (IR)**. The probability of retrieving a relevant document given a query can be computed from the prior probability of document relevance and conditional probability of user making a particular query given the relevant document. Over the next forty years, Naive Bayes is the main technique in IR until machine learning techniques become popular. \n\n1968\n\nHughes considers a **two-class pattern recognition problem**. The model considers \\(n\\) discrete values that can be measured and \\(m\\) sample patterns. He shows that for a given \\(m\\), there's an optimal \\(n\\) that minimizes the pattern recognition error. This is shown in the figure (right) for the case of equal class probabilities. The figure (left) also shows an example of \\(n=5\\) in which values 1-3 imply class \\(c\\_1\\) and values 4-5 imply class \\(c\\_2\\). \n\n1973\n\nDuda and Hart use the Naive Bayes classifier in **pattern recognition**. \n\n1992\n\nLangley et al. present an **analysis** of Bayesian classifiers considering noisy classes and noise-free attributes. They find that the Naive Bayes classifier gives comparable results to the C4 algorithm that induces decision trees. They conclude that despite its simplicity, the Naive Bayes classifier deserves more research attention. \n\n1997\n\nDomingos and Pazzani show that even when **attributes are not independent**, the Bayesian classifier does well. It can be optimal under zero-one loss (misclassification rate). It's optimal under squared error loss only when the independence assumption holds. \n\n1998\n\nKasif et al. 
propose a probabilistic framework for memory-based reasoning (MBR). Such a framework can be used for classification tasks. They note that a **probabilistic graphical model** is really another way of looking at the Naive Bayes classifier.","meta":{"title":"Naive Bayes Classifier","href":"naive-bayes-classifier"}} {"text":"# 5G Service-Based Architecture\n\n## Summary\n\n\nCellular core networks till LTE used specialized telecom protocols running on specialized telecom hardware. With LTE vEPC, the first efforts were made towards virtualization. Service-Based Architecture (SBA) is an evolution of this approach and it's been adopted by 5G System.\n\nIn SBA, a set of Network Functions (NFs) provide services to other authorized NFs. These NFs are nothing more than software implementations running on commercial off-the-shelf hardware, possibly in the cloud. A NF can offer one or more services. NFs are interfaced via well-defined APIs and a client-server model. Traditional telecom signalling messages are replaced with API calls on a logically shared service bus. \n\nSBA utilizes the maturity of web and cloud technologies. Modularity, scalability, reliability, cost-effective operation, easy deployments, and faster innovation are some of the benefits of moving to SBA.\n\n## Discussion\n\n### Why does the 5G Core need a service-based architecture?\n\n5G is not just about new devices, new use cases and higher speeds. The core network needs to be modernized to be able to support demanding performance requirements. eMBB, mMTC and URLLC are all different use cases that can't be satisfied by a monolithic architecture. We're expecting a massive increase in high-bandwidth content, low-latency applications and huge volumes of small data packets from IoT sensors. \n\n5G Core needs to be flexible, agile and scalable. User plane and control plane need to be scaled independently. Traffic handling must be optimized. Network operators must be able to quickly launch new services. This calls for virtualization, a software-driven approach and adopting web protocols and cloud technologies. The network must be composed of loosely coupled network functions that can be managed independently but interfaced efficiently via mature and scalable technologies. \n\nTo reduce both CAPEX and OPEX, there's a need to use off-the-shelf hardware running multi-vendor software exposing open interfaces. Operators must have the flexibility to use private or public clouds. Compute and storage should be distributed. There's a need to support edge computing. \n\n\n### What are the benefits of moving to 5G SBA?\n\nSince NFs are loosely coupled and interfaced with APIs, each NF can be evolved and deployed independently. SBA signifies a move from monolithic to **modular architecture**. New NFs can be rolled out without impacting existing ones. Via NEF, external applications can interwork with 5G Core. Across the 5G ecosystem, this enables **faster innovation**. \n\nSBA brings **scalability and resilience**. Rather than add physical nodes that may take weeks, new instances of NFs can be created/destroyed dynamically in minutes. If an instance or a physical node fails, monitoring systems can detect this and quickly spin up new instances. \n\nSBA's modular design enables **network slicing**. Multiple logical networks can run on a single physical network, thus catering to multiple industry verticals. 
\n\nIn terms of true business value, Oracle has noted that, \n\n> SBAs provide a set of loosely coupled services that empower communications service providers (CSPs) to be more agile and enable rapid service delivery.\n\n\n### What web technologies are enabling 5G SBA?\n\nTelecom protocols common in earlier generations have been replaced in 5G Core with web technologies. SCCP, TCAP, SCTP and Diameter are some examples that have been replaced with TCP/IP for networking, HTTP/2 at the web application layer and JSON as the data serialization format. For security, TLS is used above TCP layer. \n\nNetwork functions in SBA are exposed via well-defined Service-Based Interfaces (SBIs). The bottom-up layering of L2, IP, TCP, TLS, HTTP/2 and application layers is formally called the **SBI protocol stack**. \n\nThe interfaces themselves are defined with an Interface Definition Language (IDL) called **OpenAPI Specification**. \n\nInterfaces are exposed as **RESTful APIs**. 5G SBA has adopted the web's client-server model but a client is called **Service Consumer** and a server is called **Service Provider**. \n\nMany cloud native tools and technologies are enabling 5G SBA: Docker for containerization, Kubernetes for container orchestration, Istio for service mesh, Prometheus for monitoring, Grafana for visualization, and many more. \n\n\n### Could you describe the architecture of 5G SBA?\n\n5G SBA is described by a **reference point representation** that names the points by which each NF connects to other NFs. In practice, the reference points are implemented by corresponding NF **Service-Based Interfaces (SBIs)**. Instead of point-to-point connections, NFs interconnect on a logically shared infrastructure or service bus. For instance, AMF and SMF are connected via the N11 reference point for which the corresponding SBIs are Namf and Nsmf. \n\nSBIs are defined only for the control plane. Thus, the reference point between SMF and UPF is N4. It has no equivalent SBI. Likewise, SBIs are defined for 5G Core functionality. Thus, reference points N1, N2 and N3 that involve the UE or RAN don't have SBIs. \n\n\n### Which are the main functions in 5G SBA?\n\n5G Core has about two dozen **network functions**: Access and Mobility Management Function (AMF), Application Function (AF), Authentication Server Function (AUSF), Binding Support Function (BSF), CHarging Function (CHF), Network Data Analytics Function (NWDAF), Network Exposure Function (NEF), Network Repository Function (NRF), Network Slice Selection Function (NSSF), Network Slice Specific Authentication and Authorization Function (NSSAAF), Policy Control Function (PCF), Session Management Function (SMF), UE radio Capability Management Function (UCMF) and Unstructured Data Storage Function (UDSF), Unified Data Repository (UDR), User Plane Function (UPF) and Unified Data Management (UDM). \n\nAmong the network **entities** are Service Communication Proxy (SCP) and Security Edge Protection Proxy (SEPP). \n\nFor **interworking** with non-3GPP access networks, we have Non-3GPP InterWorking Function (N3IWF), Trusted Non-3GPP Gateway Function (TNGF), Wireline Access Gateway Function (W-AGF) and Trusted WLAN Interworking Function (TWIF). \n\nFor **location services**, NFs include Location Management Function (LMF), Location Retrieval Function (LRF) and Gateway Mobile Location Centre (GMLC). 
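To make the service-based interfaces concrete, here is an illustrative sketch of the kind of JSON NF profile an SMF might register with the NRF. The resource paths and field names are simplified from what TS 29.510 defines and should be treated as assumptions, not the normative encoding:

```python
import json
import uuid

# Illustrative only: a simplified NF profile that an SMF might register with the NRF.
# Real payloads and exact resource paths are specified in 3GPP TS 29.510.
nf_instance_id = str(uuid.uuid4())
nf_profile = {
    "nfInstanceId": nf_instance_id,
    "nfType": "SMF",
    "nfStatus": "REGISTERED",
    "ipv4Addresses": ["192.0.2.10"],          # documentation address, placeholder
    "smfInfo": {"sNssaiSmfInfoList": [{"sNssai": {"sst": 1}}]},
}

# A consumer (e.g. an AMF) would later discover this NF with a query such as:
# GET /nnrf-disc/v1/nf-instances?target-nf-type=SMF&requester-nf-type=AMF
print("PUT /nnrf-nfm/v1/nf-instances/" + nf_instance_id)
print(json.dumps(nf_profile, indent=2))
```

The point of the sketch is the style: plain JSON bodies carried over HTTP/2 RESTful calls, rather than telecom-specific signalling messages.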
\n\n\n### Could you describe some of the network functions in 5G SBA?\n\nWe note a few of Release 15 NFs: \n\n + **AMF**: Registration, access control and mobility management.\n + **SMF**: Creates, updates and removes PDU sessions. Manages session context with UPF. UE IP address allocation and DHCP role.\n + **UPF**: User plane packet forwarding and routing. Anchor point for mobility.\n + **NRF**: Maintains updated records of services provided by other NFs.\n + **NEF**: Securely opens up the network to third-party applications.\n + **AUSF**: Authentication for 3GPP access and untrusted non-3GPP access.\n + **PCF**: Unified policy framework to govern network behaviour. Provides policy rules for control plane.\n + **NSSF**: Selects network slice instances for the UE. Determines AMF set to serve the UE.\n + **UDM**: Generates AKA authentication credentials. Authorizes access based on subscription data.\n + **AF**: Interfaces with 3GPP core network for traffic routing preferences, NEF access, policy framework interactions and IMS interactions.\n + **BSF**: Binds an AF request to the relevant PCF.\n\n### Could you describe an example how services communicate in 5G SBA?\n\nA service producer will register itself with the Network Repository Function (NRF). A service consumer will consult the NRF to discover available NF instances. Thus, the process involves **Service Registration** and **Service Discovery**. \n\nOnce so discovered, the service consumer can **directly consume** authorized APIs exposed by the service producer. These API calls are RESTful: client-server model, stateless calls, unique URIs, and use of HTTP verbs GET/POST/PUT/PATCH/DELETE. \n\n**Indirect communication** is also possible via Service Communication Proxy (SCP). Service registration and discovery still happen with the NRF but this may be delegated to the SCP. Consumers send their requests through the SCP. The SCP itself doesn't expose any network function. \n\nFinally, it's possible to configure consumers with NF profiles of producers. This bypasses service registration and discovery. In fact, the specification identifies this as Model A. Model B is direct communication. Model C and Model D are indirect communications. \n\nAll NF services are detailed in TS 23.502 specification. Services within an NF can call one another but their interactions are not specified in Release 16. \n\n\n### What are the challenges with 5G SBA?\n\n**Security** is one the big challenges with many possible vulnerabilities. JSON is an exact opposite of telecom world's ASN.1. JSON specification is less rigorous and has versioning problems. Implementing and deploying OAuth 2.0 is going to be complex. The web's practice of rapid changes and CI/CD pipelines can make telecom systems less secure. REST APIs have many known vulnerabilities. There are also problems with TLS. \n\nArchitecture, frameworks, libraries and tools need greater **maturity**. Cloud-native architecture was developed for enterprise clouds, not for telecom systems that need low downtime of few minutes per year. Kubernetes lacks networking features such as ECMP, GTP tunnelling, SCTP and LACP. In a distributed environment involving hundreds of nodes, deploying and operating OpenStack and Kubernetes for NFVI and MANO are not trivial. Complexity increases when extensions such as DPDK are included. Traditional network visibility tools must evolve to monitor at the container level. \n\nWith NFs being developed by many vendors, integration and **interoperability** may become an issue. 
LTE EPC and 5G Core need to interoperate as well and shouldn't be managed in silos. Legacy OAM tools for 4G that can't handle 5G Core may lead to inefficient operations. \n\n## Milestones\n\n1990\n\nThis is the decade when **Service-Oriented Architecture (SOA)** starts to gain wider adoption. Applications are not monoliths but composed of isolated and independent services. Services are loosely bound together by a standard communications framework.\n\n2000\n\nIn this decade, **Web Services**, a web-based implementation of SOA, becomes popular over proprietary methods such as CORBA or DCOM. \n\n2010\n\nMany web and cloud technologies develop and become popular through the 2010s: the term **microservices** is coined (2011); **REST** and **JSON** become the de facto standard to consume backend data (2012); **Docker** for containerization gets open sourced (2013); and Google open sources **Kubernetes**, a container orchestration system. These developments soon enable the birth of 5G's service-based architecture.\n\nFeb \n2013\n\nAt Mobile World Congress, **Virtual Evolved Packet Core (vEPC)** solutions are showcased by NEC, Cisco and Intel. In October, NEC claims to be the world's first to offer a vEPC solution on commercial off-the-shelf (COTS) hardware based on Intel architecture. Deployment of vEPC solutions gathers pace and adoption through 2014. By 2015, operators see the value in virtualization of the core network. \n\nJul \n2016\n\nAt the ÜberConf 2016 conference, Ford and Richards deliver presentations on **service-based architecture**. SOA breaks applications by layers but microservices breaks them by domain. They note that moving a monolithic application to microservices is not a trivial exercise. SBA offers a middle ground with dozens of services rather than hundreds of microservices. Services in SBA may even share a common data storage. \n\nJan \n2017\n\n3GPP publishes version 0.0.0 of *TS 23.501: System architecture for the 5G System*. In future, this is the main document that would detail 5G Core's service-based architecture. This evolves to version 1.0.0 by June. \n\nDec \n2017\n\n3GPP publishes **Release 15** \"early drop\". In TS 23.501, this release includes network functions and entities AUSF, AMF, UDSF, NEF, NRF, NSSF, PCF, SMF, UDM, UDR, UPF, AF, 5G-EIR and SEPP. BSF is also included in this release. \n\nJan \n2018\n\nIn an interview, we come to learn that 3GPP Core Networks and Terminals (CT) Working Groups are interfacing with IETF on how to adopt **QUIC**. As a replacement to the TCP/IP stack in the 5G Core, QUIC may be considered. As of Release 16 (July 2020), QUIC isn't part of 5G Core. \n\nJun \n2020\n\nTowards supporting **Location Services (LCS)**, relevant network functions are defined in TS 23.273, version 16.0.0. These include Gateway Mobile Location Centre (GMLC), Location Retrieval Function (LRF) and Location Management Function (LMF). \n\nJul \n2020\n\n3GPP publishes **Release 16** specifications. New capabilities are NEF-based infrequent small data transfer via NAS, which will benefit MTC use cases and IoT applications; indirect communication between network services via Service Communication Proxy (SCP) and implicit discovery; support of trusted non-3GPP access; NF Set and NF Service Set; and more. New NFs include UCMF, NWDAF, CHF, N3IWF, TNGF, W-AGF. \n\nJul \n2020\n\nA study involving service providers who operator 30% of the world's commercial 5G networks shows that **Unified Data Management (UDM)** is the most tested and deployed network function. 
This comes from the realization that data is the new gold. Operators wish to control and monetize it.","meta":{"title":"5G Service-Based Architecture","href":"5g-service-based-architecture"}} {"text":"# Byte Ordering\n\n## Summary\n\n\nA byte (of 8 bits) has a limited range of 256 values. When a value is beyond this range, it has to be stored in multiple bytes. A number such as 753 in hexadecimal format is 0x02F1. It requires at least two bytes of storage. The order in which these two bytes are stored in memory can be different. Byte 0x02 can be stored in lower memory address followed by 0xF1; or vice versa.\n\nPrograms must conform to the byte ordering as supported by the processor. If not, 0x02F1 might be wrongly interpreted as 0xF102, which is the number 61698 in decimal system. Byte ordering is also important when data is transferred across a network or between systems using different ordering. \n\nByte ordering is an attribute of the processor, not the operating system running on it.\n\n## Discussion\n\n### Which are the different byte orderings present in computing systems?\n\nTwo common ordering systems include, \n\n + **Little-Endian**: Low-order byte is stored at a lower address. This is also called *Intel order* since Intel's x86 family of CPUs popularized this ordering. Intel, AMD, PDP-11 and VAX are little-endian systems.\n + **Big-Endian**: High-order byte is stored at a lower address. This is also called *Motorola order* since Motorola's PowerPC architecture used this ordering. Motorola 68K and IBM mainframes are big-endian systems.Some processors support both ordering and are therefore called **Bi-Endian**. PowerPC and Itanium are bi-endian. Bi-Endian processers can switch between the two orderings. Most RISC architectures (SPARC, PowerPC, MIPS) were originally big-endian but are now configurable. While ARM processors are bi-endian, the default is to run them as little-endian systems, as seen in the Raspberry Pi. \n\nDue to the popular adoption of x86-based systems (Intel, AMD, etc.) and ARM, little-endian systems have come to dominate the market. \n\n\n### Could you compare host-byte vs network-byte ordering?\n\nSince computers using different byte ordering, and exchanging data, have to operate correctly on data, the convention is to always send data on the network in big-endian format. We call this **network-byte ordering**. The ordering used on a computer is called **host-byte ordering**. \n\nA host system may be little-endian but when sending data into the network, it must convert data into big-endian format. Likewise, a little-endian machine must first convert network data into little-endian before processing it. \n\nFour common functions to do these conversions are (for 32-bit long and 16-bit short) `htonl`, `ntohl`, `htons` and `ntohs`. \n\nEven if your code doesn't do networking, data may be stored in a different ordering. For example, file data is stored in big-endian while your machine is big-endian. In some cases, CPU instructions may be available for conversion. Intel's 64-bit instruction set has `BSWAP` to swap byte ordering. Since ARMv6, `REV` swaps byte order and `SETEND` to set the endianness. \n\n\n### Aren't conversions redundant for big-endian machines since it already conforms to network-byte ordering?\n\nIt's good programming practice to invoke these conversions since this makes your code *portable*. The same codebase can be compiled for a little-endian machine and it will work. Calling the conversion functions on a big-endian machine has no effect. 
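These conversions can be exercised from Python's standard library as a quick sketch; `socket` exposes the same `htons`/`ntohs` helpers and `struct` lets us pick the byte order explicitly:

```python
import socket
import struct
import sys

value = 0x02F1  # 753 in decimal

# Host-to-network and network-to-host conversions (no-ops on big-endian hosts)
net = socket.htons(value)
host = socket.ntohs(net)
print(hex(net), hex(host))

# struct serializes with an explicit byte order regardless of the host:
print(struct.pack(">H", value).hex())  # big-endian    -> '02f1'
print(struct.pack("<H", value).hex())  # little-endian -> 'f102'
print(sys.byteorder)                   # endianness of this machine
```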
\n\nWhile a little-endian system sending data to another little-endian system need not do any conversion, it's good to convert. This makes the code more portable across systems.\n\n\n### Are there situations where byte ordering doesn't matter?\n\nWhen data is stored and processed as a sequence of single bytes (not shorts or longs), then byte ordering doesn't matter. No conversions are required when receiving or sending such data into the network.\n\nASCII strings are stored as a sequence of single bytes. Byte ordering doesn't matter. Byte ordering for Unicode strings depends on the type of encoding used. If encoding is UTF-8, ordering doesn't matter since encoding is a sequence of single bytes. If encoding is UTF-16, then byte ordering matters. \n\nWhen storing TIFF images, byte ordering matters since pixels are stored as words. GIF and JPEG images don't care about byte ordering since storage is not word oriented. \n\n\n### What's the purpose of Byte Order Mark (BOM)?\n\nAn alternative to using host-to-network and network-to-host conversions is to send the data along with an indication of the ordering that's being used. This ordering is indicated with additional two bytes, known as **Byte Order Mark (BOM)**. \n\nThe BOM could have any agreed upon value but 0xFEFF is common. If a machine reads this as 0xFFFE, it implies that ordering is different from the machine's ordering and conversion is required before processing the data further. \n\nBOM adds overhead. For example, sending two bytes of data incurs an overhead of additional two bytes. Problems can also arise if a program forgets to add the BOM or data payload starts with the BOM by coincidence. \n\nBOM is not required for single-byte UTF-8 encoding. However, some editors such as Notepad on Windows may include BOM (three bytes, EFBBBF) to indicate UTF-8 encoding. \n\n\n### What are the pros and cons of little-endian and big-endian systems?\n\nIn little-endian systems, just reading the lowest byte is enough to know if the number is odd or even. This may be an advantage for low-level processing. Big-endian systems have a similar advantage: lowest byte can tell us if a signed integer is positive or negative. \n\nTypecasting (say from `int16_t` to `int8_t`) is easier in little-endian systems. Because of the simple relationship between address offset and byte number, little-endian can be easier for writing math routines. \n\nDuring low-level debugging, programmers can see bytes stored from low address to high address, in left-to-right ordering. Big-endian systems store in the same left-to-right order and this makes debugging easier. For the same reason, binary to decimal routines are easier. \n\nFor most part, programmers have to deal with both systems. Each system evolved separately and therefore it's hard to complain about not having a single system.\n\n\n### Is the concept of endianness applicable for instructions?\n\nEndianness is applicable for multi-byte numeric values. Instructions are not numeric values and therefore endianness is not relevant. However, an instruction may contain 16-bit integers, addresses or other values. The byte ordering of these parts is important. \n\nFor example, 8051 has a `LCALL` instruction that stores the address of the next instruction on the stack. Address is pushed to stack in little-endian format. However, `LJMP` and `LCALL` instructions contain 16-bit addresses that are in big-endian format. 
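Returning to the BOM discussion above, a minimal sketch of BOM detection using Python's standard `codecs` constants might look like this (the file name is a placeholder, and UTF-32 BOMs are ignored for brevity):

```python
import codecs

def detect_bom(path):
    """Return a best-guess encoding based on the file's Byte Order Mark, if any."""
    with open(path, "rb") as f:
        head = f.read(4)
    if head.startswith(codecs.BOM_UTF8):        # b'\xef\xbb\xbf'
        return "utf-8-sig"
    if head.startswith(codecs.BOM_UTF16_LE):    # b'\xff\xfe'
        return "utf-16-le"
    if head.startswith(codecs.BOM_UTF16_BE):    # b'\xfe\xff'
        return "utf-16-be"
    return "no BOM found"

print(detect_bom("example.txt"))  # 'example.txt' is a placeholder file name
```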
\n\n\n### How can I check the endianness of my system?\n\nOn Linux and Mac, the command `lscpu` can be used to find endianness. \n\nDevelopers can also write a simple program to determine the endianness of the host machine. In a C program, for example, store two bytes in memory. Then use a `short *ptr` to point to the lower address. Dereference this pointer to obtain a short value, which will tell us if the machine is little-endian or big-endian. \n\n\n### Do systems differ in the ordering of bits within a byte?\n\nBits within a byte are commonly numbered as Bit0 for the least significant bit and Bit7 for the most significant bit. Thus, bit numbering in a 32-bit integer will be left-to-right order in big-endian, and right-to-left in little-endian. However, some systems such as the OpenXC vehicle interface use the opposite numbering in which the least significant bit is Bit7. Note that in either case, the content remains the same, only the numbering is different. \n\nDanny Cohen in his classic paper on the subject of endianness notes some examples where bit numbering was inconsistent in early computer systems. For example, M68000 was big-endian but the bit numbering resembled little-endian. \n\nIn digital interfaces, bit ordering matters. In Serial Peripheral Interface (SPI), this can be configured based on what both devices support. In I2C, most significant bit is sent first. In UART, either ordering is fine and must be configured correctly at both ends. If not, sending least significant bit first is usually assumed. \n\n## Milestones\n\n1970\n\nPDP-11 released by DEC is probably the first computer to adopt little-endian ordering. \n\n1980\n\nThe terms big-endian and little-endian are used for the first time in the context of byte ordering. The terms are inspired by Jonathan Swift's novel titled *Gulliver's Travels*. \n\n1983\n\nByte order conversion functions `htonl`, `ntohl`, `htons` and `ntohs` are introduced in BSD 4.2 release. \n\n1992\n\nFirst samples of an ARM processor came out in 1985. It's only in 1992 with the release of ARM6 that the processor becomes bi-endian. Legacy big-endian is supported for both instructions and data. Otherwise, instructions are little-endian. Data is little-endian or big-endian as configured. While byte ordering can be configured in software for some ARM processors, for others such as ARM-Cortex M3, the order is determined by a configuration pin that's sampled at reset.","meta":{"title":"Byte Ordering","href":"byte-ordering"}} {"text":"# RISC-V Architecture\n\n## Summary\n\n\nWhen computers \"compute\", they're in fact executing instructions that are defined by what's known as **Instruction Set Architecture (ISA)**. Each computer hardware will support a particular ISA. RISC-V is a free, open ISA that can be extended or customized for a variety of hardware or application requirements. \n\nApart from defining the instructions themselves, to be a success, any ISA requires broad industry support from chip manufacturers, hardware designers, tool vendors, compiler writers, software engineers, and more. While RISC-V is still new, progress has been made in building a healthy ecosystem and first RISC-V chips have been released.\n\nIt's been said that, \n\n> The real value of RISC-V is enabling the software and tools community to develop around a single common hardware specification.\n\n## Discussion\n\n### What's the motivation for creating RISC-V?\n\nMany of the world's PCs and laptops are based Intel's x86 architecture. 
Many of the world's smartphones and embedded devices are based on ARM architecture. Both are proprietary and any use of these architectures involves licensing costs and may involve royalty fees. Moreover, companies may lack full competency to design their own proprietary ISA. Another problem is long-term support. For example, when DEC died, there was no one to support its proprietary ISAs, Alpha and VAX. \n\nOne researcher claimed that security flaws such as Meltdown and Spectre are down to flaws in Intel's instruction sets. This is less likely when ISAs are open to inspection by a wide engineering community. \n\nRISC-V is an open ISA. It uses the **BSD Open Source License**. This license does not restrict even commercial use of the ISA. Anyone implementing RISC-V is not required to release the source code of their RISC-V cores. The license only requires that the authors of RISC-V be acknowledged. An open ISA will permit software reuse, greater innovation and reduced cost. \n\n\n### Do we need RISC-V when there are other open ISAs?\n\nCreators of RISC-V considered and dismissed other open ISAs. OpenRISC has technical shortcomings and little industry adoption. OpenSPARC is suitable for servers but not for embedded devices and smartphones. OpenSPARC is also licensed under GPLv2, which may not attract support from commercial players. \n\nRISC-V benefits from the mistakes of the past. It has no burden to support legacy instructions. It adopts RISC for its simplicity. By leaving out delayed branches, RISC-V is kept simple and clean. Other open ISAs are not modular. RISC-V is modular so that the right balance of cost and efficiency can be attained for a particular application. In particular, it can be customized for constrained IoT devices, smartphones/tablets or servers. \n\nOne counterclaim states that LatticeMico32 is an open source RISC processor that some are already using, and questions the need for RISC-V. Its proponents also state that the ecosystem is more important than having a perfect ISA. \n\n\n### What are some possible benefits of using RISC-V?\n\nBeyond saving on licensing or royalty fees, RISC-V has many benefits:\n\n + **Universal**: As one of its goals, RISC-V should suit all sizes of processors, all types of implementations (FPGA/ASIC/SoC), various software stacks, and various programming languages.\n + **Modular**: The ISA has a base specification plus optional extensions. This means designers can leave out what they don't need for their application.\n + **Extensible**: Designers can add custom instructions for specialized functions such as machine learning or security. This is particularly important as Moore's Law ends.\n + **Freedom**: Designers have the freedom to work on their own optimized implementations and retain the choice to make their IP open.\n + **Frozen**: By freezing the ISA specifications, we can be certain that today's software and tools will work on RISC-V systems many decades from now.\n + **Adoption & Reuse**: By being open, RISC-V will encourage wider adoption because of compatibility. This enables reuse.\n\n### Could you compare RISC-V with alternative architectures?\n\nIn terms of code size, one study found RISC-V compressed ISA (RV32C) is similar to Thumb-2, while RV64C has better code density than its alternatives. \n\nIn terms of performance (speed and power), there's no reason to believe that RISC-V processors will fare worse than ARM or x86 processors. It will depend on the implementation: microarchitectural design, circuit design and processing technology. 
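To illustrate the modular naming convention mentioned above (a base integer ISA plus extension letters, as in RV32IMAC or RV64GC), here's a toy sketch; the extension table is a small, simplified subset and not an exhaustive or authoritative list:

```python
# Toy illustration of RISC-V's modular ISA naming, e.g. "RV32IMAC" or "RV64GC".
EXTENSIONS = {
    "I": "base integer instructions",
    "M": "integer multiply/divide",
    "A": "atomic instructions",
    "F": "single-precision floating point",
    "D": "double-precision floating point",
    "C": "compressed 16-bit instructions",
}

def describe_isa(isa: str) -> str:
    assert isa.upper().startswith("RV")
    width = "".join(ch for ch in isa[2:] if ch.isdigit())
    letters = isa[2 + len(width):].upper().replace("G", "IMAFD")  # G = IMAFD shorthand
    return width + "-bit: " + ", ".join(EXTENSIONS.get(ch, "?") for ch in letters)

print(describe_isa("RV32IMAC"))   # the ISA of SiFive's FE310, mentioned below
print(describe_isa("RV64GC"))     # a common Linux-capable profile
```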
\n\n\n### Could you name some processors based on RISC-V?\n\nBecause RISC-V is open, anyone can design and develop their own processors without licensing fees. However, design and engineering costs can run into millions of dollars plus a delayed time to market. It therefore makes sense to use IP cores or processors developed by others. \n\nSome offer RISC-V **IP cores** that chip makers can license. Among them are Andes Technology, Codasip, Bluespec, Cortus, and SiFive. There are others who offer **soft cores** that can run in FPGAs: Microsemi, Rumble Development, and VectorBlox. \n\nSiFive has two families of licensable cores: E Series and U Series. They also offer these in silicon plus their development boards. Freedom E310 (FE310) is the first member of the Freedom Everywhere family. \n\nIndia's Shakti is a RISC-V chip developed at IIT Madras.\n\nlowRISC is a fully open-sourced, Linux-capable, RISC-V-based SoC currently being developed. Their Rocket core currently runs on an FPGA. \n\nRISC-V Foundation maintains a list of RISC-V cores and SoCs.\n\n\n### What tools are available for developers who wish to work on RISC-V?\n\nBasic tools include compiler, assembler, disassembler, profiler, debugger, and linker. Beyond these are IDEs, SDKs, simulators, and many more.\n\nTools are being developed and maintained by the team at UC Berkeley plus the wider community outside. RISC-V supports GNU/GCC, GNU/GDB and LLVM. Instruction Set Simulators (ISS) are available from Antmicro and QEMU. Full IDEs are available from Imperas, Microsemi and SiFive. SiFive's Eclipse-based IDE is called Freedom Studio. Tools are available to design your own RISC-V subsystem for FPGAs. \n\nRISC-V Foundation maintains the state of the current RISC-V software ecosystem.\n\n\n### Which operating systems have been ported to run on RISC-V?\n\nDifferent flavours of Linux have been ported to RISC-V, including Yocto. In January 2018, kernel version 4.6 was being used. Researchers at the University of Cambridge have ported FreeBSD. As on August 2018, 80% of Debian software library has been compiled for RISC-V. Fedora/RISC-V project aims to bring the Fedora experience on RV64GC architecture. \n\nAmong the RTOS, Zephyr is planning a port as of August 2018. A port of FreeRTOS is also available.\n\n## Milestones\n\n2010\n\nResearchers at the University of California, Berkeley conceive RISC-V (pronounced \"risk five\") as an ISA for research and education at Berkeley. Previously, they used SPARC ISA and a modified MIPS ISA but want a unified ISA for future projects. This is the fifth generation, following in the steps of four earlier generations in the 1980s. \n\nMay \n2011\n\nVersion 1.0 of the RISC-V base user-level ISA is published as volume 1. This version is not frozen. Volume 2 is for supervisor-level ISA. About this time, Raven-1 testchip is taped out using ST 28nm FDSOI process node. \n\nMay \n2014\n\nVersion 2.0 of the user-level ISA is published. This is a final frozen version. \n\n2015\n\nIn January, the 1st RISC-V Workshop is organized in Monterey, CA. To drive the future development and adoption of RISC-V, **RISC-V Foundation** is established. With more than 100 member organizations, this is an open collaborative community including both hardware and software innovators. \n\nDec \n2016\n\nSiFive releases Freedom E310 32-bit microcontroller at 320 MHz. This is the first commercial RISC-V chip. The CPU core is SiFive E31 and its ISA is RV32IMAC. 
\n\nOct \n2017\n\nSiFive releases U54-MC Coreplex, the first RISC-V-based chip that supports Linux, Unix, and FreeBSD. It has 5 CPU cores: 4xU54 + 1xE51. This enables RISC-V processors to compete against ARM cores. In February 2018, SiFive releases a development board named HiFive Unleashed for this chip. \n\nJan \n2018\n\nIt's reported that commercial players including Western Digital and Nvdia plan to use RISC-V ISA for their next generation of products. \n\nJul \n2018\n\nIndia's Shakti RISC-V processor at 400 MHz boots up Linux. The processor is based on Intel's 22nm FinFET process node.","meta":{"title":"RISC-V Architecture","href":"risc-v-architecture"}} {"text":"# 5G Quality of Service\n\n## Summary\n\n\n5G Quality of Service (QoS) model is based on **QoS Flows**. Each QoS flow has a unique identifier called QoS Flow Identifier (QFI). There are two types of flows: Guaranteed Bit Rate (GBR) QoS Flows and Non-GBR QoS Flows. The QoS Flow is the finest granularity of QoS differentiation in the PDU Session. User Plane (UP) traffic with the same QFI receive the same forwarding treatment. \n\nAt the Non-Access Stratum (NAS), packet filters in UE and 5GC map UL and DL packets respectively to QoS flows. At the Access Stratum (AS), rules in UE and Access Network (AN) map QoS flows to Data Radio Bearers (DRBs). \n\nEvery QoS flow has a QoS profile that includes QoS parameters and QoS characteristics. Applicable parameters depend on GBR or non-GBR flow type. QoS characteristics are standardized or dynamically configured.\n\n## Discussion\n\n### Could you explain 5G QoS with an example?\n\nConsider multiple PDU sessions, each of which could be generating packets of different QoS requirements. For example, packets from the Internet may be due to user browsing a website, streaming a video or downloading a large file from an FTP server. Delay and jitter are important for video but less important for FTP download. \n\nBetween the User Equipment (UE) and the Data Network (DN), PDU sessions and Service Data Flows (SDFs) are set up. Each application gets its own SDF. In our example, we may say that Internet, Netflix and IMS are PDU sessions. The Internet PDU session has four SDFs and the IMS PDU session has two SDFs. \n\nMultiple IP flows can be mapped to the same QoS flow. QoS flow 2 is an example that carries both WhatsApp video and Skype video. On the radio interface, QoS flows are mapped to data radio bearers (DRBs) that are configured to deliver that QoS. Multiple QoS flows can be mapped to a single DRB. DRB2 is an example and it carries QoS flows 2 and 3. \n\n\n### How does the QoS model differ between LTE and 5G networks?\n\nIn 4G/LTE, QoS is applied at the level of Evolved Packet Service (EPS) bearer. There's a one-to-one mapping, which really means that for an EPS bearer there's a corresponding EPS Radio Access Bearer (RAB), an S1 bearer and a Radio Bearer (RB). \n\n5G provides a more flexible QoS model with **QoS Flow** being the finest granularity at which QoS is applied. The abstraction of QoS flow allows us to decouple the roles of 5G Core and NG-RAN. SMF in the 5G Core configures how packets ought to be mapped to QoS flows. \n\nAN independently decides how to map QoS flows to radio bearers. This is a flexible design since gNB can choose to map multiple QoS flows to a single RB if such an RB can be configured to fulfil the requirements of those QoS flows. The figure shows an example in which QoS flow 1 goes on DRB1; QoS flows 2 and 3 go on DRB2. 
\n\nTo summarize, 4G QoS is at the EPS bearer level and 5G QoS is at the QoS flow level. \n\n\n### Could you describe end-to-end QoS for a PDU session?\n\nApplication packets are classified or mapped to suitable QoS flows by the UPF in DL and UE in UL. UPF uses Packet Detection Rules (PDRs). UE uses QoS rules. Because multiple PDRs and rules can exist, these are evaluated in precedence order, from lowest to highest values. If no match is found, packet is discarded. \n\nOn the N3 interface between UPF and AN, packets are marked with QFI in an encapsulation header. Due to this marking, AN knows which packets belong to which QoS flow. SDAP sublayer in the AN maps the flows to suitable DRBs. At the receiving end, UE's SDAP sublayer does the reverse mapping from DRBs to QoS flows. \n\nA similar flow happens for UL packets. UE marks the packet with QFI in SDAP header. AN does QFI marking in an encapsulation header on N3. UPF verifies if a received QFI is aligned with configured QoS rules or Reflective QoS. \n\n\n### What's the basis for classifying packets into QoS flows?\n\nIn the downlink, UPF uses Packet Detection Rules (PDRs). In the uplink, UE uses QoS rules. Both these make use of Packet Filter Set that has one or more packet filters. \n\nAn **IP Packet Filter Set** is based on a combination of fields: Source/destination IP address or IPv6 prefix; Source/destination port number (could be a port range); Protocol ID of the protocol above IP/Next header type; Type of Service (TOS) (IPv4) or Traffic class (IPv6) and Mask; Flow Label (IPv6); Security parameter index; and Packet Filter direction. \n\nAn **Ethernet Packet Filter Set** is based on a combination of fields: Source/destination MAC address (may be a range); Ethertype as defined in IEEE 802.3; Customer-VLAN tag (C-TAG) and/or Service-VLAN tag (S-TAG) VID fields as defined in IEEE Std 802.1Q; Customer-VLAN tag (C-TAG) and/or Service-VLAN tag (S-TAG) PCP/DEI fields as defined in IEEE Std 802.1Q; IP Packet Filter Set, in the case that Ethertype indicates IPv4/IPv6 payload; and Packet Filter direction. \n\n\n### What are the defaults used in the 5G QoS model?\n\nThe specification defines a **default QoS rule**. Every PDU session is required to have a QoS flow associated with such a default. This flow remains active for the lifetime of the PDU session. This is a Non-GBR QoS Flow to facilitate EPS interworking. \n\nFor IP and Ethernet sessions, the default QoS rule is the only rule with a Packet Filter Set that allows all UL packets, and it has the highest precedence. Note that QoS rules and PDRs are evaluated in increasing order of precedence values. \n\nReflective QoS is not applied and RQA is not sent for a QoS flow that's using default QoS flow. \n\n**QoS parameters** too have defaults. On a per-session basis, SMF obtains from the UDM subscribed Session-AMBR, and subscribed defaults for Non-GBR 5QI, ARP and optionally 5QI Priority Level. Based on interaction with PCF, SMF may change subscribed values. \n\n\n### What's the role of SMF within the QoS model?\n\nQoS flows are controlled by SMF. They're preconfigured or created/updated via PDU Session Establishment or Modification procedures. \n\nSMF interacts with UDM and PCF to obtain subscribed and authorized QoS parameters for each QoS flow. PCF responds with Policy Charging and Control (PCC) Rules that includes Packet Filter Set. SMF extracts QoS Flow Binding Parameters (5QI, ARP, Priority, MDBV, Notification Control) and creates a new QoS flow if one doesn't exist for this combination. 
The binding of PCC rules to QoS Flows is an essential role of SMF. \n\nSMF associates a QoS flow with QoS profile, QoS rules and PDRs. PDRs are derived from the PCC rule and inherit the precedence value. PDRs are part of SDF Template passed to UPF. SMF sends QoS profile to AN via AMF over N2, QoS rules to the UE via AMF over N1, and PDRs to the UPF over N4. \n\nSMF assigns a QFI for each QoS flow and an identifier to each QoS rule. Both identifiers are unique within the PDU session. \n\n\n### What is meant by Reflective QoS?\n\nFor UL packet classification, SMF provides QoS rules to the UE. Or the UE implicitly derives the rules from downlink packets. This latter case is called **Reflective QoS**. \n\nBoth reflective and non-reflective QoS can coexist within the same PDU session. Reflective QoS applies to IP PDU session and Ethernet PDU session. UE indicates to SMF if it supports Reflective QoS during PDU Session Establishment. UE may change its support and indicate this via PDU Session Modification. UE-derived QoS rule would include UL packet filter, QFI and precedence value. \n\nReflective QoS is controlled per packet. SMF signals the use of **Reflective QoS Indication (RQI)** marking to UPF. SMF signals **Reflective QoS Attribute (RQA)** to the AN via N2 interface. Subsequently, UPF includes RQI for every DL packet of an SDF that's using Reflective QoS. AN indicates the RQI to the UE. \n\nWhen UE receives a DL packet with RQI, it creates or updates the QoS rule for UL traffic. It also starts the **Reflective QoS Timer**. There's one timer per UE-derived rule. The timer is restarted when a matching DL packet is received. Rule is discarded when timer expires. \n\n\n### Which are the 5G QoS parameters?\n\nWe note the following: \n\n + **5G QoS Identifier (5QI)**: An identifier for QoS characteristics that influence scheduling weights, admission thresholds, queue management thresholds, link layer protocol configuration, etc.\n + **Allocation and Retention Priority (ARP)**: Information about priority level, pre-emption capability (can pre-empt resources assigned to other QoS flows) and the pre-emption vulnerability (can be pre-empted by other QoS flows).\n + **Reflective QoS Attribute (RQA)**: Optional parameter. Certain traffic on this flow may use reflective QoS.\n + **Guaranteed Flow Bit Rate (GFBR)**: Measured over the Averaging Time Window. Recommended to be the lowest bitrate at which the service will survive.\n + **Maximum Flow Bit Rate (MFBR)**: Limits bitrate to the highest expected by this QoS flow.\n + **Aggregate Maximum Bit Rate (AMBR)**: *Session-AMBR* is per PDU session across all its QoS flows. *UE-AMBR* is for each UE.\n + **QoS Notification Control (QNC)**: Configures NG-RAN to notify SMF if GFBR can't be met. Useful if application can adapt to changing conditions. If alternative QoS profiles are configured, NG-RAN indicates if one of these matches currently fulfilled performance metrics.\n + **Maximum Packet Loss Rate**: In Release 16, this is limited to voice media.\n\n### Could you describe some standardized 5QI values and their QoS characteristics?\n\nSpecification TS 23.501 defines some 5QI values that translate to QoS characteristics commonly used. This leads to optimized signalling, that is, specifying the 5QI value is sufficient, though default values can be modified. For non-standard 5QI values, QoS characteristics need to be signalled as part of the QoS profile. \n\nSMF signals QoS profiles to NG-RAN via AMF. 
QoS characteristics in these profiles are in fact guidelines to the NG-RAN to configure suitable RBs to carry QoS flows. \n\nThere are about two dozen standard 5QI values, grouped into three resource types: Guaranteed Bit Rate (GBR), Non-GBR, Delay-critical GBR. **QoS characteristics** include resource type, priority level (lower number implies higher priority), Packet Delay Budget (PDB), Packet Error Rate (PER), averaging window (for GBR and delay-critical GBR only), and Maximum Data Burst Volume (MDBV) (for delay-critical GBR only). \n\nConversational voice (5QI=1) has 100ms PDB and 0.01 PER. Real-time gaming (5QI=3) has 50ms PDB and a lower priority. IMS signalling (5QI=5) has stringent PER of 0.000001. Discrete automation (5QI=82) has stringent PDB of 10ms. \n\n\n### Does meeting 5G QoS requirements imply high QoE?\n\nQoS is based on objective metrics whereas Quality of Experience (QoE) is more subjective and based on actual user experience. Thus, it's quite possible to optimize the network to yield better KPIs for QoS without users actually noticing any difference. \n\nDelay, jitter and packet loss are some metrics used to determine QoS. QoE has different concerns: service accessibility, wait times, are users leaving because of poor features, response time, seamless interactivity, etc. QoE depends on user expectations, which can change with evolving technology and applications. \n\nA 5% packet loss could have little impact for a cloud service but even a 0.5% packet loss could result in huge throughput reduction for another application. It's therefore clear that QoE should consider user and application perspectives, and not just network performance metrics. For example, on the same network with high bitrates an AR application might experience low QoE but a file download has high QoE even when latency or transport continuity are relatively poor. Even within AR, we can discern a variety of applications, each with its own QoE characteristics. \n\n\n### What are the relevant specifications that detail 5G QoS?\n\nAn overview of 5G QoS model and architecture is given in **TS 38.300**, section 12. A more detailed and complete description of QoS is in **TS 23.501**, section 5.7. These two documents are good starting points for beginners.\n\nFrom the perspective of 5G System (5GS) procedures, QoS details are covered in **TS 23.502**. QoS flow binding and parameter mapping are covered in **TS 29.513**. QoS Information Elements (IEs) are detailed in **TS 38.413**. \n\nThe mapping of QoS flows to Radio Bearers (RBs) is done at Service Data Adaptation Protocol (SDAP) sublayer, which is covered in **TS 37.324**. \n\n## Milestones\n\nDec \n2017\n\n3GPP approves the first specifications for 5G, called \"early drop\" of **Release 15**. The focus of this release is Non-Standalone (NSA) mode of operation using Dual Connectivity (DC) between LTE and 5G NR. At this time, SDAP specification TS 37.324 is at v1.2.0 and not yet approved for Release 15. \n\nJun \n2018\n\n**SDAP specification TS 37.324**, v15.0.0 is approved for Release 15. This date coincides with \"main drop\" of Release 15 specifications. This release enables Standalone (SA) mode of operation based on 5G Core. SDAP sublayer in gNB and UE maps QoS flows to DRBs.\n\nOct \n2018\n\nModern application traffic, particularly concerning mobile devices, often pass through enterprise networks, the Internet and cellular mobile networks. 
Whereas 3GPP uses QoS Class Identifiers (QCIs) and 5G QoS Identifiers (5QIs), IETF uses Differentiated Services Code Point (DSCP) markings. Henry and Szigeti therefore publish an IETF Internet-Draft titled *Diffserv to QCI Mapping*. Version 04 of this draft is published in April 2020 but it expires in October 2020, without any further continuity. \n\nMar \n2019\n\nSpecification **TS 23.501** is updated for Release 16. Among QoS-specific changes are Deterministic QoS for Time-Sensitive Communication (TSC); QoS support for Multi-Access PDU Session; and New 5QIs for Enhanced Framework for Uplink Streaming. \n\nApr \n2020\n\nAs an example from an implementation perspective, we note Cisco's publication of document titled *Ultra Cloud Core 5G Session Management Function, Release 2020.02 - Configuration and Administration Guide*. This guide details the 5G QoS model and how it's managed by SMF. It explains default bearer QoS handling for 4G, 5G and Wi-Fi sessions.","meta":{"title":"5G Quality of Service","href":"5g-quality-of-service"}} {"text":"# Python for Scientific Computing\n\n## Summary\n\n\nFortran has been the language of choice for many decades for scientific computing because of speed. In the 1980s, when a programmer's time was becoming more valuable than compute time, there was a need for languages that were easier to learn and use. For the purpose of research, code-compile-execute workflow gave way to interact-explore-visualize workflow. In this context were born MATLAB, IDL, Mathematica and Maple. \n\nModern scientific computing is not just about numerical computing. It needs to be versatile: deal with large datasets, offer richer data structures than just numerical arrays, make network calls, interface with databases, interwork with web apps, handle data in various formats, enable team collaboration, enable easy documentation. \n\nPython offers all of the above. It's been said that, \n\n> Python provides a balance of clarity and flexibility without sacrificing performance.\n\n## Discussion\n\n### What makes Python a suitable language for scientific computing?\n\nPython is not just suited for manipulating numbers. It offers a \"computational ecosystem\" that can fulfil the needs of a modern scientist. Python does well in system integration, in gluing together many different parts contributed by different folks. Python's duck typing is one of the reasons why this is possible. In terms of data types, `memoryview`, `PyCapsule` and NumPy's `array` aid scientific work. \n\nPython is easy to learn and use. It offers a natural syntax. This enables researchers to express and explore their ideas more directly rather than fight with low-level language syntax. \n\nWith Python, performance bottlenecks can be optimized at a low-level without sacrificing high-level usability. A researcher needs to explore and visualize ideas in an incremental manner. Python allows for this via IPython/Jupyter notebooks and matplotlib. A variety of Python tools can work together and share data within the same runtime environment without having to exchange data only via the filesystem. \n\n\n### I'm used to MATLAB. Why should I use Python?\n\nMATLAB is proprietary, expensive and hard to extend. Python is open, community-driven, portable, powerful and extensible. Python is also better with strings, namespaces, classes and GUIs. While MATLAB, along with Simulink, has vast libraries, Python is catching up as many scientific projects are adopting Python. 
MATLAB is said to be poor at scalability, complex data structures, memory handling, system tasks and database programming. \n\nHowever, there are some criticisms of Python (December 2013). Syntax is not consistent since different packages are written by different folks with different needs. There are two ordinary differential equation (ODE) solvers in scipy with incompatible syntax. Duplicated functionality across packages may result in confusion. MATLAB does better with data regression, boundary value problems and partial differential equations (PDE). \n\nIn 2014, Konrad Hinsen commented that Python may not be suitable for small-scale projects where code is written once and rarely maintained thereafter. This becomes a problem when Python scientific libraries are upgraded by deprecating older classes/functions/methods. \n\n\n### Should I worry about performance when using Python for scientific research?\n\nThe short answer is no. There are many initiatives that aim to make Python faster. PyPy and Pyston do just-in-time (JIT) compilation for better performance. Nuitka aims to replace the Python runtime to automatically transpile code to languages that run fast natively. Numba speeds up math-heavy Python code to native machine instructions with just a few annotations on your Python code. \n\nWhile pure Python code is definitely slower when compared to Fortran or C, scientific packages in Python often make use of low-level implementations that are themselves written in Fortran, C, etc. For example, NumPy operations often call BLAS or LAPACK functions that are written in Fortran. \n\nf2py is enabling Python to directly use Fortran implementations. SWIG and Cython allow us to make calls to optimized C/C++ implementations from within Python. For example, Cython is being used by scikit-learn. Intel Math Kernel Library (MKL) and PyCUDA are also bringing Python on par with Fortran on specific hardware platforms. \n\n\n### What are the essential packages for scientific computing in Python?\n\nHere are some packages that could be considered essential: \n\n + **numpy**: Multi-dimensional arrays and operations on them. Executes faster than Python.\n + **scipy**: Linear algebra, interpolation, integration, FFT...\n + **matplotlib**: Plotting and data visualization with an API similar to MATLAB. Eases transition from MATLAB to Python.\n + **spyder**: An IDE that includes IPython (for interactive computing).\n + **pandas**: Numerical data structures, data manipulation and analysis.\n + **jupyter**: Web-based sharing of code, graphs, annotations and results.Most Python scientific packages are based on numpy and scipy. If visualization is involved, matplotlib may be used. For higher-level data structures, pandas may be used. 
Spyder, IPython and Jupyter are simply useful tools for the scientist or engineer.\n\n\n### Could you name some useful scientific projects/packages in Python?\n\nHere are some that can be applied to any domain:\n\n + **Image processing**: Pillow, OpenCV, scikit-learn , Mahotas\n + **Visualization**: matplotlib, bokeh, plotly, mayavi, seaborn , basemap, NetworkX\n + **Markov chains**: Markov, MarkovNetwork, PyMarkov, pyEMMA, hmmus\n + **Stochastic process**: stochastic, StochPy, sdeint\n + **Solving PDEs**: FEniCS, SfePy\n + **Convex Optimization**: CVXPY\n + **Units and conversions**: quantities, pint\n + **Multi-precision math**: mpmath, GmPy\n + **Spatial analysis**: cartopy, georasters, PySAL, geopy\n + **Data access**: pydap, cubes, Blaze, bottleneck, pytables\n + **Machine learning**: scikit-learn, Mlpy, TensorFlow, Theano, Caffe, Keras\n + **Natural Language Processing**: NLTK, TextBlob, spaCy, gensim, pycorenlp\n + **Statistics**: statistics, statsmodels, patsy\n + **Tomography**: TomoPy\n + **Symbolic computing**: sympy\n + **Simulations**: SimPy, BlockCanvas, railgun\n\n### Could you name some domain-specific scientific projects/packages in Python?\n\nSince there are dozens of packages for all types of scientific work, we can only give a sample: \n\n + **Solar data analysis**: SunPy\n + **Astronomy**: astropy\n + **Chemistry**: thermo, chemlab\n + **Biology**: biopython\n + **Neurosciences**: PsychoPy, NIPY\n + **Life sciences**: DEAP\n + **Network analysis**: NetworkX\n + **Quantum dynamics**: QuTiP\n + **Protein analysis**: ProDy\n + **Neuron simulations**: NEURON\n + **Seismology**: ObsPy\n + **Phylogenetic computing**: DendroPy\n + **Software defined radio**: GNU Radio\n\n### What's the recommended Python distribution for scientific computing?\n\nInstallation of Python for scientific work used to be a pain earlier but with modern distributions, this is no longer an issue. \n\nRather than install Python's standard distribution and then install scientific packages one by one, the recommended approach is to use an alternative distribution customized for scientific computing: Enthought Canopy, Anaconda, Python(x,y) or WinPython. Enthought Canopy is commercial but the rest are free. Enthought Canopy claims to include 450+ tested scientific and analytic packages. \n\nAnaconda distribution uses *conda* for package management. The use of virtual environments is recommended so that different projects can use their own specific environments. Powered by Anaconda, Intel offers its own distribution that's optimized for performance. \n\nSageMath is another distribution that offers a web-based interface and uses Jupyter notebooks. It aims to be the free open source alternative to Magma, Maple, Mathematica and Matlab.\n\n\n### As a beginner in scientific Python, what should be my learning path?\n\nFrom tools and environment perspectives, get familiar with using IPython, Jupyter Notebook and optionally Spyder.\n\nAfter learning the basics of Python, the next step is to learn numpy since it's the base for many scientific packages. With numpy, you can work with matrices and do vectorized operations without having to write explicit loops. You should learn about operations such as reshaping, transposing, filling, copying, concatenating, flattening, broadcasting, filtering and sorting. \n\nYou could then learn scipy to do optimization, linear algebra, integration, and so on. For visualization, matplotlib can be a starting point. 
For dealing with higher-level data structures and manipulation, learn pandas.\n\nIf you wish get into data science, scikit-learn and Theano can be starting points. For statistical modelling, you can learn statsmodels. \n\n\n### What useful developer resources are available for scientific computing in Python?\n\nOne good place to start learning is the SciPy Lecture Notes.\n\nSciPy Conference is an annual event for Python's scientific community. It also happens in Europe as EuroSciPy and in India as SciPy India. \n\nEarthPy is a collection of IPython notebooks for learning how to apply Python to Earth sciences.\n\nThe Hacker Within, Software Carpentry and Data Carpentry are some communities that bring together research and scientific folks. Although these are not exclusive to Python, Python programmers will find them useful.\n\n## Milestones\n\n1995\n\n*Numeric* is released to enable numerical computations. This is the ancestor of today's *NumPy*. \n\n2001\n\nMany scientific modules are brought together and released as a single package named *SciPy*. The same year, IPython is born. \n\n2002\n\n\"Python for Scientific Computing Workshop\" is organized at Caltech. In 2004, this is renamed as *SciPy Conference* and is now an annual event. In 2008, EuroSciPy is held for the first time. In 2009, 1st SciPy India is held. \n\n2005\n\nNumPy is released based on an older library named *Numeric*. It also combines features of another library named *Numarray*. NumPy is initially named *SciPy Core* but renamed to *NumPy* in January 2006. \n\nJul \n2017\n\n*Anaconda Accelerate* is split into *Intel Distribution for Python* and open source Numba's sub-projects pyculib, pyculib\\_sorting and data\\_profiler.","meta":{"title":"Python for Scientific Computing","href":"python-for-scientific-computing"}} {"text":"# Systems Programming\n\n## Summary\n\n\nSystems are built from hardware and software components. Systems programming is about implementing these components, their interfaces and the overall architecture. Individual components perform their prescribed functions and at the same time work together to form a stable and efficient system. \n\nSystems programming is distinct from application programming. System programs provide services to other software. Via abstractions, they expose API to simplify the development of applications. They're often optimized to low-level machine architecture. Unlike application software, most system software are not directly used by end users. \n\nAssembly and C language have been historically used for systems programming. Go, Rust, Swift and WebAssembly are newer languages suited for systems programming.\n\n## Discussion\n\n### How is systems programming different from application programming?\n\nOperating systems, device drivers, BIOS and other firmware, compilers, debuggers, web servers, database management systems, communication protocols and networking utilities are examples of system software. As part of operating systems, memory management, scheduling, event handling and many more essential functions are done by system software. Examples of application software are Microsoft Office, web browsers, games, and graphics software. \n\nApplication software generally don't directly access hardware or manage low-level resources. They do so via calls to system software. We may thus view system software as aiding application software with low-level access and management. 
Application developers can therefore focus on the business logic of their applications.\n\nWhile application developers may not build system software, they can become better developers by knowing more about the design and implementation of system software. Specifically, by knowing and using system APIs correctly they can avoid implementing similar functions in their applications. \n\nFor example, a C application on UNIX/Linux calls the system API `getpid()`. The OS creates the process, allocates memory and assigns a process identifier. \n\n\n### What are the main concerns of systems programming?\n\nAn operating system such as Linux is a collection of system programs. These deal with files and directories, manage processes for executable programs, enable I/O for those programs and allocate/release memory as needed. The OS manages users, groups and associated permissions. The OS prevents normal user programs from executing privileged operations. The shell is a special system program that allows users to interact with the system. If processes need to communicate or respond to external events, signals (aka software interrupts) facilitate this. \n\nThere are systems programs that transform other programs into machine-level instructions for execution: compilers, assemblers, macro processors, loaders and linkers. Their aim is to generate instructions optimized for speed or memory. Use of registers, loops, data structures, and algorithms are considered in these system programs. \n\nProgramming languages, editors and debuggers are also system programs. These are tools to write good and reliable system programs. They have to be easy to learn and productive for a developer while also being efficient and safe from a system perspective. \n\n\n### Is a system programmer same as a system administrator?\n\nA system programmer is one who write system software and this task is called system programming or systems programming. But there's an older definition that's been used in the context of mainframes since the 1960s.\n\nOn a mainframe, a system programmer installs, upgrades, configures and maintains the operating system. She does capacity planning and evaluates new products. She's also skilled in optimizing the system for performance, troubleshooting problems and analyzing memory dumps. \n\nOn the other hand, a system administrator handles day-to-day operations such as adding/removing users, configuring access, installing software, and monitoring performance. She deals with applications whereas a system programmer is more well-versed with the mainframe hardware. \n\nIn small IT organizations, both roles may be performed by a single individual. \n\n\n### What languages are suited to systems programming?\n\nWikipedia lists more than a dozen languages: C/C++, Ada, D, Nim, Go, Rust, Swift, and more. Notably, the following popular languages are absent: JavaScript, Java, C#, Python, Visual Basic, PHP, Perl, and Ruby. Implemented in Haskell, COGENT is a functional programming language that's also suited for systems programming. \n\nSystem programming languages are required to provide direct access to hardware including memory and I/O access. Performance, explicit memory management and fine-grained control at bit level are essential capabilities. Where they also offer high-level programming constructs, system programming languages (C, Rust, Swift, etc.) are also used for application programming. \n\nSince such languages give access to low-level hardware functions, there's a risk of introducing bugs. 
Rust was created to balance aspects of both safety and control. C sacrifices safety for control while Java does the opposite. \n\nScripting languages such as Python, JavaScript and Lua are not for systems programming. However, the introduction of static typing (for safety) and Just-in-Time (JIT) compilation (for speed) has seen these languages being used for systems programming. \n\n\n### What's systems programming in the context of the web?\n\nOn the web, applications adopt the client-server architecture. Server-side logic may be implemented as many microservices distributed on a cloud platform. Applications make use of APIs served by different endpoints. In this context, we have systems programs that help create distributed applications for the cloud. System programs must be designed to address network topology changes, security concerns, high latency, and poor network connections. \n\nRob Pike commented that developers thought that Go was for systems programming. In fact, it was for writing any server-side code. Later he saw anything running on the cloud as systems software. Ousterhout commented in 1998 that on the web Java was being used for systems programming. \n\nBunardzic commented that \"the only way to program a system is to program a network\". Systems should be designed for concurrency, fault encapsulation/detection/recovery, upgrade without downtime, observability, and asynchronous communication. Developers who use system services and APIs must be free to choose their own technology stack. One service shouldn't depend on others. \n\n\n### What are the best practices in systems programming?\n\nA systems programmer needs to know the system APIs well. In addition, she should know how the OS kernel, programs and users interact with one another. \n\nDesigning a good system is not a one-time task. The system should be designed for iteration and incremental improvements. System designers must be open to feedback. Historically, many Linux system programmers blamed application developers for program crashes instead of seeing these as opportunities to improve the Linux kernel or system tools. \n\nAll assumptions must be made explicit. Systems software should get the abstractions right, minimize if not avoid leaky abstractions. In other words, its users shouldn't need to know implementation details. For instance, ORM frameworks that interface between applications and databases are often leaky due to a conceptual mismatch between objects and relations. \n\nBefore implementation, it's beneficial to do system modelling, analysis and simulations. Unified Modelling Language (UML) can help. Small and simple systems are amenable to formal analysis. Anything else, we need to apply statistical analysis. More components there are, more complex becomes the system. \n\n## Milestones\n\n1960\n\nTill early and even mid-1960s, the first concern in designing computer systems is the hardware itself. Programming them becomes a secondary concern. Programming techniques are chaotic. Often they're not as intended by the hardware designers. Systems programming as a discipline is only starting to emerge. \n\n1966\n\nShaw considers assemblers, interpreters, compilers, and monitors as **translators**; that is, they translate code in one form to another. Referring to the figure, translator T translates A to B. 
He defines the following,\n\n> Systems Programming is the science of designing and constructing translators.\n\nOct \n1968\n\nAt the NATO Conference on Software Engineering, the merits of **high-level languages** are discussed: cost, maintainability, correctness. System programmers however object to high-level languages. They don't like anything to get in between themselves and the machine. Reconciling these two concerns is the main challenge in design a suitable systems programming language. \n\n1969\n\n**Unix** operating system is invented at Bell Laboratories, first in assembly on PDP-7 (1969) and subsequently migrated to C on PDP-11 (1971). A decade later one of its inventors, Dennis Ritchie, comments that only the assembler is still written in assembler. Most other system programs are written in C language. \n\n1971\n\nResearchers at the Carnegie-Mellon University invent a new systems programming language that they name **BLISS**. They describe it as a general purpose high-level language for writing large software systems for specific machines. They enumerate the requirements of any good systems programming language: space/time economy, access to hardware features, data structures, control structures, understandability, debugging, etc. The figure shows an example of how BLISS enables bit-level access. Such low-level access is typical of systems software. \n\n1984\n\nWeicker proposes **Dhrystone as a benchmark** for systems programming. Currently we have the Whetstone benchmark that's tuned to measure floating-point arithmetic. Systems programs often use enumerations, records and pointer data types. Compared to numerical programs, systems programs have fewer loops, simpler compute statements, more conditional branching and more procedure calls. The Dhrystone benchmark accounts for these. \n\n1986\n\nAn IBM research report details the challenges and pitfalls in programming for a multi-CPU and multi-threaded environment. They refer to the System/370 architecture. Shared memory and data structures have to be handled carefully. Therefore, **parallelism** is a new concern for systems programming. However, multi-programming was considered in the design of Unix back in the early 1970s. \n\n1990\n\nThis decade sees the wide adoption of **scripting languages** such as Perl, Tcl, Ruby, PHP, Python and JavaScript. They're seen as languages that \"glue together\" various components whereas systems programming languages are used to create those components. In the 2010s, the boundaries blur as some scripting languages are used to build large systems software. \n\n1992\n\nWolfe proposes changes to how systems programming must be taught to students. The curriculum would include small challenges in various aspects of systems programming and not just focus on assemblers and compilers. Currently, students perceive systems programming as boring, the techniques themselves stale, and the courses disconnected from systems programming jobs out there. \n\n2005\n\nBrewer et al. note that 35 years after the invention of C language, we still don't have a suitable alternative for systems programming. Shapiro voices a similar complaint in 2006. Advances in programming languages have not motivated systems programmers to adopt them. \n\n2009\n\nMozilla begins sponsoring the development of **Rust**. Rust is announced to the world in 2010. The first stable release of the language happens in May 2015. Today Rust is well-recognized as a systems programming language. 
\n\n2019\n\nCrichton at Stanford University proposes that students of programming languages must be taught systems programming. Courses must not be about theory and formal methods alone. They should include practical programming while also providing effective mental models of computation and formal reasoning. He includes WebAssembly and Rust in the proposed curriculum.","meta":{"title":"Systems Programming","href":"systems-programming"}} {"text":"# MongoDB Query Language\n\n## Summary\n\n\nWhereas relational databases are queried using Structured Query Language (SQL), MongoDB can be queried using MongoDB Query Language (MQL). It's the interface by which clients can interact with the MongoDB server. Developers and database administrators can write MQL commands interactively on the MongoDB Shell. For client applications, drivers are available in popular programming languages to execute MQL commands.\n\nMQL supports CRUD operations. Results can be sorted, grouped, filtered, and counted via aggregation pipelines. Special operations such as text search and geospatial queries are possible. Multi-document transactions are supported. \n\nThis article covers MongoDB Shell commands except aggregations. Commands in Node.js are similar. For commands in other languages, refer to the documentation of language-specific drivers. This article doesn't cover administrative commands that relate to users, backups, replica sets, sharded clusters, etc.\n\n## Discussion\n\n### What are the basic commands to get started with MQL?\n\nStart by launching the MongoDB Shell `mongosh`. Type `help()` to see a list of **shell commands**. Here are some useful commands: `show databases`, `show collections` (of current database), `use ` (to switch to a database), `version()` (for shell version), `quit` and `exit`. \n\nThe `Database` class has many methods. View them with the command `db.help()`. Here are some useful methods: `getMongo()` (get current connection), `getCollectionNames()`, `createCollection()`, `dropDatabase()`, and `stats()`. \n\nThe `Collection` class has many methods. View them with the command `db.coll.help()`. Here are some useful methods: `insert()`, `find()`, `update()`, `deleteOne()`, `aggregate()`, `count()`, `distinct()`, `remove()`, `createIndex()`, `dropIndex()`, and `stats()`. \n\nThe command syntax is same as JavaScript. Commands can be saved into a JavaScript file and executed using the `load()` command. \n\n\n### Could you introduce the commands for CRUD operations in MongoDB?\n\nDatabase operations are conveniently referred to as *CRUD (Create, Read, Update, Delete)*. MongoDB supports CRUD with the following methods of `Collection` class: \n\n + **Create**: Add documents to a collection. Methods include `insertOne()` and `insertMany()`. If the collection doesn't exist, it will be created. Unique `_id` field is automatically added to each document if not specified in the method calls.\n + **Read**: Retrieve documents from a collection. Main method is `find()`, called with query criteria (how to match documents) and projection (what fields to retrieve).\n + **Update**: Modify existing documents of a collection. Methods include `updateOne()`, `updateMany()` and `replaceOne()`. Method calls include query criteria and what to update.\n + **Delete**: Remove documents from a collection. Methods include `deleteOne()` (first matching document is deleted) and `deleteMany()`, called with query criteria.It's also possible to insert new documents with the option `upsert: true`. 
Some methods that do this are `update()`, `updateOne()`, `updateMany()`, `findAndModify()`, `findOneAndUpdate()`, `findOneAndReplace()`, and `bulkWrite()`. \n\n\n### How do I find documents in a MongoDB collection?\n\nThe main method is `find(query, projection)`. Query parameter specifies the filter. If omitted, all documents returned. Projection parameter specifies what fields to retrieve. If omitted, all fields are retrieved. The method returns a cursor to the documents. Method `findOne(query, projection)` returns the first matching document, not a cursor. \n\nHere are some sample commands on a collection named `people`: \n\n + `db.people.find()`: Retrieve all fields of all documents.\n + `db.people.find({status: \"A\", {age: {$gt: 25}}})`: Retrieve all fields of documents of status 'A' and age exceeding 25. Multiple query fields are combined with `$and` operator by default.\n + `db.people.find({}, {user_id: 1, status: 1, _id: 0})`: Retrieve only two fields of all documents.\n + `db.people.find({$or: [{status: \"A\"}, {age: {$gt: 25, $lte: 50}}] })`: A complex filter specifying status 'A' or age within a range.\n + `db.people.find({status: {$ne: \"A\"}).sort({age: 1}).skip(10).limit(5)`: Filter, sort, skip and limit the results.Comparison operators include `$eq`, `$gt`, `$gte`, `$in`, `$lt`, `$lt`, `$lte`, `$ne`, and `$nin`. Logical operators include `$and`, `$not`, `$nor` and `$or`. \n\nSince MongoDB is schema-less, documents may include only some fields or fields of differing types. Operators `$exists` and `$type` check if field exists and if the type matches. \n\n\n### How do I write queries when arrays are involved?\n\nConsider an inventory collection with two array fields `tags` and `dim_cm`. Here are some example queries: \n\n + `db.inventory.find({ tags: [\"red\", \"blank\"] })`: Documents with both 'red' and 'blank' tags. Order of tags matters.\n + `db.inventory.find({ tags: { $all: [\"red\", \"blank\"] } })`: As above, but order of tags doesn't matter.\n + `db.inventory.find({ tags: \"red\" })`: Documents tagged 'red'.\n + `db.inventory.find({ dim_cm: { $gt: 15, $lt: 20 } })`: Documents where both conditions are satisfied but not necessarily by the same array item.\n + `db.inventory.find({ dim_cm: { $elemMatch: { $gt: 15, $lt: 20 } } })`: Documents where both conditions are satisfied by the same array item.\n + `db.inventory.find({ \"dim_cm.1\": { $gt: 25 } })`: Second item of array satisfies the condition.\n + `db.inventory.find({ tags: { $size: 3 } })`: Documents with exactly three tags.Operators `$in` and `$nin` check presence or absence in an array. For example, `db.inventory.find({dim_cm: {$in: [15, 18, 20]}})` finds documents with specific dimensions.\n\n\n### How do I find documents based on values in nested documents?\n\nIt's possible to find documents based on the content of inner documents, either with a full or a partial match. Assume documents with the field `size` that itself is a document with fields `h`, `w` and `uom`. Here are some examples: \n\n + `db.inventory.find({ size: { h: 14, w: 21, uom: \"cm\" } })`: Exact document match. Field order matters.\n + `db.inventory.find({ \"size.h\": { $lt: 15 } })`: Nested field query.Assume documents with the field `instock` that's an array of documents. Here are some examples: \n\n + `db.inventory.find({ \"instock\": { warehouse: \"A\", qty: 5 } })`: At least one item in array matches the query. Field order matters.\n + `db.inventory.find({ 'instock.qty': { $lte: 20 } })`: Nested field query. 
At least one item in array matches the query.\n + `db.inventory.find({ 'instock.0.qty': { $lte: 20 } })`: Check the first item in array.\n + `db.inventory.find({ \"instock.qty\": 5, \"instock.warehouse\": \"A\" })`: Two nested fields but they need not match the same item within the array.\n + `db.inventory.find({ \"instock\": { $elemMatch: { qty: 5, warehouse: \"A\" } } })`: Two nested fields and both should match at least one item within the array.\n\n### Could you share more details about MongoDB projection?\n\nThe projection parameter is used in `find()` and `findOne()` methods. It controls what fields are sent by MongoDB to the application. It contains field-value pairs. Values can be boolean (1 or true, 0 or false), array, meta expression or aggregation expression. \n\nThe `_id` field is always included in returned documents unless explicitly suppressed with `_id: 0`. \n\nHere are some examples of projections: \n\n + `db.inventory.find({status: \"A\"}, {item: 1, _id: 0})`: Retrieve only `item`. Suppress `_id`.\n + `db.inventory.find({}, {status: 0, instock: 0})`: Retrieve all fields except `status` and `instock`.\n + `db.inventory.find({}, {\"size.uom\": 1})`: Given `size` is a nested document, retrieve `_id` and nested field `size.uom`. An alternative form is `db.inventory.find({}, {size: {uom: 1}})`.\n + `db.inventory.find({}, {_id: 0, \"instock.qty\": 1})`: Given `instock` is an array of documents, retrieve only the `instock.qty` field.\n + `db.inventory.find({}, {instock: {$slice: -2}})`: Retrieve only the last two items of `instock` and all other fields of inventory.\n + `db.inventory.find({}, {instock: {$slice: [3, 5]}})`: Retrieve five items after skipping first three items of `instock`. Include all other fields of inventory since no explicit inclusion is specified.\n + `db.inventory.find({}, {total: {$sum: \"$instock.qty\"}})`: Using aggregation expression in projection, get total stock quantity in each document.\n\n### What are cursor methods in MongoDB?\n\nCursor methods modify the execution of the underlying query. Since the collection method `find()` returns a cursor, we can list all cursor methods on the shell by calling `db.coll.find().help()`.\n\nHere are a few examples, some showing how to chain method calls: \n\n + `db.products.find().count()`: Return number of documents rather than the documents themselves.\n + `db.products.find().limit(2)`: Limit results to first two documents.\n + `db.products.find().min({price: 2}).max({price: 5}).hint({price: 1})`: Query only documents within the specified index range. Index `{price: 1}` must exist.\n + `db.products.find().pretty()`: Show results in a user-friendly format.\n + `db.products.find().sort({price: 1, qty: -1})`: Sort by price and then reverse sort by quantity.\n + `db.people.find().skip(3).limit(5)`: Skip first three documents and show only the next five.\n\n### What are the update operators in MongoDB?\n\nThere are three groups of update operators:\n\n + **Field Update**: Operators include `$currentDate` (Date or Timestamp), `$inc` (increment), `$min` (updates only if it's less than current field value), `$max` (updates only if it exceeds current field value), `$mul`, `$rename`, `$set`, `$setOnInsert` (only if update results in an insert), and `$unset`.\n + **Array Update**: Operators include `$`, `$[]`, `$[]`, `$addToSet`, `$pop` (-1 to remove first item, 1 to remove last item), `$pull` (removes items based on a query condition), `$push` (append to array), and `$pullAll` (removes items based on a list of values). 
Operator modifiers include `$each`, `$position`, `$slice` and `$sort`.\n + **Bitwise Update**: The only operator is `$bit`. It supports AND, OR and XOR updates of integers.\n\n### As a SQL developer, how do I get started with MongoDB Query Language?\n\nSQL's `(tables, rows, columns)` map to MongoDB's `(collections, documents, fields)`. SQL's `GROUP BY` aggregations are implemented via aggregation pipelines in MongoDB. Many other features such as primary keys and transactions are equivalent though not the same. \n\nIn SQL, we can drop or add columns to a table. Since MongoDB is schema-less, these operations are not important. However, it's possible to add or drop fields using the method `updateMany()`. \n\nSQL developers can refer to these useful guides:\n\n + SQL to MongoDB Mapping Chart\n + SQL to Aggregation Mapping Chart\n + Logan's Gist on SQL to MongoDB Mapping Chart\n\n\n## Milestones\n\nFeb \n2009\n\n**MongoDB 1.0** is released. By August, this version becomes generally available for production environments. \n\nMar \n2010\n\nMongoDB 1.4 is released. Among the improvements are `$all` with regex, `$not`, `$` operator for updating arrays, `$addToSet`, `$unset`, `$pull` with object matching, and `$set` with array indexes. \n\nAug \n2012\n\nMongoDB 2.2 is released. Projection operator `$elemMatch` is introduced. It returns only the first matching element in an array. For bulk inserts, it's now possible to pass an array of documents to `insert()` in the shell. \n\nMar \n2015\n\nMongoDB 3.0 is released. This includes a new **query introspection system** towards query planning and execution. Command `explain` and methods `cursor.method()` and `db.collection.explain()` are relevant here. \n\nDec \n2015\n\nMongoDB 3.2 is released. Query operators to test bit values are introduced: `$bitsAllSet`, `$bitsAllClear`, `$bitsAnySet`, `$bitsAnyClear`. Many CRUD methods on Collection class are added to correspond to drivers' APIs: `bulkWrite()`, `deleteMany()`, `findOneAndUpdate()`, `insertOne()`, and many more. **Query modifiers** are deprecated in the shell. Instead, **cursor methods** should be used. \n\nNov \n2016\n\nMongoDB 3.4 is released. Type `decimal` for 128-bit decimal is introduced. To support language-specific rules for string comparison, **collation** is introduced. \n\nNov \n2017\n\nMongoDB 3.6 is released. With `arrayFilters` parameter, we can selectively update elements of an array field. Positional operators `$[]` and `$[]` allow multi-element array updates. For `$push` operator, `$position` modifier can be negative to indicate position from end of an array. Deprecated `$pushAll` operator is removed. New query operators include `$jsonSchema` and `$expr`. MongoDB shell now supports sessions. \n\nAug \n2019\n\nMongoDB 4.2 is released. We can now use the **aggregation pipeline for updates**. **Wildcard indexes** support queries against fields whose names are unknown or arbitrary. Some commands (`group`, `eval`, `copydb`, etc.) are removed. \n\nJul \n2020\n\nMongoDB 4.4 is released. Projections in `find()` and `findAndModify()` are made consistent with `$project` aggregation stage. Method `sort()` uses the same sorting algorithm as `$sort` aggregation stage. Compound hashed indexes and hidden indexes are introduced. \n\nJul \n2021\n\nMongoDB 5.0 is released. 
It provides better support for field names that are `$` prefixed or include `.` characters.","meta":{"title":"MongoDB Query Language","href":"mongodb-query-language"}} {"text":"# Sorting Algorithms\n\n## Summary\n\n\nSorting is defined as the rearrangement of given data in a particular order. The order can be numerical (ascending or descending), alphabetical (case sensitive or insensitive), based on string length, or based on a character encoding such as ASCII or Unicode. \n\nSorting is an extra step carried out to increase the efficiency of the operation being performed. For example, sorting a data structure like an array in advance may speed up a search operation. It's also necessary for some algorithms to work at all: binary search, for instance, works only on sorted data. \n\nThere's a variety of sorting algorithms: bubble sort, selection sort, merge sort, quick sort, counting sort, radix sort, and more. Each has its own strengths and weaknesses, based on which developers choose one for the task at hand.\n\n## Discussion\n\n### Why do we need sorting?\n\nIn school, the children of a class are sorted in increasing order of height while standing in a queue for the morning assembly. Similarly, the attendance register has the names of children sorted in alphabetical order. Telephone directories have contact numbers sorted alphabetically by owner's name. Sorted data is preferred over unsorted data because it's easier to handle and search. \n\nA small amount of data, such as a class of 60 students, can be sorted manually, but large data, such as employee records of a big company, cannot. Several sorting algorithms therefore came into existence to sort such data in seconds.\n\nData can be sorted in various ways: ascending order, descending order or alphabetical order (in the case of names). It can also be sorted using multiple keys, for example, sorting employee data first by department and then alphabetically by name. \n\nSorting improves the efficiency of operations such as searching and merging. Some algorithms, such as Prim's, Kruskal's and Dijkstra's algorithms, rely on ordering the data as part of their operation; Kruskal's algorithm, for instance, sorts edges by weight before processing them.\n\n\n### What are various properties of a sorting algorithm?\n\nA sorting algorithm has different properties: \n\n + **In-place or not**: An in-place sorting algorithm uses no extra space to arrange the given elements; it sorts them by swapping within the input. Examples: insertion sort, selection sort.\n + **Stability**: A stable sorting algorithm keeps equal elements in the same relative order as in the input (see the sketch after this list). Examples: insertion sort, merge sort, bubble sort.\n + **Comparison-based or not**: A comparison-based algorithm sorts by comparing elements to each other, as in bubble, insertion and selection sorts. Non-comparison-based algorithms such as counting sort and bucket sort rely on assumptions about the data and don't use a comparison operator.\n + **Internal or external**: Internal sorting is carried out entirely in the computer's memory, whereas external sorting stores data in files when there are too many elements to fit in memory. Sorting algorithms can be implemented both internally and externally depending on memory requirements.
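The stability property can be demonstrated with C++'s standard library, which provides both `std::sort` (no stability guarantee) and `std::stable_sort` (guaranteed stable). The record type and values below are made up for the illustration.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// A record with a sort key (age) and another attribute (name)
// that lets us observe the relative order of equal keys.
struct Person {
    std::string name;
    int age;
};

int main() {
    std::vector<Person> people = {
        {"Asha", 30}, {"Bala", 25}, {"Chitra", 30}, {"Dev", 25}};

    // Stable sort by age: among equal ages the input order is preserved,
    // so Bala stays before Dev and Asha stays before Chitra.
    std::stable_sort(people.begin(), people.end(),
                     [](const Person& a, const Person& b) {
                         return a.age < b.age;
                     });

    for (const auto& p : people)
        std::cout << p.name << " (" << p.age << ")\n";

    // std::sort would order the ages identically but makes no promise
    // about the relative order of Bala/Dev or Asha/Chitra.
    return 0;
}
```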
\n\n\n### Explain some basic sorting algorithms.\n\nThe basic sorting algorithms include bubble, insertion and selection sort, which make the sorting concept easier to understand. These algorithms are suitable for small amounts of data but are not efficient with large data.\n\n**Bubble sort** swaps two consecutive out-of-order elements and repeats this until no more swaps occur. Each element thus bubbles (moves) up to the place where it belongs in the specified order. \n\n**Insertion sort** is an in-place, stable sorting technique that maintains two sublists, sorted and unsorted, and progresses by taking each element from the unsorted part and inserting it at the correct position in the sorted part. \n\n**Selection sort** repeatedly selects the maximum element from the unsorted part and swaps it with the element at the end, thereby building up the sorted portion in ascending order. \n\n\n### What are some efficient sorting algorithms?\n\nEfficient sorting algorithms include merge sort, quick sort, shell sort and heap sort, which work well with large amounts of data.\n\n**Merge sort** is based on the divide and conquer technique: it partitions the given data into groups until each group contains a single element, and then repeatedly sorts and merges the groups to produce a sorted list. \n\n**Quick sort** chooses a pivot element to partition the data. Elements are rearranged so that those before the pivot are smaller than the pivot and those after it are larger; both sides are then sorted recursively in the same way. \n\n**Shell sort** is an improved insertion sort. Whereas insertion sort exchanges consecutive elements, shell sort can exchange elements that are far apart, which increases the efficiency of the algorithm. \n\n\n### What are some distribution-based sorting algorithms?\n\nDistribution-based sorting algorithms divide the data into intermediate structures, sort these individually and then combine them to get the sorted output. They include counting sort, bucket sort and radix sort. \n\n**Counting sort** is a non-comparison-based algorithm that groups all elements into bins in the range 0 to k and then counts the number of elements less than a particular element to place it at the correct position. \n\n**Radix sort** uses any stable sort to sort the input according to the least significant digit, followed by the next digit, iteratively until we reach the most significant digit. It assumes the input consists of d-digit integers. \n\n**Bucket sort** assumes that the input is uniformly distributed and falls in the range [0, K). It uses a hash function to assign each element to a bucket. All the buckets are then sorted using any sorting algorithm and merged to get the sorted data. \n\n\n### What is tree sort and heap sort?\n\n**Tree sort** utilizes the property of binary search trees to sort the elements. A binary search tree has the property that the value of the left child node is less than that of the parent node and the value of the right child node is greater than that of the parent node. So, this sorting algorithm first builds a binary search tree from the elements and then outputs the in-order traversal of the tree, which gives the elements in sorted order. \n\n**Heap sort** first inserts the given elements into a heap and then deletes the root element to get the elements in sorted order. A max-heap has the property that the value of each node is greater than or equal to that of its child nodes, and vice versa for a min-heap. For example, a min-heap is built from the elements.
The root is deleted repeatedly while maintaining the heap property; since the root is the smallest element, this yields ascending order. \n\n\n### What is the time and space complexity of different sorting algorithms?\n\nTime complexity is the amount of time an algorithm takes to execute as a function of input size. Similarly, space complexity is the amount of space an algorithm needs as a function of input size. \n\nBy convention, best-case time complexity is denoted \\(Ω(n)\\), average-case time complexity \\(Θ(n)\\) and worst-case time complexity \\(O(n)\\). Common growth rates in increasing order are \\(1 < log(n) < n < n \\cdot log(n) < n^2\\).\n\nBubble sort and insertion sort offer \\(Ω(n)\\) best-case time complexity and \\(O(n^2)\\) worst-case time complexity. The worst-case space complexity is \\(O(1)\\) as they sort in place.\n\nSelection sort offers \\(Ω(n^2)\\) best-case and \\(O(n^2)\\) worst-case time complexity, since its work is independent of the prior order of elements, and \\(O(1)\\) space complexity. \n\nQuick sort and merge sort offer \\(Ω(n \\cdot log(n))\\) best-case time complexity. The worst-case time complexity of merge sort is \\(O(n \\cdot log(n))\\) but that of quick sort is \\(O(n^2)\\) because it depends on the position of the pivot element. The worst-case space complexity of merge sort is \\(O(n)\\) to store an extra array, and that of quick sort is \\(O(log(n))\\) because of recursive calls. \n\n\n### When to use which sorting algorithm?\n\nChoosing a sorting algorithm entirely depends on the problem at hand.\n\nBubble sort is used to teach the basics of sorting and can detect already sorted data. It has been used, for example, to sort TV channels according to the user's viewing time. \n\nSelection sort is quite easy to implement and offers in-place sorting but does not perform well when the size of the data increases. \n\nInsertion sort sorts the data as soon as it receives it and is adaptive, that is, it works well with data that's already nearly sorted. \n\nMerge sort is a stable algorithm that can be applied when an efficient algorithm is needed. It's used where data is accessed in a sequential manner, such as tape drives, and also in the field of e-commerce. \n\nQuick sort works well with arrays and medium-sized datasets. It's considered one of the fastest sorting algorithms when the pivot element is chosen randomly, and has been employed by sports channels to sort scores. \n\nHeap sort is used when there's a restriction on space, as it offers \\(O(1)\\) worst-case space complexity, that is, it uses no extra space. It's employed in security and embedded system problems and can be used to sort linked lists.\n\n## Milestones\n\n1890\n\nSorting machines originate in the late 1800s when Herman Hollerith, tasked with determining the population count, invents a machine that sorts one column at a time. \n\n1945\n\nJohn von Neumann develops merge sort in 1945 and bottom-up merge sort in 1948. \n\n1946\n\nJohn Mauchly introduces insertion sort. \n\n1954\n\nRadix sort is developed at MIT by Harold H. Seward and is considered one of the first sorting algorithms. \n\n1956\n\nBubble sort, initially called **'Sorting by Exchange'**, is developed in 1956. Iverson coins the term bubble sort in 1962. \n\n1960\n\nTony Hoare invents quick sort in Moscow while working on a machine translation project, where he wants to sort Russian sentences.","meta":{"title":"Sorting Algorithms","href":"sorting-algorithms"}} {"text":"# C++ Inheritance\n\n## Summary\n\n\nInheritance is one of the most important principles of object-oriented programming.
In C++, inheritance takes place between classes wherein one class acquires or inherits properties of another class. The newly defined class is known as **derived class** and the class from which it inherits is called the **base class**. Class inheritance reflects heredity in the natural world, where characteristics are transferred from parents to their children.\n\nIn C++, there are many types of inheritance namely, single, multiple, multilevel, hierarchical, and hybrid. C++ also supports different modes of inheritance. These are public, private, and protected.\n\nInheritance promotes *code reuse*. Reusing code not only makes code easy to understand but also reduces redundancy. Code maintenance is easier. Code reliability is improved.\n\n## Discussion\n\n### Could you explain C++ inheritance with an example?\n\nConsider two different classes, `Dog` and `Cat`. Both classes might have similar attributes such as breed and colour. In C++, such attributes are called **data members**. The classes might also have similar behaviours or actions such as eat, sleep and speak. In C++, such actions are implemented as **member functions**. Instead of writing similar code in two different classes we can create an `Animal` class that brings together common attributes and behaviours. Classes `Dog` and `Cat` can inherit from `Animal` class. They can add their own functionalities.\n\nIn this example, `Animal` is the base class, `Dog` and `Cat` are derived classes. We may say that (Animal, Dog) represents a parent-child relationship. Dogs are inheriting characteristics of animals. We may also say that `Animal` is a generalization of `Dog` and `Cat`; and `Dog` and `Cat` are specializations of `Animal`. \n\nDerived classes can specialize by adding extra data members and member functions. They can also **override** the general behaviour defined in the base class. \n\nInheritance in C++ follows a *bottom up* approach. Derived classes acquire properties from base classes but the reverse is not true.\n\n\n### What are different types of inheritance in C++?\n\nC++ supports many types of inheritance based on the number of base or derived classes and the class hierarchy: \n\n + **Single Inheritance**: A derived class inherits from only one base class. Eg. `class B` inherits from `class A`.\n + **Multiple Inheritance**: A derived class inherits from more than one base class. Eg. `class C` inherits from both `class A` and `class B`.\n + **Multilevel Inheritance**: The class hierarchy is deeper that one level. Eg. `class C` inherits from `class B` that itself is inherited from `class A`.\n + **Multipath Inheritance**: A derived class inherits from another derived class and also directly from the base class of the latter. Eg. `class C` inherits from `class B` and `class A` when `class B` itself is inherited from `class A`.While the above types are about a single class, there are others that are about the structure of the class hierarchy. These are more design patterns than types: \n\n + **Hierarchical Inheritance**: A base class is specialized in many derived classes, which in turn are specialized into more derived classes.\n + **Hybrid Inheritance**: This combines both multiple and hierarchical inheritance.\n\n### How does C++ deal with the diamond problem or ambiguity?\n\nThe **diamond ambiguity** can occur with multiple inheritance. It happens when the base classes themselves are derived from another common base class. The final derived class ends up with multiple copies of the distant base class. 
\n\nConsider D inheriting from classes B1 and B2, B1 inheriting from A, and B2 inheriting from A. Thus, D has multiple copies of A. The problem occurs when a call is made to a member of A from an instance of D. C++ compiler can't determine which copy of A to use. Compilation will fail.\n\nTo solve this problem, C++ uses the concept of **virtual inheritance**. When inheriting A, B1 and B2 will specify the `virtual` keyword. The compiler sees this and creates a single instance or copy of A within D. No `virtual` keyword is needed when defining D. \n\nHaving multiple copies of a base class is not really an error. In fact, it's possible to define a class that uses both virtual and non-virtual inheritance. Ultimately, the compiler should be able to determine member access *unambiguously*. \n\n\n### What's the concept of name hiding in C++?\n\n**Name hiding** happens when a derived class has a member with the same name as a member of the base class. The derived class definition hides the base class definition. Thus, `Animal::age` is hidden by `Dog::age`. \n\nThe concept is also related to but different from **overriding**. For example, `Dog::speak()` if defined overrides `Animal::speak()`. In addition, `Dog::speak()` can explicitly call the base class implementation with the statement `Animal::speak()`. \n\nC++ also allows functions to be **overloaded**, which is about having multiple member functions of the same name that differ in their parameter types. So, we could have `Animal::speak()` and the `Dog` class that defines only `Dog::speak(const string&)`. Unfortunately, in this case, `Dog` class doesn't have access to `Animal::speak()`. In this case, we have name hiding, not overriding. \n\nIt's possible for `Dog` class to have access to `Animal::speak()` by simply including the line `using Animal::speak;` in its class definition. With this `using` directive, all the overloaded `speak()` functions of the base class become accessible in the derived class. \n\n\n### What is object slicing in C++?\n\nWhen a derived class object is typecast into one of its base classes, we're doing what's called **object slicing**. Conceptually, it's similar to typecasting a decimal number to an integer type, whereby the number loses it's decimal portion. With object slicing, the object loses members specialized in the derived class and retains only those parts of the base class to which it's been typecast. \n\nConsider a base class A and a derived class B. Consider object b of type B. Object slicing happens with the assignments `A a = b` and `A& a_ref = b`. Assignment operator is not virtual in C++. Hence, in these assignments the assignment operator of A is called and not that of B. \n\nThe figure shows another example. Base class defines data member `a` and virtual functions `bar1()` and `bar2()`. Derived class overrides `bar1()`, and adds `bar3()`, `bar4()` and `b`. An A-type slice of B can access `a`, derived class function `bar1()` and base class function `bar2()`. \n\n\n### What are compile-time and runtime bindings in the context of inheritance?\n\nIf the C++ compiler can determine what function to call at compile time, we call it **compile-time binding**, early binding or static dispatch. However, where virtual functions are defined in a base class and overridden by derived classes, the compiler can't know which function to call. This information is available only at runtime, giving rise to the terms **runtime binding**, late binding or dynamic dispatch. 
\n\nSuppose classes `Square` and `Triangle` are derived from `Shape`. Consider member functions `virtual void Shape::area() {...}`, `void Square::area() {...}` and `void Triangle::area() {...}`. Also defined is the function `paintArea(Shape& shape) { int area = shape.area(); ... }`. Exactly which area function is called inside `paintArea()`? This is known only at runtime. \n\nUnder the hood, runtime binding relies on *vpointers* and *vtables*. Every class with virtual functions has a *vtable* that stores pointers to virtual function definitions. When a virtual function is overridden, the pointer would point to the derived class implementation. Every class with a *vtable* also has a *vpointer* that points to the correct *vtable*. \n\n\n### What are the rules of inheritance involving C++ abstract classes?\n\nAn abstract class in C++ has at least one pure virtual function, which essentially defines the interface but doesn't supply an implementation. Abstract classes are meant to be used as base classes for other class definitions. Abstract classes themselves can't be instantiated. Their very purpose is inheritance that conforms to an interface.\n\nAn abstract class can't be used as a parameter type, a function return type or the result of an explicit conversion. Pointers and references to an abstract class are allowed. \n\nIf a derived class inherits from an abstract class without supplying implementations, the derived class is also an abstract class. Thus, \"abstraction\" can be inherited. On the other hand, it's possible to inherit from a non-abstract base class, add pure virtual functions or override a non-pure virtual function with a pure virtual function, and thereby define an abstract derived class. \n\nWhen a pure virtual function is called from a constructor, the behaviour is undefined. \n\n\n### What are the different visibility modes in inheritance in C++?\n\nC++ supports three access specifiers: public, protected and private. These access specifiers are used on data members and member functions. If not explicitly mentioned, private access is the default. \n\nLikewise, a derived class can use an access specifier on each of its base classes. Available access specifiers are public, protected, and private. Where multiple inheritance is used, each base class can have a different access specifier. If not explicitly mentioned, private inheritance is the default. \n\nRegardless of the access specifier on the base class, base class private members will remain private; that is, derived class can't access them. For public and protected members of base class, we summarize how access specifier on the base class affects access to members of the derived class: \n\n + **Public**: Public and protected members of base class become public and protected members of derived class respectively.\n + **Protected**: Public and protected members of base class become protected members of derived class.\n + **Private**: Public and protected members of base class become private members of derived class.\n\n### What's the influence of access specifiers on virtual functions?\n\nThe figure shows three examples involving the member function `A::name()`. In (a), the function is non-virtual. Hence, when it's called via the base class pointer, `A::name()` is called. This is really compile-time binding.\n\nIn (b), the function is made virtual. Now when the same call is made, runtime binding happens and `B::name()` is called. 
Although `B::name()` is private, it's called via the base class pointer and `A::name()` itself is public. \n\nIn (c), we make the inheritance protected. This makes `A::name()` protected in the derived class. This means that it can't be called from outside the class, such as from `main()`. Hence we get a compile-time error just as if `B::name()` had been declared protected or private. \n\n\n### How is inheritance of struct different from that of class?\n\nData structure `struct` comes from C programming language and is applicable in C++ as well. C++ extends `struct` so that it can include member functions. Like `class`, a `struct` definition can also be inherited. \n\nThe main difference is that when access specifiers aren't specified, `struct` members are public by default whereas `class` members are private by default. Likewise, when access specifiers are omitted in inheritance, `struct` inheritance is public by default whereas `class` inheritance is private by default. \n\nFor completeness, we note that there's also `union` that comes from C. It's members are public by default. Unions can't be used as base classes. \n\n\n### What are the workings of C++ class constructors and destructors?\n\nConstructors and destructors were traditionally not inherited. Since C++11, constructors could be inherited. \n\nBase class constructors are called before derived class constructors. In the case of multiple inheritance, base classes constructors are called in the depth-first left-to-right order of inheritance in the derived class. Destructors execute in reverse order.\n\nConstructors have be defined as public or protected so that derived class constructors can call base class constructors. \n\nConstructors need not be virtual: constructors are always called by name. Destructors have to be virtual. In other words, it's not sufficient to destroy only the base class object. We need to destroy the original derived class object. \n\n\n### What are the main criticisms of C++ inheritance?\n\nInheritance in C++ is complex. Even a single inheritance has six variants: private/protected/public and virtual/non-virtual. With multiple inheritance, this complexity increases. The utility of a private virtual function is rather limited. \n\nThe designer of a base class decides if a member function must be declared virtual. Derived class can't control this or prevent calls to base class non-virtual functions. Virtual inheritance is also a problem since the decision is made early. It prevents defining a derived class that wants two copies of the distant base class. \n\nC++ implementation of polymorphism via virtual functions can impact performance. Virtual member functions are not directly called. Instead, they've to be looked up via *vpointer* and *vtable*. \n\n## Milestones\n\nApr \n1979\n\nAt Bell Laboratories, inspired by Simula, Bjarne Stroustrup conceives the idea of *C with Classes*. The new language would combine the low-level features of C and high-level code organization of Simula. Even in these early days, the language includes classes, class hierarchies and **simple inheritance**. Also included are **private and public inheritance** of a base class. \n\n1983\n\nTo support runtime polymorphism, **virtual functions** are added to the language. Only with the introduction of virtual functions, the language claims to support **object-oriented programming**. This is also when the first implementation becomes available to users. 
Subsequently, the language is renamed to *C++* (1984) and the first commercial release happens (1985). \n\nJun \n1989\n\nC++ 2.0 is released. This includes support for **multiple inheritance** and **abstract class**. Abstract classes provide a \"cleaner separation between a user and an implementor\" and reduces compile times. \n\nSep \n2011\n\nISO publishes the C++11 standard, formally named ISO/IEC 14882:2011. This release introduces identifiers `override` and `final`. These help manage complex class hierarchies. They can be applied on virtual member functions when overridden in a derived class. Identifier `final` can also be applied to a class so that it can't be inherited. **Constructors can now be inherited** with the `using` declaration. This is useful when a derived object needs to be initialized in the same way as the base object. \n\nDec \n2017\n\nISO publishes the C++17 standard, formally named ISO/IEC 14882:2017. It's now possible to do **aggregate initialization involving derived and base classes**.","meta":{"title":"C++ Inheritance","href":"c-plus-plus-inheritance"}} {"text":"# HTTP/2\n\n## Summary\n\n\nHTTP/2 is an alternative to HTTP/1.x that has been the workhorse protocol of the web for about 15 years. The older version served well in the days when web pages were simple. In February 2017, an average web page is seen to have more than a hundred assets -- JS files, CSS files, images, font files, iframes embedding other content, and more. This means that the browser has to make multiple requests to the server for a single page. User experience is affected due to long load times. HTTP/2 aims to solve these problems.\n\nIn particular, HTTP/2 allows multiple requests in parallel on a single TCP connection. HTTP/1.x allowed only a single request. HTTP/2 is also more bandwidth efficient since it uses binary encoding and compresses headers. It allows for Server Push rather than relying only on client requests to serve content.\n\n## Discussion\n\n### Are there any published results to show that HTTP/2 is better?\n\nAs of February 2017, a test comparing plain HTTP/1.1 against encrypted HTTP/2 showed that former is 73% percent slower. The server was in Dallas and the client was in Bangalore for this test. Early testing of HTTP/2 done in January 2015 showed that HTTP/2 uses fewer header bytes due to header compression. Its response messages were also smaller when compared against HTTP/1.1. For page loads, HTTP/2 took 0.772 seconds whereas HTTP/1.1 took 0.988 seconds. \n\n\n### How is the support for HTTP/2 from browsers and servers?\n\nIn 2015, popular browsers started supporting HTTP/2. This includes Chrome, Firefox, Safari, Opera and Edge. Chrome for Android started support in February 2017. \n\nAt the server side, W3Techs reported that as of February 2017 12% of the servers support HTTP/2. Of these, 77% were on Nginx and 18% were on LiteSpeed. We see that when platforms that host a number of domains start supporting HTTP/2, there's a spike in HTTP/2 traffic. This can be seen when WordPress, CloudFlare, Blogspot and Wikipedia started support for the protocol. Apache introduced it on an experimental basis in version 2.4.17. KeyCDN reported that as of April 2016 68% of its traffic is HTTP/2. \n\n\n### Are there any security vulnerabilities with HTTP/2?\n\nIn August 2016, Imperva, Inc. reported four security vulnerabilities. These should be seen as implementation flaws rather than flaws in the protocol itself. 
At least one of these vulnerabilities was found in five different server implementations, which were subsequently fixed.\n\n\n### Is the use of encryption mandatory for HTTP/2?\n\nNo. However, major browsers have said that they will support HTTP/2 only over TLS. The use of HTTP/2 over TLS is referred to as *h2*. The use of HTTP/2 over TCP, implying cleartext payload, is referred to as *h2c*.\n\n\n### Do I have to change my application code when migrating to HTTP/2?\n\nNo. The idea of HTTP/2 was to improve the way packets are \"put on the wire.\" For reasons of interoperability, HTTP Working Group was clear that the semantics of using HTTP should not be changed. This means that headers, methods, status codes and cache directives present in HTTP/1.x will remain the same in HTTP/2 as well. The semantics of HTTP remain unchanged. Hence, application code need not be changed. \n\nOf course, if you're building your own server code or custom client code, you will need to update these to support HTTP/2. The binary framing layer of HTTP/2 headers is not compatible with HTTP/1.x but this affects only HTTP clients and servers. A HTTP/1.1 client cannot talk to a server that supports only HTTP/2. \n\nIf your application is using any of the well-known \"best practices\" for better performance over HTTP/1.1, the recommendation is to remove these best practices. The bigger migration challenge may be to use HTTP/2 over a secure connection if your applications are not currently using TLS.\n\n\n### Didn't HTTP pipelining already solve this problem in HTTP/1.1?\n\nHTTP/1.0 was really a stop-and-wait protocol. The client could not make another request until response to the pending request was received. HTTP/1.1 attempted to solve this by introducing HTTP Pipelining, which allows a client to send multiple requests to the server in a concurrent manner. Requirement from the server was that responses must be sent out in the same order in which requests were received.\n\nIn the real world, performance improvement due to pipelining is not proven. Proxies when implemented wrongly can lead to erroneous behaviours. Other factors including round trip time, packet size and network bandwidth affect performance. There's also the problem of head-of-line blocking whereby packets pending from one request can block other pipelined requests. For these reasons, most modern browsers disable HTTP pipelining by default. \n\n\n### What sort of hacks did developers use to improve performance before HTTP/2?\n\nSome of the \"best practices\" to improve performance were really hacks to overcome the limitations of HTTP/1.x. These included domain sharding, inlining, image sprites and concatenation. The idea was to minimize the number of requests. With the coming of HTTP/2, these hacks will no longer be required.\n\nThere's one trick that browsers have used to improve performance without using HTTP pipelining. Browsers opened multiple TCP connections to enable concurrent requests. As many as six requests were used per domain. This is the reason why some applications deployed their content across domains so that multiple TCP connections to multiple domains will lead for faster page loads. With HTTP/2, multiple TCP connections will not be required. Multiple HTTP requests can happen on a single TCP connection. \n\n\n### What's the rationale for using binary encoding?\n\nHTTP/1.x was a textual protocol, which made it easier to inspect and debug the messages. But it was also inefficient to transmit. 
Implementation was also complex since the presence of newlines and whitespaces resulted in variations of the message, all of which had to be decoded properly by the receiver. Binary formats are more efficient since fewer bytes are needed. Since the structure is fixed, decoding becomes easier. Debugging a binary protocol is more difficult but with the right tool support it's not much of a problem. Wireshark supports decoding and analysis of HTTP/2 over TLS. \n\n\n### Can you give some details of multiplexing HTTP requests on a single TCP connection?\n\nHTTP/2 defines three things: \n\n + Frame: Smallest unit of transmission that contains its own header.\n + Message: A sequence of frames that correspond to a HTTP request or response.\n + Stream: A bidirectional flow of messages and identified by a unique identifier.Thus, multiple streams are multiplexed (and interleaved) on to a single TCP connection. Each stream is independent of others, which means that even if one is blocked or delayed, others are not affected. HTTP/2 also allows prioritization of streams, which was not possible with HTTP/1.1.\n\n\n### How is header compression done in HTTP/2?\n\nHeaders in HTTP/1.x can typically 500-800 bytes but can grow to the order of kilobytes since applications use headers to include cookies and referers. It therefore makes sense to compress these headers. HPACK is the name give to HTTP/2 header compression. It was approved by IETF in May 2015 as RFC 7541. \n\nHuffman coding is used to compress each header frame, which always appears at the start of each message within a stream. In addition, many fields of the header will remain the same across messages. HPACK removes this redundancy by replacing the fields with indices that point to tables that map these indices to actual values. There are predetermined static tables defined by HPACK. Dynamic tables allow further compression. \n\nA test on CloudFlare gave 76% compression for ingress headers, which translates to 53% savings on total ingress traffic. The equivalent numbers for egress are 69% and 1.4%. Results also showed that with more traffic, the dynamic table grows, leading to higher compression. KeyCDN reported savings of 30% on average. \n\n\n### How does Server Push improve performance?\n\nIn a typical client-server interaction, the client will request one page, parse it and then figure out other assets that need to be requested from the server. Server Push is a feature that enables the server to push assets without the client asking for it. For example, if a HTML page references images, JS files and CSS files, the server can send these after or before sending the HTML page. Each pushed asset is sent on its own stream and therefore can be prioritized and multiplexed individually. \n\nSince Server Push changes the way clients and servers interact, it's still considered experimental. Wrong configuration could lead to worse performance. For example, the server may push an asset even if the client has cached it from an earlier request. Likewise, servers may push too much data and affect user experience. Proper use of cookies, cache-aware Server Push and correct server configuration can aid in getting Server Push right. \n\n## Milestones\n\n2012\n\nIETF's HTTP Working Group begins looking at Google's **SPDY** protocol as a starting point for defining HTTP/2. \n\nFeb \n2015\n\nIESG approves HTTP/2 as a **proposed standard**. 
\n\nMay \n2015\n\nHTTP/2 is published as **RFC 7540**.","meta":{"title":"HTTP/2","href":"http-2"}} {"text":"# WordNet\n\n## Summary\n\n\nWordNet is a database of words in the English language. Unlike a dictionary that's organized alphabetically, WordNet is organized by concept and meaning. In fact, traditional dictionaries were created for humans but what's needed is a lexical resource more suited for computers. This is where WordNet becomes useful. \n\nWordNet is a network of words linked by lexical and semantic relations. Nouns, verbs, adjectives and adverbs are grouped into sets of cognitive synonyms, called **synsets**, each expressing a distinct concept. Synsets are interlinked by means of conceptual-semantic and lexical relations. The resulting network of meaningfully related words and concepts can be navigated with the WordNet browser. \n\nWordNet is freely and publicly available for download.\n\n## Discussion\n\n### What's the distinction between WordNet and a thesaurus?\n\nA thesaurus provides similar words (synonyms) and opposites (antonyms). WordNet does much more than this. Via synsets, WordNet brings together specific word senses. As a result, words that are found in close proximity to one another in the network are semantically disambiguated. \n\nA synset is also linked to other synsets by semantic relations. Such relations are missing in a thesaurus. These relations are based on concepts and therefore give us valuable information about words. For example, the verbs (communicate, talk, whisper) are all about talking but the manner goes from general to specific. A similar example with nouns would be (furniture, bed, bunkbed). An example of a part-whole relation is (leg, chair). These sorts of relations are captured in WordNet. \n\nThe nodes of WordNet are synsets. Links between two nodes are either **conceptual-semantic** (bird, feather) or **lexical** (feather, feathery). Lexical links subsume conceptual-semantic links. \n\n\n### Could you explain WordNet's synsets with an example?\n\nConsider the word 'bike'. It has multiple meanings. It could be a motorcycle (noun), a bicycle (noun) or bicycle (verb). WordNet represents these as three synsets with unique names: `motorcycle.n.01`, `bicycle.n.01` and `bicycle.v.01`. \n\nEach synset has an array of lemma names that share the same concept. Thus, `motorcycle.n.01` has the words 'motorcycle' and 'bike'. The synset also has a definition, also called **gloss**. It now becomes clear that the word 'bike' must be present in all the three synsets, each representing a different concept or meaning. \n\nIt's also possible to move from one synset to another by certain relations. For example, `motor_vehicle.n.01` is a more general concept of `motorcycle.n.01` whereas `minibike.n.01` and `trail_bike.n.01` are more specific concepts of `motorcycle.n.01`. Thus, synsets are linked by relations making WordNet a network of conceptually related words.\n\n\n### What is WordNet used for?\n\nWordNet is typically used by linguistics, psychologists and those working in the fields of AI and NLP. Among its many applications are word sense disambiguation, information retrieval, automatic text classification, automatic text summarization, and machine translation. \n\nWordNet can be used as a thesaurus except that words are organized by concept and semantic/lexical relations. In NLP, WordNet has become a useful tool for word sense disambiguation. When a word has multiple senses, WordNet can help in identifying the correct sense. 
WordNet's symbolic approach complements statistical approaches. \n\nMeasuring similarity between words is another application. Different algorithms exist to measure word similarity. Such a similarity measure can be used in spelling checking or question answering. However, WordNet is limited to noun-noun or verb-verb similarity. We can't compare nouns and verbs, or use other parts of speech. \n\nWhere neural networks are used for NLP work, word embeddings (low-dimensional vectors) are used. However, word embeddings don't discriminate different senses. WordNet has been applied to create **sense embeddings**. \n\n\n### What are major lexical relations captured in WordNet?\n\nMajor lexical relations include the following: \n\n + **Synonymy**: Synonyms are words that have similar meanings. Often context determines which synonym is best suited.\n + **Polysemy**: Polysemous words have more than one sense. The word bank can mean river bank, where money is stored, a building, or institution. Polysemy is associated with the terms *homonymy* and *metonymy*.\n + **Hyponymy/Hypernymy**: Is-a relation. Robin is a hyponym of bird since robin is a type of bird. Likewise, bird is a hypernym of robin. Thus, hypernyms are synsets that are more general whereas hyponyms are more specific.\n + **Meronymy/Holonymy**: Part-whole relation. Beak is meronym of bird since beak is part of a bird's anatomy. Likewise, bird is holonym of beak. WordNet identifies three types of relations: components (leg, table), constituents (oxygen, water), and members (parent, family).\n + **Antonymy**: Lexical opposites such as (large, small).\n + **Troponymy**: Applicable for verbs. For example, whisper is a troponym of talk since whisper elaborates on the manner of talking.\n\n### What are some limitations of WordNet?\n\nWordNet doesn't include syntactic information, although later work showed that at least for verbs there's correlation between semantic makeup and syntactic behaviour. \n\nSemantic relations are more suited to concrete concepts, such as tree is a hypernym of conifer. It's less suited to abstract concepts such as fear or happiness where it's hard to identify hyponym/hypernym relations. Some relations may also be language specific and therefore can make different wordnets less interoperable. \n\nWordNet's senses are sometimes too fine-grained for automatic sense disambiguation. One possible solution is to group related senses. \n\nWordNet doesn't include information about the etymology. Thus, word origins and how they've evolved over time are not captured. Offensive words are also included and it's left to applications to decide what's offensive since meanings change over time. Pronunciation is missing. There's limited information about usage. WordNet covers most of everyday English but doesn't include domain-specific terminology. \n\nWordNet was created in the mid-1980s when digital corpora were hard to come by. WordNet was assembled by the intuition of lexicographers rather than by a corpus-induced dictionary. \n\n## Milestones\n\n1928\n\nMurray’s Oxford English Dictionary (OED) is compiled \"on historical principles\". By focusing on historical evidence, OED, like other standard dictionaries, neglects questions concerning the synchronic organization of lexical knowledge. \n\n1969\n\nCollins and Quillian propose a **hierarchical semantic memory model** for storing information in computer systems. They hypothesize that human memory is in fact organized in this manner. 
They test their hypothesis by measuring retrieval times. For example, if a person is asked if a canary can fly, the actual retrieval might involve inference from a memory that contains \"canary is a bird\" and \"birds can fly\". This important work goes on to influence the creation of WordNet almost two decades later. \n\n1976\n\nMiller and Johnson-Laird propose **psycholexicology**, a study of the lexical component of language, which is about words and vocabulary of a language. \n\n1985\n\nSome psychologists and linguists at Princeton University start developing **a lexical database**. While a dictionary helps us search for words alphabetically, a lexical database allows us to search based on concepts. This marks the beginning of **Princeton WordNet**. We can say that it's a dictionary based on psycholinguistic principles. \n\n1991\n\n**WordNet 1.0** is released. \n\n1996\n\n**EuroWordNet** is started as an EU project covering languages Dutch, Spanish and Italian. It's inspired by and is designed to link to the Princeton WordNet. In 1997, more languages are added: German, French, Czech and Estonian. The project is completed towards the end of 1999. One novel feature is the *Inter-Lingual-Index (ILI)* that defines equivalence relations between synsets in different languages. In later years, this work is extended by other projects: EUROTERM, BALKANET, and MEANING. By 2006, it's noted that databases exist for 35 languages globally. \n\nMar \n2005\n\n**WordNet 2.1** is released. There's support for UNIX-like systems and Windows. WordNet 2.1 contains almost 118,000 synsets, comprising more than 81,000 noun synsets, 13,600 verb synsets, 19,000 adjective synsets, and 3,600 adverb synsets. \n\nDec \n2006\n\n**WordNet 3.0** is released. This release has 117,798 nouns, 11,529 verbs, 22,479 adjectives, and 4,481 adverbs. The average noun has 1.23 senses, and the average verb has 2.16 senses. \n\nJun \n2011\n\n**WordNet 3.1** is released. It's available only online. It's possible to download only the database and use the installation from 3.0. This version contains 155,327 words organized in 175,979 synsets for a total of 207,016 word-sense pairs. It's compressed size is 12MB. \n\nJun \n2018\n\nUnder the guidance of Global WordNet Association, the **English WordNet** is created on GitHub as a fork of the Princeton WordNet 3.1. Annual updates of this resource happen in April 2019 and April 2020.","meta":{"title":"WordNet","href":"wordnet"}} {"text":"# OAuth\n\n## Summary\n\n\nOAuth is an authorization framework that allows a third-party application to access private resources of a user when those resources are stored as part of another web service. For example, a user has an account with GMail or Facebook. The user visits a gaming platform that requests access to the user's basic profile on GMail or Facebook. OAuth allows the user to authorize this gaming platform to access the profile. \n\nThe strength of OAuth lies in the fact that the user's credentials (username, password) are not shared with third-party services. Such credentials are used only between the resource owner (user) and the authorization server. Thus, OAuth is for authorization, not authentication. In addition, the user can at any point revoke this authorization.\n\n## Discussion\n\n### Why was OAuth invented in the first place?\n\nBack in 2006, Blaine Cook was investigating a means to grant third-party applications access to Twitter API via delegated authentication. He considered using OpenID for this purpose but found it unsuitable. 
The idea was that users should not be required to share their usernames and passwords to third-party applications.\n\nThere were at the time other frameworks but they were not open standards: Flickr Auth, Google AuthSub and Yahoo! BBAuth. OAuth was thus invented as an open framework for authorization without even specifying the methods of authentication. In fact, it's been said that OAuth was written by studying and adopting the best practices from these early proprietary methods. \n\n\n### What are the available OAuth roles?\n\nFour roles are defined in the standard: \n\n + **Client** is the application requiring access to protected resources.\n + **Resource Owner** is the entity owning the resource. It's usually a person, called end-user. Resource owner gives authorization to client to access protected resources.\n + **Authorization Server** does authentication of the resource owner and handles authorization grants coming from the client. It gives out access tokens.\n + **Resource Server** manages protected resources based on access tokens presented by clients.As an example, a gaming application (client) requires the profile of a gamer (resource owner) from her GMail account. Access to the gamer's GMail profile (protected resource) is authorized by the gamer. The GMail service encapsulates both the authorization and resource servers. Implementations have the choice to combine authorization and resource servers into a single server, but they might just as well be different servers. \n\n\n### Could you briefly describe the protocol flow?\n\nOAuth specifies different authorization grant types, each of which has its own flow. We can summarize the flow as made up of these steps: obtain authorization from user, exchange this authorization for an access token, use the token to access protected resources. \n\nDelving into the details, OAuth flows can be related to the level of confidentiality involved. If the client is a web application, then it's a confidential client since client ID and secret key are kept on the client server. If the client is a user-agent (web browser) or a native application running on a device with the resource owner, then it's a public client. From this perspective, we can look at flows in this manner: \n\n + **Confidential**: client credentials flow, password flow, authorization code flow\n + **Public**: implicit flow, password flow, authorization code flowThough it's more common to talk of grant types these days, the terms two-legged flow and three-legged flow are still in use. With the former, the resource owner is not involved, as in client credentials flow. With the latter all three (resource owner, client and authorization server) are involved. \n\n\n### What essential data are exchanged for authorization?\n\nEvery client must have a client ID and a client secret key. Sometimes these are called consumer ID and consumer secret key. The client will also have a redirect URI. This is the endpoint where the authorization code will be received by the client and subsequently exchanged for an access token. This tuple of client ID, secret key and redirect URI is generated or configured in advance on the authorization server, what is called client registration. Authorization will fail if these do not match.\n\nAccess token is the one that's presented to the resource server after a successful authorization. Access token simply allows access to protected resources. The resource owner may choose at any point to revoke access at the authorization server. 
A token can also expire, in which case the client can request a token refresh if refresh is allowed. \n\nIn the case of password flow, authorization will be based on the resource owner's username and password. From a security standpoint, this flow makes sense only if the resource owner trusts the client enough to give out her username and password. \n\n\n### Which are the endpoints defined in OAuth?\n\nAn endpoint is nothing more than a URI. The standard defines the following endpoints though extra endpoints could be added: \n\n + **Authorization endpoint**: Located at the authorization server, this is used by the client to obtain authorization from the resource owner. Resource owner is redirected to authorization server via a user-agent (typically the web browser).\n + **Redirection endpoint**: Located at the client, this is used by the authorization server to return responses containing authorization credentials to the client via the user-agent.\n + **Token endpoint**: Located at the authorization server, this is used by the client to exchange an authorization grant for an access token.The typical call flow, at least for authorization code grant type, invokes the endpoints in the order of Authorization, Redirection and Token. In the case of implicit grant type, the order is Authorization and Redirection endpoints. The Token endpoint is not involved since the Authorization endpoint implicitly returns an access token. \n\n\n### Are there open source implementations of OAuth?\n\nOAuthLib is an open source implementation in Python. Hydra is an implementation in Go. It covers OAuth and OpenID Connect. In Java, we have Spring Security OAuth and Apache Oltu. The latter includes OAuth, JWT, JWS and OpenID Connect. From Google, there's Google OAuth Client Library for Java. Anvil Connect supports OAuth, JWT and OpenID Connect. \n\nWe have keep in mind that these implementations may be specific to client or server or both.\n\n\n### How would you compare OAuth with OpenID Connect and SAML?\n\nOpenID Connect (OIDC) is an identity and authentication layer that uses OAuth 2.0 as the base layer for authorization. As an ID token it uses a signed JSON Web Token (JWT), also called JSON Web Signature (JWS). It uses REST/JSON message flows. OIDC best suited for single sign-on apps while OAuth is best for API authorization. \n\nIn some use cases, OAuth 2.0 may be used by implementors as a means of pseudo-authentication, by assuming the entity owning the access token is also the resource owner. Eran Hammer-Lahav summarized the difference between OpenID and OAuth nicely (authentication vs authorization), \n\n> While OpenID is all about using a single identity to sign into many sites, OAuth is about giving access to your stuff without sharing your identity at all.\n\nSecurity Assertion Markup Language (SAML) offers both authentication and authorization. What is called a token in OpenID and OAuth terminology, is called an assertion in SAML. Assertions (structured in XML) contain statements that fulfil the purpose of authentication and authorization. SAML may not be suited for mobile apps. \n\nAs a simplification, we could see OIDC as a combination of OAuth and SAML. \n\n\n### What are OAuth extensions?\n\nOAuth allows extensibility in terms of authorization grant types, access token types, and more. This will allow OAuth to interwork with other protocol frameworks.\n\nWebsite OAuth.net considers three RFCs as part of the OAuth 2.0 Core: 6749, 6750, 6819. It then lists a number of OAuth extensions. 
Likewise, a search on IETF Datatracker yields all documents that pertain to OAuth. \n\n\n### How does OAuth 1.x compare against OAuth 2.0?\n\nOAuth 2.0 uses SSL/TLS for security. With OAuth 1.x, each request had to be secured by the OAuth implementation, which was cumbersome. In this sense, OAuth 2 is simpler. OAuth 1.0 suffered from session fixation attacks. Such a flaw does not exist in OAuth 1.0a and above. \n\nOAuth 2.0 uses grant types to define different flows or use cases. OAuth 1.x worked well for server-side applications but not so well for browser web apps or native apps. OAuth 2.0 is not compatible with OAuth 1.0 or 1.0a (sometimes called 1.1). A deeper technical comparison of versions 1.x and 2.0 is available at OAuth.com. \n\nControversially, it's been mentioned that OAuth 2 is \"more complex, less interoperable, less useful, more incomplete, and most importantly, less secure.\" \n\n\n### What are the problems with OAuth?\n\nOne of OAuth's original creators, Eran Hammer, claimed that OAuth 2 is more complex and less useful than its earlier version. It's been \"designed-by-committee\" with enterprise focus. He claims OAuth should have been a protocol rather than a framework. The end result is too much flexibility and very few interoperable implementations. \n\nIn terms of security risks, one analysis showed how developers can mistakenly use OAuth for the purpose of user authentication. In 2014, the \"Covert Redirect\" vulnerability was discovered where phishing along with URL redirection can be used to gain access to protected resources. This can be solved by having a whitelist of redirect URLs on the authorization server. In 2016, researchers discovered that man-in-the-middle attacks were possible with OAuth and OpenID Connect. Because OAuth 2.0 uses TLS, it doesn't support signature, encryption, channel binding and client verification. \n\n## Milestones\n\nNov \n2006\n\nBlaine Cook starts looking at OpenID but wants a better delegated authentication method for the Twitter API. \n\nApr \n2007\n\nOpenAuth Google group is started but when AOL releases their own protocol named OpenAuth, this group changes its name to OAuth in May 2007. \n\nDec \n2007\n\nOAuth Core 1.0 final draft is released. \n\nMay \n2009\n\nOAuth Working Group is created at IETF. One of the creators of OAuth later commented that bringing OAuth into IETF was a mistake. \n\nJun \n2009\n\nOAuth Core 1.0 Revision A is released. It fixes the session fixation attack. \n\nApr \n2010\n\nIETF releases RFC 5849 that replaces OAuth Core 1.0 Revision A. \n\nOct \n2012\n\nOAuth 2.0 is released by IETF as RFC 6749.","meta":{"title":"OAuth","href":"oauth"}} {"text":"# ImageNet\n\n## Summary\n\n\nImageNet is a large database or dataset of over 14 million images. It was designed by academics intended for computer vision research. It was the first of its kind in terms of scale. Images are organized and labelled in a hierarchy.\n\nIn Machine Learning and Deep Neural Networks, machines are trained on a vast dataset of various images. Machines are required to learn useful features from these training images. Once learned, they can use these features to classify images and perform many other tasks associated with computer vision. ImageNet gives researchers a common set of images to benchmark their models and algorithms. 
\n\nIt's fair to say that ImageNet has played an important role in the advancement of computer vision.\n\n## Discussion\n\n### Where is ImageNet useful and how has it advanced computer vision?\n\nImageNet is useful for many computer vision applications such as object recognition, image classification and object localization. \n\nPrior to ImageNet, a researcher wrote one algorithm to identify dogs, another to identify cats, and so on. After training with ImageNet, the same algorithm could be used to identify different objects. \n\nThe diversity and size of ImageNet meant that a computer looked at and learned from many variations of the same object. These variations could include camera angles, lighting conditions, and so on. Models built from such extensive training were better at many computer vision tasks. ImageNet convinced researchers that large datasets were important for algorithms and models to work well. In fact, their algorithms performed better after they were trained with ImageNet dataset. \n\nSamy Bengio, a Google research scientist, has said of ImageNet, \"Its size is by far much greater than anything else available in the computer vision community, and thus helped some researchers develop algorithms they could never have produced otherwise.\" \n\n\n### What are some technical details of ImageNet?\n\nImageNet consists of 14,197,122 images organized into 21,841 subcategories. These subcategories can be considered as sub-trees of 27 high-level categories. Thus, ImageNet is a well-organized hierarchy that makes it useful for supervised machine learning tasks.\n\nOn average, there are over 500 images per subcategory. The category \"animal\" is most widely covered with 3822 subcategories and 2799K images. The \"appliance\" category has on average 1164 images per subcategory, which is the most for any category. Among the categories with least number of images are \"amphibian\", \"appliance\", and \"utensil\". \n\nAs many as 1,034,908 images have been annotated with **bounding boxes**. For example, if an image contains a cat as its main subject, the coordinates of a rectangle that bounds the cat are also published on ImageNet. This makes it useful for computer vision tasks such as object localization and detection. \n\nThen there's **Scale-Invariant Feature Transform (SIFT)** used in computer vision. SIFT helps in detecting local features in an image. ImageNet gives researchers 1000 subcategories with SIFT features covering about 1.2 million images. \n\nImages vary in resolution but it's common practice to train deep learning models on sub-sampled images of 256x256 pixels. \n\n\n### Could you explain how ImageNet defined the subcategories?\n\nIn fact, ImageNet did not define these subcategories on its own but derived these from WordNet. **WordNet** is a database of English words linked together by semantic relationships. Words of similar meaning are grouped together into a synonym set, simply called **synset**. Hypernyms are synsets that are more general. Thus, \"organism\" is a hypernym of \"plant\". Hyponyms are synsets that are more specific. Thus, \"aquatic\" is a hyponym of \"plant\". \n\nThis hierarchy makes it useful for computer vision tasks. If the model is not sure about a subcategory, it can simply classify the image higher up the hierarchy where the error probability is less. For example, if model is unsure that it's looking at a rabbit, it can simply classify it as a mammal. \n\nWhile WordNet has 100K+ synsets, only the nouns have been considered by ImageNet. 
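To see how such hypernym/hyponym backoff works in practice, the hierarchy can be queried programmatically. The sketch below uses NLTK's WordNet interface and assumes `nltk` and its WordNet corpus are installed; the exact synset names and paths depend on the WordNet version.

```python
from nltk.corpus import wordnet as wn   # requires a one-time nltk.download('wordnet')

rabbit = wn.synset('rabbit.n.01')
print(rabbit.definition())

# Walk up the hypernym (is-a) chain: a model unsure about 'rabbit' could back off
# to a more general node higher in this path, such as a mammal or animal synset.
print([s.name() for s in rabbit.hypernym_paths()[0]])

# Hyponyms are the more specific concepts below 'rabbit'.
print([s.name() for s in rabbit.hyponyms()])
```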
\n\n\n### How were the images labelled in ImageNet?\n\nIn the early stages of the ImageNet project, a quick calculation showed that by employing a few people, they would need 19 years to label the images collected for ImageNet. But in the summer of 2008, researchers came to know about an Amazon service called **Mechanical Turk**. This meant that image labelling can be crowdsourced via this service. Humans all over the world would label the images for a small fee. \n\nHumans make mistakes and therefore we must have checks in place to overcome them. Each human is given a task of 100 images. In each task, 6 \"gold standard\" images are placed with known labels. At most 2 errors are allowed on these standard images, otherwise the task has to be restarted. \n\nIn addition, the same image is labelled by three different humans. When there's disagreement, such ambiguous images are resubmitted to another human with tighter quality threshold (only one allowed error on the standard images). \n\n\n### How are the images of ImageNet licensed?\n\nImages for ImageNet were collected from various online sources. ImageNet doesn't own the copyright for any of the images. This has implication on how ImageNet shares the images to researchers. \n\nFor public access, ImageNet provides image thumbnails and URLs from where the original images were downloaded. Researchers can use these URLs to download the original images. However, those who wish to use the images for non-commercial or educational purpose, can create an account on ImageNet and request access. This will allow direct download of images from ImageNet. This is useful when the original sources of images are no longer available.\n\nThe dataset can be explored via a browser-based user interface. Alternatively, there's also an API. Researchers may want to read the API Documentation. This documentation also shares how to download image features and bounding boxes. \n\n\n### What is the ImageNet Challenge and what's its connection with the dataset?\n\nImageNet Large Scale Visual Recognition Challenge (ILSVRC) was an annual computer vision contest held between 2010 and 2017. It's also called **ImageNet Challenge**.\n\nFor this challenge, the training data is a subset of ImageNet: 1000 synsets, 1.2 million images. Images for validation and test are not part of ImageNet and are taken from Flickr and via image search engines. There are 50K images for validation and 150K images for testing. These are hand-labeled with the presence or absence of 1000 synsets. \n\nThe Challenge included three tasks: image classification, single-object localization (since ILSVRC 2011), and object detection (since ILSVRC 2013). More difficult tasks are based upon these tasks. In particular, **image classification** is the common denominator for many other computer vision tasks. Tasks related to video processing, but not part of the main competition, were added in ILSVRC 2015. These were object detection in video and scene classification. \n\nFor more information, read the current state-of-the-art on image classification for ImageNet.\n\n\n### What is meant by a pretrained ImageNet model?\n\nA model trained on ImageNet has essentially learned to identify both low-level and high-level features in images. However, in a real-world application such as medical image analysis or handwriting recognition, models have to be trained from data drawn from those application domains. This is time consuming and sometimes impossible due to lack of sufficient annotated training data. 
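The solution discussed in the next paragraph, reusing ImageNet-trained weights as a starting point, is commonly done along the lines of the following sketch. It assumes PyTorch and torchvision are available and uses a hypothetical 5-class target task; the exact `weights`/`pretrained` argument depends on the torchvision version.

```python
import torch.nn as nn
import torchvision.models as models

# Load a ResNet-18 with weights learned on ImageNet
# (older torchvision releases use pretrained=True instead of the weights argument).
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the ImageNet-trained feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Replace the final 1000-class ImageNet head with one sized for the new task,
# e.g. a hypothetical 5-class domain-specific problem.
model.fc = nn.Linear(model.fc.in_features, 5)

# Only model.fc now needs to be trained, using a much smaller annotated dataset.
```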
\n\nOne solution is that a model trained on ImageNet can use it's weights as a starting point for other computer vision task. This reduces the burden of training from scratch. A much smaller annotated domain-specific training may be sufficient. By 2018, this approach was proven in a number of tasks including object detection, semantic segmentation, human pose estimation, and video recognition. \n\n\n### How is Tiny ImageNet related to ImageNet?\n\nTiny ImageNet and its associated competition is part of Stanford University's CS231N course. It was created for students to practise their skills in creating models for image classification. \n\nThe Tiny ImageNet dataset has 100,000 images across 200 classes. Each class has 500 training images, 50 validation images, and 50 test images. Thus, the dataset has 10,000 test images. The entire dataset can be downloaded from a Stanford server. \n\nTiny ImageNet is a strict subset of ILSVRC2014. Labels and bounding boxes are provided for training and validation images but not for test images. All images have a resolution of 64x64. Since the average resolution of ImageNet images is 482x418 pixels, images in Tiny ImageNet might have some problems: object cropped out, too tiny, or distorted. It's been observed that with a small training dataset overfitting can occur. Data augmentation is usually done on the images to help models generalize better. \n\nSimilarly, *Imagenette* and *Imagewoof* are other subsets of ImageNet, created by fast.ai. \n\n\n### What are the criticisms or shortcomings of ImageNet?\n\nThough ImageNet has a large number of classes, most of them don't represent everyday entities. One researcher, Samy Bengio, commented that the WordNet categories don't reflect the interests of common people. He added, \"Most people are more interested in Lady Gaga or the iPod Mini than in this rare kind of diplodocus\". \n\nImages are not uniformly distributed across subcategories. One research team found that by considering 200 subcategories, they found that the top 11 had 50% of the images, followed by a long tail. \n\nWhen classifying people, ImageNet uses labels that are racist, misogynist and offensive. People are treated as objects. Their photos have been used without their knowledge. About 5.8% labels are wrong. \n\nImageNet lacks geodiversity. Most of the data represents North America and Europe. China and India are represented in only 1% and 2.1% of the images respectively. This implies that models trained on ImageNet will not work well when applied for the developing world. \n\nAnother study from 2016 found that 30% of ImageNet's image URLs are broken. This is about 4.4 million annotations lost. Copyright laws prevent caching and redistribution of these images by ImageNet itself. \n\n## Milestones\n\n1985\n\nGeorge A. Miller and his team at Princeton University start working on **WordNet**, a lexical database for the English language. It's really a combination of a dictionary and a thesaurus. This would enable applications in the area of Natural Language Processing (NLP). \n\n2006\n\nFei-Fei Li at the University of Illinois Urbana-Champaign gets the idea for ImageNet. The prevailing conviction among AI researchers at this time is that algorithms are more important and data is secondary. Li instead proposes that lots of data reflecting the real world would improve accuracy. By now, WordNet itself is mature, with version 3.0 getting released in December. \n\n2007\n\nFei-Fei Li meets Christiane Fellbaum of Princeton University, a WordNet researcher. 
Li adopts WordNet for ImageNet. \n\n2008\n\nIn July, ImageNet has 0 images. By December, ImageNet reaches 3 million images categorized across 6000+ synsets. By April 2010, the count is 11 million images across 15,000+ synsets. This is impossible for a couple of researchers but is made possible via crowdsourcing on the Amazon's Mechanical Turk platform. \n\n2009\n\nImageNet is presented for the first time at the Conference on Computer Vision and Pattern Recognition (CVPR) in Florida by researchers from the Computer Science Department, Princeton University. \n\nMay \n2010\n\nThe first ever ImageNet Challenge is organized, along with the well-known image recognition competition in Europe called the *PASCAL Visual Object Classes Challenge 2010 (VOC2010)*. \n\n2012\n\nImageNet becomes the world's largest academic user of Mechanical Turk. The average worker identifies 50 images per minute. \r The year 2012 also sees a big breakthrough for both Artificial Intelligence and ImageNet. **AlexNet**, a deep convolutional neural network, achieves top-5 classification error rate of 16% from the previous best of 26%. Their approach is adapted by many others leading to lower error rates in following years. \n\n2015\n\nThe best human-level accuracy for classifying ImageNet data is 5.1% and GoogLeNet becomes the nearest neural network counterpart with 6.66%. PReLU-Net becomes the first neural network to surpass human-level of accuracy by achieving 4.94% top-5 error rate. \n\n2017\n\nThis year witnesses the final ImageNet Competition. Top-5 classification error drops to 2.3% and the competition is now considered a solved problem. Subsequently, the competition is hosted at Kaggle. \n\nMay \n2019\n\nEfficientNet claims to have achieved top-5 classification accuracy of 97.1% and top-1 accuracy of 84.4% for ImageNet, dethroning it's predecessor GPipe (December 2018) by a meagre 0.1% in both top-1 and top-5 accuracies. \n\nJun \n2019\n\nImageNet wins **Longuet-Higgins Prize** at CVPR 2019, a retrospective award that recognizes a CVPR paper for having significant impact and enduring relevancy on computer vision research over a 10-year period. \n\nJul \n2019\n\n**ImageNet-A** fools the best AI models 98% of the time, due to their over-reliance on colour, texture and background cues. Unlike adversarial attack in which images are modified, ImageNet-A has 7500 original images that have been handpicked from ImageNet. This shows that current AI models are not robust to new data.","meta":{"title":"ImageNet","href":"imagenet"}} {"text":"# Domain-Driven Design\n\n## Summary\n\n\nWriting software involves software architects and programmers. They understand software concepts, tools and implementation details. But they may be disconnected from the business and hence have an incomplete understanding of the problem they're trying to solve. **Domain-Driven Design (DDD)** is an approach towards a shared understanding within the context of the domain. \n\nLarge software projects are complex. DDD manages this complexity by decomposing the domain into smaller subdomains. Then it establishes a consistent language within each subdomain so that everyone understands the problem (and the solution) without ambiguity. \n\nDDD is object-oriented design done right. Among its many benefits are better communication, common understanding, flexible design, improved patterns, meeting deadlines, and minimizing technical debt. 
However, DDD requires domain experts and additional effort, and hence is best applied to complex applications.\n\n## Discussion\n\n### What do you mean by 'domain' in the context of domain-driven design?\n\nDomain can be defined as \"a sphere of knowledge, influence or activity.\" For example, *Accountancy* is a domain. An accountant is someone who knows this domain well. She is considered a domain expert. She's perhaps not a programmer and therefore can't build accounting software. But she can advise developers on the intricacies and workings of the domain. \n\nConsider the domain of air traffic. A developer might imagine that pilots decide on the route (a sequence of 3D points) to a destination. A domain expert might clarify that routes are pre-determined and each route is actually a ground projection of the air path. \n\nTo better manage complexity, a domain can be broken down into **subdomains**. In the e-commerce domain, *Payment*, *Offer*, *Customer* and *Shipping* are possible subdomains. \n\nDomain is the business or problem to be solved. Model is the solution. Likewise, subdomains in the problem space are mapped to **bounded contexts** in the solution space. \n\n\n### Could you explain the relevance of bounded contexts?\n\nConsider the terms *Member* and *Payment* used in a country club. For some stakeholders the terms relate to club membership fees; for others, they're about tennis court booking fees. This disconnect is an indication that the domain is not really one and indivisible. There are subdomains hiding in there and they're best modelled separately. Bounded contexts are the solution. \n\nWhen a model is proposed for a subdomain, it's applied only within the boundaries of the subdomain. These boundaries in the solution space define a bounded context. When teams understand the bounded contexts, it becomes clear what parts of the system have to be consistent (within a bounded context) and what parts can develop independently (across bounded contexts). Bounded contexts therefore imply a clear **separation of concerns**. \n\nWithout bounded contexts, we'll end up with a single large complex model of many entities and relationships. Entities will get tightly coupled. The end result is often called a **Big Ball of Mud**. \n\nBounded contexts may overlap or may be neatly partitioned. Bounded contexts often relate to one another and this is captured in a **context map**. \n\n\n### What is meant by the term Ubiquitous Language?\n\nA developer might state that \"a database was updated and triggered an SMTP service.\" A domain expert unfamiliar with such technical jargon will be left confused. **Ubiquitous Language (UL)** is an attempt to get everyone to use words well-understood within the bounded context. Using UL, the developer would now state that \"the pizza was delivered and a coupon was sent to the customer.\" \n\nUL must be consistent and unambiguous. It should evolve as understanding of the domain changes. A change in language implies a change to the model. In fact, the model is not just a design artifact used to draw UML diagrams. The model is the backbone of the language. Within a bounded context, use the same language in diagrams, writing, speech and code. \n\nTo create a UL, have an open discussion, analyze existing documents, express the domain clearly, and define an agreed glossary. A glossary alone won't help. Use it consciously to arrive at a common understanding of the model. 
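As a minimal sketch of what UL-driven naming can look like in code (a Python sketch; the pizza-delivery names below are invented for illustration and not taken from any real project), compare a jargon-driven and a UL-driven version of the same behaviour:

```python
# Hypothetical pizza-delivery example: names are invented for illustration.

# Jargon-driven naming: meaningful only to developers.
def update_db_and_trigger_smtp(record_id: int) -> None:
    ...

# UL-driven naming: readable by domain experts and developers alike.
class Order:
    def mark_delivered(self) -> None:
        """Record that the pizza was delivered to the customer."""
        ...

class CouponService:
    def send_coupon_to(self, customer_id: str) -> None:
        """Send a coupon to the customer after a successful delivery."""
        ...
```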
\n\n\n### How should I implement Universal Language in my code?\n\nSince documents can get outdated quickly, code is an enduring expression of Universal Language. \n\nAdopting UL in **naming convention** leads to clean readable code. The purpose of variables, methods, classes and APIs become easier to see. We call this **intention-revealing interfaces**. Example names that follow UL are `TaskReservation`, `ReservationAttempt`, `IsFulfilled`, `BookSeats`, `Reservation`, and `Confirm`. In fact, there are tools to check if names in code follow the domain's defined vocabulary. *NDepend* is one such tool. \n\nWithout UL, a shared understanding is hard to achieve and teamwork suffers. Even among technical folks, one may refer to `coupon` in an API but call it `Discounts` in the backend. Another mismatch is when a checkout workflow is mapped to `RideCommerce` service. Perhaps, this was documented somewhere but the documentation was not read by everyone. \n\nAny code refactoring mustn't happen without discussion using the UL. For example, discussion could involve these questions to clarify concepts: \"When you say `User`, do you mean `Driver`?\" or \"Do you think a `Coupon` is applied to the `BookingAmount`, or is it added?\" \n\n\n### Could you describe some essential terms of DDD?\n\nFrom a comprehensive online DDD glossary, we describe some essential terms: \n\n + **Entity**: An object that has attributes but primarily defined by an identity.\n + **Value Object**: An object with attributes but no identity.\n + **Aggregate**: A cluster of objects treated as a single unit. External references are restricted to only one member, called the *Aggregate Root*. A set of consistency rules applies within the aggregate's boundaries.\n + **Factory**: A mechanism to encapsulate and abstract away the details of creating a complex object. A factory ensures aggregates are initialized to a consistent state.\n + **Repository**: A mechanism to encapsulate storage, search and retrieval from a collection of objects. Its implementation is not a domain concern.\n + **Service**: A stateless functionality that renders its service via an interface, typically used when a workflow doesn't fit the current model.\n\n### Could you explain entities and value objects?\n\nEntities and value objects both follow principles of object-oriented design. They both encapsulate data (attributes) and behaviour (methods). The key difference is that an entity is distinguished by its **identity**, which must be unique within the system. On the other hand, value objects are descriptive with no conceptual identity. \n\nWhen we say that two entities are the same, we mean that their identities match, with possibly different attributes. When we compare two value objects, we're only checking if their attributes match. Thus, entities use **identifier equality** and value objects use **structural equality**. \n\nAn entity has a **lifecycle**. Its form and content can change but not its identity. Identity is used to track the entity. Value objects are ideally **immutable**. \n\nIn a banking application, transactions are entities, each with a unique transaction number. The amount transacted is a value object. A cheque's date may differ from the date of clearing but entries are always reconciled not by date but by the cheque number. This is an example of comparing entities by identities rather than by attributes. 
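The banking example can be sketched in code. This is only an illustration under the assumption of a Python implementation; the class and field names are invented:

```python
# Illustrative sketch: an entity compared by identity, and an immutable
# value object compared by its attributes.
from dataclasses import dataclass

@dataclass(frozen=True)      # immutable; equality generated from attributes
class Money:                 # value object: no identity of its own
    amount: int              # in minor units, e.g. cents
    currency: str

class Transaction:           # entity: defined by its identity
    def __init__(self, transaction_id: str, amount: Money) -> None:
        self.transaction_id = transaction_id
        self.amount = amount          # attributes may change over the lifecycle

    def __eq__(self, other) -> bool:  # identifier equality
        return (isinstance(other, Transaction)
                and self.transaction_id == other.transaction_id)

    def __hash__(self) -> int:
        return hash(self.transaction_id)

# Two Money objects with the same attributes are interchangeable...
assert Money(500, "USD") == Money(500, "USD")
# ...but two transactions are equal only if their identities match.
assert Transaction("TXN-1", Money(500, "USD")) != Transaction("TXN-2", Money(500, "USD"))
```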
\n\n\n### Could you explain the concept of aggregates in DDD?\n\nWhen a cluster of entities or value objects control a significant area of functionality, it's easier to abstract them into a single consistent unit. This is the **aggregate pattern**. One way to identify aggregates is to look at common transactions and the entities that get involved. \n\nSince an aggregate is a cohesive unit, it's best to ensure consistency by using a single **aggregate root** for all updates. Changing a child entity independently will break consistency since the root is unaware of such direct updates. \n\nAs an example, consider two aggregates *Buyer* and *Order*. *Order* has as entities Order and OrderItem; and Address as a value object. However, all external interactions are via the Order entity, which is the root. This entity refers to the root of the *Buyer* by identity (foreign key) . \n\nKeep an aggregate on one server and allow aggregates to be distributed among nodes. Within an aggregate, update synchronously. Across aggregate boundaries, update asynchronously. Often, NoSQL databases can manage aggregates better than relational databases.\n\n\n### Could you share some tips for practising DDD?\n\nObjects with hardly any behaviour represent poor application of object-oriented design. They're little more than procedural-style design. This anti-pattern is called **Anemic Domain Model**. Instead, put business logic in domain objects. Other DDD anti-patterns to avoid include repetitive data access objects (use repositories instead), fat service layers, and classes that frequently access other classes' data. \n\nAdopt a **layered architecture** to avoid details of application, presentation or data persistence from creeping into the domain layer. \n\nAnalyze the problem domain before deciding if a concept should be an entity or a value object. Don't link a value object to an entity, which would require an identity for the value object. Instead, the value object can be inlined into the entity. \n\nDifficulties in implementing an aggregate usually indicates a modelling problem. Instead, attempt to refine the model. \n\nIf an operation doesn't fit the current model, evolve the model. Only if that's not possible, consider introducing a service. Since services represent activities, use verbs rather than nouns in naming. Indeed, DDD is not for perfectionists. It's okay if the model can't handle some special cases. Use services rather than a leaky or confusing abstraction. \n\n## Milestones\n\n1960\n\nSoftware engineers recognize from the late 1960s that procedural languages are inadequate in handling the growing complexity of software projects. When *Simula 67* is released in 1967, it becomes the first **object-oriented programming (OOP)** language. Concepts of OOP and OOD reach maturity in the early 1980s. \n\nSep \n1997\n\nFoote and Yoder observe that while there are many high-level software architectural patterns, what's really prevalent in industry is \"haphazardly structured, sprawling, sloppy, duct-tape and bailing wire, spaghetti code jungle.\" They call this the **Big Ball of Mud**. Code is dictated by expediency than design. The mantra has been, \"Make it work. Make it right. Make it fast.\" Popular practices include immediate fixes and quick prototypes. Programmers work without domain expertise. No thought is given to elegance and efficiency of code. \n\n2003\n\nEric Evans publishes a book titled *Domain-Driven Design: Tackling Complexity in the Heart of Software*. 
Evans is credited for coining the term **Domain-Driven Design**. This is also called the \"Blue Book\". \n\n2009\n\nAt QCon, Phil Wills presents a case study about how The Guardian website was almost completely rebuilt following the principles of DDD. He mentions some key points: domain experts got more involved; they kept the model flexible to changes even when deadlines were met; they focused on core subdomains while leveraging on off-the-shelf software for the rest. \n\nMay \n2011\n\nThe term **microservices** gets discussed at a workshop of software architects near Venice, to describe a common architectural style that several of them were exploring at the time. An application is decomposed into loosely coupled services, each being self-contained and managing its own data. From the perspective of DDD, a microservice maps to a subdomain and bounded context. \n\n2013\n\nVaughn Vernon publishes a book titled *Implementing Domain-Driven Design*. This is also called the \"Red Book\". \n\nSep \n2017\n\nEric Evans notes at the *Explore DDD* conference that DDD remains relevant today, fourteen years after he coined the term. Many tools have come to support and adopt DDD. Though DDD is not about technology, it's not indifferent about technology. With technology's support, we can focus on building better models. He gives some examples. NoSQL databases make it easier to implement aggregates. With modern functional languages, it's easier to implement immutable value objects. Microservices have come to represent bounded contexts. \n\nJan \n2019\n\nA report published by InfoQ shows DDD in *Late Majority* stage of technology adoption. This is proof of its effectiveness in software development. This stage implies that more than half the software folks have adopted DDD and the skeptics are starting to adopt them too. Only a year earlier, DDD was in the *Early Majority* stage.","meta":{"title":"Domain-Driven Design","href":"domain-driven-design"}} {"text":"# DevOps Metrics\n\n## Summary\n\n\nDevOps encourages incremental changes and faster releases while also improving quality and satisfaction. But how do we know if DevOps is making an impact? How do we decide what needs to change? We need to measure and this is where DevOps metrics come in. \n\nMetrics give insights into what's happening at all stages of the DevOps pipeline, from design to development to deployment. Metrics are objective measures. They strengthen the feedback loops that are essential to DevOps. Collecting metrics and displaying them via dashboards or scorecards should be automated. It's important to map these metrics to business needs.\n\n## Discussion\n\n### What factors make for a good DevOps metric?\n\nA good DevOps metric must ideally be all of these: \n\n + **Obtainable**: A metric that can't be measured is useless.\n + **Reviewable**: It must be relevant to the business and stand up to scrutiny.\n + **Incorruptible**: It should be free from influence of teams and team members.\n + **Actionable**: It should suggest improvements to workflows, policies, incentives, tools, etc.\n + **Traceable**: It should be possible to trace the metric to root causes.\n\n### What's the process of working with DevOps metrics?\n\nA typical process involved identifying the metrics, putting in place methods to measure them, measuring and displaying them on dashboards, evaluating the metrics in terms status and trends, acting on the metrics to effect change, and continually assessing if the metrics are aligned to business goals. 
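As a small illustration of the measurement step, the sketch below computes two commonly tracked metrics, deployment frequency and mean time to repair (MTTR), from event timestamps. The data and field names are invented; in practice these events would come from CI/CD and incident-management tools:

```python
# Illustrative sketch with made-up data: deployment frequency and MTTR
# computed from raw event timestamps.
from datetime import datetime, timedelta

deployments = [datetime(2023, 1, d) for d in (2, 5, 9, 12, 16, 23, 27)]

# (detected, resolved) timestamp pairs for production incidents
incidents = [
    (datetime(2023, 1, 6, 10, 0), datetime(2023, 1, 6, 11, 30)),
    (datetime(2023, 1, 18, 22, 0), datetime(2023, 1, 19, 1, 0)),
]

period_days = (max(deployments) - min(deployments)).days or 1
deploys_per_week = len(deployments) / (period_days / 7)

mttr = sum((resolved - detected for detected, resolved in incidents),
           timedelta()) / len(incidents)

print(f"Deployment frequency: {deploys_per_week:.1f} per week")
print(f"MTTR: {mttr}")
```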
\n\nSince DevOps is cross-functional (process, people, tools) and cross-teams (dev, ops, testing), metrics should not narrowly focus on only some parts of the value chain. Metrics should capture a holistic view of the entire value chain.\n\n\n### What are some important DevOps metrics?\n\nThere are dozens of metrics spread across all phases of a DevOps pipeline. Some have attempted to group them into categories: \n\n + **Velocity**: lead time, change complexity, deployment frequency, MTTR\n + **Quality**: deployment success rate, application error rate, escaped defects, number of support tickets, automated test pass percentage\n + **Performance**: availability, scalability, latency, resource utilization\n + **Satisfaction**: usability, defect age, subscription renewals, feature usage, business impact, application usage and trafficAnother grouping can be host-based metrics, application metrics, network metrics, server pool metrics and external dependency metrics. \n\nThere are also metrics for application build cycles, metrics for application performance, metrics for delivery performance, metrics organized by infrastructure, system and team health, and metrics for building or running apps.\n\nAt a minimum, aim for more deployments per week, shorter lead time from code commit to deployment, lower failure rate in production, and shorter time to repair failures. Have metrics to measure these. \n\nLikewise, a study from 2019 identified lead time, deployment frequency, mean time to restore (MTTR) and change fail percentage as key metrics. \n\n\n### What are the important metrics in the world of microservices and serverless architectures?\n\nFor microservices, metrics to include are number of requests per second, number of failed requests per second, and distribution of request service times. \n\nFor serverless, the concern shifts from monitoring infrastructure to the application itself. Metrics include performance such as function runtime; scaling such as concurrency limits or memory limits; tracing event-triggered call flows across services or functions; and errors such as code bug, wrong invocation or function timeout. \n\nFor both microservices and serverless, it's important to instrument the code. **OpenTracing** provides a vendor-neutral API for distributed tracing. Observability is an important aspect, which means that communication across services and functions needs to be accessible. A single request must be correlated to the sequence of service calls that followed it. **Istio** is a tool that requires strong observability. \n\n\n### Could you describe some DevOps metrics adopted from traditional engineering practices?\n\nFrom traditional engineering, DevOps has adopted the following metrics: \n\n + **Mean Time To Detect (MTTD)**: This is the average time to discover a problem. It's an indication of how effective is your incident management tools and processes.\n + **Mean Time To Failure (MTTF)**: This is an indication of how long on average the system or a component can run before failing. This can suggest preventive maintenance. This metric relates to improving system uptime.\n + **Mean Time Between Failures (MTBF)**: This is the average time between failures. It's a measure of reliability and availability.\n + **Mean Time To Repair (MTTR)**: This is the average time to repair/resolve/recover after failure is detected. This metric relates to reducing system downtime. Code complexity is one aspect that affects MTTR.The goal is to reduce MTTD and MTTR while increasing MTTF and MTBF. 
DevOps is about incremental changes. If many changes are introduced at once, it will take longer to detect and fix issues. \n\n\n### Are there DevOps metrics that one should avoid?\n\nTeams transitioning to DevOps might end up adopting the wrong metrics. In fact, **traditional metrics** such as MTBF could be seen as irrelevant for DevOps where some failures are expected due to the speed of delivery. Look beyond such costs. Instead, improve total economic impact. Others to avoid are metrics that focus on business velocity at the expense of quality or culture; metrics that are optimized for one team and causing negative impact on others. \n\nAvoid **conflict metrics** that promote individuals rather than teams or pit one team versus another. These include ranking individuals or teams based on failure metrics (broken builds, etc.), rewarding top performers who don't collaborate or having different standards for different teams. \n\nAvoid **vanity metrics** that promote quantity or speed over quality: number of lines of code, number of deployments per week, number of bugs fixed, number of tests added. \n\nDon't collect a specific metric just because it's easy. Don't use a metric that encourages negative behaviours. It's been said, \n\n> Human beings adjust behavior based on the metrics they’re held against... What you measure is what you’ll get.\n\n\n### Could you mention some best practices when using DevOps metrics?\n\nFor those new to DevOps metrics, start with metrics that are simpler to collect and manage. Get the momentum going. For better focus, don't apply too many metrics. Choose metrics aimed at broader organizational goals or process health issues. Measure fast to enable real-time feedback loops. \n\nBecause automated system-based metric collection is hard to do, you may want to start with surveys. In fact, both these are complementary. Surveys are good for metrics on culture or things outside the system. \n\nUse metrics that suit your business model. Adopt **value stream mapping** in which each metric is mapped to business values. For example, measuring website responsiveness becomes more useful if you can map it to business outcomes such as customer churn or abandoned shopping carts. \n\nMetrics can also be role-based (business vs engineering): give teams the choice to customize their own dashboards. In fact, dashboards are essential for tracking all metrics in one place. Compare trends, not teams. Look for outliers. Measure lead time to production, not just completion. \n\nEvolve your metrics as new technologies and tools enter your DevOps pipeline. \n\n\n### Are there tools to help teams collect metrics for DevOps?\n\nMany tools are available for various DevOps tasks. Some of these show metrics and even do real-time monitoring. We briefly mention some of them. In any case, there's a need to provide teams a single unified dashboard regardless of the tool that collects the metrics.\n\nNagios is widely used for IT infrastructure monitoring. Zabbix, Sensu and Prometheus are alternatives. Prometheus is for service monitoring. It's often used with the visualization and analytics of Grafana. \n\nFor application performance monitoring, there are New Relic, AppDynamics, Compuware and Boundary. For deeper integration, cross-platform data aggregation and monitoring, there's BigPanda and PagerDuty. \n\nJIRA Software does issue and project tracking. Code Climate automates code review and analysis. OverOps detects bugs proactively. For build automation, there's Apache Ant. 
Jenkins is useful for continuous integration and delivery. Ansible, Chef and Puppet help with continuous deployment. Ganglia is for cluster and grid monitoring. Snort is for real-time security. For logging, we have Logstash. Monit does system monitoring and recovery. \n\nCloud providers offer their own monitoring tools: AWS CloudWatch from Amazon or StackDriver from Google. \n\n## Milestones\n\n2009\n\nDevOps has its beginnings at the O'Reilly Velocity conference where John Allspaw and Paul Hammond present a talk titled *10+ Deploys a Day: Dev and Ops Cooperation at Flickr*. Even in these early days, the importance of metrics is realized. Some metrics identified include CPU load, memory usage, network throughput, and aggregated job queue. \n\n2016\n\nThere's a growing realization among practitioners that we can end up collecting a lot of wrong DevOps metrics. It's important to relate metrics to business values, needs or outcomes. One such proposal is the value-based approach that measures how value flows through the DevOps pipeline. \n\nJul \n2017\n\nGartner publishes a report titled *Data-Driven DevOps: Use Metrics to Guide Your Journey*. This report includes a pyramid of metrics for DevOps.","meta":{"title":"DevOps Metrics","href":"devops-metrics"}} {"text":"# In-Memory Database\n\n## Summary\n\n\nAn **In-Memory Database (IMDB)**, also called Main Memory Database (MMDB), is a database whose primary data store is the RAM. This is in contrast with traditional databases which keep their data in disk storage and use RAM as a buffer cache for frequently/recently used data.\n\nIMDB has gained traction in recent times because of increasing speeds, reducing costs and growing size of RAM devices, combined with powerful multi-core processors. Since only RAM is accessed and there is no disk operation for an IMDB query, speeds are extremely high.\n\nHowever, RAM storage is volatile. So there will be data loss when the device loses power. To overcome this, IMDB employ several techniques such as checkpoints, transaction logging, and non-volatile RAM. \n\nAmong commercially available IMDB is SAP HANA. Open source options are Redis, VoltDB, Memcached, and an extension of SQLite.\n\n## Discussion\n\n### What's the context for the growing interest in IMDB?\n\nFrom 1995 to 2015, RAM got 6000 times cheaper. Prior to that, disk drives were the main option to store databases. Disks were great for sequential access but poor for random access since lot of time was spent in rotating the disk and seeking the exact location. \n\nMeanwhile, computing power has increased via faster clocks and multi-core processors. Networking speeds have also gone up. However, disk access speeds have not gone up fast enough. In fact, they've become the bottleneck in computing systems. Worse still, the amount of data being generated has grown exponentially. There's also a need to analyse all this data (via Machine Learning) in almost real time. \n\nThis is where in-memory databases become suitable. They're now affordable enough to store large amounts of data, particularly compressed columnar data. They're fast enough for real-time analytics. We can continue to use disks where sequential access is desired, such as for logging. The notable database scientist Jim Gray once supposedly said, \n\n> Memory is the new disk\n\n\n### What are the key features of IMDB? How do they vary from disk-based RDBMS?\n\nDisk access is sequential. 
In disk-based DBs, the seek time to locate records on the physical disk is the biggest contributor to query time. In an IMDB, since entire data is in memory, this burden is entirely eliminated. \n\nWhile RAM is volatile, data persistence is achieved through multiple methods and it’s done very efficiently. These safeguard from data loss due to power failure. Since only a minority of DB operations are data change operations (about 10-15%), disk operations are minimal.\n\nMost IMDB support columnar data storage, where table records are stored as a sequence of columns, in contiguous memory locations. This speeds up analytical querying greatly and minimizes CPU cycles. When data is stored in columnar form, column-wise compression techniques (such as sparse columns) are used to minimise memory footprint. Moreover, there's less dependence on indexing. IMDB delivers performance similar to having an index on every column, but with much less transactional overhead. \n\n\n### What are the popular applications of IMDB?\n\nIMDB works well for applications that require very fast data access, storage and manipulation. Real-time embedded systems, music and call databases in mobile phones, telecommunication access networks, programming data in set-top boxes, e-commerce applications, social media sites, equity financial services are the most prevalent applications of in-memory databases. IMDBs have also gained a lot of traction in the data analytics space. \n\n\n### How do the performance characteristics compare between IMDB and traditional DB?\n\n \n\n + **CPU and Memory** - Accessing data in memory is much faster than writing to/reading from file systems. IMDB design is simpler than on-disk databases, so they have significantly lower memory/CPU requirements. Even if a machine hosting an RDBMS has enough memory on board to fit the entire data, IMDB would be faster. That's because it performs fewer copy operations and has more advanced in-memory data structures, optimized for working with memory. However, IMDB scales poorly to multiple CPU cores.\n + **Data Query and Update functions** - Applications requiring random data access under 1ms (such as online/real-time applications) benefit from IMDB. If access time over 100ms is acceptable, traditional RDBMS works fine. For sequential access, the difference is even more pronounced. Persistence operations don’t affect data update times in IMDB since they happen offline.\n + **Size constraints** - Generally, not more than 200-300GB RAM is installed on a machine in order to keep machine start-up time acceptable. So when DB size is in TB range supporting millions of transactions per second, memory sharding is done to partition one logical DB into multiple physical DBs.\n\n### What are the ways in which persistence is supported in IMDB?\n\nThere are many methods by which IMDB might persist the data state on disk. End goal is to ensure complete recovery of data but without compromising on query speeds.\n\n + **Transaction Logs** - Each data update is applied to the IMDB and also on a transaction log on disk. Change entries done at the end of the append-only log file. When the file size rolls over, its contents are archived.\n + **Checkpoint Images** - Checkpoint files contain an image of the database on disk. Some IMDB use dual checkpoint files for additional safety, in case the system fails while a checkpoint operation is in progress. 
For recovery, the database checkpoint on disk is merged with the latest transaction log entries.\n + **High Availability** - To protect against memory outages in data centers, the data cluster is replicated asynchronously into a second read-only cluster. If outage occurs, hot swap gets triggered to configure the secondary as primary.\n + **Non-volatile RAM** - Using battery powered RAM devices or supercapacitors, all write operations can be persistent even after power loss. These are slightly slower than DRAM, but much faster than RDBMS disk operations.\n\n### What are some commercial IMDB products?\n\nWithout being exhaustive, we describe three commercial IMDB products:\n\n + **SAP HANA** - In-memory, column-based data store from SAP. Available as local appliance or cloud service. The in-memory data is CPU-aligned, no virtual expensive calculation of LRU, logical block addresses, just direct (pointer) addressing of data. Supports server scripts in SQLScript, JSON and R formats. Good support for predictive analytics, spatial data processing, text analytics, text search, streaming analytics, graph data processing, and ETL operations. Guarantees microsecond response and extremely high throughput performance.\n + **Oracle TimesTen** - In-memory OLTP RDBMS acquired by Oracle. Guarantees microsecond response and extremely high throughput performance. Provides application level data cache for improved response time. High availability through replication.\n + **eXtremeDB** - Combines on-disk and in-memory data storage in a single embedded database system. It can be deployed as an embedded database system or elastically scalable client/server distributed database.\n\n### What are some open source IMDB platforms?\n\nHere are some open source IMDB platforms to consider:\n\n + **Apache Ignite** – Java-based middleware that forms an in-memory layer over any existing DB. Can work in a single or distributed environment. Seamless integration with MapReduce and Hadoop systems.\n + **Altibase** - Hybrid database that combines an in-memory database and an on-disk database into a single product to achieve the speed of memory and the storage capacity of disk.\n + **SQLite** - Instruct an SQLite database to exist purely in memory using the special filename `:memory:` instead of the real disk filename.\n + **Redis** - Key-value store based system with support for key data structures. Schema free. Data durability feature is optional. Good programming language support.\n + **VoltDB** - Traditional RDBMS with schema support. Works using Java Stored Procedures that applications can invoke through JDBC. The company is collaborating with Samsung for Scaling In-Memory Data Processing with Samsung DRAM/SSD devices.\n\n### What are the use cases where an IMDB is not suitable?\n\n Volatile memory in affordable and servers with support for 24TB of RAM are now available. But IMDB cannot replace the traditional RDBMS in all scenarios. Following are some of the use cases where IMDB is not suitable when:\n\n + **Persistence is critical** - Applications with confidential/critical data undergoing frequent updates might be at risk in IMDB. Unless persistence features are in the IMDB, there's risk of data loss during power failure.\n + **Very small scale data** - Small and medium enterprises can simply run on low-cost server with acceptable performance.\n + **Memory-intensive applications** - When IMDB is used, the bulk of RAM is going be occupied by the DB itself. 
So if the application itself requires high memory (such as 3D games, live streaming), then memory costs will rise significantly.\n + **Very large scale data applications** - Memory requirements would be prohibitively expensive, hence not recommended.\n + **Non-mission-critical operations** - Backend operations or applications where data can be batch processed offline don't require millisecond query response times of IMDB.\n\n\n## Milestones\n\n1970\n\nThe term \"relational database\" is invented by E. F. Codd at IBM in 1970. \n\n1978\n\nIBM's IMS/VS FastPath is one of the earliest in-memory engines. \n\n1980\n\nIn telecom and defence domains, some companies start using in-memory databases. However, these are internal to those who used them and generally not available for purchase. \n\n1992\n\n**Oracle 7** version supports data buffers where a snapshot of a data block would be taken from disk and stored in RAM for faster access. \n\n1997\n\n**TimesTen** releases its first version in-memory database. TimesTen is later acquired by Oracle. \n\n2009\n\n**Redis**, an in-memory data structure project, makes its first release with a BSD license. It becomes one of the most popular key-value store databases. \n\n2012\n\nEarly versions of **SAP HANA** were present from 2005. In 2012, SAP promotes its commercial version for cloud based applications. HANA now includes a business suite of applications covering ERP and CRM domains under one umbrella. \n\n2014\n\nOracle releases its **Oracle 12c** cloud RDBMS business suite with in-built support for in-memory DB operations.","meta":{"title":"In-Memory Database","href":"in-memory-database"}} {"text":"# API Testing\n\n## Summary\n\n\nApplications today rely on APIs. Whether it's a web client requesting a service from a web application server, or one microservice requesting data or operation from another microservice, APIs play a key role. Via APIs, developers give others access to their service. \n\nAt the same time, organizations are embracing Agile methodology and making frequent product releases. It's therefore important to test these APIs. \n\nAPI testing is useful to validate a solution and to find errors. API testing complements unit testing and end-to-end testing. It enables more efficient use of test resources. Problems can be caught earlier in the development cycle.\n\nHTTP RESTful API is the most widely used architecture. However, this article describes API testing in general and is therefore relevant to other API types such as SOAP or GWT RPC.\n\n## Discussion\n\n### Do I need API testing for my application?\n\nA web application typically consists of three layers: user interface, business logic and database. End-to-end testing would test all layers of the app but it's also slower. Problems are hard to isolate. Business logic may need many tests, for which we will end up unnecessarily exercising the UI in the same way. Moreover, end-to-end testing can begin only when all layers are available. API testing solves this problem by bypassing the UI. It executes tests at the service or business layer. \n\nWhile unit tests are typically written by developers and take a white-box testing approach, API tests are usually written by the QA team and view the system under test (SUT) as a black box. However, not everyone agrees on this division of roles. Some feel that developers should do API testing since they created the APIs. Others say that since APIs specify a contract, they need to be validated by testers. 
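Whoever writes them, such black-box tests are typically short scripts run against the API endpoints. A minimal sketch using Python's requests library in a pytest style is shown below; the base URL, endpoints and response fields are hypothetical:

```python
# Minimal illustration of a black-box API test. The endpoint and fields
# are hypothetical placeholders, not a real service.
import requests

BASE_URL = "https://api.example.com"

def test_create_and_fetch_order():
    # Create a resource and validate the response code and body.
    created = requests.post(f"{BASE_URL}/orders", json={"item": "book", "qty": 1})
    assert created.status_code == 201
    order_id = created.json()["id"]

    # Reuse part of the first response in a follow-up request.
    fetched = requests.get(f"{BASE_URL}/orders/{order_id}")
    assert fetched.status_code == 200
    assert fetched.json()["item"] == "book"
```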
\n\nWhile unit tests exercise the business logic directly, API tests go through the API layer. \n\n\n### What's the flow for a typical API test?\n\nAn API test first calls an API endpoint, which is really a URL. HTTP headers are set as required for the test. The type of HTTP request may be GET, POST, PUT, DELETE, etc. With each of these, the necessary data is sent to the API endpoint. \n\nOnce a response is received, the response code and contents are validated. HTTP headers that specify access control, content type, or server might be validated. \n\nIn a sequence of API calls, some parts of a response may be used for the next API call. For example, a POST request might return an identifier. A subsequent GET request might verify that the response includes this identifier. \n\nAPI testing is generally black-box testing. We don't look at what happens behind the API server. We only validate the responses. But sometimes we may want to validate if an API request triggers another API request or updates the database. For example, an API request may trigger a request to Google Maps API. During testing, we could mock Google Maps API and validate the request made to the mocked API. \n\n\n### What are the possible benefits of API testing?\n\nSince data is exchanged in standard formats (XML, JSON), API testing is language agnostic. Any programming language can be used to create API tests. API responses can be easily validated since most languages have libraries to compare data in these formats. \n\nEnd-to-end testing can't be done unless all parts of the application are ready. With API testing, the business logic can be tested early on even when the GUI is still under development. This also facilitates easier end-to-end testing at a later stage. \n\nBecause APIs are usually well specified, API testing leads to high test coverage. Moreover, UI changes often during development. API tests based on well-defined specifications are easier to maintain. \n\nAPI testing is faster than UI testing. More tests can be performed in a shorter time. Releases can happen faster. When an API test fails, it's easier to find the source of failure. API testing also enables automation of CI/CD pipelines. \n\n\n### What types of tests are possible at the API layer?\n\nA wide variety of tests can be done at the API layer, both functional and non-functional: \n\n + Validation and functional tests ensure that APIs behave as desired and deliver specific functionalities.\n + Security and penetration tests would consider user authentication or authorization, threat detection, and data encryption.\n + Load testing checks app performance at normal and peak loading conditions, or if throttling is correctly applied at theoretical maximum load. For example, we may want to know how many API requests can be served per minute with a specific response time.\n + Tests can be designed to check for runtime errors, resource leaks and general monitoring of the app.\n + Fuzz tests include random data in API requests. App is expected to be robust against such requests.\n + UI events and interactions trigger API calls. Thus, UI testing is also an approach to API testing.\n + Since APIs interface various services, they play a key role in integration testing. However, they're also useful in end-to-end testing to validate dataflows across services.\n\n### What are some best practices for API testing?\n\nBefore creating API tests, document and understand the API. 
This should include API purpose, application workflows, supported integrations, and so on. \n\nSome tools for API testing include ReadyAPI, AcceIQ, Katalon, SoapUI, Postman, Apigee, JMeter, REST-assured, and more. Manual API testing could be a starting point. Tools such as Postman can help create tests manually, save them, and replay them later. For automated API testing, adopt an automation framework such as Robot Framework. \n\nCreate client code and components that can be reused across many tests. Write clear tests so that debugging and maintenance is easier. Organize each test into three parts: setup, execution and teardown. It should be possible to configure tests for different environments or customer requirements. \n\nWrite tests in a modular fashion. For example, user authentication and password change can be two separate tests, and the latter can be made dependent on the former. \n\nMeasure how long each test takes. This can help in scheduling tests. Schedule tests to execute every day. When a test fails, make the failure state explicit in the response or report. Test system should record failures for later analysis. \n\n## Milestones\n\n2000\n\nWhile APIs existed in earlier decades, the early 2000s mark the birth of modern APIs. During this time companies such as Salesforce, eBay and Amazon popularize the use of APIs. In these APIs, the use of XML data format becomes common. \n\nDec \n2009\n\nMike Cohn makes the point that test automation must be done at the correct level. He identifies three levels: unit, service and UI. He visualizes these into a **test automation pyramid**. We wish to do lots of unit testing and as little UI testing as possible. API testing happens in between and avoids unnecessary repetitions of the same UI actions. Although he calls the middle layer the service layer, it's not restricted to just service-oriented architecture (SOA). \n\n2018\n\nSince API specifications are formal, and with the recent progress of Natural Language Processing (NLP), some tools such as Functionize explore the possibility of automatically generating API tests from the specifications. This takes test automation to another level so that human testers can focus on exploratory and security tests.","meta":{"title":"API Testing","href":"api-testing"}} {"text":"# RISC-V Instruction Sets\n\n## Summary\n\n\nThe design of RISC-V instruction sets is modular. Rather than take the approach of a large and complex monolith, a modular design enables flexible implementations that suit specific applications. \n\nRISC-V defines base user-level integer instruction sets. Additional capability to these are specified as optional extensions, thus giving implementations flexibility to pick and choose what they want for their applications. The specifications of the base ISA has been frozen since 2014. Some of the extensions are also frozen while many others are being defined.\n\n## Discussion\n\n### Could you give an overview of RISC-V instruction set?\n\nRISC-V comprises of a base user-level 32-bit integer instruction set. Called **RV32I**, it includes 47 instructions, which can be grouped into six types: \n\n + R-type: register-register\n + I-type: short immediates and loads\n + S-type: stores\n + B-type: conditional branches, a variation of S-type\n + U-type: long immediates\n + J-type: unconditional jumps, a variation of U-typeRV32I has `x0` register hardwired to constant 0, plus `x1-x31` general purpose registers. All registers are 32 bits wide but in RV64I they become 64 bits wide. 
RV32I is a **load-store architecture**. This means that only load and store instructions access memory; arithmetic operations use only the registers. User space is 32-bit byte addressable and little endian. \n\nCorrespondingly, **RV64I** is for 64-bit address space and **RV128I** is for 128-bit address space. The need for RV128I is debatable and its specification is evolving. We also have **RV32E** for embedded systems. RV32E has only 16 32-bit registers and makes the counters of RV32I optional. \n\n\n### What RISC-V extensions have been defined?\n\nRISC-V defines a number of extensions, all of which are optional. Some of them are frozen and these are noted below:\n\n + M: Integer multiplication and division.\n + A: Atomic.\n + F: Single-precision floating point compliant with IEEE 754-2008.\n + D: Double-precision floating point compliant with IEEE 754-2008.\n + Q: Quad-precision floating point compliant with IEEE 754-2008.\n + C: Compressed instructions (16-bit instructions) to yield about 25-30% reduced code size. \"RVC\" refers to compressed instruction set.Among the evolving or future extensions are L (decimal float), B (bit manipulation), J (dynamically translated languages), T (transactional memory), P (packed SIMD), V (vector operations), N (user-level interrupts), and H (hypervisor support). \n\nWhen multiple extensions are supported, that ISA variant can be described by concatenating the letters; such as, RV64IMAFD. To represent the standard general purpose ISA, \"G\" is defined as a short form for \"IMAFD\". \n\nRV32I uses one-eighth of the encoding space. This means there's plenty of room for custom extensions. \n\n\n### What are pseudo-instructions?\n\nTo ease the job of an assembly language programmer or a compiler writer, some base instructions can be represented by what are called **pseudo-instructions**. For example, a no operation is `addi x0, x0, 0` for which `nop` is the pseudo-instruction. Likewise, branch if zero is `beq rs, x0, offset` for which `beqz rs, offset` is the pseudo-instruction. \n\n\n### What are privileged instructions?\n\nApplication code usually runs in user mode or **U-mode**. RV32I and RV32G are user mode ISAs. Two more modes are available:\n\n + **Machine mode (M-mode)**: For running trusted code. This is the most privileged mode in RISC-V and has complete access to memory, I/O and anything else to boot and configure the system. It's most important feature is to handle synchronous exceptions and interrupts. The simplest RISC-V microcontrollers need to support only M-mode.\n + **Supervisor mode (S-mode)**: For supporting operating system needs of say Linux, FreeBSD or Windows. This is more privileged than U-mode but less privileged than M-mode. Where OS needs to process exceptions/interrupts, *exception delegation* is used to pass control to S-mode selectively. S-mode also provides a virtual memory system.\n\n### Could you describe some technical considerations in the design of RISC-V instructions?\n\nDesign of RISC-V ISA considered cost, simplicity, performance, implementation-independence, room for growth, program size, and ease of use. \n\nRV32I includes generously 32 integer registers, making it easier for compilers to use them more often than memory. By keeping instructions simple, RISC-V instructions typically require only one clock cycle and deliver predictable performance. For dynamic linking, it adopts PC-relative branches. 
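Because the 32-bit instruction formats place fields at fixed positions, decoding needs only a few shifts and masks. The Python sketch below is an illustrative helper (not part of the RISC-V specification or any toolchain) that extracts the fields of an R-type instruction using the standard RV32I bit positions:

```python
# Field positions follow the standard RV32I R-type layout:
# funct7[31:25] rs2[24:20] rs1[19:15] funct3[14:12] rd[11:7] opcode[6:0]
def decode_r_type(word: int) -> dict:
    return {
        "opcode": word & 0x7F,
        "rd":     (word >> 7) & 0x1F,
        "funct3": (word >> 12) & 0x07,
        "rs1":    (word >> 15) & 0x1F,
        "rs2":    (word >> 20) & 0x1F,
        "funct7": (word >> 25) & 0x7F,
    }

# 'add x3, x1, x2' encodes to 0x002081B3 in RV32I.
fields = decode_r_type(0x002081B3)
assert fields["rd"] == 3 and fields["rs1"] == 1 and fields["rs2"] == 2
```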
\n\nInstructions offer three register operands, avoiding the extra move required by ISAs with only two register operands. These are also in the same positions so that access can begin before decoding the instruction. \n\nThe design was also informed by mistakes of other ISAs. For example, initial Alpha ISA did not have byte or half-word load/store. The shift operation in ARM can be seen as an overdesign. Delayed branches of MIPS and SPARC affected their ISAs. ARM Thumb and MIP16 added 16-bit instructions in hindsight. \n\n\n### Are RISC-V instructions without precedents?\n\nWhen we consider the 122 instructions of RV32G, only 6 of them are without precedents. 98 instructions appear in at least three prior ISAs. 18 instructions appear in one or two prior ISAs. This study included 18 prior RISC ISAs, including the CDC 6600 dating back to 1964. \n\nIt's been commented that one cannot design a flawless ISA, nor an ISA with flaws doomed to fail. \n\n## Milestones\n\n1981\n\nRISC-I is defined by David A. Patterson at UC Berkeley as an alternative to increasingly complex instructions of the computers of the day. This is followed by RISC-II (1983), SOAR (1985) and SPUR (1986) projects. \n\n2010\n\nResearchers at the UC Berkeley conceive RISC-V as the fifth generation of their RISC design from the 1980s. \n\nMay \n2011\n\nVersion 1.0 of the RISC-V base user-level ISA is published as volume 1. This version is not frozen. Volume 2 is for supervisor-level ISA. \n\nMay \n2014\n\nVersion 2.0 of the user-level ISA is published. This is a frozen version. \n\nMay \n2017\n\nVersion 2.2 of the user-level ISA is published. This does not modify the ISA base plus extensions IMAFDQ version 2.0. Extension C is frozen in this version.","meta":{"title":"RISC-V Instruction Sets","href":"risc-v-instruction-sets"}} {"text":"# Multi-Access Edge Computing\n\n## Summary\n\n\nIn traditional cloud computing, apps and services are hosted in data centres. Devices connect to the data centres via multiple hops traversing the internet. If devices are smartphones, they probably connect via the RAN and the CN of the mobile network operator. Multi-Access Edge Computing (MEC) brings apps and services closer to the network edge. MEC also exposes real-time radio network and context information. The benefit is better user experience, network performance, and resource utilization. \n\nMEC is just one among many edge computing paradigms, others being cloudlets, fog computing and Mobile Cloud Computing (MCC). MEC is perhaps more popular since it's standardized. ETSI is the main organization standardizing MEC. \n\nOne possible definition is that, \n\n> MEC offers application developers and content providers cloud-computing capabilities and an IT service environment at the edge of the network.\n\n## Discussion\n\n### What are the benefits of MEC for users, service providers and network operators?\n\nMEC enables many use cases that require high bandwidth, ultra-low latency and high device density. Users get a richer experience. Safety is improved for critical applications such as industrial automation and self-driving vehicles. If 5G promises many novel use cases, it's MEC that delivers them. \n\nMEC helps network operators reduce their CAPEX by purchasing general-purpose equipment rather than specialized telecom equipment. By reducing bandwidth requirements between RAN and CN, OPEX is also reduced. Due to virtualization, reliability and scalability is improved. Heterogenous configurations can be supported. 
Network entities can be rapidly deployed leading to just-in-time service initiation. Network performance can be optimized by adapting to changing radio conditions. Operators can offer innovative applications and unlock new revenue streams by safely opening up their networks to third-parties. \n\nTraditionally, it made sense only for big service providers such as Netflix to deploy edge servers for their traffic. With MEC, even smaller service providers and independent software vendors can get their applications to the edge on multi-vendor platforms. This is mainly because MEC is standardized with open interfaces and protocols. \n\n\n### What are some use cases that can benefit from MEC?\n\nMEC brings proximity, ultra-low latency, high bandwidth and virtualization. These characteristics can benefit **many use cases**: data/video analytics, location tracking services, augmented reality, IoT, local hosting of video content, data caching, remote surgery, patient management, radio-aware video optimization, autonomous vehicles, and more. \n\nAs more and more applications become **virtualized**, MEC will become increasingly important. Applications can be dynamically deployed, scaled and moved between the cloud and the edge. \n\nOn college campuses, business parks, hospitals or factories **private/local networks** can be deployed. For **mission critical communications**, MEC can continue delivering services even when backhaul communications fail. \n\n**IoT applications** require ultra-low latency, mobility management, geo-distribution, location awareness and scalability. MEC is able to provide these for diverse IoT applications such as smart home, smart city, smart agriculture, smart energy, healthcare, wearables and industrial internet. \n\n\n### Which are the main enablers for MEC?\n\nMEC is made possible by the following technologies: \n\n + **Network Functions Virtualization (NFV)**: NFV is about deploying network functions in virtual environments rather than with dedicated hardware. MEC reuses the NFV Infrastructure (NFVI) and NFV Management and Orchestration (MANO). In other words, MEC platform and applications appear as VNFs and run on the same infrastructure as RAN or CN VNF components.\n + **Software-Defined Networking (SDN)**: Involves separating the control plane and user plane, logically centralizing the control plane and programming the control plane in a more flexible manner using APIs. An SDN controller can be in the MEC server. SDN brings scalability, availability, resilience, interoperability and extensibility to MEC operation.\n + **Service Function Chaining (SFC)**: Built from NFV, operators can use SFC to interconnect multiple NFs (virtual or physical) in a specific order to achieve end-to-end services and steer traffic flows.\n + **Network Slicing**: A network slice is a logical network set up for specific performance requirements. Using SDN/NFV, slices can be dynamically instantiated, modified or terminated.\n + **Information-Centric Networking (ICN)**: Replaces client-server model of the internet with publish-subscribe model involving caching, replication and optimal content distribution.\n\n### Could you describe ETSI's MEC reference architecture?\n\nETSI's MEC reference architecture has the following main entities: \n\n + **MEC Host**: Contains MEC platform. Virtualized infrastructure providing compute, storage, and network resources for running MEC applications. Routes traffic among applications, services and access/local/external networks. 
Executes traffic rules received by MEC platform.\n + **MEC Application**: Runs on a Virtual Machine (VM) or container on the MEC host. Configured and validated by MEC management.\n + **MEC Platform**: Has essential functionality needed to run MEC applications. Enables applications to discover, advertise, offer or consume MEC services. Platform can also provide services. Receives traffic rules from MEC platform manager. May be interfaced with an API gateway for apps to access MEC service APIs.\n + **MEC Management**: Comprises of system-level and host-level management. The former oversees the complete MEC system and includes Multi-Access Edge Orchestrator (MEO) as a core component. The latter manages a particular host and its applications and includes MEC Platform Manager (MEPM) and Virtualisation Infrastructure Manager (VIM).The reference architecture contains three groups of reference points among entities: **Mp** for platform functionality, **Mm** for management, and **Mx** for external entities. \n\n\n### What are the different MEC deployment models?\n\nMEC servers can be deployed at base station sites, aggregation points in the radio network, mobile EPC sites or regional data centres. A distributed data centre or a gateway at the edge of the CN could be MEC deployment sites. There's a trade-off: closer to the edge, greater are the benefits but so is the cost of deploying at many locations. It's therefore expected that most operators will initially deploy at a few EPC sites and central offices. As more applications emerge demanding 1ms latency, operators will start deploying closer to the edge. To reduce CAPEX, operators may even share MEC infrastructure. \n\nAn MEC server can be indoors at a multi-RAT cell aggregation site serving an enterprise; or it could be outdoors for public coverage scenarios such as stadiums and shopping malls. Ultimately, deployment depends on scalability, physical constraints, performance criteria, cost of deployment, etc. Some MEC services may be unavailable in some deployment scenarios. \n\nVendors offer equipment optimized for specific deployment locations. For example, GIGABYTE's H242-Z10 is for base station tower whereas G242-Z10 is for aggregated sites. \n\n\n### Could you mention some resources to learn more about MEC?\n\nMEC specifications from ETSI are available for download via a search feature. \n\nApart from the specifications, via the DECODE Working Group, ETSI provides API implementations, testing, a proof-of-concept framework and a sandbox environment. This makes it easier for vendors, operators and application developers to implement MEC. \n\nA beginner can start with ETSI's white papers on MEC. There's also an MEC blog for latest updates.\n\nThe main MEC wiki page is an entry point for MEC ecosystem, testing, sandbox, proof-of-concept updates, deployment trials and hackathons. The MEC Ecosystem page lists MEC applications and solutions. \n\nAmong the many open source projects related to edge computing, two important ones are Akraino and EdgeX Foundry. These come under LF Edge, an umbrella organization under the Linux Foundation. A useful resource is LF Edge's own Wiki page.\n\n## Milestones\n\nSep \n2014\n\nETSI forms the Mobile Edge Computing Industry Specification Group (ISG). \n\nDec \n2014\n\nThe **first meeting** of MEC ISG takes place and is attended by 24 organizations, including network operators, vendors, technology suppliers and Content Delivery Network (CDN) providers. 
\n\n2015\n\nETSI's MEC ISG publishes two specification documents: *Proof of Concept Framework* and *Service Scenarios*. The group develops three PoC scenarios. These relate to video optimization and orchestration by adapting to RAN or radio conditions. \n\nSep \n2016\n\nAt the MEC World Congress, at the ETSI MEC PoC Zone, six **multi-vendor proofs of concept** are demonstrated based on the MEC PoC framework. It's hoped that such PoCs lead to a diverse and open MEC ecosystem. \n\n2017\n\nETSI renames MEC from *Mobile Edge Computing* to *Multi-Access Edge Computing*. The new name reflects its relevance to mobile, Wi-Fi, and fixed access networks. \n\n2018\n\nETSI's MEC group works on **Phase 2 activities** to address charging, regulatory compliance, mobility support, containerization, support of non-3GPP mobile networks, automotive vertical, and more. This year sees approval of 12 MEC PoCs and 2 MEC Deployment Trials (MDTs). A new working group named *DECODE* is also created to focus on deployment and ecosystem development. \n\nJun \n2019\n\n**Akraino** Release 1 is released with ten \"ready and proven\" blueprints. Blueprints are tested by Akraino community members on real hardware. They serve as an easy starting point for real-world edge implementations and edge use cases. Akraino comes under LF Edge, an umbrella organization founded in January 2019 to bring together multiple edge-specific projects. In Release 3 (Aug 2020), **MicroMEC** is specified. By Release 4 (Feb 2021), Akraino has 27 blueprints.","meta":{"title":"Multi-Access Edge Computing","href":"multi-access-edge-computing"}} {"text":"# PBX Hacking\n\n## Summary\n\n\nPBX (Private Branch Exchange) is a private telephone network that handles an organization's internal and external communications. For external connections, the PBX connects to the Public Switched Telephone Network (PSTN) using a Telecommunication Service Provider (TSP) or an Internet Service Provider (ISP). \n\nHackers target PBX networks in ways that can impact the company. Using the PBX, they might place long-distance calls for free, leaving the company to pay the bills. Hackers might steal data or simply render the network unusable. Such hackers who specialize in hacking phone systems are called *phreakers*. \n\nPBX hacking was traditionally done on analogue PBX systems using various methods. When IP PBX systems were introduced in the 1990s, hacking methods were adapted to these newer networks. There are many best practices that companies can follow to mitigate the dangers of PBX hacking.\n\n## Discussion\n\n### Why is it important to know about PBX Hacking and its role worldwide?\n\nA PBX is a system that interconnects communication equipment such as switches, hubs, telephone adapters and routers. There are different types of PBX systems, of which digital PBX and IP PBX/VoIP PBX are the ones preferred for business phone systems. \n\nAccording to surveys by the CFCA (Communications Fraud Control Association), PBX and IP-PBX hacking ranked among the top five fraud methods during 2013-2017. In France alone, PBX toll fraud is estimated to cost companies $220 million a year. PBX hacking itself is much older: cases have existed ever since switchboards were introduced.\n\nTom Mulhall, in his 1997 paper “Where have all the hackers gone”, observes that while it might look like hacking has declined, hackers have instead migrated from computer hacks to PBX/voicemail attacks. 
\n\n\n### How does PBX Hacking affect the company or an organisation?\n\nPBX hacking, PBX toll fraud and PBX fraud are terms used when phreakers exploit loopholes in a PBX system.\n\nAcross different types of PBX systems, some functions are common: they manage calls and they're connected to company systems that hold sensitive data such as customer call records. A system compromised through viruses or worms results not only in the loss of sensitive data but also in extra charges to fix the system. There's also the possibility that the hacker listens to phone conversations or voicemails, or shuts down the PBX entirely. \n\nAfter hacking a PBX, phreakers usually make free long-distance calls or run call-sell operations from phone booths or private phones, thereby generating funds illegally. By offering calls for less than the actual cost of dialling directly, they leave the companies owning those PBXs to pay the bill. \n\nIt's also difficult to track these call-sellers since they hide their activities from law enforcement officials by PBX-looping (using one PBX to place calls out through another PBX). \n\n\n### What are the ways the PBX System can be hacked?\n\nDISA and voicemail are classic routes for phreakers to penetrate an organisation's PBX system. DoS is also one of the more severe attacks against a PBX system. \n\n + Brute force attack: A trial-and-error method repeated until the password, log-in credentials or encryption keys are discovered.\n + DISA and Voice Mails: DISA (Direct Inward System Access) is a PBX service where the user dials into the PBX and then supplies information for authorisation to use PBX services, such as dialling a local user. The authorisation process comprises the user's account, the user's password and the caller ID. Fraudsters use tricks and hacks to obtain this information, which then lets them use the PBX to generate outbound calls. With voicemail, fraudsters aim to reach voicemail boxes, obtain passwords and use them to gain access to the system.\n + Denial of Service Attack (DoS): A denial of service attack makes the telephony service unavailable, through physical damage or software strategies, denying telephones the ability to place or receive calls. It works by flooding networks with fake traffic or server requests generated by machines compromised by viruses and malware.\n\n### What are the other famous ways for the PBX System to be hacked?\n\n + Internal Enemy: One example is an employee who forwards their work number to a private number, either overseas or within the country, leaving the organisation to foot the bill for every call. In some instances, revenge by a poorly treated employee is what turns them against the company.\n + Poor Port Management: A poorly protected port linked to the PBX can offer hackers a chance to create a “back door” into critical assets such as customer databases and business applications. It also gives them a path to target modems.\n + Social Engineering: A technique where hackers obtain sensitive data simply by impersonating or manipulating a person or an entire organisation. Frank Abagnale is known as one of the most infamous social engineers in history.\n\n### How does SIP impact the IP-PBX?\n\nSIP (Session Initiation Protocol) is a protocol that initiates a session in an IP network, aiming to provide functions similar to the traditional PSTN over the internet.
It deals with signalling messages, address resolution, user management, packet transfer and services, and is as vulnerable as HTTP or any other service that's publicly available on the Internet. \n\nIt can easily be hacked through DoS attacks. A ‘SIP Register Flooding’ attack creates traffic by sending streams of SIP REGISTER messages, while a ‘Call Flooding’ attack creates traffic by sending streams of SIP INVITE requests. These attacks make it difficult to complete any legitimate calls.\n\nSIP injection attacks are another way in which phreakers exploit SIP by injecting malicious code. Buffer overflow attacks, RTP (Real-Time Transport) injection attacks and SQL injection INVITEs are some examples.\n\nBecause SIP doesn't enforce any source message validation mechanisms, attackers can get their requests processed without authentication, opening a path for spoofing or modification of SIP control messages.\n\n\n### How to analyse whether we have become the victim of PBX Hacking?\n\nThere are some indications that suggest the system has been hacked (a minimal detection sketch follows this list):\n\n + Overload on the incoming and outgoing trunks.\n + A sudden change in call patterns, mostly an increase in international calls.\n + Lengthy calls or calls to premium rate services during late-night hours, weekends and holidays.\n + Strange messages left in the voicemail boxes.\n + Signs of war dialing, such as short calls or wrong-number calls, or any sign of social engineering.\n + Leakage of business secrets and sensitive data, indicating that phone conversations are being intercepted (the hacker listened to the phone conversation).\n + In the case of VoIP systems, webcams and microphones that activate automatically.
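The indicators above lend themselves to simple automated checks on call detail records (CDRs). Below is a minimal sketch, assuming a hypothetical CDR layout with `start`, `destination` and `duration_min` fields and made-up premium-rate prefixes and thresholds; real PBX exports, dial plans and limits vary by vendor and country.

```python
# Illustrative fraud-indicator scan over call detail records (CDRs).
# Field names, prefixes and thresholds are hypothetical.
from datetime import datetime

PREMIUM_PREFIXES = ("1900", "0090")        # made-up premium-rate prefixes

def is_suspicious(call):
    start = datetime.fromisoformat(call["start"])
    late_night = start.hour >= 22 or start.hour < 6
    weekend = start.weekday() >= 5          # Saturday or Sunday
    international = call["destination"].startswith(("00", "+"))
    premium = call["destination"].lstrip("+").startswith(PREMIUM_PREFIXES)
    very_long = call["duration_min"] > 60
    # Flag combinations matching the indicators listed above
    return premium or very_long or (international and (late_night or weekend))

calls = [
    {"start": "2023-04-02T02:15:00", "destination": "00902125550123", "duration_min": 85},
    {"start": "2023-04-03T10:05:00", "destination": "5551234", "duration_min": 4},
]
for call in calls:
    if is_suspicious(call):
        print("Review this call:", call)
```

In practice such rules would feed a fraud-monitoring dashboard or an alerting system rather than a print statement.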
\n\n\n### What can we do to prevent them?\n\n + Conduct regular audits in the organisation to assess its vulnerability to fraudulent exploits.\n + To secure DISA, use Call Detail Recording to identify call activity linked with individual authorisation codes, while keeping only limited print copies of these records.\n + Change security codes regularly, and limit the administration of authorisation codes to a few carefully chosen employees of the company or organisation.\n + Use control functions within the firewall to control traffic covering the subsystem, protecting network resources and the confidentiality of all traffic. Encrypting signalling and media streams further raises the security level.\n + Deploy a strong firewall that evaluates SIP message contents for attacks and blocks non-SIP traffic.\n + Perform regular maintenance and analysis of logs.\n + Give PBX users security advice: never leave the phone set unlocked, don't share security codes of any type, change security codes regularly, and never keep sensitive data in the phone's memory.\n + Use strong passwords.\n + Disable channels or services that are not in use.\n + For remote access to a VoIP PBX, further secure the system with a VPN and enable endpoint filtering.\n\n### What techniques or tools do PBX vendors use to mitigate hacking?\n\nSome PBX vendors already provide network security in their systems, such as network firewalls, DDoS (Distributed Denial of Service) prevention, network posture assessments and encryption during data transfer. At the time of purchase, they also brief customers on the risks of hacking and the ways to avoid it.\n\nThere are some checks to see whether the provider is doing their part: look for accreditations (certificates showing that the product meets security standards) such as TEC certification, prevention measures built into the system, regular and up-to-date updates, and call encryption. \n\n## Milestones\n\n1878\n\nAt a time when telephone calls are routed via **manual switchboards**, two switchboard operators of the Bell Telephone Company intentionally disconnect or misdirect calls. Historically, this is probably the first attempt at hacking telephone networks. Hackers in these early days are simply practical jokers who don't cause serious damage. These operator hacks disappear once switchboards get automated in the 1890s via electro-mechanical switching. \n\nNov \n1963\n\nMIT's student newspaper, *The Tech*, features an article titled *Telephone Hackers Active*. It describes how student hackers are occupying tie-lines between MIT and Harvard, or making long-distance calls for free. A PDP-1 computer searches for outside lines by listening for a dial tone. Some students are even expelled for this. Subsequently, the MIT phone system is updated to prevent calls over tie-lines. \n\n1968\n\nIn the U.S., a community of phone hackers begins to take shape during 1968-1969. About this time, the term **phone freak** comes into use. By the end of the decade, whistles and flutes are readily available to aid hackers. For instance, since the mid-1960s, the Cap'n Crunch whistle that comes with a box of cereal has been used by hackers. It emits 2,600 Hz, a useful tone for hacking. Years later, phone companies install digital filters to block this and other specific tones.
\n\n1971\n\nIn June, a newsletter of the Youth International Party Line, uses the word **phreek** in its first issue. In October, an Esquire article titled *Secrets of the Little Blue Box* uses the term **phreak** in what may be the first published use of the term. The article describes how toll-free 800 numbers and a tone at 2,600 Hz could be used to occupy tandem lines and place long-distance calls for free. \n\n1972\n\n**Semiconductors** are used in PBX equipment and switching. Through the 1970s, dropping cost of hardware and lower operational expenses make PBX attractive. More and more companies install PBX systems. This also implies more opportunities for phreakers.\n\nApr \n1992\n\nFor greater awareness, the NIST publishes **NISTIR 4816**, *PBX Administrator’s Security Standards*. This discusses the different techniques that PBX hackers use and what PBX administrators can do to mitigate them. These include setting passwords, educating users, protecting voicemails, monitoring PBX options, reviewing billing records and limiting outgoing international calls. A related publication from 2001 is **NIST 800-24**, *PBX Vulnerability Analysis*. \n\n1993\n\nIn the U.K., the National Computing Centre includes in its regular survey report a new item name \"PBX hacking\" that has cost organizations £10,000. Although computer hacking dropped from 29.7% (1987) to 8.8% (1993), it's clear that hackers have simply moved to hacking PBX and voicemail systems. This is the result of traditional PBX systems giving way to **IP PBX systems**.\n\nNov \n2011\n\nFour hackers in the Philippines are arrested for PBX hacking funded by a terrorist organization. The phreakers used PBX systems to call Premium Rate Service (PRS) numbers. The revenue from this was split between the phreakers and the terrorists. One of the phreakers had previously hacked PBX systems by exploiting default passwords and unused extensions. He and his associates then offered long-distance call services to customers at low rates, placing calls worth $55 million during 2005-2008. \n\nDec \n2014\n\nA Pakistani national named Qasmani is arrested for scamming American telecom companies to the loss of $19.6 million during 2008-2012. Qasmani has been an active phreaker since the late 1990s. He and his associates exploited unused extensions of PBX systems and placed calls to pay-per-minute PRS numbers that they had set up. In June 2017, he's sentenced to four years in prison. \n\n2019\n\nA survey of Communications Service Providers (CSPs) by the Communications Fraud Control Association reveals that the estimated global loss in 2019 due to communications fraud is $28.3 billion. Of this, PBX hacking and IP PBX hacking account for $1.82 billion each.","meta":{"title":"PBX Hacking","href":"pbx-hacking"}} {"text":"# Orthogonal Frequency Division Multiplexing\n\n## Summary\n\n\nOrthogonal Frequency Division Multiplexing (OFDM) is a key wideband digital communication method used in wireless transmission. Data is split into several streams and transmitted on multiple narrowband channels to reduce interference and crosstalk.\n\nDue to its good spectral efficiency and relatively less complexity, it's one of the popular techniques used in telecommunications. It's used in multiple standards including DAB, HDTV, WLAN (802.11a/g/ac), WiMAX, and LTE. 
It enables transmission of high data rates (in the order of 1 Gbps) on a wireless channel.\n\n## Discussion\n\n### Could you give an overview of OFDM?\n\nOFDM is a **Multi-Carrier Modulation (MCM)** scheme, which uses closely spaced multiple subcarriers to transmit data. Data to be transmitted is split and transmitted using multiple subcarriers instead of using a single carrier. The key idea is instead of transmitting at a very high bit rate, the data is transmitted over multiple subchannels each carrying lower bit rates. \n\nUnlike the traditional Frequency Division Multiplexing (FDM), the OFDM does not use guard bands to separate the various subchannels. One of the key features of OFDM is the orthogonality of the subcarriers used to transmit data. The orthogonality of subcarriers results in more subcarriers in a given bandwidth. This improves spectral efficiency. It also eliminates the interference between subcarriers, often called **Inter-Carrier Interference (ICI)**.\n\n\n### Why is OFDM used in wireless transmission?\n\nOne of the key challenges in wireless transmission as compared to wired transmission is the phenomenon of multipath fading and **Inter-Symbol Interference (ISI)**. OFDM helps in mitigating both these effects, making it one of the key technologies to be used in wireless transmission. OFDM uses spectrum in a more efficient way compared to some of the other techniques used to overcome multipath fading and Inter Symbol Interference. \n\n\n### How does OFDM ensure subcarriers do not interfere with each other?\n\nOFDM splits the available spectrum into multiple subbands and transmits data using multiple subcarriers. The subcarriers are chosen such that they are orthogonal to each other. This ensures that data from one subcarrier does not interfere with the data on the other. To maintain orthogonality between subcarriers, the subcarriers are chosen such that they are all integer multiples of the base frequency. If the total bandwidth of the system is B Hz. Then the base frequency (f0) is given by B/N, where N is the number of subcarriers in the system. The subcarriers used are f0, 2f0, 3f0 ... (N-1)f0.\n\nThe spectrum of each transmitted subcarrier in OFDM system is a `sinc` function with side-lobes that produce overlapping spectra between subcarriers. Since the carriers are orthogonal, the peak of each subcarrier coincides with nulls of other subcarriers. Even though there's overlap of spectra between subcarriers, there's no interference between subcarriers.\n\n\n### What's the role of FFT and IFFT in OFDM implementation?\n\nOFDM system involves mapping of symbols onto a set of orthogonal subcarriers that are multiples of the base frequency. This can be implemented in digital domain using Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT). These transforms are important from OFDM perspective as they can be viewed as mapping digital input data onto orthogonal subcarriers.\n\nThe IFFT takes frequency-domain input data and converts it to the time-domain output data (analog OFDM symbol waveform). This waveform is transmitted by the OFDM transmitter. The receiver receives the waveform and uses FFT transform to convert the data back from time-domain into frequency domain to recover the data back.\n\n\n### What's guard band and cyclic prefix in OFDM?\n\nOFDM systems make use of guard band and cyclic prefix (CP) to overcome the issue of ISI. 
While guard band is not required to achieve orthogonality of subcarriers, it helps in overcoming ISI in a multipath channel. The duration of the guard band should be more than the channel spread of the wireless medium.\n\nThe cyclic prefix is transmitted during the guard band interval. After IFFT, some end bits of the OFDM symbol are copied to the guard band before the symbol to form the CP.\n\nReceivers often have a channel equalizer to combat channel distortion. CP simplifies equalizer implementation. Essentially, CP converts a linear convolution to a circular convolution. Circular convolution in time domain is equivalent to a simple multiplication in the frequency domain. The channel equalizer in the receiver multiplies the received symbol by the inverse of the channel coefficients in frequency domain to recover the original transmitted symbol, assuming that fading is constant over the subband.\n\n\n### How is a typical OFDM transmitter and receiver implemented?\n\nIn an OFDM transmitter, the input bits are first grouped into symbols in frequency domain by using a serial-to parallel-converter. These frequency domain symbols are then taken as input by the IFFT block. The IFFT block converts the input symbol into time domain symbol by doing an IFFT operation on the input. The cyclic prefix is added to the output of IFFT block by the cyclic prefix block. This symbol is then converted back to series of bits by the parallel-to-serial converter and transmitted. \n\nIn the OFDM receiver the input signal is passed through the channel equalizer block first, to cancel any impairments introduced by the wireless channel. The output of the equalizer is then input to the prefix extraction block to remove the cyclic prefix. The output of the prefix extraction block is then given to the FFT block. This block converts the input to frequency domain output by doing an FFT operation. Thus the OFDM receiver recovers the original bits back by doing a parallel-to-serial operation. \n\n\n### What are the advantages of OFDM?\n\nHere are some advantages: \n\n + **High spectral efficiency**: Compared to other schemes like spread-spectrum, OFDM uses the available spectrum in a more efficient way.\n + **Robust against multipath fading and Inter-Symbol Interference**: Due to low data-rate in each subchannel, OFDM is more resilient to inter-symbol interference caused by multipath propagation.\n + **Simpler channel equalizer in receiver**: In case of OFDM the channel equalization can be done in frequency domain and is a multiplication operation of the received symbol with the channel equalizer.\n + **Efficient implementation**: Implementation can be done using IFFT and FFT, thus eliminating the need for multiple mixers in transmitter and receiver.\n + **Robustness against selective fading**: Since the transmission is done using multiple smaller subbands, frequency selective fading appears as flat fading for each subband.\n + **Resilience to narrow band interference**: Due to narrow band interference, contents in some of the subchannels will be lost. 
It is possible to recover it by using channel coding and interleaving data before transmission.\n + **Tuned subchannel receivers not required**: Unlike conventional FDM, tuned subchannel receivers are not required, thus simplifying the receiver design\n\n### What are the disadvantages of OFDM?\n\nHere are some disadvantages: \n\n + **Sensitive to Carrier offset and frequency drift**: In case there is an offset between the transmitter and receiver carrier frequencies, the orthogonal property of the OFDM is lost. Thus OFDM systems are very sensitive to carrier frequency offsets.\n + **High Peak-To-Average power ratio**: Since the output of multiple subbands are combined to get the OFDM signal, the OFDM signal has a very high dynamic range of amplitude. This leads to complex RF design as the amplifiers need to be linear for the entire amplitude range. This also leads to lower efficiency of the RF amplifier.\n\n\n## Milestones\n\n1870\n\nAlexander Graham Bell is initially funded by his future father-in-law Gardiner Hubbard to work on *harmonic telegraphy*, which is an **FDM** transmission of multiple telegraph channels. With FDM, more than one low rate signal is carried over a relatively wide channel using a separate carrier frequency for each signal.\n\n1957\n\nCollins Radio Company develops the **Kineplex** system to overcome multipath fading. 20 tones are modulated by differential 4-PSK without filtering. Tones can be separable by bank of filters at the receiver. \n\n1961\n\nFranco and Lachs propose a multitone, code-multiplexing scheme using a 9-point QAM constellation for each carrier. Sine and Cosine waves used to generate orthogonal signals. \n\n1966\n\nWhat could be termed as the birth of modern OFDM, Robert W. Chang publishes *Synthesis of Band-Limited Orthogonal Signals for Multichannel Data Transmission*. He uses Fourier transform to make the subcarriers orthogonal. His system is able to transmit signals in parallel without either ISI or ICI. \n\n1967\n\nB.R. Saltzberg extends Chang's work to complex data, that is, Quadrature Amplitude Modulation (QAM). He shows that I and Q streams should be staggered by T/2, and adjacent channels the other way. Zimmerman and Kirsch publish a paper on the design of an HF (high frequency) radio OFDM transceiver (KATHRYN). This uses 34 subchannels in a 3kHz bandwidth. KATHRYN uses analog hardware to generate orthogonal signals using Discrete Fourier transform (DFT). \n\n1971\n\nWeinstein and Ebert use **Fast Fourier Transform (FFT)** implementation of DFT. This greatly reduces the cost and complexity of OFDM systems. However, Weinstein notes later Bell Labs didn't show much interest in this. The big applications of OFDM (ADSL, wireless communications, digital audio/video broadcasting) came years later. Weinstein and Ebert also introduce the **guard band** for multipath channels. \n\n1980\n\nAlthough earlier work made the subcarriers orthogonal, in a time dispersive channel the orthogonality was lost resulting in ISI. Peled and Ruiz solve this by introducing **Cyclic Extension (CE)**. Today we use the more familiar term **Cyclic Prefix (CP)**. Effective data rates are reduced but the gain in terms of zero ISI is worth it. \n\n1985\n\nFor better channel estimation, L.J. Cimini introduces a pilot-based method to reduce interference from multipath and co-channels. This work becomes important in the context of cellular mobile systems where channels experience fast selective fading. 
\n\nJan \n1993\n\nAmati's prototype of an ADSL modem wins a competition with Carrierless Amplitude-Phase (CAP) Modulation in a Bellcore-sponsored test. The technique used is **Discrete Multi-Tone (DMT)**, which is essentially OFDM. Soon ADSL becomes the first major consumer-oriented application of OFDM. It uses 256-point DFT with subcarriers separated by 4.3125 kHz and a (block) symbol rate of 4000/s. Early deployments using Amati equipment happen with British Telecom in late 1993 and early 1994, offering 2 Mbps downstream. \n\n1999\n\n**802.11a** WLAN standard is published as an amendment of 802.11 from 1997. It uses OFDM in the physical layer for data transmission. 802.11a's OFDM has 52 subcarriers (4 pilot + 48 data), 64-point FFT, and 312.5 kHz of subcarrier spacing.","meta":{"title":"Orthogonal Frequency Division Multiplexing","href":"orthogonal-frequency-division-multiplexing"}} {"text":"# Tor Browser\n\n## Summary\n\n\nTor Browser is a standalone desktop application that helps users browse the web safely and anonymously. Since many users use the Internet via a web browser, Tor Browser is a useful application for anonymity. It hides your device's IP address and its location. It prevents third parties from snooping into your online activities or even tracking you. \n\nTor Browser is based on the open sourced code of Firefox browser. It adds further privacy and security settings. \n\nUnderneath, Tor Browser relies on the Tor network of relay nodes. In the past, it was difficult to use Tor since one had to install a number of tools separately. Today, Tor Browser bundles all the necessary tools into a single archive of files, thus making it easier for users to adopt Tor.\n\n## Discussion\n\n### What are the features of the Tor Browser?\n\nTor Browser is cross-platform and available for x86 and x86\\_64 architectures. Encryption and decryption are done automatically. It can update itself to latest version. What previously required separate installations of Tor, Firefox browser, Torbutton (Firefox add-on) and Polipo (HTTP proxy), are now conveniently bundled within the Tor Browser. \n\nA new separate circuit is automatically created for each domain though they share a common guard node. This process is transparent to users but Tor Browser allows users to inspect the current circuit and request a new circuit if so desired. From within the browser, users can also request Tor bridges by solving a captcha. Each circuit lives for only ten minutes. \n\nSince each website is isolated from another, this prevents tracking from third parties. Tor Browser resists browser fingerprinting. To prevent cookie-based tracking, cookies and browsing history are cleared when the browser is closed. It also prefers DuckDuckGo over Google as the search engine, since Google tracks you and logs your search queries. \n\n\n### What are the variants of Tor Browser for different platforms?\n\nTor Browser is available for Microsoft Windows, Apple MacOS and GNU/Linux. As of March 2019, an experimental version 8.5a8 is also available for Android. \n\nTwo popular add-ons that come with Tor Browser by default are NoScript and HTTPS Everywhere. \n\nFor Android, there's also **Orbot**, a proxy that enables Tor access for any mobile app. **Orfox** is a browser for Android that was developed as part of the Guardian Project. This is likely to be discontinued when a stable version of Tor Browser for Android is released. \n\nThird-party **Onion Browser** is the one to use for iOS. 
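Orbot and the browsers above all reach the web through a local Tor proxy. Purely for illustration, the sketch below shows how any desktop application could do the same by pointing an HTTP client at Tor's SOCKS port. It assumes a Tor instance is already running locally (commonly port 9050 for a standalone tor daemon, or 9150 for the Tor Browser bundle) and that the `requests` library is installed with SOCKS support.

```python
# Illustrative only: send an HTTP request through a local Tor SOCKS proxy.
# Assumes Tor is already running locally and 'requests' has SOCKS support
# (pip install requests[socks]).
import requests

TOR_SOCKS = "socks5h://127.0.0.1:9050"   # socks5h = resolve DNS through Tor
proxies = {"http": TOR_SOCKS, "https": TOR_SOCKS}

resp = requests.get("https://check.torproject.org", proxies=proxies, timeout=30)
print(resp.status_code)   # the returned page states whether the request used Tor
```

The `socks5h` scheme makes DNS resolution happen inside Tor, avoiding DNS leaks. Note that this gives network-level anonymity only; it provides none of Tor Browser's fingerprinting and tracking protections.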
\n\n\n### I already use private/incognito mode in Firefox/Chrome. Why do I need Tor Browser?\n\nPrivate modes avoid saving your browsing history and cookies. Otherwise, they are vulnerable to fingerprinting and network adversaries. Shortcomings are plugins, fingerprinting, DNS leaks, SSL state leaks, autofill and site-specific zoom. This is where Tor Browser becomes useful. \n\nOn a related note, Tor provides anonymity at the routing layer. However, Tor can't protect you if your hardware is compromised, such as a key logger. Tor doesn't encrypt traffic between the exit node and final destination, for which you should use app-level encryption, such as SSL for HTTPS traffic. In fact, an add-on such as *HTTPS Everywhere* can help in enforcing security for sites that support it. Electronic Frontier Foundation has a nice animation to show how HTTPS works alongside Tor.\n\nTor will also not protect you from improper usage. For example, BitTorrent over Tor is not anonymous. \n\n\n### What are some alternatives to the Tor Browser for anonymous web browsing?\n\nTor Browser is not the only way to achieve anonymity while browsing the web. *I2P* provides a peer-to-peer distributed communications layer. It's suited for hidden services. *Freenet* uses a similar P2P technology. Linux-based distributions that focus on privacy and anonymity (some of which use Tor underneath) include *Tails*, *Subgraph* and *Freepto*. \n\n*Qubes OS* is a desktop OS focused on security. It uses virtualization to give users any OS of their choice. *Whonix* OS is integrated into Qubes. With Whonix, we can connect to Tor from inside a VM. \n\n*Epic Browser* doesn't use any special networking architecture. However, it disables history, third-party cookies, DNS pre-fetching, and autofill in forms. These are common ways in which privacy can be compromised. \n\n*Brave* browser integrates Tor. Users can open a private tab with Tor. \n\nOther alternatives are listed at alternative.me.\n\n## Milestones\n\nOct \n2003\n\nTor network is deployed. Tor code is open sourced under MIT license. Tor itself is an improvement over Onion Routing that started as a research project in 1995. \n\nMar \n2008\n\nVersion 1.0.0 of Tor Browser Bundle is released. \n\nJun \n2017\n\nTor Browser 7.0 is released with multiprocess mode and content sandbox as two major features. Sandboxing is not yet available on Windows. Until this release, lack of sandboxing was a problem and was exploited by the FBI. This version is based on Firefox 52 Extended Support Release (ESR). ESR is meant to help organizations to mass deploy the browser.\n\nSep \n2018\n\nTor Browser version 8 is released based on Firefox Quantum codebase that came out in November 2017. User interface is Photon UI that Firefox Quantum uses. Also in September, alpha release of **Tor Browser for Android** happens.\n\nMay \n2019\n\nTor Browser version 8.5 becomes the first stable release for Android. Tor Browser also gets newly designed logos compatible with Firefox's Photon UI.","meta":{"title":"Tor Browser","href":"tor-browser"}} {"text":"# NumPy\n\n## Summary\n\n\nNumPy is an open source Python library that enables efficient manipulation of multi-dimensional numerical data structures. These are called **arrays** in NumPy. NumPy is an alternative to Interactive Data Language (IDL) and MATLAB. \n\nSince it's release in 2005, NumPy has become a fundamental package for numerical and scientific computing in Python. 
In addition to efficient data structures and operations on them, it provides many high-level mathematical functions that aid scientific computation. Pandas, SciPy, Matplotlib, scikit-learn and scikit-image are just a few popular scientific packages that make use of NumPy.\n\n## Discussion\n\n### What does NumPy do differently from core Python?\n\nPython is slower than compiled languages such as C but it's easy to learn. Python is suited for rapid prototyping and iterative development. \n\nWhile Python's `list` data type can be used to construct multi-dimensional data structures (lists containing lists), NumPy is faster and provides a better API for developers. Python's lists are general purpose. They can contain data of different types. This means that types are also stored, type-dispatching code is invoked at runtime and types are checked. Lists are processed using loops or comprehensions and can't be vectorized to support elementwise operations. NumPy sacrifices some of Python's flexibility to improve performance. \n\nSpecifically, NumPy is better at these aspects: \n\n + **Size**: NumPy data structures take up less space. Each Python integer object takes 28 bytes whereas in NumPy an integer is just 8 bytes. A Python list of `n` items requires `64+8n+28n` bytes whereas in NumPy it's `96+8n` bytes.\n + **Performance**: NumPy code runs faster than Python code, particularly for large input data.\n + **Functionality**: NumPy provides lots of functions and methods to simplify operations. High-level operations such as linear algebra are also included.\n\n### What are some of the main features of NumPy?\n\nNumPy arrays are **homogeneous**, meaning that array elements are of the same type. Hence, no type checking is required at runtime. All elements of an array take up same amount of space. \n\nThe spacing between elements along an axis is also constant. This is called **striding**. This is useful when the same data in memory can be used to create a new array without copying. Different arrays are therefore different **views** into memory. Thus, it's easier to modify data subsets in memory. \n\nOperations are **vectorized**, which means that the operation can be executed in parallel on multiple elements of the array. This speeds up computation. Developers need not write `for` loops. \n\nNumPy provides APIs for easy manipulation of arrays. Some of these are indexing, slicing, reshaping, stacking and splitting. **Broadcasting** is a feature that allows operations between vectors and scalars, or vectors of different sizes. \n\nNumPy **integrates easily** with C/C++ or Fortran code that may provide optimized implementations. Useful functions covering linear algebra, Fourier transform, and random numbers are provided. \n\n\n### Could you share some performance numbers comparing NumPy versus Python implementations?\n\nFor a simple computation of mean and standard deviation of a million floating point numbers, NumPy was **30X faster** than a pure Python implementation. However, optimized Cython and C implementations were even faster. Another study showed that if input is small (less than 200 numbers), pure Python did better than NumPy. For inputs greater than about 15,000 numbers, NumPy outperformed C++. \n\nOne experiment in Machine Learning compared pure Python, NumPy and TensorFlow (on CPU) implementations of gradient descent. Runtimes were 18.65, 0.32 and 1.20 seconds respectively. NumPy was **50X faster** than pure Python. 
For more complex ML problems deployed on multiple GPUs, TensorFlow is likely to outperform NumPy. \n\nWhen evaluating NumPy performance, the underlying library for vector/matrix computations matters. NumPy comes with *Default BLAS & Lapack*. Depending on the distribution, alternatives may be included: *OpenBLAS*, *Intel MKL*, *ATLAS*, etc. In general, these alternatives are faster than the default library. For example, SVD is 10X faster on Intel MKL. \n\nHardware platforms may provide further acceleration. For example, Intel AVX2 provides at least 20% improvement on top of OpenBLAS. \n\n\n### Does NumPy automatically make use of GPU hardware?\n\nNumPy doesn't natively support GPUs. However, there are tools and libraries to run NumPy on GPUs.\n\n**Numba** is a Python compiler that can compile Python code to run on multicore CPUs and CUDA-enabled GPUs. Numba also understands NumPy and generates optimized compiled code. Developers specify type signatures for Python functions. Numba uses them towards just-in-time (JIT) compilation. Numba team also provides `pyculib`, which is a Python interface to CUDA libraries such as cuBLAS, cuFFT and cuRAND. \n\n**Grumpy** has been proposed as a framework to seamlessly target multicore CPUs and GPUs. It does a mix of JIT compilation and offloading to optimized libraries such as cuBLAS or LAPACK. \n\n**CuPy** is a Python library that implements NumPy arrays for CUDA-enabled GPUs and leverages CUDA GPU acceleration libraries. The code is mostly a drop-in replacement to NumPy code since the APIs are very similar. **PyCUDA** is a similar library from NVIDIA. \n\n**MinPy** is similar to CuPy and is meant to be a NumPy interface above MXNet for building artificial neural networks. It includes auto differentiation in addition to transparent CPU/GPU acceleration. \n\n\n### What are some essential resources to learn NumPy?\n\nThe main NumPy website is the definitive resource to consult. Beginners can start by reading their Quickstart tutorial or the absolute beginner's guide. The latter includes the basics of installing NumPy. \n\nRougier's book titled From Python to Numpy focuses on Python programmers who wish to learn NumPy and it's vectorization. Perhaps a classic is the PhD thesis titled Guide to NumPy, by Travis E. Oliphant who created NumPy. \n\nMATLAB users might want to read NumPy for Matlab users. It maps MATLAB operations to NumPy equivalents. \n\nDataCamp blog has shared a handy NumPy cheatsheet.\n\nThose who wish to contribute to the NumPy project or study it's source code can head to NumPy's GitHub repository.\n\n## Milestones\n\n1995\n\n*Numeric* is released to enable numerical computations. It's designed to provide **homogeneous numeric arrays**, that is, arrays whose elements all belong to the same data type, and therefore easier and faster to process. \n\n2005\n\n*NumPy* is released based on an older library named *Numeric*. It also combines features of another library named *Numarray*. NumPy is initially named *SciPy Core* but renamed to *NumPy* in January 2006. \n\nOct \n2006\n\n*NumPy v1.0* is released. \n\nApr \n2009\n\n*NumPy v1.3.0* is released. This release includes **experimental Windows 64-bit support**. Support for 64-bit OpenBLAS comes a decade later in December 2019. \n\nAug \n2010\n\n*NumPy v1.5.0* is released. This is the **first release to support Python 3**. \n\nJan \n2019\n\nGitHub publishes a study of Machine Learning (ML) projects hosted on their platform. The study spans contributions from Jan-Dec 2018. 
It's seen that **74%** of ML Python projects import NumPy. This is followed by SciPy and Pandas. \n\nJul \n2019\n\n*NumPy v1.17.0* is released. This release supports Python 3.5-3.7 but **drops support for Python 2.7**. In fact, NumPy v1.16.x is the last series to support Python 2.7 but being a long term release, v1.16.x will be maintained till 2020. NumPy v1.16.6 is released in December 2019. \n\nFeb \n2020\n\nFollowing the end of life of Python 2 in January 2020, the number of downloads for older NumPy releases based on Python 2 falls sharply. By April 2020, 80% of NumPy downloads are based on Python 3.","meta":{"title":"NumPy","href":"numpy"}} {"text":"# Chomsky Hierarchy\n\n## Summary\n\n\nAny language is a structured medium of communication whether it is a spoken or written natural language, sign or coded language, or a formal programming language. Languages are characterised by two basic elements – syntax (grammatical rules) and semantics (meaning). In some languages, the meaning might vary depending upon a third factor called context of usage. \n\nDepending on restrictions and complexity present in the grammar, languages find a place in the hierarchy of formal languages. **Noam Chomsky**, celebrated American linguist cum cognitive scientist, defined this hierarchy in 1956 and hence it's called **Chomsky Hierarchy**. \n\nAlthough his concept is quite old, there's renewed interest because of its relevance to Natural Language Processing. Chomsky hierarchy helps us answer questions like “Can a natural language like English be described (‘parsed’, ‘compiled’) with the same methods as used for formal/artificial (programming) languages in computer science?”\n\n## Discussion\n\n### What are the different levels in the Chomsky hierarchy?\n\nThere are 4 levels – Type-3, Type-2, Type-1, Type-0. With every level, the grammar becomes less restrictive in rules, but more complicated to automate. Every level is also a subset of the subsequent level. \n\n + **Type-3: Regular Grammar** - most restrictive of the set, they generate regular languages. They must have a single non-terminal on the left-hand-side and a right-hand-side consisting of a single terminal or single terminal followed by a single non-terminal.\n + **Type-2: Context-Free Grammar** - generate context-free languages, a category of immense interest to NLP practitioners. Here all rules take the form A → β, where A is a single non-terminal symbol and β is a string of symbols.\n + **Type-1: Context-Sensitive Grammar** - the highest programmable level, they generate context-sensitive languages. They have rules of the form α A β → α γ β with A as a non-terminal and α, β, γ as strings of terminals and non-terminals. Strings α, β may be empty, but γ must be nonempty.\n + **Type-0: Recursively enumerable grammar** - are too generic and unrestricted to describe the syntax of either programming or natural languages.\n\n### What are the common terms and definitions used while studying Chomsky Hierarchy?\n\n + **Symbol** - Letters, digits, single characters. Example - A,b,3\n + **String** - Finite sequence of symbols. 
Example - Abcd, x12\n + **Production Rules** - Set of rules for every grammar describing how to form strings from the language that are syntactically valid.\n + **Terminal** - Smallest unit of a grammar that appears in production rules, cannot be further broken down.\n + **Non-terminal** - Symbols that can be replaced by other non-terminals or terminals by successive application of production rules.\n + **Grammar** - Rules for forming well-structured sentences and the words that make up those sentences in a language. A 4-tuple **G = (V , T , P , S)** such that V = Finite non-empty set of non-terminal symbols, T = Finite set of terminal symbols, P = Finite non-empty set of production rules, S = Start symbol\n + **Language** - Set of strings conforming to a grammar. Programming languages have finite strings, most natural languages are seemingly infinite. Example – Spanish, Python, Hexadecimal code.\n + **Automaton** - Programmable version of a grammar governed by pre-defined production rules. It has clearly set computing requirements of memory and processing. Example – Regular automaton for regex.\n\n### What are the corresponding language characteristics in each level?\n\nUnder **Type-3** grammar, we don't classify entire languages as the production rules are restrictive. However, constructs describable by **regular expressions** come under this type.\n\nFor instance, **rule for naming an identifier in a programming language** – regular expression with any combination of case-insensitive letters, some special characters and numbers, but must start with a letter. \n\n**Context-free** languages classified as **Type-2** are capable of handling an important language construct called **nested dependencies**. English example – Recursive presence of “If then ” – “If it rains today and if I don’t carry an umbrella, then I'd get drenched”. For programming languages, the matching parentheses of functions or loops get covered by this grammar. \n\nIn **Type-1** languages, placing the restriction on productions α → β of a phrase structure that β be at least as long as α, they become context sensitive. They permit replacement of α by β only in a ‘context’, [context] α [context] → [context] β [context]. \n\nFinally, **Type-0** languages have **no restrictions** on their grammar and may loop forever. They don’t have an algorithm enumerating all the elements.\n\n\n### What are the type of Automaton that recognizes the grammar in each level?\n\n + **Type-3: Finite-State Automata** - To compute constructs for a regular language, the most important consideration is that there is no memory requirement. Think of a single purpose vending machine for platform tickets or a lift algorithm. The automaton knows the present state and next permissible states, but does not ‘remember’ past steps.\n + **Type-2: Push-Down Automata** - In order to match nested dependencies, this automaton requires a one-ended memory stack. For instance, to match the number of ‘if’ and ‘else’ phrases, the automaton needs to ‘remember’ the latest occurring ‘if’. Only then it can find the corresponding ‘else’.\n + **Type-1: Linear-Bounded Automata** - is a form of a restricted Turing machine which instead of being unlimited, is bounded by some computable linear function. The advantage of this automaton is that its memory requirement (RAM upper limit) is predictable even if the execution is recursive in parts.\n + **Type-0: Turing Machine** - Non-computable functions exist in Mathematics and Computer Science. 
The Turing machine however allows representing even such functions as a sequence of discrete steps. Control is finite even if data might be seemingly infinite.\n\n### Can you give a quick example for each type of grammar/language?\n\n + Type-3: Regex to define tokens such as identifiers, language keywords in programming languages. A coin vending machine that accepts only 1-Rupee, 2-Rupee and 5-Rupee coins has a regular language with only three words – 1, 2, 5.\n + Type-2: Statement blocks in programming languages such as functions in parentheses, If-Else, for loops. In natural language, nouns and their plurals can be recognized through one NFA, verbs and their different forms can be recognized through another NFA, and then combined. Singular (The girl runs home –> Girl + Runs). Plural (The girls run home –> Girls + Run)\n + Type-1: Though most language constructs in natural language are context-free, in some situations linear matching of tokens has to be done, such as - \"The square roots of 16, 9 and 4 are 4, 3 and 2, respectively.\" Here 16 is to be matched with 4, 9 is matched with 3, and 4 is matched with 2.\n + Type-0: A language with no restrictions is not conducive to communication or automation. Hence there are no common examples for this type. However, some mathematical seemingly unsolvable equations are expressed in this form.\n\n### In which level of the hierarchy do formal programming languages fall?\n\nReading a text file containing a high-level language program and compiling it as per its syntax is done in two steps.\n\nFinite state models associated with **Type-3 grammar** are used for performing the first step of **lexical analysis**. Raw text is aggregated into keywords, strings, numerical constants and identifiers in this step. \n\nIn the second step, to **parse the program constructs** of any high level language according to its syntax, a **Context-Free Grammar** is required. Usually these grammars are specified in *Backus-Naur Form (BNF)*.\n\nFor example, to build a grammar for IF statement, grammar would begin with a non-terminal statement S. Rules will be of the form:\n\nS → IF-STATEMENT\n\nIF-STATEMENT → if CONDITION then BLOCK endif\n\nBLOCK → STATEMENT | BLOCK;\n\nConventionally, all high-level programming languages can be covered under the Type-2 grammar in Chomsky’s hierarchy.\n\nPython language has a unique feature of being white-space sensitive. To make this feature fit into a conventional CFG, Python uses two additional tokens ‘INDENT’ and ‘DEDENT’ to represent the line indentations. \n\nHowever, just syntactic analysis does not guarantee that the language will be entirely ‘understood’. Semantics need to match too.\n\n\n### Where can we place natural languages in the hierarchy?\n\nNatural languages are an infinite set of sentences constructed out of a finite set of characters. Words in a sentence don’t have defined upper limits either. When natural languages are reverse engineered into their component parts, they get broken down into four parts - **syntax, semantics, morphology, phonology**. \n\nTokenising words and identifying nested dependencies work as explained in the previous section.\n\n**Part-of-Speech Tagging** is a challenge. “He runs 20 miles every day” and “The batsman scored 150 runs in one day” – the same word ‘runs’ becomes a noun and verb. Finite state grammars can be used for resolving such lexical ambiguity. 
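To make the 'runs' example concrete, here is a small sketch using NLTK's off-the-shelf statistical tagger, standing in for the finite-state approach mentioned above purely to show the two readings being separated. It assumes NLTK is installed along with its `punkt` and `averaged_perceptron_tagger` resources.

```python
# Illustrative POS tagging of the two 'runs' sentences with NLTK.
# Assumes: pip install nltk, plus one-time nltk.download('punkt') and
# nltk.download('averaged_perceptron_tagger').
import nltk

sentences = [
    "He runs 20 miles every day",
    "The batsman scored 150 runs in one day",
]
for sentence in sentences:
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    # Keep only the tag assigned to 'runs'; typically VBZ (verb) in the
    # first sentence and NNS (plural noun) in the second.
    print([pair for pair in tagged if pair[0] == "runs"])
```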
\n\n**Identifying cases** (subjective - I, possessive - Mine, objective - Me, etc) for nouns varies across languages: Old English (5), Modern English (3), Sanskrit and Tamil (8). Each case also has interrogative forms. Clear definition of cases enables free word order. The CFG defined for these languages take care of this. \n\nNatural languages are believed to be at least context-free. However, Dutch and Swiss German contain grammatical constructions with **cross-serial dependencies** which make them context sensitive. \n\nLanguages having clear and singular source text of grammar are easier to classify.\n\n\n### Are there any exceptional cases in natural languages that make its classification ambiguous?\n\nNLP practitioners have successfully managed to assign a majority of natural language aspects to the regular and CFG category. However, some aspects don't easily conform to a particular grammar and require special handling.\n\n + **Structural ambiguity** – Example ‘I saw the man with the telescope’. A CFG can assign two or more phrase structures (“parse trees”) to one and the same sequence of terminal symbols (words or word classes).\n + **Ungrammatical speech** – Humans often talk in sentences that are incorrect grammatically. Missing words sometimes are implied in a sentence, not uttered explicitly. So decoding such sentences is a huge challenge as they don't qualify as per any defined grammar, but a native speaker can easily understand them.\n + **Sarcasm or proverb usage** – When we say something but mean something entirely different. Here the semantic analysis becomes critical. We don’t build grammars for these cases, we just prepare an exhaustive reference data set.\n + **Mixed language use** – Humans often mix words from multiple languages. So computing systems need to identify all the constituent language words present in the sentence and then assign them to their respective grammars.\n\n### What are the important extensions to Chomsky hierarchy that find relevance in NLP?\n\nThere are two extensions to the traditional Chomsky hierarchy that have proved useful in linguistics and cognitive science:\n\n + **Mildly context-sensitive languages** - CFGs are not adequate (weakly or strongly) to characterize some aspects of language structure. To derive extra power beyond CFG, a grammatical formalism called Tree Adjoining Grammars (TAG) was proposed as an approximate characterization of Mildly Context-Sensitive Grammars. It is a tree generating system that factors recursion and the domain of dependencies in a novel way leading to 'localization' of dependencies, their long distance behaviour following from the operation of composition, called 'adjoining'. Another classification called Minimalist Grammars (MG) describes an even larger class of formal languages.\n + **Sub-regular languages** - A sub-regular language is a set of strings that can be described without employing the full power of finite state automata. Many aspects of human language are manifestly sub-regular, such as some ‘strictly local’ dependencies. Example – identifying recurring sub-string patterns within words is one such common application.\n\n\n## Milestones\n\n1928\n\nAvram Noam Chomsky is born on December 7, 1928. Decades later, Chomsky is credited with the creation of the theory of generative grammar, considered to be one of the most significant contributions to the field of linguistics made in the 20th Century. 
\n\n1936\n\nTuring machines, first described by Alan Turing in 1936–7, are simple abstract computational devices intended to help investigate the extent and limitations of what can be computed. \n\n1956\n\nChomsky publishes *Syntactic Structures*. He defines a classification of formal languages in terms of their generative power, to be known as the Chomsky hierarchy. \n\n1963\n\nJohn Backus and Peter Naur introduce for the first time a formal notation to describe the syntax of a given language (for ALGOL 60 programming language). This is said to be influenced from Chomsky's work. In time, this notation is called **Backus-Naur Form**.","meta":{"title":"Chomsky Hierarchy","href":"chomsky-hierarchy"}} {"text":"# Decibel\n\n## Summary\n\n\nDecibel is a unit of measurement that expresses the logarithmic ratio of two physical quantities of the same dimensions. The logarithm is to base 10. This logscale definition is useful when the quantities have a wide range and losses or gains are proportional. \n\nDecibel is dimensionless since it's a ratio. It's a relative measure. However, when a reference is defined, it can be used as an absolute measure. As a relative measure, it's represented as **dB**. As an absolute measure, often an additional suffix is appended to \"dB\". \n\nDecibel originated in telephone networks of the early 20th century. Today it's commonly used in many domains including acoustics, electrical engineering, signal processing, RF power, and more.\n\n## Discussion\n\n### Could you explain the decibel scale with some numbers?\n\nDecibel is defined as \\(10\\ log\\_{10}(P\\_1/P\\_0)\\), where power level P1 is compared against P0. When P0 and P1 are equal, we say their ratio is 0dB. Thus, 0dB doesn't imply zero power or intensity. It implies equal power.\n\nIf P1 is twice the value of P0, we say that P1 is 3dB higher than P0. If P1 is half the value of P0, we say that P1 is 3dB lower than P0, or equivalently -3dB. \n\nFor ratios P1/P0 equalling 10, 100, and 1000, the respective decibels are 10, 20, and 30. Likewise, if the ratios are 0.1, 0.01, and 0.001, the respective decibels are -10, -20, and -30. Thus, we can see that even though the power levels are changing geometrically, the equivalent decibel values change linearly. This is why the logarithmic scale of decibel is useful when quantifying values that have a large range.\n\n\n### What are some examples where the decibel scale is used?\n\nDecibel is used to quantify the intensity of sound. For sound, it's common to use decibel with a reference of 0.02 mPa (millipascals), the minimum sound that the human ear can hear. Normal speech is at 60dB. A jet engine is at 120dB, which is a million times louder than normal speech. \n\nIn electrical engineering and radio engineering, decibel is used. For example, the gain of an amplifier or the loss of signal power due to an obstruction are quantified in decibels. A directional antenna will focus its radiation in specific directions, which are specified on a chart with decibel as the unit. \n\nIn signal processing, it's common to apply filters to signals. *Bode plot* is an example that shows the magnitude response of such a filter: decibels (y-axis) vs frequency (x-axis). This tells which frequencies are allowed to pass and which ones are attenuated. \n\nWhen a signal is compared to noise, called *Signal-to-Noise Ratio (SNR)*, this ratio is specified in decibels. For example, we can compare a processed image with the original and quantify the difference as SNR in decibels. 
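The arithmetic behind these examples is a one-line formula. The minimal sketch below reproduces the numbers above and adds an absolute conversion to dBm (decibels relative to 1 milliwatt, a unit discussed further below); the specific power values are illustrative.

```python
# Illustrative decibel arithmetic based on the definitions above.
import math

def db(ratio):
    """Power ratio expressed in decibels: 10 * log10(P1/P0)."""
    return 10 * math.log10(ratio)

def dbm(power_watts):
    """Absolute power in dBm, i.e. decibels relative to 1 milliwatt."""
    return db(power_watts / 1e-3)

print(round(db(2), 2))               # ~3.01 dB : doubling the power
print(round(db(10), 2))              # 10.0 dB
print(round(db(0.001), 2))           # -30.0 dB : a thousandth of the reference
print(round(dbm(10_000), 2))         # 70.0 dBm : a 10 kW transmitter
print(round(dbm(10_000) - 120, 2))   # -50.0 dBm after a 120 dB path loss
```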
\n\n\n### What's the difference between decibels based on either root-power level or power level?\n\nDecibel is commonly defined in terms of power levels. However, it can be defined in terms of signal or field level. For example, in electrical engineering, electrical signals are voltage and current. Power is proportional to square of these signals. When decibel is therefore defined in terms of voltage, the equation becomes \\(20\\ log\\_{10}(V\\_1/V\\_0)\\), where voltage level V1 is compared against V0. \n\nWhile the term *field quantity* was previously used, ISO Standard 80000-1:2009 introduced the term *root-power quantity*. \n\n\n### What are absolute and relative decibel units?\n\nDecibel is a relative measure that compares two quantities of the same dimension. But in acoustics, it's common to say that noise was at 90dB. We don't say noise was 90dB with respect to something. This is because a reference of 0.02 mPa (millipascals) is implied since it's the quietest sound that we can hear. Thus, sound levels are in absolute decibels. \n\nIn most other applications, **dB** is a relative measure. When we say that path loss of a wireless channel is 120dB, we mean that from the transmitter to receiver, signal power drops by 120dB. However, if we take 1 milliwatt as the reference, then we can use the unit **dBm** to indicate absolute measure. If transmitter sends a 10 kilowatt signal, it's sending a 70dBm signal. If path loss is 120dB (relative measure), the receiver will get a -50dBm (absolute measure) signal. \n\nHere are some examples of absolute measures (reference in parenthesis): dBW (1 watt), dBi (isotropic antenna), dBV (1 volt), dBµV (1 microvolt), dBµV/m or dBu (1µV/m electrical field strength), dBA (\"A\" weighted pressure levels), and more.\n\n\n### What are some limitations of using decibels?\n\nIn image processing, Signal-to-Noise Ratio (SNR) and Peak Signal-to-Noise Ratio (PSNR) are sometimes used to compare a processed image with the original image. SNR and PSNR are expressed in decibels. The problem is that image quality is a matter of human perception. We may sometimes perceive one image as better than another even though it has a lower SNR. Admittedly, this is not a limitation of decibel itself but rather a limitation of its usage for SNR.\n\nThere have been suggestions that decibel is outdated for the modern era. This is probably due to incomplete understanding of logarithms and incorrectly mixing absolute and relative values. \n\n## Milestones\n\n1614\n\nScottish mathematician John Napier (aka Neper) publishes a book detailing his invention of the **logarithms**. This would prove useful centuries later in the definition of the decibel.\n\n1904\n\nIn the early days of telephony, there's a need to quantify transmission losses due to links and nodes in a connection. AT&T proposes to use **Mile of Standard Cable (MSC)**, which is equivalent of a mile of dry-core cable-pair with a loop resistance of 88 ohms and a mutual capacitance of 0.054 farads. Let's note that MSC is not a measure of distance, but one of loss, useful for comparing the efficiencies of two telephone circuits. \n\n1923\n\nMSC unit has dependency on frequency. This was useful in earlier decades when there was a need to characterize distortion. With newer circuits having much less distortion that the standard circuit, there's a need for a distortionless unit. Thus, AT&T invents the **Transmission Unit (TU)** to replace MSC. TU is defined based on the logarithmic scale. 
The logscale is useful since the losses due to two successive parts can simply be added instead of multiplied. \n\n1924\n\nThe International Advisory Committee on Long Distance Telephony in Europe, plus representatives of the Bell System, decide to adopt two units based on TU: **bel** based on power ratio \\(10^1\\) and **neper** based on power ratio \\(e^2\\). Since TU is based on power ratio \\(10^.1\\), the Bell System decides to use **decibel** as the unit. One decibel is the smallest difference in sound that the human ear can detect. The word \"bel\" itself is honouring Alexander Graham Bell, inventor of the telephone. \n\n1933\n\nBy this time, in the UK at least, the unit of loss measurement becomes decibel rather than msc, though some other countries continue to use neper as the unit. \n\nJul \n1937\n\nAt the First International Acoustical Conference in Paris, decibel is adopted as an international unit for energy and pressure levels. \n\nApr \n2003\n\nThere's talk of including the decibel within the International System of Units (SI) but this proposal is rejected. However, decibel has been recognized by IEC and ISO.","meta":{"title":"Decibel","href":"decibel"}} {"text":"# TLV Format\n\n## Summary\n\n\nTLV (Tag-Length-Value) is a binary format used to represent data in a structured way. TLV is commonly used in computer networking protocols, smart card applications, and other data exchange scenarios. The three parts of TLV are:\n\n* **Tag**: Identifies uniquely the type of data. It's typically a single byte or a small sequence of bytes.\n* **Length**: Length of the data field in bytes. In some protocols, the lengths of tag and length fields are also included.\n* **Value**: Actual data being transmitted, which can be of any type or format.\n\nEntities that send messages would **encode** information into TLV format. Entities that receive such messages would **decode** them to retrieve the information. Many programming languages have libraries for TLV encoding and decoding. Developers can also build their own custom encoders and decoders, perhaps optimized for their applications.\n\n## Discussion\n\n### Could you explain the TLV format with an example?\n\nDevelopers will usually represent data in a form that's most convenient and efficient for processing. For example, this could be an associative array, a linked list, or a class with attributes. When this data needs to be stored or transmitted, it has to be **serialized**. This is where TLV format is used. A TLV encoder reads the data/message and outputs a stream of bytes. A TLV decoder does the reverse.\n\nThe figure shows a TLV example used for F-TEID, an information element (IE) used in 5G's PFCP protocol. The type value 21 indicates that this is F-TEID. The protocol defines other values for other IEs. Since type field is two bytes, it will be encoded as 0x0015.\n\nThe length field is 2 bytes. It's value indicates the number of bytes that follow. The latter are part of the 'V' in TLV. The 4-byte TEID field is mandatory. Byte 5 contains some flags (CHID, CH, V6, V4) to indicate the presence of optional fields. So if V6 is set to 1, 16 bytes of IPv6 address is present.\n\n\n### Does 'T' in TLV refer to \"tag\" or \"type\"?\n\nThe TLV acronym can refer to either \"Tag-Length-Value\" or \"Type-Length-Value.\" Both of these terms are used interchangeably and can be considered correct.\n\nIn some contexts, \"Tag\" and \"Type\" may be used to refer to slightly different things. 
\"Tag\" may refer to an identifier used within a particular protocol. \"Type\" may refer to a more general data type, such as an integer, string, or binary data. However, in most cases, the terms \"Tag\" and \"Type\" are used interchangeably to refer to the identifier of the data being transmitted.\n\n\n### What are the benefits of using the TLV format?\n\nThe TLV format allows data to be structured in a **flexible** way. Data can be organized into logical groups. The length field allows for variable-length data to be represented.\n\nThe format is also **extensible**, meaning that new tags can be added to the format without requiring changes to existing code. This makes it easy to add new functionality to an existing system.\n\nIt's also an **efficient** way to store and transmit data. By using a binary format, the data can be transmitted more quickly and with less overhead than with text-based formats.\n\nThe use of length fields in the TLV format makes it easy to **detect errors** in the data. If the length of a field does not match the expected value, it is likely that an error has occurred.\n\n\n### Which standards have adopted the TLV format?\n\nThe OSI (Open Systems Interconnection) reference model is a layered architecture. TLV is used in many protocols across the OSI layers. For example, at the data link layer, Ethernet frames and Wi-Fi frames use TLV. At the network layer, IP and ICMP are two examples that use TLV. At the application layer, there are plenty of protocols that use TLV: HTTP, CoAP, DNS, and MQTT are some examples.\n\nTLV is typically not used at the physical layer, which usually deals with raw bits. TLV is typically not used at the transport, session or presentation layers.\n\nHere are more standards that use TLV:\n\n + ISO 7816: This is a communication protocol between smart cards and card readers. The APDU (Application Protocol Data Unit) format is based on TLV.\n + Bluetooth: The Bluetooth Low Energy (BLE) specification uses the TLV format to encode the data for advertising and communication between BLE devices.\n + SIM/eSIM cards: In cellular systems, SIM and eSIM cards use the TLV format to store data, and exchange data with the mobile device.\n\n### What endianness is used in TLV?\n\nThe endianness used depends on the specific protocol or application, which must specify the endianness it's using. If messages are being exchanged between devices with different endianness, proper conversion would be needed before processing those messages.\n\nThere are some protocols that mix endianness for different fields within the same TLV message. One such protocol is the Bluetooth Low Energy (BLE) protocol. The endianness of the length field and the value field may be different. Specifically, the length field in the TLV message is always encoded in little-endian byte order, while the endianness of the value field depends on the type of data being transmitted. For example, if the value field contains a 16-bit unsigned integer, it's encoded in little-endian byte order, since the length field is also in little-endian byte order. However, if the value field contains a 32-bit floating-point value, it's encoded in big-endian byte order.\n\n\n### What are some best practices for designing TLV-based messages?\n\nUse standardized tags to ensure that your messages can be easily understood and implemented by other systems. 
Consider using a standardized byte order or including byte order information in the message.\n\nDefine a clear message structure that defines the order and the type of fields. When possible, use fixed-length fields. These approaches make parsing and processing more efficient. Where variable-length fields are needed, use a length field to indicate the size of the data. Include error checking in the message to ensure that the message is valid and has not been corrupted during transmission.\n\nWhen designing a TLV message, reserve certain tags for future use. This can ensure that new fields can be added to the message without requiring changes to existing code. Use flags to indicate which fields in the message are optional. This can help reduce the size of the message and make it easier to parse. Flags also help with backward compatibility. A version number included in a message can help older systems interwork with newer systems. Another technique is to use a a variable-length encoding scheme that can represent both older and newer data formats.\n\n\n### How should developers encode/decode messages in TLV format?\n\nAny TLV encoder/decoder must be tested for correctness. Even when invalid input is fed to them, they should fail gracefully. They can log a warning message and keep applications robust and secure.\n\nThe length value can't be set until all message fields are encoded. One approach is to increment the length value as each field is encoded.\n\nIf fields are not byte-aligned, bit manipulation is required. This involves bit shifting and bit masking to get or set specific bits from a field. Here's an example (assuming big-endian byte order):\n\n + Encoder: Data `dt` is of range 0-3 (2 bits). It's to be encoded into bits 5 and 4 without disturbing other bits in a 2-byte field `fld`. We can do `fld |= (dt & 0x0003) << 3`.\n + Decoder: We wish to extract bits 5 and 4 from a 2-byte field `fld`. We can do `dt = (fld & 0x0018) >> 3` or `dt = (fld >> 3) & 0x0003`.Developers can use a well-tested and widely-used TLV library rather than writing a custom implementation. Examples include libtins and tlvcpp (C++); Apache MINA and TlvParser (Java); Construct and TLV-Coding (Python); BouncyCastle and PeterO.TlvLib (.NET).\n\n\n### What are some variations of the TLV format?\n\nTLV has some variations and they offer different trade-offs between simplicity, flexibility, and efficiency:\n\n + **TVL**: The tag comes first, followed by the value, and then the length. This format is sometimes used in legacy systems, but it is less common than the TLV format.\n + **LV**: There's no tag field. This format is simpler than TLV, but it doesn't provide any information about the meaning or purpose of the data.\n + **TFLV**: Type-specific flags field is also included.\n + **Nested TLV**: The value field can contain nested TLV structures, allowing for the representation of more complex data. This is often used in protocols that require hierarchical or nested data structures.\n + **Extended TLV (ETLV)**: This includes an extra \"extended\" field in the tag that provides additional information about the tag's purpose or context. This allows for more flexibility in the use of tags, as well as better support for backward compatibility.\n + **Binary TLV (BTLV)**: BTLV is a binary variant of TLV that's designed for use in low-level system programming. 
It uses fixed-width fields for the tag, length, and value, and is often used in embedded systems and low-level network protocols.\n\n### What are the alternatives to the TLV format?\n\nTLV is a binary format. Other binary formats include Protocol Buffers, ASN.1 and MessagePack. Protocol Buffers was developed by Google. It's language- and platform-independent. ASN.1 is an older format that's widely adopted. It has different TLV-based encoding rules (BER, DER) and non-TLV-based encoding rules (PER, XER). MessagePack is designed to be fast and compact. These alternatives allow developers to define and maintain message definitions in readable syntax while encoding them into binary formats.\n\nSometimes an efficient binary format is not essential. Developers may prefer a textual format that's easier to read and parse. In such situations, XML and JSON formats could be used. These are widely used in web services and mobile applications. XML is also used for document formatting.\n\n## Milestones\n\n1980\n\nThe use of TLV encoding can be traced back to the development of the **Abstract Syntax Notation One (ASN.1)** standard in the 1980s. This happens within the telecommunications industry. ASN.1 uses TLV encoding to represent complex data structures in a compact and efficient format.\n\n1990\n\nIn the 1990s, TLV is used in the development of the **EMV (Europay, Mastercard, and Visa)** standard for payment cards. The EMV standard uses TLV encoding to represent the data stored on a payment card, including the cardholder's name, card number, expiration date, and other information.\n\n2000\n\nThrough the 2000s, the use of TLV encoding becomes more common in computer networking protocols, such as the Simple Network Management Protocol (SNMP) and the Link Layer Discovery Protocol (LLDP).","meta":{"title":"TLV Format","href":"tlv-format"}} {"text":"# Siri\n\n## Summary\n\n\nSiri is a voice-controlled virtual assistant for Apple devices. It can be used for multiple purposes such as getting weather reports, setting an alarm, sending a message to someone, scheduling a meeting, or locking a car. At times, Siri can also respond with wit and sarcasm. Siri Shortcuts and Siri Suggestions are additional features that enhance how users can interact with Siri. \n\nSiri is available in many countries and in different languages. Initially, it was available only on the iPhone and later expanded to include iPod touch, iPad, Mac, AirPods, Apple Watch, Apple TV, and HomePod. \n\nDespite being the first mover in the industry, Siri has fallen behind the competition. In fact, users may prefer to use Amazon Alexa or Google Assistant on Apple devices.\n\n## Discussion\n\n### What are the capabilities of Siri?\n\nSiri is an artificial intelligence designed to help Apple users in various tasks. It can **perform actions** such as reading your last email, texting your friend, booking a hotel, or calling your parents instantly. It can tell good hotels nearby, set an alarm, give directions to reach a certain place, or describe tomorrow's weather. \n\nBy asking Siri \"What can you do?\", Siri will respond with a list of all things it can do. There are already apps to do these things but Siri brings a voice interface to these apps and therefore a better user experience. \n\nIt can also be trained to **give answers** to some specific questions. Siri can be taught to pronounce your name or which of your contacts are your family members. \n\nSiri can also be funny and sarcastic sometimes. 
For instance, if you ask it \"What's your favourite animal?\", it will answer \"Software doesn't usually get to choose one, but I'll say birds. What's yours?\" To the question \"What is the meaning of life?\", it will answer \"I can't answer that. Ha ha!\" \n\n\n### How can users launch Siri?\n\nEach Apple device has its unique way to invoke Siri. Typically, a device button (physical or virtual) is pressed and the request is made. Users need to press and hold for longer requests. **\"Hey Siri\"** is a hands-free option to invoke Siri. It's available in latest versions of most devices. \n\nOn iPhone X or later, press the Side button. In other models, press Home or Top button. With AirPods Pro and AirPods (3rd generation), set the force sensor on either left or right AirPod. On AirPods (1st generation) double-tap the outside of either AirPod and wait for a chime. On recent Apples Watch models, \"Hey Siri\" prompt is unnecessary. Another method is to press the Digital Crown for a few seconds. \n\nOn Macs, press the Siri button on Touch Bar, menu bar or Dock (macOS Sierra or later). On vehicles that support CarPlay or Siri Eyes Free, hold down the voice-command button on the steering wheel while making a request. On HomePod, press the top of the device. For Apple TV, use Siri button on the Siri Remote. \n\n\n### How can users automate tasks using Siri?\n\nA user may want to automate a sequence of frequent tasks rather than perform them manually. For example, for the way home from work, the user may want directions, send ETA to a family member and start listening to music. Another example is to download all images on a webpage, reduce them in size and upload them to Twitter. These tasks can be automated with **Siri Shortcuts**. \n\n*Shortcuts* is a separate app that's installed by default since iOS 13. It comes with 300+ built-in actions. Users can create or edit shortcuts. Users can browse a gallery of shortcuts and launch any shortcut. Specific shortcuts can also be launched from icons or widgets on the home screen. Or invoke and tell Siri the name of the shortcut to execute. \n\nFor third-party apps that support this feature, users can add shortcuts to automate tasks concerning those apps. \n\nSince iOS 13, shortcuts can also be triggered automatically. For example, a shortcut can be configured to run at 11am daily. A practical example is to automatically enable Do Not Disturb when the user starts watching Netflix. \n\n\n### What are Siri Suggestions?\n\nWe may view Siri as just a voice interface to apps on Apple devices but Siri is more than this. Using its AI capabilities, Siri learns how you use your devices and apps. Via a feature called **Siri Suggestions**, it gives personalized suggestions. For this purpose, Siri looks at your browsing history, emails, messages, images, notifications, contacts and information shared by third-party apps on your devices. For privacy, synchronization across devices is done using end-to-end encryption. On-device processing is used. Any information sent to Apple is anonymized. \n\nHere are some things Siri Suggestions can do: suggests people to include in emails and calendar events based on previous emails or events; based on numbers shared in emails, guesses who may be calling even if the number is not in contacts; gives search suggestions in Safari browser; recommends news stories based on your past reading history; notifies when to leave for an appointment based on current traffic conditions. \n\nSiri Suggestions also makes use of Shortcuts. 
In fact, suggestions can be seen as informing users what shortcuts to run. \n\n\n### How does Siri work under the hood?\n\nThe fundamental technologies used in Siri are Automatic Speech Recognition (ASR) that converts audio waveforms to text, Natural Language Understanding (NLU) that determines user's intent, and Text-to-Speech (TTS) that enables Siri to speak out a response. \n\nSince iOS 15, on some devices, some of these steps can be done offline without connecting to a server. However, the true utility of Siri comes from connecting to a server. Server-side logic might do a better job with ASR, taking into account regional accent, ambient noise, and linguistic information such as syntax and context. For example, the NLU engine must differentiate between \"byte\" and \"bite\" based on the context. The server may query databases to find answers. If an answer can't be found, Siri may prompt the user if it should search the web. \n\nThe \"Hey Siri\" hands-free prompt is based on an acoustic model implemented as a Deep Neural Network (DNN). In fact, Siri does two-pass detection: a small DNN that's always on, a larger DNN that's triggered for more accurate processing. \n\n\n### What patents cover the technologies used in Siri?\n\nWithout being exhaustive, we note the following patents along with filing and publication dates:\n\n + US20120016678A1, Jan 2011 / Jan 2012: Intelligent automated assistant: a conversational agent that uses natural language dialog and external services to perform actions, retrieve information and solve problems rather than simply return search results.\n + US20170358301A1, Sep 2016 / Dec 2017: Digital assistant providing whispered speech: determines that user is whispering and modulates the response to whispers. A patent published April 2021, US20210097980A1, expands this idea to devices that are aware of their environments.\n + US20180329957A1, Aug 2017 / Nov 2018: Feedback analysis of a digital assistant: client device receives user inputs, processes these inputs based on instructions received from a server and sends the result to server.\n + US20180330731A1, Sep 2017 / Nov 2018: Offline personal assistant: an assistant that can receive multiple inputs, rank and score them, and decide which one to act upon. The patent focuses on on-device processing without relying on server backend.\n + US20180329677A1, Mar 2018 / Nov 2018: Multi-modal interfaces: visual interface is used to augment the voice interface.\n\n### What developer tools are available to integrate Siri into apps?\n\nApple provides developers an API called **SiriKit**. Interactions are enabled using Intents and IntentsUI frameworks that are part of SiriKit. An app can have custom vocabulary and sample phrases so that Siri can interact better with the app. Via an app-specific extension, Siri can be used even when the actual app isn't running. Siri handles user interactions while the extension provides necessary information. \n\nSiriKit is available in Swift and Objective-C programming languages. It's available for iOS 10.0+, iPadOS 10.0+, macOS12.0+, Mac Catalyst 13.0, tvOS 14.0+ and watchOS 3.2+ platforms. \n\nDevelopers can read SiriKit documentation for more information.\n\n\n### What are the shortcomings of Siri?\n\nTen years after its launch in 2011, Siri was considered inferior to its competitors Google Assistant and Amazon Alexa. This is despite Siri being the first mover in the industry. New releases came with only insignificant updates. Siri mishears even simple commands. 
Users become frustrated by its failures to complete basic tasks. Some of these can be blamed on poor management and lack of product focus. \n\nApple being a closed ecosystem, Siri doesn't integrate well with third-party apps and services, which is where Alexa trumps. Different versions of Siri on different Apple devices means that users don't get a consistent experience. \n\nIn 2019, it was revealed that recordings of real conversations were given to external analysts to improve Siri's algorithms. However, these recordings contained sensitive information. \n\nWithout an active internet connection, Siri's capabilities are limited. Back in 2019 it was reported that user interaction with maps was limited to English. Background noise, low-quality audio, spelling variations, strong accent and fast speech all caused problems for Siri. Even if the iPhone is password protected, Siri could be invoked via the Home button without unlocking the phone. \n\n## Milestones\n\n2010\n\nA Norwegian company named Siri, Inc. launches an iOS app based on speech recognition technology. Siri is a virtual assistant. It enables users to interact with their iPhones via a voice interface. In April, **Apple acquires Siri**. \n\n2011\n\nApple makes Siri an integral feature of **Apple iPhone 4S**. Despite Apple's best efforts to find a better name, they stick to \"Siri\". Later, people suggest that Siri stands for \"Speech Interpretation and Recognition Interface\". Competition soon follows: Samsung's S Voice (2012), Google Now (2012), Microsoft Cortana (2014), Amazon Alexa (2014), and Google Assistant (2016). \n\n2012\n\nApple announces many improvements to Siri in iOS 6, mainly based on access to new sources of data such as information about sports, restaurants and movies. Siri is now capable of posting to social media, reading incoming messages and notifications, and opening apps. It's now available in 15 languages. Apple expands support of Siri to iPad and iPod touch. This encourages Apple users to upgrade their devices. \n\n2013\n\nSiri acquires new capabilities including switching on/off Bluetooth and Wi-Fi, using databases to answer search queries, and using more natural sounding voices. Apple introduces \"iOS in the Car\", an extension of Siri Hands-free designed to work with in-car systems. \n\n2014\n\nHands-free interaction with Siri was first introduced in 2012. This is improved with the **\"Hey Siri\"** feature. Users can use Siri without needing to press buttons or touchscreens. In July, Siri gets a major upgrade with adoption of ML techniques including Deep Neural Networks (DNNs). Error rates drop by at least a factor of two. \n\n2015\n\nSiri is added to **Apple TV**. Standalone Siri remote for Apple TV is also released. It gives suggestions based on previous user behaviour such as recommending news stories, music, and apps. It's now 40% faster and 40% more accurate than before. \n\n2016\n\nApple releases the **SiriKit API** for developers. This allows developers to integrate Siri with their apps. However, this integration is limited to some apps such as messaging, phone calls, photo search, ride booking, personal payments, workouts in apps and some in-car apps. Siri can do intelligent scheduling, integrate with the QuickType keyboard, react to text conversations and make useful suggestions based on user behaviour. \n\n2017\n\nDevelopers get support for deploying more Siri features such as to-do lists, notes and payments in SiriKit. 
Machine Learning becomes deeply embedded into Apple products, including Siri. Apple reveals that Siri is used by over 375 million devices each month in different countries and languages. Siri can also translate from English to Chinese, French, German, Italian, and Spanish. \n\n2018\n\nIntroduced at WWDC 2017, Apple's intelligent speaker system called **HomePod** becomes available along with iOS 11.2.5. Users can interact with HomePod via Siri. \n\n2019\n\n**Siri Shortcuts**, though available earlier as a downloadable app, is now installed by default on iOS 13 and iPadOS devices. Shortcuts can also do things automatically without users invoking it explicitly. \n\n2021\n\nWith the release of iOS 15, Siri can now be used **offline** though with limited capabilities. This is available on recent iPhones with A12 Bionic processor and iPads. On-device speech recognition is now possible but questions such as \"What is the capital of Switzerland?\" or \"Will it rain later today?\" can't be answered without connecting to the server. While offline, Siri can open apps, create timers, make calls, and adjust volume. But it can't add new calendar entries or select downloaded music albums.","meta":{"title":"Siri","href":"siri"}} {"text":"# Hidden Markov Model\n\n## Summary\n\n\nConsider weather, stock prices, DNA sequence, human speech or words in a sentence. In all these cases, current state is influenced by one or more previous states. Moreover, often we can observe the effect but not the underlying cause that remains hidden from the observer. Hidden Markov Model (HMM) helps us figure out the most probable hidden state given an observation. \n\nIn practice, we use a sequence of observations to estimate the sequence of hidden states. In HMM, the next state depends only on the current state. As such, it's good for modelling time series data. \n\nWe can classify HMM as a **generative probabilistic model** since a sequence of observed variables is generated by a sequence of hidden states. HMM is also seen as a specific kind of **Bayesian network**.\n\n## Discussion\n\n### Could you explain HMM with an example?\n\nSuppose Bob tells his friend Alice what he did earlier today. Based on this information Alice guesses today's weather at Bob's location. In HMM, we model weather as *states* and Bob's activity as *observations*.\n\nTo solve this problem, Alice needs to know three things: \n\n + **Transition Probabilities**: Probability of moving from one state to another. For example, \"If today was sunny, what's the probability that it will rain tomorrow?\" If there are N states, this is an NxN matrix.\n + **Emission Probabilities**: Probability of a particular output given a particular state. For example, \"What's the chance that Bob is walking if it's raining?\" Given a choice of M possible observation symbols, this is an NxM matrix. This is also called output or observation probabilities.\n + **Initial Probabilities**: Probability of being in a state at the start, say, yesterday or ten days ago.Unlike a typical Markov chain, we can't see the states in HMM. However, we can observe the output and then predict the state. Thus, the states are hidden, giving rise to the term \"hidden\" in the name HMM. \n\n\n### What types of problems can be solved by HMM?\n\nLet A, B and π denote the transition matrix, observation matrix and initial state distribution respectively. HMM can be represented as λ = (A, B, π). Let observation sequence be O and state sequence be Q. 
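To make this notation concrete, here's a minimal NumPy sketch of λ = (A, B, π) for the earlier weather example; the probability values are made up purely for illustration.

```python
import numpy as np

states = ["Sunny", "Rainy"]               # hidden states
symbols = ["Walk", "Shop", "Clean"]       # Bob's observable activities

A = np.array([[0.8, 0.2],                 # transition probabilities, NxN
              [0.4, 0.6]])
B = np.array([[0.6, 0.3, 0.1],            # emission probabilities, NxM
              [0.1, 0.4, 0.5]])
pi = np.array([0.7, 0.3])                 # initial state distribution

# lambda = (A, B, pi); an observation sequence O is a list of symbol indices.
O = [0, 2, 1]                             # Walk, Clean, Shop
```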
\n\nHMM can be used to solve three types of problems: \n\n + **Likelihood Problem**: Given O and λ, find the likelihood P(O|λ). How likely is a particular sequence of observations? *Forward algorithm* solves this problem.\n + **Decoding Problem**: Given O and λ, find the best possible Q that explains O. Given the observation sequence, what's the best possible state sequence? *Viterbi algorithm* solves this problem.\n + **Learning Problem**: Given O and Q, learn λ, perhaps by maximizing P(O|λ). What model best maps states to observations? *Baum-Welch algorithm*, also called *forward-backward algorithm*, solves this problem. In the language of machine learning, we can say that O is training data and the number of states N is the model's hyperparameter.\n\n### What are some applications where HMM is useful?\n\nHMM has been applied in many areas including automatic speech recognition, handwriting recognition, gesture recognition, part-of-speech tagging, musical score following, partial discharges and bioinformatics. \n\nIn speech recognition, a spectral analysis of speech gives us suitable observations for HMM. States are modelled after phonemes or syllables, or after the average number of observations in a spoken word. Each word gets its own model. \n\nTo tag words with their parts of speech, the tags are modelled as hidden states and the words are the observations. \n\nIn computer networking, HMMs are used in intrusion detection systems. This has two flavours: anomaly detection in which normal behaviour is modelled; or misuse detection in which a predefined set of attacks is modelled. \n\nIn computer vision, HMM has been used to label human activities from skeleton output. Each activity is modelled with a HMM. By linking multiple HMMs on common states, a compound HMM is formed. The purpose is to allow robots to be aware of human activity. \n\n\n### What are the different types of Hidden Markov Models?\n\nIn the typical model, called the **ergodic HMM**, the states of the HMM are fully connected so that we can transition to a state from any other state. **Left-right HMM** is a more constrained model in which state transitions are allowed only from lower indexed states to higher indexed ones. Variations and combinations of these two types are possible, such as having two parallel left-to-right state paths. \n\nHMM started with observations of discrete symbols governed by **discrete** probabilities. If observations are continuous signals, then we would use **continuous** observation density. \n\nThere are also domain-specific variations of HMM. For example, in biological sequence analysis, there are at least three types including profile-HMMs, pair-HMMs, and context-sensitive HMMs. \n\n\n### Could you explain forward algorithm and backward algorithm?\n\nEvery state sequence has a probability that it will lead to a given sequence of observations. Given T observations and N states, there are \\(N^T\\) possible state sequences. Thus, the complexity of calculating the probability of a given sequence of observations is \\(O(N^{T}T)\\). Both forward and backward algorithms bring down the complexity to \\(O(N^{2}T)\\) through **dynamic programming**. \n\nIn the forward algorithm, we consider the probability of being in a state at the current time step. Then we consider the transition probabilities to calculate the state probabilities for the next step. Thus, at each time step we have considered all state sequences preceding it. 
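Here's a compact sketch of that forward recursion, assuming the A, B and pi arrays from the earlier sketch:

```python
import numpy as np

def forward(O, A, B, pi):
    """Likelihood P(O | lambda) via the forward recursion, in O(N^2 T) time."""
    T, N = len(O), A.shape[0]
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, O[0]]                      # probability of each state at t=0
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, O[t]]  # fold all prior paths into N values
    return alpha[-1].sum()
```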
The algorithm is more efficient since it reuses calculations from earlier steps. Instead of keeping all path sequences, paths are folded into a forward trellis. \n\nBackward algorithm is similar except that we start from the last time step and calculate in reverse. We're finding the probability that from a given state, the model will generate the output sequence that follows. \n\nA combination of both algorithms, called forward-backward algorithm, is used to solve the learning problem. \n\n\n### What's the algorithm for solving HMM's decoding problem?\n\nViterbi algorithm solves HMM's decoding problem. It's similar to the forward algorithm except that instead of summing the probabilities of all paths leading to a state, we retain only one path that gives maximum probability. Thus, at every time step or iteration, given that we have N states, we retain only N paths, the most likely path for each state. For the next iteration, we use the most likely paths of current iteration and repeat the process. \n\nWhen we reach the end of the sequence, we'll have N most likely paths, each ending in a unique state. We then select the most likely end state. Once this selection is made, we backtrack to read the state sequence, that is, how we got to the end state. This state sequence is now the most likely sequence given our sequence of observations. \n\n\n### How can we solve the learning problem of HMM?\n\nIn HMM's learning problem, we are required to learn the transition (A) and observation (B) probabilities when given a sequence of observations and the vocabulary of hidden states. The **forward-backward algorithm** solves this problem. It's an iterative algorithm. It starts with an initial estimate of the probabilities and improves these estimates with each iteration. \n\nThe algorithm consists of two steps: \n\n + **Expectation or E-step**: We compute the expected state occupancy count and the expected state transition count based on current probabilities A and B.\n + **Maximization or M-step**: We use the expected counts from the E-step to recompute A and B.While this algorithm is unsupervised, in practice, initial conditions are very important. For this reason, often extra information is given to the algorithm. For example, in speech recognition, the HMM structure is set manually and the model is trained to set the initial probabilities. \n\n\n### Could you describe some tools for doing HMM?\n\nIn Python, *hmmlearn* package implements HMM. Three models are available: `hmm.GaussianHMM`, `hmm.GMMHMM` and `hmm.MultinomialHMM`. This package is also part of Scikit-learn but will be removed in v0.17. Stephen Marsland has shared Python code in NumPy and Pandas that implements many essential algorithms for HMM.\n\nIn R, *HMM* package implements HMM. It has functions for forward, backward, Viterbi and Baum-Welch algorithms. Another package *depmixS4* implements dependent mixture models that can be used to fit HMM to observed data. R-bloggers has an example use of depmixS4.\n\n## Milestones\n\n1913\n\nRussian mathematician A. A. Markov recognizes that in a sequence of random variables, one variable may not be independent of the previous variable. For example, two successive coin tosses are independent but today's weather might depend on yesterday's weather. He models this as a chain of linked events with probability assigned to each link. This technique later is named **Markov Chain**. 
\n\n1966\n\nBaum and Petrie at the Institute of Defense Analyses, Princeton, introduce the **Hidden Markov Model (HMM)**, though this name is not used. They state the problem of estimating transition and emission probabilities from observations. They use maximum likelihood estimate. \n\n1967\n\nAndrew Viterbi publishes an algorithm to decode information at the receiver in a communication system. Later named **Viterbi algorithm**, it's directly applicable to the decoding problem in HMM. Vintsyuk first applies this algorithm to speech and language processing in 1968. \n\n1970\n\nThe **Baum-Welch algorithm** is proposed to solve the learning problem in HMM. This algorithm is a special case of the Expectation-Maximization (EM) algorithm. However, the name HMM is not used in the paper and mathematicians refer to HMM as \"probabilistic functions of Markov chains\". \n\n1975\n\nJames Baker at CMU applies HMM to speech recognition in the DRAGON speech understanding system. This is one of the earliest engineering applications of HMM. HMM is further applied to speech recognition through the 1970s and 1980s by Jelinek, Bahl and Mercer at IBM. \n\n1989\n\nLawrence Rabiner publishes a tutorial on HMM covering theory, practice and applications. He notes that HMM originated in mathematics and was not widely read by engineers. Even when it was applied to speech processing in the 1970s, there were no tutorials to help translate theory into practice. \n\n2003\n\nHMM is typically used when the number of states is small but one research team applies it to large scale web traffic analysis. This involves hundreds of states and tens of millions of observations.","meta":{"title":"Hidden Markov Model","href":"hidden-markov-model"}} {"text":"# Cicada Principle\n\n## Summary\n\n\nPeriodical cicadas are insects of North America that mature into adulthood and emerge every 13 or 17 years. They come out together in their billions, thus making it difficult for predators to eat them all. What's interesting is their periods are **prime numbers**. This means that predators can't easily synchronize their own periods to that of the periodical cicadas. Another theory is that prime numbers prevent the emergence of hybrids from the cicadas with different periods. \n\nInspired by this natural phenomenon, now called Cicada Principle, web frontend designers have sought to use prime numbers to produce variations and patterns in their designs in a more efficient manner. Often CSS containing prime numbers are used to achieve this.\n\n## Discussion\n\n### Could you illustrate how prime numbers are useful in frontend design?\n\nLet's take two prime numbers 3 and 5. If we shade two rows based on these primes, we'll find that the shading will not fall on the same column until the 15th column. This is because 3 and 5 don't have any common factors other than 1. In fact, for this to happen, it's sufficient that the numbers are co-primes.\n\nIn the case of the periodical cicadas, the periods are 13 and 17. This means that it's only once in 13x17 = 221 years that the two different species will come out in the same year.\n\nFrontend designers who wish to create pseudorandom elements in their design need not actually generate random numbers or hardcode many variations. 
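A quick arithmetic check, as a rough sketch, shows why: overlapping tiles whose widths are co-prime only realign at the least common multiple of those widths.

```python
from math import lcm   # Python 3.9+

widths = [29, 37, 53]            # prime-numbered tile widths in pixels
print(lcm(*widths))              # 56869: the combined pattern repeats only every 56,869 px
```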
The use of large prime numbers will give them sufficient variations because patterns can be discerned only on a larger scale based on the multiples of those prime numbers.\n\n\n### Where and how can I apply the Cicada Principle in CSS?\n\n**Backgrounds** can be created without having any repeating patterns in them or showing gaps or seams between tiles. We can tile and overlap images of different widths, these widths having prime-numbered pixels. More efficiently, we can implement the same using CSS gradients. Michael Arestad has created interesting backgrounds by using prime numbers for background sizes and positions. \n\nIn general, Alex Walker proposed a \"stacking order model\", where many layers are stacked to create a background. The bottom layer can be small and repetitive since much of it will be obscured by higher layers. The topmost layer should have the largest dimension and also be thinly scattered (largest prime number in the group). It should also preferably not have eye-catching details. \n\nElement **borders**, including border radius, can be varied so that each element of a group gets a different border. These borders can also be animated for a mouse hover event. \n\nWe can use the principle for CSS **animations** too by having the durations as prime numbers, scaled if necessary by a suitable factor. \n\n\n### Could you illustrate an example of creating a CSS background based on Cicada Principle?\n\nLet's create a background by applying Cicada Principle to the `background-size`. By setting, the sizes to either 17px or 37px and then applying `linear-gradient` we get a repetitive pattern, which is not very interesting. \n\nWhen we combine both 17px and 37px widths, we obtain a more interesting background. However, this still shows some visual tiling. By adding another layer of width 53px, which is also a prime number, we get a background that's closer to what we want. \n\nTo create the gradients in the first place, we need to adjust the alpha values. Here too we can make use of prime numbers, with each layer getting a different alpha value for creating the gradients. \n\n## Milestones\n\nApr \n2011\n\nAlex Walker reads about periodical cicadas and gets the idea to use prime numbers for generating pseudorandom background patterns. He uses three images of different prime-numbered widths: 29, 37 and 53 pixels. When these three are overlapped and tiled, the resulting background will not repeat for 29x37x53 = 56,869 pixels. The three images together take up less than 7kB in size. \n\nJun \n2012\n\nEric Meyer uses CSS gradients for implementing the Cicada Principle for backgrounds. Compared to background images, this reduces requests to server and saves on download bandwidth. He calls these gradients **Cicadients**. \n\nJan \n2015\n\nLea Verou produces CSS animations using the Cicada Principle. \n\nSep \n2016\n\nCharlotte Jackson use Cicada Principle to create pseudorandom borders around images. She does this by adjusting the CSS `border-radius` using prime numbers in CSS selectors.","meta":{"title":"Cicada Principle","href":"cicada-principle"}} {"text":"# Word2vec\n\n## Summary\n\n\nWord2vec is a set of algorithms to produce word embeddings, which are nothing more than vector representations of words. The idea of word2vec, and word embeddings in general, is to use the context of surrounding words and identify semantically similar words since they're likely to be in the same neighbourhood in vector space. \n\nWord2vec algorithms are based on shallow neural networks. 
Such a neural network might be optimizing for a well-defined task but the real goal is to produce word embeddings that can be used in NLP tasks. \n\nWord2vec was invented at Google in 2013. Word2vec simplified computation compared to previous word embedding models. Since then, it has been popularly adopted by others for many NLP tasks. Airbnb, Alibaba and Spotify have used it to power recommendation engines.\n\n## Discussion\n\n### What's the key insight that lead to the invention of word2vec?\n\nBefore word2vec, a feedforward neural network was used to jointly learn the language model and word embeddings. This network had input, projection, hidden and output layers. The complexity is dominated by the mapping from projection to hidden layers. For N (=10) previous words, D-dimensional vectors (500-2000 dimensions), and hidden layer size H (500-1000), complexity is N x D x H. \n\nA recurrent neural network model removes the projection layer. Hidden layer connects to itself with a time delay. Complexity is now H x H. \n\nWord2vec does away with the non-linear hidden layer that was a bottleneck in earlier models. There's a tradeoff. We lose precise representation but training becomes more efficient. This simpler model is used to learn word vectors. The task of learning a language model is considered separately using these word vectors. Finally, not just past words but also future words are considered for context. When input words are projected, their vectors are averaged at the projection layer, unlike earlier models. \n\nFurther simplification of the softmax layer computation, enabled word2vec to be trained on 30 billion words, a scale that was not possible with earlier models. \n\n\n### What are the main models that are part of word2vec?\n\nLet's use a vocabulary of V words, a context of C words, a dense representation of N-dimensional word vector, an embedding matrix W of dimensions VxN at the input and a context matrix W' of dimensions NxV at the output. \n\nWord2vec has two models for deriving word embeddings: \n\n + **Continuous Bag-of-Words (CBOW)**: We take words surrounding a given word and try to predict the latter. Each word is a one-hot coded vector. Via an embedding matrix, this is transformed into a N-dimensional vector that's the average of C word vectors. From this vector, we compute probabilities for each word in the vocabulary. Word with highest probability is the predicted word.\n + **Continuous Skip-gram**: We take one word and try to predict words that occur around it. At the output, we try to predict C different words.\n\n### Could you describe the details of how word2vec learns word embeddings?\n\nWord2vec uses a neural network model based on word-context pairs. With each training step, the weights are adjusted with the goal of minimizing the loss function, that is, minimize the error between predicted output and actual output. An iteration uses one word-context pair. Training on the entire input corpus may be considered one training epoch. \n\nConsider the skip-gram model. A sliding window around the current input word is used to predict the words within the window. Once this iteration adjusts the weights, the window slides to the next word in the corpus. \n\nWord2vec is not a deep learning technique. In fact, there are no hidden layers, although it's common to refer to the embedding layer as hidden layer, or projection layer. 
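To make the training setup concrete, here's a minimal sketch of how skip-gram (centre, context) pairs might be generated with a sliding window; the sentence and window size are arbitrary examples.

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs using a symmetric sliding window."""
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        pairs.extend((center, tokens[j]) for j in range(lo, hi) if j != i)
    return pairs

print(skipgram_pairs("the quick brown fox jumps".split(), window=2))
# [('the', 'quick'), ('the', 'brown'), ('quick', 'the'), ('quick', 'brown'), ...]
```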
A typical pipeline involves selecting the vocabulary from a text corpus, sliding the window to select context, performing extra tasks to simplify softmax computation, and iterating through the neural network model. \n\n\n### Why is the softmax layer of word2vec considered computationally difficult?\n\nThe softmax layer treats the problem of selecting the most probable word as a multiclass classification problem. It computes the probability of each word being the actual word. Probabilities of all words should add up to 1. For skip-gram model, it does this for each contextual word. \n\nConsider a vocabulary of K words, and input and output vectors \\(v\\_w\\) and \\(v'\\_w\\) of word w. For skip-gram, softmax function is the probability of an output word given the input word, $$p(w\\_O|w\\_I) = \\frac{e^{{v'\\_{w\\_O}}^T\\,v\\_{w\\_I}}}{\\sum\\_{w=1}^{K} e^{{v'\\_w}^T\\,v\\_{w\\_I}}}$$\n\nWith a vocabulary of hundreds of thousands of words, computing the softmax probability for each word for each iteration is computationally expensive. **Hierarchical Softmax** solves this problem by doing computations on word parts and reusing the results. **Negative Sampling** is an alternative. It selects a few negative samples and computes softmax only for these and the actual outputs. Both these simplify computation without much loss of accuracy. \n\nSebastian Ruder gives a detailed explanation of different softmax approximation techniques. \n\n\n### What are other improvements to word2vec?\n\nWord2vec implementation has the ability to select a dynamic window size, uniformly sampled in range [1, k]. This has the effect of giving more weight to closer words. Smaller window sizes lead to similar *interchangeable* words. Larger window sizes lead to similar *related* words. \n\nWord2vec can also ignore rare words. In fact, rare words are discarded before context is set. This increases the effective window size for some words. In addition, we can subsample frequent words with the insight that being frequent, they are less informative. The net effect of this is that words that are far away could be topically similar and therefore captured in the embeddings. \n\n\n### What are some tips for those trying to use word2vec?\n\nDevelopers can read sample TensorFlow code for the CBOW model, sample NumPy code, or sample Gensim code.\n\nDesigned by Xin Rong, **wevi** is a useful tool to visualize how word2vec learns word embeddings. \n\nSebastian Ruder gives a number of tips. Use Skip-Gram Negative Sampling (SGNS) as a baseline. Use many negative samples for better results. Use context distribution smoothing before selecting negative samples so that frequent words are not sampled quite so frequently. SGNS is a better technique than CBOW. \n\n## Milestones\n\n2005\n\nMorin and Bengio come up with the idea of **hierarchical softmax**. A word is modelled as a composition of inner units, which are then arranged as a binary tree. Given a vocabulary of V words, probability of an output word is computed from softmax computation of inner units that lead to the word from the root of the tree. This reduces complexity from O(V) to O(log(V)). This idea becomes important later in word2vec models. In 2009, Mnih and Hinton explore different ways to construct the tree. \n\n2012\n\nGutmann and Hyvarinen introduce **Noise Contrastive Estimation (NCE)** as an alternative to hierarchical softmax. The basic idea is that a good model can differentiate data from noise using logistic regression. Mnih and Teh apply NCE to language modelling. 
This is similar to hinge loss proposed by Collobert and Weston in 2008 to rank data above noise. \n\nJan \n2013\n\nAt Google, Mikolov et al. develop **word2vec** although this name refers to a software implementation rather than the models. They propose two models: continuous bag-of-words and continuous skip-gram. They improve on earlier state-of-the-art models by removing the hidden layer. They also make use of hierarchical softmax, thus making this a **log-linear model**. Softmax uses Huffman binary tree to represent the vocabulary. They note that this speeds up evaluation by 2X. \n\nOct \n2013\n\nMikolov et al. improve on their earlier models by proposing **negative sampling**, which is a simplification of NCE. This is possible because NCE tries to maximize the log probability of the softmax whereas we are more interested in the word embeddings. Negative sampling is simpler and faster than hierarchical softmax. For small datasets, 5-20 negative samples may be required. For large datasets, 2-5 negative samples may be enough. \n\nJan \n2019\n\nInspired by word2vec, some researchers produce **code embeddings**, vector representations of snippets of software code. Called **code2vec**, this work could enable us to apply neural networks to programming tasks such as automated code reviews and API discovery. This is just one example of many advances due to word2vec. Another example is **doc2vec** from 2014. \n\nJun \n2019\n\nWord2vec is sequential due to strong dependencies across word-context pairs. Researchers show how word2vec can be trained on a GPU cluster by reducing dependency within a large training batch. Without loss of accuracy, they achieve 7.5 times acceleration using 16 GPUs. They also note that using Chainer framework, it's easy to implement CNN-based subword-level models.","meta":{"title":"Word2vec","href":"word2vec"}} {"text":"# BERT (Language Model)\n\n## Summary\n\n\nNLP involves a number of distinct tasks each of which typically needs its own set of training data. Often each task has only a few thousand samples of labelled data, which is not adequate to train a good model. However, there's plenty of unlabelled data readily available online. This data can be used to train a baseline model that can be reused across NLP tasks. **Bidirectional Encoder Representations from Transformers (BERT)** is one such model. \n\nBERT is **pre-trained** using unlabelled data on language modelling tasks. For specific NLP tasks, the pretrained model can be **fine-tuned** for that task. Pre-trained BERT models, and their variants, have been open sourced. This makes it easier for NLP researchers to fine-tune BERT and quickly advance the state of the art for their tasks.\n\n## Discussion\n\n### What's the typical process for using BERT?\n\nBERT is an evolution of self-attention and transformer architecture that's becoming popular for neural network models. BERT is an encoder-only transformer. It's **deeply bidirectional**, meaning that it uses both left and right contexts in all layers.\n\nBERT involves two stages: **unsupervised pre-training** followed by **supervised task-specific fine-tuning**. Once a BERT model is pre-trained, it can be shared. This enables downstream tasks to do further training on a much smaller dataset. Different NLP tasks can thus benefit from a single shared baseline model. In some sense, this is similar to transfer learning that's been common in computer vision. 
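As a rough sketch of the fine-tuning stage, this is how a pre-trained BERT might be adapted for a two-class classification task using the HuggingFace `transformers` package (mentioned later under developer resources); the hyperparameters and the toy example are placeholders, not prescriptions.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# One toy labelled example; a real task would fine-tune on a few thousand of these.
inputs = tokenizer("The movie was surprisingly good", return_tensors="pt",
                   padding=True, truncation=True)
labels = torch.tensor([1])

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # small learning rate for fine-tuning
outputs = model(**inputs, labels=labels)  # classification head sits on the [CLS] representation
outputs.loss.backward()
optimizer.step()
```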
\n\nWhile pre-training takes a few days on many Cloud TPUs, fine-tuning takes only 30 minutes on a single Cloud TPU. \n\nFor fine-tuning, one or more output layers are typically added to BERT. Likewise, input embeddings reflect the task. For question answering, an input sequence will contain the question and the answer while the model is trained to learn the start and end of answers. For classification, the `[CLS]` token at the output is fed into a classification layer. \n\n\n### Could you describe the tasks on which BERT is pre-trained?\n\nBERT is pre-trained on two tasks: \n\n + **Masked Language Model (MLM)**: Given a sequence of tokens, some of them are masked. The objective is then to predict the masked tokens. Masking allows the model to be trained using both left and right contexts. Specifically, 15% of tokens are randomly chosen for masking. Of these, 80% are masked, 10% are replaced with a random word, 10% are retained.\n + **Next Sentence Prediction (NSP)**: Given two sentences, the model predicts if the second one logically follows the first one. This task is used for capturing relationship between sentences since language modelling doesn't do this.Unlike word embeddings such as word2vec or GloVe, BERT produces contextualized embeddings. This means that BERT produces multiple embeddings of a word, each representing the context around the word. For example, word2vec embedding for the word 'bank' would not differentiate between the phrases \"bank account\" and \"bank of the river\" but BERT can tell the difference. \n\n\n### Which are some possible applications of BERT?\n\nIn October 2019, Google Search started using BERT to better understand the intent behind search queries. Another application of BERT is to recommend products based on a descriptive user request. Use of BERT for question answering on SQuAD and NQ datasets is well known. BERT has also been used for document retrieval. \n\nBERT has been used for aspect-based sentiment analysis. Xu et al. use BERT for both sentiment analysis and comprehending product reviews so that questions on those products can be answered automatically. \n\nAmong classification tasks, BERT has been used for fake news classification and sentence pair classification. \n\nTo aid teachers, BERT has been used to generate questions on grammar or vocabulary based on a news article. The model frames a question and presents some choices, only one of which is correct. \n\nBERT is still new and many novel applications might happen in future. It's possible to use BERT for quantitative trading. BERT can be applied to specific domains but we would need domain-specific pre-trained models. SciBERT and BioBERT are two examples. \n\n\n### Which are the essential parameters or technical details of BERT model?\n\nBERT pre-trained models are available in two sizes: \n\n + **Base**: 12 layers, 768 hidden size, 12 self-attention heads, 110M parameters.\n + **Large**: 24 layers, 1024 hidden size, 16 self-attention heads, 340M parameters.Each of the above took 4 days to train on 4 Cloud TPUs (Base) or 16 Cloud TPUs (Large). \n\nFor pre-training, a batch size of 256 sequences was used. Each sequence contained 512 tokens, implying 128K tokens per batch. The corpus for pre-training BERT had 3.3 billion words: 800M from BooksCorpus and 2500M from Wikipedia. This resulted in 40 epochs for 1M training steps. Dropout of 0.1 was used on all layers. GELU activation was used. \n\nFor fine-tuning, batch sizes of 16 or 32 are recommended. Only 2-4 epochs are needed for fine-tuning. 
Learning rate is also different from what's used for pre-training. Learning rate is also task specific. Dropout used was same as in pre-training. \n\n\n### How do I represent the input to BERT?\n\nBERT input embeddings is a sum of three parts: \n\n + **Token**: Tokens are basically words. BERT uses a fixed vocabulary of about 30K tokens. To handle rare words or those not in token vocabulary, they're broken into sub-words and then mapped to tokens. The first token of a sequence is `[CLS]` that's useful for classification tasks. During MLM pre-training, some tokens are masked.\n + **Segment/Sentence**: An input sequence of tokens can be a single segment or two segments. A segment is a contiguous span of text, not an actual linguistic sentence. Since two segments are packed into the same sequence, each segment has its own embedding. Each segment is terminated by `[SEP]` token. For example in question answering, question is the first segment and answer is the second.\n + **Position**: This represents the token's position within the sequence.In practice, input embeddings can also contain an **input mask**. Since sequence length is fixed, the final sequence may involve padding. Input mask is used to differentiate between actual inputs and padding. \n\nBeginners may wish to look at a visual explanation of BERT input embeddings. \n\n\n### What are some variants of BERT?\n\nBERT has inspired many variants: RoBERTa, XLNet, MT-DNN, SpanBERT, VisualBERT, K-BERT, HUBERT, and more. Some variants attempt to compress the model: TinyBERT, ALERT, DistilBERT, and more. We describe a few of the variants that outperform BERT in many tasks:\n\n + **RoBERTa**: Showed that the original BERT was undertrained. RoBERTa is trained longer, on more data; with bigger batches and longer sequences; without NSP; and dynamically changes the masking pattern.\n + **ALBERT**: Uses parameter reduction techniques to yield a smaller model. To utilize inter-sentence coherence, ALBERT uses Sentence-Order Prediction (SOP) instead of NSP.\n + **XLNet**: Doesn't do masking but uses permutation to capture bidirectional context. It combines the best of denoising autoencoding of BERT and autoregressive language modelling of Transformer-XL.\n + **MT-DNN**: Uses BERT with additional multi-task training on NLU tasks. Cross-task data leads to regularization and more general representations.\n\n### Could you share some resources for developers to learn BERT?\n\nDevelopers can study the TensorFlow code for BERT. This follows the main paper by Devlin et al. (2019). This is also the source for downloading BERT pre-trained models.\n\nGoogle has shared TensorFlow code that fine-tunes BERT for Natural Questions.\n\nMcCormick and Ryan show how to fine-tune BERT in PyTorch. HuggingFace provides `transformers` Python package with implementations of BERT (and alternative models) in both PyTorch and TensorFlow. They also provide a script to convert a TensorFlow checkpoint to PyTorch. \n\nIBM has shared a deployable BERT model for question answering. An online demo of BERT is available from Pragnakalp Techlabs.\n\n## Milestones\n\nJun \n2017\n\nVaswani et al. propose the **transformer** model in which they use a seq2seq model without RNN. The transformer model relies only on **self-attention**, although they're not the first to use self-attention. Self-attention is about attending to different tokens of the sequence. This would later prove to be the building block on which BERT is created. \n\nFeb \n2018\n\nPeters et al. 
use many layers of bidirectional LSTM trained on a language model objective. The final embeddings are based on all the hidden layers. Thus, their embeddings are deeply contextual. They call it **Embeddings from Language Models (ELMo)**. They show that higher-level LSTM states capture semantics while lower-level states capture syntax. \n\nOct \n2018\n\nDevlin et al. from Google publish on arXiv a paper titled *BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding*. \n\nNov \n2018\n\nGoogle **open sources pre-trained BERT models**, along with TensorFlow code that does this pre-training. These models are for English. Later in the month, Google releases **multilingual BERT** that supports about 100 different languages. The multilingual model preserves case. The model for Chinese is separate. It uses character-level tokenization. \n\nApr \n2019\n\nNAACL announces the **best long paper award** to the BERT paper by Devlin et al. The annual NAACL conference itself is held in June. \n\nMay \n2019\n\n**Whole Word Masking** is introduced in BERT. This can be enabled with the option `--do_whole_word_mask=True` during data generation. For example, when the word 'philammon' is split into sub-tokens 'phil', '##am' and '##mon', then either all three are masked or none at all. Overall masking rate is not affected. As before, each sub-token is predicted independent of the others. \n\nAug \n2019\n\nSince a BERT model has 12 or 24 layers with multi-head attentions, using it in a real-time application is often a challenge. To make this practical for applications such conversational AI, NVIDIA releases **TensorRT optimizations** for BERT. In particular, the transformer layer has been optimized. Q, K and V are fused into a single tensor, thus locating them together in memory and improving model throughput. Latency is 2.2ms on T4 GPUs, well below the 10ms acceptable latency budget. \n\nOct \n2019\n\n**Google Search** starts using BERT for 10% of English queries. Since BERT looks at the bidirectional context of words, it helps in understanding the intent behind search queries. Particularly for conversational queries, prepositions such as \"for\" and \"to\" matter. BERT's bidirectional self-attention mechanism takes these into account. Because of the model's complexity, for the first time, Search uses Cloud TPUs.","meta":{"title":"BERT (Language Model)","href":"bert-language-model"}} {"text":"# Requirements Development\n\n## Summary\n\n\nIt's hard to build the right product without knowing why it's needed or what features are expected of it. Requirements Development provides answers to these questions. It involves many stakeholders: users, designers, developers, architects, and managers. It identifies user needs, studies technical feasibility, derives exact requirements, and validates those requirements.\n\nRequirements are various: business vs technical, functional vs non-functional, system vs component, hardware vs software, etc. Getting the requirements right mitigates risk, avoids costly rework, and improves customer satisfaction. \n\nTraditionally requirements were identified at the start of the project. In Agile or in any iterative process, requirements are developed continuously as features are added/updated or bugs need to be fixed. 
Requirements development is so important that Karl Weigers once said, \n\n> If you don't get the requirements right, it doesn't matter how well you do anything else.\n\n## Discussion\n\n### What's the requirements engineering process?\n\nRequirements Engineering (RE) has two parts: \n\n + **Requirements Development (RD)**: Requirements are developed by understanding customer needs and stakeholder expectations. These are refined and documented as formal specifications. Once specifications are validated, they set the baseline or the starting point for the architecture and the design phases of the project. Getting the requirements right is an iterative process consisting of elicitation, analysis, specification and validation. With each iteration, gaps and defects are addressed.\n + **Requirements Management (RM)**: Requirements are rarely static. Changes to baseline requirements are common. These changes are managed with a clear process. Requirements are tied to project plans and schedules. Requirements are traced to design, code and tests.RD precedes RM when baseline requirements are defined. Thereafter, requirements are continuously refined even in later phases of the software development life cycle. \n\nIt's useful to clarify verification versus validation. **Requirements verification** checks that requirements are well formed. **Requirements validation** checks that requirements specify the right system fulfilling the needs of stakeholders. \n\n\n### What are the different types of requirements?\n\nFor architects and developers, the word \"requirements\" typically means product requirements. For project managers, it may instead mean project requirements. Project requirements will include development/testing infrastructure, tools, licenses, staff training, compliance, and more. \n\nProduct requirements can be defined at different levels. Business requirements capture why the product is being built. Managers and marketing folks are involved. Then business analysts come in to define user requirements from the perspective of users. Finally, functional requirements are those that the product should satisfy. Functional requirements are inputs to designers, developers and testers. \n\nThen there are non-functional requirements such as usability, reliability, performance, safety, etc. Data requirements, interface requirements and design constraints are also non-functional requirements. \n\nFor example, \"customer can pay for gas at the pump\" is a business requirement. User requirements include \"swipe credit card\" and \"request receipt\". Functional requirements include \"prompt user for card swipe\" and \"parse card's magnetic strip\". A non-functional requirement could be \"complete the entire payment procedure within 60 seconds\". \n\n\n### What's the expected output of requirements development?\n\n**Requirements specification** is the main output of requirements development. This may be a single document or organized as many documents: Business Requirements Specification, Software Requirements Specification (SRS), Stakeholder Requirements Specification, and System Requirements Specification. \n\nSpecifications may be captured as Microsoft Word or PDF documents. Alternatively, they're in spreadsheets, a database or in a custom requirements management tool. In any case, specifications must be well-organized, amenable to tracing and easily accessible by all stakeholders. \n\nDuring requirements analysis, various models, diagrams or tables might be employed. 
Use case diagrams, class diagrams, data flow diagrams, and state transition diagrams are examples. Specifications must link to these diagrams. \n\nFinally, let's understand **requirements versus specifications**. Requirements can be treated as *questions* and specifications as *answers*. We start with requirements that're rough statements. Through requirements analysis and validation, we refine these into formal specifications. In another interpretation, specification is a structured collection of requirements. \n\n\n### Could you explain requirements elicitation?\n\nRequirements elicitation identifies users, user groups and other stakeholders. It attempts to understand their real needs. This in turn sets the product scope. \n\nIt's not as simple as asking users what they want. Henry Ford once said, \"If I had asked people what they wanted, they would have said faster horses.\" Empathy is needed to see things from the user perspective. Build a rough prototype. Observe how users interact with it. This will likely provide useful insights. \n\nInterviews and user focus groups can help. For diverse perspectives from all stakeholders, *Joint Application Design (JAD)* workshops could be organized. User stories, use cases (aka scenarios) and storyboards are some techniques to elicit requirements. These are effective since they dive into the level of actors, needs and interactions. \n\nSometimes implementations may be wrongly written down as requirements. Asking \"why\" questions to understand the context will reveal the hidden requirements. In fact, requirements elicitation is really about answering the questions what, why, and who. \n\nRequirements elicitation is in fact nuanced. To call it requirements capture or gathering is perhaps an oversimplification. \n\n\n### Could you explain requirements analysis?\n\nRequirements analysis refines and adds details to the requirements identified via requirements elicitation. It's done iteratively until requirements can be clearly written down as specifications. High-level requirements are translated into technical product requirements at various levels. \n\nModelling is done, perhaps using Unified Modelling Language (UML). Visualizations help analyse the system from various perspectives. Some of these are data flow diagrams, entity relationship diagrams, state transition diagrams/tables, class diagrams, sequence diagrams, activity diagrams, process flow diagrams, decision trees, and event/response tables. Through analysis, incomplete, inconsistent or incorrect requirements can be uncovered. \n\nDifferent aspects or viewpoints can be modelled: organizational, data, behavioural, domain, and non-functional requirements. \n\n\n### What roles do business rules play in requirements development?\n\nBusiness requirements frame the problem and the motivation for starting the project. On the other hand, business rules are policies, standards, practices, regulations or guidelines that are defined at the organizational level. Products are expected to adhere to these rules. Business rules affect use cases. System state, input, and user role are some factors that're used to define business rules. \n\nOne study identified five types of business rules: facts, constraints, action enablers, inferences and computations. Atomic business rules are better than one complex rule that combines multiple rules. \n\nConsider a firm making a payment gateway. Due to an ongoing litigation, there may be a business rule prohibiting integration with a specific bank. 
The firm and another bank may be part of the same parent company. For this bank, there may be a rule waiving transaction fees. Due to local regulations, there may be a rule that sets the maximum value of a transaction. Such rules must be translated into functional requirements that developers need to implement.\n\n\n### Could you give some tips for better requirements development?\n\nBiases can affect requirements development. Some of these are optimism bias, overconfidence bias, strategic misrepresentation, anchoring, and more. \n\nAvoid copying a competitor and jumping straight into implementing features. Likewise, avoid building the opposite of a failed product. Real failure here is skipping requirements engineering and assuming requirements can be transferred from one product to another. \n\nA product packed with features but solicits user feedback very late is risky. Don't be afraid to involve customers to test prototypes and clarify requirements. Early adopters often like to give feedback and ideas. \n\nAvoid \"gold plating\", that is, implementing features that developers thought would be nice to have. Delivering something that the user doesn't need wastes resources. It could also confuse users.\n\nDocument requirements clearly in the specifications. Each requirement must be unambiguous, validated, necessary, complete, consistent, feasible, traceable, verifiable and understandable. \n\n\n### What resources can help me learn more about requirements development?\n\n*Software Requirements* by Wiegers and Beatty (2013) is a good book to read. Other books from Wiegers include *Software Requirements Essentials* and *More About Software Requirements*. An older book is *The Requirements Engineering Handbook* by Young (2004). \n\nThe International Requirements Engineering Board (IREB) certifies engineers in the RE discipline. One study guide is *Requirements Engineering Fundamentals* by Pohl and Rupp (2015). \n\n*ISO/IEC/IEEE 29148:2018* is the main standard to read. Also worth reading are its related standards *ISO/IEC/IEEE 15288:2023* (system life cycle processes) and *ISO/IEC/IEEE 12207:2017* (software life cycle processes). \n\nPenzenstadler's video lectures on Requirements Engineering are available on YouTube. Coursera's Requirements Engineering: Secure Software Specifications Specialization has five courses in RE with security as a focus.\n\nCMU SEI and INCOSE are two useful sources for guides and white papers. Search for the keyword \"requirements\" at their websites. As an example, we mention INCOSE's Guide to Writing Requirements (2023).\n\n## Milestones\n\n1975\n\nBrooks in his book *Mythical Man-Month* writes about the need to **consult users** when writing system specifications. Some important ideas include user feedback, iterative development, separation of specification and design, and user manual as the external specification of the product. Until then, users weren't considered as important to either identifying requirements or writing specifications. \n\n1977\n\nRoss and Schoman propose the use of the term \"requirements definition\" over the more widely used but rather ill-defined term \"requirements analysis\". To them, **Requirements Definition** \"encompasses all aspects of system development prior to actual system design.\" It deals with three things: context analysis (why the system is needed), functional specification (what the system is to be), and design constraints (how the system is to be constructed within defined boundary conditions). 
Multiple viewpoints (technical, operational, economic) must be considered. \n\n1979\n\nThe TRW Defense and Space group proposes the **Software Requirements Engineering Methodology** to address project failures due to poor requirements. The early evolution of requirements engineering up to this point is from systems engineering. \n\n1983\n\nThe IEEE Std 729-1983 **defines the word \"requirement\"**. The definition encompasses user need, contractual necessity, and the starting point for system development. This definition is revised in 1990 to include the aspect of documentation. \n\n1984\n\nIEEE publishes **IEEE Std 830-1984** titled *IEEE Guide to Software Requirements Specifications*. This standard is updated in 1993 and 1998 as *IEEE Recommended Practice for Software Requirements Specifications*. Also in 1984, the European Space Agency describes the Waterfall Model of software development. They separate user requirements from software requirements. They also state that user requirements is an input to defining software requirements. \n\n1993\n\nThe first **IEEE Symposium on Requirements Engineering** is held. The following year, the **IEEE International Conference on Requirements Engineering** is held. In 2002, these two are merged into the **IEEE International Requirements Engineering Conference**, which is subsequently organized annually. \n\n1994\n\nPohl proposes a **three-dimensional framework** for RE. Requirements have to be approached in the dimensions of specification, representation and agreement. They also note five factors that influence the RE process: methods/methodologies, tools, social aspects, cognitive skills, and economical constraints. \n\n1995\n\nCarroll's *Scenario-Based Design* describes how **user scenarios** can be used in requirements analysis. The idea of using scenarios is also seen in the research of other authors about the same time. Other complementary analysis techniques such as state transition diagrams and entity relationship diagrams were invented in the late 1970s. \n\n2010\n\nJarke et al. point out that RE has changed over the last 30 years. RE is no longer about software alone. It's includes business development, software engineering and industrial design. Challenges include massive reuse, COTS components, system integration, vendor-led requirements, fluid design, short iterations, and distributed requirements. Some of these are reiterated by Jantunen et al. (2019) and noted earlier by Reifer (2000). \n\n2011\n\n**ISO/IEC/IEEE 29148** titled *Systems and software engineering — Life cycle processes — Requirements engineering* is published as an international standard. This replaces earlier standards IEEE 830-1998, IEEE 1233-1998, and IEEE 1362-1998. It's updated in 2018 as **ISO/IEC/IEEE 29148:2018**. \n\n2022\n\nHehn and Mendez propose an artefact-based model that combines design thinking and RE. Design thinking focuses on the user perspective and uses low-fidelity prototypes to understand the problem better. RE uses methodologies and tools to specify the requirements in greater detail. The two approaches complement each other. While design thinking is diverging and multi-disciplinary, RE is converging and technical.","meta":{"title":"Requirements Development","href":"requirements-development"}} {"text":"# CSS Modules\n\n## Summary\n\n\nCSS was invented to style web pages. A typical web page contains many elements or components such as menus, buttons, input boxes, and so on. 
Styles defined in a CSS file are accessible to the entire web page in which that file is included. In other words, all style definitions have global scope. What if we want some styles to be visible only to one component of the page?\n\nCSS was defined to style documents, not UI components. The lack of modularity in CSS language makes it hard to maintain complex or legacy code. Developers are afraid to make code changes since it's easy to break something when CSS definitions are global. \n\nCSS Modules solves these problems by limiting the scope to components. CSS Modules is not an official standard. It's a community-led effort popularized by the ReactJS ecosystem.\n\n## Discussion\n\n### What are the problems that CSS Modules solve?\n\nThe problem of **global scope** is solved by CSS Modules since class names are local to the component. The same class name can be used in another component with different styling. Even though CSS as a standard has only global scope for names, CSS Modules leverages on tooling. Tools convert local names to globally unique names. \n\nDeeply nested CSS selectors result in what we call **high specificity**. This impacts performance. With CSS Modules, names are globally unique and locally specific. Hence, flat selectors are more than sufficient. Because CSS styles are now encapsulated within components, code becomes more **maintainable**. Code can be refactored. Dead code can be removed. \n\n\n### What are the essential features of CSS Modules?\n\nCSS Modules has the following features: \n\n + **Local Scope**: Class names and animation names are locally scoped by default.\n + **Global Scope**: Using `:global` switch, developer can choose to reuse some styles globally.\n + **Composition**: A selector can extend styles from another, thus promoting reuse of styles within local scope. Composition can't be used for global scope. A selector can be composed from a selector in global scope or selectors in other files.\n + **Naming**: Names should be in camelCase. Though traditional CSS naming convention is to use kebab-case, hyphens can cause problems when accessed within JavaScript. The use of camelCase simplifies the syntax.\n + **Integration**: CSS Modules should have tooling support to compile CSS to a low-level format called *Interoperable CSS (ICSS)*. It should also work nicely with CSS processors (Less, Sass, PostCSS), bundlers (Webpack) and JS frameworks (Angular, React).\n\n### Could you explain how CSS Modules work?\n\nCSS as a language doesn't support local scopes. All names have global scope. To overcome this limitation, developers use tools to automatically transform local names to globally unique names. These names are generated by the CSS Modules compiler. The compiler also generates a JS object to map old names to new names. Therefore, developers can continue to use local names in their component code. \n\nFor example, let's take a UI component called \"Cat\". Its styles are in file \"Cat.css\" where `.meow` is defined. It's possible that the same class is used in another component, let's say, \"WildCat\". To avoid this name conflict, CSS Modules compiler makes each name unique. \n\nCSS Modules is flexible. It doesn't impose a specific naming convention. In our example, the new class name is `.cat_meow_j3xk`, where the module name is used as a prefix and a hash value is used as a suffix. The mapping from old to new names goes into a JS object. 
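\n\nTo make this concrete, below is a minimal sketch of the flow, reusing the Cat example above. The file names, the React usage and the generated class name are illustrative; a bundler with CSS Modules enabled (such as webpack's *css-loader*) is assumed.\n\n```css\n/* Cat.css: written with simple local names */\n.meow {\n  color: tomato;\n  font-weight: bold;\n}\n```\n\n```js\n// Cat.js: the compiler rewrites .meow and hands us the mapping object\nimport React from 'react';\nimport styles from './Cat.css';\n\n// styles is roughly { meow: 'cat_meow_j3xk' }\nexport default function Cat() {\n  return <p className={styles.meow}>Meow!</p>;\n}\n```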
\n\n\n### What tools and plugins help with implementing CSS Modules?\n\nWebpack's *css-modules* or PostCSS's *postcss-modules* can be used to implement CSS Modules. PostCSS can be used along with task runners such as Gulp or Grunt. PostCSS is really using JS plugins to transform CSS. \n\nTo tell Webpack how to load CSS files, there are *css-loader* and *style-loader*. These may be enough to use CSS Modules with Angular, TypeScript and Bootstrap. \n\nWith Browserify, the plugin to use is *css-modulesify*. For use in Rails, there's *cssm-rails*. With JSPM, there's *jspm-loader-css-modules*. \n\nCSS Modules can be used within JS frameworks. For React, there's *react-css-modules* or *babel-plugin-react-css-modules*. CSS preprocessors such as SCSS can be used along with CSS Modules using *sass-loader*. Gatsby is another example where CSS Modules can be used. \n\nTo catch problems early, there's eslint-plugin-css-modules.\n\n\n### Are there alternatives to using CSS Modules?\n\nHistorically, CSS started as a single file for an entire app. Thanks to tooling such as Webpack's *css-loader*, it became possible to use one stylesheet file per component. Each component's folder contained its CSS and JS files. However, styles still had global scope. \n\nGlobal scope was partially solved by OOCSS, SMACSS, BEM, etc. **Block-Element-Modifier (BEM)** brought some modularity to CSS via a naming convention. It solved CSS specificity problem. However, class names were long and had to be named by developers. With CSS Modules, naming is taken care of by the build tools. During coding, developers use simpler and intuitive names. We can customize class names, including the use of BEM-style naming if desired. \n\n**CSS in JS** embeds CSS directly in a JS file. The CSS model is at component level rather than at document level. For example, in *JSS*, styling is defined as JS objects. In another library called *styled-components*, tagged template literals are used. Michele Bertoli has a curated comparison list of many CSS-in-JS techniques.\n\n\n### Isn't W3C standardizing CSS Modules?\n\nNo. CSS Modules is not a standard. It doesn't change the CSS specifications in any way. CSS definitions retain global scope but CSS Modules makes them \"behave locally\" by renaming them with unique names, which are then referenced by their components in HTML or JS. This is possible because of tooling support. \n\nBeginners may confuse CSS Modules with another concept called *CSS3 modules*. In June 2011, CSS2.1 became a W3C Recommendation. However, this process took 13 long years, mainly because the specification was a monolith. When work started on CSS3, it was therefore decided to split the specifications into separate modules. Each module can take its own path of standardization. These modules are not related to CSS Modules. \n\nA related concept is the use of *namespaces* in CSS. These refer to XML namespaces and how these can be used within CSS selectors. HTML `style` element has the `scoped` attribute for local scoping but this is now deprecated. \n\n\n### What are some criticisms of CSS Modules?\n\nIt's been said that CSS Modules does nothing more than automate class naming so as to avoid name collisions. Developers can still do wrong things without CSS Modules warning them about it. It's not clear if typo errors will be caught at compile time or runtime. You can't share constants across CSS and JS files. **TypeStyle** is one approach that solves some of these issues. It mimics CSS Modules pattern using JS objects. 
\n\nBack in 2016, one developer commented that CSS Modules is not ready for prime time. He mentions some limitations of using `@value` and `composes`. He states that using pseudo-selectors can be unreliable. \n\nThe use of camelCase that's not typical of CSS naming convention is seen as a limitation. While some frameworks may have plugins to solve some of these issues (such as *babel-plugin-react-css-modules*) it may prove difficult to make CSS Modules work with others. \n\n## Milestones\n\n2011\n\nFacebook releases a JavaScript library called **ReactJS**. The design approach is modular, with a web page being built from a hierarchy of UI components. Two years later the project is open sourced. \n\nApr \n2012\n\nVersion 0.1.0 of **css-loader** for Webpack is released on NPM repository. This gives control over how CSS is loaded into a project. This becomes useful when CSS is used directly within a UI component implemented in JS. \n\n2014\n\nChristopher Chedeau, a frontend developer at Facebook, lists a number of problems with CSS at scale. He states that while JS recommends avoiding global variables, CSS still uses global names. Facebook initially solves many of the issues by extending CSS and later by using **inline styles**. Styles are specified as JS objects within UI components. Mixing content and styling might seem like a bad approach but every component encapsulates its own styling without relying on global variables. The common name for this approach is **CSS in JS**. \n\nApr \n2015\n\nWebpack's css-loader starts supporting local scope with the use of `:local` switch. A month later some folks implement a module for PostCSS called **postcss-local-scope**, so that CSS styles are local by default even without using the switch. Only globals need to be explicitly specified with the `:global` switch. \n\nMay \n2015\n\nThe PostCSS module postcss-local-scope gets integrated into css-loader of Webpack. Meanwhile, initial commit of **CSS Modules** happens on GitHub. \n\nJun \n2015\n\nThe low-level file format that enables CSS Modules is specified separately as **Interoperable CSS (ICSS)**. This becomes the common format for module systems. The specifications is meant for loader implementers, not end users. Meanwhile, ICSS is implemented in loaders including css-loader (webpack), browersify, and jspm.","meta":{"title":"CSS Modules","href":"css-modules"}} {"text":"# Robot Framework\n\n## Summary\n\n\nRobot Framework is a framework that automates acceptance testing and acceptance test-driven development. Being generic in nature, the framework can also be used to automate business processes, often called Robotic Process Automation (RPA). \n\nThe core of Robot Framework is written in Python but libraries extending it can be in Python or Java. The framework is independent of operating system and application. It's open source and is sponsored by the Robot Framework Foundation. \n\nTest cases in Robot Framework are written using keywords. Keywords themselves are abstracted away from their implementation. This promotes reuse of keywords across tests and easier maintenance of tests, particularly in large projects.\n\n## Discussion\n\n### What are the benefits of using Robot Framework?\n\nSince Robot Framework is keyword-driven, it has the benefit of separating high-level descriptions of tests from the low-level implementation details. Test writers may not be programmers. Since keywords are closer to English than programming, test writers find it easier to write and maintain tests. 
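\n\nAs a minimal sketch of what such a keyword-driven test could look like (the `Create User` keyword and its trivial implementation here are illustrative):\n\n```robotframework\n*** Test Cases ***\nValid User Can Be Created\n    Create User    alice    s3cret\n\n*** Keywords ***\nCreate User\n    [Arguments]    ${username}    ${password}\n    # A real implementation would drive a browser or an API;\n    # built-in keywords keep this sketch self-contained.\n    Should Not Be Empty    ${password}\n    Log    Created user ${username}\n```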
Keywords can be reused across tests. \n\nTests are easy to read since they often take a tabular form. In this form, the first column contains keywords. Other columns contain arguments to each keyword. For example, \"Create User\" is a keyword that takes username and password as arguments. When implemented, this can translate to launching a web browser, clicking a button, entering the data, and confirming the request to create a new user. \n\nKeywords are composable. This means that keywords themselves can depend on other keywords. This allows us to create tests at different levels of abstraction. \n\nSince implementation is kept separate, test cases can be developed agnostic of any programming language. Testing can be planned at an early stage even before any implementation is ready. \n\n\n### What are the main features of Robot Framework?\n\nWhile keyword-driven tests are common, Robot Framework can also be used to create data-driven and behaviour-driven tests. It can also be used for automating any business process, and thus not limited to testing. \n\nTest cases can be organized into test suites. Execution can be based on test suites, test cases, or tags, all of which can be specified by name or pattern. Test suites and test cases can have setup and teardown. \n\nThe framework produces neat reports and logs. It also allows for debugging. \n\nDevelopers can extend the framework with their own keywords and libraries. \n\nTest cases can be created in a single text file but they can also be created within the Robot Framework IDE (RIDE). Some popular IDEs can integrate with the framework including PyCharm, RED (Robot Editor) for Eclipse, and Robot Framework Intellisense for VS Code.\n\n\n### Could you list some built-in keywords used in Robot Framework?\n\nOnline documentation gives complete descriptions of all built-in keywords along with accepted arguments. We list a few of them: \n\n + **Conditionals**: Keyword Should Exist, Length Should Be, Should Be Empty, Should Be Equal, Should Contain, Variable Should Exist\n + **Control Flow**: Continue For Loop, Exit For Loop, Repeat Keyword, Return From Keyword\n + **Conversions**: Convert to Binary, Convert to Boolean, Convert to Bytes, Convert to Integer\n + **Executions**: Call Method, Evaluate, No Operation, Run Keyword\n + **Library**: Get Library Instance, Import Library, Reload Library, Set Library Search Order\n + **Logging**: Log, Log Many, Log To Console, Log Variables, Set Log Level, Set Test Message\n + **String Operations**: Catenate, Should End With, Should Match, Should Match Regexp, Should Not Be Equal As Strings\n + **Timing**: Get Time, Run Keyword If Timeout Occurred, Sleep, Wait Until Keyword Succeeds\n + **Variables**: Get Variable Value, Import Variables, Set Global Variable, Set Suite Variable, Set Test Variable\n + **Verdict**: Fail, Fatal Error, Pass Execution, Pass Execution If\n\n### Where does Robot Framework fit within a test automation architecture?\n\nThe job of the framework is to read and process data, execute test cases, and generate reports and logs. The core of the framework doesn't know anything about the system under test (SUT). Actual interaction with SUT is handled by various libraries. Libraries themselves rely on application interfaces or low-level test tools to interact with SUT. \n\nLet's take the example of testing a web application. Suppose Selenium is used for testing the application. Robot Framework doesn't directly control interactions with the web browser or databases. 
*SeleniumLibrary* and *DatabaseLibrary* are two libraries that manage these interactions. These libraries expose relevant keywords that can be used in test cases without worrying about how they are implemented. \n\n\n### Could you share some best practices when writing tests in Robot Framework?\n\nTest cases should be easy to understand. Use descriptive names for test suites and test cases. Likewise, keyword names should clear and descriptive. A common convention is to use title case for keywords. For example, use `Input Text` rather than `Input text`. For variables, use lowercase for local scope and uppercase for global scope. \n\nDocument each test suite. Describe the purpose of tests in that suite and the execution environment. Include links to external documents. At test case level, documentation may not be required. The use of suitable tags is often more useful. \n\nTests within a suite should be related. Don't have too many tests in a single file, unless they are data driven. Each test should be independent of others and should rely only on setup and teardown. Each test case should test something specific. A large test can possibly cover an end-to-end scenario. \n\nIf there are user-defined keywords, document the arguments and return values. Between keywords, assign return values to variables, and then pass these variables as arguments. \n\nAvoid sleeps. Instead, wait for an event with a timeout. \n\n\n### What are some useful resources and tools for Robot Framework?\n\nThe official Robot Framework website is a good starting point. Those new to writing tests, can study the examples there. The User Guide is another useful resource. \n\nBeginners should get familiar with some of the standard libraries including Builtin, OperatingSystem, String, Process, DateTime, Collections, Screenshot, and more. The framework has been extended by many third-party libraries. Choose what suits your application. There are libraries to interface or work with Android, Eclipse, databases, Selenium, Django, SSH, FTP, HTTP, MQTT, and more. \n\nAmong the useful tools are Rebot for generating logs and reports; Tidy for cleaning or formatting test data files; Libdoc and Testdoc for documentation. Pabot is a tool for parallel execution. Robot Corder can record and playback browser test automation. You can choose among various editors and build tools. There's also a Jupyter kernel. \n\n## Milestones\n\n2004\n\nAt the Helsinki University of Technology, Paul Laukkanen/Klärck starts looking into large-scale test automation frameworks. As part of his Master's thesis (published in 2006), he creates a **keyword-driven** prototype framework. \n\n2005\n\nNokia Networks is on the lookout for a generic test automation framework. Robot Framework is born based on Klärck's work. \n\nJun \n2008\n\nNokia Networks open sources Robot Framework with the release of version 2.0. \n\nDec \n2012\n\nVersion 1.0 of **Robot Framework IDE (RIDE)** is released on GitHub. RIDE simplifies the writing and execution of automated tests. Earlier version v0.40, also available on GitHub, can be traced to January 2012. Code before this was hosted at Google Code. \n\nDec \n2015\n\nRobot Framework 3.0 is released. With this release, **Python 3** is supported. \n\nNov \n2016\n\nStandard *ISO/IEC/IEEE 29119-5:2016, Part 5: Keyword-driven testing* is published. This underscores the growing importance of keyword-driven testing. \n\nJan \n2018\n\nCalled **RoboCon**, the first annual Robot Framework Conference is organized in Helsinki, Finland. 
\n\nDec \n2018\n\nRobot Framework 3.1 is released. With this release, **Robotic Process Automation (RPA)** is supported. This means that the framework can be invoked to automate *tasks* and not just *tests*.","meta":{"title":"Robot Framework","href":"robot-framework"}} {"text":"# Chaos Engineering\n\n## Summary\n\n\nDistributed software systems are an integral part of today's world. Many organizations such as Amazon and Netflix are serving billions of users through such distributed systems. These systems are inherently complex and chaotic. They have many interacting parts whose collective behaviour can be unpredictable. Moreover, frequent deployments add to the uncertainty. The most significant weaknesses should be addressed proactively, before they affect customers in production and lead to loss of revenue. \n\n**Chaos Engineering** addresses these challenges by injecting failures into the system in a controlled manner. We then observe how resilient is the system. Where we find significant flaws, we seek solutions. Chaos Engineering leads to more failure-resistant systems. \n\nIt's been said that, \n\n> Chaos Engineering is the discipline of experimenting on a system in order to build confidence in the system’s capability to withstand turbulent conditions in production.\n\n## Discussion\n\n### What's the context in which Chaos Engineering is relevant?\n\nIn a distributed system, even when each component is well-tested, there could be problems when these components come together. Distributed systems are by nature vulnerable to network connectivity issues of low bandwidth, high latency, packet loss or complete link failure. Each component can also fail in unexpected ways. It's hard to test for many real-world failures in a typical software development lifecycle. \n\nIt's possible to design a system to be fault tolerant. Even when one component fails, this should affect the system minimally. The system as a whole is designed to be better than the sum of its parts. But design alone isn't enough. We need to be confident that the system will perform despite failures. This is where Chaos Engineering becomes relevant. \n\nChaos Engineering starts by establishing normal steady state behaviour in terms of observable metrics. Then we inject failures in a controlled manner and record the same metrics. The hypothesis is that such failures don't affect the system. If the hypothesis fails, we know that we need to improve the system to handle those specific failures. \n\n\n### What are the principles of Chaos Engineering?\n\nThe main principles of Chaos Engineering are as follows: \n\n + **Build a Hypothesis around Steady State Behavior**: Measure system output over a short period of time. This is the steady state. It indicates normal behavior. Hypothesize that the steady state will continue during experiments. Steady state should be based on measurable output, not internal attributes of the system. Throughput, error rates, latency percentiles, etc. could be metrics that define steady state behavior.\n + **Vary Real-world Events**: Inject real-world events such as server crashes, malformed responses, or traffic spikes. Observe what happens. Compare against the steady state. Any deviation disproves the hypothesis.\n + **Run Experiments in Production**: Prefer to experiment directly in production. This ensures authenticity and relevance of the currently deployed system.\n + **Automate Experiments to Run Continuously**: Running experiments manually is unsustainable. 
Automate the process.\n + **Minimize Blast Radius**: Ideally, experiments shouldn't affect customers. At least, minimize the negative impact on customers.\n\n### Could you share more details into implementing Chaos Engineering?\n\nAmazon uses number of orders as a metric since they found that page load times affect this metric . Netflix uses the number of times a user clicks play, which is affected by system failures. Given that metrics are central to Chaos Engineering, they should be easy to measure, report and visualize. \n\nWhen metrics show problems, monitor how long it took to notice the problem, notify engineers, and do self-recovery. Do a post-mortem of every problem: what happened, what was the impact, why it occurred, and how to prevent it. Prioritize fixing these problems over developing new features. \n\nExperiments could inject complete failures or affect performance in any component. Terminate the recommendation engine, the caching service or the load balancer. Increase inter-service latency or packet drops. Failures can even be at the UI layer. If a UI widget fails to load, do the other widgets expand to fill up the extra space? Get the whole team involved since each one is likely to propose a different type of experiment. Study the specifications to identify possible experiments. \n\n\n### What are some myths about Chaos Engineering?\n\nThe following will describe and dispel some myths about Chaos Engineering: \n\n + **Chaos**: It's not about introducing chaos. Experiments are done in a controlled manner. Engineers select which systems should be affected and to what extent. They monitor continuously. If customers get affected, the experiment is stopped.\n + **Reliability**: Apart from adding new features, developers also care about reliability. Developers perform unit testing. Chaos Engineering complements their efforts by finding problems in real-world scenarios. But it's not about testing in production. We expect well-tested systems to come into production.\n + **Tooling**: Chaos Engineering doesn't require any specialized hardware/software tools. In most cases, access to the host OS or containers is adequate.\n + **Observability**: Basic metrics can suffice, often readily available in cloud platforms. We don't need to wait to implement complex metrics collection.\n + **Scale**: Chaos Engineering is not just for large distributed systems. It could be applied to even monoliths that need to be understood better.\n + **ROI**: Teams need to invest time upfront but this is worthwhile. It saves time later by minimizing production outages and troubleshooting. Chaos Engineering gives insights into optimal use of resources, increase productivity, and grow the business.\n\n### What is Chaos Monkey?\n\nChaos Monkey is a software tool developed at Netflix that randomly simulates failures of production instances. In the world of microservices, it should be possible to lose an instance, and replace that with another instance without loss of application functionality or consistency. Instances are meant to be stateless; that is, they don't store data. Depending on the traffic load, instances are meant to be created and destroyed at will. Chaos Monkey validates this capability. \n\nImagine a wild weaponized monkey \"randomly shooting down instances and chewing through cables.\" This is where we get the name Chaos Monkey. In the early years of Netflix, it was common practice to run Chaos Monkey during business hours, monitor for changes and identify weaknesses. 
This led to better auto-recovery mechanisms. \n\nThe myth that Chaos Engineering breaks things in production comes from the early use of Chaos Monkey. Today, we have many more tools to exercise Chaos Engineering. The randomness of Chaos Monkey is a useful starting point when the system's a black box. With better knowledge of how the system works, more fine-grained and controlled experiments can be done. \n\n\n### What is the Simian Army and what are its different members?\n\nThe Simian Army is a suite of failure-inducing tools designed to add more functionalities beyond Chaos Monkey. Some tools inject failures while others take proactive actions to avoid future failures. Its members include: \n\n + **Latency Monkey**: It causes artificial delays or even service shutdowns in RESTful client-server communications.\n + **Conformity Monkey**: It finds out instances that don't adhere to predefined rule sets and shuts them down.\n + **Doctor Monkey**: It performs health checks (CPU load, memory usage, etc.) to detect unhealthy instances and removes them.\n + **Janitor Monkey**: It ensures that the cloud environment is running free of clutter and waste.\n + **Security Monkey**: It finds security violations or vulnerabilities and terminates the violating instances.\n + **10–18 Monkey**: It detects configurations and runtime problems in instances that are accessible across multiple geographic regions, involving multiple languages and character sets.\n + **Chaos Gorilla**: It simulates an outage of an entire AWS availability zone. Services should automatically rebalance to the other active availability zones.\n + **Chaos Kong**: It simulates region outages. An AWS Region has multiple availability zones.\n\n### What are the benefits of Chaos Engineering?\n\nChaos Engineering has different benefits depending on the perspective: \n\n + **Customer**: Chaos Engineering predicts or prevents failures before they happen. This results in increased availability and durability of services.\n + **Business**: Chaos Engineering helps in preventing extremely large losses in revenue and maintenance costs, which can occur due to failures in production hours of businesses.\n + **Technical**: The insights from experiments help reduce incidents, increase understanding of system failure modes, improve system design and detect high-severity incidents faster.A 2021 study reported that increased availability, lower mean time to resolution (MTTR), lower mean time to detection (MTTD), fewer bugs in production, and fewer outages are some of the benefits. Those who practise Chaos Engineering can achieve more than 99.9% service availability. \n\nIn 2015, Amazon’s DynamoDB service experienced an availability issue in their US-EAST-1 region. This issue caused many other additional AWS services to fail. It resulted in the unavailability of some of Internet's biggest sites and applications. However, Netflix services were affected only in a minor way, thanks to their Chaos Engineering practices. \n\n\n### Beyond Netflix, how has the industry adopted chaos engineering?\n\nIn 2019, a DevOps and Cloud InfoQ Trends report showed that chaos engineering was emerging from the \"innovator adoption\" stage to the \"early adoption\" stage. The primary adopters of chaos engineering have been e-commerce and big tech. Downtime directly impacts revenue in e-commerce. Big tech is about better returns from better reliability. 
\n\nBy 2020, many businesses from large financial institutions to health care organizations were adopting Chaos Engineering in their culture. Big organizations such as Uber, JP Morgan Chase and GrubHub have also adopted chaos engineering to maximize their service quality. \n\nSingapore's DBS Bank tried out tools Pumba, Toxiproxy and Vizceral. Their approach was to do Chaos Engineering in non-production environments and collect only metrics in production. They still got many insights and made their applications more fault tolerant. They applied Chaos Engineering across the entire software development lifecycle. \n\n\n### What are some best practices in Chaos Engineering?\n\nBefore starting on Chaos Engineering, design the system to be resilient at many levels: infrastructure, networking, data, application, people and culture. Adopt architectural patterns to achieve this. Three techniques adopted at Netflix are timeouts, retries and fallbacks. \n\nWhen proposing a hypothesis, start with what you know and what you understand. Then move towards what you don't know and don't understand. A useful question to ask is, \"What could go wrong?\" Inject only one failure at a time.\n\nA company can have a dedicated Chaos Engineering team of 2-5 engineers. But Chaos Engineering is not the responsibility of this team alone. It's been observed that some teams are quick to embrace it: Traffic Team (e.g. Nginx, Apache, DNS), Streaming Team (e.g. Kafka), Storage Team (e.g. S3), Data Team (e.g. Hadoop/HDFS), and Database Team (e.g. MySQL, Amazon RDS, PostgreSQL). \n\n## Milestones\n\n2000\n\nIn the early part of this decade, Amazon creates a program named *GameDay*. Its purpose is to inject failures into critical systems, and then use monitoring tools and alerts to obtain greater insight into how well Amazon's retail website responds to these failures. GameDay brings out architectural flaws and other defects. While initially successful, the program is later halted due to its impact on customer-facing services. \n\n2010\n\nNetflix Engineering Team creates **Chaos Monkey**. It's created in response to Netflix's move from physical infrastructure to AWS cloud infrastructure. Chaos Monkey is meant to test the capability that the loss of an instance doesn't affect the Netflix streaming experience. Chaos Monkey has its limitations. It requires Spinnaker and MySQL. It lacks recovery capabilities and a user interface. \n\n2011\n\nAfter the success of Chaos Monkey, the Netflix Engineering team develops **Simian Army**. The Simian Army adds additional failure injection modes allowing developers to test the system with different failures. In 2012, Netflix makes Chaos Monkey publicly available by sharing its source code on GitHub. \n\n2014\n\nNetflix decides to create a new role in the organization: **the Chaos Engineer**. Bruce Wong coins the term and Dan Woods shares it with the greater engineering community via Twitter. In October, Netflix shares in a blog post that they're using an internal solution called **Failure Injection Testing (FIT)**. FIT allows engineers to break things but control the impact with precision. \n\n2016\n\nKolton Andrus and Matthew Fornaciari establish *Gremlin*, the world's first **managed enterprise Chaos Engineering solution**. Gremlin becomes publicly available in late 2017 with multiple failure injection modes. \n\nOct \n2016\n\nAt the *IEEE International Symposium on Software Reliability Engineering*, Netflix engineers present their internal platform that automates chaos experiments. 
They call it **Chaos Automated Platform (ChAP)**. It improves on FIT by routing traffic between control and experimental clusters. These clusters are created with the same configuration as production clusters. Changes in metrics are more easily observed and compared. Complex experiments can be conducted without impacting customers. \n\n2018\n\nGremlin launches *Chaos Conf*, world's **first large-scale conference** on Chaos Engineering. \n\n2020\n\nAmazon Web Services (AWS) adds Chaos Engineering to the reliability pillar of the AWS Well-Architected Framework (WAF). AWS announces Fault Injection Simulator (FIS), a fully managed service for natively running chaos experiments on AWS services. \n\n2021\n\nGremlin publishes the first ever *State of Chaos Engineering* report that shows how the practice of Chaos Engineering has grown among organizations, key benefits of Chaos Engineering, how often top performing teams run chaos experiments, and more.","meta":{"title":"Chaos Engineering","href":"chaos-engineering"}} {"text":"# Git Hooks\n\n## Summary\n\n\nHooks in Git are **executable scripts** that are triggered when certain events happen in Git. It's a way to customize Git's internal behaviour, automate tasks, enforce policies, and bring consistency across the team. \n\nFor example, hooks can check that passwords or access tokens are not committed, validate that commit messages conform to an agreed format, prevent unauthorized developers from pushing to a branch, update local directories after a checkout, and so on.\n\nGit supports both client-side and server-side hooks. Hooks can be written in any programming language though it's common to use Bash, Perl, Python or Ruby.\n\n## Discussion\n\n### What use cases can benefit from Git Hooks?\n\nThe figure illustrates local add/commit followed by a push to the remote repository. Also shown are the relevant hooks. Each hook is given a specific name. Hooks are invoked **by naming convention** before or after specific events.\n\nThe `pre-commit` hook is called when the commit procedure begins. This hook can format the code against the project's styling guide, run linting or execute unit tests. By checking for missing semicolons, trailing whitespace, and debug statements code reviews can focus on more important issues. Hooks `prepare-commit-msg` and `commit-msg` can be used to format/validate the commit message. \n\nAfter a successful commit, the hook `post-commit` could be used to send notifications. More commonly, when collaboration is via a remote repository, notifications are sent from the `post-receive` hook that's triggered after all references are updated. This hook can also deploy the latest changes to production. The `update` hook that runs once per branch can be used to enforce access control, that is, only authorized users can push to a branch. \n\n\n### What are client-side and server-side hooks?\n\nClient-side hooks run on developer machines whereas server-side hooks run on the server that hosts the repository. Committing, merging, rebasing, applying patches, and checkout are common Git operations that developers do on their development machines. Any of these commands can trigger their relevant client-side hooks. Server-side hooks are triggered by the `git-receive-pack` command, which is a result of developers pushing changes to the remote repository on a cloud-hosted Git server. \n\nDevelopers have full control of client-side hooks. Developers can choose to disable all client-side hooks if they wish. 
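\n\nFor illustration, a minimal client-side `pre-commit` hook might look like the sketch below; the specific check is only an example. Placed at `.git/hooks/pre-commit` and made executable, it aborts the commit by exiting with a non-zero status.\n\n```bash\n#!/bin/sh\n# Reject commits whose staged changes add console.log() calls (illustrative check).\nif git diff --cached -U0 | grep -E '^[+].*console[.]log[(]' >/dev/null; then\n    echo 'pre-commit: remove console.log() calls before committing.' >&2\n    exit 1  # non-zero exit aborts the commit\nfi\nexit 0\n```\n\n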
If a project's best practices need to be strictly enforced, server-side hooks are the way to do it. \n\nSome Git repository providers impose restrictions on the use of server-side hooks on their cloud-based servers. For example, GitHub's free tier doesn't allow developers to define server-side hooks though GitHub Enterprise Server supports them. However, there are alternatives to achieve server-side automation such as webhooks, GitHub Apps/Actions, GitLab CI/CD, GitLab push rules, etc. \n\n\n### What are pre- and post- Git Hooks?\n\nClient-side and server-side hooks can be either `pre-` and `post-` hooks. The `pre-` hooks can be used to validate stuff before allowing the requested operation. For example, `pre-commit` hook executes at the start of a commit workflow. If the commit is deemed invalid for any reason, the hook can return a non-zero value. This terminates the commit operation. By returning zero, the hook states that the commit operation can continue, including the execution of other relevant hooks. \n\nThe `post-` hooks execute after the requested operation is done. Such a hook's return value doesn't matter since the operation is already complete. For example, `post-commit` executes after a successful commit. These hooks are typically used to send emails or other notifications. They may also be used to trigger CI/CD pipelines. For example, `post-receive` could be used to deploy the updated code to production. \n\n\n### What options help skip the execution of hooks?\n\nIt's possible to skip some hooks if their associated commands are called with certain options. Hooks `pre-commit`, `pre-merge-commit`, `prepare-commit-msg`, and `commit-msg` can be skipped if `commit` and `merge` commands are invoked with `--no-verify` option. \n\nThe hook `post-checkout` can be skipped if `clone` and `worktree add` commands are called with `--no-checkout` option. However, this hook is always called for `checkout` and `switch` commands. \n\n\n### What essentials should a developer know to use Git Hooks?\n\nBy default, hooks are stored in `.git/hooks` within the project's repository. Git populates this folder with sample scripts that developers can use as a starting point. However, the file suffix `.sample` must be removed and scripts must be made executable. Developers can rewrite the scripts in their preferred language. Accordingly, the shebang (`#!` first line in script) must be updated with the correct path to the language. \n\nIf these sample scripts are deleted by mistake, we can retrieve them from Git templates folder. This may be `/usr/local/git-core/templates/hooks` (Linux) or `C:\\Program Files\\Git\\mingw64\\share\\git-core\\templates\\hooks` (Windows). In fact, this copying from templates folder to the project's `.git/` folder is what happens when we run `git init`. \n\nWhile hooks are triggered in response to specific events, we can also manually trigger the execution of a hook. The command for this is `git hook run `. For example, we could trigger one hook from another hook. This command is a low-level internal helper command. \n\nBeginners can start by reading the official documentation on Git Hooks and Chapter 14 of Loeliger's book on Git. \n\n\n### What does it take to create and maintain reusable Git Hooks?\n\nIt's a good practice to commit hooks as part of the repository so that all developers reuse and execute the same hooks. However, it's not permitted to store them within a `.git/hooks` folder within the repository. 
One approach is store them in another folder, say `hooks`, and then create a soft link from `.git/hooks` to `hooks` folder after cloning the repository. Another approach is to configure the path to the hooks with the command `git config core.hooksPath `. \n\nTo reuse across projects, hooks can be in a separate repository and cloned locally. Configure the hooks path globally to that folder: `git config --global core.hooksPath `. This configuration variable has been available since Git version 2.9.0. \n\nThere's also a framework named pre-commit to manage pre-commit hooks. This framework includes dozens of tested pre-commit hooks written in Python that developers can reuse in their own projects. These include syntax checking (`check-json`, `check-yaml`), fixing problems (`fix-byte-order-marker`, `trailing-whitespace`), protecting secrets (`detect-private-key`, `detect-aws-credentials`), enforcing policies (`no-commit-to-branch`, `forbid-new-submodules`), and more. \n\nThe githooks.com lists many useful hooks and related tools that developers can use.\n\n\n### How do I configure my project environment to use Git Hooks?\n\nNo special configuration is needed other than updating `.git/hooks` folder or configuring `core.hooksPath` variable. However, many developer tools attempt to simplify this further.\n\nIn Maven, a plugin can setup the hooks path. In Gradle, we can have a dependent build script to do this setup. In Node.js, Husky project allows us to wire up all the hooks in the `package.json` file. In PHP, hooks can be set up in `composer.json` file via the `post-install-cmd` configuration. \n\nIn different languages, frameworks are available to manage hooks: pre-commit (Python), Overcommit (Ruby), Lefthook (Ruby or Node.js). \n\n\n### What are some criticisms of Git Hooks?\n\nHooks change Git's default behaviour. If a hook does something unusual, it may confuse developers new to the project. Moreover, hooks can be buggy and make things counterproductive. For this reason, some recommend using Git aliases and invoking scripts directly rather than relying on hooks. \n\nGit doesn't provide any easy mechanism to share hooks among developers or from one repository to another. No doubt this is more secure but at the expense of convenience and reusability. Hooks problem with installation, maintainability, and reusability is to some extent solved by frameworks such as *pre-commit*. \n\nHooks are powerful. Developers may be tempted to run unit tests before each commit. This can slow things quite a bit and discourage developers from committing often. This can lead to data loss. \n\nServer-side hooks are not universally enabled by cloud-hosted Git repository service providers. \n\n## Milestones\n\n2005\n\nLinus Torvalds creates Git as a replacement to BitKeeper that's a distributed version control system. Even in Git v1.0.0 (December), **hooks** are included as a feature. \n\nMar \n2007\n\nHook `post-receive` is introduced to replace `post-update`. The latter is expected to be deprecated from v1.6.0. The hook `post-receive` is more useful since it has access to both old and new references; `post-update` knows only the new references. \n\nSep \n2007\n\nGit 1.5.4 is released. In this release, `git merge` command calls the `post-merge` hook following a successful merge. \n\nFeb \n2008\n\nGit 1.6.0 is released. This release ships sample hook scripts with the **suffix** `.sample` since on some filesystems it's not sufficient to ship them as non-executables. In Windows, all scripts are executable by default. 
Developers can rename the scripts if they wish to enable the hooks. \n\nDec \n2008\n\nGit 1.6.1 is released. In this release, `git rebase` command when used without the `--no-verify` option calls the `pre-rebase` hook. This hook can be used to prevent the rebasing of a branch. \n\nMay \n2009\n\nGit 1.6.3 is released. In this release, `git clone` command when used without the `--no-checkout` option calls the `post-checkout` hook. This hook is normally called after an update to the worktree, such as with the `git checkout` command. \n\nDec \n2014\n\nA **critical vulnerability** is discovered pertaining to Git hooks. Usually it's not possible to create a top-level folder named `.git` in a repository. However, in case-sensitive filesystems it's possible to create a folder named `.GIT`, `.Git`, etc. In Windows, which is case-insensitive, such a folder overwrites the default `.git` folder. By including malicious hooks, a hacker could execute arbitrary code on the client's machine. This threat is greatest for developers using third party or untrusted repositories. This vulnerability is patched within a week. \n\nApr \n2015\n\nGit 2.4.0 is released. This release adds a new server-side hook named `push-to-checkout`. By default, pushes to checked out branch would fail. With this hook, we can customize the behaviour by merging with or overwriting server-side changes. \n\nApr \n2016\n\n**GitHub Enterprise Server 2.6** is released. This includes the `pre-receive` server-side hook that can be used to enforce push policies and optimize workflows. However, GitHub has included server-side hooks since 2013, it's first hook tasked with warning users when a repository is renamed. \n\nJun \n2016\n\nGit 2.9.0 is released. The hooks directory can now be configured via the `core.hooksPath` configuration variable. \n\nNov \n2019\n\nGit 2.24.0 is released. This release adds a new hook named `pre-merge-commit`. This hook is called after a successful merge but before obtaining the commit message. It can be bypassed with `--no-verify` option to the merge command. \n\nFeb \n2022\n\nOn GitHub, it's noticed that pushes to repositories can take as long as 880ms on average. The implementation is in Ruby. A lot of time is spent loading Ruby dependencies. By rewriting the hook as a Go service, average processing time is brought down to 10ms. This improvement also becomes part of GitHub Enterprise Server 3.4.","meta":{"title":"Git Hooks","href":"git-hooks"}} {"text":"# Apache Log4j\n\n## Summary\n\n\nApache Log4j is a logging framework for Java. A Java application can use Log4j to log important events during the course of its lifetime. The logs are later used for tracing application behaviour or performing audits. Unlike debugging an application using debuggers, log generation requires no user intervention apart from the initial calls to the logger API. Logs generated from production deployments give lot of useful context. Logs are a permanent record of what happened unlike debugging sessions that are transient. \n\nLog4j is an open-source project managed by the Apache Software Foundation. Log4j 1.x is incompatible with Log4j 2. The latter has better design and richer features. \n\nGUI-based tools are available to view and analyze Log4j logs. Log4j has inspired logging solutions in other languages.\n\n## Discussion\n\n### What's the architecture of Log4j?\n\n`Logger` class is the starting point. Applications use the `LogManager` to return a logger object with a specific `LoggingContext`. 
If such a logger doesn't exist, it's created. \n\nEvery logger has a `LoggerConfig`. These config objects are initialized from configuration files. LoggerConfig has a class hierarchy, that is, one config class can inherit from another. \n\nA LoggerConfig object is associated with one or more `Appender` objects that actually deliver log events. For example, a logger can log to both the console and to a file. Among the appender types are ConsoleAppender, FileAppender, RollingFileAppender, AsyncAppender, SocketAppender, SMTPAppender, and many more. Because config objects have a hierarchy, events will be sent to appenders of parent classes as well. \n\nEach appender can be configured to log messages in a specific format. Such formatting is the job of the `Layout` class. \n\nA LoggerConfig object is associated with a log level. Levels enable automatic filtering of events. In addition, `Filter` objects can be applied to LoggerConfig objects and Appender objects. Filters give more flexibility at runtime. Only those events that pass the filters are sent to appenders. \n\n\n### What are the essential features of Log4j?\n\nLog4j separates its API from implementations. Hence, applications can swap implementations while the code consistently calls the Log4j API. \n\nAn application typically has many packages, classes, and methods. Log4j allows us to configure logging for each component separately. However, to save work on managing many configurations, configurations can be inherited and hence reused. Thus, `X.Y.Z` inherits and overrides some configuration from `X.Y`. \n\nLog4j has log levels to control the scope of logging. Developers can also define custom levels. Likewise, custom Message types, Layouts, Filters and Lookups can be created. Because of a plugin architecture, Log4j is easily extensible. \n\nLog4j has built-in support for JMX. Many parts of the Log4j system can be locally or remotely monitored and controlled. \n\nFor better performance, Log4j supports asynchronous logging. It does garbage-free or low-garbage logging so that pressure on the garbage collector is relieved. For better concurrency, it locks resources at the lowest possible level. Log4j won't lose events when appenders are reconfigured. Hence, it can also be used for audit logging. \n\n\n### What log levels are available in Log4j?\n\nA log level is assigned to a `LoggerConfig` object. If not, the object inherits the level from one of its ancestors along the class hierarchy. Log4j defines the following built-in levels: ALL (Integer.MAX\\_VALUE), TRACE (600), DEBUG (500), INFO (400), WARN (300), ERROR (200), FATAL (100), and OFF (0). Applications can also define custom levels. Levels OFF and ALL are not generally used in API calls. \n\nEvery event is logged with an indication of its level. For example, if the application sees an unexpected scenario it may log it as WARN; log it as ERROR if the transaction can't be completed; log it as FATAL if the application can't proceed further without a reboot or reinitialization.\n\nWe can control how much to log without changing the code. For example, with log level ERROR only errors or more serious events will be logged. So the call `logger.error()` will be logged but not `logger.info()`. During debugging, log level can be changed to DEBUG to log more messages.
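To make the level mechanism concrete, here's a minimal sketch of calling the Log4j 2 API at different levels; the class name and messages are invented for illustration and assume the standard `log4j-api`/`log4j-core` dependencies.

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class Foo {
    // Naming the logger after the class lets per-component configuration
    // (for example, a level set on com.example.Foo) apply to it.
    private static final Logger logger = LogManager.getLogger(Foo.class);

    public static void main(String[] args) {
        // If the effective level for this logger is ERROR, only the last
        // call reaches the appenders; debug() and info() are filtered out.
        logger.debug("Detailed diagnostic message");
        logger.info("Routine progress message");
        logger.error("Transaction could not be completed");
    }
}
```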
\n\nLog level can be dynamically changed at runtime: `Configurator.setLevel(\"com.example.Foo\", Level.DEBUG);` \n\n\n### How can I format Log4j messages?\n\nLog4j supports a number of logging formats, what are formally called as **Layouts**: CSV, GELF, JSON (pretty or compact), JSON Template, Pattern, RFC5424 (enhanced Syslog), Serialized (deprecated), Syslog (BSD Syslog and used in Log4j 1.2), XML (pretty or compact), and YAML. \n\nIf location information is logged (filename, line number) or location-specific patterns are used (`%C` or `%class`, `%F` or `%file`, `%l` or `%location`, `%L` or `%line`, `%M` or `%method`), there's a performance impact, especially for asynchronous loggers. Default configuration excludes location information. \n\nFor the pattern layout, message formatting relies on the *conversion pattern*, which is similar to `printf` formatting in C. For example, the conversion pattern `\"[%-5p] %d{yyyy/MM/dd HH:mm:ss,SSS} %t %c: %m%n\"` logs right padded five-character width logging level (`%-5p`), datetime in a specific format (`%d{pattern}`), thread name (`%t`), logger name (`%c`), log message (`%m`) and newline (`%n`). The \"-5\" in \"%-5p\" is a format modifier. Modifier \"20.-30\" implies left pad to 20 characters or truncate from the end to 30 characters. \n\n\n### What are Log4j API essentials that a developer should know?\n\nThe main package defining the Log4j API is `org.apache.logging.log4j`. Public message types are in package `org.apache.logging.log4j.message`. An implementation is available in package `org.apache.logging.log4j.simple`. \n\nEssentially, if you're developing a library you should use the API, not the concrete implementation. The application that uses your library will determine whether to use `org.apache.logging.log4j.simple` or another implementation of the Log4j API. \n\nLog4j API has four main interfaces: \n\n + `LogBuilder`: To construct log events before logging them.\n + `Logger`: Main interface of the `log4j` package. Class `org.apache.logging.log4j.simple.SimpleLogger` implements this interface.\n + `Marker`: Adds filterable information to log messages. Markers are hierarchical. For example, \"Error\" marker can have children \"SystemError\" and \"ApplicationError\".\n + `ThreadContext.ContextStack`: ThreadContext Stack interface.Among the classes are `EventLogger`, `Level`, `LogManager`, `MarkerManager`, `ThreadContext`, and more. `LogManager` is the anchor point for the Log4j system. Methods of this class include `getContext()`, `getLogger()`, `getRootLogger()`, and more. \n\n`LoggingException` is thrown when logging error occurs. \n\nFor detailed information, consult the complete Log4j API documentation.\n\n\n### What's SLF4J and is it better than Log4j 2?\n\nSLF4J is a \"simple logging facade\", which is really a logging interface that applications can call. SLF4J forwards API calls to a concrete implementation used by the application. Such an implementation could be `java.util.logging`, Log4j, Logback, etc. A logging implementation can conform to SLF4J API and thereby enable developers to easily switch implementations. We may say that SLF4J does for logging what ORMs have done for database interfacing.\n\nBoth Log4j 2 and SLF4J separate the API from the implementation. This gives developers flexibility. To use Log4j, application code can call SLF4J API that are then routed to Log4j's specific implementation. An alternative is to make Log4j calls but use the *log4j-to-slf4j* adapter to route calls to any SLF4J-compliant library. 
Hence, calling Log4j API directly in code doesn't tie you down to only Log4j. \n\nIn fact, Log4j has some advantages over SLF4J. SLF4J can log only strings but Log4j can also log objects (implementations do the necessary conversions). Log4j supports Java 8 lambda expressions. Log4j has better support for garbage-free logging. \n\n\n### What are some best practices when using Log4j?\n\nLogging gives us insights into application use and helps in troubleshooting. Too much logging can affect application's performance. Too little logging can be ineffective. Logs should include descriptive messages along with sufficient context to make them useful. \n\nDon't log sensitive information such as passwords, credit card numbers or access tokens. \n\nUse suitable log levels for your messages and use them consistently. \n\nWhen an exception in logged, include the exception object for more context, such as `try {...} catch (IOException ioe) { LOGGER.error(\"Error while executing main thread\", ioe); }`. This will include the stack trace, which can be very useful. \n\nLogging in JSON can be better for storing logs in a log management system. JSON is also better for multiline messages such as stack traces. An appender can be configured for logging in JSON. \n\nWhere performance is critical, consider logging asynchronously. Logging directly over a network may not be ideal since network errors can cause some log entries to be lost. Inside containers, log files are not permanent. Hence, log to the standard output or over a network to a centralized system. \n\n\n### What other projects are inspired from or associated with Log4j?\n\nApache Log4j comes under an umbrella project named *The Apache Logging Services Project*. Apart from Log4j, this includes Apache Log4j Kotlin and Apache Log4j Scala. Both these are APIs in their respective languages for Log4j. There's also Apache log4cxx and Apache Log4Net that are ports of Log4j for C++ and .NET respectively. Apache Log4j Audit is meant for audit logging. \n\nOther ports include log4c (C); log4js, JSNLog (JavaScript); log4perl (Perl); Apache log4php (PHP); and log4r (Ruby). Log4j can also be used in languages Clojure and Groovy. \n\nApache Chainsaw is a GUI-based log viewer. It simplifies the analysis of logs via colour coding, tooltips, filtering, search function, navigation, etc. In fact, there are many other log viewers out there: OtrosLogViewer, Lilith, Eclipse LogViewer Version, Splunk, LogSaw, LogMX, and more. \n\nComplete log management is possible with SolarWinds Log Analyzer, Sematext Logs, Datadog, Fluentd, LogDNA, Splunk, Elastic Stack, and more. \n\n\n### As a beginner, how do I get started with Log4j?\n\nThe official Apache Log4j 2 project site is an ideal place to get started. It has links to the user's guide, articles, tutorials and FAQ. It also hosts the project's Javadoc API documentation.\n\nInstalling and configuring Log4j 2 is also described. You need to update project dependencies in file `pom.xml` used by Maven builder. Log4j 2 configuration can be in XML, JSON, YAML or properties formats. This file would include the appenders, their pattern layouts, the loggers, and the mapping between loggers and appenders. \n\nDocumentation informs how to configure Log4j for other builders including Maven, Ivy, Gradle, and SBT. Project dependencies are also listed. \n\nMigration guides are available for those who wish to migrate from Log4j 1.x to 2.x. 
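To make those first steps concrete, here is a minimal sketch of an application class logging through Log4j 2. It assumes the `log4j-api` and `log4j-core` artifacts are declared as dependencies and that a configuration file (such as `log4j2.xml`) with at least one appender is on the classpath; the class, method and message names are made up for the example.

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class InventoryService {
    private static final Logger logger = LogManager.getLogger(InventoryService.class);

    public void reserve(String sku, int quantity) {
        // Parameterized messages ({} placeholders) avoid string concatenation
        // when the corresponding level is disabled.
        logger.info("Reserving {} unit(s) of {}", quantity, sku);
        if (quantity <= 0) {
            logger.warn("Ignoring non-positive quantity {} for {}", quantity, sku);
        }
    }

    public static void main(String[] args) {
        new InventoryService().reserve("SKU-42", 3);
    }
}
```

Where the events end up (console, file, JSON layout, etc.) is decided entirely by the configuration file, not by this code.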
\n\n## Milestones\n\n1991\n\nJames Gosling and others start work on a new language for interactive television. They name it *Oak*. In 1995, this is renamed to *Java*. The first public release of the language happens in 1996. \n\n1996\n\nThe E.U. SEMPER project creates its own tracing API for Java, which later evolves to *Log4j*. In these early years of Java, there's no universal logging framework or library. Every application includes its own logging or tracing API. \n\nJan \n1999\n\nThe Apache Software Foundation adopts Log4j under its *The Apache Logging Services Project*. This project \"creates and maintains open-source software related to the logging of application behavior.\" The first versions of Log4j appear in October. \n\nJan \n2001\n\n**Apache Log4j v1.0** is released. Package hierarchy starts at `org.apache.log4j`. In subsequent years, Log4j proves to be the first of its kind to gain traction in the Java world for logging. It's also credited for introducing hierarchical logging, a concept that other loggers go on to adopt. \n\nFeb \n2002\n\nSun Microsystems releases Java 1.4 that includes `java.util.logging` (available since mid-2001 in its beta release). Its API appears to be similar to Log4j. Open source implementations of this API for Java 1.2 and 1.3 become available. \n\nMay \n2002\n\n**Apache Log4j v1.2** is released. This release is maintained till May 2012 when v1.2.17 is released. In August 2015, Apache announces **end of life for Log4j 1.x**. \n\nJul \n2012\n\nWork starts on **Apache Log4j 2**. It's a complete rewrite of Log4j and it's incompatible with Log4j 1.x. It's inspired by Log4j 1.x and `java.util.logging`. Log4j v2.0 is released in July 2014. Package hierarchy changes in Log4j 2 to `org.apache.logging.log4j`. \n\nSep \n2017\n\nJava 9 is released. This **breaks compatibility for Log4j 1.x** due to a feature called Mapped Diagnostic Context (MDC). MDC's equivalent in Log4j 2 is Thread Context Map. Applications using Log4j 1.x but without MDC may still work. But given Log4j 2's many features, developers will find it worthwhile to migrate to it. \n\nNov \n2021\n\nA member of the Alibaba Cloud Security Team finds a vulnerability in Log4j 2. The team informs Apache. Called **Log4Shell**, the vulnerability is made public in December. Given the widespread use of Java and hence Log4j 2, this is considered a serious vulnerability affecting cloud servers, enterprise applications, IoT devices, routers, firewalls, printers, etc. Apache fixes the issue in releases 2.15.0, 2.16.0 and 2.17.0 (Dec 2021). Meanwhile, websites and businesses list affected software and mitigation strategies.","meta":{"title":"Apache Log4j","href":"apache-log4j"}} {"text":"# Local Area Network\n\n## Summary\n\n\nLocal Area Network (LAN) is a computer network used for connecting computers within a limited geographical area such as offices, schools and buildings. A set of interconnected LANs dispersed over a wider area is called a Wide Area Network (WAN).\n\nA LAN consists of cables, switches, routers, and bridges that together enable devices to connect and communicate with one another. Devices connected in a LAN have shared access to the interconnecting medium. When the medium is made of physical wires or cables, we call it a **Wired LAN**. If the medium is radio waves, we call it a **Wireless LAN (WLAN)**.\n\nThe most common technology used for wired LANs is Ethernet.
For wireless LANs, IEEE 802.11 standards are used and their commercialized technology is called Wi-Fi.\n\n## Discussion\n\n### What are the characteristics of a LAN?\n\nLAN is a low-cost and effective network type capable of connecting multiple devices on a single transmission medium so that every device in the network can communicate with one another. \n\nLAN coverage is over a limited extent, from meters to a few kilometres. The quality of data transmission in LAN is comparatively higher than other network types due to the shorter coverage area. Switches or routers are used to construct a LAN network most of the time. This ensures higher data transmission speeds and reliability. \n\nSetting up a LAN network can be done at low costs. If there's a need for expansion, it can be done quickly. \n\n\n### What are the different LAN types?\n\nBased on the communication mode, LAN can be generally classified into two types: \n\n + **Client-Server LAN**: In this mode, only a single central server is used to connect with multiple clients through wired or wireless medium. If a client device needs to send a packet to another client, communication happens via the central server.\n + **Peer-to-Peer LAN**: In this mode, there's no need for a central server. Clients can communicate with each other directly. Sending a file is easier and faster, thus enabling higher speeds.\n\n### What's the difference between Wireless LAN and Ethernet LAN?\n\nWireless LAN as the name suggests uses radio waves to transmit data whereas Ethernet uses wires to transmit data. Data transfer or communication through Ethernet LAN is more stable and faster, though both have their benefits. \n\nThe particular Radio Frequency (RF) spectrum used in WLAN determines the coverage. Signal propagation and hence radio coverage are affected by walls, metal objects and even people. Coverage can be increased with wireless repeaters, bridges and access points. Data rates are steadily increasing for WLAN with the recent Wi-Fi 6 release (Feb 2021) that supports a maximum throughput speed of 9.6Gbps across multiple channels with 75% lower latency. \n\nEthernet LAN primarily uses electrical signals to transmit data through cables. It has less interference than WLAN. Unshielded Twisted Pair (UTP) cables are commonly used to establish Ethernet connections. 10 Gigabit Ethernet (10GbE) is the latest standard that can offer a maximum throughput speed of 10Gbps. \n\n\n### Could you describe some limitations of LANs?\n\nLAN networks are only suitable for transmission over a limited distance. The cost of installation rises as the coverage area expands.\n\nLAN requires security to avoid threats from malware and viruses. Since computers are interconnected, any security breach can impact the entire network. Apart from securing devices physically, LAN security can be compromised by misconfigurations including weak passwords and improper allocation of devices. Obviously WLANs are easier to attack because physical access to network devices is not required. However, even in wired LANs, physical access to LAN sockets in hallways and reception areas can pose risks. It becomes important to keep the entire network under a policy guideline to avoid possible threats.\n\n\n### What type of devices are commonly found on a LAN?\n\nA LAN has many types of devices: \n\n + **Host**: End device (client or server) that can send and receive data.\n + **Repeater**: Operates at the physical layer. 
Receives data on one port, boosts the signal and sends it out on another port.\n + **Hub**: Essentially a multi-port repeater. All devices connected to the hub belong to the same LAN segment. A packet coming on one port is forwarded to all other ports. More devices connected to a single hub will result in packet collisions and drop in throughput.\n + **Bridge**: Connects two or more LANs in the network. Isolates its ports to accommodate more devices. Bridges forward frames between different LAN segments by looking at MAC addresses. It works at physical and data link layers.\n + **Switch**: Similar to a bridge but more sophisticated. Whereas a bridge typically implements functionality in hardware, a switch does it in software.\n + **Router**: A layer 3 or network layer device. It routes packets by looking at IP addresses. Essential for connecting to the Internet.\n + **Gateway**: Interconnects to another network type, either LAN or WAN, by translating between their protocols.\n\n### How does Virtual LAN differ from a normal LAN?\n\nVirtual LAN (VLAN) increases the capabilities of a normal LAN. It's a logical way of separating traffic between multiple logical or virtual networks that exists physically on the same network. VLANs are mostly used by organizations with a large number of devices.\n\nIn a normal LAN, a broadcast message reaches all nodes whether they need it or not. In a VLAN, only a subset of nodes that belong to that VLAN (aka broadcast domain) will receive the broadcasted packet. Thus, VLANs mitigate packet flooding and network congestion. \n\nVLANs minimize security risks by reducing the number of hosts that receive a packet. Moreover, hosts that hold sensitive information can be on a separate VLAN. VLANs enable flexible network configuration, such as, grouping hosts by department rather than physical location. VLANs can be easily reconfigured just by changing port configurations. \n\nImplementation of VLAN will be cost-effective as it's built using switches. Routers are needed only for sending data outside the LAN. \n\n\n### Could you describe the different types of LAN topologies?\n\nTopology in networking refers to the way of devices are interconnected. Some major LAN topologies are: \n\n + **Bus Topology**: Also known as the backbone network. Simplest and widely used network design that needs only a single cable for connectivity. A device known as a terminator is placed at both ends to avoid signals reflecting back to each other.\n + **Star Topology**: A central device acts as a hub and connects to all other devices. Data on the network passes through devices in between before reaching the destination.\n + **Ring Topology**: Nodes are arranged in a ring topology. Data passes through each node before reaching the destination. Traffic can be bidirectional and the chances of packet collisions are less. Broken network links affect bi-directional communication.\n + **Mesh Topology**: Nodes are connected to each other in point-to-point configuration. Each node has multiple paths to reach another node. If one path fails, an alternate path is used, making this topology more fault-tolerant.\n\n### What are the different data transmission methods used in LANs?\n\nIn communications, data can be transmitted in a few ways: \n\n + **Unicast**: It's a one-to-one data transmission method that typically uses a connection-oriented protocol such as TCP (Transmission Control Protocol). 
A web server that transmits or streams data to a single user via a unique connection is an example of unicast.\n + **Multicast**: It's a one-to-many data transmission method that uses a connectionless protocol such as UDP (User Datagram Protocol). A simple example is an email addressed to a specific group of recipients.\n + **Broadcast**: It's a one-to-all data transmission method. Data originates from a single point and is sent to all users simultaneously. Cable TV is an example of a broadcast network.In LAN, unicasting is predominant with many applications using it (FTP, HTTP, Telnet). Multicasting is implicit in the way a hub operates. VLAN is also multicasting in some sense. Explicit multicasting is possible via the IGMP protocol. Broadcasting is usually employed when a node wants to discover something, such as in ARP and DHCP protocols. \n\n\n### How does a LAN interface with a WAN?\n\nRouters connect LAN devices to WAN and then the wider internet. Most wireless routers come with two types of physical ports labelled as LAN and WAN. A WAN port is mostly connected to a modem that makes the router capable of accessing the internet. LAN port can be used to connect with devices that don't have Wi-Fi support. For secure communications, the LAN and WAN ports are internally separated by a firewall. Devices connected via WAN ports can't access the LAN devices unless port forwarding is configured. \n\nOne of the essential operations done by a router is called **Network Address Translation (NAT)**. NAT is designed to conserve IP addresses. Rather than give every device in the world a unique IP address, LAN devices have unregistered IP addresses that are valid only within that LAN. Router acts as an intermediary and presents to the internet a single public IP address. Router translates between internal private addresses and the public address. \n\n\n### Could you describe some recent advancements in LAN technology?\n\nLAN has seen significant development since its inception in the 1970s and has grown exponentially with changing demands and needs of users. Availability of next-generation WLAN cloud services, 5G, IoT, smart buildings, and cabling trends are transforming the industry. \n\n**Single-Pair Ethernet (SPE)** has become the trend for being faster and cost-effective. It provides transmission speeds up to 1Gbps with a single pair of twisted copper wires rather than the classic four-pair cables or RJ45 connectors. \n\n**All over IP** is an approach to building automation with IP alone. It's an integrated approach that brings together Ethernet/IP cabling, Power over Ethernet (PoE), and WLAN. Since all devices understand IP, there's no need for protocol translation in between. Such standardization improves reliability and availability. \n\nAdoption of Passive Optical LAN (POLAN) cable has changed the fundamental LAN architecture by replacing copper cables with single-mode fiber optic cables, providing higher bandwidth and numerous other advantages. \n\n## Milestones\n\n1972\n\nDr. Robert M. \"Bob\" Metcalfe and his colleagues at Xerox PARC develop the first experimental Ethernet network called **Alto Aloha Network** for connecting with Altos, a personal workstation. \n\nMay \n1973\n\nMetcalfe renames Alto Aloha Network to **Ethernet**. He clarifies that this network can support any device. In a memo titled *Alto Ethernet* he gives a schematic representation in which personal computers can connect to a printer via a coaxial cable. 
\n\n1974\n\nCambridge University initiates the **Cambridge RING Project** for communication between devices in the *Laboratory* within the campus. Information flows on twisted pair cables between the printers and computers. \n\nJul \n1976\n\nDavid Boggs and Bob Metcalfe publish a paper titled *Ethernet: Distributed Packet Switching for Local Computer Networks*. The same year Ethernet becomes an open networking standard funded by three companies: Xerox, DEC and Intel. \n\nDec \n1977\n\nThe **first commercial LAN** is installed by Datapoint Corp. at Chase Manhattan Bank in New York. It uses the Attached Resource Computer (ARC) network with a token-passing scheme. \n\nDec \n1984\n\nThe Institute of Electrical and Electronics Engineers (IEEE) publishes the standard **IEEE 802.3** that specifies the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) access method, which is essential for the working of a LAN. \n\n1991\n\nKalpana Corporation launches a new device called the **LAN switch**. Each port can run at full capacity simultaneously, something not possible with hubs. This is later called an Ethernet switch and it provides high-throughput communications. \n\n1995\n\n**Fast Ethernet (FE)** is introduced with transmission speed up to 100Mbps to meet the increasing needs of users. It's specified by the IEEE 802.3u standard. \n\n1997\n\nThe first version of the **Wireless Local Area Network (WLAN)** standard IEEE 802.11 is published with two raw data rates of 1 or 2Mbps. \n\nJun \n2003\n\n**Power over Ethernet (PoE)** is defined by the IEEE 802.3af and 802.3at standards. It can pass electric power along with data on twisted pair Ethernet cable. PoE can provide 48 volts over a 4-wire or 8-wire twisted pair. It's useful at locations without a power outlet.\n\nDec \n2013\n\nIn WLAN, the standard IEEE 802.11ac is defined. It operates at the 5GHz band with support for MU-MIMO (Multi-User MIMO). This allows access points to communicate with multiple devices simultaneously. \n\nDec \n2017\n\nThe **400GbE Ethernet** interface is approved by IEEE under the IEEE 802.3bs standard. It can transmit data up to 400Gbps, which is four times faster than the 100Gbps standard while also consuming less power. \n\nFeb \n2021\n\nThe IEEE 802.11ax standard or **Wi-Fi 6** is launched. It provides more speed and greater stability on a higher bandwidth channel than its predecessors. MU-MIMO in Wi-Fi 6 can work in both 2.4GHz and 5GHz bands.","meta":{"title":"Local Area Network","href":"local-area-network"}} {"text":"# IoT Operating Systems\n\n## Summary\n\n\nThe use of operating systems for IoT hardware is often categorized into two groups: end devices and gateways. End devices or nodes are often a lot smaller in capability compared to gateways. As more and more processing is pushed to the network edges (to gateways and nodes), traditional devices that used to run without an OS are embracing new OS implementations customized for IoT.\n\nWhile IoT OS are an evolution of embedded OS, IoT brings its own additional set of constraints that need to be addressed. A mix of open source and closed source IoT OS exist in the market. Since IoT is varied in terms of applications, hardware and connectivity, we expect the market will sustain multiple OS rather than just a couple of them.\n\n## Discussion\n\n### Do IoT end devices require an OS in the first place?\n\nWhile having an OS is not mandatory, devices are growing in complexity.
This complexity is due to nodes having more sensors, more data processing and increased connectivity to send that data out. Some devices have rich user interfaces that might include graphical displays, face recognition and voice recognition. End devices that were often based on 8-bit or 16-bit MCUs are moving to 32-bit architectures as costs drop and complexity increases. Addressing these changes without an OS is not only a challenge but also inefficient. The use of an OS, particularly an RTOS, simplifies the job of application programmers and system integrators because many of the low-level challenges are taken care of by the OS. \n\nAs a rule of thumb, a system that consumes less than 16KB of RAM and Flash/ROM does not require an OS. Such systems most often run on 8-bit or 16-bit MCUs. With such systems, we can get away with a single event loop that polls and processes events as they occur. But if more complexity is added to the system, the response times will be limited by the worst-case processing time of the entire loop. \n\n\n### What are the parameters for selecting a suitable IoT OS?\n\nThe following parameters may be considered for selecting an IoT OS:\n\n + **Footprint**: Since devices are constrained, we expect the OS to have low memory, power and processing requirements. The overhead due to the OS should be minimal.\n + **Scalability**: OS must be scalable for any type of device. This means developers and integrators need to be familiar with only one OS for both nodes and gateways.\n + **Portability**: OS isolates applications from the specifics of the hardware. Usually, OS is ported to different hardware platforms and interfaces to the board support package (BSP) in a standard way, such as using POSIX calls.\n + **Modularity**: OS has a kernel core that's mandatory. All other functionality can be included as add-ons if so required by the application.\n + **Connectivity**: OS supports different connectivity protocols, such as Ethernet, Wi-Fi, BLE, IEEE 802.15.4, and more.\n + **Security**: OS has add-ons that bring security to the device by way of secure boot, SSL support, and components and drivers for encryption.\n + **Reliability**: This is essential for mission-critical systems. Often devices are at remote locations and have to work for years without failure. Reliability also implies the OS should fulfil certifications for certain applications.\n\n### What parameters are important for selecting a suitable IoT OS?\n\nThe short answer is that there's no universal subset of important parameters. While there are many parameters for selection, some of them may be more important than others depending on the hardware type and application. For example, a small memory footprint may be important for end devices but not so for gateways. Compliance with standards may be important for an industrial application but not so for a hobby project. Someone looking at only ARM-based hardware might choose ARM mbed at the expense of platform portability. A device that has access to a power supply might not expect power optimization from its OS; whereas a battery-powered device might expect the OS to do power management so that the battery can last for 10 years. \n\n\n### What certifications might an IoT OS require?\n\nThis is dependent on the vertical.
The following is a non-exhaustive list: \n\n + DO-178B for avionics systems\n + IEC 61508 for industrial control systems\n + ISO 62304 for medical devices\n + SIL3/SIL4 IEC for transportation and nuclear systems\n\n### What are the open source IoT OS?\n\nThe following is a non-exhaustive list: TinyOS, RIOT, Contiki, Mantis OS, Nano RK, LiteOS, FreeRTOS, Apache Mynewt, Zephyr OS, Ubuntu Core 16 (Snappy), ARM mbed, Yocto, Raspbian.\n\nSome of these have come from academic institutions. TinyOS and Contiki are among the oldest. RIOT is more recent and has an active community of developers. FreeRTOS is among the popular ones. LiteOS is from Huawei. ARM mbed is single-threaded, event-driven and modular. It has good connectivity and a low footprint. In Zephyr OS the kernel is statically compiled, which makes it safe from compile-time attacks. With Ubuntu Core, rather than having a monolithic build, the kernel, OS and apps are expected to be packaged and delivered as snaps. Android Things, previously named Brillo, is Google's offering. Yocto is not exactly an embedded Linux distribution. It's a platform to create a customized distribution for your particular application. \n\n\n### What are the closed or commercial IoT OS?\n\nThe following is a non-exhaustive list: Android Things, Windows 10 IoT, WindRiver VxWorks, Micrium µC/OS, Micro Digital SMX RTOS, MicroEJ OS, Express Logic ThreadX, TI RTOS, Freescale MQX, Mentor Graphics Nucleus RTOS, Green Hills Integrity, Particle.\n\nWindows 10 IoT comes in three flavours: IoT Enterprise, IoT Mobile Enterprise and Core. TI RTOS and Freescale MQX target the respective chipsets. Where reliability and safety are important, some of these commercial systems are preferred, particularly in aerospace, automotive, healthcare and industrial systems. Windows 10 IoT and Particle are examples that enable easy integration with cloud services.\n\n\n### What are the popular IoT OS out there?\n\nFrom the IoT Developer Survey 2018 conducted by the Eclipse Foundation, it was found that 71.8% of the respondents like or use Linux-based OS. Within Linux, Raspbian takes the lead. Windows and FreeRTOS follow at 22.9% and 20.4%. However, the sample size in this survey was small. \n\nIt's interesting that 19.9% of developers prefer bare-metal programming. **Bare-metal** is a term that's used when no OS is used. Bare-metal is preferred for constrained devices. It's a cheaper option but development and support costs may increase. If the device's processing, memory and power requirements allow it, Embedded Linux is preferred. It will have a shorter time-to-market, better security, a wider support base and well-tested connectivity solutions. \n\n\n### Do IoT OS need to be real-time OS?\n\nAn RTOS will be required where data has to be processed within time constraints, often without buffering delays. Where multiple threads are required that have to meet deadlines and share resources effectively, an RTOS will be needed. \n\nThere will also be a class of devices that may not have strict real-time constraints. For some, a richer user interface may be more important. Some may buffer data and transmit them occasionally to save power. Such devices need not have an RTOS and may adopt a simpler OS. However, a survey from 2015 has shown that many designers who choose an OS mention real time as one of the top reasons. \n\nDesigners may choose to use multiple processors where it makes sense.
For example, an 8-bit MCU may be used to interface to sensors and actuators; a 32-bit processor will run the RTOS for connectivity and multithreading. \n\n\n### What are typical memory requirements for IoT OS?\n\nSensor nodes will have less than 50KB of RAM and less than 250KB of ROM. Contiki requires only 2KB of RAM and 40KB of ROM. Similar numbers are quoted for Mantis and Nano RK. Zephyr requires only 8KB of memory. Apache Mynewt requires 8KB of RAM and 64 KB of ROM. Its kernel takes up only 6KB. Communication protocols typically take up 50-100KB of ROM. \n\nIn all cases, the numbers will increase when more components are added as required by the application. For example, one experiment with SMX RTOS showed 40KB/70KB of RAM/ROM for a node; the same OS for a gateway came up to 200KB/300KB. \n\nDevices based on Ubuntu Core, Windows 10 IoT, Android Things and MicroEJ are likely to require memories in the order of gigabytes. Gateways will typically have a footprint in the order of gigabytes. This is because of the extra functionality that they are required to provide: device management, security, protocol translation, retransmissions, data aggregation, data buffering, and so on. \n\n\n### What design techniques are used by IoT OS?\n\nTinyOS and ARM mbed use a monolithic architecture while RIOT and FreeRTOS use a microkernel architecture. ARM mbed uses a single thread and adopts an event-driven programming model. Contiki uses protothreads. RIOT, FreeRTOS and µC/OS use a multithreading model. In static systems (TinyOS, Nano RK), all resources are allocated at compile time. Dynamic systems are more flexible but also more complex. File systems may not be required for the simplest sensor nodes but some OS support single-level file systems. With respect to power optimization, this is done for the processor as well as its peripherals. \n\n## Milestones\n\n1999\n\nThe first implementation of TinyOS happens within UC Berkeley. It is released to the public in 2000. \n\n2003\n\nAdam Dunkels releases Contiki. Dunkels goes on to found Thingsquare in 2012.\n\n2013\n\nRIOT is released to the public. The origins of RIOT lie in FeuerWare (2008) and µkleos (2010). \n\nJun \n2016\n\nIrish firm Cesanta releases the 1.0-rc1 version of an open source OS named **Mongoose**. Version 2.3 is released in June 2018. Mongoose makes it easy to connect your IoT devices to the cloud and do over-the-air (OTA) updates. Microcontrollers supported include STM32F4, STM32F7, ESP32, ESP8266, CC3220, and CC3200. \n\nOct \n2016\n\nThe Ubuntu Core 16 beta version, based on snaps, is released. \n\nDec \n2016\n\nGoogle rebrands Brillo as Android Things. Brillo was announced earlier at Google I/O 2015. \n\nNov \n2017\n\nAmazon announces Amazon FreeRTOS. Based on the FreeRTOS kernel, this makes it easier for developers to connect their IoT devices locally or to the cloud, to make their devices more secure and do over-the-air updates.","meta":{"title":"IoT Operating Systems","href":"iot-operating-systems"}} {"text":"# Computer Vision\n\n## Summary\n\n\nComputer Vision is about enabling computers to see, perceive and understand the world around them. This is achieved through a combination of hardware and software. Computers are trained using lots of images/videos and algorithms/models are built. An understanding of human vision also informs the design of these algorithms/models. In fact, computer vision is a complex interdisciplinary field at the intersection of engineering, computer science, mathematics, biology, psychology, physics, and more.
\n\nSince the early 2010s, neural network approaches have greatly advanced computer vision. But given the sophistication of human vision, much more needs to done.\n\nComputer vision is an important part of Artificial Intelligence. This is because we \"see\" less with our eyes and more with our mind. It's also because it spans multiple disciplines.\n\n## Discussion\n\n### How does Computer Vision compare with human vision?\n\nComputers first see every image as a matrix of numbers. It's the job of algorithms to transform these low-level numbers into lines, shapes and objects. This isn't so different from human vision where the retina triggers signals that are then processed by the visual cortex in the brain, leading to perception. \n\nCV uses Convolutional Neural Network (CNN). This is a model inspired by how the human visual cortex works, processing visual sensory inputs via a hierarchy of layers of neurons. While more work is needed to achieve the accuracy of human vision, CNNs have brought us the best results so far. CNNs lead to Deep CNNs where the idea is to match 2D templates rather than construct 3D models. This again is inspired by our own vision system. \n\nAmong the obvious differences are that CV can see 360 degrees; that CV is not limited to just visible light; that CV is not affected by fatigue or physiology; CV sees uniformly across the field of view but our peripheral vision is better in low-light conditions; CV has its own biases but they're free from biases and optical illusions that affect humans. \n\n\n### Isn't Computer Vision similar to image processing?\n\nImage processing comes from the disciplines of Electrical Engineering and Signal Processing whereas computer vision is from Computer Science and Artificial Intelligence. Image processing takes in an image, enhances the image in some way, and outputs an image. Computer vision is more about image analysis with the goal of extracting features, segments and objects from the image. \n\nAdjusting the contrast in an image or sharpening the edges via a digital filter are image processing tasks. Adding colour to a monochrome image, detecting faces or describing the image are computer vision tasks. It's common to combine the two. For example, an image is first enhanced and then given to computer vision. Computer vision can detect faces or eyes, then image processing improves facial skin tone or removes red eye. \n\nLet's note that Machine Learning can be used for both CV and image processing, although it's more commonly used for CV. \n\n\n### How is Computer Vision (CV) related to Machine Vision (MV)?\n\nMachine vision is more an engineering approach to enable machines to see. It's about image sensors (cameras), image acquisition, and image processing. For example, it's used on production lines to detect manufacturing defects or ensure that products are labelled correctly. Machine vision is commonly used in controlled settings, has strong assumptions (colour, shape, lighting, orientation, etc.) and therefore works reliably. \n\nComputer vision incorporates everything that machine vision does but adds value by way of image analysis. Thus, machine vision can be seen as a subset of computer vision. CV makes greater use of automation and algorithms, including machine learning but the line between CV and MV is blurry. Typically, vision systems in industrial settings can be considered as MV. \n\n\n### What are some applications of Computer Vision?\n\nCV has far-reaching applications. 
Wikipedia's category on this topic has many sub-categories and dozens of pages: \n\n + **Recognition Tasks**: Recognition of different entities including face, iris, gesture, handwriting, optical character, number plate, and traffic sign.\n + **Image Tasks**: Automation of image search, synthesis, annotation, inspection, and retrieval.\n + **Applications**: Enabling entire applications such as augmented reality, sign language translation, automated lip reading, remote sensing, mobile mapping, traffic enforcement camera, red light camera, pedestrian detection and video content analysis.Facebook uses CV for detecting faces and tagging images automatically. Google's able to give relevant results for an image search because it analyzes image content. Microsoft Kinect uses stereo vision. Iris or face recognition are being used for surveillance or for biometric identification. Self-driving cars employ a variety of visual processing tasks to drive safely. \n\nGauss Surgical uses real-time ML-based image analysis to determine blood loss in patients. Amazon Go uses CV for tracking shoppers in the store and enabling automated checkout. CV has been used to study society, demographics, predict income, crime rates, and more. \n\n\n### Could you describe some common tasks in Computer Vision?\n\n**Image Segmentation** groups pixels that have similar attributes such as colour, intensity or texture. It's a better representation of the image to simplify further processing. This can be subdivided into semantic or instance segmentation. For instance, the former means that persons and cats are segmented; the latter means that each person and each cat is segmented. \n\n**Image Classification** is about giving labels to an image based on its content. Thus, the image of a cat would be labelled as \"cat\" with high probability. \n\n**Object Detection** is about detecting objects and placing bounding boxes. Objects are also categorized and labelled. In a two-stage detector, boxing and classification are done separately. A one-stage detector will combine the two. Object detection leads to **Object Tracking** in video applications. \n\n**Image Restoration** attempts to enhance the image. **Image Reconstruction** is about filling in missing parts of the image. With **Image Colourization**, we add colour to a monochrome image. With **Style Transfer** we transform an image based on the style (colour, texture) of another image. \n\n\n### What's the typical data pipeline in Computer Vision?\n\nA typical CV pipeline includes **image acquisition** using image sensors; **pre-processing** to enhance the image such as reducing noise; **feature extraction** that would reveal lines, edges, shapes, textures or motion; **image segmentation** to identify areas or objects of interest; **high-level processing** (also called post-processing) as relevant to the application; and finally, **decision making** such as classifying a medical scan as true or false for tumour. \n\n\n### Could you mention some algorithms that power Computer Vision?\n\nHere's a small and incomplete selection of algorithms. For pre-processing, thresholding is a simple and effective method: conventional, Otsu global optimal, adaptive local. Filters are commonly used: median filter, top-hat filter, low-pass filter; plus filters for edge detection: Roberts, Laplacian, Prewitt, Sobel, and more. \n\nFor feature-point extraction, we can use HOG, SIFT and SURF. Hough Transform is another feature extraction technique. 
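As a small illustration of the pre-processing methods mentioned above, the following sketch applies Otsu thresholding and a Sobel edge filter using OpenCV's Java bindings. The file names are placeholders, and the example assumes the OpenCV native library is installed and on the library path.

```java
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class Preprocess {
    public static void main(String[] args) {
        // Load the OpenCV native library (setup depends on the platform)
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Read the input as a grayscale image; "input.jpg" is a placeholder
        Mat gray = Imgcodecs.imread("input.jpg", Imgcodecs.IMREAD_GRAYSCALE);

        // Otsu's method picks the binarization threshold automatically
        Mat binary = new Mat();
        Imgproc.threshold(gray, binary, 0, 255,
                Imgproc.THRESH_BINARY + Imgproc.THRESH_OTSU);

        // Sobel filter approximates the horizontal intensity gradient (edges)
        Mat edges = new Mat();
        Imgproc.Sobel(gray, edges, CvType.CV_8U, 1, 0);

        Imgcodecs.imwrite("binary.png", binary);
        Imgcodecs.imwrite("edges.png", edges);
    }
}
```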
Viola-Jones algorithm is for object or face detection in real time. There's also the PCA approach called eigenfaces for face recognition. \n\nLucas-Kanade algorithm and Horn-Schunk algorithm are useful for optical flow calculation. Mean-shift algorithm and Kalman filter are for object tracking. Graph Cuts are useful for image segmentation. For 3D work, NDT, ICP, CPD, SGM, and SGBM algorithms are useful. \n\nBresenham's line algorithm is for drawing lines in raster graphics. To relate corresponding points in stereo images, use Fundamental Matrix. \n\nFrom the world of machine learning algorithms we have CNNs and Deep CNNs. We also have SVM, KNN, and more. \n\n\n### What are the current challenges in Computer Vision?\n\nIt's been shown that \"adversarial\" images in which pixels are selectively changed can trick image classification systems. For example, Google Cloud Vision API thinks it's looking at a dog when really the scene has skiers. \n\nAlgorithms are capable of deductive reasoning but are poor with understanding context, analogies and inductive reasoning. For example, CV can recognize a book but the same book when used as a doorstop will be seen only as a book. In other words, CV is incapable of understanding a scene. \n\nWhile CV has progressed with object recognition, accuracy can suffer if the background is cluttered with details or the object is shown under different lighting in a different angle. In other words, *invariant* object recognition is still a challenge. \n\nThere are also challenges in creating low-powered CV solutions that can be used in smartphones and drones. Embedded vision is becoming mainstream in automotive, wearables, gaming, surveillance, and augmented reality with a focus towards object detection, gesture recognition, and mapping functions. \n\n\n### What software tools would you recommend for doing Computer Vision?\n\nThe easy approach to use CV into your applications is to invoke CV APIs: Microsoft Azure Computer Vision, AWS Rekognition, Google Cloud Vision, IBM Watson Visual Recognition, Cloud Sight, Clarifai, and more. These cover image classification, face detection/recognition, emotion detection, optical character recognition (OCR), text detection, landmark detection, content moderation, and more. \n\n**OpenCV** is a popular multiplatform tool with C/C++, Java and Python bindings; but it doesn't have native GPU support. For coding directly in Python, there's also NumPy, SciPy and scikit-image. **SimpleCV** is great for prototyping before you adopt OpenCV for more serious work. \n\nComputer Vision Toolbox from MathWorks is a paid tool but it can simplify the design and testing of CV algorithms including 3D vision and video processing. C# and .NET developers can use AForge.NET/Accord.NET for image processing. For using CNNs, **TensorFlow** is a popular tool. **CUDA Toolkit** can help you get the best performance out of GPUs. Try **Tesseract** for OCR. \n\nFor more tools, see eSpace on Medium, ResearchGate and Computer Vision Online.\n\n## Milestones\n\n1957\n\nRussell Kirsch at the National Bureau of Standards (now called NIST) asks, \"What would happen if computers could look at pictures?\" Kirsch and his colleagues develop equipment to scan a photograph and represent it in the world of computers. They scan a 5cm x 5cm photograph of Kirsch's infant son into an array of 176 x 176 pixels. 
\n\n1959\n\nNeurophysiologists David Hubel and Torsten Wiesel discover that a cat's visual cortex is activated not by objects but by simple structures such as oriented edges. It's only decades later that we use this in the design of CNNs. \n\n1963\n\nLarry Roberts publishes his PhD thesis at MIT. His idea is to create a 3D representation based on perspectives contained in 2D pictures. This is done by transforming images into line drawings. Soon after this, Roberts joins DARPA and becomes one of the founders of the ARPANET that eventually evolves into the Internet. Roberts is considered at the father of computer vision. \n\n1966\n\nThe summer of 1966 is considered the official birth of computer vision. Seymour Papert of MIT's AI Lab defines the \"Summer Vision Project\". The idea is to do segmentation and pattern recognition on real-world images. The project proves too challenging for its time and not much is achieved. \n\n1971\n\nResearch in computer vision continues in the direction suggested by Roberts. David Huffman and Max Clowes independently publish **line labelling algorithms**. Lines are labelled (convex, concave, occluded) and then used to discern the shape of objects. \n\n1974\n\nTo overcome blindness, Kurzweil Computer Products comes up with a program to do OCR. This comes at a time when funding and confidence in AI was at its lowest point, now called as the AI Winter of the 1970s. \n\n1979\n\nDavid Marr suggests a **bottom-up approach** to computer vision (his book is published posthumously in 1982). He states that vision is hierarchical. It doesn't start with high-level objects. Rather, it starts with low-level features (edges, curves, corners) from which higher level details are built up. In the 1980s, this leads to greater focus on low-level processing and goes on to influence deep learning systems. Marr's work is now considered a breakthrough in computer vision. \n\n1980\n\nIn 1980, Kunihiko Fukushima designs a neural network called *Neocognitron* for pattern recognition. It's inspired by the earlier work of Hubel and Wiesel. It includes many convolutional layers and may be called the first deep neural network. Later this decade, **math and stats** begin to play a more significant role in computer vision. Some examples of math-inspired contributions include Lucas-Kanade algorithm (1981) for flow calculation, Canny edge detector (1986), and eigenface for facial recognition (1991). \n\n1990\n\nIn the early 1990s, in criticism of Marr's approach, **goal-oriented** computer vision emerges. The idea is that we often don't need 3D models of the world. For example, a self-driving car only needs to know if the object is moving away or towards the vehicle. One of the proponents of this approach is Yiannis Aloimonos. \n\n2012\n\n**AlexNet** wins the annual ImageNet object classification competition with the idea that depth of a neural network is important for accurate results. AlexNet uses five convolutional layers followed by three fully connected layers. It's considered a breakthrough in computer vision, inspiring further research in its footsteps. In 2015, Microsoft's ResNet of 152 layers obtains better accuracy.","meta":{"title":"Computer Vision","href":"computer-vision"}} {"text":"# Log Aggregation\n\n## Summary\n\n\nLog aggregation is the process of collecting, organizing, and analyzing log data from various sources across a system or network. Logs are records of events or messages generated by software, devices, or systems. 
Such logs are used to diagnose problems, monitor system performance, and identify security issues.\n\nLog aggregation is especially important in complex systems and distributed environments, where logs are generated by a large number of components and can be difficult to access and analyze. By aggregating logs from different sources into a single centralized logging system, administrators can get a holistic view of system health and detect issues that may not be apparent when looking at individual logs.\n\nLogs come in different formats, types and sources. Log aggregation tools are available to handle all aspects of log management including dashboards, alerts, and reports.\n\n## Discussion\n\n### Why do we need log aggregation?\n\nIn a monolithic architecture, all the components of the application are running on a single host. The logs are relatively easier to access and analyze. However, as the application grows, the logs can become unwieldy and difficult to manage. So log aggregation is still useful to centralize logs and make it easier to identify and resolve issues.\n\nLog aggregation is even more critical in a microservices architecture because the services are distributed. Services generate logs on different hosts. This makes it harder to troubleshoot issues when they arise. Log aggregation leads to faster and more effective incident response.\n\n\n### What's a typical log aggregation architecture for microservices?\n\nIn a typical architecture, each microservice stores log entries into a file or sends them to a log stream. A **log stream** can be thought of as a real-time, unstructured data feed of the activities and events occurring within a specific component or application.\n\nOn each host is an agent software called the **log collector**. It forwards the log to a **central log repository**. The centralized repository may be a database, a file system, or a cloud-based storage service. The log collector itself may be a separate process running on the host. Where the deployment is in a Kubernetes cluster, the log collector may be a separate container (aka sidecar container) running in the same pod as the main container.\n\nOnce logs are in the repository, there are **tools** to search, visualize and analyze them. ELK stack is one such tool. Logs can also be analyzed as they're being moved into the repository, what's called streaming analytics. This makes sense for critical applications that require real-time monitoring/metrics/alerts and immediate corrective action.\n\n\n### What types of logs are involved in log aggregation?\n\nSince it's hard to predict what sort of problems may crop up and where, a well-designed application must collect many types of logs. **Application logs** provide information about the behaviour of the application. **Infrastructure logs** provide information about the underlying system and its components. Both are important for troubleshooting issues. A good practice is to tag and structure the logs so that they can be easily searched and analyzed.\n\nApplication logs can be further sub-divided into security logs, audit logs, database logs, application server logs, middleware logs, and so on. Likewise, infrastructure logs can be further sub-divided into security logs, audit logs, operating system logs, network logs, and so on. Each log type must contain sufficient details relevant to its context. For example, security logs would include unauthorized access attempts. Database logs would include transactions. 
Middleware logs would contain information about message queues.\n\nMetrics complement logs towards better monitoring and observability. **Metrics** are structured data points (CPU usage, memory usage, network traffic, response times, etc.) collected at regular intervals. While logs provide a detailed record of individual events, metrics provide a high-level view of overall system performance.\n\n\n### How are logs from different sources interleaved?\n\nLogs can be interleaved at different stages of the log aggregation process, such as when they are collected by log collectors, stored in a database or file system, or analyzed by log search and analysis tools. The specific interleaving method used depends on the requirements of the system and the use case.\n\nThe most common method is **time-based interleaving**, which orders logs based on the time they were generated. This is useful for correlating events across different services or devices.\n\n**Source-based interleaving** is more suited for analyzing logs from a specific component or service.\n\n**Event-based interleaving** uses events or messages to organize logs. This is useful for identifying patterns and trends in the log data. A related method is **transaction-based interleaving**. This is useful for tracking the progress of a specific transaction or operation across different services.\n\n\n### What log format should I adopt for log aggregation?\n\nWhen choosing a log format for log aggregation, there are several factors to consider: level of detail, analysis tools, logging libraries in the selected programming language, etc. Some frameworks and libraries support many different formats. Examples include Log4j (Java logging library) and Fluentd (open-source data collector).\n\nA popular format is **JSON (JavaScript Object Notation)** because of its flexibility and support in many programming languages. It's lightweight and text-based. It's easy to read and parse.\n\n**Syslog** is a standard for logging messages and events. It's supported by many operating systems and network devices. It's commonly used in enterprise environments. Syslog messages are sent over UDP or TCP and can be easily collected by a Syslog server.\n\nApache's **Common Log Format (CLF)** is used for logging HTTP server requests. It's widely supported by web servers and web application frameworks. Like JSON, it's text-based and easy to parse and analyze.\n\n\n### What are the best practices for log aggregation?\n\nDefine a clear logging strategy. Set out the goal of log analysis: identify bottlenecks, optimize performance, regulatory compliance, etc. Determine what log data you need to collect, how long you need to retain it, and what tools you will use for analysis and visualization. Regularly review the strategy since log aggregation is an ongoing process that requires regular maintenance and updates.\n\nCollect logs in real time, rather than relying on batch processing or manual collection. If real-time analysis is not possible, at least regularly analyze logs. Use a standardized logging format, such as JSON or Syslog, to make it easier to parse and analyze logs.\n\nStoring all logs in a central location is a pre-requisite. Implement security controls to protect log data, such as encrypting logs in transit and at rest, and restricting access to logs based on user roles.\n\nRotate log files regularly to prevent them from becoming too large.
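As a rough sketch of two of these practices (a standardized, machine-readable format and size-based rotation), here is what this might look like with Java's built-in `java.util.logging`. Real deployments would more likely rely on a logging framework plus a log shipper; the file name, size limits and JSON fields here are only illustrative and the formatter does not escape special characters.

```java
import java.util.logging.FileHandler;
import java.util.logging.Formatter;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class StructuredLogging {
    public static void main(String[] args) throws Exception {
        // Rotate across 5 files of at most 1 MB each ("app.log.0" .. "app.log.4")
        FileHandler handler = new FileHandler("app.log.%g", 1_000_000, 5, true);

        // Emit one JSON object per line so collectors can parse entries easily
        handler.setFormatter(new Formatter() {
            @Override
            public String format(LogRecord record) {
                return String.format(
                        "{\"ts\":%d,\"level\":\"%s\",\"logger\":\"%s\",\"msg\":\"%s\"}%n",
                        record.getMillis(), record.getLevel(),
                        record.getLoggerName(), record.getMessage());
            }
        });

        Logger logger = Logger.getLogger("orders");
        logger.addHandler(handler);
        logger.info("order accepted");
        logger.warning("payment retry scheduled");
    }
}
```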
Monitor the system and set up alerts to warn of lost logs, bottlenecks or unusual activity.\n\nMany of these best practices are also part of the CNCF guidelines for logging in cloud-native environments.\n\n\n### What should developers and log analysts know about log aggregation?\n\nLogs provide a wealth of information about application behaviour, including errors, warnings, performance metrics, trends and patterns. Developers should strive to make logs as useful as possible given this context. Too much logging can impact performance, especially if logging is synchronous. Developers must balance the need for information with the impact on performance.\n\nLog data should be designed for machine consumption. This means logs should be standardized so that tools can process them effectively.\n\nLogs are also data. As in any data analytics workflow, log analytics faces similar challenges: data volume, variety, quality, security, and accessibility. Authentication credentials and personally identifiable information (PII) should not be in the logs; if they must be present, they should be masked or encrypted. Logs should be free of errors, duplicates, or other inconsistencies.\n\nWhen troubleshooting problems with logging, check that the log source is actually generating and sending the logs. Check the log configuration. If logs are not reaching the central repository, check the log collector, network connections or firewall settings. Another technique is to increase the logging level to view detailed logs. To deal with log loss, implementing redundancy (multiple logging systems) may be necessary.\n\n\n### What are some case studies of log aggregation?\n\nAirbnb uses a combination of open-source tools, including Logstash, Kibana, and Elasticsearch, for log aggregation. The company collects log data from over 50,000 servers and analyzes over 200 GB of log data per day. Airbnb uses log aggregation to monitor application performance, detect security incidents, and troubleshoot issues.\n\nUber uses a custom-built log aggregation system called Marmaray to collect and analyze log data from its microservices architecture. Marmaray provides real-time log analysis and allows Uber to identify and troubleshoot issues quickly. Uber also uses Apache Kafka to collect log data from its applications and store it in a centralized location.\n\nNetflix uses a combination of open-source tools, including Apache Kafka, Apache Cassandra, and Elasticsearch, for log aggregation. The company collects over 1 trillion events per day and also does real-time log analysis. In-house tools for log aggregation include the Spectator library for collecting application metrics, the Atlas service for storing and querying metric data, and the Mantis service for real-time stream processing.\n\n## Milestones\n\n1984\n\nThe **syslog** protocol is developed for UNIX systems. Syslog is a standard for logging messages, and it allows system administrators to collect and store logs from multiple sources. In 2000, a more advanced implementation of syslog called **rsyslog** is launched. It supports features such as filtering, log rotation, and remote logging.\n\n1999\n\nThe concept of log aggregation gains popularity with the advent of **distributed systems and service-oriented architectures (SOA)** in the late 1990s.
As the number of systems and services in these architectures increases, the need for a centralized approach to log management becomes more critical.\n\n2000\n\nIn the early 2000s, the rise of **cloud computing** and of **virtualized and containerized environments** leads to a new wave of log aggregation solutions. These solutions are designed to collect logs from virtual machines and containerized environments and store them in cloud-based log management platforms.\n\n2006\n\nThe Apache Hadoop project is launched. Hadoop is a distributed computing platform that includes the Hadoop Distributed File System (HDFS) for storing and processing large datasets. HDFS is also widely used to store and batch-process large volumes of log data.\n\n2008\n\n**Graylog** is launched. It's an open-source log management platform that includes features such as log aggregation, search, and visualization. Other tools follow: Logstash (2009), Elasticsearch (2010), Fluentd (2012) and Prometheus (2016).\n\n2019\n\nThe Cloud Native Computing Foundation (CNCF) releases a set of **guidelines for logging in cloud-native environments**. The guidelines emphasize the importance of structured logging and recommend the use of tools such as Fluentd and Elasticsearch for log aggregation.","meta":{"title":"Log Aggregation","href":"log-aggregation"}} {"text":"# Express.js\n\n## Summary\n\n\nExpress is a minimalistic web framework based on Node.js. It's essentially a series of *middleware function calls*, each of which does something specific. Express is not opinionated, which is also why you're free to use it in different ways. For example, it doesn't enforce a particular design pattern or a folder structure. \n\nExpress is an open source project that's been managed by Node.js Foundation since 2016. Express has good adoption. It's part of the MEAN stack. It's also being used by many other web frameworks such as Kraken, LoopBack, Keystone and Sails. However, it's been said that Express is not suited for large projects/teams.\n\n## Discussion\n\n### Why should I use Express.js when there's already Node.js?\n\nWhile it's possible to build a web app or an API service with only Node.js, Express.js simplifies the development process. Express itself is based on Node. It's been said that Express... \n\n> provides a thin layer of fundamental Web application features, without obscuring Node.js features that you know and love.\n\nFor example, sending an image is complex in Node but easily done in Express. In Node, the route handler is a large monolith but Express enables more modular design and maintainable code. \n\nNode is a JavaScript runtime for server-side execution. Thus, Node can be used as the app server for your web application. Express is seen as a lightweight web framework. Express comes with a number of \"middleware\" components that implement out-of-the-box solutions for typical web app requirements. HTTP requests/responses are relayed by Node to Express, whose middleware then do the processing. \n\n\n### Could you explain the concept of middleware in Express.js?\n\nWhen client requests are received, the server doesn't handle all of them alike. For example, submitting a form is handled differently from clicking a like button. Thus, each request has a well-defined handler, to which it must be properly routed.\n\nMiddleware sits in the routing layer. Express is basically a routing layer composed of many modular processing units called middleware. Requests are processed by middleware functions before being sent to the handlers.\n\nA request-response cycle can invoke a series of middleware functions. A middleware can access and modify the request and response objects. Once a middleware function has completed, it can either pass control to the next middleware function (by calling `next()`) or end the cycle by sending a response to the client. Since middleware can send responses, even the request handlers can be treated or implemented as middleware.\n\nMiddleware can also be called after the request is processed and the response is sent to the client. But in this case, middleware can't modify the request and response objects.
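To make the cycle concrete, here is a minimal sketch of an application-level middleware followed by a route handler; the route, port and log message are illustrative, not taken from the Express documentation.

```js
const express = require('express');
const app = express();

// Application-level middleware: runs for every request,
// then passes control onwards by calling next().
app.use((req, res, next) => {
  console.log(`${req.method} ${req.url}`);
  next();
});

// A route handler ends the request-response cycle by sending a response.
app.get('/hello', (req, res) => {
  res.send('Hello from Express');
});

app.listen(3000);
```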
### What are the different types of Express.js middleware?\n\nThere are a few types of Express middleware: \n\n + **Application-level**: These are bound to an instance of the application object `express()`. They are called for every app request. The middleware signature is `function (req, res, next)`.\n + **Router-level**: These are bound to an instance of the router `express.Router()`. Otherwise, they work similarly to application-level middleware.\n + **Error-handling**: Any middleware can throw an error. Such errors can be caught and handled by error-handling middleware. These middleware have an extra argument in their signature: `function (err, req, res, next)`.\n + **Built-in**: These are built into the default Express installation. They include `express.static`, `express.json` and `express.urlencoded`.\n + **Third-party**: Express has a rich ecosystem of third-party developers contributing useful middleware. Some of these are maintained by the Express team, while others come from the community. For an example list of some Express middleware, visit the Express Resources Middleware page.\n\n### What are some useful or recommended middleware for Express.js?\n\nWithout being exhaustive, here are some useful middleware: \n\n + *body-parser*: To parse HTTP request body.\n + *compression*: Compresses HTTP responses.\n + *cookie-parser*: Parse cookie header.\n + *cookie-session*: Establish cookie-based sessions.\n + *cors*: Enable Cross-Origin Resource Sharing (CORS).\n + *csrf*: Protect from Cross-Site Request Forgery (CSRF) exploits.\n + *errorhandler*: Development error-handling/debugging.\n + *morgan*: HTTP request logger. Alternatives are *winston* and *bunyan*.\n + *timeout*: Set a timeout period for HTTP request processing.\n + *helmet*: Helps secure your apps by setting various HTTP headers.\n + *passport*: Authentication using OAuth, OpenID and many others.\n + *express-async-handler*: For async/await support. An alternative is *@awaitjs/express*.\n + *express-cluster*: Run Express on multiple processes.\n\n### As a beginner, how can I get started with Express.js?\n\nA beginner should first learn the basics of JavaScript, Node.js and Express.js, in that order. The Express website has useful tutorials. The routing guide is a good place to start. For in-depth understanding, or as a handy reference, use the API Reference.\n\nInstall Node.js first. Then install Express.js: `npm install express --save` \n\nSince Express is unopinionated, your folder structure can be anything that suits your project. However, *express-generator* is a package that gives beginners a basic structure to start with. This installs the *express* command-line tool, which can be used to initialize your app. You get to choose your preferred templating engine (pug, ejs, handlebars) and styling support (less, stylus, compass, sass).
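The following shell session sketches how the generator might be used; the app name myapp and the choice of pug are arbitrary, and exact flags can vary with the generator version.

```bash
# Scaffold a new app with the pug templating engine
npx express-generator --view=pug myapp

# Install dependencies and start the server (listens on port 3000 by default)
cd myapp
npm install
npm start
```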
\n\n\n### Where is Express.js not a suitable framework?\n\nExpress is not recommended for large projects. \n\nAlthough Express is lightweight, for performance-critical apps such as highly scalable REST API services, prefer to use specialized Node frameworks such as fastify, restana, restify, koa, or polka. \n\n\n### What are some best practices when using Express.js?\n\nAdopt a modular structure to make your code more manageable. \n\nIn production, log requests and API calls but minimize debug logs. A package such as *debug* might help in configuring the logging contexts. Pipe logs from `console.log()` or `console.error()` to another program since saving them to file or printing to console makes the calls synchronous. \n\nIn general, avoid synchronous calls. Use Node's command-line flag `--trace-sync-io` (during development only) to be warned when synchronous calls happen. \n\nFor performance, use compression middleware. For security, use helmet. To run multiple instances on multicore systems, launch your app on a **cluster** of processes. The cluster can be fronted with a **load balancer**, which can be a reverse proxy such as Nginx or HAProxy. \n\nHandle errors using try-catch or promises. Don't listen for the `uncaughtException` event to handle errors. This might be a way to prevent your app from crashing but a better way is to use a **process manager** (StrongLoop Process Manager, PM2, Forever) to restart the app automatically. \n\n## Milestones\n\n2009\n\nRyan Dahl and others at Joyent develop Node.js with initial support for only Linux. The name *Node* is coined in March. Being open source, version 0.1.0 is released on GitHub in May. In November, Ryan Dahl presents Node.js at JSConf in Berlin. \n\nNov \n2010\n\nAs a web development framework based on Node.js, v1.0.0 of **Express.js** is released. The development of Express can be traced to January 2010 when v0.1.0 was committed on GitHub. \n\n2011\n\nTo manage packages for Node.js, **Node Package Manager (NPM)** is released. An early preview of NPM, by Isaac Z. Schlueter, can be traced to 2009. \n\nApr \n2013\n\nValeri Karpov at MongoDB proposes a full-stack solution that he terms the **MEAN stack**: MongoDB, Express, Angular and Node. Given that three of these are based on JavaScript, this promotes the demand for full-stack developers who can handle both frontend and backend code. \n\nJul \n2014\n\nStrongLoop acquires commercial rights to Express. StrongLoop itself offers an open source API framework called **LoopBack** on top of Node and Express. Some community folks criticize this \"sponsorship\" or \"sale\" of what was an open source project. \n\nFeb \n2015\n\nJoyent, IBM, Microsoft, PayPal, Fidelity, SAP and The Linux Foundation come together to establish **Node.js Foundation**. \n\nSep \n2015\n\nIBM acquires StrongLoop, a company that specializes in Node.js, Express.js and API management. StrongLoop is said to offer \"robust implementation, making Node.js enterprise-grade\". This acquisition could mean better adoption of Node and Express in the corporate world. \n\nFeb \n2016\n\nDoug Wilson, one of the main contributors to Express, decides to stop working on Express. This is mainly because what was once an open source project is now controlled by IBM/StrongLoop. The Express community also voices its unhappiness with the way the framework is managed. Meanwhile, in a move towards open governance, Express is handed over to Node.js Foundation as an incubation project. \n\nOct \n2018\n\nVersion 4.16.4 of Express is released.
The same month, version 5.0.0-alpha.7 is released. Version 5.0.0-alpha.1 can be traced back to November 2014.","meta":{"title":"Express.js","href":"express-js"}} {"text":"# Git\n\n## Summary\n\n\nGit is a popular free and open source distributed version control system for managing small to large-scale projects. It keeps track of the changes made to files in a project, making it easy to roll back when necessary. It is compatible with a wide range of operating systems and integrated development environments, making it accessible to a large range of developers. \n\nGit is software that tracks files and their history. Online Git hosting providers such as GitHub help developers collaborate remotely and combine each other's work.\n\n## Discussion\n\n### Why do developers need to learn Git?\n\nGit is useful for everyone from web developers to app developers who write code or make changes to their files. If a set of people are working on one project, there is a risk of overwriting or conflicting with each other's work. To avoid these problems, we can use Git, which helps teams collaborate by implementing a version control system (VCS). The most significant advantage is that they can track and compare the modifications made to the files and revert to earlier versions if necessary. \n\nA Version Control System (VCS) is specialized software that keeps the code in a central location and allows a team of people to share, collaborate on, track and change their work, which makes it easier to develop software in a continuous and straightforward manner. \n\n\n### What is a Git Repository?\n\nA repository in Git stores all the project-related files as well as the revision history of those files. It is virtual storage for your project: Git creates a sub-directory named .git (usually hidden) inside your project directory to store this data. This directory is the heart of the repository and tracks the changes made to the files. Deleting the .git/ subdirectory therefore deletes the history of your project. \n\nThere are different ways to create a repository: the git init command creates a new repository locally, while git clone creates a full local copy of an existing repository stored on a remote server.\n\n\n### What is the difference between Bare and Non-Bare git repository?\n\nA bare repository is a Git repository that is used only for collaboration among multiple individuals, that is, for pushes and pulls, with no active development or code written in it directly. It can serve as a remote repository for team members to share code. A bare repository is created with the `git init --bare` command; if you list the contents of the directory, you see what is normally found inside the .git directory. Individual users can't make changes or create new versions in it directly.\n\nIn a non-bare repository, the files that reside under the .git directory retain the snapshots of tracked files and allow you to edit the files in the working directory. In a bare repository, you will not see a .git directory; instead, all of the files that normally reside under .git are present directly in the root directory. \n\nThe `git init` command, by default, creates a non-bare repository. Except for a few variations, both bare and non-bare repositories are almost identical.
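A short shell sketch of the difference; the paths below are hypothetical.

```bash
# A bare repository: no working directory; typically used as a shared remote
git init --bare /srv/git/project.git

# A non-bare repository: a working directory plus a hidden .git subdirectory
git init ~/work/project
cd ~/work/project
git remote add origin /srv/git/project.git   # use the bare repo as the remote
```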
### What is the Difference between Git and SVN?\n\nGit is a distributed version control system (DVCS) that saves a local copy of the repository on each member's machine. Any set of modifications made to these files, referred to as a commit, is compared with the previous version and only the alterations are recorded. Each user has their own repository as well as a working copy. Other members won't be able to see your modifications until you push changes to the central repository. \n\nSVN (Subversion) is a centralised version control system (CVCS) that requires individuals working on a repository to download the most recent version from a central location. A limitation of a CVCS is that it needs to be connected to the central repository to get the latest version, and unlike a DVCS, it can't perform other repository operations locally. Despite the fact that each user has their own working copy, there is only one central repository where other members can see and update your changes as soon as you push them. \n\n\n### Tell me some interesting features of Git\n\n + **Troubleshooting**: The git blame command shows the line-by-line revision history of a file along with the author of each change. The git bisect command assesses good or bad commits by searching through the project history. When we have little idea of how a certain bug was introduced, git bisect is useful, whereas git blame helps when we want to see who last changed a particular line and when.\n + **Branching and Merging**: Branches create separate copies of files in a repository for users. Changes made to these files on a branch are not reflected in the main code and are independent of each other. This helps users test and develop new features and merge them into the main code only if required. Combining the sequence of commits from one branch into another is called merging.\n + **Rebasing**: Changes made to the files can be replayed in the same sequence on top of another branch. For example, commits can be reapplied on the main branch on top of other commits made recently by another user. Each commit is replayed one by one, so if a conflict happens the rebasing is paused until it's fixed.\n\n### What is a Git workflow?\n\nIt is a method of using Git in a productive and feasible manner. There are no specific rules; these are common practices that can be considered guidelines for development. Some commonly used workflows are described below:\n\n + **Centralized Workflow**: Similar to SVN, this collaboration model is conceptually centralized though it is technically decentralized. Though the project has multiple contributors, there is only a single point of entry for the changes made. The work done by each developer is merged before the changes are pushed; no overwriting is allowed here.\n + **Trunk-based development**: Development happens on a single branch called the trunk. Users divide work into small pieces and merge them into the trunk frequently, often within a few hours. This also decreases the chances of merge conflicts.\n + **Git Flow Workflow**: There are many branches split for different purposes. Development happens in the develop branch, with feature branches split off for new features and merged back into develop when ready. The release branch contains tested code that is ready to be merged into the main branch. Final code ready for release is only available in the main branch.\n\n### Can you describe Git add and commit?\n\nThe git add command moves new or modified files in our working directory to the staging area. Files in the staging area are the ones ready to be included in the next commit. You can add directories, individual files, even a specific part of a file. If a deleted file is added using this command, the deletion is staged for the commit. \n\nCommitting is similar to saving files. A Git commit never happens without first using git add to add files to the staging area. Commits are like snapshots of the repository and show its history over a period of time. Each commit also includes metadata like author name, timestamp, etc. The git revert command undoes a specific commit by creating a new commit that reverses its changes. The git reset command removes commits and rolls the branch back to a commit chosen by the user. If you use git reset accidentally and lose commits, you can use git reflog to get them back.
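The following commands illustrate staging, committing and the undo operations mentioned above; the file names and commit message are only examples.

```bash
git add report.py            # stage a new or modified file
git add src/                 # stage a whole directory
git add -p report.py         # stage only selected parts (hunks) of a file
git commit -m "Fix rounding error in invoice totals"

git revert HEAD              # undo the last commit with a new, inverse commit
git reset --hard HEAD~1      # drop the last commit and roll the branch back
git reflog                   # find commits lost after an accidental reset
```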
### What are the three states of Git?\n\nGit considers each file in a directory initially to be in one of two states: tracked or untracked. Files that have already been captured by Git are known as tracked files. Everything else is untracked: any files in your working directory not yet captured or not part of Git's version control. \n\nOnce files are tracked by Git, three file states are possible.\n\n + **Unmodified/Modified**: The state of a file changes to modified whenever it is edited, and it stays modified until it is committed and stored in the local Git database.\n + **Staged**: When we've completed all the modifications to a file, it is moved to the staged state and is ready to be committed.\n + **Committed**: All modifications are saved in the local database, and the updated files are committed.\n\n### Can you suggest some best practices in using Git?\n\nSoftware development teams can adapt their own best practices according to their needs, so there are no definitive answers. Listed below are some recommended practices followed in the industry. \n\n + **Keep changes minimal**: Use the least amount of code possible to solve an issue so it is easy to revert if it doesn't work as expected. Keep commits atomic: only one fix or one upgrade at a time is recommended, so that a problematic change can be identified and reverted cleanly.\n + **Write effective commit messages**: Describe commit messages in a meaningful way, typically using a verb in the present tense. Clear messages benefit both the team and yourself when looking back later.\n + **Do code reviews**: Code should be evaluated by one or more developers to catch mistakes and potential bugs.\n + **Make use of branches**: Branches can be used to separate work among developers for an efficient workflow. Code is verified and tested before merging into the main or master branch.\n\n### Which should I choose: Mono-Repo or Multi-Repo?\n\nThe term \"mono-repository\" refers to storing the entire codebase related to a project, such as its libraries, dependencies, or services, in a single repository. This approach makes initial setup easy for new contributors since everything related to the project is stored in a single location. However, deploying a component independently becomes difficult since all the files are tightly coupled with each other, and the size of the repository can become huge in large projects. \n\nIn a multi-repository approach, each project-related service is stored in a separate repository. Libraries and dependencies can be built and run in isolated environments. Teams can work autonomously with different responsibilities but need to communicate and sync well to avoid breaking the code while all the work happens independently.\n\nThere is also a hybrid (multi-mono) approach known as submodules that keeps one Git repository as a subdirectory of another repository.
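A brief sketch of the submodule approach; the repository URL and path are hypothetical.

```bash
# Add another repository as a subdirectory (submodule) of the current repository
git submodule add https://example.com/team/shared-lib.git libs/shared-lib

# After cloning a project that uses submodules, fetch their contents
git submodule update --init --recursive
```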
Both approaches have the same goal of managing the codebase, each with its own benefits and drawbacks. Common challenges are managing the final release, and collaboration and communication within the team. Finally, it is up to the team to decide which would suit their workflow.\n\n## Milestones\n\nApr \n2005\n\nLinus Torvalds designs the main principles of Git in about a week to maintain the large Linux kernel project. Previously, the DVCS BitKeeper was used free of cost until that free usage was revoked. The word Git is a random three-letter word not used by any UNIX command; it doesn't have any official meaning.\n\n\nJul \n2005\n\nLinus Torvalds opens up Git to open source contributions, and Junio C Hamano is designated as Git's maintainer. \n\nDec \n2005\n\nGit version 1.0 is released. \n\nJan \n2006\n\nThe .dircache directory, which stores all the information about the repository, is renamed to .git. The command init-db, used to initialize a repository, is changed to git-init-db. \n\nOct \n2007\n\nPreston-Werner and Wanstrath wish to develop a platform where programmers could host and collaborate on Git projects, and start working on their idea of building GitHub.\n\n\nMay \n2012\n\nThe Git logo is updated. \n\nNov \n2018\n\nThe git range-diff command is introduced in version 2.19. It can be used to compare two sequences of commits and the order of the changes made. \n\nMar \n2021\n\nThe git maintenance command is introduced to optimize repository data, speeding up commands and reducing storage requirements.","meta":{"title":"Git","href":"git"}}