https://github.com/edgarogh/f4f1eb
https://raw.githubusercontent.com/edgarogh/f4f1eb/main/README.md
markdown
# f4f1eb <sub>yes name-choosing is hard</sub>

A Typst theme for a nice and simple-looking letter that's not completely black and white. Inspired by a Canva theme.

Features:

* A neutral-warm beige background that feels cosier and softer on the eyes than pure white, while still looking kinda white-ish
* Short content is vertically padded to look a bit more centered
* Long content overflows gracefully onto as many pages as necessary

| Basic example | Short text (vertically centered) | Multi-page overflowing text |
|---|---|---|
| [`_rendered/demo_medium.pdf`](_rendered/demo_medium.pdf) | [`_rendered/demo_short.pdf`](_rendered/demo_short.pdf) | [`_rendered/demo_long.pdf`](_rendered/demo_long.pdf) |

# Usage

* If using Typst locally, install the [HK Grotesk](https://fonts.google.com/specimen/Hanken+Grotesk) font
  * _Note: it is already installed on the https://typst.app/ IDE_
* Move [`template.typ`](template.typ) into the same directory as your entry point file (usually `main.typ`)
* Insert the setup `show` statement:

  ```typst
  #import "template.typ": *

  #show: project.with(
    title: [Anakin \ Skywalker],
    from_details: [
      Appt. x, \
      Mos Espa, \
      Tatooine \
      <EMAIL> \
      +999 xxxx xxx
    ],
    to_details: [
      Sheev Palpatine \
      500 Republica, \
      Ambassadorial Sector, Senate District, \
      Galactic City, \
      Coruscant
    ],
  )

  Dear Emperor,

  ...
  ```

* If your text overflows onto multiple pages, you might want to add [page numbering](https://typst.app/docs/reference/layout/page/#parameters-numbering), as shown in [`demo_long.typ`](demo_long.typ) (line 3)
* Don't hesitate to edit the template if it doesn't exactly fit your needs

# Parameters

```typst
background: rgb("f4f1eb"), // Override the background color (why would you :sad:)
title: "", // Set the top-left title. It looks best on two lines
from_details: none, // Letter sender (you) details
to_details: none, // Letter receiver details
margin: 2.1cm, // Page margin
vertical_center_level: 2, // How much short content is pulled toward the vertical center while staying closer to the top. Set to none to disable centering.
body
```

# License

* `template.typ` is licensed under CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/legalcode)
* The demo files are licensed under CC0 (https://creativecommons.org/publicdomain/zero/1.0/legalcode)
* Any document fully or partially generated using this template may be licensed however you wish
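The page-numbering tip above can be sketched with Typst's built-in `page` set rule. This is a minimal sketch, not the exact contents of `demo_long.typ`, and the placement relative to the template's `show` rule may need adjusting:

```typst
#import "template.typ": *

// Number pages as "current / total"; useful once the letter overflows.
#set page(numbering: "1 / 1")

#show: project.with(
  title: [Anakin \ Skywalker],
)
```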
https://github.com/ClazyChen/Table-Tennis-Rankings
https://raw.githubusercontent.com/ClazyChen/Table-Tennis-Rankings/main/history_CN/2011/MS-03.typ
typst
#set text(font: ("Courier New", "NSimSun")) #figure( caption: "Men's Singles (1 - 32)", table( columns: 4, [排名], [运动员], [国家/地区], [积分], [1], [马龙], [CHN], [3222], [2], [王皓], [CHN], [3194], [3], [许昕], [CHN], [3172], [4], [蒂姆 波尔], [GER], [3169], [5], [张继科], [CHN], [3145], [6], [马琳], [CHN], [3074], [7], [王励勤], [CHN], [3030], [8], [水谷隼], [JPN], [3003], [9], [陈玘], [CHN], [2988], [10], [郝帅], [CHN], [2940], [11], [柳承敏], [KOR], [2923], [12], [弗拉基米尔 萨姆索诺夫], [BLR], [2882], [13], [朱世赫], [KOR], [2878], [14], [张一博], [JPN], [2808], [15], [卡林尼科斯 格林卡], [GRE], [2798], [16], [米凯尔 梅兹], [DEN], [2791], [17], [迪米特里 奥恰洛夫], [GER], [2776], [18], [庄智渊], [TPE], [2766], [19], [帕特里克 鲍姆], [GER], [2754], [20], [吴尚垠], [KOR], [2749], [21], [克里斯蒂安 苏斯], [GER], [2749], [22], [阿德里安 克里桑], [ROU], [2737], [23], [蒂亚戈 阿波罗尼亚], [POR], [2718], [24], [李廷佑], [KOR], [2712], [25], [维尔纳 施拉格], [AUT], [2709], [26], [高宁], [SGP], [2705], [27], [MATTENET Adrien], [FRA], [2699], [28], [岸川圣也], [JPN], [2675], [29], [巴斯蒂安 斯蒂格], [GER], [2673], [30], [高礼泽], [HKG], [2663], [31], [KONECNY Tomas], [CZE], [2663], [32], [松平健太], [JPN], [2653], ) )#pagebreak() #set text(font: ("Courier New", "NSimSun")) #figure( caption: "Men's Singles (33 - 64)", table( columns: 4, [排名], [运动员], [国家/地区], [积分], [33], [KUZMIN Fedor], [RUS], [2648], [34], [博扬 托基奇], [SLO], [2647], [35], [PROKOPCOV Dmitrij], [CZE], [2645], [36], [CHTCHETININE Evgueni], [BLR], [2642], [37], [让 米歇尔 赛弗], [BEL], [2641], [38], [吉田海伟], [JPN], [2630], [39], [陈卫星], [AUT], [2623], [40], [罗伯特 加尔多斯], [AUT], [2618], [41], [侯英超], [CHN], [2610], [42], [唐鹏], [HKG], [2610], [43], [帕纳吉奥迪斯 吉奥尼斯], [GRE], [2603], [44], [约尔根 佩尔森], [SWE], [2602], [45], [YANG Zi], [SGP], [2601], [46], [李静], [HKG], [2598], [47], [马克斯 弗雷塔斯], [POR], [2595], [48], [KOSOWSKI Jakub], [POL], [2593], [49], [KIM Junghoon], [KOR], [2589], [50], [<NAME>], [AUT], [2576], [51], [CHO Eonrae], [KOR], [2574], [52], [沙拉特 卡马尔 阿昌塔], [IND], [2574], [53], [丁祥恩], [KOR], [2571], [54], [郑荣植], [KOR], [2560], [55], [阿列克谢 斯米尔诺夫], [RUS], 
[2559], [56], [江天一], [HKG], [2557], [57], [SIMONCIK Josef], [CZE], [2552], [58], [MONTEIRO Joao], [POR], [2550], [59], [上田仁], [JPN], [2550], [60], [DIDUKH Oleksandr], [UKR], [2549], [61], [佐兰 普里莫拉克], [CRO], [2547], [62], [LIN Ju], [DOM], [2547], [63], [GERELL Par], [SWE], [2536], [64], [尹在荣], [KOR], [2535], ) )#pagebreak() #set text(font: ("Courier New", "NSimSun")) #figure( caption: "Men's Singles (65 - 96)", table( columns: 4, [排名], [运动员], [国家/地区], [积分], [65], [SVENSSON Robert], [SWE], [2533], [66], [<NAME>], [GER], [2532], [67], [LEGOUT Christophe], [FRA], [2531], [68], [<NAME>], [QAT], [2531], [69], [<NAME>], [FRA], [2526], [70], [JANG Song Man], [PRK], [2524], [71], [SKACHKOV Kirill], [RUS], [2519], [72], [丹羽孝希], [JPN], [2518], [73], [SEO Hyundeok], [KOR], [2517], [74], [雅罗斯列夫 扎姆登科], [UKR], [2510], [75], [MACHADO Carlos], [ESP], [2499], [76], [GORAK Daniel], [POL], [2492], [77], [RUBTSOV Igor], [RUS], [2491], [78], [<NAME>], [TUR], [2489], [79], [金珉锡], [KOR], [2486], [80], [HE Zhiwen], [ESP], [2484], [81], [李尚洙], [KOR], [2480], [82], [BLASZCZYK Lucjan], [POL], [2478], [83], [LIVENTSOV Alexey], [RUS], [2476], [84], [安德烈 加奇尼], [CRO], [2470], [85], [彼得 科贝尔], [CZE], [2470], [86], [韩阳], [JPN], [2469], [87], [利亚姆 皮切福德], [ENG], [2468], [88], [艾曼纽 莱贝松], [FRA], [2463], [89], [KARAKASEVIC Aleksandar], [SRB], [2451], [90], [张钰], [HKG], [2450], [91], [<NAME>], [HUN], [2448], [92], [SALIFOU Abdel-Kader], [FRA], [2445], [93], [闫安], [CHN], [2443], [94], [BENTSEN Allan], [DEN], [2443], [95], [DRINKHALL Paul], [ENG], [2442], [96], [LIU Song], [ARG], [2439], ) )#pagebreak() #set text(font: ("Courier New", "NSimSun")) #figure( caption: "Men's Singles (97 - 128)", table( columns: 4, [排名], [运动员], [国家/地区], [积分], [97], [KASAHARA Hiromitsu], [JPN], [2433], [98], [SHIBAEV Alexander], [RUS], [2430], [99], [KIM Hyok Bong], [PRK], [2415], [100], [<NAME>], [SVK], [2413], [101], [<NAME>], [KOR], [2412], [102], [BURGIS Matiss], [LAT], [2410], [103], [CANTERO Jesus], [ESP], [2407], [104], 
[MATSUDAIRA Kenji], [JPN], [2404], [105], [<NAME>], [POL], [2400], [106], [<NAME>], [CZE], [2399], [107], [<NAME>], [KOR], [2394], [108], [斯特凡 菲格尔], [AUT], [2390], [109], [<NAME>], [TUR], [2388], [110], [奥马尔 阿萨尔], [EGY], [2386], [111], [<NAME>], [CRO], [2385], [112], [<NAME>], [POL], [2381], [113], [林高远], [CHN], [2372], [114], [HUANG Sheng-Sheng], [TPE], [2370], [115], [VRABLIK Jiri], [CZE], [2369], [116], [JEVTOVIC Marko], [SRB], [2363], [117], [WU Chih-Chi], [TPE], [2361], [118], [VLASOV Grigory], [RUS], [2361], [119], [<NAME>], [AUS], [2358], [120], [BAGGALEY Andrew], [ENG], [2356], [121], [LASHIN El-Sayed], [EGY], [2353], [122], [TAKAKIWA Taku], [JPN], [2348], [123], [<NAME>], [SVK], [2345], [124], [<NAME>], [PRK], [2342], [125], [<NAME>], [ESP], [2340], [126], [马蒂亚斯 法尔克], [SWE], [2336], [127], [<NAME>], [CHN], [2336], [128], [詹斯 伦德奎斯特], [SWE], [2335], ) )
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/math/root_00.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page

// Test root with more than one character.
$A = sqrt(x + y) = c$
https://github.com/tingerrr/typst-project
https://raw.githubusercontent.com/tingerrr/typst-project/main/README.md
markdown
# typst-project

typst-project is a small library for interacting with Typst projects. It is currently used in [typst-test], but may be upstreamed to other projects, or to Typst itself, at some point.

Planned features are reading and writing 3rd-party tool configs directly into the typst.toml manifest file, with minimal churn for the user, using toml-edit.

The license of this project is currently unspecified, but it will likely be Apache 2.0, as a lot of the code was hoisted from the [typst package bundler][bundler].

[typst-test]: https://github.com/tingerrr/typst-test
[bundler]: https://github.com/typst/packages
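The planned typst.toml round-tripping could look roughly like this. A sketch only: the `[tool.typst-test]` keys are hypothetical, though the Typst manifest format does reserve `[tool.*]` tables for third-party tools:

```toml
[package]
name = "my-package"
version = "0.1.0"
entrypoint = "lib.typ"

# Hypothetical third-party tool config. Editing this table with
# toml-edit preserves the user's formatting and comments elsewhere
# in the manifest.
[tool.typst-test]
tests = "tests"
```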
https://github.com/Otto-AA/definitely-not-tuw-thesis
https://raw.githubusercontent.com/Otto-AA/definitely-not-tuw-thesis/main/src/styles/back-matter.typ
typst
MIT No Attribution
#import "utils/state.typ": is-back-matter

#let back-matter-styles = rest => {
  is-back-matter.update(true)
  rest
}
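A usage sketch for this show rule (the import path and the surrounding content are assumptions, not part of the template):

```typst
#import "src/styles/back-matter.typ": back-matter-styles

// Everything after this point is rendered as back matter, since
// `is-back-matter` is flipped to true before the rest is returned.
#show: back-matter-styles

#bibliography("refs.bib")
```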
https://github.com/Area-53-Robotics/53E-Notebook-Over-Under-2023-2024
https://raw.githubusercontent.com/Area-53-Robotics/53E-Notebook-Over-Under-2023-2024/giga-notebook/entries/decide-program-structure/entry.typ
typst
Creative Commons Attribution Share Alike 4.0 International
#import "/packages.typ": notebookinator
#import notebookinator: *
#import themes.radial.components: *

#show: create-body-entry.with(
  title: "Decide: Program Structure",
  type: "decide",
  date: datetime(year: 2023, month: 8, day: 12),
  author: "<NAME>",
  witness: "<NAME>",
)

We rated each option by:
- Scalability on a scale of 0 to 5
- Simplicity on a scale of 0 to 3
- Asynchrony as either 0 or 5

We rated scalability so high due to the high planned complexity of our program. We want many features, like position tracking and movement algorithms, and scalability lets us keep overall complexity lower as the codebase grows. We rated asynchrony so high because we really do not want some parts of the program to block others when it isn't needed. This will allow us to have more complex control loops without causing problems.

#decision-matrix(
  properties: ((name: "Scalability"), (name: "Simplicity"), (name: "Asynchrony")),
  ("Option 1", 5, 1, 5),
  ("Option 2", 3, 2, 0),
  ("Option 3", 0.5, 3, 0),
)

#admonition(
  type: "decision",
)[
  We ended up choosing option 3 due to its high scalability and its ability to support asynchronous code. This is also the approach we used during the Spin Up season, so we have experience using it.
]

= Implementation

== State Machine

A state machine is a simple and robust way to handle a system that can be in many states. We decided to use this approach after realizing that each subsystem can only be in one state at a time. For example, a catapult might have the following states:

- Loading
- Ready
- Firing

The catapult cannot be loading and ready, or loading and firing, or any other combination of the three. These states are perfectly modeled as an enumeration.

```cpp
enum class CatapultState {
  Loading,
  Ready,
  Firing,
};
```

We can then make a variable that tracks the current state and have the system change its behavior based on the current state.

```cpp
CatapultState current_state = CatapultState::Loading;

while (true) {
  switch (current_state) {
    case CatapultState::Loading: {
      // Load the catapult
      break;
    }
    case CatapultState::Ready: {
      // Ready the catapult
      break;
    }
    case CatapultState::Firing: {
      // Fire the catapult
      break;
    }
  }
}
```

== Tasks

Making each subsystem into a task presents some problems:

- We want to be able to choose when the task starts, so that other processes can start before it.
- Concurrency (the tasks cannot interfere with each other).

The first problem is surprisingly the hardest to solve. PROS's `Task` class takes a callback to another function in order to be constructed. However, the address of that function needs to be known at compile time for this to work. This means that if the function is part of a class (i.e., it's a method), it must be static. This conflicts with our goal, because we want to start the task at runtime. The solution is to define the task as `nullptr` originally, and then initialize it later with a lambda function, which looks like this:

```cpp
if (task == nullptr) {
  task = new pros::Task([=] {
    while (true) {
      loop();
    }
  });
}
```

== File Structure

Due to how PROS projects are structured, we are forced to have `src/` and `include/` be the top-level directories for our body files and header files respectively. However, inside those directories we can organize our project however we choose. We chose to have `src/` and `include/` mirror each other, to make it clearer which body and header files go together.

Inside the top-level directory we have a `lib/` directory, which contains all of our utility code. `lib/` contains `subsystems/` and `utils/`. `subsystems/` contains all of the classes for our subsystems. `utils/` contains utility functions and classes for things like mathematical conversions.

== Style Guide

We chose to implement a style guide in order to write cleaner, more consistent code. We chose to base our style guide on Google's C++ Style Guide #footnote(link("https://google.github.io/styleguide/cppguide.html")), with a few exceptions. We will document these in our project's README. #footnote(link("https://github.com/Area-53-Robotics/53E#readme"))
https://github.com/PA055/5839B-Notebook
https://raw.githubusercontent.com/PA055/5839B-Notebook/main/Entries/inventory/taking-inventory.typ
typst
#import "/packages.typ": notebookinator
#import notebookinator: *
#import themes.radial.components: *

#show: create-body-entry.with(
  title: "Taking Inventory",
  type: "management",
  date: datetime(year: 2024, month: 3, day: 10),
  author: "<NAME>",
  witness: "Praful Adiga",
)

Before any designing can take place, it is key to know the constraints one is placed under. For VEX, a key way of doing this, other than reading the rules, is to see what parts are available to your team. You may have the best idea for a design, but without the parts to build it, that idea is just wasted time. In order to see what parts we had, an Excel spreadsheet was created with all the VEX parts that were in our Inventor parts library, as well as newer ones found on the VEX website. Additionally, tools and other accessories from the Robosource website that we deemed potentially useful were included.

#figure(
  rect(fill: black.lighten(10%))[
    #image("./pre-inventory top half.png", width: 80%)
  ],
  caption: [
    Top half of the Inventory Spreadsheet
  ],
) <inventoryTop>

#figure(
  rect(fill: black.lighten(10%))[
    #image("./pre-inventory bot half.png", width: 80%)
  ],
  caption: [
    Bottom half of the Inventory Spreadsheet
  ],
) <inventoryBottom>

The spreadsheet will take a while to fill out, but for now parts with known quantities, such as zero, have been filled in. Those were marked with yellow to indicate that more were needed. These were then taken to a second spreadsheet, which lays out all the parts and tools we want, their price, quantity, a link to purchase, and a priority. The priority is key, as it allows us to make decisions on what to get within our budget.

#figure(
  rect(fill: black.lighten(10%))[
    #image("./order list frame work.png", width: 80%)
  ],
  caption: [
    Order Spreadsheet with a calculation test
  ],
) <orderSheet>

#admonition(type: "note")[
  Currently the budget is at zero, as dues are yet to be collected and no fundraisers have been planned. It is also important to consider that a small percentage of the school's 15,000 dollar engineering budget is randomly allocated to us, so we will have that to work with.
]
https://github.com/SillyFreak/typst-prequery
https://raw.githubusercontent.com/SillyFreak/typst-prequery/main/docs/manual.typ
typst
MIT License
#import "template.typ" as template: *
#import "/src/lib.typ" as prequery

#let package-meta = toml("/typst.toml").package
#let date = datetime(year: 2024, month: 3, day: 19)

#show: manual(
  title: "Prequery",
  // subtitle: "...",
  authors: package-meta.authors.map(a => a.split("<").at(0).trim()),
  abstract: [
    Extracting metadata for preprocessing from a Typst document, for example image URLs for download from the web.
  ],
  url: package-meta.repository,
  version: package-meta.version,
  date: date,
)

// the scope for evaluating expressions and documentation
#let scope = (prequery: prequery)

= Introduction

Typst compilations are sandboxed: it is not possible for Typst packages, or even just a Typst document itself, to access the "outside world". The only inputs that a Typst document can read are files within the compilation root, and strings given on the command line via `--input`.

For example, if you want to embed an image from the internet in your document, you need to download the image using its URL, save it in your Typst project, and then show that file using the `image()` function. Within your document, the image is not linked to its URL; that step was something _you_ had to do, and have to do for every image you want to use from the internet.

This sandboxing of Typst has good reasons. Yet, it is often convenient to trade a bit of security for convenience by weakening it. Prequery helps with that by providing some simple scaffolding for supporting preprocessing of documents. The process is roughly as follows:

+ You start authoring a document without all the external data ready, but specify in the document which data you will need. (With an image, for example, you'd use Prequery's #ref-fn("image()") instead of the built-in one, to specify not only the file path but also the URL.)
+ Using `typst query`, you extract a list of everything that's necessary from the document. (For images, the command is given in #ref-fn("image()")'s documentation.)
+ You run an external tool (a preprocessor) that is not subject to Typst's sandboxing to gather all the data into the expected places. (There is a _not very well implemented_ Python script for image download in the gallery. For now, treat it as an example and not part of this package's feature set!)
+ Now that the external data is available, you compile the document.

== Fundamental issues and limitations

=== Breaking the sandbox

As I said, there's a reason for Typst's sandboxing. Among those reasons are

- *Repeatability:* the content hidden behind URLs on the internet can change, so not having access to them ensures that compiling a document now will have the same result as compiling it later. The same goes for any other nondeterministic thing a preprocessor might do.
- *Security and trust:* when compiling a document, you know what data it can access, so you can fearlessly compile documents you did not write yourself. This is especially important as documents can import third-party packages. You don't need to trust all those packages to be able to trust a document itself.

The sandboxing is something that Typst ensures, but the preprocessors mentioned in step 3 above will necessarily _not_ do the same. So when using Prequery (in the intended way, i.e. utilizing external preprocessing tools)

- *you need to trust the preprocessors that you run, because they are not (necessarily) sandboxed,* and
- *you need to trust the documents that you compile, including the packages they use, because the documents provide data to the preprocessors, possibly instructing them to do something that you don't want.*

This doesn't mean that using Prequery is necessarily dangerous; it just has more risks than Typst alone.

=== Compatibility

The preprocessors you use will not necessarily work on all machines where Typst runs, including the web app. This package assumes that you are using Typst via the command line.
#pagebreak(weak: true)

= Usage

With that out of the way, here's an example of how to use Prequery:

#{
  let example = raw(block: true, lang: "typ", read("/gallery/test.typ").trim())
  example = crudo.lines(example, "5-")
  example
}

Instead of the built-in `image()`, we're using this package's #ref-fn("image()"). That function does the following things:

- it emits metadata to the document that can be queried for the use of preprocessors;
- it "normally" displays the image (note that this fails if the image has not been downloaded yet);
- in "fallback mode" (i.e. "not normally"), it doesn't try to load the image, so that compilation succeeds;
- it is implemented on top of the #ref-fn("prequery()") function to achieve these easily.

We call a function of this sort "a prequery", and the image prequery is just a very common example. Other prequeries could, for example, instruct a preprocessor to capture the result of software that can't be run as a #link("https://typst.app/docs/reference/foundations/plugin/")[plugin].

As mentioned, this file will fail to compile unless activating fallback mode as described in the commented-out part of the example. The next step is thus to actually get the referenced files, using query:

```sh
typst query main.typ '<web-resource>' --field value \
    --input prequery-fallback=true
```

This will output the following piece of JSON:

```json
[{"url": "https://upload.wikimedia.org/wikipedia/commons/a/af/Cc-public_domain_mark.svg", "path": "assets/public_domain.svg"}]
```

... which can then be fed into a preprocessor. As mentioned, the gallery contains a Python script for processing this query output:

#{
  let example = raw(block: true, lang: "py", read("/gallery/download-web-resources.py").trim())
  // example = crudo.filter(example, l => l != "" and not l.starts-with(regex("\s*#")))
  example
}

I repeat: I *don't* consider this script production ready!
I have made the minimal effort of not downloading existing files multiple times, but files are only downloaded sequentially, and can be saved _anywhere_ on the file system, not just where your Typst project can read them. This script is mainly for demonstration purposes. Handle with care!

Assuming Linux and a working Python installation, the query output can be directly fed into this script:

```sh
typst query main.typ '<web-resource>' --field value \
    --input prequery-fallback=true | python3 download-web-resources.py
```

The first time this runs, the image will be downloaded with the following output:

```
assets/public_domain.svg: downloading https://upload.wikimedia.org/wikipedia/commons/a/af/Cc-public_domain_mark.svg
```

Success! After running this, compiling the document will succeed.

= Authoring a prequery

This package is not just meant for people who want to download images; its real purpose is to make it easy to create _any_ kind of preprocessing for Typst documents, without having to leave the document to configure that preprocessing. While the package does not actually contain a lot of code, describing how the #ref-fn("image()") prequery is implemented might help -- especially because it relies on a peculiar behavior regarding file path resolution. Here is the actual code:

#{
  let example = raw(block: true, lang: "typ", read("/src/lib.typ").trim())
  example = crudo.lines(example, "62-")
  example = crudo.filter(example, l => not l.starts-with(regex("\s*//")))
  example
}

This function shadows a built-in one, which is of course not technically necessary. It does require us to keep an alias to the original function, though. The parameters passed to the #ref-fn("prequery()") function are as follows: the first two specify the metadata made available for querying. The last one is also simple: it just specifies what to display if Prequery is in fallback mode, namely the Unicode character "Frame with Picture".
The third parameter, written as ```typc _builtin_image.with(..args)```, is the most involved: first of all, this expression is a function that is only called if not in fallback mode. More importantly, `args` is an `arguments` value, and such a value apparently remembers where it was constructed. Compare these two functions (here, `image()` is just the regular, built-in function):

```typ
// xy/lib.typ
#let my-image(path, ..args) = image(path, ..args)
#let my-image2(..args) = image(..args)
```

#pagebreak(weak: true)

While they seem to be equivalent (the `path` parameter of `image()` is mandatory anyway), they behave differently:

```typ
// main.typ
#import "xy/lib.typ": *

#my-image("assets/foo.png") // tries to show "xy/assets/foo.png"
#my-image2("assets/foo.png") // tries to show "assets/foo.png"
```

With `my-image`, passing `path` to `image()` resolves the path relative to the file `xy/lib.typ`, resulting in `"xy/assets/foo.png"`. With `my-image2`, on the other hand, the path seems to be relative to where the `arguments` value containing it was constructed, and that happens in `main.typ`, at the call site. The path is thus resolved as `"assets/foo.png"`.

This is of course very useful for prequeries, which are all about specifying the files into which external data should be saved, and then successfully reading from these files! As long as the file name remains in an `arguments` value, it can be passed on and still be treated as relative to the caller of the package.

= Module reference

#module(
  read("/src/lib.typ"),
  name: "prequery",
  label-prefix: none,
  scope: scope,
)
https://github.com/polarkac/MTG-Stories
https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/013%20-%20Magic%202015/002_Nissa%2C%20Worldwaker.typ
typst
#import "@local/mtgstory:0.2.0": conf #show: doc => conf( "Nissa, Worldwaker", set_name: "Magic 2015", story_date: datetime(day: 25, month: 06, year: 2014), author: "<NAME>", doc ) #emph[The elf Planeswalker <NAME> has led a difficult life. She's been exiled from her tribe, the Joraga, on more than one occasion, and becoming a Planeswalker set her even further apart. She traveled to different worlds, seeking to understand the nature of elves' responsibility toward nature, but she always returned to her home plane of Zendikar.] #emph[Whatever peace she managed to find for herself came to an end with the rising of the monstrous Eldrazi. These vast, interplanar beings, devourers of entire worlds, had been imprisoned on Zendikar millennia before. Desperate to save her world, Nissa broke the lock that kept the Eldrazi on Zendikar. Her hope was that the Eldrazi, freed of their confines, would travel out into the Multiverse. Their threat would spread, but Zendikar would be saved.] #emph[It didn't work.] #emph[At least one of the three Eldrazi titans remains on Zendikar, threatening all life on the plane with annihilation. Nissa stayed to fight the Eldrazi, but she fears it's hopeless. To defeat the monstrosities that assault the plane, all of Zendikar would have to fight as one…] #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em) Nissa's eyes opened. Smoke and ash swirled above her as she awoke to chaos. Nissa lay pinned on her back and she could feel earth rumble beneath her. She looked around, her mind trying to cling onto anything familiar. Nissa could hear yelling but it was distorted, as if she was hearing the ambient tumult through a long, echoing tunnel. There was a pressure that made her ears ring. She blinked her eyes hard. And then, piece by piece, she began to remember. Eldrazi had swarmed the Joraga. There had been a great blast of energy. Ulamog had returned. Then it all rushed back into her mind in a sickening flood. 
Her tribe was under attack. Many had died. As her vision cleared, Nissa could see the twisted bodies of elves amid the broken trees. Misshapen Eldrazi corpses were scattered about, the steaming remains of Ulamog's freakish spawn. #figure(image("002_Nissa, Worldwaker/01.jpg", width: 100%), caption: [Disaster Radius | Art by James Paick], supplement: none, numbering: none) She had to move. She moved to get up and was jerked back to the ground—her legs were trapped. A tree had been split in half and she was caught under it. She wrestled with the massive limb like a wild animal caught in a snare until the pain kicked in and she let out an involuntary scream. As she caught her breath, a series of irregular, staccato blasts tore through the ash-filled air from something high above the ground. Nissa clapped her hands to her mouth and lay as still as a stone as a low ticking sound clicked from all around her. She could only see a few feet in any direction, but she knew the source of the terrible sound was close. Its vocalizations vibrated through her bones. It was hunting. She tried to pull forth mana to summon some form of aid, but she was drained. Planeswalking was out of the question. All Nissa could muster was a small glow of healing energy to ease the pain, but that was enough to sap her last reserves. She fell back exhausted against ground that had turned to mud from her own blood. In desperation, she called out a few times to shapes that fled the Eldrazi swarm, some humanoid, some animal, but no one answered. The taste of dirt and ash filled her mouth as she labored for breath, and she could feel her life as it left her with each heartbeat. Then she saw the massive silhouette of Ulamog grow like a dark cloud over the shattered trees, until it blocked out the hazy light of Zendikar's sun. As its shadow passed over her she heard the crystalline chewing sound it made as it devoured the life from Zendikar, leaving its signature path of destruction. 
She could smell the acrid stench of it and felt her stomach convulse. #figure(image("002_Nissa, Worldwaker/02.jpg", width: 100%), caption: [Ulamog, the Infinite Gyre | Art by <NAME>], supplement: none, numbering: none) Tears streaked her cheeks as <NAME> looked into the swirling, smoke-filled air and waited for death. A wild-eyed human face, streaked with dirt, looked at her. A calloused hand grabbed hers. Nissa was too weak to move. The human called out over his shoulder as the chewing sound of the titan closed in. "Bahkut! Alira! Over here." He turned to Nissa. His calloused hand touched her face, and Nissa could feel the life force as it flowed from him. "Stay alive. We'll get you out of here." "Khalni bless you," Nissa said and slipped into blackness. #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em) Nissa awoke to smells and sounds unfamiliar to her. Her legs ached as she moved them tentatively and looked around. She felt weak but could feel a small bit of strength returning. She heard footsteps approaching. The flap of the tent lifted and the large, dark-skinned human who had pulled her from under the tree entered. "You're awake." He smiled. "That's good." "Where am I?" Nissa said. Mistrust everyone. Even though the human saved her, the old Joraga instincts remained. She felt vulnerable, naked under the furs, and she knew her full power was a long way from returning. The human sensed her unease and held up both hands. "Easy. You're still healing." He picked up her clothes from a nearby stool and set them by her. He moved slowly and deliberately as he spoke. "You're a day's travel from Jalesh. My name is Hamadi. You're safe here." "My tribe…" Nissa's mind recoiled from the memories. She forced herself to ask the question. "Did you see what happened to my tribe?" Hamadi looked at her and told her what she already knew. "Ulamog was there. The valley, the forest. The Eldrazi left nothing but ash. 
I'm sorry, but from what we saw, the Joraga are no more." #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em) Nissa climbed through the woods. Her legs still felt stiff, but their strength was returning. It felt good to move and feel the forest flow around her like a verdant tapestry once again. Hamadi walked behind her and, for a human, he made little noise. "Up there," he said. Nissa looked through the thick forest and saw an outcropping of rock high up above them where the trees gave way to the granite of the mountain. "That ledge way up there?" Nissa said. "You must really think highly of your healing, druid." She raised her eyebrows at Hamadi, who smiled. "I think highly of your will, Shaya," Hamadi said back. "That's a nice way to put it," Nissa said with a smirk. "Hey. What does #emph[Shaya] mean?" Hamadi chuckled. "I'll tell you later…Shaya." #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em) They sat on the ledge overlooking the forest below and ate nuts, dried fruit, and chakri root. Nissa could feel vigor from the root soothing her tired muscles. "You have been more than kind, Hamadi," Nissa said after a while. "Thank you." "We are in different times now. Gone are the days when we Zendikari fight one another," Hamadi said as he handed Nissa more food. "It took creatures like the Eldrazi to teach us to live together in peace. Either they are good teachers or we are stubborn beings, eh?" He laughed. "I guess it is in everyone, this tribalism, this need to isolate and separate." Nissa looked at the forest below. "The Joraga drilled it into us from the moment we were born. 'Trust no outsiders' was something I heard since…forever. I have come a long way since then, Hamadi. I have seen too much to have such a small view of life. The kindness you have shown me has played a part as well." Hamadi smiled, then paused for a moment and looked at a piece of dried fruit as he turned it in his fingers. 
"I was from Graypelt, just outside the great forest of Turntimber." Hamadi popped the fruit in his mouth. #figure(image("002_Nissa, Worldwaker/03.jpg", width: 100%), caption: [Graypelt Refuge | Art by Philip Straub], supplement: none, numbering: none) "I'm sorry, Hamadi," Nissa said. "I knew of your people. I traveled through your lands." "Small world, Shaya," Hamadi said. "As you know, my people were good hunters and guides, but we made our living storing mana. Our magic was tied to the land and the trees gave to us more than we could ever repay." Hamadi leaned back against a stone and took a drink from a water skin. "Many mana-hungry expedition houses came to us for trade. Back then, our pouches and stomachs were full. Our tents were warm. We thought we had reached a summit, #emph[the] summit, but it was a false one. My tribe, the expedition houses, we were all blind." Hamadi set down the water skin and continued. "We had heard the rumors of the titans' devastation but we didn't believe them. How could anyone have believed it was true unless they were actually there? Never did we think that the Eldrazi would come to our lands and annihilate our people." Hamadi sighed. "We are short-lived and short-sighted beings, Shaya. Now that I have seen the titans, I know that there are realities that exist beyond our wildest imaginings. How were we to prepare for such things?" Hamadi hung his head for a brief moment, then looked at Nissa. But Nissa couldn't meet Hamadi's eyes. As she listened to his story, a growing ache welled up within her body and lodged itself in her throat. She was responsible for all of it, all his loss and all of Zendikar's devastation. Hamadi had pulled her, a Joraga elf, from certain death. He had risked his life and had saved hers. And she was the cause. Dark memories started to crawl into Nissa's mind from all the worst places. All her failures, her foolish choices, her selfishness and arrogance, poured into her gut like a lead weight. 
She became tangled in the web of her past that was filled with the bodies of a thousand innocents who had fallen to the Eldrazi. She could have saved them all.

"Hamadi," she said, as she hugged her knees. "I'm so sorry."

#v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em)

Nissa followed Hamadi, Bahkut, Alira, and a pair of kor twins along a mountain path. She could feel Zendikar, restless beneath her feet, as if a great, unborn beast was moving within it. Nissa could see large chunks of the mountain floating and spinning over the trail far ahead.

"We have to rope up. Gravity fields," the kor named Khali said. Her twin brother Sha'heel never seemed to speak, preferring to use subtle hand signals or nothing at all. Khali and Sha'heel removed their packs. Sha'heel pulled out ropes, hooks and straps, handing them to his sister as she outfitted Hamadi, Bahkut, Alira, and Nissa.

#figure(image("002_Nissa, Worldwaker/04.jpg", width: 100%), caption: [Kor Hookmaster | Art by <NAME>], supplement: none, numbering: none)

"You ever been sky-climbing, Joraga?" Khali said to Nissa as she secured the straps of Nissa's harness.

"I hate being off the earth," Nissa said. "But I hate the Eldrazi even more."

"Then just think about gutting those bastards and you'll do fine." Khali smiled as she pulled the leather strap tight and snaked a twine rope through the metal loop which effectively connected Nissa to Sha'heel. "Sha'heel's got you now, Joraga, but don't think you're just along for the ride. Follow his lead as best you can."

Khali left to see to Hamadi and the other two humans. Sha'heel looked over his shoulder, stone-faced, then winked at Nissa, and went back to tying knots and coiling rope. Once everyone was secured, they moved into the gravity fields. Nissa could feel her body as it lurched and lifted, buffeted by the powerful gravity waves, until they finally went airborne.
Nissa felt stiff and awkward, like a newborn gladehart, as she watched Khali float up and hook a line to a passing boulder, her movements smooth and relaxed. The party's anchor-lines went taut as, one by one, they all lifted off, held to the rock face by the slender but strong Kor ropes.

Khali led them through the maze of floating boulders. Nissa watched the brother and sister communicate with a series of intricate hand gestures as they made their way deeper into the gravity field. Dozens of massive boulders moved in apparent chaos as they collided and rolled. Khali and Sha'heel moved their party along, picking through the hazards with ease. Their ropes were like magical extensions of their arms, as they flicked out lines to catch passing rock faces or swung onto new boulders as they floated by. Nissa had seen Kor operate from a distance, but she never appreciated their skill and knowledge of Zendikar's pulse and flow until then.

As the sun reached its zenith, they paused, suspended on ropes between three boulders that floated over a canyon. Alira mentioned how silent and peaceful it was a moment before a scream echoed around them. Nissa swiveled her head to find its location but the sound reflected off of the rock; she couldn't get a bearing until Khali whistled and pointed down. Far below, Nissa could see a Pathrazer—a spawn of the titan Ulamog—emerging from under the earth before a scattered group of adventurers. Nissa couldn't tell whether they were humans, elves, or kor, but it didn't matter. Her eyes were on the Eldrazi and she wanted it dead.

#figure(image("002_Nissa, Worldwaker/05.jpg", width: 100%), caption: [Pathrazer of Ulamog | Art by <NAME>], supplement: none, numbering: none)

Nissa looked over her shoulder to see Sha'heel pull out a knife and cut his anchor lines with one smooth motion. Nissa heard the ropes pop as Sha'heel plunged toward the ground in a free fall.

"That bastard's mine, Sha'heel!"
Nissa yelled as she pulled at her buckles and wriggled out of her harness. Energy surged within Nissa and vines writhed from the side of the boulder in an explosion of growth and sunk their roots deep into the rock. Nissa willed them to her and rode the tangle of vines down to the ground. Below Nissa, Sha'heel had opened up a kitesail and sped toward the Eldrazi, a hook-line in his hand, as the Eldrazi emerged in a cloud of dust and rock. Ten travelers were strewn about. Some had been hurled to the ground when the Pathrazer lifted an immense chunk of earth skyward. Others stood awestruck. Some bolted in panic. Before anyone could act, the Pathrazer had grabbed several of the expedition and crushed them in its tentacled hand. Sha'heel swooped low and set an anchor hook into the Pathrazer's rubbery flesh. The Eldrazi swatted at him as Sha'heel executed a series of tight spirals and let out a tangling line of rope. Then Khali streaked across the Eldrazi's legs and trailed another tangling line. The Eldrazi swiped at the Kor as the remaining expedition warriors shot arrows in blind hope of hitting something vital. "This isn't going to hold it, Joraga!" Khali yelled as she ditched her kitesail into some rocky cover and rolled out of sight. "Do something!" Nissa hit the ground running. She felt the power well up within her and a savage smile broke across her face. She was going to destroy that freak of nature. It was going to pay for everything. Twice. #figure(image("002_Nissa, Worldwaker/06.jpg", width: 100%), caption: [Nissa, Worldwaker | Art by <NAME>], supplement: none, numbering: none) The Kor's ropes popped as the Eldrazi freed itself, but all sounds were overwhelmed by the bone-shaking rumble of the land as it came to life. Green fire shot out from Nissa into the earth. She could feel her reservoir of power swelling and she emptied it all into her spell. 
A massive elemental emerged in several enormous chunks, tearing itself from the side of the canyon in a shower of earth. The Eldrazi turned, only to be grabbed by a massive hand made from rock, roots, and dirt. Nissa pushed every ounce of energy into the elemental and, as it squeezed the Eldrazi, she crushed the pain of her past along with it. Nissa gritted her teeth. No more would she allow such monsters to exist. No more would she be a bystander on this plane—on any plane. Zendikar flowed into her being and its power focused her will.

The Eldrazi struggled and clawed at the earthen colossus but the elemental only tightened its grip. Nissa willed two other elementals to emerge in a thunder of earth. The towering behemoths closed in on the writhing Eldrazi as it emitted a burst of staccato sounds that tore through the air, but its sounds were soon cut off as the other two hulks bludgeoned the Pathrazer into a wriggling mass of unrecognizable flesh.

#figure(image("002_Nissa, Worldwaker/07.jpg", width: 100%), caption: [Stirring Wildwood | Art by Eric Deschamps], supplement: none, numbering: none)

As the dust settled around them, Sha'heel looked at Nissa. "Whoa."

#v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em)

Khali, Alira, and Bahkut attended to the surviving expedition members while Sha'heel inspected the massive corpse of the Pathrazer. Hamadi clambered down the long trail of vines, and Nissa met him at the bottom.

"I knew the land spoke to you," Hamadi said with a laugh. "But I never thought it #emph[roared] to you!"

"I can't lie, Hamadi," Nissa said, as she wiped the sweat from her brow. "That felt #emph[really] good."

"Zendikar wants to be free of these things. It gives you its power like no other I have seen." Hamadi clapped Nissa on the shoulder. "It has chosen you, Shaya."

"Are you going to tell me what that means?" Nissa asked.

Hamadi smiled at her.
"It means 'Worldwaker.'" #figure(image("002_Nissa, Worldwaker/08.png", height: 40%), caption: [], supplement: none, numbering: none)
https://github.com/imkochelorov/ITMO
https://raw.githubusercontent.com/imkochelorov/ITMO/main/src/algorithms/s2/l3/main.typ
typst
#import "template.typ": *
#set page(margin: 0.4in)
#set par(leading: 0.55em, first-line-indent: 0em)
#set text(font: "New Computer Modern")
#show par: set block(spacing: 0.55em)
#show heading: set block(above: 1.4em, below: 1em)
#show heading.where(level: 1): set align(center)
#show heading.where(level: 1): set text(1.44em)
#set page(height: auto)
#set page(numbering: none)

#show: project.with(
  title: "Algorithms and Data Structures. Lecture 3",
  authors: (
    "_scarleteagle",
    "imkochelorov"
  ),
  date: "21.02.2024",
)

#set raw(tab-size: 4)
#set raw(lang: "py")

= Binary search tree (BST)

*Introduction:*\
Let us implement the set data structure:
- `insert(x)` --- add an element to the set if it was not there before
- `find(x)` --- check whether the number is in the set
- `remove(x)` --- delete an element from the set if it is present

*Tree invariant:*\
#columns(2)[
#align(center)[#image("1.png", width: 40%)
$forall y in L quad y < x$\
$forall y in R quad y > x$ \
]
#colbreak()
#align(center)[#image("2.png", width: 70%)
_An example of a binary search tree_]]

*Implementation of the operations:*
#columns(2)[
#align(center)[#image("4.png", width: 70%)
_`insert(7)` running on the example tree_]
#colbreak()
#align(center)[#image("3.png", width: 100%)
_The 3 cases of `remove(x)`_]]

A visualization of `find(x)` seems self-evident

```
class Node:
    def __init__(self, key: int):
        self.key = key
        self.l = None
        self.r = None
```

`find(x)`: obvious\
`insert(x)`: the visual description suffices\
`remove(x)`:
- If `x` is a leaf, just delete `x`
- If `x` has one child, replace `x` with that child
- If `x` has two children, take the largest vertex of the left subtree (go into the left child, then right all the way down), move it into the place of `x`, and delete it at its original position by one of the two previous cases

Complexity of the operations: $O(h)$, where $h$ is the height of the tree.\
Which is sad, because in the worst case we get a "bamboo" tree whose height equals the total number of elements

// "if a vertex has 2 children, everything is bad"
// "what letter comes after y? let it be t."

== Balanced binary tree. AVL tree

_by_ <NAME> (1962)\
#v(0.2cm)
A balanced binary tree --- $h = O(log n)$\
#v(0.2cm)
*Invariant:* $forall v quad abs(h(v.l) - h(v.r)) <= 1$
#v(0.2cm)
#columns(2)[
Let $f(h)$ be the minimum possible number of vertices in a tree of height $h$\
#v(0.2cm)
$f(0) = 0$\
#v(0.2cm)
$f(1) = 1$\
#v(0.2cm)
$f(h) = f(h - 1) + f(h - 2) + 1$\
#v(0.2cm)
$f(h) >= F_h tilde phi^h$\
#v(0.2cm)
$f(h) = Omega(phi^h)$ \
#v(0.2cm)
$h = O(log n)$
#colbreak()
#align(center)[#image("5.png", width: 40%)
_$h(v)$ is counted including both the start vertex and the end vertex of the subtree_]
]

_Rotating an edge_:\
#columns(2)[
```
def rotateRight(v, p):
    A = v.l
    B = v.r
    C = p.r
    par = p.p
    p.l = B
    B.p = p
    v.r = p
    p.p = v
    v.p = par
    if par.l == p:
        par.l = v
    else:
        par.r = v
```
$h(v) = max(h(v.l), h(v.r)) + 1$
#colbreak()
#align(center)[#image("6.png", width: 100%)
_Visualization of a vertex rotation_]
]
\
Vertex rotations are enough to maintain the AVL tree invariant. To implement this, we look at the vertex where the invariant broke \
After an insertion or a deletion, walk bottom-up and check the invariant at each vertex. Let $p$ be the first vertex where the invariant is broken:\
#align(center)[#image("7.png", width: 30%)
$h(v) - h(A) = 2$]

_Consider the cases after an insertion into the tree:_\
#columns(3)[
*1)* $h(u) = h(D) = h-2$\
Rotate $(p, v)$\
#align(center)[#image("8.png", width: 90%)]
#colbreak()
*2)* $h(u) = h-2, space h(D) = h-3$\
Rotate $(u, v)$\
Rotate $(p, u)$
#align(center)[#image("9.png", width: 90%)]
#colbreak()
*3)* $h(u) = h-3, space h(D) = h-2$\
Rotate $(p, v)$
]
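The three `remove(x)` cases described in the lecture map directly onto code. Below is a minimal runnable Python sketch of the whole `set` interface; it is my own illustration, not from the lecture, and for the two-child case it takes the largest key of the left subtree (the predecessor):

```python
class Node:
    """A vertex of an (unbalanced) binary search tree."""
    def __init__(self, key):
        self.key = key
        self.l = None
        self.r = None

def find(v, x):
    # Walk down, choosing a side by the BST invariant: L < key < R.
    while v is not None and v.key != x:
        v = v.l if x < v.key else v.r
    return v is not None

def insert(v, x):
    # Returns the new root of the subtree; duplicates are ignored.
    if v is None:
        return Node(x)
    if x < v.key:
        v.l = insert(v.l, x)
    elif x > v.key:
        v.r = insert(v.r, x)
    return v

def remove(v, x):
    if v is None:
        return None
    if x < v.key:
        v.l = remove(v.l, x)
    elif x > v.key:
        v.r = remove(v.r, x)
    elif v.l is None:              # cases 1 and 2: at most one child
        return v.r
    elif v.r is None:
        return v.l
    else:                          # case 3: two children
        u = v.l                    # largest vertex of the left subtree
        while u.r is not None:
            u = u.r
        v.key = u.key              # move it into x's place...
        v.l = remove(v.l, u.key)   # ...and delete it where it was
    return v

root = None
for x in (5, 3, 8, 1, 4, 7, 9):
    root = insert(root, x)
root = remove(root, 5)
```

All three operations run in $O(h)$; without rebalancing, $h$ can degenerate to the number of elements, which is exactly the "bamboo" worst case that motivates AVL trees below.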
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/leipzig-glossing/0.1.0/leipzig-gloss.typ
typst
Apache License 2.0
#let gloss_count = counter("gloss_count") #let build_gloss(spacing_between_items, formatters, gloss_line_lists) = { assert(gloss_line_lists.len() > 0, message: "Gloss line lists cannot be empty") let len = gloss_line_lists.at(0).len() for line in gloss_line_lists { assert(line.len() == len) } assert(formatters.len() == gloss_line_lists.len(), message: "The number of formatters and the number of gloss line lists should be equal") let make_item_box(..args) = { box(stack(dir: ttb, spacing: 0.5em, ..args)) } for item_index in range(0, len) { let args = () for (line_idx, formatter) in formatters.enumerate() { let formatter_fn = if formatter == none { (x) => x } else { formatter } let item = gloss_line_lists.at(line_idx).at(item_index) args.push(formatter_fn(item)) } make_item_box(..args) h(spacing_between_items) } } #let gloss( header_text: none, header_text_style: none, source_text: (), source_text_style: emph, transliteration: none, transliteration_style: none, morphemes: (), morphemes_style: none, additional_gloss_lines: (), //List of list of content translation: none, translation_style: none, spacing_between_items: 1em, gloss_padding: 2.0em, //TODO document these left_padding: 0.5em, numbering: false, breakable: false, ) = { assert(type(source_text) == "array", message: "source_text needs to be an array; perhaps you forgot to type `(` and `)`, or a trailing comma?") assert(type(morphemes) == "array", message: "morphemes needs to be an array; perhaps you forgot to type `(` and `)`, or a trailing comma?") assert(source_text.len() == morphemes.len(), message: "source_text and morphemes have different lengths") if transliteration != none { assert(transliteration.len() == source_text.len(), message: "source_text and transliteration have different lengths") } let gloss_items = { if header_text != none { if header_text_style != none { header_text_style(header_text) } else { header_text } linebreak() } let formatters = (source_text_style,) let gloss_line_lists = 
(source_text,) if transliteration != none { formatters.push(transliteration_style) gloss_line_lists.push(transliteration) } formatters.push(morphemes_style) gloss_line_lists.push(morphemes) for additional in additional_gloss_lines { formatters.push(none) //TODO fix this gloss_line_lists.push(additional) } build_gloss(spacing_between_items, formatters, gloss_line_lists) if translation != none { linebreak() if translation_style == none { ["#translation"] } else { translation_style(translation) } } } if numbering { gloss_count.step() } let gloss_number = if numbering { [(#gloss_count.display())] } else { none } style(styles => { block(breakable: breakable)[ #stack( dir:ltr, //TODO this needs to be more flexible left_padding, [#gloss_number], gloss_padding - left_padding - measure([#gloss_number],styles).width, [#gloss_items] ) ] } ) } #let numbered_gloss = gloss.with(numbering: true)
https://github.com/Skimmeroni/Appunti
https://raw.githubusercontent.com/Skimmeroni/Appunti/main/Metodi%20Algebrici/Strutture/Gruppi.typ
typst
Creative Commons Zero v1.0 Universal
#import "../Metodi_defs.typ": *

A monoid $(G, *)$ is called a *group* if the operation $*$ defined on the set $G$ admits an inverse for every element of $G$. A group $(G, *)$ in which $*$ is also commutative is called an *abelian group*.

#example[
- The pair $(QQ, dot)$ is a semigroup and a monoid, but not a group. This is because $0$ has no inverse with respect to $dot$ (it would require dividing by $0$, which is not possible);
- The pair $(QQ - {0}, dot)$ is, for the same reasons as $(QQ, dot)$, both a semigroup and a monoid. It is, however, also a group, because for every $a in QQ - {0}$ there always exists $frac(1, a) in QQ - {0}$ such that $a dot frac(1, a) = frac(1, a) dot a = 1$;
- Let $"GL"(n, RR) = {A in "Mat"(n, RR): det(A) != 0}$ be the set of all square matrices of size $n$ with nonzero determinant. The algebraic structure $("GL"(n, RR), dot)$, where $dot$ denotes the matrix product, is a semigroup, since the matrix product is associative. It is also a monoid, since the matrix product admits an identity element, namely the identity matrix. Finally, it is also a group, since the matrix product admits an inverse in the form of the inverse matrix (which exists for every matrix with nonzero determinant, so by definition every matrix in $"GL"(n, RR)$ has an inverse). The group $("GL"(n, RR), dot)$ is called the *general linear group*.
]

#lemma[
Let $(G, *)$ be a group. For any $x, y, z in G$:
- Uniqueness of the inverse: $exists! x^(-1) : x * x^(-1) = 1$;
- Uniqueness of the identity element: $exists! 1_(*) : x * 1_(*) = x$;
- Cancellation law (on the left): $x * y = x * z => y = z$;
- Cancellation law (on the right): $y * x = z * x => y = z$.
]

Given a group $(G, *)$, the algebraic structure $(H, *)$ is called a *subgroup* of $(G, *)$ if $H$ is a (possibly improper) subset of $G$ and $(H, *)$ is itself a group. In other words, $(H, *)$ is a subgroup of $(G, *)$ if:
- The identity element of $(G, *)$ belongs to $H$;
- The set $H$ is _closed_ under the operation $*$, that is, $forall h, k in H$ we have $h * k in H$;
- $forall h in H$, the inverse $h^(-1)$ of $h$ is itself a member of $H$.

To denote that $(H, *)$ is a subgroup of $(G, *)$ one writes $(H, *) lt.eq (G, *)$. If $(H, *)$ is a subgroup of $(G, *)$ and $H != G$, one says that $(H, *)$ is a *proper subgroup* of $(G, *)$, written $(H, *) < (G, *)$. Note that the notations $<$ and $lt.eq$ have nothing to do with the order relations "less than" and "less than or equal to" on numbers, nor do they refer to the cardinality of the underlying sets of the groups: it is entirely possible for a group to be a subgroup of another while both have the same cardinality.

#example[
It was proved in @Some-groups that the algebraic structure $("GL"(n, RR), dot)$ is a group. Let $"SL"(n, RR) = {A in "Mat"(n, RR): det(A) = 1}$ be the set of all square matrices of size $n$ whose determinant equals $1$. Naturally, $"SL"(n, RR)$ is a subset of $"GL"(n, RR)$, because if a matrix has determinant $1$ then that determinant is obviously different from $0$. Moreover, the identity element of $("GL"(n, RR), dot)$ is the identity matrix of size $n$, which has determinant $1$ and is therefore a member of $"SL"(n, RR)$. Furthermore, since for any $A, B in "SL"(n, RR)$ we have $det(A) = det(B) = 1$, the matrix obtained as their product also has determinant $1$, because the determinant is multiplicative and therefore $det(A B) = det(A) det(B) = 1 dot 1 = 1$.
Finally, if $A$ is a matrix with determinant $1$, its inverse also has determinant $1$. We can therefore conclude that $("SL"(n, RR), dot)$ is a subgroup of $("GL"(n, RR), dot)$. The (sub)group $("SL"(n, RR), dot)$ is called the *special linear group*.
]

#lemma[
Let $(G, *)$ be a group. The algebraic structure $(H, *)$ with $H subset.eq G$ is a subgroup of $(G, *)$ if and only if, for every pair of (not necessarily distinct) elements $h, k in H$, we have $h * k^(-1) in H$.
] <Is-a-subgroup>

#proof[
If $(H, *)$ is known to be a subgroup of $(G, *)$, then $(H, *)$ certainly satisfies the required property. Indeed, if $(H, *)$ is a group, then it is closed under $*$, so $forall h, k in H$ we have $h * k in H$. Moreover, $forall h in H, h^(-1) in H$, hence $k^(-1) in H$, and therefore $h * k^(-1) in H$ for every $h, k in H$.

Conversely, suppose $H$ is a subset of $G$ such that $forall h, k in H$ we have $h * k^(-1) in H$:
- If $h = k$, then, by the uniqueness of the inverse, $h^(-1) = k^(-1)$, and so $h * h^(-1) = h * k^(-1) = 1_(*)$, hence the identity element of $(G, *)$ belongs to $H$;
- If $h = 1_(*)$ (which is legitimate, having just shown that it belongs to $H$), then for any $k$ we have $1_(*) * k^(-1) in H$, and $1_(*) * k^(-1) = k^(-1)$ by definition of identity element. Hence $forall k in H$ we have $k^(-1) in H$;
- Let $h, k in H$. Having just proved that $k^(-1)$ belongs to $H$ for any $k in H$, we have $h * (k^(-1))^(-1) in H$, and $(k^(-1))^(-1) = k$, therefore $h * k in H$.

Hence $(H, *)$ satisfies the definition of subgroup, so $(H, *) lt.eq (G, *)$.
]

@Is-a-subgroup provides a criterion for determining whether, given two groups $(G, *)$ and $(H, *)$, $(H, *)$ is a subgroup of $(G, *)$.

#example[
It was proved in @Some-groups that the algebraic structure $(ZZ, +)$ is a group.
The algebraic structure $(n ZZ, +)$, where $n ZZ = {n k : k in ZZ}$ is the set of all (integer) multiples of $n$, is a subgroup of $(ZZ, +)$. Indeed, let $a$ and $b$ be two elements of $n ZZ$. We have:

$ a + (-b) = n k_(1) + (-n k_(2)) = n k_(1) - n k_(2) = n (k_(1) - k_(2)) $

Since $(k_(1) - k_(2)) in ZZ$, we have $n (k_(1) - k_(2)) in n ZZ$. Therefore, by @Is-a-subgroup, $(n ZZ, +)$ is a subgroup of $(ZZ, +)$. Note also that the two underlying sets, $ZZ$ and $n ZZ$, have the same cardinality.
] <nZ-is-subgroup-Z>

#lemma[
For any group $(G, *)$, the algebraic structures $(G, *)$ and $({1_(*)}, *)$ are subgroups of $(G, *)$.
]

#proof[
- The underlying set of the subgroup $(G, *)$ is the same set $G$ that appears in the group $(G, *)$ itself. Therefore, @Is-a-subgroup is certainly satisfied;
- The only element of the set ${1_(*)}$ of the algebraic structure $({1_(*)}, *)$ is precisely $1_(*)$. Regardless of how $*$ is defined, $1_(*)^(-1) = 1_(*)$, therefore $1_(*) * 1_(*)^(-1) = 1_(*)^(-1) * 1_(*) = 1_(*)$. Since $1_(*) in {1_(*)}$, @Is-a-subgroup is satisfied.
]

For any group $(G, *)$, the subgroup $(G, *)$ is called the *improper subgroup*, while the subgroup $({1_(*)}, *)$ is called the *trivial subgroup*.

#lemma[
For any group $(G, *)$, the intersection of several subgroups of $(G, *)$ is itself a subgroup of $(G, *)$.
]
// #proof[
// Provable, to be added
// ]

Let $(G, *)$ and $(K, diamond.small)$ be two groups. A function $phi.alt: G |-> K$ is called a *homomorphism* (from $G$ to $K$) if:

$ forall g_(1), g_(2) in G, space phi.alt(g_(1) * g_(2)) = phi.alt(g_(1)) diamond.small phi.alt(g_(2)) $

An injective homomorphism is called a *monomorphism*, a surjective homomorphism is called an *epimorphism*, a bijective homomorphism is called an *isomorphism*, and an isomorphism between two equal sets is called an *automorphism*.
If there exists (at least) one isomorphism between two groups $(G, *)$ and $(K, diamond.small)$, the two groups are said to be _isomorphic_, written $(G, *) tilde.eq (K, diamond.small)$.

#example[
- With respect to the groups $(RR^(+), dot)$ and $(RR, +)$, the function $f: RR^(+) |-> RR, f(x) = ln(x)$ is a homomorphism. Indeed:

$ ln(x_(1) dot x_(2)) = ln(x_(1)) + ln(x_(2)) $

Moreover, since $f$ is bijective, we have $(RR^(+), dot) tilde.eq (RR, +)$.
- With respect to the groups $(RR, dot)$ and $(RR, +)$, the function $f: RR |-> RR, f(x) = sin(x)$ is not a homomorphism. Indeed:

$ sin(x_(1) dot x_(2)) != sin(x_(1)) + sin(x_(2)) $

- With respect to the groups $(RR^(+), +)$ and $(RR, dot)$, the function $f: RR^(+) |-> RR, f(x) = ln(x)$ is not a homomorphism. Indeed:

$ ln(x_(1) + x_(2)) != ln(x_(1)) dot ln(x_(2)) $
] <ln-is-homomorphism>

#theorem[
Isomorphism of groups is an equivalence relation.
]

#proof[
To prove that isomorphism of groups is an equivalence relation, we must show that this relation is reflexive, symmetric, and transitive.

- Isomorphism of groups is reflexive if, for any group $(G, *)$, we have $(G, *) tilde.eq (G, *)$. Consider the identity function $id_(G) (x)$, defined as $f: G |-> G, f(x) = x$. Besides being bijective, this function is clearly a homomorphism from $(G, *)$ to $(G, *)$, since:

$ f(g_(1) * g_(2)) = f(g_(1)) * f(g_(2)) => g_(1) * g_(2) = g_(1) * g_(2) space forall g_(1), g_(2) in G $

Therefore, $id_(G)$ is an isomorphism from $(G, *)$ to $(G, *)$, hence $(G, *) tilde.eq (G, *)$;
- Isomorphism of groups is symmetric if, for any pair of groups $(G, *)$ and $(K, diamond.small)$, $(G, *) tilde.eq (K, diamond.small)$ implies $(K, diamond.small) tilde.eq (G, *)$. If $(G, *)$ and $(K, diamond.small)$ are isomorphic, then by definition there exists (at least) one isomorphism from $(G, *)$ to $(K, diamond.small)$, say $phi.alt: G |-> K$.
Being an isomorphism, and hence a bijective function, $phi.alt$ certainly has an inverse function $phi.alt^(-1): K |-> G$. Besides being bijective in turn, this function is also a homomorphism from $(K, diamond.small)$ to $(G, *)$, since, writing $g_(i) = phi.alt^(-1)(k_(i))$:

$ phi.alt^(-1)(k_(1) diamond.small k_(2)) = phi.alt^(-1)(phi.alt(g_(1)) diamond.small phi.alt(g_(2))) = phi.alt^(-1)(phi.alt(g_(1) * g_(2))) = g_(1) * g_(2) = phi.alt^(-1)(k_(1)) * phi.alt^(-1)(k_(2)) space forall k_(1), k_(2) in K $

Therefore, $phi.alt^(-1)$ is an isomorphism from $(K, diamond.small)$ to $(G, *)$, so if $(G, *) tilde.eq (K, diamond.small)$ holds, then $(K, diamond.small) tilde.eq (G, *)$ holds as well;
- Isomorphism of groups is transitive if, for any triple of groups $(G, *)$, $(K, diamond.small)$ and $(H, dot.circle)$, $(G, *) tilde.eq (K, diamond.small)$ and $(K, diamond.small) tilde.eq (H, dot.circle)$ imply $(G, *) tilde.eq (H, dot.circle)$. If $(G, *)$ and $(K, diamond.small)$ are isomorphic, then by definition there exists (at least) one isomorphism from $(G, *)$ to $(K, diamond.small)$, say $f: G |-> K$. Likewise, if $(K, diamond.small)$ and $(H, dot.circle)$ are isomorphic, then by definition there exists (at least) one isomorphism from $(K, diamond.small)$ to $(H, dot.circle)$, say $g: K |-> H$. Consider the composition of $f$ and $g$, namely $g compose f: G |-> H$. This function certainly exists, $f$ and $g$ being bijective as isomorphisms, and it is bijective in turn by @Composition-preserves-jection. Moreover, it is a homomorphism from $(G, *)$ to $(H, dot.circle)$, since:

$ (g compose f)(g_(1) * g_(2)) = g(f(g_(1) * g_(2))) = g(f(g_(1)) diamond.small f(g_(2))) = g(f(g_(1))) dot.circle g(f(g_(2))) = (g compose f)(g_(1)) dot.circle (g compose f)(g_(2)) space forall g_(1), g_(2) in G $

Therefore, $g compose f$ is an isomorphism from $(G, *)$ to $(H, dot.circle)$, so if $(G, *) tilde.eq (K, diamond.small)$ and $(K, diamond.small) tilde.eq (H, dot.circle)$ hold, then $(G, *) tilde.eq (H, dot.circle)$ holds as well.
]

Let $phi.alt: G |-> K$ be a homomorphism between the groups $(G, *)$ and $(K, diamond.small)$. The subset of $G$ defined as follows is called the *kernel* of $phi.alt$, denoted $ker(phi.alt)$:

$ ker(phi.alt) = {g in G: phi.alt(g) = 1_(diamond.small)} $

The subset of $K$ defined as follows is called the *image* of $phi.alt$, denoted $Im(phi.alt)$:

$ Im(phi.alt) = {k in K: exists g in G, phi.alt(g) = k} $

#example[
As shown in @ln-is-homomorphism, the function $f: RR^(+) |-> RR, f(x) = ln(x)$ is a homomorphism between the groups $(RR^(+), dot)$ and $(RR, +)$. Since $0$ is the identity element of addition in $RR$, the kernel of $f$ is the set of all $x in RR^(+)$ such that $ln(x) = 0$. The only value satisfying this equation is $1$, so $ker(f) = {1}$. The image of $f$ is the set of all $y in RR$ such that $y = ln(x)$. Since the natural logarithm is a surjective function, $Im(f) = RR$.
]

#lemma[
Let $phi.alt: G |-> K$ be a homomorphism between the groups $(G, *)$ and $(K, diamond.small)$. The algebraic structure $(ker(phi.alt), *)$ is a subgroup of $(G, *)$.
] <Kernel-is-subgroup>
// #proof[
// Provable, to be added
// ]

#lemma[
Let $phi.alt: G |-> K$ be a homomorphism between the groups $(G, *)$ and $(K, diamond.small)$. The algebraic structure $(Im(phi.alt), diamond.small)$ is a subgroup of $(K, diamond.small)$.
]
// #proof[
// Provable, to be added
// ]
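On a finite group, the subgroup criterion ($h * k^(-1) in H$ for all $h, k in H$) can be verified by brute force. Below is a minimal Python sketch; the choice of the additive group $ZZ$ mod $12$ and the subsets tested are my own illustration, not from the text:

```python
# Brute-force subgroup test in the additive group Z/nZ, using the
# criterion "h * k^(-1) in H for all h, k in H".
# In (Z/nZ, +) the inverse of k is -k mod n, so h * k^(-1) is (h - k) mod n.
def is_subgroup(H, n):
    # A subgroup is nonempty (it contains the identity), so exclude H = {}.
    return bool(H) and all((h - k) % n in H for h in H for k in H)

n = 12
H = {x for x in range(n) if x % 3 == 0}   # multiples of 3: {0, 3, 6, 9}
K = {0, 1, 3}                             # not closed: (1 - 3) % 12 = 10 is not in K
```

Note the explicit nonemptiness check: the criterion quantifies over elements of $H$, so it is vacuously true for the empty set, which is not a subgroup.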
https://github.com/gvallinder/KTHThesis_Typst
https://raw.githubusercontent.com/gvallinder/KTHThesis_Typst/main/main.typ
typst
MIT License
#import "Template/kth_thesis.typ": kth_thesis #show: doc => kth_thesis( title: "Your Thesis Title:", subtitle: [Subtitle if Your \ Main Title Wasn't Enough], author: "<NAME>", degree: "Doctoral Thesis in Electrical Engineering", add_dummy_titlepage: false, doc ) // Abstract #include "abstract.typ" // Preface #include "preface.typ" // Paperlist #include "paperlist.typ" // Table of content #pagebreak(to: "odd") #outline(depth: 3, indent: auto) // List of Tables #pagebreak(to: "odd") #outline( title: [List of Tables], target: figure.where(kind: table)) // List of Figures #pagebreak(to: "odd") #outline( title: [List of Figures], target: figure.where(kind: image), ) #set heading(numbering: "1.1") // Chapter 1 #include "Chapter1.typ" // Chapter 2 #include "Chapter2.typ" // References #bibliography("references.bib", title: [References], style: "ieee")
https://github.com/rabotaem-incorporated/algebra-conspect-1course
https://raw.githubusercontent.com/rabotaem-incorporated/algebra-conspect-1course/master/sections/02-complex-numbers/03-complex-roots.typ
typst
Other
#import "../../utils/core.typ": *

== Roots of complex numbers

Consider the equation $z^n = w, space n in NN, space w in CC$.

#th[
    Let $n in NN, space w in CC$.
    + If $w = 0$, the equation $z^n = w$ has the unique root $z = 0$.
    + If $w eq.not 0$, the equation $z^n = w$ has exactly $n$ distinct roots:
      $ z_k = root(n, r) (cos((phi + 2 pi k)/n) + i sin((phi + 2 pi k)/n)), space k = 0, 1, ..., n-1 $
]

#proof[
    + $w = 0 ==> z = 0$
    + #[
        $w eq.not 0 ==> display(cases(
            w = r (cos phi + i sin phi) : quad r > 0\, space phi in RR,
            z = p (cos alpha + i sin alpha) : quad p > 0\, space alpha in RR
        ))$

        $z^n = w <==> p^n (cos n alpha + i sin n alpha) = r (cos phi + i sin phi) <==>$

        $display(cases(
            p^n = r,
            n alpha = phi + 2 pi k\, space k in ZZ
        )) <==> display(cases(
            p = root(n, r),
            alpha = (phi + 2 pi k)/(n)\, space k in ZZ
        ))$

        $z^n = w <==> z = underbrace(root(n, r) (cos((phi + 2 pi k)/n) + i sin((phi + 2 pi k)/n)), z_k), space k in ZZ$

        For which $k, space l$ is $z_k = z_l$?

        $z_k = z_l <==> (phi + 2 pi k)/(n) = (phi + 2 pi l)/(n) + 2 pi s, space s in ZZ <==>$

        $(k)/(n) = (l)/(n) + s, space s in ZZ <==> k = l + n s, space s in ZZ <==>$

        $k equiv_(n) l <==> z in \{z_0, z_1, ..., z_(n-1)\}$
    ]
]

*Picture on the circle*

#align(center)[
    #image("../../images/roots.svg", width: 40%)
]

The complex roots form a regular $n$-gon on the circle.

#lemma[
    Let $z_0, z_1, ..., z_(n - 1)$ be all the roots of $z^n = w, space n > 1$.

    Then $z_0 + z_1 + ... + z_(n - 1) = 0$.
]

#proof[
    Note that $z_k = z_(k - 1) underbrace((cos((2 pi)/n) + i sin((2 pi)/n)), xi)$, hence $z_k = z_0 dot xi^k$.

    Write $S = z_0 + z_1 + ... + z_(n - 1)$; then $xi dot S = z_1 + z_2 + ... + underbrace(z_n, = z_0) = S ==> xi S = S ==> (xi - 1) S = 0$.

    Since $n > 1 ==> xi eq.not 1$, it follows that $(xi - 1) S = 0 ==> S = 0$.
]

#def[
    A _group_ is a set $G$ with an operation $*: G times G -> G$ such that:
    + $*$ is associative: $(a * b) * c = a * (b * c)$
    + There is a neutral element $e in G$ such that $a * e = e * a = a$ for every $a in G$
    + Every element $a in G$ has an inverse $a^(-1) in G$ such that $a space * space a^(-1) = a^(-1) * a = e$
]

#examples[
    + $(ZZ, +)$
    + $((factor(ZZ, n ZZ))^*, dot)$
    + If $R$ is an associative ring with $1$, then $R^* = \{r divides exists s in R : r s = s r = 1\}$ is a group under multiplication.
]

#pr[
    $mu_n = \{z in CC divides z^n = 1\} = \{underbrace(cos (2 pi k)/(n) + i sin (2 pi k)/(n), xi_k) divides k = 0, 1, ..., n-1\}$ is a group under multiplication.
]

#proof[
    - Associativity: inherited from associativity in $CC$
    - $1 in mu_n space (1 = xi_0)$
    - $xi_k dot xi_(-k) = (cos((2 pi k)/n) + i sin((2 pi k)/n)) (cos((2 pi (-k))/n) + i sin((2 pi (-k))/n)) = 1$
]

#lemma[
    $xi_k = xi_1^k$
]

#proof[
    $(cos((2 pi)/n) + i sin((2 pi)/n))^k = cos((2 pi k)/n) + i sin((2 pi k)/n)$ (by De Moivre's formula)
]

#def[
    Let $G$ be a group with operation $*$, $g in G, space n in ZZ$; then:

    $g^n = display(cases(
        g space * space g space * space ... space * space g\, & n > 0,
        e\, & n = 0,
        g^(-1) * g^(-1) * space ... space * g^(-1)\, & n < 0
    ))$
]

#def[
    A group $G$ is called _cyclic_ if $exists g in G : space G = \{g^n divides n in ZZ\}$.

    One writes: $G = angle.l g angle.r$
]

#def[
    $g$ is a _generator_ of the group $G$
]

#examples[
    - $ZZ = angle.l 1 angle.r = angle.l -1 angle.r$ (under addition)

      $g^n = display(cases(
          1 + 1 + ... + 1 & n > 0,
          0 & n = 0,
          -1 + -1 + ... + -1 quad & n < 0
      ))$
    - $factor(ZZ, 5 ZZ) = angle.l overline(1) angle.r = angle.l overline(2) angle.r = angle.l overline(3) angle.r = angle.l overline(4) angle.r$ (under addition)
    - $factor(ZZ, 6 ZZ) = angle.l overline(1) angle.r = angle.l overline(5) angle.r$ (under addition)
    - $(factor(ZZ, 5 ZZ))^* = angle.l overline(2) angle.r = angle.l overline(3) angle.r$ (under multiplication)
    - $(factor(ZZ, 8 ZZ))^*$ is not a cyclic group:

      $g^2 = e ==> g^(2 k) = e, space g^(2 k + 1) = g$
]

#def[
    Let $G$ be a group, $g in G$.

    If $forall n in NN: g^n eq.not e$, one says that $g$ has _infinite order_.

    If $exists n in NN: g^n = e$, the smallest such $n$ is called the _order_ of $g$ (written: $ord g = n$).
]

#example[
    $(factor(ZZ, 5 ZZ))^*$:

    $ord overline(1) = 1$

    $ord overline(2) = 4$

    $ord overline(3) = 4$

    $ord overline(4) = 2$
]

#pr[
    Let $G$ be a finite group, $abs(G) = n, space g in G$. Then:

    $G = angle.l g angle.r <==> ord g = n$
]

#proof[\
    "$arrow.r.double$": $exists k, space l in \{0, 1, ..., n\}, space k eq.not l: space g^k = g^l$

    For $k < l$: $g^(-k) dot g^k = g^(-k) dot g^l ==> g^(l - k) = e$, where $0 < l - k <= n$.

    Thus the order of $g$ is at most $n$.

    Suppose $ord g = m < n$. Then $G = \{ g^k divides k in ZZ \} = \{ g^(m q + r) divides q in ZZ, space 0 <= r < m \} = \{g^0, g^1, ..., g^(m-1)\}$, a contradiction, since $abs(G) <= m < n$ while we know $abs(G) = n$.

    "$arrow.l.double$": $ord g = n$

    $==> g^0, g^1, g^2, ..., g^(n - 1)$ are pairwise distinct

    $==> \{g^0, g^1, ..., g^(n - 1)\} = G$

    $==> G = angle.l g angle.r$
]

#def[
    A _primitive $n$-th root_ of $1$ is an element $z in CC^*$ such that $ord z = n$
]

#example[
    $mu_6 = \{1, xi_1, xi_2, xi_3, xi_4, xi_5\}$

    $ord 1 = 1$, $ord xi_1 = 6$, $ord xi_2 = 3$, $ord xi_3 = 2$, $ord xi_4 = 3$, $ord xi_5 = 6$

    $xi_2$ is a primitive root of $1$ of degree $3$
]
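The statements above can be checked numerically. The following Python sketch is my own illustration (not part of the notes); the function name `nth_roots` is hypothetical. It builds the roots via the polar formula from the theorem and verifies the two lemmas (the roots sum to zero, and $xi_k = xi_1^k$) for $mu_6$:

```python
import cmath

def nth_roots(n, w):
    """All n distinct solutions of z**n = w (for w != 0), via the polar form
    z_k = r**(1/n) * (cos((phi + 2*pi*k)/n) + i*sin((phi + 2*pi*k)/n))."""
    r, phi = abs(w), cmath.phase(w)
    return [r ** (1 / n) * cmath.exp(1j * (phi + 2 * cmath.pi * k) / n)
            for k in range(n)]

roots = nth_roots(6, 1)  # the 6th roots of unity, mu_6
xi1 = roots[1]

# each root really solves z**6 = 1
assert all(abs(z ** 6 - 1) < 1e-9 for z in roots)
# the roots sum to zero (first lemma), and xi_k = xi_1**k (second lemma)
assert abs(sum(roots)) < 1e-9
assert all(abs(roots[k] - xi1 ** k) < 1e-9 for k in range(6))
# orders in mu_6: ord(xi_2) = 3, i.e. xi_2 is a primitive 3rd root of unity
ord_xi2 = min(m for m in range(1, 7) if abs(roots[2] ** m - 1) < 1e-9)
assert ord_xi2 == 3
```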
https://github.com/danisltpi/seminar
https://raw.githubusercontent.com/danisltpi/seminar/main/notes.md
markdown
# Fibonacci Heaps

## Todo

- [x] setup writing
- [x] setup typst / latex
- [x] setup zotero for literature
- [x] setup presentation
- collect literature
  - [x] <NAME>, <NAME>: Fibonacci heaps and their uses in improved network optimization algorithms. Journal of the ACM (JACM). Volume 34, Issue 3, July 1987 (Pages 596-615).
  - [x] clrs
- research
  - read literature
  - [x] read grokking algorithms
- understand the algorithm
  - try the algorithm on paper

## Structure

1. Introduction
   - Introduction to the topic: why are data structures like heaps important?
   - Motivation: what are Fibonacci heaps used for, and what is their benefit?
   - Goal of the paper: what should it convey?
2. Fundamentals
   - Data structures and heaps: a short introduction to heaps (binary heaps, binomial heaps, etc.).
   - Fibonacci sequence: explanation of the Fibonacci sequence and its significance for Fibonacci heaps.
   - Problem setting: typical problems solved with heaps (e.g. Dijkstra's algorithm, Prim's algorithm).
3. Fibonacci heap: definition and structure
   - Definition: what is a Fibonacci heap?
   - Structure: root list, rank concept, node structure (e.g. marked and unmarked nodes)
   - Peculiarities compared to other heaps: what distinguishes the Fibonacci heap from binary or binomial heaps?
4. Operations on Fibonacci heaps
   - Insert: inserting a new element.
   - Find Minimum: finding the smallest element.
   - Extract Minimum: removing the smallest element.
   - Union: merging two heaps.
   - Decrease Key: decreasing the key of a node.
   - Delete: deleting a node.
   - Runtime analysis of the operations: amortized running times for each operation.
5. Amortized analysis
   - Amortized cost: definition and relevance for Fibonacci heaps.
   - Potential function: explanation of the potential method for analyzing the operations.
   - Runtime complexities: comparison of amortized and actual running times for the various operations.
6. Applications of Fibonacci heaps
   - Graph algorithms: Prim's algorithm, Dijkstra's algorithm.
   - Other algorithms: further practical uses of Fibonacci heaps.
7. Comparison with other heaps
   - Binomial heaps vs. Fibonacci heaps: differences in structure and running times.
   - Binary heaps vs. Fibonacci heaps: when is a Fibonacci heap advantageous?
   - Pros and cons: when should Fibonacci heaps be used?
8. Conclusion and outlook
   - Summary: recap of the most important points.
   - Outlook: possible further developments and future research topics.
9. Bibliography
   - List of all sources used.

## Guide

1. Research and gathering information
   - Step 1: read foundational literature [current]
     - Start with an overview of basic data structures and heaps in algorithms textbooks (e.g. "Introduction to Algorithms" by Cormen et al.).
     - Look for articles and lecture videos on Fibonacci heaps (YouTube, Coursera, etc.).
     - Use academic databases (e.g. Google Scholar, SpringerLink, IEEE Xplore) for research papers and in-depth material.
   - Step 2: Fibonacci heaps in detail
     - Read detailed explanations of Fibonacci heaps. Find resources that explain the structure, the operations, and the amortized analysis.
     - Make sure you also understand the mathematical concepts (such as amortized analysis and the potential function), since they are important for the topic.
   - Step 3: examples and applications
     - Study how Fibonacci heaps are applied in algorithms such as Dijkstra's or Prim's. Understand exactly which advantages they offer over other heaps.
     - Find code examples and implementations of Fibonacci heaps (e.g. on GitHub or in textbooks).
2. Writing the seminar paper
   - Step 1: plan the structure (see the outline above)
     - Plan the structure of your paper (title, introduction, main part, conclusion). Use the outline above as a template.
     - Begin with the chapters you feel most confident about (e.g. fundamentals or definitions) and work your way up to the more complex topics (e.g. amortized analysis).
   - Step 2: writing process
     - Write in stages: work through each section individually instead of writing everything at once. Start with a simple draft and extend it step by step.
     - Explain complex concepts simply: for a reader unfamiliar with the topic, clear and understandable explanations are crucial. Use examples and diagrams.
     - Support your claims: use citations and references to the literature to back up your explanations.
   - Step 3: revision and polish
     - Proofreading: read your paper several times to correct mistakes and improve the text.
     - Get feedback: have fellow students or a supervisor read the paper.
     - Add diagrams and figures: visualize processes and algorithms to improve readability.
3. Preparing the presentation
   - Step 1: presentation structure
     - Opening: begin with a general introduction to heaps and why Fibonacci heaps are useful.
     - Main part: explain the structure and operations of Fibonacci heaps. Use diagrams to illustrate the individual steps (e.g. Insert, Extract-Min, Union).
     - Closing: summarize the advantages of Fibonacci heaps and cover their use in well-known algorithms.
   - Step 2: slide design
     - Visualization: use diagrams and animations (where possible) to show processes and operations such as Extract-Min or Decrease-Key.
     - Text on slides: keep text short and concise. Use bullet points instead of long blocks of text.
     - Time management: split the presentation so that no single topic takes too long. A clear, structured flow is crucial.
   - Step 3: preparing to present
     - Practice: rehearse the presentation several times to make sure you speak fluently and can explain the key points without notes.
     - Anticipate questions: prepare for likely questions, for example which advantages Fibonacci heaps have over binary heaps, or what amortized cost means.
     - Test the technology: make sure the presentation works on the device used (projector, laptop, etc.).
4. Schedule and organization
   - Weeks 1-2: research — collect and read foundational and advanced literature on the topic.
   - Week 3: create the structure — draw up a detailed outline and plan the layout of the paper and the presentation.
   - Weeks 4-6: writing — write the seminar paper in stages and gather feedback.
   - Week 7: revision — finalize the paper and prepare it for submission.
   - Week 8: presentation preparation — create the slides and practice the talk.
   - Week 9: trial run — rehearse the presentation with a small group (e.g. fellow students) to get feedback.
## stuff

- The fastest implementations of Prim's and Kruskal's algorithms use Fibonacci heaps (skiena cited ft87/fredman)
- The fastest algorithm known for single-source shortest paths on graphs with positive edge weights is Dijkstra's algorithm with Fibonacci heaps, running in O(m + n log n) time [FT87]

## Heaps

- (binary) heap
  - != java heap
  - every element of the array is a node
  - nearly complete binary tree (full leaves on the bottom): every level is filled except the bottom-most, which is filled from the left
  - A.length gives the number of elements in the array
  - A.heap-size gives the number of elements from the array that the heap holds (total number of nodes)
  - A[1] is the root
  - parent(i) = floor(i/2)
  - left(i) = 2i
  - right(i) = 2i + 1
  - used in heapsort and priority queues
  - 2 types: max-heap, min-heap
    - max-heap: greatest element is the root (used for heapsort)
      - max-heap property: A[Parent(i)] >= A[i]
    - min-heap: smallest element is the root (used for priority queues)
      - min-heap property: A[Parent(i)] <= A[i]
  - height: O(log n) — the number of edges on the longest simple downward path from a node to a leaf

## Priority Queue

- max-priority queue
  - data structure for maintaining a set S of elements
  - every element has a value called a key
  - insert(S, x): inserts x into S
  - maximum(S): returns the element of S with the largest key: return A[1]
  - extract-max(S): removes and returns the element of S with the largest key
  - increase-key(S, x, k): increases the value of x's key to k, where k is assumed to be at least as large as x's current key
  - a handle is needed; the exact implementation depends on the application

## Mergeable Heap

- implemented by fibonacci heaps

![CLRS 19.1](2024-10-08-15-49-03.png)

- for union: a binary heap would concatenate the two arrays and run build-min-heap, which takes Theta(n) time
- better asymptotic time bounds than binary heaps for Insert, Union, and Decrease-Key; the rest are the same
- but these are amortized, not worst-case per-operation time bounds

## Amortized

- averages out the running time when there is a significant difference between expensive and cheap cases

## Theory and Practice

- a fibonacci heap is a good choice if we don't use many delete and extract-min operations
  - this happens often in applications
  - e.g. graph algorithms call decrease-key once per edge
  - that's very good for dense graphs, since each call takes Theta(1) amortized time instead of the Theta(lg n) worst-case time of a binary heap [clrs]
- from a practical viewpoint: not so good compared to ordinary binary/k-ary heaps because of
  - constant factors
  - programming complexity
  - but still good if the application works with a lot of data
- hence the fibonacci heap is mostly of theoretical interest
  - if there were a simpler data structure with the same amortized running times, it would be more practical
- fibonacci heaps and binary heaps are both inefficient at search
  - hence delete and decrease-key, which operate on a given element, need a pointer to that element as input

## Alternatives

- relaxed heaps from tarjan [https://dl.acm.org/doi/abs/10.1145/50087.50096, ]
- strict fibonacci heaps [https://dl.acm.org/doi/abs/10.1145/2213977.2214082, stoc12]
  - implementation matching the worst-case time bounds of fibonacci heaps using pointer-based heaps
  - key simplifications:
    - discard the structure of the smaller heap when melding
    - pigeonhole principle in place of a counter

## Structure of fibonacci heaps

- collection of rooted trees, min-heap ordered (could also be max-heap ordered, though)
  - min-heap property: A[parent] <= A[node]
- every node has a pointer to: parent (x.p), child (x.child), left and right siblings (x.left/x.right)
- child list: the children of x in a circular doubly linked list (a linked list with pointers to the previous and next node)
  - advantages:
    - insert a node into any location, or remove a node from anywhere in the circular linked list, in O(1) time
    - given two such lists, we can concatenate them into one in O(1) time
- x.degree is the number of children of x (the number of nodes in its child list)
- x.mark indicates whether x has lost a child since x was made a child of another node; newly created nodes are unmarked, and a node becomes unmarked whenever it is made the child of another node
- to access a fibonacci heap H, we keep a pointer H.min to the root of the tree containing the minimum key (called the minimum node)
  - if there are multiple nodes with the minimum key, any of them can serve as the minimum node
  - if H is empty, H.min is NIL
- root list: the circular doubly linked list of the roots of all trees in the fibonacci heap, linked via their left and right pointers, with the trees in arbitrary order
- H.n is the current number of nodes in the fibonacci heap H

## Runtime analysis

- via a potential function
- the potential is defined as $\Phi(H) = t(H) + 2m(H)$
  - $t(H)$: number of trees in H (the root list)
  - $m(H)$: number of marked nodes
- the potential of a collection of fibonacci heaps is the sum of the potentials of the individual heaps
- the potential can be thought of as a budget of prepaid work
- at the beginning the potential is 0
- for the potential method, an upper bound on the amortized times also gives an upper bound on the actual total running time
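Since the notes repeatedly compare Fibonacci heaps against the binary heaps behind ordinary priority queues, here is a small Python sketch of my own (not from the cited sources) of a `heapq`-based min-priority queue. It fakes decrease-key by lazy re-insertion: in a Fibonacci heap decrease-key is amortized O(1), while here every operation stays O(log n) and stale entries are skipped on extraction. The class name `LazyMinPQ` is hypothetical.

```python
import heapq

class LazyMinPQ:
    """Binary-heap priority queue with 'decrease-key' via lazy re-insertion.

    We re-push the entry with its new key and skip stale heap entries on
    extraction -- a common workaround for heapq's lack of decrease-key.
    """
    def __init__(self):
        self._heap = []   # (key, item) pairs, possibly stale
        self._best = {}   # item -> current best key

    def insert(self, item, key):
        self._best[item] = key
        heapq.heappush(self._heap, (key, item))

    def decrease_key(self, item, new_key):
        if new_key < self._best[item]:
            self._best[item] = new_key
            heapq.heappush(self._heap, (new_key, item))

    def extract_min(self):
        while self._heap:
            key, item = heapq.heappop(self._heap)
            if self._best.get(item) == key:   # skip stale entries
                del self._best[item]
                return item, key
        raise IndexError("empty priority queue")

pq = LazyMinPQ()
pq.insert("a", 5); pq.insert("b", 3); pq.insert("c", 8)
pq.decrease_key("c", 1)
assert pq.extract_min() == ("c", 1)
assert pq.extract_min() == ("b", 3)
assert pq.extract_min() == ("a", 5)
```

This is exactly the pattern graph algorithms rely on: Dijkstra's algorithm would call `decrease_key` once per relaxed edge, which is where a real Fibonacci heap's O(1) amortized decrease-key pays off on dense graphs.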
https://github.com/N3M0-dev/Notes
https://raw.githubusercontent.com/N3M0-dev/Notes/main/ECON/Principles_of_Finance/Note_PoF/Pt_1/pt1.typ
typst
#import "@local/note_template:0.0.1": *
#import "@local/tbl:0.0.4"

#set par(justify: true)
#set page(numbering: "1", number-align: center)
#set outline(indent: auto)
#set heading(numbering: "1.1")

#frontmatter(authors: ("Nemo",),
  // date: "2023 Oct 8 - "+str(datetime.today().display("[year] [month repr:short] [day padding:none]")),
  date: "2023 Oct 8 - 2023 Nov 9",
  title: "Part 1: Value"
)

#show : tbl.template.with(tab: "|")

#outline()
#pagebreak()

= Goals and Governance of the Firm

== Basics

Concepts involved:

- *Tangible assets*: Assets that have physical substance, e.g. plant and machinery.
- *Intangible assets*: Assets that lack physical substance, e.g. brand names and patents.
- *Opportunity cost of capital*: The potential benefits that an individual, investor, or business misses out on when choosing one alternative over another. To put it simply, if choosing one thing means you cannot choose another, the forgone thing is the opportunity cost.\
  *e.g.* The benefit that raising chickens would generate is the opportunity cost of raising pigs on the same land. What exactly is the cost? I already have the capital; because I want to raise pigs, I cannot raise chickens, so I cannot get the return from raising chickens. Suppose raising chickens has a return rate of 10%: if I invested all my capital in chickens, I would get 110% of my capital back. Because I have chosen to raise pigs instead, I forgo that 10% return, so the "cost" is 10% of my capital. In one sentence: doing something else costs me the chicken-raising return of 10% of capital.
- *Real assets*: A subset of tangible assets; physical assets that have an intrinsic worth due to their substance and properties.

== Intro

Basic goal of the corporation: INCREASE its value!

The financial questions the corporation's managers face:

+ What investments should the corporation make?
+ How should the investments be paid for? Borrow, retain earnings, or reinvest?

Three main themes of the chapter:

+ Maximizing value
+ The opportunity cost of capital
+ The crucial importance of incentives and governance

== Corporate Investment and Financing Decisions

#def((
  [Financial assets: A financial asset is a non-physical asset whose value is derived from a contractual claim, such as bank deposits, bonds, and participations in companies' share capital.],
  [Security: A security is a tradable financial asset.]))

#note(name: [Security])[
  The term commonly refers to any form of financial instrument, but its legal definition varies by jurisdiction. In some countries and languages people commonly use the term "security" to refer to any form of financial instrument, even though the underlying legal and regulatory regime may not have such a broad definition. In some jurisdictions the term specifically excludes financial instruments other than equities and fixed income instruments. In some jurisdictions it includes some instruments that are close to equities and fixed income, e.g., equity warrants.

  In the United Kingdom, the Financial Conduct Authority functions as the national competent authority for the regulation of financial markets; the definition in its Handbook of the term "security"[1] applies only to equities, debentures, alternative debentures, government and public securities, warrants, certificates representing certain securities, units, stakeholder pension schemes, personal pension schemes, rights to or interests in investments, and anything that may be admitted to the Official List.

  In the United States, a "security" is a tradable financial asset of any kind.[2] Securities can be broadly categorized into:
]

To carry on business, a corporation needs all kinds of real assets, which need to be paid for. To pay for them, the corporation sells claims on the assets and on the cash flow they will generate; these claims are called financial assets, or securities.

e.g.
Bank loan: The bank provides the corporation with cash, and the corporation promises (claims) to pay it back with interest.

So what was mentioned above suggests (roughly) the following:

+ Investment decision = management of real assets
+ Financing decision = trade of financial assets

=== Investment Decisions

#figure(
  image("Table_1.1.png", width: 80%)
)

The investment decisions are often referred to as capital budgeting or capital expenditure (CAPEX) decisions.

=== Financing Decisions

#term_box("Prerequisite")[
  Stock #text(style: "italic", fill: gray)[really fucking complicated ...]:

  Stocks (also capital stock, or sometimes interchangeably, shares) consist of all the shares by which ownership of a corporation or company is divided. A single share of the stock means fractional ownership of the corporation in proportion to the total number of shares. This typically entitles the shareholder (stockholder) to that fraction of the company's earnings, proceeds from liquidation of assets (after discharge of all senior claims such as secured and unsecured debt), or voting power, often dividing these up in proportion to the amount of money each stockholder has invested. Not all stock is necessarily equal, as certain classes of stock may be issued, for example, without voting rights, with enhanced voting rights, or with a certain priority to receive profits or liquidation proceeds before or after other classes of shareholders.

  Stock can be bought and sold privately or on stock exchanges. Such transactions are closely overseen by governments and regulatory bodies to prevent fraud, protect investors, and benefit the larger economy. The stocks are deposited with the depositories in the electronic format also known as Demat account. As new shares are issued by a company, the ownership and rights of existing shareholders are diluted in return for cash to sustain or grow the business.
Companies can also buy back stock, which often lets investors recoup the initial investment plus capital gains from subsequent rises in stock price. Stock options issued by many companies as part of employee compensation do not represent ownership, but represent the right to buy ownership at a future time at a specified price. This would represent a windfall to the employees if the option is exercised when the market price is higher than the promised price, since if they immediately sold the stock they would keep the difference (minus taxes). Stock bought and sold in private markets fall within the private equity realm of finance.
]

As shown in Table 1.1, a corporation can raise money from lenders or from shareholders. A corporation can issue bonds or borrow from a bank to raise money from lenders, or get cash from shareholders. The choice between debt and equity financing is called the capital structure decision.

=== What is a Corporation?

In brief, a corporation is a legal entity. In the eyes of the law, it is a legal person that is owned by its shareholders. (This concept seems not that important?)

The following is some introduction from Wikipedia:

#note(name: [Corporation])[
  A corporation is an organization—usually a group of people or a company—authorized by the state to act as a single entity (a legal entity recognized by private and public law as "born out of statute"; a legal person in a legal context) and recognized as such in law for certain purposes.[1]: 10  Early incorporated entities were established by charter (i.e., by an ad hoc act granted by a monarch or passed by a parliament or legislature). Most jurisdictions now allow the creation of new corporations through registration.
Corporations come in many different types but are usually divided by the law of the jurisdiction where they are chartered based on two aspects: whether they can issue stock, or whether they are formed to make a profit.[2] Depending on the number of owners, a corporation can be classified as aggregate (the subject of this article) or sole (a legal entity consisting of a single incorporated office occupied by a single natural person). One of the attractive early advantages business corporations offered to their investors, compared to earlier business entities like sole proprietorships and joint partnerships, was limited liability.[clarification needed] Limited liability means that a passive shareholder in a corporation will not be personally liable either for contractually agreed obligations of the corporation, or for torts (involuntary harms) committed by the corporation against a third party. Limited liability in a contract is uncontroversial because the parties to the contract could have agreed to it and could agree to waive it by contract. However, limited liability in tort remains controversial because third parties do not agree to waive the right to pursue shareholders. There is significant evidence that limited liability in tort may lead to excessive corporate risk taking and more harm by corporations to third parties.[3][4] Where local law distinguishes corporations by their ability to issue stock, corporations allowed to do so are referred to as stock corporations; one type of investment in the corporation is through stock, and owners of stock are referred to as stockholders or shareholders. Corporations not allowed to issue stock are referred to as non-stock corporations; i.e. those who are considered the owners of a non-stock corporation are persons (or other entities) who have obtained membership in the corporation and are referred to as a member of the corporation. 
Corporations chartered in regions where they are distinguished by whether they are allowed to be for-profit are referred to as for-profit and not-for-profit corporations, respectively. There is some overlap between stock/non-stock and for-profit/not-for-profit in that not-for-profit corporations are nearly always non-stock as well. A for-profit corporation is almost always a stock corporation, but some for-profit corporations may choose to be non-stock. To simplify the explanation, whenever "stockholder" or "shareholder" is used in the rest of this article to refer to a stock corporation, it is presumed to mean the same as "member" for a non-profit corporation or for a profit, non-stock corporation. Registered corporations have legal personality recognized by local authorities and their shares are owned by shareholders[5][6] whose liability is generally limited to their investment. Shareholders do not typically actively manage a corporation; shareholders instead elect or appoint a board of directors to control the corporation in a fiduciary capacity. In most circumstances, a shareholder may also serve as a director or officer of a corporation. Countries with co-determination employ the practice of workers of an enterprise having the right to vote for representatives on the board of directors in a company. In American English, the word corporation is most often used to describe large business corporations.[7][8] In British English and in the Commonwealth countries, the term company is more widely used to describe the same sort of entity while the word corporation encompasses all incorporated entities.[7] In American English, the word company can include entities such as partnerships that would not be referred to as companies in British English as they are not a separate legal entity. 
Late in the 19th century, a new form of the company having the limited liability protections of a corporation, and the more favorable tax treatment of either a sole proprietorship or partnership was developed. While not a corporation, this new type of entity became very attractive as an alternative for corporations not needing to issue stock. In Germany, the organization was referred to as Gesellschaft mit beschränkter Haftung or GmbH. In the last quarter of the 20th century, this new form of non-corporate organization became available in the United States and other countries, and was known as the limited liability company or LLC. Since the GmbH and LLC forms of organization are technically not corporations (even though they have many of the same features), they will not be discussed in this article.
]

== The Role of the Financial Manager and the Opportunity Cost of Capital

#figure(
  image("FM.png", width: 95%)
)

=== The Investment Trade-off

#figure(
  image("inv_tradeoff.png", width: 95%)
)

Given the figure above, where should the cash go? From the view of the owners of the corporation, the shareholders, the answer should depend on which way produces more benefit. If the investment is expected to produce a higher return than the shareholders could earn by investing on their own, then the cash should go into the investment; if not, the cash should be given back to the shareholders.

/ e.g. : Wal-Mart has cash set aside to build 10 new stores. It could go ahead with the new stores, or it could choose to cancel the investment project and instead pay the cash out to its stockholders. If it pays out cash, the stockholders can invest for themselves.\
  Suppose that Wal-Mart's new-store project is just about as risky as the U.S. stock market and that investment in the stock market offers a 10% expected rate of return.
If the new stores offer a superior rate of return, say 20%, then Wal-Mart's stockholders would be happy. If the new stores offer only a 5% return, then the stockholders are better off with the cash and without the new stores; in that case, the financial manager should turn down the investment project.

In the example above, the minimum acceptable rate of return of the investment is 10%, which is called the _hurdle rate_ or _cost of capital_. Actually, it's an *_opportunity cost of capital_*, for it reflects the investment opportunities available to investors in the market. Note that the opportunity cost of capital is not just any expected rate of return: it should be the expected rate of return of an investment that shares the same level of risk with the current one (it only makes sense under this circumstance).

== Goals of the Corporation

=== Shareholders Want Managers to Maximize Market Value

A large corporation may consist of both risk-averse and risk-tolerant investors, but regardless of the difference between them, maximizing market value is never wrong.

=== A Fundamental Result

=== ! ! Several Topics Remain Uncovered ... (not that important, it seems)

= How to Calculate Present Values

== Future Values and Present Values

=== Calculate Future Values

*_Money can be invested to earn interest. A dollar today is worth more than a dollar tomorrow._*

Suppose you have \$100 in the bank, which pays interest $r=7%$; it's easy to see that you will have $ dollar 100 times (1+r) = dollar 107$ the next year. Similarly, we can work out what the amount will reach in 2, 3, 4, ... years, which is the _future value_ of the \$100.

Future value of $dollar 100 = dollar 100 times (1+r)^t$ (at a compound rate)

=== Calculate Present Value

Calculating the present value is the reverse of calculating the future value: we want to figure out what amount now is equivalent to a cash flow in the future.
Present value (PV) = $C_t/(1+r)^t = C_t times "DF"_t$, where the discount factor (DF) = $1/(1+r)^t$

=== Calculate the Present Value of an Investment Opportunity

Suppose that you are considering constructing an office block, which requires you to invest $ dollar 370000$ initially and is expected to produce a cash flow of $ dollar 420000$ a year later. Assume that the rate of interest on U.S. government securities is $r=5%$ per year. Here come 2 questions:

+ Is the opportunity worth investing in?
+ If you want to sell the project after investing, at what price should you sell it?

Ans:

+ As said before, _a dollar today is worth more than a dollar tomorrow_; to answer the question, we need to calculate the present value of the future cash flow. (To simplify, we assume that the $\$420000$ is a sure thing.) Since $"PV" = dollar 420000/(1+5%) = dollar 400000 gt dollar 370000$, the answer is yes.
+ The project will produce a cash inflow of $ dollar 420000$, which is equivalent to $ dollar 400000$ now (the PV), so the answer is obvious: $ dollar 400000$.

#note()[
  Here we choose $r=5%$ as the $r$ in the discount factor. The reason this is valid is that we assume the $ dollar 420000$ inflow is sure to happen, and we also consider U.S. government securities safe, so the two have the same risk.

  I also want to stress the logic here; it's a little subtle. The present value of the investment opportunity means the present value of the investment's cash flow (which is $ dollar 420000$ in a year) valued against an equally risky investment (here, U.S. government securities). Since the present value of that cash flow under government securities is greater than the $ dollar 370000$ I need to invest to get the same return with the same risk, I choose to invest in building the office rather than put my money into government securities.
However, with that said in the textbook, I do think that in real-life situations the PV should be calculated with the highest discount rate the buyer has access to. So if I raise the question "what is the PV of the project to ME?", I believe the answer should be $dollar 370000$. But again, if one asks, I'm pretty sure that's not what they mean.
]

=== Net Present Value
The office building is worth $dollar 400000$ now, but that doesn't mean you have earned $dollar 400000$, because you invested $dollar 370000$ up front. So we need the _net present value_.

$ "NPV"="PV"-"investment" $

To expand the equation, NPV can be derived from

$ "NPV"=C_0 + C_1 dot "DF"_1 +C_2 dot "DF"_2 + dots.h + C_n dot "DF"_n $

=== Risk and Present Value
*_A safe dollar is worth more than a risky dollar._*

Most investors avoid risk when they can do so without sacrificing return. In the example above, we assume that the investment is safe, but that might not be the case in real life, hence the calculation above has defects. How do we correct it, then? We need to find the rate of return of a similarly risky investment to serve as the discount rate, and with the updated discount factor we can correct the calculation.

e.g. If you think it is as risky as investing in the stock market and the stock market offers a return rate of 12%, then $"DF"_t=1/(1+12%)^t$. And reasonably, the PV and NPV are lower, since there are risks.

=== Present Value and Rate of Return
From the example above, we concluded that constructing the office building is worth doing by calculating what we would have to invest in stock securities (an equivalent-risk investment) to earn the same benefit. We can see this another way: I invest in this opportunity because it promises a higher rate of return; to be precise, its rate of return exceeds the opportunity cost of capital.
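The NPV rule applied to the office-block numbers can be sketched in a few lines (a hedged sketch; `npv` is my own helper, not from the text):

```python
def npv(cashflows, r):
    """Net present value of cashflows [C0, C1, ..., Cn] at discount rate r.
    C0 is the (usually negative) initial investment."""
    return sum(c / (1 + r) ** t for t, c in enumerate(cashflows))

# Office-block example: invest $370,000 now, receive $420,000 in one year.
print(round(npv([-370_000, 420_000], 0.05)))  # 30000 -> accept (NPV > 0)
```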
$ "Return"="profit"/"investment" $

In this case, $"Return"=( dollar 420000- dollar 370000)/( dollar 370000)=.135$, which exceeds 12%, the rate of return of the equivalent-risk stock market. So we choose to invest in the project that has the higher rate of return.

Now we have two decision rules for capital investment:
+ _Net present value rule_. Accept investments that have positive NPVs.
+ _Rate of return rule_. Accept investments whose rate of return exceeds the opportunity cost of capital.

#note(name: [Caution])[
Sometimes there may be multiple results for the rate of return, and the two rules may conflict in some situations.
]

=== Calculating Present Value When There Are Multiple Cash Flows
Actually it's quite simple and easy to understand --- the following formula is called the *discounted cash flow* (or *DCF*) formula:
$ "PV" = sum_(t=1)^T C_t/(1+r)^t (=sum_(t=1)^T C_t dot "DF"_t) $
and
$ "NPV" = C_0 + "PV" = C_0 + sum_(t=1)^T C_t/(1+r)^t $

== Annuity

= Valuing Bonds
What is a bond? A bond is a promise by the borrower (a firm or the government) to pay the lender (the bondholder) a certain amount of money per period for a certain length of time. So a bond is a set of cash flows.

== Using the Present Value Formula to Value Bonds
$ "PV" = C_1/(1+r)^1 + C_2/(1+r)^2 + dots.h + ("face value" + C_N)/(1+r)^N, $
$ "where", &C_t= "coupon interest payment" = "face value" times "coupon rate"\ &r = "yield to maturity (if paid annually)"\ &N = "maturity" $

Now consider a bond,

#show : tbl.template.with(tab: "|", align: center, box:true)
```tbl
CSSS
CCCC.
Cash Payments
_
2009|2010|2011|2012
$euro 8.50$|$euro 8.50$|$euro 8.50$|$euro 108.50$
```

Now work out the present value of the bond. Suppose there is an equally risky investment that offers a return rate of $3.0%$, so the opportunity cost of capital is $3.0%$. Then we have
$ "PV"=8.50/1.03 + 8.50/1.03^2 + 8.50/1.03^3 + 108.50/1.03^4 = euro 120.44 $
So the price of the bond now is $euro 120.44$.
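To make the bond arithmetic concrete, here is a small Python sketch of the DCF formula applied to the 4-year euro bond above, plus a bisection solver for the yield to maturity (the function names and the bisection approach are my own additions, not from the text):

```python
def pv(cashflows, r):
    """DCF formula: PV = sum of C_t / (1 + r)^t for t = 1..T."""
    return sum(c / (1 + r) ** t for t, c in enumerate(cashflows, start=1))

def ytm(price, cashflows, lo=0.0, hi=1.0):
    """Yield to maturity by bisection: the rate y at which pv(...) = price."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if pv(cashflows, mid) > price:
            lo = mid  # computed PV too high -> the yield must be higher
        else:
            hi = mid
    return (lo + hi) / 2

bond = [8.50, 8.50, 8.50, 108.50]   # coupons, plus face value at maturity
print(round(pv(bond, 0.03), 2))     # 120.44
print(round(ytm(120.44, bond), 4))  # 0.03
```

Bisection works here because the PV of fixed cash flows is strictly decreasing in the discount rate.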
Bond prices are usually quoted as a percentage of face value, so the price above can also be expressed as $120.44%$.

#note()[
Note that the equation above also means that if you buy the bond now at the price of $euro 120.44$, you will earn a return of $3.0%$ if you hold the bond to maturity.
]

=== Yield to Maturity (YTM)
Above we calculated the PV of a bond; now, knowing the price of the bond, we wish to calculate its rate of return. So,
$ euro 120.44 =(euro) thin 8.50/(1+y) + 8.50/(1+y)^2 + 8.50/(1+y)^3 + 108.50/(1+y)^4 $
gives $y=3.0%$, so the rate of return of the bond is $3.0%$, which is called the _yield to maturity_.

What determines the YTM? I think the answer is the market. A bond is a fixed set of cash flows, so according to the equation above, the YTM is determined by, and only by, the price of the bond. The price, however, is determined by the market. If some equally risky investment offered a better rate of return, the bond wouldn't sell, so the bond's price must be adjusted lower to meet the higher rate of return.

=== How Bond Prices Vary with Interest Rates
Since a bond is a fixed set of cash flows, its price goes up as interest rates decline, and its price goes down as interest rates rise. In a word, they move in opposite directions.

== Duration and Volatility
#def()[
Duration:
$ "Duration" = (1 times "PV"(C_1))/"PV" + (2 times "PV"(C_2))/"PV" + (3 times "PV"(C_3))/"PV" + dots + (T times "PV"(C_T))/"PV" $
]

The notion of duration originates from the following example:
#show : tbl.template.with(tab: "|", align: center, box:true)
```tbl
CCCCSS
CCCCCC
LNNNNN.
| Price (%) || Cash payments (%)
Bond | Feb. 2009 | Aug. 2009 | Feb. 2010... | ...Aug. 2014 | Feb. 2015
Strip for Feb. 2015| 88.74 | 0 | 0 ... | ... 0 | 100.00
4s of Feb. 2015| 111.26 | 2.00 | 2.00 ... | ... 2.00 |102.00
11 1/4s of Feb. 2015| 152.05 | 5.625 | 5.625 ... | ... 5.625 | 105.625
```
All three bonds above are priced in Feb. 2009 and mature in Feb. 2015, but intuitively, they cannot share the same duration.

#def()[
Volatility:
$ "volatility" = "duration"/(1+ "yield") = "modified duration" $
]

Volatility measures how bond prices change when interest rates change. It is essentially the derivative of the price with respect to the yield.

== The Term Structure of Interest Rates
For many purposes, using a single fixed discount rate is good enough, but there are occasions when it is necessary to recognize that short-term interest rates differ from long-term rates.

#def()[
Term Structure of Interest Rates: The relationship between short- and long-term interest rates is called the term structure of interest rates.
]

=== Spot Rates, Bond Prices, and the Law of One Price
The _spot rate_ is the interest rate for a certain year. If the one-year spot rate is (e.g.) $3%$, then the PV of $dollar 1$ received in year one is $"PV"=(dollar 1)/1.03$.

_The law of one price_ states that in a well-functioning market, the same commodity must sell at the same price. Therefore, all safe cash payments delivered on the same date must be discounted at the same discount rate.

=== Measuring the Term Structure
We can use the prices of strips to measure the term structure.

=== Money Machine
_A dollar tomorrow cannot be worth less than a dollar the day after tomorrow._

_There is no such thing as a surefire money machine._

#def()[
Arbitrage: The simultaneous buying and selling of securities, currency, or commodities in different markets or in derivative forms in order to take advantage of differing prices for the same asset.
]

== Explaining the Term Structure
=== Expectations Theory of the Term Structure
#def()[
Expectations Theory: In equilibrium, investment in a series of short-maturity bonds must offer the same expected return as an investment in a single long-maturity bond. Only if that is the case would investors be prepared to hold both short- and long-maturity bonds.
]

=== Introducing Risk

=== Inflation and the Term Structure
Inflation risk can be reduced by investing short-term and rolling over the investment. You don't need to know the exact inflation rate, because short-term interest rates adapt to it. If inflation is an important source of risk for long-term investors, borrowers must offer some extra incentive if they want investors to lend long.

== Real and Nominal Rates of Interest
Let's look at the inflation rate and the interest rate together. Suppose you invest $dollar 1000$ in a one-year bond that makes a single payment of $dollar 1100$ next year. The cash flow is certain, but the government makes no promise about what that money can buy. If the inflation rate this year is above $10%$, the $dollar 1100$ next year is worth less than the $dollar 1000$ now.

The best-known index measuring inflation is the Consumer Price Index (CPI), which measures the number of dollars it takes to pay for a typical family's purchases. So what we should really care about are _real_ dollars rather than _nominal_ dollars.

#def()[
Real Cash Flow:
$ "real cash flow at date" t = ("nominal cash flow at date" t)/(1+"inflation rate")^t $
]

Suppose you invested in a bond that yields $10%$; with inflation taken into consideration (assume it's $6%$), the real situation looks like this:

#show: tbl.template.with(tab: "|", box:true, align: center)
```tbl
C|CC.
Invest Current Dollars | Receive Dollars in Year 1 | Result
$dollar 1000$ | $dollar 1100$ | $10%$ nominal rate of return
_
\^ | Expected Real Value of Dollars in Year 1 | Result
\^ | $dollar 1037.74 (=1100\/1.06)$ | $3.774%$ expected real rate of return
```

The formula for the real rate of return:
$ 1+r_"real" = (1+r_"nominal")/(1+"inflation rate") $
The real rate of interest is relatively stable.

=== Indexed Bonds
You can buy an indexed bond that makes cash payments linked to inflation, to nail down the real rate of return.
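The nominal-vs-real relationship above is easy to check numerically; a tiny sketch using the 10% nominal / 6% inflation numbers from the table:

```python
def real_rate(nominal, inflation):
    """1 + r_real = (1 + r_nominal) / (1 + inflation rate)."""
    return (1 + nominal) / (1 + inflation) - 1

print(round(real_rate(0.10, 0.06), 5))  # 0.03774 -> the 3.774% in the table
```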
=== Corporate Bonds and the Risk of Default
Corporations that get into financial distress may also be forced to default on their bonds. Thus the payments promised to corporate bondholders represent a best-case scenario: the company will never pay more than the promised cash flows, but in hard times it may pay less. So, in a word, there are risks in buying corporate bonds. The safety of bonds can be judged from the bond ratings provided by Moody's, Standard & Poor's (S&P), and Fitch.

#show: tbl.template.with(tab: "|", box:true, align: center)
```tbl
CC
LL.
Moody's | Standard & Poor's and Fitch
_
Aaa | AAA
Aa | AA
A | A
Baa | BBB
_
Ba | BB
B | B
Caa | CCC
Ca | CC
C | C
```

Bonds rated Baa and above are called _investment grade_; those with a rating of Ba and below are called _junk bonds_.

= The Value of Common Stocks
== How Common Stocks Are Traded
- If a corporation wants to raise new capital, it can do so by selling new shares to investors in the _primary market_.
- Most trades in a corporation's shares take place on a stock exchange, where investors trade existing shares. Stock exchanges are therefore also called _secondary markets_.
- Stock exchanges include the NYSE and Nasdaq
- A number of computer networks called _electronic communication networks (ECNs)_

== How Common Stocks Are Valued
In the following we take the company GE as an example.

One way to value GE's stock is through the company's balance sheet, which is published each quarter. The balance sheet lists the value of the firm's assets and liabilities. The assets include GE's plant, machinery, inventories, cash, and so on. The liabilities include the money that GE owes the banks, taxes that are due to be paid, and so on. The difference between the assets and the liabilities is the _book value_ of GE's equity.

The book value seems to evaluate the company's equity successfully, but there are several deficiencies:
+ The book value of the assets only shows the original (historical) cost.
The value now may vary due to inflation, depreciation, and other factors (the value may even increase).
+ It cannot value intangibles properly.

=== Valuation by Comparables
Skip

=== The Determinants of Stock Prices
The shareholders receive cash from the company in the form of a stream of dividends, so
$ "PV"("stock")="PV"("expected future dividends") $

=== Today's Price
Suppose the current price of a share is $P_0$, that the expected price at the end of a year is $P_1$, and that the expected dividend per share is $"DIV"_1$. Then
$ "Expected return"=r=("DIV"_1+P_1-P_0)/P_0 $
On the other hand, given forecasts of the dividend and price and the expected return offered by other equally risky stocks, you can predict today's price.
$ "Price" =P_0=("DIV"_1+P_1)/(1+r) $
What exactly is the discount rate $r$? It's called the _market capitalization rate_ or _cost of equity capital_. And the group of stocks that essentially share the same risks as the target stock is called the risk class of the target stock.

_All securities in an equivalent risk class are priced to offer the same expected return._

=== What Determines Next Year's Price?
By the same method,
$ P_1=("DIV"_2+P_2)/(1+r) $
$ P_0=sum_(t=1)^H "DIV"_t/(1+r)^t + P_H/(1+r)^H $
For a long-term investment whose horizon approaches infinity, the present value will be
$ P_0=sum^infinity_(t=1)"DIV"_t/(1+r)^t $

== Estimating the Cost of Equity Capital
Suppose that the growth of dividends is perpetual and at a constant rate $g$; according to the PV formula and the formula for a growing perpetuity, the present value is $P_0="DIV"_1/(r-g)$. So the expected rate of return is $r="DIV"_1/P_0+g$.

The expected return equals the _dividend yield_ ($"DIV"_1/P_0$) plus the expected rate of growth of the dividends.

=== Measuring the Rate of Return
The rate of return is composed of two parts: the dividend yield and the dividend growth. The former is easy to measure, while the latter is not.
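The constant-growth estimates above can be sketched in a few lines of Python (the \$5 dividend, \$100 price, and 4% growth are made-up inputs, not from the text):

```python
def cost_of_equity(div1, p0, g):
    """Constant-growth DCF estimate: r = DIV1 / P0 + g."""
    return div1 / p0 + g

def gordon_price(div1, r, g):
    """Growing-perpetuity price: P0 = DIV1 / (r - g), valid for r > g."""
    return div1 / (r - g)

# Hypothetical inputs: DIV1 = $5, P0 = $100, growth g = 4%.
r = cost_of_equity(5, 100, 0.04)
print(round(r, 2))                      # 0.09
print(round(gordon_price(5, r, 0.04)))  # 100  (the two formulas invert each other)
```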
The dividend growth rate is measured as follows:
$ g&="plowback ratio" times "ROE" \ &=(1-"payout ratio") times "EPS"/"book equity per share" \ &=(1-"DIV"/"EPS") times "EPS"/"book equity per share" $

=== Dangers Lurk in Constant-Growth Formulas
The constant-growth formula may produce an unusually high growth rate, which cannot hold forever, while the formula assumes it will.

==== DCF Valuation with Varying Growth (Not in PPT)
Consider a firm with $"DIV"_1=dollar .50$ and $P_0=dollar 50$. The firm has plowed back $80%$ of its earnings and has an ROE of $25%$. The statistics show that *_in the past_*
$ "Dividend growth rate"="plowback ratio"times "ROE"=.8 times .25=.20 $
In the constant-growth formula we would assume that the future dividend growth rate is .20, which yields $r=(.50)/50.00+.20=.21$. But this is obviously silly, because no firm can keep growing at such a rate. So in this situation we should use DCF with varying growth:
$ P_0&="DIV"_1/(1+r)+"DIV"_2/(1+r)^2+("DIV"_3+P_3)/(1+r)^3\ &="DIV"_1/(1+r)+"DIV"_2/(1+r)^2+"DIV"_3/(1+r)^3+1/(1+r)^3 times "DIV"_4/(r-g) $

=== Stock Price and Earnings Per Share
$ P_0 = "EPS"_1/r + "PVGO" $
where PVGO is the present value of growth opportunities.

== Valuing a Business by DCF (Using DCF)
#def()[
Free Cash Flow (FCF): Free cash flow is the amount of cash that a firm can pay out to investors after paying for all investment necessary for growth.
]
$ "PV" =underbrace("FCF"_1/(1+r)+"FCF"_2/(1+r)^2+ dots.h + "FCF"_H/(1+r)^H,"PV"("free cash flow")) + underbrace("PV"_H/(1+r)^H,"PV"("horizon value")) $

= Net Present Value and Other Investment Criteria
- NPV
- Book rate of return
- Payback
- IRR
- Profitability Index (PI)

Easy, skipping

= Making Investment Decisions with the Net Present Value Rule
Skip
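Going back to the varying-growth DCF above, a hypothetical two-stage sketch (the dividend path, 10% rate, and 5% terminal growth are made-up inputs, not the textbook's numbers):

```python
def two_stage_price(divs, r, g):
    """Price a stock from explicitly forecast dividends divs = [DIV1..DIVn],
    after which dividends grow forever at constant rate g (requires r > g)."""
    n = len(divs)
    pv_explicit = sum(d / (1 + r) ** t for t, d in enumerate(divs, start=1))
    div_next = divs[-1] * (1 + g)        # DIV_{n+1}
    horizon = div_next / (r - g)         # P_n via the growing perpetuity
    return pv_explicit + horizon / (1 + r) ** n

# e.g. three years of fast-growing dividends, then 5% growth, r = 10%:
print(round(two_stage_price([0.50, 0.60, 0.72], 0.10, 0.05), 2))  # 12.85
```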
#import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
  "In Praise of the Worldsoul, Part 1",
  set_name: "Return to Ravnica",
  story_date: datetime(day: 26, month: 09, year: 2012),
  author: "<NAME>",
  doc
)

"Tiptoes!" the young elf squeaked before yanking desperately on the left rein. Her wolf veered sharply and tripped off the low beam. The elf clutched at him, slipped sideways in the saddle, and they both tumbled awkwardly to the ground.

Ruzi fought the urge to laugh. "Tiptoes? Is that your new nickname?" The wolf growled in response to his question and gave the young elf an indignant look.

"Tell me what you did wrong," Ruzi said to his two-legged student. It was the elf’s first time at the training hall, which was a combination obstacle course and classroom. The large, airy hall had a sprawling training floor with an elaborate network of high-wires and hanging obstacles. The room smelled of cedar shavings and honeysuckle, which flowered on the latticework walls. On sunny days like today, Ruzi opened the ornate glass skylight to let in the summer breeze.

As Master Trainer, Ruzi taught the youngest Selesnyans and their wolves to become Wolf Riders. The assortment of boxes, pillars, and beams was designed to mimic the environment his students would face as they traversed the rooftops of the city.

#figure(image("004_In Praise of the Worldsoul, Part 1/02.jpg", width: 100%), caption: [], supplement: none, numbering: none)

"Pulled the rein too hard?" she mumbled, brushing sand out of her wolf’s reddish fur. The girl and her wolf, both on the early side of adolescence, had the wide-eyed curiosity that he missed from his childhood. They also had the long, gawky limbs that he missed not at all.

"And?" Ruzi prompted. When she looked confused, he turned to the rest of his pupils, who watched attentively from the wooden dais that ran around the perimeter of the training floor.
This group of youngsters was new to training, but all had been born into the Conclave, so he recognized their faces. Riders had to bond with pups at a young age, so adult recruits to the guild rarely attained the status of Wolf Rider. No one spoke. They just stared at their teacher with quiet reverence. Ruzi was a legend among the Selesnyans, much to his annoyance. But it would only take a few training sessions for the youngsters to forget about his heroism during the District Riots and focus on the work of making themselves into competent riders. #figure(image("004_In Praise of the Worldsoul, Part 1/04.jpg", width: 100%), caption: [], supplement: none, numbering: none) "She slowed down before she reached the beam," Ruzi said. He turned the crank on a pulley and raised the narrow beam high off the ground. "By decreasing speed, you diminish agility. You have to trust your wolf’s instincts and not give way to your fears." "Kuma!" Ruzi called to his wolf, who was already jumping down from the dais to take his place beside his rider. Kuma was without saddle or reins, so Ruzi crouched on the wolf’s broad back. As soon as Kuma felt his rider’s weight, he sprang between the staggered platforms until he reached the highest station near the ceiling. The narrow beam swayed in the wind that swept down through the open skylight. But Ruzi urged his wolf forward without hesitation. Kuma seemed to dance across the span while Ruzi mimicked shooting arrows at his awed students. Back on the ground, Ruzi smiled at Kuma, who gave a toothy grin of his own. "Trust your wolf," he said. "Assume it’s at least #emph[twice] as smart as you." His students laughed and headed for the exit, playfully romping with the wolves they’d been bonded with since before they could walk. When the last one was out the door, he finally acknowledged the dryad who had been seething by the door for most of the session. He’d wanted her to leave, so he’d ignored her and her increasing frustration. 
"Welcome, Mazena," he said. "What can I do for you?" "We have an important message from Guildmaster Trostani, and you kept us standing for ages," she snapped. "Would you have me neglect my students in favor of you?" he asked. Mazena was one of Trostani’s most trusted advisors. Her charisma could not be refuted, but her legalism was insufferable. She emanated an aura of power and self-assurance that both drew and repelled him. "We would have you spend more time praising the Worldsoul than doing carnival tricks for the children." "It’s hard to learn what you cannot see with your own eyes," Ruzi replied. Ruzi wasn’t popular among the dryads. His discontent with guild leadership was widely known, and only his heroic past kept him in the good graces of the dryad leaders. "And yet the Worldsoul is our greatest teacher and greatest mystery," Mazena said. She was quoting a passage from #emph[Trostani’s Teachings] , the surest way to end any discussion. "What do you want?" Ruzi asked rudely, deciding that philosophical debate was only prolonging the inevitable. Ruzi loved his guild—building gardens, nurturing animals, the honoring of life and community. But secretly, he loved the #emph[work] of the guild more than he worshipped the Worldsoul. Selesnyans believed the Worldsoul was a collective unconsciousness that guided them toward unity and harmony. Ruzi found it easier to trust in his wolf and the bow in his hand, and to use them if necessary. #figure(image("004_In Praise of the Worldsoul, Part 1/06.jpg", width: 100%), caption: [], supplement: none, numbering: none) "We want harmony," Mazena said. "We want peace. We want what is best for the Conclave." Ruzi had seen far more harmony from the threat of violence than from songs sung of the Worldsoul. "Why are you here now? At this moment. Standing in #emph[my] classroom." "Trostani has sensed a troubling… situation," she said. "You must go to the Rubblebelt. You must leave today." "What?" Ruzi asked in surprise. 
The Rubblebelt was a ruined tract of land that began at the far edge of the Tenth District. "That is not a meager journey."

"It is best for the Conclave," Mazena said. "Or as a hero, do you think you are too good for service?"

Ruzi fought the urge to argue with her. There was something inaccurate with every facet of that statement. But such was the manipulative nature of dryads. "What do you want me to do there? Spy on the Gruul? Bring back some boar for dinner?"

Despite her beauty, the dryad made a remarkably unpleasant face. "Must you really ask?"

"You want me to go to Cecilee," he said. "You want me to find my sister."

"She is your blood, your family within our family. How can you hesitate?"

He hesitated because she had changed from the girl he grew up with. She left Vitu-Ghazi to settle in the wasteland of the Rubblebelt. She left to heal the sick and dying, to bring harmony to those who knew only chaos. She left, and he never expected to see her again.

When they were young, the three of them—Cecilee, Ruzi, and his wolf Kuma—had been inseparable. But Cecilee was now the leader of her own community, and the words that came out of her mouth never seemed like #emph[her] words.

He’d heard that she’d adopted an abandoned Gruul child to raise as her own. The Wolf Riders who visited her told him that his new nephew was even named for him. But Cecilee herself had sent no news or made any attempt to contact her brother.

Still, Mazena the Manipulative was right about one thing. When it came to Cecilee, it wasn’t in him to say no.

Hours later, Ruzi and Kuma had reached the last roof at the edge of the Rubblebelt. It was not yet dawn, although the far horizon was tinged with the red of the coming sunrise. Ill at ease, he surveyed the unfamiliar ground below.

#figure(image("004_In Praise of the Worldsoul, Part 1/08.jpg", width: 100%), caption: [], supplement: none, numbering: none)

For years, Ruzi and Kuma had traveled the skyways together.
He loved wandering among the domes and spires where he could feel the sunlight unobstructed. Gardens dotted the rooftops, little patches of growth and color, which gave Ruzi hope whenever he spotted them.

Ruzi hated the din and squalor of the ground, where the sun barely filtered down to the cobblestones. Down below, everything was tinged with a brownish haze. The brightest colors were the raggedy clothing of children.

He and his wolf could travel for days across the tops of buildings, and Kuma’s paws never had to scuff the streets of Ravnica. But it seemed they no longer had a choice.

Before them, the inky darkness of the Rubblebelt stretched like a pit of black nothingness. No lanterns, no comforting scent of leaves, nothing but a desperate scrubland of rubble and savages.

"This is where Cecilee chooses to lay her head?" Ruzi murmured to his wolf.

Somewhere in that darkness was his sister’s small #emph[vernadi], a burgeoning community of devoted evangels, who were intent on bringing the truth of Selesnya to all who had not yet heard of it.

"If anyone can make roots grow here, it’s Cecilee," Ruzi assured himself, as they backtracked to look for a way down to street level.
#import "@preview/showybox:2.0.1": showybox

#set text(1.4em, hyphenate: false)
#set par(justify: true)

#showybox[A man has been given a random number from 1-100 and then sent to a room with 100 boxes, each containing a random number. He can check a maximum of 50 boxes to find his number. What is the probability that he will find his number?]

_Intuitively, the answer to this question is $1/2$; this article tries to arrive at the answer mathematically using conditional probabilities and the total probability theorem. [Disclaimer: This is a very overcomplicated way to arrive at the result.]_

*Solution*

Here there exist two random elements (the selection of the number, and the arrangement of the boxes in the room), making it difficult to perform a probability calculation. Let us fix one of them for now. Say his randomly chosen number is $N = 10$.

Now, let us assume that the boxes are arranged randomly, and that he checks them in order of their position. So he checks the first box, then the second, then the third, and so on.

Let $A_n$ denote the event that he finds his number (10) in the $n^"th"$ box. Before he opens any box, he has no knowledge of any number, so the probabilities are:

$ P(A_1) = 1/100, P(A_2) = 1/100, P(A_3) = 1/100, ..., P(A_100) =1/100 $

// $A_2 = 99/100 dot 1/99$ (his number is not in box 1, and is in box 2 (which had 99 possibilities))\
// $A_3 = 99/100$

Once he has opened box 1, he has suddenly gained new information, namely the number in box 1. There are two cases:

*Case 1:* He has found his number (Success), $A_1$: Now the probabilities become:
$ P(A_1/A_1) = 100/100, P(A_2/A_1) = 0, P(A_3/A_1) = 0, ..., P(A_100/A_1) =0 $

*Case 2:* He has not found his number, $A'_1$: Now the probabilities become:
$ P(A_1/A'_1) = 0, P(A_2/A'_1) = 1/99, P(A_3/A'_1) = 1/99, ..., P(A_100/A'_1) =1/99 $
(He distributes the probability equally among all the remaining boxes.)

So clearly, the probability values of the $A_n$ are not static but dynamic, i.e.
they change according to the information gained by the player. These are called conditional probabilities. (Incidentally, this concept is key to understanding many famous probability "paradoxes", like the Monty Hall problem.)

So, to find the overall probability of $A_2$, we can use the total probability theorem:

$ P(A_2) &= P(A_1 sect A_2) + P(A'_1 sect A_2)\ &= P(A_1)P(A_2/A_1) + P(A'_1)P(A_2/A'_1)\ &= 1/100 dot 0 + 99/100 dot 1/99\ &= 1/100 $

Similarly, for $A_3$ we can write

$ P(A_3) &=&& P(A_2 sect A_3) + P(A'_2 sect A_3)\ &=&& P(A_1 sect A_2 sect A_3) + P(A'_1 sect A_2 sect A_3) + P(A_1 sect A'_2 sect A_3) + \ &&& P(A'_1 sect A'_2 sect A_3) $

These are all the possibilities for $A_1$ and $A_2$ that might have occurred before opening the third box. But we can clearly see that, just as in the case above, the first three terms become 0. This is because all $A_n$ are disjoint events, and hence they cannot occur simultaneously. So any event of the type $A_i sect A_j$ is an impossible event. So we can simplify to:

#{
set text(0.8em)
$ P(A_1) &= P(A_1) &&= P(A_1) &&= 1/100\ P(A_2) &= P(A'_1 sect A_2) &&= P(A'_1)P(A_2/A'_1) &&= 99/100 dot 1/99\ P(A_3) &= P(A'_1 sect A'_2 sect A_3) &&= P(A'_1)P(A'_2/A'_1)P(A_3/(A'_1 sect A'_2)) &&= 99/100 dot 98/99 dot 1/98\ P(A_4) &= P(A'_1 sect A'_2 sect A'_3 sect A_4) && = P(A'_1)P(A'_2/A'_1) ... P(A_4/(A'_1 sect A'_2 sect A'_3)) &&= 99/100 dot 98/99 dot 97/98 dot 1/97 $
}

Following the pattern, we can say that $P(A_n) = 1/100$ for every $n$. (We could have stated this directly without any calculation: it is equally likely that he finds his number in any of the boxes, because the order of arrangement is random.)

Now, this result is obviously not specific to $N=10$; it is true for any $N$, and all $N$ are equally likely to be picked. So the final result is that $P(A_n) = 1/100$, meaning each $A_n$ is equally likely.
Finally, to find the probability of success,
#set box(stroke: 1pt, outset: (x: 4pt, y: 4pt), baseline: 0.65em)
$ P("Success") = frac("Favourable", "Total") &= P(A_1 union A_2 union A_3 union A_4 union ... union A_50) / 1\ & =50 times 1/100 = #box($ 1/2 $) $
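The $1/2$ result is also easy to check empirically. A small Monte Carlo sketch of the setup (random target, random box arrangement, 50 boxes opened):

```python
import random

def trial(n_boxes=100, n_checks=50):
    """One round: a random target number, randomly arranged boxes,
    and the player opening the first n_checks boxes."""
    boxes = list(range(1, n_boxes + 1))
    random.shuffle(boxes)
    target = random.randint(1, n_boxes)
    return target in boxes[:n_checks]

random.seed(0)
wins = sum(trial() for _ in range(100_000))
print(wins / 100_000)  # close to 0.5
```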
#let Title(title, credit) = [ #align(center, text(size: 17pt, font: "Hiragino Kaku Gothic ProN", weight: "bold", title)) #align(center, text(font: "Hiragino Kaku Gothic ProN", credit)) #line(length: 100%) ]
#import "../lib.typ": * // #set page(width: 30em, height: auto) #set text(font: "Inria Sans", number-type: "old-style") #show math.equation: set text(font: "GFS Neohellenic Math") #show: setup-lovelace.with( line-number-supplement: "Zeile", ) #pseudocode( line-number-transform: n => numbering("1", 10*n), indentation-guide-stroke: (thickness: 1pt, paint: gray, dash: "solid"), no-number, [*input:* Graph $G = (V, E)$ with edge lengths $e$, source $w$], no-number, [*output:* distances $"dist"$, predecessors $"prev"$], [$Q <- $ empty queue], [*for each* $v in V$ *do*], ind, [$"dist"[v] <- oo$], [$"prev"[v] <- perp$ #comment[$perp$ denotes undefined]], [add $v$ to $Q$], ded, [*end*], [$"dist"[w] <- 0$ #comment[We start at $w$ so the distance must be zero]], no-number, [], [*while* $Q$ is not empty *do*], ind, <line:argmin>, [$u <- op("argmin")_(u in Q) "dist"[u]$], [remove $u$ from $Q$], [*for each* neighbour $v$ of $u$ still in $Q$ *do*], ind, [$d' <- "dist"[u] + e(u, v)$], [*if* $d' < "dist"[v]$ *then*], ind, $"dist"[v] <- d'$, [for demo purposes, here comes a long line: #lorem(10)], $"prev"[v] <- u$, ded, [*end*], ded, [*end*], ded, [*end*], ) The crucial step happens in @line:argmin. Here, we need $"dist"$ to be an instance of a data structure that allows us to find the $op("argmin")$ efficiently. #algorithm( caption: lorem(20), supplement: "Algorithmus", placement: none, pseudocode( indentation-guide-stroke: 1pt + gray, <line:test>, [this is a very short algorithm], ..range(10).map(i => ([or is it?], ind)).flatten() ) ) <the-algo> The line number starts counting from @line:test again in @the-algo.
#set text(
  font: "New Computer Modern",
  size: 6pt
)
#set page(
  paper: "a5",
  margin: (x: 1.8cm, y: 1.5cm),
)
#set par(
  justify: true,
  leading: 0.52em,
)
// define a custom theorem counter
#let theorem_counter = counter("theorem")
#theorem_counter.update(1)

= Introduction to Determinants
When solving systems of linear equations, we said earlier that one can reduce the augmented matrix to row echelon form to analyze and determine the solvability of the system, but doing so is already close to actually solving it. We therefore need another method that determines the solvability directly from the coefficient matrix or the augmented matrix. This is what motivates determinants.

== Second-Order Determinants
Consider the system of two linear equations:
$ cases(
  a_(11)x_1 + a_(12)x_2 = b_1,
  a_(21)x_1 + a_(22)x_2 = b_2
) $
Its augmented matrix is:
$ mat(a_(11), a_(12), b_1; a_(21), a_(22), b_2) => \
mat(a_(11), a_(12), b_1; 0, a_(22)- a_21/a_(11) a_(12), b_2 - a_(21)/a_(11) b_1) $
Then (assuming $a_(11)!=0$):
$ cases(
  a_(22) - a_21/a_(11) a_(12) != 0 "the system has a unique solution",
  a_(22) - a_21/a_(11) a_(12) = 0 "the system has infinitely many solutions, or none"
) \
a_(22) - a_21/a_(11) a_(12) = 0 => a_(11)a_(22) - a_(21)a_(12) = 0 $
So the solvability of a system of two linear equations can be decided by whether $a_(11)a_(22) - a_(21)a_(12)$ equals 0. Note that the column indices $1, 2$ of $a_(11)a_(22)$ form a permutation of the numbers 1, 2.

== Permutations of n Elements
There are $n!$ permutations of n (pairwise distinct) natural numbers; the one in increasing order is called the natural order of the n-element permutations.

=== Inversions
Take the five numbers `12345` as an example; this is the natural order. Swapping the last two positions gives `12354`, in which 5, 4 are not in increasing order; we call such a pair an inversion pair. The number of inversion pairs of a permutation is called its inversion number, denoted $tau$. For example:
$ 12345 -> "inversion number" 0; tau(12345) = 0\
12354 -> "inversion number" 1 \
12534 -> "inversion number" 2 $
A program to compute the inversion number of a permutation:
```py
def get_inverse_order(nums: list):
    """Count inversion pairs: (i, j) with i < j and nums[i] > nums[j]."""
    cnt = 0
    for idx in range(len(nums)):
        for j in range(idx + 1, len(nums)):
            if nums[idx] > nums[j]:
                cnt += 1
    return cnt
```
A permutation whose inversion number is even is called an even permutation; if it is odd, an odd permutation.

Theorem #theorem_counter.display(). `Swapping any two elements of a permutation changes its parity.`

Proof: First consider adjacent elements. Suppose the permutation is $a_0, a_1, ..., a_i, a_j, ..., a_n$ with $a_i, a_j$ adjacent and $a_i<a_j$, and swap $a_i, a_j$ to obtain $a_0, a_1, ..., a_j, a_i, ..., a_n$. The inversion relations between $a_0, a_1, ..., a_(i-1)$ and the pair $a_i, a_j$ are unchanged, and likewise the inversion relations between $a_i, a_j$ and $a_(j+1), ..., a_n$; only $a_i, a_j$ themselves have been exchanged, so the inversion number changes by 1, which flips the parity of the permutation.

The general case can be reduced to the adjacent case above: move $a_j$ step by step to the position of $a_i$, which takes $j-i$ adjacent swaps, at which point $a_i$ sits at position $i+1$; moving $a_i$ to the original position of $a_j$ then takes another $j-i-1$ steps. In total $2(j-i) - 1$ steps (an odd number) are needed, and an odd number of adjacent swaps flips the parity, so the parity of the permutation changes.

#theorem_counter.step()
Theorem #theorem_counter.display().
`n元排列可以由自然序1,2, ...,n 经过K次对换而来,那么该排列的奇偶性与K的奇偶性相同`

证:偶数次对换不改变奇偶性,奇数次对换改变奇偶性,而自然序为偶排列,偶排列经过偶数次对换仍然为偶排列,经过奇数次对换变为奇排列。*并且,一个奇排列一定要经过奇数次对换才可得到,但K不唯一*,同理适用于偶排列。

例如 序列$132$ 可以由$123$调换$23$得到(1次),也可以由$123->321->231->132$ 共3次对换得到
== n 阶行列式定义
了解了n元排列及2阶行列式后,我们可以给出n阶行列式定义

对于n阶`方阵`$A_n$,其行列式记作
$ det(A_n) = |A_n| = sum_(j_1,j_2, ..., j_n)(-1)^(tau((j_1,j_2, ..., j_n))) a_(1j_1)a_(2j_2)...a_(n j_n) $
其中$j_1,j_2,..., j_n$ 代表列指标的一个n元排列,上述的$sum$是对列指标的全部n元排列(共n!项)形成的元素乘积项求和,当列指标的n元排列为奇排列时,该项前带负号,否则带正号,注意定义中行指标是自然序

例如,对于二阶行列式:
$ a_(11)a_(22) - a_(21)a_(12) $
共$2! = 2$项乘积求和,其中第1项的列指标为$1,2$排列,为偶排列,带正号,第2项为 $2, 1$,为奇排列,带负号

对于3阶方阵,求其行列式,先写出其排列$123, 132, 213, 231, 312, 321$, 其中$123, 231, 312$是偶排列, $132, 213, 321$是奇排列,所以其行列式就可以写作:
$ a_(11)a_(22)a_(33) + a_(12)a_(23)a_(31) + a_(13)a_(21)a_(32) - a_(13)a_(22)a_(31) - a_(11)a_(23)a_(32) - a_(12)a_(21)a_(33) $
行列式中任意项都是不同行、不同列的元素相乘,只要其中任意1元素为0,则该项为0。如果一个n元方程组(n个方程)的增广矩阵可以化为有n个主元的形式,那么其对应的系数矩阵就可以化为一个上三角矩阵,形如
$ A = mat( a_(11), a_(12), a_(13), ..., a_(1n); 0, a_(22), a_(23), ..., a_(2n); 0, 0, a_(33), ..., a_(3n); ..., ...,...,...,...,; 0, 0, 0, ..., a_(n n); ) $
我们求这个上三角矩阵的行列式:
$ det(A) = sum_(j_1,j_2, ..., j_n)(-1)^(tau((j_1,j_2, ..., j_n))) a_(1j_1)a_(2j_2)...a_(n j_n) $
对于上三角矩阵第n行的元素,$j_n < n -> a_(n j_n) = 0$。对于排列$j_1, j_2, ... j_n$,行列式的项为$(-1)^(tau(j_1, j_2, ..., j_n)) a_(1j_1)a_(2j_2)...a_(n j_n)$,若要其中各元素不为0,则必须有$j_1 >= 1, j_2 >= 2, ..., j_n >= n$,满足此条件的取值只有$j_1 = 1, j_2 = 2, ..., j_n = n$。这个限定可以用反证法及递推证明

因此上三角矩阵的行列式就是:
$ a_(11)a_(22)...a_(n n) $
即对角线元素的乘积

上面的行列式定义是固定行指标为自然序,以列指标的排列来定义的。固定列指标为自然序,以行指标的排列来定义,同样适用。

构造行列式,行的自然顺序可以这么理解:从第1行到第n行,逐个取其某1列$j_k$的元素,取列的时候不重复,即形成一个n元排列 --- ①
== 行列式的性质
=== 性质1. 
`n阶方阵的转置的行列式和原矩阵行列式相同` $ |A^T|= |A| $ 证明, n阶方阵的转置行列式为: #set math.mat(delim: "|") $ mat(a_11, a_21, ..., a_(n 1); a_12, a_22, ..., a_(n 2); dots.v, dots.v, dots.down, dots.v; a_(1 n), a_(2 n), ..., a_(n n)) = sum(-1)^tau(i_1,i_2,...,i_n)a_(i_1 1)a_(i_2 2)...a_(i_n n) $ 上式可以由行列式的定义直接得出,即从①得出 而又由行列式的行列定义等价性可知,上式也是原矩阵的行列式 由此可见,#highlight[就行列式而言,行列是等价的,对于行的性质,也可以应用于列] === 性质2. `若矩阵中某一行有公共系数k,则计算该行列式时,该数可以提出` #set math.mat(delim: "[") $A =mat(a_(11), a_(12), ..., a_(1n); a_(21), a_(22), ..., a_(2n); dots.v, dots.v, dots.down, dots.v; K a_(k 1), K a_(k 2), ..., K a_(k n); dots.v, dots.v, dots.down, dots.v; a_(11), a_(12), ..., a_(1n);) ->$ #set math.mat(delim: "|") $det(A) = K mat(a_(11), a_(12), ..., a_(1n); a_(21), a_(22), ..., a_(2n); dots.v, dots.v, dots.down, dots.v; a_(k 1), a_(k 2), ..., a_(k n); dots.v, dots.v, dots.down, dots.v; a_(11), a_(12), ..., a_(1n);)$ 证明: 仍然从行列式的定义出发,结合① $det(A) &= sum_(j_1,j_2, ..., j_n)(-1)^(tau((j_1,j_2, ..., j_n))) a_(1j_1)a_(2j_2)... [K a_(k j_k)]a_(n j_n) \ &= K sum_(j_1,j_2, ..., j_n)(-1)^(tau((j_1,j_2, ..., j_n))) a_(1j_1)a_(2j_2)... a_(k j_k)a_(n j_n) $ === 性质3. 若矩阵A中的某一行,是矩阵B和C的和,并且矩阵B C除了该行外其他行和A相同,则$det(A) = det(B) + det(C)$ 即 $ mat(a_(11), a_(12), ..., a_(1n); a_(21), a_(22), ..., a_(2n); dots.v, dots.v, ..., dots.v; b_(k 1) + c_(k 1), b_(k 2) + c_(k 2), ..., b_(k n) + c_(k n); dots.v, dots.v, ..., dots.v; a_(n 1), a_(n 2), ..., a_(n n); )= mat(a_(11), a_(12), ..., a_(1n); a_(21), a_(22), ..., a_(2n); dots.v, dots.v, ..., dots.v; b_(k 1) , b_(k 2), ..., b_(k n); dots.v, dots.v, ..., dots.v; a_(n 1), a_(n 2), ..., a_(n n);) + mat(a_(11), a_(12), ..., a_(1n); a_(21), a_(22), ..., a_(2n); dots.v, dots.v, ..., dots.v; c_(k 1), c_(k 2), ..., c_(k n); dots.v, dots.v, ..., dots.v; a_(n 1), a_(n 2), ..., a_(n n);) $ 证明: 依然从行列式的定义出发: $det(A) &= sum_(j_1,j_2, ..., j_n)(-1)^(tau((j_1,j_2, ..., j_n))) a_(1j_1)a_(2j_2)... [b_(k j_k) + c_(k j_k)] a_(n j_n) \ &= sum_(j_1,j_2, ..., j_n)(-1)^(tau((j_1,j_2, ..., j_n))) a_(1j_1)a_(2j_2)... 
[b_(k j_k) ] a_(n j_n) + \ & sum_(j_1,j_2, ..., j_n)(-1)^(tau((j_1,j_2, ..., j_n))) a_(1j_1)a_(2j_2)... [c_(k j_k)] a_(n j_n) \ &= det(B) + det(C) $ === 性质4. A矩阵交换任意两行,行列式符号相反 证明:不妨设调换$m, k, m > k$行,则行列式为: $ mat( a_(11), a_(12), ..., a_(1n); dots.v, dots.v, ..., dots.v; a_(m 1), a_(m 2), ..., a_(m n); dots.v, dots.v, ..., dots.v; a_(k 1), a_(k 2), ..., a_(k n); dots.v, dots.v, ..., dots.v; a_(n 1), a_(n 2), ..., a_(n n); ) = sum_(j_1,j_2, ..., j_n)(-1)^(tau((j_1,j_2, ..., j_n))) a_(1j_1)a_(2j_2)...[a_(m j_k)]...[a_(k j_m) ] a_(n j_n) $ -- 式1. 由于$m>k$, 原行列式定义是$sum_(j_1,j_2, ..., j_n)(-1)^(tau((j_1,j_2, ..., j_n))) a_(1j_1)a_(2j_2)...[a_(k j_k)]...[a_(m j_m) ] a_(n j_n)$, 相当于对式1.中的$k, m$项调换,根据n元排列性质,调换两项$(j_m, j_k)$会改变奇偶性,即改变符号,从而行列式相反 上面的公式中,要注意,两个式子比较时,应当固定$j_m, j_k$ 在两个式子中都相等,比如,A从第m行取第1个元素,从第k行取第2个元素;在$A_c$中,就是从第k行取第1个元素(此时第k行全是A第m行的元素),从第m行取第二个元素, 这种情况下,A行列式中该项值和$A_c$的该项值就是符号相反,而n元排列是全排列,包含全部选取情况,全部项都相反,因此整体相反 === #highlight(fill: rgb(128, 231, 21))[性质5. 矩阵A中任意两行相等,行列式为0] 证: 利用性质4,不妨设$i, k$行相等,则交换这两行,矩阵A没有发生变化,又因为性质4,交换2行符号相反,即: $ det(A) = det(A_(i<->k))=-det(A) ->det(A) = 0 $ #highlight(fill: rgb(128, 231, 21))[这是多个性质中,唯一一个逻辑证明的] === 性质6,矩阵A中,两行成比例,行列式为0 即$mat( a_(11), a_(12), ..., a_(1n); dots.v, dots.v, ..., dots.v; a_(m 1), a_(m 2), ..., a_(m n); dots.v, dots.v, ..., dots.v; K a_(m 1), K a_(m 2), ..., K a_(m n); dots.v, dots.v, ..., dots.v; a_(n 1), a_(n 2), ..., a_(n n); ) = 0 $ 证:根据 性质2.(行列式某行公共系数可以提出),上面的行列式可以写作 $ mat( a_(11), a_(12), ..., a_(1n); dots.v, dots.v, ..., dots.v; a_(m 1), a_(m 2), ..., a_(m n); dots.v, dots.v, ..., dots.v; K a_(m 1), K a_(m 2), ..., K a_(m n); dots.v, dots.v, ..., dots.v; a_(n 1), a_(n 2), ..., a_(n n); ) = K mat( a_(11), a_(12), ..., a_(1n); dots.v, dots.v, ..., dots.v; a_(m 1), a_(m 2), ..., a_(m n); dots.v, dots.v, ..., dots.v; a_(m 1), a_(m 2), ..., a_(m n); dots.v, dots.v, ..., dots.v; a_(n 1), a_(n 2), ..., a_(n n); ) = K 0 = 0 $ 根据性质5, #highlight[右边行列式中有2行相同],行列式为0,因此原行列式为0 === 性质7. 
矩阵A中某一行的倍数加到另外一行上,行列式不变 即 $ mat( a_(11), a_(12), ..., a_(1n); dots.v, dots.v, ..., dots.v; a_(m 1), a_(m 2), ..., a_(m n); dots.v, dots.v, ..., dots.v; K a_(m 1) + a_(l 1), K a_(m 2) + a_(l 2), ..., K a_(m n) + a_(l n); dots.v, dots.v, ..., dots.v; a_(n 1), a_(n 2), ..., a_(n n); ) = det(A) $ 证: 利用性质3 #highlight()[行元素求和等与分开的两个行列式求和], 性质6. #highlight()[两行成倍数,行列式为0] $ mat( a_(11), a_(12), ..., a_(1n); dots.v, dots.v, ..., dots.v; a_(m 1), a_(m 2), ..., a_(m n); dots.v, dots.v, ..., dots.v; K a_(m 1) + a_(l 1), K a_(m 2) + a_(l 2), ..., K a_(m n) + a_(l n); dots.v, dots.v, ..., dots.v; a_(n 1), a_(n 2), ..., a_(n n); ) &=mat( a_(11), a_(12), ..., a_(1n); dots.v, dots.v, ..., dots.v; a_(m 1), a_(m 2), ..., a_(m n); dots.v, dots.v, ..., dots.v; K a_(m 1), K a_(m 2) , ..., K a_(m n); dots.v, dots.v, ..., dots.v; a_(n 1), a_(n 2), ..., a_(n n); ) + mat( a_(11), a_(12), ..., a_(1n); dots.v, dots.v, ..., dots.v; a_(m 1), a_(m 2), ..., a_(m n); dots.v, dots.v, ..., dots.v; a_(l 1), a_(l 2), ..., a_(l n); dots.v, dots.v, ..., dots.v; a_(n 1), a_(n 2), ..., a_(n n); ) \ &= 0 + det(A) = det(A) $ 性质3 可用于大型矩阵行列式的分解,将其拆分成若干稀疏矩阵的和,使得便于求解。 行列式的性质与矩阵的初等变换息息相关,例如,性质5就对应于系数矩阵中出现0行,此时方程组一定无解或者有无穷多解,性质7对应于矩阵的行叠加,性质4. 对于矩阵的行交换等。 这些操作可能会改变行列式的符号,但不会使其$ m -> 0 或 者 0->m; m!=0$。 == 行列式(行)展开 从行列式的定义出发,在行列式的$n!$ 求和项中,每一项都需要从第$k$行取一个元素,我们以$j_k$表示去了第几个(即第几列元素)。我们不妨以第1行为例子,假设固定第1行取第1个元素,即$j_1 = 1$,这种情况下,后续的$j_2, j_3, ..., j_n$ 还能组成$(n-1)!$ 排列,同时$j_k != 1$。而$j_1$ 一共有 n 种取法$[1, n]$。所以行列式就可以写作 $& det(A) = sum_(j_1,j_2, ..., j_n)(-1)^(tau((j_1,j_2, ..., j_n))) a_(1j_1)a_(2j_2)... a_(n j_n)\ &= sum_(1,j_2,..., j_n)(-1)^(tau(1,j_2, ..., j_n))a_(1 1)a_(2j_2)... a_(n j_n) + sum_(2,j_2,..., j_n)(-1)^(tau(2,j_2, ..., j_n))a_(1 1)a_(2j_2)... a_(n j_n) + ... + \ & sum_(n,j_2,..., j_n)(-1)^(tau(n,j_2, ..., j_n))a_(1 n)a_(2j_2)... a_(n j_n) = \ & sum_(k=1)^n sum_(k,j_2,..., j_n)(-1)^(tau(k,j_2, ..., j_n))a_(1 k)a_(2j_2)... 
a_(n j_n) $
观察最后的通项公式$a_(1 k)[a_(2 j_2) ... a_(n j_n)]$:排列$j_2, ..., j_n$ 中任意一项$j_m != k$,这相当于划去$a_(1 k)$ 所在列;同时,方括号部分的行指标都不为1(自然序),这相当于划去$a_(1 k)$所在行。总结来说,$[a_(2 j_2) ... a_(n j_n)]$ 这些项都取自将矩阵A中$a_(1 k)$所在行列划去后剩余的部分,这个部分可以看做一个新的矩阵$A_k$,而
$ sum_(k,j_2,..., j_n)(-1)^(tau(k,j_2, ..., j_n))a_(1 k)a_(2j_2)... a_(n j_n) = a_(1 k)[sum_(j_2, ..., j_n) (-1)^(tau(k,j_2, ..., j_n))a_(2j_2)... a_(n j_n)] ?= a_(1 k)(-1)^(f(1,k)) det(A_k) $
其中方括号项几乎就是$A_k$的行列式定义,只是符号不同。我们观察$tau(k, j_2, ..., j_n)$ 和$tau(j_2, ..., j_n)$的关系:在第1个位置插入$k$,$j_2, ..., j_n$ 中比$k$小的有$k-1$个,比$k$大的有$n-k$个;$j_m < k$ 形成逆序,$j_m > k$ 形成顺序。所以在第1个位置插入$k$,相当于增加$k-1$个逆序对,即
$ tau(k, j_2, ..., j_n) = tau(j_2, ..., j_n) + k-1 ->(-1)^(tau(k,j_2, ..., j_n)) = (-1)^(k-1)(-1)^(tau(j_2, ..., j_n)) $
上面我们选择从第1行来展开行列式,$k$就出现在n元排列的第1个位置。当我们选择任意行$i$时,$k$就出现在第$i$个位置,可以表示为$(j_1, j_2, ..., j_(i-1), k, j_(i+1), ...,j_n)$,这相当于将排列$(k, j_1, ..., j_n), j_m != k$中的$k$右移$i-1$位。每移动一次,改变一次符号,共改变$(-1)^(i-1)$,结合前文$k$处于第1项的情形,可得当$k$处于第$i$项时:
$ (-1)^(tau(j_1,...,j_(i-1),k,...,j_n)) = (-1)^(k-1) (-1)^(i-1) (-1)^(tau(j_1,...,j_n)) = \ (-1)^(k+i-2) (-1)^(tau(j_1,...,j_n)) = (-1)^(k+i) (-1)^(tau(j_1,...,j_n)); j_m!=k $
这里说明一下为什么不是通过将$k$和第$i$项直接交换:当这两项交换时,同时改变了$j_1, j_2, ..., j_(i-1)$ 和$j_i$ 的顺序,得到的结果是$j_i,j_1,..., k, ...,j_n$,不是我们讨论的顺序

因此行列式按第$i$行展开就可以写作:
$ det(A) = sum_(j=1)^n (-1)^(i + j) a_(i j) det(A_(i j)) $
上面的公式中,$det(A_(i j))$叫做$a_(i j)$的余子式,记作$M_(i j)$;$(-1)^(i + j) M_(i j)$ 叫做代数余子式,记作$C_(i j)$。因此这个公式也写作$det(A)=sum_(j=1)^n a_(i j) C_(i j)$

从这个推导过程和定义,我们自然而然地得到定理
// todo 计数
#theorem_counter.step()
=== 定理#theorem_counter.display(). `n阶方阵的行列式可以展开成某一行的各元素与其代数余子式的乘积之和`
#theorem_counter.step()
=== 定理#theorem_counter.display() `n阶方阵的行列式可以展开成某一列的各元素与其代数余子式的乘积之和`
证: 考虑$A^T$的行列式,将其按行展开,等价于将原矩阵A按列展开;而根据性质1. 矩阵的行列式和其转置的行列式相等,则得证(行列等价性)
#theorem_counter.step()
=== 定理#theorem_counter.display() . 
n阶矩阵A的第i行元素与第k行的对应元素的代数余子式之和为0 即$sum a_(i n)A_(k n) = 0$ 其中$i!=k$ 前述推导过程中可以发现,行列式按$i$行展开时,其代数余子式只和元素的位置有关和#highlight[元素的值无关],因此为了构造左边,我们可以构造这样的矩阵 #set math.mat(delim: "[") $ mat(a_(11), a_(12), ..., a_(1 n); dots.v, dots.v, dots.v, dots.v; a_(i 1), a_(i 2), ..., a_(i m); dots.v, dots.v, dots.v, dots.v; a_(i 1), a_(i 2), ..., a_(i m) ; dots.v, dots.v, dots.v, dots.v; a_(n 1), a_(n 2), ..., a_(n m); )<- 将 第 k 行 设 置 为 第 i 行 元 素 $ #set math.mat(delim: "|") 这个矩阵,按照第$k$行展开时,就是定理#theorem_counter.display() 的左边,该矩阵有2行一样,根据性质5.(任意两行相等,行列式为0),其行列式为0 由于行列式的行列等价性,自然可得 #theorem_counter.step() === 定理#theorem_counter.display(). n阶矩阵A的第i列元素与第k列的对应元素的代数余子式之和为0 === 范德蒙德行列式 形如 $mat(1,1,..., 1; a_1, a_2, ..., a_n; a_1^2,a_2^2,..., a_n^2; a_1^(n-1), a_2^(n-1), ..., a_n^(n-1) ) $ 的行列式,称为范德蒙德行列式,该行列式计算存在公式$product_(1<=j<i<=n)(a_i -a_j)$,可以直接计算得出。 证明(精彩推演): 对于n=2的情形,$mat(1,1;a_1, a_2) = (a_2 - a_1)$成立,现在,我们假设对于$n-1$阶成立,我们证明n阶也成立 $ mat(1,1,..., 1; a_1, a_2, ..., a_n; a_1^2,a_2^2,..., a_n^2; a_1^(n-1), a_2^(n-1), ..., a_n^(n-1) ) ->(第 n 行 + -a_1 第 n-1行) -> mat(1,1,..., 1; a_1, a_2, ..., a_n; a_1^2,a_2^2,..., a_n^2; 0, a_2^(n-2)(a_2 - a_1), ..., a_n^(n-2)(a_n - a_1) )\ ->(第 n -1 行 + (-a_1) n - 2行) -> mat(1,1,..., 1; a_1, a_2, ..., a_n; a_1^2,a_2^2,..., a_n^2; dots.v, dots.v,dots.v, dots.v; 0, a_2^(n-3)(a_2 - a_1), ..., a_n^(n-3)(a_n - a_1); 0, a_2^(n-2)(a_2 - a_1), ..., a_n^(n-2)(a_n - a_1) ) $ 以此类推,我们从最后一行开始,以此减去前一行的$a_1$倍,最终能使得第1列除第1行外全为0: $ mat(1,1,..., 1; 0, a_2 - a_1, ..., a_n - a_1; 0,a_2(a_2 - a_1),..., a_n(a_n-a_1); dots.v, dots.v,dots.v, dots.v; 0, a_2^(n-3)(a_2 - a_1), ..., a_n^(n-3)(a_n - a_1); 0, a_2^(n-2)(a_2 - a_1), ..., a_n^(n-2)(a_n - a_1) ) $ 将其按第1列展开,则 $ det(A) = 1 (-1)^(1+1) mat( a_2 - a_1, ..., a_n - a_1; a_2(a_2 - a_1),..., a_n(a_n-a_1); dots.v,dots.v, dots.v; a_2^(n-3)(a_2 - a_1), ..., a_n^(n-3)(a_n - a_1); a_2^(n-2)(a_2 - a_1), ..., a_n^(n-2)(a_n - a_1)) <- 第 1 列 有 公 共 系 数 a_2 - a_1 ... 
$ 根据性质2.(单行或者#highlight[列]公共系数可提前),上式可以化作: $(a_2 - a_1)(a_3 - a_1)(...)(a_n-a_1) mat(1, ..., 1; a_2,..., a_n; dots.v,dots.v, dots.v; a_2^(n-3), ..., a_n^(n-3); a_2^(n-2), ..., a_n^(n-2)) $ 右边的行列式即第(n-1)阶范德蒙行列式(注意,其其实元素下标是$a_2$), 即$product_(1 <= 1 < i <= n)(a_i - a_1)product_(2 <=j<i<=n)(a_i - a_j) ->product_(1 <= j < i <= n)(a_i - a_j)$ 范德蒙德行列式可以通过计算公式直接求解 至此,行列式求解已经有 1. 化成上三角计算(增广矩阵阶梯型) 2. 按行、按列展开 3. 直接分解成多个行列式的和 4. 范德蒙德行列式 公式求解 == 线性方程组的解和行列式的关系 0. 当线性方程组个数少于未知量数目时,一定没有唯一解,因此我们只考虑n个未知量、n个方程的方程组。 - 如果方程组个数多于未知量数目,在有解的情况下,有一部分方程一定是多余的;无解则任意方程出现$0=d$的情况,比如$cases(x_1 = 2,x_1=3)$ 这个未知量只有1个,方程组2个,但是这个方程组无解,因为化简第2行,有$0=1$ 1. 当线性方程组组的增广矩阵$hat(A)$ 化成阶梯型$hat(J)$,其系数矩阵也变成阶梯型$J$,如果$hat(J)$对应的方程组出现了$0=d$这样的方程,则方程组无解,此时$J$中有0行,进而$|J| = 0$。同样的,当出现$hat(J)$ 有$0=0$情况时,方程组有无穷多解,此时 $J$ 中也存在0行, $|J| = 0$ 2. 当$hat(J)$ 没有$0=d$ 或者0行时,此时$J$是一个上三角矩阵,而且其对角线元素均不为0(主元定义),根据上三角矩阵的行列式等于对角线元素乘积,因此$|J| != 0$ 我们注意到,将系数矩阵$A$化简为$J$ 需要经过一系列初等行变换,而初等行变换不会改变行列式的非零性质(但是可能会改变符号,比如行交换),因此有 #theorem_counter.step() === 定理 #theorem_counter.display(). 线性方程组有唯一解的充要条件是系数矩阵行列式不为0 ==== 推论,对于线性齐次方程组,其有非0解的充要条件是其系数矩阵的行列式为0 === 线性方程组的解的公式表示 线性方程组有解的情况下,其解可以表示为$[mat(B_1)/mat(A), mat(B_2)/mat(A), ..., mat(B_n)/mat(A)]$,其中$B_i$是将系数矩阵A的第$i$列替换为常数项得到的矩阵 下面给出形式证明(即验证结果正确性,但不推导),把$x_i = mat(B_i)/mat(A)$代入增广矩阵中的第i行 $ sum_(j=1)^(n) a_(i j)mat(B_j)/mat(A) &= 1/mat(A) sum_j a_(i j) (sum_(k=1)^n b_k A_(k j)) <-|B_j| 按 列 展 开 \ &= 1/mat(A) sum_j sum_k a_(i j) b_k A_(k j) = 1/mat(A) sum_k b_k sum_j a_(i j) A_(k j) <- 此 处 更 改 了 求 和 顺 序 $ 对于$ sum_j a_(i j) A_(k j)$ 当$k!=i$ 时,根据定理5. 其值为0,当$k=i$时,就是$sum_j a_(i j) A_(i j) = mat(A)$ 因此$ 1/mat(A) sum_k b_k sum_j a_(i j) A_(k j)=1/mat(A) b_i mat(A) = b_i$ 这里有几个点要注意: 1. 代数余子式只和元素的位置、有关,和按行、按列展开无关,某个元素的代数余子式,既可以看做按行展开的一部分,也可以看做按列展开的一部分 2. 双重求和的性质 定理7. 
和 线性方程组的公式表示合起来称作 `克拉默(Cramer)法则` == 拉普拉斯定理 在n阶矩阵中,任意选择k行、k列,这些行列交叉位置的元素按照原来的排定顺序形成的k阶矩阵的行列式称作A的一个#highlight[k阶子式],即从原矩阵中选择$i_1, i_2, ..., i_k$行,$j_1, j_2, ..., j_k$列,组成的新的行列式$mat(a_(i_1 j_1), a_(i_1, j_2), ..., a_(i_1 j_k);a_(i_2 j_1), a_(i_2, j_2), ..., a_(i_2 j_k); ...,...,...,...;a_(i_k j_1), a_(i_k, j_2), ..., a_(i_k j_k);)$, #set math.mat(delim: "(") 记作$A mat(i_1,i_2, ..., i_k;j_1,j_2, ..., j_k) (*)$ 从A中划去上述选定的行和列,剩余的元素按照原来的次序,组成一个$n-k$阶矩阵,其行列式称为k阶子式的余子式,与单元素的余子式相仿,前面乘以$(-1)^(sum i_k + sum(j_k))$,称作k阶子式的代数余子式。不妨设$i^'_1, i^'_2, ...,i^'_(n-k) = (1, ...,n) \\(i_1,i_2,..., i_k);j^'_1, j^'_2, ...,j^'_(n-k) = (1, ...,n) \\(i_1,i_2,..., i_k)$ 则$(*)$得余子式就是$A mat(i^'_1, i^'_2, ...,i^'_(n-k);j^'_1, j^'_2, ...,j^'_(n-k))$ #highlight(fill: rgb(200, 50, 128,))[拉普拉斯定理 ] // #set math.mat(delim: "|") n阶矩阵A,取$i_1,i_2, ..., i_k$ ($i_1 < i_2 ... < i_k$)行,则由这些行内元素组成的所有k阶子式与它们对应的代数余子式之和等于$mat(A)$ 即$mat(A) = sum_(1 <= j_1 <j_2 < ... < j_k <= n)A mat(i_1,i_2, ..., i_k;j_1,j_2, ..., j_k) (-1)^(sum(i_l)+sum(j_l))A mat(i^'_1, i^'_2, ...,i^'_(n-k);j^'_1, j^'_2, ...,j^'_(n-k))$ 拉普拉斯展开较为麻烦,这里不给出证明过程(以后有时间在写)。这里需要记住拉普拉斯展开的一个重要推论,即: $ O=mat( a_(11),a_(12),...,a_(1 k), 0, 0, 0, ..., 0; a_(21,),a_(22),...,a_(2 k),0, 0, 0, ..., 0; dots.v, dots.v,dots.v,dots.v, dots.v,dots.v, dots.v,dots.v , dots.v; a_(k 1,),a_(k 2),...,a_(k k),0, 0, 0, ..., 0; b_(1 1),b_(1 2), ...,b_(1 k),c_(1,1), c_(1,2), c_(1,3), ..., c_(1,t); b_(2 1),b_(2 2), ...,b_(2 k),c_(2,1), c_(2,2), c_(2,3), ..., c_(2,t); dots.v, dots.v,dots.v,dots.v, dots.v,dots.v, dots.v,dots.v , dots.v; b_(t 1),b_(t 2), ...,b_(t k),c_(t,1), c_(t,2), c_(t,3), ..., c_(t,t); ) $ 可以视作以下4个矩阵组成: $A_k = mat(a_(11),a_(12),...,a_(1 k);a_(21,),a_(22),...,a_(2 k); dots.v, dots.v,dots.v,dots.v;a_(k 1,),a_(k 2),...,a_(k k)); B = mat( b_(1 1),b_(1 2), ...,b_(1 k);dots.v, dots.v,dots.v,dots.v;b_(t 1),b_(t 2), ...,b_(t k)); C_t = mat(c_(1,1), c_(1,2), c_(1,3), ..., c_(1,t);dots.v,dots.v, dots.v,dots.v , dots.v;c_(t,1), c_(t,2), c_(t,3), ..., c_(t,t);), bold(0), 
O=mat(bold(A), 0;bold(B), bold(C)) -> |O| = |A||C|$
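
下面用一个按定义计算行列式的 Python 示例对上述定义做数值验证(示意性实现,函数名均为自拟,复杂度为 $O(n dot n!)$,仅用于小矩阵):上三角矩阵的行列式应等于对角线元素之积。

```python
from itertools import permutations

def inverse_order(nums):
    """计算排列的逆序数 tau(与正文中 get_inverse_order 相同思路)"""
    cnt = 0
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] > nums[j]:
                cnt += 1
    return cnt

def det_by_definition(A):
    """对列指标的全部 n! 个排列求和: sum (-1)^tau * a_(1 j1) ... a_(n jn)"""
    n = len(A)
    total = 0
    for cols in permutations(range(n)):
        sign = (-1) ** inverse_order(list(cols))
        prod = 1
        for row, col in enumerate(cols):
            prod *= A[row][col]
        total += sign * prod
    return total

# 上三角矩阵: 行列式 = 对角线元素之积 = 2 * 4 * 6 = 48
print(det_by_definition([[2, 1, 3], [0, 4, 5], [0, 0, 6]]))
```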
https://github.com/EliasRothfuss/vorlage_typst_doku-master
https://raw.githubusercontent.com/EliasRothfuss/vorlage_typst_doku-master/main/chapter/anhang.typ
typst
= Use of Artificial-Intelligence-Based Tools
In the course of this work, tools based on artificial intelligence (AI) were used. @label:tab:ki-werkzeuge gives an overview of the tools used and their respective purposes.
#figure( caption: [Overview of the AI tools and their use], table( columns: (1fr, 2fr), align: (left,left,), stroke: none, column-gutter: 1em, row-gutter: 0.4em, table.header([#strong[Tool];], [#strong[Description of use];],), table.hline(), [ChatGPT], [- Background research on established principles of optical distance-measurement sensing (see Section …) - Search for manufacturers of lidar sensors (see Section …) - … ], [ChatPDF], [- Research on and summarising of scientific studies in the field of … - … ], [DeepL], [- Translation of the paper by $[dots.h]$], [Tabnine AI coding assistant], [- Plugin enabled in MS Visual Studio for programming the … - … ], […], [- …], table.hline(), ), kind: table )<label:tab:ki-werkzeuge>
#pagebreak(weak: true)
= Supplements
== Details on certain theoretical foundations
== Further details that would disturb the reading flow in the main part
#pagebreak(weak: true)
= Details on laboratory setups and measurement results
== Experimental setup
== List of the measurement instruments used
== Overview of the measurement results
== Circuit diagram and picture of the prototype board
#pagebreak(weak: true)
= Additional information on the software used
== Structure chart of the program design
== Important parts of the source code
#pagebreak(weak: true)
= Data sheets
The following pages show one way of taking over complete pages from another PDF document, e.g. for including data sheets. The drawback of this method is that none of the formatting (headers, page numbers, margins, etc.) is shown on these pages. The method is therefore rarely chosen. At least the #emph[pdfpages]; package ensures correct page numbering on the native LaTeX pages that follow. A better alternative is to include individual pages with #emph[`\includegraphics`];.
https://github.com/Kasci/LiturgicalBooks
https://raw.githubusercontent.com/Kasci/LiturgicalBooks/master/SK/zalmy/Z003.typ
typst
Lord, how many are those who afflict me! \* Many rise up against me.
Many say of me: \* "God does not help him."
But you, Lord, are my protector, \* my glory, who lifts up my head.
I cried aloud to the Lord \* and he answered me from his holy mountain.
I lay down to rest and fell asleep. \* I awoke, for the Lord sustains me.
I will not fear the thousands of people who surround me. \* Arise, Lord; save me, my God.
For you have struck my enemies on the face \* and broken the teeth of sinners.
Lord, you are our salvation. \* May your blessing come down upon your people.
I lay down to rest and fell asleep. \* I awoke, for the Lord sustains me.
https://github.com/katamyra/Notes
https://raw.githubusercontent.com/katamyra/Notes/main/Compiled%20School%20Notes/CS3001/Modules/Therac25.typ
typst
#import "../../../template.typ": *
= Therac-25
#definition[
  The *Therac-25* was a linear accelerator created to treat cancer with radiation therapy
]
The Therac-25 was an all-new version of the earlier Therac-6 and Therac-20 machines but, unlike them, had no manual overrides. The Therac-25 malfunctioned and could deliver too much or too little radiation (too little can also be dangerous), causing great harm.
== Software Errors
Two of the Therac-25's main software errors were *race conditions*.
#definition[
  A *race condition* occurs when a system tries to perform two operations at nearly the same time, but the operations must happen in a specific order for the system to work correctly.
]
One race condition caused the system to act on stale information: a value had been changed to new information while the computer was not checking it.
== Design Flaws
The design process was flawed in that it reused older code written for systems that had manual overrides and hardware locks.
- So the flaws were system flaws, not only software errors
The system was not designed to be fail-safe
== Moral Responsibility
In order for a moral agent to be responsible for a harmful event, two conditions must hold:
- *Causal Condition*: The actions (or inactions) of the agent must have caused the harm
- *Mental Condition*: The actions (or inactions) of the agent must have been intended or willed by the agent\*
\*This also includes unintended harm if it came from carelessness or negligence
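
As a minimal sketch (not the Therac-25's actual code; all names here are invented for illustration), the following Python snippet shows the kind of unsynchronized read-modify-write race described above, and how a lock enforces the required ordering:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n, use_lock):
    """Increment the shared counter n times, optionally holding a lock."""
    global counter
    for _ in range(n):
        if use_lock:
            with lock:
                counter += 1  # read, add, write as one atomic critical section
        else:
            counter += 1  # unsynchronized read-modify-write: updates can be lost

def run(use_lock, n=100_000):
    """Run two incrementing threads and return the final counter value."""
    global counter
    counter = 0
    threads = [threading.Thread(target=increment, args=(n, use_lock)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

# With the lock, the result is deterministic: 2 * 100_000 = 200_000.
# Without it, the total may come up short, depending on thread interleaving.
print(run(True))
```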
https://github.com/swouf/alina-poster-template
https://raw.githubusercontent.com/swouf/alina-poster-template/main/alina-poster.typ
typst
Creative Commons Attribution Share Alike 4.0 International
/** * * Alina Poster Template * * Version: 0.0.1 * License: CC-BY-SA 4.0 International * Author: <NAME> <<EMAIL>> */ #let palette = ( primary: ( main: color.rgb("#5C4DCB"), container: color.rgb("#BEB9E6"), onContainer: color.rgb("#171333"), ), secondary: ( main: color.rgb("#A3B08D"), container: color.rgb("#E0E6D7"), onContainer: color.rgb("#2F3329"), ), ) #let display_authors(authors: array) = { set text(size: 32pt, fill: palette.primary.container) let affiliations = () for author in authors { if "affiliation" in author { affiliations.push(author.affiliation) } } affiliations = affiliations.dedup() let nbrOfAuthors = authors.len() let authorNbr = 0 block(above: 2em, below: 1em, [ #for author in authors { let affiliationNbr = none if "affiliation" in author { affiliationNbr = affiliations.position(a => a == author.affiliation) + 1 } let correspondingAuthorSymbol = if "correspondingAuthor" in author { if author.correspondingAuthor { [#sym.ast.basic] } else { none } } else { none } let terminationString = if authorNbr < nbrOfAuthors - 1 { ", " } else { none } [#author.name#super[#affiliationNbr]#correspondingAuthorSymbol#terminationString] authorNbr = authorNbr + 1 } ]) block(above: 1em, below: 2em, [ #let nbrOfAffiliations = affiliations.len() #let affiliationNbr = 0 #for affiliation in affiliations { let terminationString = if affiliationNbr < nbrOfAffiliations - 1 { ", " } else { none } [#super[#(affiliationNbr + 1)]#affiliation#terminationString] affiliationNbr = affiliationNbr + 1 } ]) } #let header( title: none, authors: array, topAbstract: [], ) = { set align(left) set text(fill: white) block(fill: palette.primary.onContainer, inset: 10%, below: 0pt, height: 100%, width: 100%, [ #heading(level: 1, text(size: 72pt, title)) #display_authors(authors: authors) #par(topAbstract) ]) } #let alina_footer( footer: [], ) = { set align(left) set text(fill: white) block(width: 100%, height: 100%, fill: palette.primary.onContainer, footer) } #let alina_content( content: [], 
) = { block(width: 100%, height: 100%, above: 0pt, below: 0pt, content) } #let alina_block( title: [], body: [], ) = { set grid(gutter: 24pt) block(inset: 8pt, width: 100%, [ #stack(dir: ttb, block(fill: palette.secondary.container, width: 100%, inset: 24pt, [ #set align(left) #set text(fill: palette.secondary.onContainer) #set heading(numbering: "1.") = #title ]), block(inset: 24pt, width: 100%, fill: white, body) ) ]) } #let alina_ammo_bar( content: [], ) = { block(fill: palette.secondary.container, width: 100%, height: 100%, inset: 32pt, content) } #let alina_chip(fill: color, textFill: color, content: str) = { box(fill: fill, radius: 50%, inset: 0.6em, text(weight: "semibold", fill: textFill, content)) } #let alina_highlight(content: []) = { set text(fill: palette.primary.onContainer) block(fill: palette.primary.container, inset: 1em, content) } #let alina_poster( title: none, authors: array, topAbstract: [], bottomAbstract: [], footer: [], body: [], ) = { set page("a0", margin: (left: 0pt, right: 0pt, top: 0pt, bottom: 0pt), fill: color.luma(221)) set text(size: 24pt, font: "Helvetica Neue") set par(justify: true) block(height: 100%, width: 100%, grid(rows: (450pt, 100% - 700pt, 250pt), gutter: 0pt, header(title: title, authors: authors, topAbstract: topAbstract), alina_content(content: body), alina_footer(footer: footer), ) ) }
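
A minimal usage sketch based on the parameters defined above (the import path, author names, and all content are placeholders, not part of the template):

```typ
#import "alina-poster.typ": alina_poster, alina_block

#alina_poster(
  title: "A Poster Title",
  authors: (
    (name: "Ada Lovelace", affiliation: "Analytical Engines Inc.", correspondingAuthor: true),
    (name: "Charles Babbage", affiliation: "Analytical Engines Inc."),
  ),
  topAbstract: [A short abstract shown in the header block.],
  footer: [#sym.ast.basic Corresponding author: ada\@example.org],
  body: alina_block(
    title: [Introduction],
    body: lorem(50),
  ),
)
```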
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/meta/numbering_00.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": * #show: test-page #for i in range(0, 9) { numbering("*", i) [ and ] numbering("I.a", i, i) [ for #i \ ] }
https://github.com/AOx0/expo-nosql
https://raw.githubusercontent.com/AOx0/expo-nosql/main/book/src/dynamic.md
markdown
MIT License
# Dynamic slides

The PDF format does not (trivially) allow including animations, as one would be used to from, say, PowerPoint. The solution PDF-based presentation slides use is to create multiple PDF pages for one slide, each with slightly different content. This enables us to have some basic dynamic elements on our slides.

In this book, we will use the term _logical slide_ for a section of content that was created by one call to `#slide`, and _subslide_ for a resulting PDF page. Each logical slide can have an arbitrary number of subslides and every subslide is part of exactly one logical slide. Note that the same page number is displayed for all subslides of a logical slide.

In the LaTeX beamer package, the functionalities described on this page are called "overlays".

## Reserve space or not?

When you want to specify that a certain piece of content should be displayed on some subslides but not on others, the first question should be what happens on the subslides it is _not_ displayed on. You could want either

- that it does not exist there at all, or
- that it is invisible but still occupies the space it would need otherwise (see [the docs of the `#hide` function](https://typst.app/docs/reference/layout/hide/))

The two different behaviours can be achieved using either `#only` or `#uncover`, respectively. The intuition behind it is that, in one case, content _only_ exists on some slides, and, in the other case, it is merely _covered_ when not displayed.

## General syntax for `#only` and `#uncover`

Both functions are used in the same way. They each take two positional arguments: the first is a description of the subslides the content is supposed to be shown on, the second is the content itself. Note that Typst provides some syntactic sugar for trailing content arguments, namely putting the content block _behind_ the function call.
You could therefore write:

```typ
#only(2)[Some content to display only on subslide 2]
#uncover(3)[Some content to uncover only on subslide 3]
```

In this example, we specified only a single subslide index, resulting in content that is shown on that exact subslide and at no other one. Let's explore more complex rules next:

## Complex display rules

There are multiple options to define more complex display rules.

### Array

The simplest extension of the single-number case is to use an array. For example

```typ
#uncover((1, 2, 4))[...]
```

will uncover its content on the first, second and fourth subslide. The array elements can actually themselves be any kind of rule that is explained on this page.

### Interval

You can also provide a (bounded or half-bounded) interval in the form of a dictionary with a `beginning` and/or an `until` key:

```typ
#only((beginning: 1, until: 5))[Content displayed on subslides 1, 2, 3, 4, and 5]
#only((beginning: 2))[Content displayed on subslide 2 and every following one]
#only((until: 3))[Content displayed on subslides 1, 2, and 3]
#only((:))[Content that is always displayed]
```

In the last case, you would not need to use `#only` anyway, obviously.

### Convenient syntax as strings

In principle, you can specify every rule using numbers, arrays, and intervals. However, consider having to write

```typ
#uncover(((until: 2), 4, (beginning: 6, until: 8), (beginning: 10)))[...]
```

That's only fun the first time. Therefore, we provide a convenient alternative. You can equivalently write:

```typ
#uncover("-2, 4, 6-8, 10-")[...]
```

Much better, right? The spaces are optional, so just use them if you find it more readable. Unless you are creating those function calls programmatically, it is a good recommendation to use the single-number syntax (`#only(1)[...]`) if that suffices and the string syntax for any more complex use case.
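
Combining the two functions with the rules above, a slide like the following (hypothetical content) makes the difference between covering and removing visible:

```typ
#slide[
  This line is always visible.

  #uncover("2-")[Hidden on subslide 1, but its space is reserved.]

  #only("2-")[Does not exist on subslide 1, so the content below it moves up.]
]
```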
## Higher level helper functions With `#only` and `#uncover` you can come a long way but there are some reoccurring situations for which helper functions are provided. ### `#one-by-one` and `#line-by-line` Consider some code like the following: ```typ #uncover("1-")[first ] #uncover("2-")[second ] #uncover("3-")[third] ``` The goal here is to uncover parts of the slide one by one, so that an increasing amount of content is shown. A shorter but equivalent way would be to write ```typ #one-by-one[first ][second ][third] ``` And what about this? ```typ #uncover("3-")[first ] #uncover("4-")[second ] #uncover("5-")[third] ``` Now, we still want to uncover certain elements one after the other but starting on subslide 3. We can use the optional `start` argument of `#one-by-one` for that: ```typ #one-by-one(start: 3)[first ][second ][third] ``` `#one-by-one` is especially useful for arbitrary contents that you want to display in that manner. Often, you just want to do that with very simple elements, however. A very frequent use case are bullet lists. Instead of ```typ #one-by-one[ - first ][ - second ][ - third ] ``` you can also write ```typ #line-by-line[ - first - second - third ] ``` The content provided as an argument to `#line-by-line` is parsed as a `sequence` by Typst with one element per line (hence the name of this function). We then simply iterate over that `sequence` as if it were given to `#one-by-one`. Note that there also is an optional `start` argument for `#line-by-line`, which works just the same as for `#one-by-one`. ### `pause` as an alternative to `#one-by-one` There is yet another way to solve the same problem as `#one-by-one`. If you have used the LaTeX beamer package before, you might be familiar with the `\pause` command. It makes everything after it on that slide appear on the next subslide. Remember that the concept of "do something with everything after it" is covered by the `#show: ...` mechanism in Typst. 
We exploit that to use the `pause` function in the following way. ```typ Show this first. #show: pause(2) Show this later. #show: pause(3) Show this even later. #show: pause(4) That took aaaages! ``` This would be equivalent to: ```typ #one-by-one[ Show this first. ][ Show this later. ][ Show this even later. ][ That took aaaages! ] ``` It is obvious that `pause` only brings an advantage over `#one-by-one` when you want to distribute a lot of code onto different subslides. **Hint:** You might be annoyed by having to manually number the pauses as in the code above. You can diminish that issue a bit by using a counter variable: ```typ Show this first. #let pc = 1 // `pc` for pause counter #{ pc += 1 } #show: pause(pc) Show this later. #{ pc += 1 } #show: pause(pc) Show this even later. #{ pc += 1 } #show: pause(pc) That took aaaages! ``` This has the advantage that every `pause` line looks identical and you can move them around arbitrarily. In later versions of this template, there could be a nicer solution to this issue, hopefully. ### `#alternatives` to substitute content The so far discussed helpers `#one-by-one`, `#line-by-line`, and `pause` all build upon `#uncover`. There is an analogon to `#one-by-one` that is based on `#only`, namely `#alternatives`. You can use it to show some content on one subslide, then substitute it by something else, then by something else, etc. Consider this example: ```typ #only(1)[Ann] #only(2)[Bob] #only(3)[Christopher] likes #only(1)[chocolate] #only(2)[strawberry] #only(3)[vanilla] ice cream. ``` Here, we want to display three different sentences with the same structure: Some person likes some sort of ice cream. Using `#only`, the positioning of `likes` and `ice cream` moves around in the produced slide because, for example, `Ann` takes much less space than `Christopher`. This somewhat disturbs the perception of the constant structure of the sentence and that only the names and kinds of ice cream change. 
To avoid such movement and only substitute certain parts of content, you can use the `#alternatives` function. With it, our example becomes:

```typ
#alternatives[Ann][Bob][Christopher] likes #alternatives[chocolate][strawberry][vanilla] ice cream.
```

`#alternatives` will put enough empty space around, for example, `Ann` such that it uses the same amount of space as `Christopher`. In a sense, it is like a mix of `#only` and `#uncover` with some reserving of space.

By default, all elements that enter an `#alternatives` command are aligned at the bottom left corner. This might not always be the desired or most pleasant way to position it, so you can provide an optional `position` argument to `#alternatives` that takes an [`alignment` or `2d alignment`](https://typst.app/docs/reference/layout/align/#parameters--alignment). For example:

```typ
We know that #alternatives(position: center + horizon)[$pi$][$sqrt(2)^2 + 1/3$] is #alternatives[irrational][rational].
```

makes the mathematical terms look better positioned. Similar to `#one-by-one` and `#line-by-line`, `#alternatives` also has an optional `start` argument that works just the same as for the other two.

## Cover mode

Covered content (using `#uncover`, `#one-by-one`, `#line-by-line`, or `pause`) is completely invisible, by default. You can decide to make it visible but less prominent using the optional `mode` argument to each of those functions. The `mode` argument takes two different values: `"invisible"` (the default) and `"transparent"`. (This terminology is taken from LaTeX beamer as well.) With `mode: "transparent"`, text is printed in a light gray. Use it as follows:

```typ
#uncover("3-5", mode: "transparent")[...]
#one-by-one(start: 2, mode: "transparent")[...][...]
#line-by-line(mode: "transparent")[
  ...
  ...
]
#show: pause(4, mode: "transparent")
```

**Warning!** The transparent mode really only wraps the covered content in a

```typ
#text(fill: gray.lighten(50%))[...]
```

so it has only limited control over the actual display. In particular,

- text that defines its own color (e.g. syntax highlighting),
- visualisations,
- images

will not be affected by it. This makes the transparent mode only somewhat useful today.

## Internal number of repetitions

**TL;DR:** For slides with more than ten subslides, you need to set the `max-repetitions` argument of the `#slide` function to a higher number. Usually, you can completely ignore this section, though.

For technical reasons (this might change in the future when we find a better solution), producing PDF pages for subslides is implemented in the following way: Each dynamic element, such as `#only` or `#beginning`, "knows" how many subslides a logical slide must have for it to "make sense". For example, a `#beginning(5)[...]` only makes sense if at least 5 subslides are produced.

Internally, when typesetting a slide, we now look at all the dynamic elements in it and find the maximum of those individual "required" subslide counts. So if a slide contains an `#only(2)[...]`, an `#until(4)[...]`, and nothing else, we know that exactly 4 subslides are necessary.

However, we only acquire this knowledge _after_ the first subslide has been produced, i.e. when all of the slide's content has been "looked at" once. This is why we cannot simply implement something like "produce 4 pages by iterating this loop 4 times". Instead, the (admittedly hacky) solution is to iterate "very often" and check in each iteration whether we still need to produce another page. This works because we always need to produce at least one page for a slide, so we can unhurriedly inspect all dynamic elements and find the maximum subslide count during the first iteration. After that, we have the information we need.

Now, the question arises how often "very often" should be.
This requires a trade-off: Iterating too few times (say, twice) will lead to frequent situations where we ignore dynamic behaviour that was supposed to happen in later subslides (say, in the third). Iterating, say, a thousand times means that we will practically never encounter such situations, but we now perform a thousand iterations _per slide_. Especially when you want to see a live update of your produced PDF as you type, this leads to severe lagging that somewhat defeats the purpose of Typst's speed. (Still faster than LaTeX, though...)

It appears reasonable to assume that occasions where one needs more than ten subslides for a single slide are rare. Therefore, ten is the default value for how often we try to create a new subslide. This should not produce noticeable lag. (If it does for you, consider [creating an issue](https://github.com/andreasKroepelin/typst-slides/issues) so we can discuss this.)

For those hopefully rare occasions where you do, in fact, need more than ten subslides, you can manually increase this number using the `max-repetitions` argument of the `#slide` function:

```typ
#slide(max-repetitions: 20)[
  This is gonna take a while:
  #until(20)[I can wait...]
]
```

Again, use this feature sparingly, as it decreases typesetting performance.

## Handout mode

If you want to distribute your slides after your talk for further reference, you might not want to keep all the dynamic content. Imagine using `one-by-one` on a bullet point list and readers having to scroll through endless pages when they just want to see the full list. You can use `handout: true` in your slides' configuration to achieve this:

```typ
#show: slides.with(
  // ...
  handout: true
)
```

It has the effect that all dynamic visibility of elements _that reserve space_ is switched off. So

```typ
Some text.
#uncover("3-")[you cannot always see this]
...or can you?
```

behaves as

```typ
Some text.
you cannot always see this
...or can you?
```

in handout mode, for example.
Note that `only` and `alternatives` are **not** affected, as there is no obvious way to unify their content onto one slide.
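To make this concrete, here is a small illustrative sketch (not from the original docs; the exact page count depends on your template version):

```typ
#show: slides.with(
  // ...
  handout: true
)

// Even with handout: true, this still produces two subslides,
// because the two variants cannot be merged into a single page:
#alternatives[First variant][Second variant]
```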
https://github.com/protohaven/printed_materials
https://raw.githubusercontent.com/protohaven/printed_materials/main/class-handouts/class-wood_104-wood_lathe.typ
typst
#import "/meta-environments/env-templates.typ": *
#import "./glossary/glossary_terms.typ": *

#show: doc => class_handout(
  title: "Wood Lathe Intro",
  category: "Wood",
  number: "104",
  clearances: ("Wood Lathe", "Woodshop Dust Collection"),
  instructors: ("Someone",),
  authors: ("<NAME> <<EMAIL>>",),
  draft: true,
  doc
)

// Content goes here

= Welcome

Welcome to the Introduction to Lathe class at Protohaven!

#set heading(offset: 1)
#include "/common-policy/shop_rules.typ"
#include "/common-policy/tool_status_tags.typ"
#include "/common-policy/filing_a_tool_report.typ"
#set heading(offset: 0)

#pagebreak()

= Safety

- Use a full face shield (safety glasses at a minimum) whenever the lathe is turned on.
- Tie back long hair, do not wear gloves, and avoid loose clothing or objects that may catch on rotating parts or accessories.
- Always check the speed of the lathe before turning it on. Use slower speeds for larger-diameter or rough pieces, and higher speeds for smaller-diameter, balanced pieces. Always start a piece at a slower speed until the work piece is balanced. If the lathe is shaking or vibrating, lower the speed. If the work piece vibrates, always stop the machine to check the reason.
- Check that all locking devices on the tailstock and tool rest assembly (rest and base) are tight before operating the lathe.
- Position the tool rest close to the work, about one inch away from the material. Check the tool rest position often; as wood is removed, turn off the lathe and re-position the rest.
- Rotate your work piece by hand to make sure it clears the tool rest and bed before turning the lathe's motor on. Be certain that the work piece turns freely and is firmly mounted. A handwheel on the outboard side of the headstock simplifies this process of spinning the lathe by hand before turning on the switch.
- Be aware of what turners call the _red zone_ or _firing zone_. This is the area directly behind and in front of the work piece - the areas a piece is most likely to travel through as it comes off the lathe. A good safety habit is to step out of this zone when turning on the lathe. When observing others turn, stay out of this area.
- Hold turning tools securely on the tool rest, holding the tool in a controlled and comfortable manner. Always contact the tool rest with the tool before contacting the wood.
- Turn the lathe off before adjusting the tool rest or tool rest base (banjo).
- Remove the tool rest before sanding or polishing operations.
- Never leave the lathe running unattended. Turn the power off. Do not leave the lathe until it comes to a complete stop.

If you feel unsure of something, feel free to ask!

= Introduction

== Learning Objectives

TODO

== Terminology

TODO: what other glossary terms are needed here (that are not covered in the tool anatomy section)?

#blank_term
#burl_term
#heartwood_term
#sapwood_term
#spalted_term

= Tools

#set heading(offset: 1)
#include "/common-tools/woodshop_dust_collection.typ"
#include "/common-tools/lathe_wood.typ"
#set heading(offset: 0)

= Resources

Websites? Videos? Printed materials?
https://github.com/Jollywatt/typst-fletcher
https://raw.githubusercontent.com/Jollywatt/typst-fletcher/master/src/node.typ
typst
MIT License
#import "utils.typ": * #import "coords.typ": uv-to-xy, default-ctx, resolve, NAN_COORD, resolve-system #import "shapes.typ" /// Draw a labelled node in a diagram which can connect to edges. /// /// - ..args (any): The first positional argument is #param[node][pos] and the /// second, if given, is #param[node][label]. /// /// - pos (coordinate): Position of the node, or its center coordinate. This may /// be an elastic (row/column) coordinate like `(2, 1)`, or a CeTZ-style /// coordinate expression like `(rel: (30deg, 1cm), to: (2, 1))`. /// /// See the options of `diagram()` to control the physical scale of elastic /// coordinates. /// /// - name (label, none): An optional name to give the node. /// /// Names can sometimes be used in place of coordinates. For example: /// /// #example(``` /// fletcher.diagram( /// node((0,0), $A$, name: <A>), /// node((1,0.6), $B$, name: <B>), /// edge(<A>, <B>, "->"), /// node((rel: (1, 0), to: <B>), $C$) /// ) /// ```) /// /// - label (content): Content to display inside the node. /// /// If a node is larger than its label, you can wrap the label in `align()` to /// control the label alignment within the node. /// /// #example(```typc /// diagram( /// node((0,0), align(bottom + left)[¡Hola!], /// width: 3cm, height: 2cm, fill: yellow), /// ) /// ```) /// /// - inset (length): Padding between the node's content and its outline. /// /// In debug mode, the inset is visualised by a thin green outline. /// /// #example(``` /// diagram( /// debug: 3, /// node-stroke: 1pt, /// node((0,0), [Hello,]), /// edge(), /// node((1,0), [World!], inset: 10pt), /// ) /// ```) /// /// Defaults to #the-param[diagram][node-inset]. /// /// - outset (length): Margin between the node's bounds to the anchor /// points for connecting edges. /// /// This does not affect node layout, only how closely edges connect to the /// node. /// /// In debug mode, the outset is visualised by a thin green outline. 
/// /// #example(``` /// diagram( /// debug: 3, /// node-stroke: 1pt, /// node((0,0), [Hello,]), /// edge(), /// node((1,0), [World!], outset: 10pt), /// ) /// ```) /// /// Defaults to #the-param[diagram][node-outset]. /// /// - width (length, auto): Width of the node. If `auto`, the node's width is /// the width of the node #param[node][label], plus twice the /// #param[node][inset]. /// /// If the width is not `auto`, you can use `align` to control the placement of the node's #param[node][label]. /// /// - height (length, auto): Height of the node. If `auto`, the node's height is the height of the node #param[node][label], plus twice the #param[node][inset]. /// /// If the height is not `auto`, you can use `align` to control the placement of the node's #param[node][label]. /// /// - enclose (array): Positions or names of other nodes to enclose by enlarging /// this node. /// /// If given, causes the node to resize so that its bounding rectangle /// surrounds the given nodes. The center #param[node][pos] does not affect /// the node's position if `enclose` is given, but still affects connecting /// edges. /// /// #box(example(``` /// diagram( /// node-stroke: 1pt, /// node((0,0), [ABC], name: <A>), /// node((1,1), [XYZ], name: <Z>), /// node( /// text(teal)[Node group], stroke: teal, /// enclose: (<A>, <Z>), name: <group>), /// edge(<group>, (3,0.5), stroke: teal), /// ) /// ```)) /// /// - shape (rect, circle, function): Shape of the node's outline. If `auto`, /// one of `rect` or `circle` is chosen depending on the aspect ratio of the /// node's label. /// /// Other shapes are defined in the `fletcher.shapes` /// submodule, including /// #{ /// dictionary(fletcher.shapes).pairs() /// .filter(((k, v)) => type(v) != module) /// .map(((k, v)) => [#raw(k)]) /// .join(last: [, and ])[, ] /// }. /// /// Custom shapes should be specified as a function `(node, extrude, ..parameters) => (..)` /// which returns `cetz` objects. 
/// - The `node` argument is a dictionary containing the node's attributes, /// including its dimensions (`node.size`), and other options (such as /// `node.corner-radius`). /// - The `extrude` argument is a length which the shape outline should be /// extruded outwards by. This serves two functions: to support automatic /// edge anchoring with a non-zero node `outset`, and to create multi-stroke /// effects using the `extrude` node option. /// See the /// #link("https://github.com/Jollywatt/typst-fletcher/blob/master/src/shapes.typ", /// ```plain src/shapes.typ```) source file for example shape implementations. /// /// Defaults to #the-param[diagram][node-shape]. /// /// - stroke (stroke): Stroke style for the node outline. /// /// Defaults to #the-param[diagram][node-stroke]. /// /// - fill (paint): Fill style of the node. The fill is drawn within the node /// outline as defined by the first #param(full: false)[node][extrude] value. /// /// Defaults to #the-param[diagram][node-fill]. /// /// - defocus (number): Strength of the "defocus" adjustment for connectors /// incident with this node. /// /// This affects how connectors attach to non-square nodes. If `0`, the /// adjustment is disabled and connectors are always directed at the node's /// exact center. /// /// #stack( /// dir: ltr, /// spacing: 1fr, /// ..(0.2, 0, -1).enumerate().map(((i, defocus)) => { /// fletcher.diagram(spacing: 8mm, { /// node((i, 0), raw("defocus: "+str(defocus)), stroke: black, defocus: defocus) /// for y in (-1, +1) { /// edge((i - 1, y), (i, 0)) /// edge((i, y), (i, 0)) /// edge((i + 1, y), (i, 0)) /// } /// }) /// }) /// ) /// /// Defaults to #the-param[diagram][node-defocus]. /// /// - extrude (array): Draw strokes around the node at the given offsets to /// obtain a multi-stroke effect. Offsets may be numbers (specifying multiples /// of the stroke's thickness) or lengths. /// /// The node's fill is drawn within the boundary defined by the first offset in /// the array. 
/// /// #diagram( /// node-stroke: 1pt, /// node-fill: red.lighten(70%), /// node((0,0), `(0,)`), /// node((1,0), `(0, 2)`, extrude: (0, 2)), /// node((2,0), `(2, 0)`, extrude: (2, 0)), /// node((3,0), `(0, -2.5, 2mm)`, extrude: (0, -2.5, 2mm)), /// ) /// /// See also #the-param[edge][extrude]. /// /// - corner-radius (length): Radius of rounded corners, if supported by the /// node #param[node][shape]. /// /// Defaults to #the-param[diagram][node-corner-radius]. /// /// - layer (number): Layer on which to draw the node. /// /// Objects on a higher `layer` are drawn on top of objects on a lower /// `layer`. Objects on the same layer are drawn in the order they are passed /// to `diagram()`. /// /// Defaults to layer `0` unless the node #param[node][enclose]s /// points, in which case `layer` defaults to `-1`. /// /// - snap (number, false): The snapping priority for edges connecting to this /// node. A higher priority means edges will automatically snap to this node /// over other overlapping nodes. If `false`, edges only snap to this node if /// manually set with #the-param[edge][snap-to]. /// /// Setting a lower value is useful if the node #param[node][enclose]s other /// nodes that you want to snap to first. /// /// - post (function): Callback function to intercept `cetz` objects before they /// are drawn to the canvas. /// /// This can be used to hide elements without affecting layout (for use with /// #link("https://github.com/touying-typ/touying")[Touying], for example). /// The `hide()` function also helps for this purpose. 
/// #let node( ..args, pos: auto, name: none, label: none, inset: auto, outset: auto, fill: auto, stroke: auto, extrude: (0,), width: auto, height: auto, radius: auto, enclose: (), corner-radius: auto, shape: auto, defocus: auto, snap: 0, layer: auto, post: x => x, ) = { if args.named().len() > 0 { error("Unexpected named argument(s) #..0.", args.named().keys()) } if args.pos().len() > 2 { error("`node()` can have up to two positional arguments; the position and label.") } // interpret first two positional arguments if args.pos().len() == 2 { (pos, label) = args.pos() } else if args.pos().len() == 1 { let arg = args.pos().at(0) // one positional argument may be the coordinate or the label if type(arg) in (array, dictionary, label) { pos = arg label = none } else { pos = if enclose.len() > 0 { auto } else { () } label = arg } } let extrude = as-array(extrude).map(as-number-or-length.with( message: "`extrude` must be a number, length, or an array of those" )) if not (type(snap) in (int, float) or snap == false) { error("`snap` must be a number or `false`; got #0.", repr(snap)) } metadata(( class: "node", pos: (raw: pos), name: pass-none(as-label)(name), label: label, inset: inset, outset: outset, enclose: as-array(enclose), size: (width, height), radius: radius, shape: shape, stroke: stroke, fill: fill, corner-radius: corner-radius, defocus: defocus, extrude: extrude, layer: layer, snap: snap, post: post, )) } #let resolve-node-options(node, options) = { let to-pt(len) = to-abs-length(as-length(len), options.em-size) node.stroke = map-auto(node.stroke, options.node-stroke) if node.stroke != none { let base-stroke = pass-none(stroke-to-dict)(options.node-stroke) node.stroke = base-stroke + stroke-to-dict(node.stroke) } node.stroke = pass-none(stroke)(node.stroke) // guarantee stroke or none node.fill = map-auto(node.fill, options.node-fill) node.corner-radius = map-auto(node.corner-radius, options.node-corner-radius) node.inset = to-pt(map-auto(node.inset, 
options.node-inset)) node.outset = to-pt(map-auto(node.outset, options.node-outset)) node.defocus = map-auto(node.defocus, options.node-defocus) node.size = node.size.map(pass-auto(to-pt)) node.radius = pass-auto(to-pt)(node.radius) node.shape = map-auto(node.shape, options.node-shape) if node.shape == auto { if node.radius != auto { node.shape = "circle" } if node.size != (auto, auto) { node.shape = "rect" } } let thickness = if node.stroke == none { 1pt } else { map-auto(node.stroke.thickness, 1pt) } node.extrude = node.extrude.map(d => { if type(d) == length { d } else { d*thickness } }).map(to-pt) if type(node.outset) in (int, float) { node.outset *= thickness } let default-layer = if node.enclose.len() > 0 { -1 } else { 0 } node.layer = map-auto(node.layer, default-layer) node } /// Measure node labels with the style context and resolve node shapes. /// /// Widths and heights that are `auto` are determined by measuring the size of /// the node's label. #let measure-node-size(node) = { // Width and height explicitly given if auto not in node.size { let (width, height) = node.size node.radius = vector-len((width/2, height/2)) node.aspect = width/height // Radius explicitly given } else if node.radius != auto { node.size = (2*node.radius, 2*node.radius) node.aspect = 1 // Width and/or height set to auto } else { let inner-size = node.size.map(pass-auto(i => i - 2*node.inset)) // Determine physical size of node content let (width, height) = measure(box( node.label, width: inner-size.at(0), height: inner-size.at(1), )) // let (width, height) = node.inner-size let radius = vector-len((width/2, height/2)) // circumcircle node.aspect = if width == 0pt or height == 0pt { 1 } else { width/height } if node.shape == auto { let is-roundish = calc.max(node.aspect, 1/node.aspect) < 1.5 node.shape = if is-roundish { "circle" } else { "rect" } } // Add node inset if radius != 0pt { radius += node.inset } if width != 0pt and height != 0pt { width += 2*node.inset height += 
2*node.inset } // If width/height/radius is auto, set to measured width/height/radius node.size = node.size.zip((width, height)) .map(((given, measured)) => map-auto(given, measured)) node.radius = map-auto(node.radius, radius) } if node.shape in (circle, "circle") { node.shape = shapes.circle } if node.shape in (rect, "rect") { node.shape = shapes.rect } node } /// Process the `enclose` options of an array of nodes. #let resolve-node-enclosures(nodes, ctx) = { let nodes = nodes.map(node => { if node.enclose.len() == 0 { return node } let enclosed-vertices = node.enclose.map(key => { let near-node = find-node(nodes, key) if near-node == none or near-node.pos.raw == auto { // if enclosed point doesn't resolve to a node // enclose the point itself let (_, coord) = resolve(ctx, key) (coord,) } else { // if enclosed point resolves to a node // enclose its bounding box let (x, y) = near-node.pos.xyz let (w, h) = near-node.size ( (x - w/2, y - h/2), (x - w/2, y + h/2), (x + w/2, y - h/2), (x + w/2, y + h/2), ) } }).join() let (center, size) = bounding-rect(enclosed-vertices) node.pos.xyz = center node.size = vector-max( size.map(d => d + node.inset*2), node.size, ) node.shape = shapes.rect // TODO: support different node shapes with enclose node }) let extra-anchors = nodes.map(node => { if node.name != none { let snap-origin = uv-to-xy(ctx.grid, node.pos.uv) (str(node.name): (anchors: _ => snap-origin)) } else { (:) } }).join() (extra-anchors, nodes) } /// Resolve node positions to a target coordinate system in sequence. /// /// CeTZ-style coordinate expressions work, with the previous coordinate `()` /// referring to the resolved position of the previous node. /// /// The resolved coordinates are added to each node's `pos` dictionary. /// /// - nodes (array): Array of nodes, each a dictionary containing a `pos` entry, /// which should be a CeTZ-compatible coordinate expression. /// - ctx (dictionary): CeTZ-style context to be passed to `resolve(ctx, ..)`. 
/// This must contain `target-system`, and optionally `grid`. /// -> array #let resolve-node-coordinates(nodes, ctx: (:)) = { let ctx = default-ctx + ctx // e.g., nodes which enclose points can have position `auto` let auto-placed-nodes = () let coord for i in range(nodes.len()) { let node = nodes.at(i) if node.pos.raw == auto { auto-placed-nodes.push(i) coord = NAN_COORD } else { (ctx, coord) = resolve(ctx, node.pos.raw) } if node.name != none { ctx.nodes += (str(node.name): (anchors: _ => coord)) } nodes.at(i).pos.insert(ctx.target-system, vector-2d(coord)) } for i in auto-placed-nodes { let node = nodes.at(i) // the center of enclosing nodes defaults to the center // of the bounding rect of the points they enclose let enclosed-nodes = node.enclose.map(key => { let node = find-node(nodes, key) if node == none { // enclose key doesn't correspond to node // interpret key as real coordinate let (_, coord) = resolve(ctx, key) coord } else { node.pos.at(ctx.target-system) } }).filter(coord => not is-nan-vector(coord)) let coord = if enclosed-nodes.len() > 0 { bounding-rect(enclosed-nodes).center } else { NAN_COORD } nodes.at(i).pos.insert(ctx.target-system, coord) } (ctx, nodes) }
https://github.com/8LWXpg/typst-treet
https://raw.githubusercontent.com/8LWXpg/typst-treet/master/README.md
markdown
MIT License
# Treet

<a href="https://github.com/8LWXpg/typst-treet/tags" style="text-decoration: none;">
  <img alt="GitHub manifest version (path)" src="https://img.shields.io/github/v/tag/8LWXpg/typst-treet">
</a>
<a href="https://github.com/8LWXpg/typst-treet" style="text-decoration: none;">
  <img src="https://img.shields.io/github/stars/8LWXpg/typst-treet?style=flat" alt="GitHub Repo stars">
</a>
<a href="https://github.com/8LWXpg/typst-treet/blob/master/LICENSE" style="text-decoration: none;">
  <img alt="GitHub" src="https://img.shields.io/github/license/8LWXpg/typst-treet">
</a>
<a href="https://github.com/typst/packages/tree/main/packages/preview/treet" style="text-decoration: none;">
  <img alt="typst package" src="https://img.shields.io/badge/typst-package-239dad">
</a>

Create tree lists easily in Typst. Contributions are welcome!

## Usage

```typst
#import "@preview/treet:0.1.1": *

#tree-list(
  marker: content,
  last-marker: content,
  indent: content,
  empty-indent: content,
  marker-font: string,
  content,
)
```

### Parameters

- `marker` - the marker of the tree list, default is `[├─ ]`
- `last-marker` - the marker of the last item of the tree list, default is `[└─ ]`
- `indent` - the indent after `marker`, default is `[│#h(1em)]`
- `empty-indent` - the indent after `last-marker`, default is `[#h(1.5em)]` (same width as `indent`)
- `marker-font` - the font of the marker, default is `"Cascadia Code"`
- `content` - the content of the tree list; includes at least a list

## Demo

See [demo.typ](https://github.com/8LWXpg/typst-treet/blob/master/test/demo.typ) and [demo.pdf](https://github.com/8LWXpg/typst-treet/blob/master/test/demo.pdf).

### Default style

```typst
#tree-list[
  - 1
    - 1.1
      - 1.1.1
    - 1.2
      - 1.2.1
      - 1.2.2
        - 1.2.2.1
  - 2
  - 3
    - 3.1
      - 3.1.1
    - 3.2
]
```

![1.png](https://github.com/8LWXpg/typst-treet/blob/master/img/1.png)

### Custom style

```typst
#text(red, tree-list(
  marker: text(blue)[├── ],
  last-marker: text(aqua)[└── ],
  indent: text(teal)[│#h(1.5em)],
  empty-indent: h(2em),
)[
  - 1
    - 1.1
      - 1.1.1
    - 1.2
      - 1.2.1
      - 1.2.2
        - 1.2.2.1
  - 2
  - 3
    - 3.1
      - 3.1.1
    - 3.2
])
```

![2.png](https://github.com/8LWXpg/typst-treet/blob/master/img/2.png)

### Using show rule

```typst
#show list: tree-list
#set text(font: "DejaVu Sans Mono")

root_folder\
- sub-folder
  - 1-1
    - 1.1.1
  -
  - 1.2
    - 1.2.1
    - 1.2.2
- 2
```

![3.png](https://github.com/8LWXpg/typst-treet/blob/master/img/3.png)
https://github.com/noriHanda/my-resume
https://raw.githubusercontent.com/noriHanda/my-resume/main/README.md
markdown
The Unlicense
# A modern typst CV template

[![Build Typst document](https://github.com/peterpf/modern-typst-resume/actions/workflows/build.yaml/badge.svg)](https://github.com/peterpf/modern-typst-resume/actions/workflows/build.yaml)

![Cover](docs/images/cover.png)

## Requirements

To compile this project you need the following:

- Typst
- Roboto font family

Run the following command to rebuild the typst file whenever you save changes:

```bash
typst watch main.typ
```

## Usage

This is a typst template that provides the general page style and several elements out-of-the-box. The following code provides a minimal working example:

```typst
#import "modern-resume.typ": modern-resume

#let data = (
  name: "<NAME>",
  jobTitle: "Data Scientist",
  bio: lorem(5), // Optional parameter
  avatarImagePath: "avatar.png", // Optional parameter
  contactOptions: ( // Optional parameter, all entries are optional
    email: link("mailto:<EMAIL>")[<EMAIL>],
    mobile: "+43 1234 5678",
    location: "Austria",
    linkedin: link("https://www.linkedin.com/in/jdoe")[linkedin/jdoe],
    github: link("https://github.com/jdoe")[github.com/jdoe],
    website: link("https://jdoe.dev")[jdoe.dev],
  ),
)

#show: doc => modern-resume(data, doc)

// Your content goes here
```

See [main.typ](./main.typ) for a full example that showcases all available elements.

### Elements

This section introduces the visual elements that are part of this template.

#### Pills

Import this element from the template module with `pill`.

![pills](docs/images/pills.png)

```typst
#pill("German (native)")
#pill("English (C1)")
```

![pills filled](docs/images/pills-filled.png)

```typst
#pill("Teamwork", fill: true)
#pill("Critical thinking", fill: true)
```

#### Educational/work experience

Import the elements from the template module with `educationalExperience` and `workExperience` respectively.
![educational experience](docs/images/educational-experience.png)

```typst
#educationalExperience(
  title: "Master's degree",
  subtitle: "University of Sciences",
  taskDescription: [
    - Short summary of the most important courses
    - Explanation of master thesis topic
  ],
  dateFrom: "10/2021",
  dateTo: "07/2023",
)
```

![work experience](docs/images/work-experience.png)

```typst
#workExperience(
  title: "Full Stack Software Engineer",
  subtitle: [#link("https://www.google.com")[Some IT Company]],
  facilityDescription: "Company operating in sector XY",
  taskDescription: [
    - Short summary of your responsibilities
  ],
  dateFrom: "09/2018",
  dateTo: "07/2021",
)
```

#### Project

Import this element from the template module with `project`.

![project](docs/images/project.png)

```typst
#project(
  title: "Project 2",
  subtitle: "Data Visualization, Data Engineering",
  description: [
    - #lorem(20)
  ],
  dateFrom: "08/2022",
  dateTo: "09/2022",
)
```

### Theming

Customize the color theme by changing the values of the `colors` dictionary in [modern-resume.typ](modern-resume.typ). For example:

- The default color palette:

  ```typst
  #let colors = (
    primary: rgb("#313C4E"),
    secondary: rgb("#222A33"),
    accentColor: rgb("#449399"),
    textPrimary: black,
    textSecondary: rgb("#7C7C7C"),
    textTertiary: white,
  )
  ```

- A pink color palette:

  ```typst
  #let colors = (
    primary: rgb("#e755e0"),
    secondary: rgb("#ad00c2"),
    accentColor: rgb("#00d032"),
    textPrimary: black,
    textSecondary: rgb("#7C7C7C"),
    textTertiary: white,
  )
  ```

## Contributing

I'm grateful for any improvements and suggestions.
https://github.com/berceanu/activity-report
https://raw.githubusercontent.com/berceanu/activity-report/main/activity-report.typ
typst
BSD 3-Clause "New" or "Revised" License
#let mk_header(
  proiect,
  contract,
  anexa,
) = {
  set text(font: "DejaVu Sans", 12pt)
  show text: strong
  stack(dir: ltr,
    [#upper(proiect) \ #upper(contract)],
    1fr,
    [#upper(anexa)]
  )
  v(2em)
}

#let mk_title(
  title,
) = {
  set text(font: "DejaVu Sans", 14pt)
  show text: strong
  align(center, [#upper(title)#footnote("Se completează lunar")])
}

#let mk_date(
  month,
) = {
  set text(font: "DejaVu Sans", 12pt)
  align(center, [*Perioada:* #month])
}

#let mk_name_position(
  your_name,
  your_position,
) = {
  v(2em)
  block[
    *Numele și prenumele:* #your_name \
    *Funcția în cadrul proiectului:* #your_position
  ]
}

#let activities = v(1fr) + [*În această perioadă au fost desfășurate următoarele activități.*] + v(1em)

#let results = v(1fr) + [*În urma desfășurării activităților prezentate, au fost obținute următoarele rezultate.*] + v(1em)

#let mk_signatures(
  department_head,
  scientific_director,
  sign_date,
) = {
  v(1fr)
  grid(
    columns: (1fr, 1fr, 1fr),
    rows: (auto, auto, auto),
    row-gutter: 0.5em,
    align(left)[*Data:* #sign_date],
    align(center, upper(strong("Avizat,"))),
    align(right, upper(strong("Avizat,"))),
    align(left)[*Semnătura,*],
    align(center)[*Șef LGED*],
    align(right)[*Director Științific*],
    align(left, move(dy: -0.4em, image("signature.png", height: 1em))),
    align(center, department_head),
    align(right, scientific_director),
  )
  v(2em)
}

#let report(
  proiect: "Proiect ELI-NP",
  contract: "Contract Finanțare nr. 1/2016",
  anexa: "Anexa I-4",
  title: "Raport de Activitate",
  month: none,
  your_name: "<NAME>",
  your_position: "CS-III",
  sign_date: none,
  department_head: "Ovidiu Teșileanu",
  scientific_director: "<NAME>",
  body,
) = {
  set document(
    title: title,
    author: your_name,
  )
  set page(
    paper: "a4",
    margin: (top: 1.5cm, bottom: 1.5cm, left: 2cm, right: 2cm),
  )

  mk_header(proiect, contract, anexa)
  mk_title(title)
  mk_date(month)

  set text(
    lang: "ro",
    font: "Linux Libertine",
    size: 12pt,
  )
  set par(
    justify: true,
  )

  mk_name_position(your_name, your_position)

  body

  mk_signatures(department_head, scientific_director, sign_date)
}
https://github.com/rxt1077/it610
https://raw.githubusercontent.com/rxt1077/it610/master/markup/templates/exercise.typ
typst
#let primary-color = rgb("#D22630")
#let secondary-color = rgb("#071D49")
#let code-bg-color = rgb("#DCDCDC")

#let exercise(
  course-name: none,
  exercise-name: none,
  doc,
) = {
  set page(
    paper: "us-letter",
    margin: (left: 0.5in, right: 0.5in),
    header: {
      smallcaps([#course-name Exercise])
      h(1fr)
      emph(exercise-name)
    },
    numbering: "1",
  )
  set text(font: "Roboto", size: 11pt)

  show heading.where(level: 1): it => {
    block(below: 28pt)[#it]
  }
  show heading.where(level: 2): it => [
    #block(below: 3pt, grid(columns: 2, gutter: 5pt,
      square(size: 10pt, fill: primary-color),
      text(primary-color)[#it],
    ))
    #block(above: 0pt, line(stroke: primary-color, length: 100%))
  ]
  show link: it => [
    #set text(blue)
    #underline(it)
  ]
  show raw.where(block: false): box.with(
    fill: code-bg-color,
    inset: (x: 3pt, y: 0pt),
    outset: (y: 3pt),
    radius: 2pt,
  )
  show quote: it => [
    #grid(
      columns: (5pt, 1fr),
      rows: 1,
      gutter: 0pt,
      grid.cell(fill: secondary-color.lighten(50%))[],
      it,
    )
  ]

  [
    = #exercise-name
    #doc
  ]
}

// Ugly hack to convert a callout number to a Unicode character.
// U+24FF is the negative circled zero; U+2776..U+277E are the
// negative circled digits one through nine (a contiguous hex run).
#let num2unicode(num) = {
  set text(black, size: 14pt, weight: "bold")
  if num == "<0>" [\u{24FF}]
  else if num == "<1>" [\u{2776}]
  else if num == "<2>" [\u{2777}]
  else if num == "<3>" [\u{2778}]
  else if num == "<4>" [\u{2779}]
  else if num == "<5>" [\u{277A}]
  else if num == "<6>" [\u{277B}]
  else if num == "<7>" [\u{277C}]
  else if num == "<8>" [\u{277D}]
  else if num == "<9>" [\u{277E}]
}

// Code blocks in boxes with AsciiDoctor-style callouts.
#let code(body, title: none, callouts: none) = {
  show regex("<[0123456789]>"): it => {
    num2unicode(it.text)
  }
  if title != none {
    set text(secondary-color)
    block(below: 5pt, title)
  }
  box(
    fill: code-bg-color,
    inset: 8pt,
    radius: 4pt,
    width: 100%,
    body,
  )
  if callouts != none {
    for (num, desc) in callouts [
      / #num2unicode(num): #desc
    ]
  }
}

// AsciiDoctor-style(ish) admonitions for things we want to point out.
#let admonition(body, symbol: emoji.face.think, color: secondary-color) = {
  layout(size => {
    set text(size: 10pt)
    let width = 80%
    // Most of this logic forces a minimum height for the admonition.
    let height = measure(box(width: (size.width * width), body)).height
    if height < 60pt { height = 60pt }
    align(center, rect(
      width: width,
      stroke: color + 2pt,
      radius: 4pt,
      fill: color.lighten(85%),
      grid(
        rows: height,
        columns: (60pt, 1fr),
        gutter: 10pt,
        // We have to use pad to nudge the symbol up a little.
        text(color, font: "Noto Emoji", size: 40pt,
          align(horizon, pad(bottom: 10pt, symbol))),
        text(color, size: 10pt, align(left + horizon, body)),
      ),
    ))
  })
}
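Because the negative circled digits sit in a contiguous Unicode run (U+2776 through U+277E, with the zero glyph off on its own at U+24FF), the `num2unicode` lookup above can also be expressed arithmetically. A minimal Python sketch of the same mapping (the function name `callout_char` is mine, not from the template):

```python
def callout_char(marker: str) -> str:
    """Map an AsciiDoctor-style callout like "<3>" to a circled digit.

    "<0>" maps to U+24FF (negative circled zero); "<1>".."<9>" map to
    the contiguous run U+2776..U+277E (negative circled one..nine).
    """
    n = int(marker.strip("<>"))
    if n == 0:
        return "\u24ff"
    return chr(0x2776 + n - 1)

print(callout_char("<1>"))  # ❶
```

Note the run continues in hexadecimal, so the successor of `\u{2779}` is `\u{277A}`, not `\u{2780}` (which is the *sans-serif circled one* from a different dingbat series).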
https://github.com/LDemetrios/Typst4k
https://raw.githubusercontent.com/LDemetrios/Typst4k/master/src/test/resources/suite/layout/angle.typ
typst
--- angle-to-unit ---
// Test angle methods.
#test(1rad.rad(), 1.0)
#test(1.23rad.rad(), 1.23)
#test(0deg.rad(), 0.0)
#test(2deg.deg(), 2.0)
#test(2.94deg.deg(), 2.94)
#test(0rad.deg(), 0.0)
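The `.rad()` and `.deg()` conversions these tests exercise can be mirrored with the standard-library `math` helpers; a sketch, not part of the test suite:

```python
import math

def deg_to_rad(d: float) -> float:
    """Degrees to radians, like Typst's `deg.rad()`."""
    return math.radians(d)

def rad_to_deg(r: float) -> float:
    """Radians to degrees, like Typst's `rad.deg()`."""
    return math.degrees(r)

print(rad_to_deg(math.pi))  # 180.0
```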
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/nth/1.0.0/CHANGELOG.md
markdown
Apache License 2.0
## 1.0.0 - 2023-12-21

### Changed

* **Breaking** Separated functionality: `nth` now only gives ordinals, and `nths` gives ordinals in superscript (commit [eff87f3](https://github.com/extua/nth/commit/eff87f3f2a2a20cf05198fbd7d4e5fa2d30858d1), fixes [#1](https://github.com/extua/nth/issues/1) reported by [emilyyyylime](https://github.com/emilyyyylime)).

## 0.2.0 - 2023-10-02

### Fixed

* Added a missing brace to an if statement (commit [bbe6251](https://github.com/typst/packages/commit/bbe6251c1511ff97d92988aeb55ff66470cbd0b9) in the Typst repo) ([jeffa5](https://github.com/jeffa5))

### Changed

* Corrected a typo in the description in `typst.toml` (commit [2d5cbca](https://github.com/typst/packages/commit/2d5cbcada47a7fb1d00f2d3f7f67c11132e79429) in the Typst repo) ([fnoaman](https://github.com/fnoaman))

## 0.1.0 - 2023-09-15

:seedling: Initial release.
https://github.com/MDLC01/board-n-pieces
https://raw.githubusercontent.com/MDLC01/board-n-pieces/main/tests/logic.typ
typst
MIT License
#import sys.inputs.lib as bnp

// Test starting position.
#assert.eq(bnp.starting-position.fen, "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")

// Test `game` function.
// Hamppe vs. Meitner, 1872 (https://en.wikipedia.org/wiki/Immortal_Draw).
#let immortal-draw = bnp.game("e4 e5 Nc3 Bc5 Na4 Bxf2 Kxf2 Qh4 Ke3 Qf4 Kd3 d5 Kc3 Qxe4 Kb3 Na6 a3 Qxa4 Kxa4 Nc5 Kb4 a5 Kxc5 Ne7 Bb5 Kd8 Bc6 b6 Kb5 Nxc6 Kxc6 Bb7 Kb5 Ba6 Kc6 Bb7")
#assert.eq(immortal-draw.first().fen, bnp.starting-position.fen)
#assert.eq(immortal-draw.at(17).fen, "r1b1k1nr/ppp2ppp/n7/3pp3/N3q3/PK6/1PPP2PP/R1BQ1BNR b kq - 0 9")
#assert.eq(immortal-draw.last().fen, "r2k3r/1bp2ppp/1pK5/p2pp3/8/P7/1PPP2PP/R1BQ2NR w - - 5 19")

// Test en passant & castling.
// <NAME> vs. <NAME>, 1928 (from https://www.chess.com/blog/rat_4/the-elusive-en-passant-checkmate).
#assert.eq(
  bnp.game("e4 e6 d4 d5 e5 c5 c3 cxd4 cxd4 Bb4+ Nc3 Nc6 Nf3 Nge7 Bd3 O-O Bxh7+ Kxh7 Ng5+ Kg6 h4 Nxd4 Qg4 f5 h5+ Kh6 Nxe6+ g5 hxg6#").last().fen,
  "r1bq1r2/pp2n3/4N1Pk/3pPp2/1b1n2Q1/2N5/PP3PP1/R1B1K2R b KQ - 0 15",
)

#let test-pgn(file-name, expected-last-position) = {
  let g = bnp.pgn(read("assets/" + file-name))
  assert.eq(g.last().fen, expected-last-position)
}

// https://lichess.org/NuxTdFcv
#test-pgn("lichess-NuxTdFcv.pgn", "8/8/1pr2Pk1/p7/P5R1/3nP1P1/1B3P1P/6K1 b - - 2 50")

// https://lichess.org/4cCk7Gi5
#test-pgn("lichess-4cCk7Gi5.pgn", "5r2/5k1p/1p2p3/p1p1P3/5P1b/1P2P2q/PB3RR1/5K2 w - - 1 44")
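The assertions above compare FEN strings. A FEN record always has six space-separated fields: piece placement, side to move, castling rights, en-passant square, half-move clock, and full-move number. A hypothetical Python sketch splitting the starting position used in the first test:

```python
def parse_fen(fen: str) -> dict:
    """Split a FEN record into its six standard fields."""
    placement, side, castling, en_passant, halfmove, fullmove = fen.split()
    return {
        "placement": placement,    # rank-by-rank piece layout
        "side": side,              # "w" or "b"
        "castling": castling,      # e.g. "KQkq" or "-"
        "en_passant": en_passant,  # target square or "-"
        "halfmove": int(halfmove),
        "fullmove": int(fullmove),
    }

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
print(parse_fen(start)["side"])  # w
```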
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/rivet/0.1.0/gallery/test.typ
typst
Apache License 2.0
#import "../src/lib.typ": *

#let test-yaml = schema.load("/gallery/test.yaml")
#schema.render(test-yaml, config: config.config(
  full-page: true
))

#let test-json = schema.load("/gallery/test.json")
#schema.render(test-json, config: config.blueprint(
  full-page: true
))

#let test-xml = schema.load("/gallery/test.xml")
#schema.render(test-xml, config: config.dark(
  full-page: true
))

#let test-raw = schema.load(```yaml
structures:
  main:
    bits: 4
    ranges:
      3-0:
        name: test
```)
#schema.render(test-raw, config: config.config(
  full-page: true
))

#let test-typ = schema.load((
  structures: (
    main: (
      bits: 32,
      ranges: (
        "31-28": (name: "cond"),
        "27": (name: "0"),
        "26": (name: "1"),
        "25": (name: "I"),
        "24": (
          name: "P",
          description: "pre / post indexing bit",
          values: (
            "0": "post, add offset after transfer",
            "1": "pre, add offset before transfer",
          ),
        ),
        "23": (
          name: "U",
          description: "up / down bit",
          values: (
            "0": "down, subtract offset from base",
            "1": "up, addition offset to base",
          ),
        ),
        "22": (
          name: "B",
          description: "byte / word bit",
          values: (
            "0": "transfer word quantity",
            "1": "transfer byte quantity",
          ),
        ),
        "21": (
          name: "W",
          description: "write-back bit",
          values: (
            "0": "no write-back",
            "1": "write address into base",
          ),
        ),
        "20": (
          name: "L",
          description: "load / store bit",
          values: (
            "0": "store to memory",
            "1": "load from memory",
          ),
        ),
        "19-16": (
          name: "Rn",
          description: "base register",
        ),
        "15-12": (
          name: "Rd",
          description: "source / destination register",
        ),
        "11-0": (
          name: "offset",
          depends-on: "25",
          values: (
            "0": (
              description: "offset is an immediate value",
              structure: "immediateOffset",
            ),
            "1": (
              description: "offset is a register",
              structure: "registerOffset",
            ),
          ),
        ),
      ),
    ),
    immediateOffset: (
      bits: 12,
      ranges: (
        "11-0": (
          name: "12-bit immediate offset",
          description: "unsigned number",
        ),
      ),
    ),
    registerOffset: (
      bits: 12,
      ranges: (
        "11-4": (
          name: "shift",
          description: "shift applied to Rm",
        ),
        "3-0": (
          name: "Rm",
          description: "offset register",
        ),
      ),
    ),
  ),
  colors: (
    main: (
      "31-28": red,
      "11-4": green,
    ),
    registerOffset: (
      "11-4": rgb(240, 140, 80),
    ),
  ),
))
#schema.render(test-typ, config: config.config(
  full-page: true
))
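The `main` structure above describes the fields of an ARM-style load/store word; pulling one of those named bit ranges out of an actual 32-bit instruction is a mask-and-shift. A sketch (field positions taken from the schema above; the example encoding is my own illustration):

```python
def bits(word: int, hi: int, lo: int) -> int:
    """Extract the inclusive bit range hi..lo from a 32-bit word."""
    width = hi - lo + 1
    return (word >> lo) & ((1 << width) - 1)

word = 0xE5910004          # LDR r0, [r1, #4], for illustration
cond = bits(word, 31, 28)  # "cond" field in the schema
rn = bits(word, 19, 16)    # "Rn", base register
offset = bits(word, 11, 0) # 12-bit immediate offset
print(hex(cond), rn, offset)
```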
https://github.com/francescobortuzzo/Legionella
https://raw.githubusercontent.com/francescobortuzzo/Legionella/main/legionella.typ
typst
#import "template.typ": *
#show: template
#show par: set block(spacing: 0.65em)

#let annotation(body) = {
  par(
    [#h(1.25em)#body]
  )
}

#let appendix(body) = {
  set heading(numbering: "A", supplement: [Appendix])
  counter(heading).update(0)
  body
}

= Introduction

== Introduction to the Legionella bacterium

#annotation[Legionella is a gram-negative, aerobic, non-motile bacillus that thrives in aquatic and humid environments, both natural, such as spring, thermal, river, or lake waters, and artificial, such as pipes, tanks, fountains, and swimming pools. Legionella can survive under a wide range of environmental conditions, including temperatures between twenty and forty-five degrees Celsius, a neutral or slightly alkaline pH, and the presence of organic nutrients.]

The genus comprises sixty-two different species, divided into seventy-one serotypes, of which about twenty are pathogenic to humans. The most common species is _Legionella pneumophila_, responsible for the majority of reported cases of legionellosis#footnote("Legionellosis, or Legionnaires' disease, is an infectious disease that presents with flu-like symptoms, such as fever, cough, muscle aches, and headache. In some cases it can progress to a pulmonary form, with symptoms similar to those of pneumonia, and lead to serious complications, such as atypical pneumonia or death."). The disease can be contracted by inhaling aerosols containing the bacterium, such as those produced by showers, fountains, air-conditioning systems, or humidifiers. It is therefore essential to monitor the spread of this bacterium in humid and aquatic environments. Particular attention must be paid to hospitals, thermal spas, and hotels, which by their nature are environments at risk of spreading the bacterium.
#parbreak()

== Legionella in <NAME>

#annotation[At the European level, data on the presence of the bacterium are collected by the ECDC#footnote("European Centre for Disease Prevention and Control, established in 2005."). In our country, this activity is instead carried out by several bodies and institutions. A significant contribution comes from the Istituto Superiore di Sanità and from the various bodies that make up the SNPA#footnote("Sistema Nazionale per la Protezione dell'Ambiente (National System for Environmental Protection)."), which includes ARPA FVG#footnote("Agenzia Regionale per la Protezione dell'Ambiente Friuli Venezia Giulia (Regional Environmental Protection Agency of Friuli Venezia Giulia).").]

The collected data are used to assess the risk of the bacterium spreading and to adopt the prevention and control measures set out by the Ministry of Health in the "Linee guida per la prevenzione ed il controllo della legionellosi" (Guidelines for the prevention and control of legionellosis)#footnote[#cite(<LineeGuida>, form: "full")]. In this context, ARPA FVG has carried out numerous surveys across the region and published the results in various reports. For example, in 2019 a collaboration with the University of Udine led to the publication of an article#footnote[#cite(<EnvironmentalSurveillance>, form: "full")] on the presence of Legionella in the region's water collection and distribution systems. The study covered a period of sixteen years, from 2002 to 2017, during which 20,319 samples were collected and analyzed across 3,983 environmental surveys.

The results of the non-clinical surveys, i.e. those performed routinely under the regional environmental surveillance plan, showed that Legionella is widespread above all in clusters of thermal facilities, where the bacterium was found in 57.8% of the sites surveyed, and in hospitals, where Legionella was detected at least once in 50.8% of the structures, with peaks of positive samples especially in the months marking the start of autumn. Furthermore, the presence of the bacterium rose markedly between the second half of 2006 and the beginning of 2009, followed by a decline until 2013 and a renewed increase in subsequent years. This trend clearly indicates that, to reduce the risk of the bacterium spreading, an adequate prevention plan is essential, comprising both plant maintenance and environmental surveillance.

== Databases

#annotation[While the crucial importance of environmental surveillance for the control of legionellosis is recognized, Friuli Venezia Giulia, like many other regions, lacks an efficient system for storing, managing, and analyzing the collected data. This shortcoming makes working with the volume of information gathered in environmental surveys extremely burdensome, hindering targeted analyses and research. It also makes data storage insecure: keeping the information in text files or spreadsheets guarantees neither the protection nor the integrity of the data, which could easily be lost, altered, or made inconsistent through human error.]

In this context, database systems play a fundamental role, as they make it possible to store large amounts of data and to run complex queries quickly and efficiently.
In particular, the relational model is the most widely adopted solution for storing information. This representation, first introduced by the English computer scientist <NAME> in the 1970 article "_A Relational Model of Data for Large Shared Data Banks_"#footnote[#cite(<RelationalModel>, form: "full")], uses the mathematical notion of relation to organize data in tables, which represent the set of entities involved in the system and the associations between them. Specifically, the relational architecture represents the information domain, i.e. the set of data to be stored, through named tables with a fixed number of columns, called attributes, and a set of rows, called tuples, all distinct from one another.

This structure overcomes the main limitations of the models previously in vogue, namely the hierarchical and the network models. More precisely, the hierarchical structure is extremely rigid and, owing to the duplication of some values, entails a significant waste of space and an exponential growth in query complexity, as well as the near-total impossibility of modifying the database structure efficiently and at low cost. The network model also has its drawbacks. On the one hand, it eases the representation of complex relationships that the hierarchical model cannot handle, since it allows "child" nodes to hold multiple connections with different "parent" nodes; on the other hand, those very connections must be made explicit when the database is built, which makes the model hard to maintain.

By contrast, the relational model offers greater flexibility while keeping a well-structured organization of the data that preserves the integrity and consistency of the information. More specifically, the effectiveness of the relational model stems from several factors. First, the table structure provides both intuitiveness and clarity in the representation of information and, thanks to primary and foreign keys, data integrity, together with a marked improvement in the efficiency of query and data-manipulation operations. Moreover, the flexibility of the structure, relative to the models mentioned above, gives it physical and logical data independence: the way the data are organized, or the logical schema of the database, can be changed without rewriting the connection code of applications that are not directly concerned with the fields introduced or altered.

Finally, relational database management systems (RDBMSs), such as Oracle, PostgreSQL, and MySQL, have brought further advantages. These systems, involved both in schema creation and in data manipulation, offer advanced features that ease the work of the developers and users who interact with the DB. Among these features are transaction management, which implements the ACID policy#footnote[«_Atomicity represents the fact that a transaction is an indivisible unit of execution. Either all the effects of a transaction are made visible, or the transaction must have no effect on the database, with an 'all or nothing' approach ... Consistency demands that the carrying out of the transaction does not violate any of the integrity constraints defined on the database ... Isolation demands that the execution of a transaction is independent of the simultaneous execution of other transactions ... Durability, on the other hand, demands that the effect of a transaction that has correctly executed a commit is not lost_» #cite(<DatabaseSystems>, form:"full")], and the use of a dedicated language, namely SQL.

SQL was first proposed in 1974 at the IBM laboratories. It has been standardized since 1986, with the development of SQL-86 by the American National Standards Institute (ANSI) and, in 1987, by the International Organization for Standardization (ISO). SQL marked a turning point in data management, as it gathers all the capabilities of data definition, manipulation, and query languages, which had previously been distinct. Its success rests on several aspects, above all: its simplicity, guaranteed by a quasi-declarative structure that makes it easier to understand than relational algebra; and its portability, a consequence of standardization, which allows the same language to be used on different RDBMSs. Thanks to these characteristics, SQL has become the reference language for data management and is therefore continually updated and enriched with new features.

The last three decades have seen the advent of new technologies, such as NoSQL databases. These systems have an application scope similar to that of the relational model, but with some significant differences. The main one concerns the structuring of the information, which is not organized in tables but through more flexible constructs, such as documents, for example in JSON format, and graphs. This design aims to increase horizontal scalability, i.e. the ability to add new elements to the system, and the efficiency of complex queries involving chains of relationships over unstructured or semi-structured data. For these reasons, NoSQL systems are particularly suited to real-time applications. They are best avoided, however, in critical contexts where data integrity must be guaranteed, or where the data are strongly structured.
#linebreak()
On the basis of the foregoing considerations, this thesis implements the system with a relational database. The choice is motivated by the need to guarantee the integrity and consistency of the data collected while monitoring Legionella, and by the importance of providing an intuitive system that integrates easily with the pre-existing applications already used by the ARPA researchers. In this context, the relational model and, more specifically, the PostgreSQL RDBMS are considered an appropriate solution for storing and managing the data. Alternative solutions, such as NoSQL databases, could show limitations in terms of data integrity and consistency without offering significant performance advantages, and are therefore not considered.

== Objective of the thesis

#annotation[This work sets out the main aspects of the design of a relational database for storing data on the spread of Legionella. More specifically, @Analisi_critica[chapter] conducts a thorough critical analysis of a pre-existing solution, identifying its strengths and weaknesses.
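The keyed-table organization described above, with the DBMS itself rejecting inconsistent rows, can be made concrete in a few lines. A minimal sketch (the thesis targets PostgreSQL; `sqlite3` is used here only because it ships with Python, and all table and column names are illustrative, not taken from the thesis schema):

```python
import sqlite3

# Two related tables: a foreign key ties each sample to an existing site,
# so the DBMS enforces referential integrity for us.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite needs this opt-in
conn.execute("""
    CREATE TABLE site (
        id       INTEGER PRIMARY KEY,
        name     TEXT NOT NULL,
        category TEXT NOT NULL
    )""")
conn.execute("""
    CREATE TABLE sample (
        code    TEXT PRIMARY KEY,
        site_id INTEGER NOT NULL REFERENCES site(id),
        matrix  TEXT CHECK (matrix IN ('water', 'biofilm', 'air'))
    )""")
conn.execute("INSERT INTO site VALUES (1, 'Hospital A', 'hospital')")
conn.execute("INSERT INTO sample VALUES ('S-001', 1, 'water')")
try:
    # Site 99 does not exist, so the DBMS rejects this row outright.
    conn.execute("INSERT INTO sample VALUES ('S-002', 99, 'water')")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```

The `CHECK` constraint plays the same role as a domain definition: no application code can slip an invalid matrix value into the table.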
Such analysis forms the basis for the changes proposed in the following chapter, where improvements are introduced to adapt the system to the new requirements that emerged from interviews conducted in collaboration with the ARPA FVG researchers. @Progettazione_logica[section], in turn, is devoted to restructuring the original schema and translating it into a logical model. Finally, @Progettazione_fisica[chapter] focuses on the implementation of the database, with particular attention to the definition of the domains and constraints that guarantee data integrity, as well as to some functions for inserting and modifying data.]

#pagebreak()
#set par(justify: true)

= Critical analysis of a pre-existing solution <Analisi_critica>

#annotation[As mentioned in the introductory chapter, one of the main challenges in the current data management system lies in building efficient solutions for storing the information collected during environmental surveys. This section presents a critical analysis of a relational database used to archive data on the spread of Legionella. The database under analysis was developed by Dr. <NAME> in his bachelor's thesis in computer science, entitled "Base di dati e applicazione web per il monitoraggio del batterio della Legionella"#footnote[#cite(<TesiGarlatti>, form: "full")].]

== Requirements

#annotation[Before studying the database, the requirements of the information system must be defined. They are qualitative in nature and describe the characteristics the system must have to satisfy the needs of users and stakeholders. The criteria underlying the design of the solution under analysis cover the entire data-acquisition phase of the environmental surveys carried out by the ARPA FVG researchers for the monitoring of Legionella in the region.]

The following unstructured requirements guided the design of the database. The system must allow the recording of environmental surveys concerning the presence of Legionella in water supply and storage systems. Each survey is defined by its type, its date, and the site at which it is performed, and is optionally associated with a requester when it is a follow-up survey. A site is identified by its address and its category. Surveys include the collection of samples, each of which is associated with one and only one survey. Samples are characterized by their temperature, hot or cold, at the time of extraction and by the sampling point, within the site of the survey they belong to, and are uniquely identified by a code. All collected samples undergo several analyses to ascertain the presence of Legionella, namely: qualitative PCR, which detects the presence of the bacterium's DNA; quantitative PCR, which measures the concentration of Legionella in the samples, expressed in µg/l; and culture analysis, which isolates and identifies colony-forming units (UFC_L) and, in case of positivity, determines the serogroup.

=== Notes

#annotation[It should be noted that PCR is not a definitive diagnostic method for legionellosis, but rather a screening test requiring confirmation by culture. Indeed, «poiché, così come specificato nella norma ISO "_Water quality - Detection and quantification of Legionella spp and/or Legionella pneumophila by concentration and genic amplification by quantitative polymerase chain reaction (qPCR)_" (ISO/TS 12869, 2012), la qPCR non dà informazione riguardo lo stato delle cellule, la quantificazione dovrà sempre essere determinata mediante esame colturale»#footnote[#cite(<LineeGuida>, form:"normal"), «Linee guida per la prevenzione ed il controllo della legionellosi», p. 21].]

Moreover, the analytical methods used to detect the bacterium, as indicated in Annex 4 of the "Linee Guida per la prevenzione e il controllo della legionellosi"#footnote[#cite(<LineeGuida>, form:"normal"), «Linee guida per la prevenzione ed il controllo della legionellosi», p. 91], vary with the matrix to be analyzed (water, biofilm, air); the results obtained, however, are expressed uniformly, regardless of the type of analysis performed. Therefore, given the need to store the results of the sample analyses, it is considered legitimate to keep the three types of analysis mentioned above, without further distinctions.

== Conceptual-logical schema

#annotation[The conceptual-logical schema of the database developed by Dr. Garlatti is presented below. The schema was modeled with the IDEF1X language#footnote("Integration DEFinition for information modeling."), which belongs to the IDEF family of modeling languages#footnote[#cite(<IDEF>, form:"full")]. For a correct understanding of the schema, it is essential to define the concepts of entity and relationship, which are the foundations of data modeling.]
=== IDEF1X notation

#annotation[In IDEF1X notation, entities are represented as tables containing attributes that describe their properties, and each entity is identified by a primary key, consisting of a single attribute or a combination of attributes able to uniquely identify every row of the table. An entity is classified as independent if it can be identified without relationships to other entities, and as dependent when its meaning emerges only in relation to another associated table.]

Connection relationships, or associations, are represented by lines joining two entities, signaling the existence of a link between them. Two kinds are distinguished: identifying associations, in which the child entity is identified in relation to the parent entity and its primary key includes the parent's, drawn as a solid line; and non-identifying associations, in which the child is still identified in relation to the parent but its primary key does not include the parent's, drawn as a dashed line. The cardinality of these associations is indicated by letters: "p" denotes a one-to-one or one-to-many relationship, "z" a one-to-zero-or-one relationship, and "n" a one-to-exactly-n relationship.

Categorization relationships, on the other hand, are represented by lines connecting a parent entity to one or more child entities, signifying that the latter inherit the parent's properties while keeping distinguishing attributes. Category entities#footnote("Entities that constitute a subtype of another entity.") are mutually exclusive and are told apart by a discriminator attribute, whose value is unique for each category entity. There are two kinds of categorization: complete categorizations, in which every parent entity must be associated with a child, drawn as an empty circle and two lines; and incomplete categorizations, in which a parent entity may be associated with no child entity, drawn as a filled circle and one line.

#pagebreak()

=== Conceptual-logical schema

#figure(
  supplement: "Figura",
  image("/img/Relazionale_Screen.png", width: 100%),
  caption: [ER diagram],
)

== Considerations and proposed changes

#annotation[The schema illustrated above was designed to meet the requirements for storing data on the spread of Legionella. An initial analysis of the database, however, revealed some defects that call for careful evaluation and a possible revision of the schema.]

Some entities, such as _indirizzo_ (address) and _categoria_ (category), were originally designed as autonomous entities, but it could be more effective to treat them as attributes of the _sito_ (site) entity. This approach would not only simplify the schema but also improve its structural clarity. In particular, the description attribute of the _categoria_ entity is superfluous, since the category name should suffice to identify it uniquely. Moreover, adding a name attribute to the _sito_ entity could ease data consultation, especially for hospitals, which are generally recognized by the combination of name and city rather than by address alone. In addition, it is proposed to enrich the site entity with new attributes describing its main characteristics in this specific context, including details of the site's water systems, such as the type of boiler, the pipe material, the use of chlorine, and other general information, such as the year of the last renovation.
A further point of reflection concerns the association of the _richiedente_ (requester) with the _indagini ambientali_ (environmental surveys). Going beyond the stated requirements, it is considered appropriate for the _richiedente_ entity to be related to surveys that are not exclusively follow-ups. It is also suggested to introduce a new entity named _follow-up clinico_ (clinical follow-up), associated with one or more environmental surveys. This change proves particularly effective for managing data on patients with legionellosis and for assessing the risk of the bacterium spreading. Indeed, «per avere un quadro globale della situazione, è fondamentale disporre, per ciascun paziente affetto da legionellosi, di informazioni precise su una eventuale esposizione a rischio nei dieci giorni precedenti l'insorgenza dei sintomi»#footnote[#cite(<LineeGuida>, form:"normal"), «Linee guida per la prevenzione ed il controllo della legionellosi», p. 30]. The possibility of associating a patient with one or more environmental surveys would therefore be advantageous. The _follow-up clinico_ entity could be further enriched with attributes describing the patient and their risk exposure, such as the date of symptom onset, place of residence, place of work, and activities carried out in the ten days preceding symptom onset. These details, however, are neither modeled in the current schema nor included in the final one, since they were not covered by the requirements nor explored with the researchers. Nevertheless, they could prove useful for a more accurate assessment of the risk of the bacterium spreading.

As for the _campione_ (sample) entity, it is worth considering the introduction of a volume attribute specifying the amount of water collected for analysis. Although not strictly necessary, this attribute is pertinent for defining reference parameters for sample collection, such as the minimum volume of water required to perform all the planned analyses. Moreover, since samples of different environmental matrices can be collected, such as water, biofilm, or air, it is proposed to introduce a "matrice" (matrix) attribute specifying the type of sample analyzed.

Finally, it is proposed to rearrange the _indagine ambientale_ (environmental survey) and _campione_ entities within the schema. In particular, as defined in the glossary, an environmental survey is nothing more than a collection of samples taken at a specific site on a given date. It is therefore more coherent to associate only the _campione_ entity with the spatial information contained in the _punto di prelievo_ (sampling point) and _sito_ tables. Note that this change entails introducing an integrity constraint stating that all samples associated with one survey must be taken at the same site. In this context, it appears advantageous to modify the structure of the _sito_ and _punto di prelievo_ entities as follows: a coordinates attribute could be added to the _sito_ entity, holding a pair of coordinates, for example referring to the geographic center or the main entrance of the building, which would constitute a key for the entity. Furthermore, the _punto di prelievo_ entity could be turned into a weak entity with respect to _sito_, making implicit the constraint imposed by the association of a sampling point with a site, according to which a sampling point must lie within the perimeter of the site it belongs to. The _punto di prelievo_ could be given properties describing its position within the site, such as the floor, the room, or the type of plumbing component from which the sample was taken.

Overall, the proposed adjustments have a positive impact on the management of the database's integrity constraints, since they are logically more immediate and easier to implement than the previous solutions, and they help provide an orderly and complete view of the data on the spread of Legionella.

#pagebreak()

=== E-R diagram incorporating the proposed changes

#annotation[Following these considerations, a revision of the schema is proposed. The new version is modeled with the classic E-R notation#footnote[#cite(<DatabaseSystems>) «_Database Systems: Concepts, Languages & Architectures_»], which allows the entities, relationships, and attributes of the database to be represented clearly and concisely.]

#linebreak()
#linebreak()
#linebreak()
#linebreak()
#linebreak()
#linebreak()
#linebreak()
#linebreak()
#linebreak()
#rotate(90deg)[
  #figure(
    supplement: "Figura",
    image("/img/er_base.png", width: 120%),
    caption: [ER diagram],
  )
]
#pagebreak()

= Integration of the new requirements into the database <Requisiti_2>

#annotation[As mentioned earlier, the conceptual design of the database must be adapted to the new needs that emerged from the interviews with the ARPA FVG researchers. This section integrates the new requirements into the database, starting from the conceptual schema proposed at the end of the previous chapter.]

== Requirements and proposed schema changes

#annotation[The new information is intended to make the database more complete and functional. In particular, the opportunity was considered to introduce further entities and attributes in order to store additional data on the samples collected during the environmental surveys and on the sites involved. The following unstructured requirements guided the integration of the new elements, together with the corresponding proposed schema changes.]
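The integrity constraint just proposed (all samples associated with one survey must come from the same site) can be stated as a one-line check over the sample records. A hedged Python sketch with illustrative field names, not taken from the thesis schema:

```python
def same_site(samples: list[dict]) -> bool:
    """True if every sample of one survey was taken at a single site.

    Each sample is a dict with a 'site' key; the field names here are
    illustrative, mirroring the constraint proposed for the schema.
    """
    return len({s["site"] for s in samples}) <= 1

survey_samples = [
    {"code": "S-001", "site": "Hospital A"},
    {"code": "S-002", "site": "Hospital A"},
]
print(same_site(survey_samples))  # True
```

In the actual database the same rule would live server-side, e.g. as a trigger or a validation function, so that no application can violate it.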
#linebreak()
*Meteorological data* We consider it worthwhile to keep information on the meteorological and climatic conditions of the sites where the environmental investigations are carried out, since such data can help assess the impact of environmental conditions on the spread of the bacterium and reveal possible correlations between the presence of Legionella and particular climatic factors. This information is collected by the weather stations located across the territory and includes data on temperature, humidity and atmospheric pressure. In the database we propose introducing an entity named _stazione meteorologica_, identified by its geographic position, which can be represented by either the address or the coordinates, and which stores the meteorological data collected. This entity is associated with a _sito_ as follows: each site is related to the nearest weather station, which provides the data on the local climatic conditions.
#linebreak()
*pH analysis* A second consideration concerns broadening the scope of the analyses performed on the samples collected during the environmental investigations. In particular, we suggest introducing a new type of analysis, named _analisi del pH_, which measures the acidity or alkalinity of the sampled water. This parameter is of fundamental importance for assessing water quality and the presence of Legionella since, as seen earlier, the bacterium thrives in water with a neutral or slightly alkaline pH.
#linebreak()
*Genomic information* Still with regard to the analyses performed on the samples, the interviews revealed the intention to store genomic information about the bacterium, in particular to record the presence, or absence, of specific genes and to identify the genetic factors that influence the spread of the bacterium.
To this end, a genomic analysis must be performed on the collected samples to identify the DNA sequence of Legionella. This information is stored in an _analisi genomica_ entity, a specialisation of the _analisi_ entity, which contains the entire Legionella DNA sequence, expressed with the four letters denoting the nitrogenous bases (A, T, C, G). Each sequenced genome is to be associated with the corresponding known Legionella genes present in the BLAST reference databases#footnote[«Basic Local Alignment Search Tool (Altschul et al., 1990 & 1997) is a sequence comparison algorithm optimized for speed used to search sequence databases for optimal local alignments to a query.» #cite(<BLAST>, form: "full")]. These genes are stored in a _gene_ entity, uniquely identified by a key corresponding to the relevant protein ID#footnote("A unique identifier associated with each protein mapped in the BLAST reference databases.") and characterised by the gene name, when present in the database used for the analysis. To this entity, whose purpose is to hold stable and well-defined information on the known Legionella genes, we propose associating a _gene del genoma_ entity, representing the genes identified in each sequenced genome. The similarity between the known genes and those identified by the analysis is tracked through the main parameters returned by the BLAST queries. The purpose of this approach is to allow, in the future, as gene-recognition techniques improve and more data become available, a re-evaluation of the identified genes, in order to determine whether genes emerge with greater similarity than those currently recorded for the analysed genome. Each entry of the _gene del genoma_ table is identified by the combination of the protein ID, the genome it belongs to, and the absolute position within the genome.
Finally, for each _gene del genoma_ we intend to record its position relative to the other genes within the sequenced genetic profile. This information is essential for assessing the proximity between genes and for identifying possible distribution patterns within the Legionella genome. In practical terms, we suggest introducing a self-referencing relationship on the _gene del genoma_ entity, in order to link each identified gene to its relative position#footnote("Defined in terms of proximity to other genes within the sequenced genome.") within the sequenced genome. The cardinality of this relationship is defined as (0,1) to (0,1), meaning that each gene can be associated with zero or one gene with respect to which it is sequential in the genetic profile. This configuration reflects the currently limited knowledge of Legionella genes, which may leave some areas of the genome without associations. Note that the sequentiality relationship between genes is one-directional: for each gene, only the information about its predecessor in the genetic profile is kept. This choice is driven by the intention to keep the computational cost of managing the information low, thus avoiding the risk of inconsistency caused by data duplication. In this way no additional integrity constraints are needed, at the price of a more complex query whenever the information is required in the direction opposite to the one defined by the relationship.
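Such a reverse-direction query can be sketched as follows. This is a sketch under assumptions: the table Gene_del_genoma and its predecessor columns match the physical schema defined later in this document, and the literal values are hypothetical. Finding the *successor* of a given gene means searching for the row whose predecessor fields point at it.

```SQL
-- Find the successor of a given gene within a sequenced genome:
-- the gene whose predecessor reference points at the given gene.
SELECT g.posizione, g.protein_ID
FROM Gene_del_genoma g
WHERE g.codice_genoma_predecessore = 'GEN001'  -- hypothetical genome code
  AND g.posizione_predecessore     = 1024      -- position of the known gene
  AND g.protein_ID_predecessore    = 'P00001'; -- protein ID of the known gene
```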
Note that the only integrity constraints that become necessary are the following: a gene of a genome cannot be associated with itself, nor can it be associated with another gene if there are known genes whose absolute position is greater than that of the gene with which the sequentiality relationship is being established, but smaller than that of the inserted gene.
#pagebreak()
== E-R diagram integrating the new requirements <ER_aggiornato>
#annotation[Following the proposed changes, the E-R diagram below was produced.]
#linebreak() #linebreak() #linebreak() #linebreak() #linebreak() #linebreak() #linebreak() #linebreak()
#rotate(90deg)[
  #figure(
    supplement: "Figure",
    image("/img/er_aggiornato.png", width: 140%),
    caption: [ER diagram],
  )
]
#pagebreak()
=== Notes
#annotation[Note how the introduction of new entities and relationships, while enriching the information stored in the database, considerably increases the complexity of the system. More specifically, the entities involved in storing the genomic information require extra care: to keep the data consistent with the information available in the BLAST databases, or in the other tools that may be used to identify and classify genes, the instances of the _gene_ entity must be constantly updated with the most recent data.] Finally, note that the main operations performed on the database are the insertion, modification and deletion of data. Queries, by contrast, are limited to a small set, aimed at obtaining spatial information on the samples, the analyses performed and the results obtained, or genetic information.
Therefore, we prefer a structure that is easy to maintain and optimised for the fundamental operations, which is already adequate for the operations mentioned above, rather than a more complex structure designed to optimise queries at a higher cost for data management. Considerations on integrity constraints are postponed to the @Progettazione_fisica[chapter], where, once the design phase is complete, it will be possible to obtain a fully transparent and definitive view of the entities involved in the system, of their attributes and of the relationships between them.
#pagebreak()
= Logical design of the database <Progettazione_logica>
== Restructuring the conceptual model
#annotation[With the conceptual design of the database complete, it is worth carrying out one last revision of the model in order to produce its final structure, free of discretionary elements. This unit reports the changes, together with their motivations, made to the E-R schema proposed in the #ref(<ER_aggiornato>, supplement: "section"), with the objective of resolving the generalisations, composite attributes and multi-valued attributes present in the figure.] First, we address the entities involved in generalisation relationships. For the _analisi_ entity, we propose removing the generalisation altogether and instead associating the different types of analysis directly with the samples they are performed on. Each analysis is uniquely identified by a code and characterised both by the execution date, a property inherited from the suppressed _analisi_ entity, and by the attributes specific to each specialisation.
Furthermore, although not entirely rigorous from a scientific standpoint, we propose merging the tables representing the _PCR qualitativa_ and _PCR quantitativa_ analyses into a single one named _analisi PCR_, which holds information of both kinds on the analysed samples. This simplification is considered acceptable because the results produced by the two analyses are intrinsically correlated and can be stored more efficiently within a single entity. Indeed, the qualitative PCR, which detects the presence of Legionella DNA, is, when it returns a positive result, followed by the quantitative PCR, which measures the concentration of the bacterium in the sample. The proposed solution lightens the database structure and simplifies data insertion and consultation, without losing information or increasing the complexity of the system in terms of additional integrity constraints. Note that not all analyses are performed on all samples, but sometimes only on a subset. For example, as suggested by the guidelines for the prevention and control of legionellosis, «since q-PCR is indeed advantageous in many respects but not yet internationally validated, it can, to date, only be recommended for the rapid analysis of numerous samples taken from sites likely associated with a case, or even more so with a cluster, of legionellosis, as it can quickly rule out negative sites and identify positive ones» #footnote[#cite( <LineeGuida>, form:"normal" ), _Linee guida per la prevenzione ed il controllo della legionellosi_, p. 21]. In other words, the guidelines themselves suggest performing the culture analysis only when the q-PCR result is positive, without however establishing a convention.
This lack of a convention leaves room for different interpretations; we therefore decided to define the cardinality of the relationship between _campione_ and each of the _analisi_ as "(0,1) to (1,1)", establishing that a sample can be associated with zero or one analysis of each specific type, while every analysis is always associated with a sample. This choice also guarantees the backward compatibility of the system: since the analyses added later, namely _analisi del pH_ and _analisi genomica_, cannot be associated with pre-existing samples, samples must not be required to carry every type of analysis. As for the specialisation of the _analisi colturale_, namely _analisi colturale positiva_, we suggest replacing it with an attribute named sierogruppo belonging to the _analisi colturale_ entity itself. This change preserves the information about the Legionella serogroup identified in the sample without introducing a new entity, thus saving space. Note that it entails an integrity constraint ensuring that only positive culture analyses are associated with a serogroup; more details on integrity constraints are provided in the next chapter. A further point concerns the resolution of composite attributes. Regarding the _simile a_ relationship between the _gene_ and _gene del genoma_ entities, we propose decomposing the similarity attribute into its three components: percent identity, query cover and e-value. Moreover, the cardinality of the relationship suggests relocating these attributes to the _gene del genoma_ entity.
As for the composite, multi-valued attribute _dati registrati_ of the _stazione meteorologica_ entity, the most suitable solution is to replace it with a new table named _dati meteorologici_, linked by a 1-to-N relationship to the _stazione meteorologica_ entity. This new entity is weak with respect to _stazione meteorologica_ and stores the following information: the acquisition date of each data set, which forms part of the primary key, the temperature, the humidity and the atmospheric pressure. Finally, for the indirizzo and coordinate attributes, we consider it appropriate to adopt the same solution for all their occurrences in the schema: the coordinate attribute is replaced by the latitude/longitude pair, which forms the primary key of the entities it belongs to, namely _sito_ and _stazione meteorologica_; the indirizzo attribute is resolved analogously, replacing it with the fields via, numero civico, CAP and città. This solution is driven by the intention to exploit geographic information for targeted analyses of the spread of Legionella and for identifying possible clusters. In particular, it considerably simplifies the structure of the queries needed to obtain such information. For example, a query searching for all positive samples collected on a given date in a specific street of a certain city could return incorrect results if the address were stored as a single text string, since it would not be possible to distinguish the street component from the city component. The proposed solution, by contrast, yields correct results consistent with expectations.
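The benefit of the decomposed address can be sketched with a query of this kind. This is a sketch under assumptions: the column names follow the physical schema defined later in this document, the join path between Campione and Sito is abbreviated for illustration (the hypothetical columns latitudine_sito and longitudine_sito stand in for the actual linking chain through the sampling point), and all literal values are made up.

```SQL
-- Positive culture analyses collected on a given date in a given
-- street of a given city. With a single address string, the street
-- and city could not be filtered separately.
SELECT c.codice
FROM Analisi_colturale a
JOIN Campione c ON c.codice = a.codice_campione
JOIN Sito s     ON s.latitudine  = c.latitudine_sito   -- hypothetical link
               AND s.longitudine = c.longitudine_sito  -- hypothetical link
WHERE a.esito = TRUE
  AND a.data_ora   = DATE '2023-05-10'
  AND s.via_piazza = 'Via Roma'
  AND s.città      = 'Trieste';
```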
#pagebreak()
== Final E-R diagram
#annotation[The final E-R diagram, resulting from the reworking of the schema proposed in the @ER_aggiornato[section], is presented below.]
#linebreak() #linebreak() #linebreak() #linebreak()
#rotate(90deg)[
  #figure(
    supplement: "Figure",
    image("/img/er_finale.png", width: 140%),
    caption: [ER diagram],
  )
]
#pagebreak()
== Translating the conceptual model into the logical model
#annotation[Translating the conceptual model into the logical one involves defining the tables, their fields, and hence the domains, the primary keys and the ways the tables are associated. This section presents the design choices adopted.] Translating entities into tables is direct and poses no particular difficulty: each entity is represented by a table in which every attribute corresponds to a column. As for relationships, three fundamental kinds must be considered: one-to-one, one-to-many and many-to-many. One-to-one relationships are the most delicate, since it is not immediately clear which entity should map the relationship, that is, which entity should hold the foreign key. In our context two main situations arise: the self-referencing relationship between genes of a genome, and the relationship between samples and analyses. In the first case, the relationship is mapped onto the _gene del genoma_ entity because, although the analysis tools have some limitations, in most analysed genomes an actual sequentiality between genes does exist; the space wasted by missing information is therefore limited and does not justify introducing a new table which, while saving space, would bring integrity and complexity problems. As for the relationships between samples and analyses, we decided to map them onto the _analisi_ entities.
Note that the alternative solution, mapping the relationship onto the _campione_ entity, could waste space because some samples are never subjected to a given type of analysis. One-to-many relationships are simpler to handle, since the foreign key necessarily goes into the entity on the "many" side of the relationship. Finally, many-to-many relationships are handled by introducing an association table containing the foreign keys of the two entities involved. Lastly, for weak entities such as _dati meteorologici_, _punto di prelievo_ and _gene del genoma_, the primary key is composed of the primary key of the strong entity (or entities) they are associated with, together with an attribute that uniquely identifies the instance within the strong entity.
== Relational schema
Based on the above considerations, we define the relational schema, which represents the logical structure of the database.
#figure(
  image("/img/relazionale.png", width: 100%),
  caption: [Relational schema],
)
#pagebreak()
= Physical design of the database <Progettazione_fisica>
This section carries out the implementation of the database, starting with the definition of the domains, i.e. the set of admissible values for each column, and continuing with the creation of the tables and of the functions implementing the constraints that ensure data integrity. The DBMS chosen for the database is PostgreSQL#footnote[#cite(<PostgreSQL>, form:"full")], a relational database management system first released in 1989. The choice is motivated by the flexibility and extensibility of the system, which make it possible to implement complex integrity constraints and manage large amounts of data efficiently.
== Defining the domains
#annotation[Most of the domains of the columns of each table are easy to determine. Some, however, require a more detailed definition to guarantee a correct representation of the data and to ease query operations.] First, consider the domains for the quantification of Legionella in the samples, expressed in ufc/l and µg/l. For both it is appropriate to restrict the domain to non-negative integers, since expressing the presence of Legionella with negative values makes no sense. The same applies to the values measuring the volume of a sample, the humidity and the atmospheric pressure, for which we propose a non-negative decimal domain, since negative values make no sense for these physical quantities.

```SQL
-- Domain for non-negative integer values
CREATE DOMAIN INT_POS AS INTEGER
    CONSTRAINT valore_non_negativo CHECK (VALUE >= 0);

-- Domain for non-negative real values
CREATE DOMAIN FLOAT_POS AS FLOAT
    CONSTRAINT valore_non_negativo CHECK (VALUE >= 0);
```

#annotation[Next, consider the domain of the pH measurement. Since the range of admissible pH values lies between 0 and 14, we propose defining the pH domain as a decimal number within this interval.]

```SQL
CREATE DOMAIN PH AS FLOAT
    CONSTRAINT ph_range CHECK (VALUE >= 0 AND VALUE <= 14);
```

#annotation[A further aspect concerns the categoria and matrice columns of the _sito_ and _campione_ tables respectively. For the categoria column, we propose restricting the domain to a small, semantically related set of terms, such as "ospedaliero", "termale", "alberghiero", "pubblico" and "privato". The value "pubblico" covers all buildings, not belonging to the other categories, intended for use by a broad and varied public. Analogously, for the matrice column we propose a domain comprising only a finite set of matrices, such as "acqua", "aria", "biofilm" and "sedimento".]

```SQL
-- Enum type for the category of a site
CREATE TYPE CATEGORIA AS ENUM
    ('ospedaliero', 'termale', 'alberghiero', 'pubblico', 'privato');

-- Enum type for the matrix of a sample
CREATE TYPE MATRICE AS ENUM ('acqua', 'aria', 'biofilm', 'sedimento');
```

#annotation[The CAP column of the _sito_ and _stazione meteorologica_ tables is another noteworthy case. The CAP is a five-digit code which, in Italy, may begin with a zero; a purely integer domain would reject such values, so we suggest a five-character text domain restricted to strings of exactly five digits.]

```SQL
CREATE DOMAIN CAP AS CHAR(5)
    CONSTRAINT cap_format CHECK (VALUE ~ '^[0-9]{5}$');
```

#annotation[Moreover, we propose restricting the range of admissible values for the _percent-identity_, _query-cover_ and _e-value_ columns of the _gene del genoma_ entity. In practical terms, we suggest a float domain between 0 and 100 for the _percent-identity_ and _query-cover_ columns, since they represent similarity percentages between the known genes and those identified by the analysis. For the _e-value_, instead, we propose reusing the FLOAT_POS domain defined above, since a non-negative numeric value is required.]

```SQL
CREATE DOMAIN PERCENT AS FLOAT
    CONSTRAINT percent_range CHECK (VALUE >= 0 AND VALUE <= 100);
```

#annotation[Finally, as for the geographic coordinate attributes, latitude and longitude, the domain must be limited to values between -90 and 90 for latitude and between -180 and 180 for longitude.]
```SQL
-- Domain for latitude
CREATE DOMAIN LATITUDINE AS REAL
    CONSTRAINT latitudine_range CHECK (VALUE >= -90 AND VALUE <= 90);

-- Domain for longitude
CREATE DOMAIN LONGITUDINE AS REAL
    CONSTRAINT longitudine_range CHECK (VALUE >= -180 AND VALUE <= 180);
```

=== Notes
Based on observations reported in several scientific articles on the genetic aspects of the bacterium, such as "_Draft genome sequences from 127 Legionella spp. strains isolated in water systems linked to legionellosis outbreaks_"#footnote[#cite(<DraftGenome>, form:"full")], the average length of the Legionella pneumophila genome is about 3,500,000 base pairs, with significant variability among sequenced genomes. Given the size of this datum, we propose assigning it the text#footnote("https://www.postgresql.org/docs/current/datatype-character.html") type. This data type stores strings of arbitrary length and is therefore well suited for DNA sequences. Note also that, in terms of memory usage, PostgreSQL's TOAST#footnote("https://www.postgresql.org/docs/current/storage-toast.html") mechanism handles even large attributes efficiently, storing the data in separate pages and compressing it to reduce the overall space used.
#pagebreak()
== Creating the tables
#annotation[The code for creating the tables follows trivially from the relational model. The handling of foreign-key constraints, however, deserves particular attention: one must consider the behaviour of the foreign keys when a row they reference is deleted or updated. Three main options exist: preventing the operation (RESTRICT), which rejects the operation itself; cascading (CASCADE), which updates or deletes the rows linked to the affected row; and setting a null value (SET NULL), which stores NULL in the rows referencing the deleted or updated row.] For reasons of space, only some examples of table creation are given here; the full code is reported in the appendix.
#linebreak()
*Analisi* A relevant aspect concerns the deletion of a sample. In general, we consider it appropriate to delete the data associated with the sample, since they would lose their meaning without it. However, we propose preventing the deletion when the sample is associated with a genomic analysis. This decision aims to break the deletion chain that would otherwise remove all the recorded genomic information, thus avoiding the accidental loss of a large amount of data. Note that, to delete such a sample, it suffices to first remove the associated genomic analysis, after which the sample itself can be deleted. As examples, the _analisi PCR_ and _analisi del genoma_ tables are shown below.
```SQL
-- PCR analysis
CREATE TABLE Analisi_PCR (
    codice          CHAR(6) NOT NULL,
    codice_campione CHAR(6) NOT NULL,
    data_ora        DATE NOT NULL,
    esito           BOOLEAN NOT NULL,
    µg_l            INT_POS NOT NULL,
    PRIMARY KEY (codice),
    FOREIGN KEY (codice_campione) REFERENCES Campione(codice)
        ON DELETE CASCADE ON UPDATE CASCADE
);

-- Genomic analysis
CREATE TABLE Analisi_genomica (
    codice          CHAR(6) NOT NULL,
    codice_campione CHAR(6) NOT NULL,
    data_ora        DATE NOT NULL,
    genoma          TEXT NOT NULL,
    PRIMARY KEY (codice),
    FOREIGN KEY (codice_campione) REFERENCES Campione(codice)
        ON DELETE RESTRICT ON UPDATE CASCADE
);
```

#linebreak()
*Gene del genoma* A second particularly relevant case concerns the _gene del genoma_ table. For the relationship with the known Legionella genes, we propose cascading updates of the protein-ID key and preventing the deletion of known genes for which associated genome genes exist. For the relationship with the sequenced Legionella genomes, instead, both updates and deletions should cascade: if a genome is modified, the associated genome genes must be updated, while if it is deleted, the linked genome genes should be removed as well, to avoid inconsistencies due to orphaned data. Finally, for the sequentiality relationship between genome genes, the solution follows directly from the cardinality of the relationship: replace the recorded values with NULL.
```SQL
CREATE TABLE Gene_del_genoma (
    posizione                  INTEGER NOT NULL,
    codice_genoma              CHAR(6) NOT NULL,
    protein_ID                 CHAR(6) NOT NULL,
    posizione_predecessore     INTEGER,
    codice_genoma_predecessore CHAR(6),
    protein_ID_predecessore    CHAR(6),
    query_cover                PERCENT NOT NULL,
    percent_identity           PERCENT NOT NULL,
    e_value                    FLOAT_POS NOT NULL,
    PRIMARY KEY (posizione, codice_genoma, protein_ID),
    FOREIGN KEY (codice_genoma) REFERENCES Analisi_genomica(codice)
        ON DELETE CASCADE ON UPDATE CASCADE,
    FOREIGN KEY (protein_ID) REFERENCES Gene(protein_ID)
        ON DELETE RESTRICT ON UPDATE CASCADE,
    FOREIGN KEY (posizione_predecessore, codice_genoma_predecessore,
                 protein_ID_predecessore)
        REFERENCES Gene_del_genoma(posizione, codice_genoma, protein_ID)
        ON DELETE SET NULL ON UPDATE CASCADE
);
```

#linebreak()
*Sito* Lastly, a note on the sito table. Since each site must be linked to its nearest weather station, updates and deletions of stations are handled by a trigger that automatically reassigns all sites linked to the affected station to the nearest remaining weather station. Consequently, the foreign key of this table needs no explicit ON DELETE or ON UPDATE action. A more thorough treatment is given in the following chapters. For efficiency it is advisable to define a spatial index on the latitude and longitude columns of the _sito_ and _stazione meteorologica_ tables. This index improves the performance of queries that search by distance between two points on the Earth's surface. In implementation terms, a geometry#footnote("https://postgis.net/docs/geometry.html") field from the PostGIS package#footnote[#cite(<PostGIS>, form: "full")] is introduced, corresponding to the geographic coordinates, and a spatial index is then defined on this field to optimise spatial search operations.
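The spatial index just mentioned can be sketched as follows. This is a sketch under assumptions: it presupposes the geom columns of the Sito and Stazione_meteorologica tables defined in this chapter, the PostGIS extension being available, and the index names being free choices.

```SQL
-- Enable PostGIS (once per database), then index the geometry columns
-- with GiST so that distance-based searches such as ST_DWithin can
-- use the index instead of scanning every row.
CREATE EXTENSION IF NOT EXISTS postgis;

CREATE INDEX idx_sito_geom
    ON Sito USING GIST (geom);

CREATE INDEX idx_stazione_geom
    ON Stazione_meteorologica USING GIST (geom);
```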
```SQL
CREATE TABLE Sito (
    latitudine                   LATITUDINE NOT NULL,
    longitudine                  LONGITUDINE NOT NULL,
    latitudine_stazione          LATITUDINE NOT NULL,
    longitudine_stazione         LONGITUDINE NOT NULL,
    CAP                          CAP NOT NULL,
    via_piazza                   VARCHAR(25) NOT NULL,
    civico                       INTEGER NOT NULL,
    città                        VARCHAR(25) NOT NULL,
    nome                         VARCHAR(25),
    categoria                    CATEGORIA NOT NULL,
    materiale_tubature           VARCHAR(25),
    cloro                        BOOLEAN NOT NULL,
    anno_ultima_ristrutturazione DATE,
    caldaia                      VARCHAR(25),
    geom                         GEOMETRY(Point, 4326) NOT NULL,
    PRIMARY KEY (latitudine, longitudine),
    FOREIGN KEY (latitudine_stazione, longitudine_stazione)
        REFERENCES Stazione_meteorologica(latitudine, longitudine)
);
```

#annotation[Note that the code 4326 refers to the WGS 84 spatial reference system, a geographic coordinate system commonly used to represent data on the Earth's surface.] Moreover, the insertion of a new _stazione meteorologica_ or _sito_ must be structured so as to guarantee the consistency between the latitude and longitude values and the associated geographic point. To this end it is preferable to redefine the insertion functions of the tables involved, rather than introduce integrity constraints that would merely weigh down the code without any performance benefit. Specialised insertion functions, by contrast, guarantee that the data are handled correctly from the moment of insertion, at the cost of a slight increase in code complexity. The definition of these functions is reported in the @query[section].

== Defining the integrity constraints
#annotation[At this point we have a complete and definitive view of the database structure, which makes it possible to analyse the issues not yet resolved by the current schema. This section presents the integrity constraints needed to guarantee data consistency within the database, together with the reasons for introducing them.]

=== Foreign key of the _Sito_ table <gestione_sito>
#annotation[This section addresses the problems concerning the deletion and update of a weather station, mentioned in the previous chapter. Since each site must be associated with its nearest weather station, we propose implementing a trigger that, upon deletion or update of a weather station, automatically assigns the nearest remaining weather station to all sites previously linked to the affected station. This solution avoids a RESTRICT constraint, which would make deletions and updates harder to manage.] In detail, the update trigger reassigns the weather station associated with each site, replacing it with the nearest one, whenever coordinates change or a new weather station is inserted. The deletion trigger, instead, prevents the removal of the last weather station, ensuring that at least one remains to be associated with each site. When the deletion is allowed, the trigger updates the station coordinates of the affected sites, replacing them with those of the nearest remaining stations. For precision and efficiency, the implementation of these triggers relies on the ST_Distance#footnote("https://postgis.net/docs/ST_Distance.html") function of the PostGIS package, which computes the distance between two points on the Earth's surface.
Moreover, the ST_DWithin#footnote("https://postgis.net/docs/ST_DWithin.html") function, which operates on the spatial indexes, is used to determine whether two points lie within a given distance of each other, thus reducing the number of comparisons required. Below, only the code defining the trigger for the deletion of a weather station is reported. The code handling the insertion and update of a weather station, similar to the one shown, is given in the appendix.

```SQL
CREATE OR REPLACE FUNCTION update_stazione_meteorologica_on_delete() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
BEGIN
    -- Check that more than one weather station exists
    IF (SELECT COUNT(*) FROM Stazione_meteorologica) = 1 THEN
        RAISE EXCEPTION 'It is not possible to delete all the weather stations.';
    END IF;
    -- Find the nearest weather station for each site that referenced the deleted station
    UPDATE Sito
    SET latitudine_stazione_meteorologica = stazione.latitudine,
        longitudine_stazione_meteorologica = stazione.longitudine
    FROM (
        SELECT sito.latitudine AS sito_latitudine,
               sito.longitudine AS sito_longitudine,
               stazione.latitudine,
               stazione.longitudine,
               ST_Distance(sito.geom, stazione.geom) AS distance,
               ROW_NUMBER() OVER (
                   PARTITION BY sito.latitudine, sito.longitudine
                   ORDER BY ST_Distance(sito.geom, stazione.geom)
               ) AS rn
        FROM Sito sito
        JOIN Stazione_meteorologica stazione
          ON ST_DWithin(
                 geography(sito.geom),
                 geography(stazione.geom),
                 100000 -- 100 km
             )
        WHERE (stazione.latitudine != OLD.latitudine
               OR stazione.longitudine != OLD.longitudine)
    ) AS stazione
    WHERE Sito.latitudine = stazione.sito_latitudine
      AND Sito.longitudine = stazione.sito_longitudine
      AND rn = 1;
    RETURN NULL;
END;
$$;

CREATE TRIGGER update_stazione_meteorologica_on_delete_trigger
BEFORE DELETE ON Stazione_meteorologica
FOR EACH ROW EXECUTE FUNCTION update_stazione_meteorologica_on_delete();
```

=== Constraints on the data

*Analyses* #annotation[As far as the analyses are concerned, we propose constraints that ensure the consistency of the recorded data.] Consider, first of all, the culture analysis entity. The well-formedness of the data recorded after each analysis requires the following integrity constraints concerning the positivity of the sample: no serogroup may be associated with a negative sample; each positive sample must have a colony-forming-units-per-liter (ufc/l) value greater than zero, while each negative sample must have a value equal to zero. For the PCR analysis entity, instead, the following constraint applies: each positive sample must have a micrograms-per-liter (µg/l) value greater than zero, while each negative sample must have a value equal to zero. For reasons of space, only the code for the constraints on the _culture analysis_ is reported. The code implementing the check function for the _PCR analysis_ table is analogous, except that it has no conditions on the serotype, and can be found in the appendix.

```SQL
CREATE OR REPLACE FUNCTION check_esito_Colturale() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
BEGIN
    IF NEW.esito = TRUE AND NEW.ufc_l = 0 THEN
        RAISE EXCEPTION 'The ufc/l value must be greater than 0 when the outcome is positive.';
    ELSIF NEW.esito = FALSE AND NEW.ufc_l > 0 THEN
        RAISE EXCEPTION 'The ufc/l value must be 0 when the outcome is negative.';
    ELSIF NEW.esito = FALSE AND NEW.sierotipo IS NOT NULL THEN
        RAISE EXCEPTION 'The serotype cannot be specified when the outcome is negative.';
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER check_esito_Colturale
BEFORE INSERT OR UPDATE ON Analisi_culturale
FOR EACH ROW EXECUTE FUNCTION check_esito_Colturale();
```

#linebreak() *Samples* #annotation[A further precaution is required in the case of samples.
Indeed, as already mentioned, since an environmental investigation is a collection of samples collected on the same date at a specific site, it is necessary to guarantee that all the samples associated with an investigation are taken at the same site. In practical terms, we propose a trigger that, upon insertion or update of a sample, verifies that the site associated with the sample is the same for all the samples belonging to the same investigation.]

```SQL
CREATE OR REPLACE FUNCTION check_campione_indagine() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
BEGIN
    IF EXISTS (
        SELECT 1
        FROM Indagine_ambientale ind
        JOIN Campione c ON ind.codice = c.codice_indagine
        WHERE ind.codice = NEW.codice_indagine
          AND (c.latitudine_sito != NEW.latitudine_sito
               OR c.longitudine_sito != NEW.longitudine_sito)
    ) THEN
        RAISE EXCEPTION 'Samples collected within the same investigation must be taken at the same site.';
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER check_campione_indagine
BEFORE INSERT OR UPDATE ON Campione
FOR EACH ROW EXECUTE FUNCTION check_campione_indagine();
```

#linebreak() *Genes* Finally, for the _genome gene_ entity, the sequential relationship between genes must be considered carefully. It is advisable to introduce constraints guaranteeing that a gene of a genome cannot be linked to a gene of a different genome, nor to itself, nor to another gene of the same genome when other genes exist whose absolute position is greater than that of the gene the relation is being established with, but smaller than that of the gene under consideration. These conditions are necessary to correctly represent the genetic sequence of Legionella and to prevent data inconsistencies.
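As an illustrative complement, the predecessor links can be followed with a recursive query that reconstructs the ordered gene sequence of a genome. This is only a sketch: the genome code 'GEN001' is hypothetical, and the query assumes exactly one gene per genome has no predecessor.

```SQL
-- Walk the chain of predecessor links of a genome, starting from the
-- gene that has no predecessor (sketch on hypothetical data).
WITH RECURSIVE catena AS (
    SELECT posizione, codice_genoma, protein_ID
    FROM Gene_genoma
    WHERE codice_genoma = 'GEN001'
      AND posizione_predecessore IS NULL
    UNION ALL
    SELECT g.posizione, g.codice_genoma, g.protein_ID
    FROM Gene_genoma g
    JOIN catena c
      ON g.codice_genoma_predecessore = c.codice_genoma
     AND g.posizione_predecessore = c.posizione
     AND g.protein_ID_predecessore = c.protein_ID
)
SELECT posizione, protein_ID FROM catena;
```

Such a query is a consistency aid only; the triggers remain the mechanism that enforces the constraints.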
```SQL
CREATE OR REPLACE FUNCTION check_genoma() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
BEGIN
    IF NEW.codice_genoma != NEW.codice_genoma_predecessore THEN
        RAISE EXCEPTION 'A gene cannot be linked to a gene of a different genome.';
    END IF;
    IF NEW.posizione = NEW.posizione_predecessore
       AND NEW.codice_genoma = NEW.codice_genoma_predecessore
       AND NEW.protein_ID = NEW.protein_ID_predecessore THEN
        RAISE EXCEPTION 'A gene cannot be linked to itself.';
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER check_genoma
BEFORE INSERT OR UPDATE ON Gene_genoma
FOR EACH ROW EXECUTE FUNCTION check_genoma();

CREATE OR REPLACE FUNCTION check_predecessore() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
BEGIN
    IF EXISTS (
        SELECT 1 FROM Gene_genoma
        WHERE codice_genoma = NEW.codice_genoma_predecessore
          AND posizione > NEW.posizione_predecessore
          AND posizione < NEW.posizione
    ) THEN
        RAISE EXCEPTION 'The predecessor gene is not correct: a gene exists whose position lies between the position of the gene and that of its predecessor.';
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER check_predecessore
BEFORE INSERT OR UPDATE ON Gene_genoma
FOR EACH ROW EXECUTE FUNCTION check_predecessore();
```

#pagebreak()
== Triggers for the management of isolated instances

#annotation[Before implementing some notable operations, it is essential to define triggers that handle, according to a well-defined policy, the deletion and update operations on certain tables. These triggers prevent situations in which some entries become "meaningless" after the modification or deletion of other entries they were previously linked to.] #linebreak() *Environmental investigation* A first case concerns the management of environmental investigations, defined as a collection of samples collected on a specific date at a particular site.
Following the modification or deletion of the entries for the samples belonging to an investigation, the investigation would be left with no linked sample, i.e., effectively meaningless. To avoid this situation, we propose a trigger that, when all the samples associated with an investigation are deleted, automatically deletes the investigation as well. #linebreak() *Requester and clinical follow-up* Another fundamental aspect concerns the management of the entries for requesters and clinical follow-ups. If all the investigations associated with a given requester or clinical follow-up are deleted, the requester or the clinical follow-up should be removed as well. Here too, a trigger automates the deletion process. For the remaining tables of the database, no triggers for deletion or update operations are deemed necessary: these tables contain essentially stable entities that remain valid even without linked entries, since they may be reused in the future. Only the code defining the trigger for the deletion of an environmental investigation is reported. Analogous triggers have been implemented for the _Richiedente_ and _FollowUp_clinico_ tables and can be found in the appendix.
```SQL
CREATE OR REPLACE FUNCTION delete_indagine() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
BEGIN
    IF NOT EXISTS (
        SELECT 1 FROM Campione
        WHERE codice_indagine = OLD.codice_indagine
    ) THEN
        DELETE FROM Indagine_ambientale WHERE codice = OLD.codice_indagine;
    END IF;
    RETURN NULL;
END;
$$;

CREATE TRIGGER delete_indagine
AFTER DELETE ON Campione
FOR EACH ROW EXECUTE FUNCTION delete_indagine();
```

== Insert and update operations<query>

#annotation[The last section of this work is devoted to redefining some insert and update operations, with the aim of making the database easier to use.] #linebreak() *Weather station* A first notable operation is the insertion of a new weather station. For this operation it is appropriate to define a function that, from the geographic coordinates supplied by the user, computes the associated geographic point. This solution is particularly useful to guarantee data consistency: it prevents the geographic coordinates and the associated geographic point from being inserted inconsistently, and it simplifies the operation for the user. Note that the update operation is entirely analogous to the insertion; therefore only the code for inserting a new weather station is reported below, while the update code can be found in the appendix.
```SQL
CREATE OR REPLACE FUNCTION insert_stazione_meteorologica(
    latitudine FLOAT,
    longitudine FLOAT,
    via_piazza VARCHAR(25),
    numero_civico INTEGER,
    CAP INTEGER,
    città VARCHAR(25)
) RETURNS VOID
LANGUAGE plpgsql AS $$
DECLARE
    point GEOMETRY;
BEGIN
    -- Build a geometric point from latitude and longitude
    point := ST_SetSRID(ST_MakePoint(longitudine, latitudine), 4326);
    -- Insert the values into the Stazione_meteorologica table
    INSERT INTO Stazione_meteorologica
        (latitudine, longitudine, via_piazza, civico, CAP, città, geom)
    VALUES
        (latitudine, longitudine, via_piazza, numero_civico, CAP, città, point);
    RETURN;
END;
$$;
```

#linebreak() *Site* A second point of interest is the insertion or update of a site. Here too, it is appropriate to define a function that, from the geographic coordinates supplied by the user, computes the associated geographic point. In addition, the site must automatically be made to reference the nearest weather station. This link, at the moment a new entry is created in the site table, is guaranteed by employing, inside the insertion function, a user-defined function almost identical to those seen in the @gestione_sito[paragraph], which computes the distance between the site and the weather stations present in the database and associates the site with the nearest station. The code for the update operation is reported below; for the insertion code, see the appendix.
```SQL
CREATE OR REPLACE FUNCTION update_sito(
    old_latitudine LATITUDINE,
    old_longitudine LONGITUDINE,
    new_latitudine LATITUDINE,
    new_longitudine LONGITUDINE,
    new_via_piazza VARCHAR(25),
    new_numero_civico INTEGER,
    new_CAP CAP,
    new_città VARCHAR(25),
    new_nome VARCHAR(25),
    new_categoria CATEGORIA,
    new_materiale_tubature VARCHAR(25),
    new_cloro BOOLEAN,
    new_anno_ultima_ristrutturazione DATE,
    new_caldaia VARCHAR(25)
) RETURNS VOID
LANGUAGE plpgsql AS $$
DECLARE
    point GEOMETRY;
    new_latitudine_stazione_meteorologica LATITUDINE;
    new_longitudine_stazione_meteorologica LONGITUDINE;
BEGIN
    point := ST_SetSRID(ST_MakePoint(new_longitudine, new_latitudine), 4326);

    SELECT stazione.latitudine, stazione.longitudine
    INTO new_latitudine_stazione_meteorologica, new_longitudine_stazione_meteorologica
    FROM Stazione_meteorologica stazione
    WHERE ST_DWithin(
              geography(point),
              geography(stazione.geom),
              100000 -- 100 km
          )
    ORDER BY ST_Distance(point, stazione.geom)
    LIMIT 1;

    UPDATE Sito
    SET latitudine = new_latitudine,
        longitudine = new_longitudine,
        latitudine_stazione_meteorologica = new_latitudine_stazione_meteorologica,
        longitudine_stazione_meteorologica = new_longitudine_stazione_meteorologica,
        via_piazza = new_via_piazza,
        civico = new_numero_civico,
        CAP = new_CAP,
        città = new_città,
        nome = new_nome,
        categoria = new_categoria,
        materiale_tubature = new_materiale_tubature,
        cloro = new_cloro,
        anno_ultima_ristrutturazione = new_anno_ultima_ristrutturazione,
        caldaia = new_caldaia,
        geom = point
    WHERE latitudine = old_latitudine AND longitudine = old_longitudine;
    RETURN;
END;
$$;
```

#pagebreak()
= Conclusions

#annotation[This thesis has illustrated the entire development process of a relational database for managing the data on the monitoring of the Legionella bacterium collected during the investigations carried out by ARPA FVG.
In particular, the first chapters presented an in-depth analysis of a pre-existing project, highlighting its strengths and weaknesses, and proposed a new conceptual model capable of solving the issues found and of integrating some new functionalities requested by the stakeholders. Subsequently, in the @Progettazione_logica[chapter], the logical schema of the database was defined, following the principles of the relational model described in the volume "Database Systems#footnote[#cite(<DatabaseSystems>) «_Database Systems: Concepts, Languages & Architectures_»]". Finally, the @Progettazione_fisica[section] presented the SQL code for creating the tables and defining the spatial indexes and integrity constraints, as well as for handling some insert and update operations, together with the motivations that guided the implementation.] The product of this work constitutes an initial resource for ARPA FVG for the monitoring of Legionella in our region, and makes it possible to build analyses to monitor and predict the spread of the bacterium, allowing the most effective prevention and control measures to be defined. Nevertheless, the work leaves room for further developments and improvements, such as the introduction of software applications for data collection and entry, currently performed manually through paper forms or spreadsheets without standardized procedures, and for querying the system, which may prove complex for non-expert users.
#pagebreak()
#set heading(numbering: none)
= Appendix

== Table creation and spatial index definition

```SQL
-- TABLE DEFINITIONS

-- Weather station
CREATE TABLE Stazione_meteorologica (
    latitudine LATITUDINE NOT NULL,
    longitudine LONGITUDINE NOT NULL,
    via_piazza VARCHAR(25) NOT NULL,
    civico INTEGER NOT NULL,
    CAP CAP NOT NULL,
    città VARCHAR(25) NOT NULL,
    geom GEOMETRY(Point, 4326) NOT NULL,
    PRIMARY KEY (latitudine, longitudine)
);

-- Weather data
CREATE TABLE Dati_meteorologici (
    data_ora TIMESTAMP NOT NULL,
    latitudine_stazione LATITUDINE NOT NULL,
    longitudine_stazione LONGITUDINE NOT NULL,
    temperatura FLOAT NOT NULL,
    umidità FLOAT_POS NOT NULL,
    pressione_atmosferica FLOAT NOT NULL,
    PRIMARY KEY (data_ora, latitudine_stazione, longitudine_stazione),
    FOREIGN KEY (latitudine_stazione, longitudine_stazione)
        REFERENCES Stazione_meteorologica(latitudine, longitudine)
        ON DELETE CASCADE ON UPDATE CASCADE
);

-- Site
CREATE TABLE Sito (
    latitudine LATITUDINE NOT NULL,
    longitudine LONGITUDINE NOT NULL,
    latitudine_stazione_meteorologica LATITUDINE NOT NULL,
    longitudine_stazione_meteorologica LONGITUDINE NOT NULL,
    CAP CAP NOT NULL,
    via_piazza VARCHAR(25) NOT NULL,
    civico INTEGER NOT NULL,
    città VARCHAR(25) NOT NULL,
    nome VARCHAR(25),
    categoria CATEGORIA NOT NULL,
    materiale_tubature VARCHAR(25),
    cloro BOOLEAN NOT NULL,
    anno_ultima_ristrutturazione DATE,
    caldaia VARCHAR(25),
    geom GEOMETRY(Point, 4326) NOT NULL,
    PRIMARY KEY (latitudine, longitudine),
    FOREIGN KEY (latitudine_stazione_meteorologica, longitudine_stazione_meteorologica)
        REFERENCES Stazione_meteorologica(latitudine, longitudine)
);

-- Sampling point
CREATE TABLE Punto_di_prelievo (
    piano INTEGER NOT NULL,
    stanza VARCHAR(15) NOT NULL,
    latitudine_sito LATITUDINE NOT NULL,
    longitudine_sito LONGITUDINE NOT NULL,
    descrizione VARCHAR(100),
    componente_idraulica VARCHAR(25) NOT NULL,
    PRIMARY KEY (latitudine_sito, longitudine_sito, piano, stanza),
    FOREIGN KEY (latitudine_sito, longitudine_sito)
        REFERENCES Sito(latitudine, longitudine)
        ON DELETE RESTRICT ON UPDATE CASCADE
);

-- Clinical follow-up
CREATE TABLE FollowUp_clinico (
    codice CHAR(6) NOT NULL,
    PRIMARY KEY (codice)
);

-- Requester
CREATE TABLE Richiedente (
    codice CHAR(6) NOT NULL,
    nome VARCHAR(25),
    PRIMARY KEY (codice)
);

-- Environmental investigation
CREATE TABLE Indagine_ambientale (
    codice CHAR(6) NOT NULL,
    codice_FollowUp CHAR(6),
    codice_Richiedente CHAR(6),
    data DATE NOT NULL,
    PRIMARY KEY (codice),
    FOREIGN KEY (codice_FollowUp) REFERENCES FollowUp_clinico(codice)
        ON DELETE SET NULL ON UPDATE CASCADE,
    FOREIGN KEY (codice_Richiedente) REFERENCES Richiedente(codice)
        ON DELETE SET NULL ON UPDATE CASCADE
);

-- Sample
CREATE TABLE Campione (
    codice CHAR(6) NOT NULL,
    longitudine_sito LONGITUDINE NOT NULL,
    latitudine_sito LATITUDINE NOT NULL,
    piano_punto_prelievo INTEGER NOT NULL,
    stanza_punto_prelievo VARCHAR(15) NOT NULL,
    codice_indagine CHAR(6) NOT NULL,
    temperatura FLOAT NOT NULL,
    matrice MATRICE NOT NULL,
    volume FLOAT_POS NOT NULL,
    PRIMARY KEY (codice),
    FOREIGN KEY (longitudine_sito, latitudine_sito, piano_punto_prelievo, stanza_punto_prelievo)
        REFERENCES Punto_di_prelievo(longitudine_sito, latitudine_sito, piano, stanza)
        ON DELETE RESTRICT ON UPDATE CASCADE,
    FOREIGN KEY (codice_indagine) REFERENCES Indagine_ambientale(codice)
        ON DELETE RESTRICT ON UPDATE CASCADE
);

-- PCR analysis
CREATE TABLE Analisi_PCR (
    codice CHAR(6) NOT NULL,
    codice_campione CHAR(6) NOT NULL,
    data_ora DATE NOT NULL,
    esito BOOLEAN NOT NULL,
    µg_l INT_POS NOT NULL,
    PRIMARY KEY (codice),
    FOREIGN KEY (codice_campione) REFERENCES Campione(codice)
        ON DELETE CASCADE ON UPDATE CASCADE
);

-- Culture analysis
CREATE TABLE Analisi_culturale (
    codice CHAR(6) NOT NULL,
    codice_campione CHAR(6) NOT NULL,
    data_ora DATE NOT NULL,
    esito BOOLEAN NOT NULL,
    ufc_l INT_POS NOT NULL,
    sierotipo VARCHAR(50),
    PRIMARY KEY (codice),
    FOREIGN KEY (codice_campione) REFERENCES Campione(codice)
        ON DELETE CASCADE ON UPDATE CASCADE
);

-- pH analysis
CREATE TABLE Analisi_pH (
    codice CHAR(6) NOT NULL,
    codice_campione CHAR(6) NOT NULL,
    data_ora DATE NOT NULL,
    ph PH NOT NULL,
    PRIMARY KEY (codice),
    FOREIGN KEY (codice_campione) REFERENCES Campione(codice)
        ON DELETE CASCADE ON UPDATE CASCADE
);

-- Genomic analysis
CREATE TABLE Analisi_genomica (
    codice CHAR(6) NOT NULL,
    codice_campione CHAR(6) NOT NULL,
    data_ora DATE NOT NULL,
    genoma TEXT NOT NULL,
    PRIMARY KEY (codice),
    FOREIGN KEY (codice_campione) REFERENCES Campione(codice)
        ON DELETE RESTRICT ON UPDATE CASCADE
);

-- Gene
CREATE TABLE Gene (
    protein_ID CHAR(6) NOT NULL,
    nome VARCHAR(75),
    PRIMARY KEY (protein_ID)
);

-- Genome gene
CREATE TABLE Gene_genoma (
    posizione INTEGER NOT NULL,
    codice_genoma CHAR(6) NOT NULL,
    protein_ID CHAR(6) NOT NULL,
    posizione_predecessore INTEGER,
    codice_genoma_predecessore CHAR(6),
    protein_ID_predecessore CHAR(6),
    query_cover PERCENT NOT NULL,
    percent_identity PERCENT NOT NULL,
    e_value FLOAT_POS NOT NULL,
    PRIMARY KEY (posizione, codice_genoma, protein_ID),
    FOREIGN KEY (codice_genoma) REFERENCES Analisi_genomica(codice)
        ON DELETE CASCADE ON UPDATE CASCADE,
    FOREIGN KEY (protein_ID) REFERENCES Gene(protein_ID)
        ON DELETE RESTRICT ON UPDATE CASCADE,
    FOREIGN KEY (posizione_predecessore, codice_genoma_predecessore, protein_ID_predecessore)
        REFERENCES Gene_genoma(posizione, codice_genoma, protein_ID)
        ON DELETE SET NULL ON UPDATE CASCADE
);

-- INDEX DEFINITIONS

-- Index for the Stazione_meteorologica table
CREATE INDEX idx_stazione_geom ON Stazione_meteorologica USING GIST (geom);

-- Index for the Sito table
CREATE INDEX idx_sito_geom ON Sito USING GIST (geom);
```

== Trigger implementation

```SQL
-- TRIGGERS

-- 1. Automatic update of the nearest weather station
-- 1.A: on insert or update
SET search_path TO legionella;

CREATE OR REPLACE FUNCTION update_stazione_meteorologica() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
BEGIN
    UPDATE Sito
    SET latitudine_stazione_meteorologica = stazione.latitudine,
        longitudine_stazione_meteorologica = stazione.longitudine
    FROM (
        SELECT sito.latitudine AS sito_latitudine,
               sito.longitudine AS sito_longitudine,
               stazione.latitudine,
               stazione.longitudine,
               ST_Distance(sito.geom, stazione.geom) AS distance,
               ROW_NUMBER() OVER (
                   PARTITION BY sito.latitudine, sito.longitudine
                   ORDER BY ST_Distance(sito.geom, stazione.geom)
               ) AS rn
        FROM Sito sito
        JOIN Stazione_meteorologica stazione
          ON ST_DWithin(
                 geography(sito.geom),
                 geography(stazione.geom),
                 100000 -- 100 km
             )
    ) AS stazione
    WHERE Sito.latitudine = stazione.sito_latitudine
      AND Sito.longitudine = stazione.sito_longitudine
      AND rn = 1;
    RETURN NULL;
END;
$$;
-- ROW_NUMBER is used to find the nearest weather station for each site;
-- PARTITION BY groups the rows belonging to the same site (same latitude and longitude)

CREATE TRIGGER update_stazione_meteorologica_trigger
AFTER INSERT OR UPDATE ON Stazione_meteorologica
FOR EACH ROW EXECUTE FUNCTION update_stazione_meteorologica();

-- 1.B: on delete
CREATE OR REPLACE FUNCTION update_stazione_meteorologica_on_delete() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
BEGIN
    -- Check that more than one weather station exists
    IF (SELECT COUNT(*) FROM Stazione_meteorologica) = 1 THEN
        RAISE EXCEPTION 'It is not possible to delete all the weather stations.';
    END IF;
    -- Find the nearest weather station for each site that referenced the deleted station
    UPDATE Sito
    SET latitudine_stazione_meteorologica = stazione.latitudine,
        longitudine_stazione_meteorologica = stazione.longitudine
    FROM (
        SELECT sito.latitudine AS sito_latitudine,
               sito.longitudine AS sito_longitudine,
               stazione.latitudine,
               stazione.longitudine,
               ST_Distance(sito.geom, stazione.geom) AS distance,
               ROW_NUMBER() OVER (
                   PARTITION BY sito.latitudine, sito.longitudine
                   ORDER BY ST_Distance(sito.geom, stazione.geom)
               ) AS rn
        FROM Sito sito
        JOIN Stazione_meteorologica stazione
          ON ST_DWithin(
                 geography(sito.geom),
                 geography(stazione.geom),
                 100000 -- 100 km
             )
        WHERE (stazione.latitudine != OLD.latitudine
               OR stazione.longitudine != OLD.longitudine)
    ) AS stazione
    WHERE Sito.latitudine = stazione.sito_latitudine
      AND Sito.longitudine = stazione.sito_longitudine
      AND rn = 1;
    RETURN NULL;
END;
$$;

CREATE TRIGGER update_stazione_meteorologica_on_delete_trigger
BEFORE DELETE ON Stazione_meteorologica
FOR EACH ROW EXECUTE FUNCTION update_stazione_meteorologica_on_delete();

-- 2. PCR analysis
CREATE OR REPLACE FUNCTION check_esito_PCR() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
BEGIN
    IF NEW.esito = TRUE AND NEW.µg_l = 0 THEN
        RAISE EXCEPTION 'The µg/l value must be greater than 0 when the outcome is positive.';
    ELSIF NEW.esito = FALSE AND NEW.µg_l > 0 THEN
        RAISE EXCEPTION 'The µg/l value must be 0 when the outcome is negative.';
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER check_esito_PCR
BEFORE INSERT OR UPDATE ON Analisi_PCR
FOR EACH ROW EXECUTE FUNCTION check_esito_PCR();

-- 3. Culture analysis
CREATE OR REPLACE FUNCTION check_esito_Colturale() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
BEGIN
    IF NEW.esito = TRUE AND NEW.ufc_l = 0 THEN
        RAISE EXCEPTION 'The ufc/l value must be greater than 0 when the outcome is positive.';
    ELSIF NEW.esito = FALSE AND NEW.ufc_l > 0 THEN
        RAISE EXCEPTION 'The ufc/l value must be 0 when the outcome is negative.';
    ELSIF NEW.esito = FALSE AND NEW.sierotipo IS NOT NULL THEN
        RAISE EXCEPTION 'The serotype cannot be specified when the outcome is negative.';
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER check_esito_Colturale
BEFORE INSERT OR UPDATE ON Analisi_culturale
FOR EACH ROW EXECUTE FUNCTION check_esito_Colturale();

-- 4. Site of the samples of an investigation
CREATE OR REPLACE FUNCTION check_campione_indagine() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
BEGIN
    IF EXISTS (
        SELECT 1
        FROM Indagine_ambientale ind
        JOIN Campione c ON ind.codice = c.codice_indagine
        WHERE ind.codice = NEW.codice_indagine
          AND (c.latitudine_sito != NEW.latitudine_sito
               OR c.longitudine_sito != NEW.longitudine_sito)
    ) THEN
        RAISE EXCEPTION 'Samples collected within the same investigation must be taken at the same site.';
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER check_campione_indagine
BEFORE INSERT OR UPDATE ON Campione
FOR EACH ROW EXECUTE FUNCTION check_campione_indagine();

-- 5. Sequentiality of genome genes
-- 5.A: a gene must not be linked to itself or to a different genome
CREATE OR REPLACE FUNCTION check_genoma() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
BEGIN
    IF NEW.codice_genoma != NEW.codice_genoma_predecessore THEN
        RAISE EXCEPTION 'A gene cannot be linked to a gene of a different genome.';
    END IF;
    IF NEW.posizione = NEW.posizione_predecessore
       AND NEW.codice_genoma = NEW.codice_genoma_predecessore
       AND NEW.protein_ID = NEW.protein_ID_predecessore THEN
        RAISE EXCEPTION 'A gene cannot be linked to itself.';
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER check_genoma
BEFORE INSERT OR UPDATE ON Gene_genoma
FOR EACH ROW EXECUTE FUNCTION check_genoma();

-- 5.B: the predecessor gene must be correct
CREATE OR REPLACE FUNCTION check_predecessore() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
BEGIN
    IF EXISTS (
        SELECT 1 FROM Gene_genoma
        WHERE codice_genoma = NEW.codice_genoma_predecessore
          AND posizione > NEW.posizione_predecessore
          AND posizione < NEW.posizione
    ) THEN
        RAISE EXCEPTION 'The predecessor gene is not correct: a gene exists whose position lies between the position of the gene and that of its predecessor.';
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER check_predecessore
BEFORE INSERT OR UPDATE ON Gene_genoma
FOR EACH ROW EXECUTE FUNCTION check_predecessore();
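-- Illustrative note (hypothetical data): with genes of genome 'GEN001' at
-- positions 10, 20 and 30, an attempt to register the gene at position 30
-- with the gene at position 10 as its predecessor is rejected by
-- check_predecessore, because the gene at position 20 satisfies
-- posizione > 10 AND posizione < 30, so the EXISTS test succeeds and the
-- exception is raised.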
-- 6. Delete an investigation when no samples remain associated with it
CREATE OR REPLACE FUNCTION delete_indagine() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
BEGIN
    IF NOT EXISTS (
        SELECT 1 FROM Campione
        WHERE codice_indagine = OLD.codice_indagine
    ) THEN
        DELETE FROM Indagine_ambientale WHERE codice = OLD.codice_indagine;
    END IF;
    RETURN NULL;
END;
$$;

CREATE TRIGGER delete_indagine
AFTER DELETE ON Campione
FOR EACH ROW EXECUTE FUNCTION delete_indagine();

-- 7. Delete a requester or a follow-up when no investigations remain associated with them
CREATE OR REPLACE FUNCTION delete_richiedente_follow_up() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
BEGIN
    IF OLD.codice_Richiedente IS NOT NULL AND NOT EXISTS (
        SELECT 1 FROM Indagine_ambientale
        WHERE codice_Richiedente = OLD.codice_Richiedente
    ) THEN
        DELETE FROM Richiedente WHERE codice = OLD.codice_Richiedente;
    END IF;
    IF OLD.codice_FollowUp IS NOT NULL AND NOT EXISTS (
        SELECT 1 FROM Indagine_ambientale
        WHERE codice_FollowUp = OLD.codice_FollowUp
    ) THEN
        DELETE FROM FollowUp_clinico WHERE codice = OLD.codice_FollowUp;
    END IF;
    RETURN NULL;
END;
$$;

CREATE TRIGGER delete_richiedente_follow_up
AFTER DELETE ON Indagine_ambientale
FOR EACH ROW EXECUTE FUNCTION delete_richiedente_follow_up();
```

== Functions for insert and update operations

```SQL
-- 1. Insertion of a weather station
CREATE OR REPLACE FUNCTION insert_stazione_meteorologica(
    latitudine FLOAT,
    longitudine FLOAT,
    via_piazza VARCHAR(25),
    numero_civico INTEGER,
    CAP INTEGER,
    città VARCHAR(25)
) RETURNS VOID
LANGUAGE plpgsql AS $$
DECLARE
    point GEOMETRY;
BEGIN
    -- Build a geometric point from latitude and longitude
    point := ST_SetSRID(ST_MakePoint(longitudine, latitudine), 4326);
    -- Insert the values into the Stazione_meteorologica table
    INSERT INTO Stazione_meteorologica
        (latitudine, longitudine, via_piazza, civico, CAP, città, geom)
    VALUES
        (latitudine, longitudine, via_piazza, numero_civico, CAP, città, point);
    RETURN;
END;
$$;

-- 2. Update of a weather station
CREATE OR REPLACE FUNCTION update_stazione_meteorologica(
    old_latitudine FLOAT,
    old_longitudine FLOAT,
    new_latitudine FLOAT,
    new_longitudine FLOAT,
    new_via_piazza VARCHAR(25),
    new_numero_civico INTEGER,
    new_CAP CAP,
    new_città VARCHAR(25)
) RETURNS VOID
LANGUAGE plpgsql AS $$
DECLARE
    point GEOMETRY;
BEGIN
    -- Build the geometric point from the new coordinates
    point := ST_SetSRID(ST_MakePoint(new_longitudine, new_latitudine), 4326);
    -- Update the weather station
    UPDATE Stazione_meteorologica
    SET latitudine = new_latitudine,
        longitudine = new_longitudine,
        via_piazza = new_via_piazza,
        civico = new_numero_civico,
        CAP = new_CAP,
        città = new_città,
        geom = point
    WHERE latitudine = old_latitudine AND longitudine = old_longitudine;
    RETURN;
END;
$$;

-- 3. Insertion of a site
CREATE OR REPLACE FUNCTION insert_sito(
    latitudine LATITUDINE,
    longitudine LONGITUDINE,
    via_piazza VARCHAR(25),
    numero_civico INTEGER,
    CAP CAP,
    città VARCHAR(25),
    nome VARCHAR(25),
    categoria CATEGORIA,
    materiale_tubature VARCHAR(25),
    cloro BOOLEAN,
    anno_ultima_ristrutturazione DATE,
    caldaia VARCHAR(25)
) RETURNS VOID
LANGUAGE plpgsql AS $$
DECLARE
    point GEOMETRY;
    latitudine_stazione_meteorologica LATITUDINE;
    longitudine_stazione_meteorologica LONGITUDINE;
BEGIN
    -- Build a geometric point for the site
    point := ST_SetSRID(ST_MakePoint(longitudine, latitudine), 4326);

    SELECT stazione.latitudine, stazione.longitudine
    INTO latitudine_stazione_meteorologica, longitudine_stazione_meteorologica
    FROM Stazione_meteorologica stazione
    WHERE ST_DWithin(
              geography(point),
              geography(stazione.geom),
              100000 -- 100 km
          )
    ORDER BY ST_Distance(point, stazione.geom)
    LIMIT 1;

    -- Insert the values into the Sito table
    INSERT INTO Sito
        (latitudine, longitudine, latitudine_stazione_meteorologica,
         longitudine_stazione_meteorologica, via_piazza, civico, CAP, città,
         nome, categoria, materiale_tubature, cloro,
         anno_ultima_ristrutturazione, caldaia, geom)
    VALUES
        (latitudine, longitudine, latitudine_stazione_meteorologica,
         longitudine_stazione_meteorologica, via_piazza, numero_civico, CAP, città,
         nome, categoria, materiale_tubature, cloro,
         anno_ultima_ristrutturazione, caldaia, point);
    RETURN;
END;
$$;

-- 4. Update of a site
CREATE OR REPLACE FUNCTION update_sito(
    old_latitudine LATITUDINE,
    old_longitudine LONGITUDINE,
    new_latitudine LATITUDINE,
    new_longitudine LONGITUDINE,
    new_via_piazza VARCHAR(25),
    new_numero_civico INTEGER,
    new_CAP CAP,
    new_città VARCHAR(25),
    new_nome VARCHAR(25),
    new_categoria CATEGORIA,
    new_materiale_tubature VARCHAR(25),
    new_cloro BOOLEAN,
    new_anno_ultima_ristrutturazione DATE,
    new_caldaia VARCHAR(25)
) RETURNS VOID
LANGUAGE plpgsql AS $$
DECLARE
    point GEOMETRY;
    new_latitudine_stazione_meteorologica LATITUDINE;
    new_longitudine_stazione_meteorologica LONGITUDINE;
BEGIN
    point := ST_SetSRID(ST_MakePoint(new_longitudine, new_latitudine), 4326);

    SELECT stazione.latitudine, stazione.longitudine
    INTO new_latitudine_stazione_meteorologica, new_longitudine_stazione_meteorologica
    FROM Stazione_meteorologica stazione
    WHERE ST_DWithin(
              geography(point),
              geography(stazione.geom),
              100000 -- 100 km
          )
    ORDER BY ST_Distance(point, stazione.geom)
    LIMIT 1;

    UPDATE Sito
    SET latitudine = new_latitudine,
        longitudine = new_longitudine,
        latitudine_stazione_meteorologica = new_latitudine_stazione_meteorologica,
        longitudine_stazione_meteorologica = new_longitudine_stazione_meteorologica,
        via_piazza = new_via_piazza,
        civico = new_numero_civico,
        CAP = new_CAP,
        città = new_città,
        nome = new_nome,
        categoria = new_categoria,
        materiale_tubature = new_materiale_tubature,
        cloro = new_cloro,
        anno_ultima_ristrutturazione = new_anno_ultima_ristrutturazione,
        caldaia = new_caldaia,
        geom = point
    WHERE latitudine = old_latitudine AND longitudine = old_longitudine;
    RETURN;
END;
$$;
```

#pagebreak()
#bibliography("bibliografia.bib", title: "Bibliography", style: "ieee")
#pagebreak()

= Glossary

#annotation[To facilitate the understanding of this work, the following glossary collects the definitions of the technical terms used.]
#set par(justify: false)
#figure(
  supplement: none,
  table(
    columns: (135pt, auto),
    inset: 8pt,
    [*Term*], [*Definition*],
    [Aerosol], [Particles suspended in the air, containing water droplets, which can carry the Legionella bacterium.],
    [Analysis], [Laboratory examination performed on water samples collected during an environmental investigation.],
    [Culture analysis], [Laboratory examination that isolates and identifies the colony-forming units (UFC_L) of Legionella in a water sample.],
    [Attribute], [Concept describing a property or component of an entity or relationship (_i.e._ field).],
    [Composite attribute], [Attribute with a complex structure, made up of several sub-attributes.],
    [Multivalued attribute], [Attribute that, for each instance of the entity it belongs to, can take more than one value.],
    [Sample], [Small quantity of water to be examined.],
    [Category], [Classification of a site, or more specifically of a building, according to its intended use, such as hospital, thermal or hotel.],
    [Primary key], [Attribute or set of attributes that uniquely identifies each instance of an entity.],
  ),
  caption: "Glossary",
) <dictionary>
#figure(
  supplement: none,
  table(
    columns: (135pt, auto),
    inset: 8pt,
    [*Term*], [*Definition*],
    [Plumbing component], [Component of a plumbing system from which a water sample is taken, such as a tap or the filter of an air-conditioning system.],
    [Entity], [With reference to the E-R schema, describes a class of objects with autonomous existence and particular significance in the context at hand (_i.e._ table).],
    [Weak entity], [Entity that has no primary key of its own and depends on another entity for its identification.],
    [Generalization], [With reference to the E-R model, a relationship that links a parent entity to one or more child entities, which inherit the parent's properties (_i.e._ specialization).],
    [Clinical follow-up], [Environmental investigation, or investigations, carried out following one or more cases of legionellosis. Such investigations are not limited to the patient's home, but may extend to all the places visited by the patient in the ten days preceding the onset of symptoms. The decision to carry them out is left to the competent territorial service, which «must assess case by case whether or not to carry out environmental sampling, on the basis of the risk assessment»#footnote[#cite( <LineeGuida>, form:"normal" ), «Linee guida per la prevenzione ed il controllo della legionellosi», p. 30].],
    [Environmental investigation], [Collection of samples taken at a specific site on a specific date.],
  ),
  caption: "Glossary",
) <dictionary_2>
#figure(
  supplement: none,
  table(
    columns: (135pt, auto),
    inset: 8pt,
    [*Term*], [*Definition*],
    [PCR], [Polymerase Chain Reaction, «a laboratory technique for rapidly producing (amplifying) millions or billions of copies of a specific DNA segment, which can then be studied in greater detail.
La PCR prevede l'uso di brevi frammenti di DNA sintetico chiamati primer per selezionare un segmento del genoma da amplificare, e quindi più cicli di sintesi del DNA per amplificare quel segmento»#footnote[#cite(<PCR>, form: "full")].], [PCR Qualitativa], [Esame di laboratorio che fornisce un'informazione dicotomica sulla presenza di Legionella in un campione.], [PCR Quantitativa], [Esame di laboratorio rapido che rileva e quantifica il DNA o l'RNA di Legionella presenti in un campione (_i.e._ Real-Time PCR o qPCR).], [Relazione],[In riferimento allo schema E-R, legame che rappresenta la connessione logica e significativa per la realtà modellata, tra due o più entità.], [Relazione Ricorsiva],[Relazione che associa una entità a se stessa (_i.e._ relazione autoreferenziale).], [Richiedente], [Ente o istituzione che richiede un'indagine ambientale.], [Sierotipo], [Livello di classificazione di batteri di Legionella inferiore a quello specie. Il laboratorio ARPA distingue tre sierotipi: sierotipo 1, sierotipo 2-15 e sierotipo sp (_i.e._ sierogruppo).] ), caption: "Glossario", ) #figure( supplement: none, table( columns: (135pt, auto), inset: 8pt, [*Termine*], [*Definizione*], [Sito], [Edificio presso il quale è condotta un'indagine ambientale.], [UFC_L], [Unità formanti colonie per litro: ovvero unità di misura utilizzata per indicare la concentrazione di Legionella in un campione d'acqua destinato all'analisi colturale.], [UG_L], [Microgrammi per litro: ovvero unità di misura utilizzata per determinare la concentrazione di Legionella in un campione d'acqua mediante PCR quantitativa.], ), caption: "Glossario", )
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/tlacuache-thesis-fc-unam/0.1.1/README.md
markdown
Apache License 2.0
# tlacuache-thesis-fc-unam template

Este es un template para tesis de la facultad de ciencias, en la Universidad Nacional Autónoma de México (UNAM).

This is a thesis template for the Science Faculty at Universidad Nacional Autónoma de México (UNAM) based on my thesis.

## Uso/Usage

En la aplicación web de Typst da click en "Start from template" y busca `tlacuache-thesis-fc-unam`.

In the Typst web app simply click "Start from template" on the dashboard and search for `tlacuache-thesis-fc-unam`.

Si estás usando la versión de terminal usa el comando:

From the CLI you can initialize the project with the command:

```bash
typst init @preview/tlacuache-thesis-fc-unam:0.1.1
```

## Configuración/Configuration

Para configurar tu tesis puedes hacerlo con estas líneas al inicio de tu archivo principal.

To set the thesis template, you can use the following lines in your main file.

```typ
#import "@preview/tlacuache-thesis-fc-unam:0.1.1": thesis

#show: thesis.with(
  titulo: [Titulo],
  grado: [Licenciatura],
  autor: [Autor],
  asesor: [Asesor],
  lugar: [Ciudad de México, México],
  agno: [#datetime.today().year()],
  bibliography: bibliography("references.bib"),
)

// Tu tesis va aquí
```

También puedes utilizar estas líneas para crear capítulos con bibliografía, si deseas crear un pdf solamente para el capítulo.

You could also create a pdf for just a chapter with bibliography, by using the following lines.

```typ
#import "@preview/tlacuache-thesis-fc-unam:0.1.1": chapter

// completamente opcional cargar la bibliografía al compilar el capítulo
#show: chapter.with(bibliography: bibliography("references.bib"))

// Tu capítulo va aquí
```

Si quieres crear pdf aún más cortos, puedes utilizar estas líneas para crear un pdf solo para la sección de tu capítulo.

You could even create a pdf for just a section of a chapter.
```typ
#import "@preview/tlacuache-thesis-fc-unam:0.1.1": section

// completamente opcional cargar la bibliografía al compilar la sección
#show: section.with(bibliography: bibliography("references.bib"))

// Tu sección va aquí
```
https://github.com/andreasKroepelin/typst-notebook
https://raw.githubusercontent.com/andreasKroepelin/typst-notebook/main/examples/simple.typ
typst
MIT License
#import "../template-notebook.typ": notebook

#set text(font: "Inria Sans")

#show: notebook.with(
  title: [My cool notebook],
  author: [<NAME>],
  tags: (
    tag-1: orange,
    tag-2: aqua,
  )
)

= Note 1

Something about @tag-1.

#lorem(10)

@tag-1 @tag-2

= Note 2

Refer to @note-1.

#lorem(10)

@tag-2

= Note 3

TODO do something

DONE something else

TODO do another thing
https://github.com/bigskysoftware/hypermedia-systems-book
https://raw.githubusercontent.com/bigskysoftware/hypermedia-systems-book/main/ch03-a-web-1-0-application.typ
typst
Other
#import "lib/definitions.typ": *

== A Web 1.0 Application

To start our journey into Hypermedia-Driven Applications, we are going to create a simple contact management web application called Contact.app. We will start with a basic, "Web 1.0-style" Multi-Page Application (MPA), in the grand CRUD (Create, Read, Update, Delete) tradition. It will not be the best contact management application in the world, but it will be simple and it will do its job.

This application will also be easy to incrementally improve in the coming chapters by utilizing the hypermedia-oriented library htmx. By the time we are finished building and enhancing the application, over the next few chapters, it will have some very slick features that most developers today would assume require the use of a SPA JavaScript framework.

=== Picking A "Web Stack" <_picking_a_web_stack>

In order to demonstrate how web 1.0 applications work, we need to pick a server-side language and a library for handling HTTP requests. Colloquially, this is called our "Server-Side" or "Web" stack, and there are literally hundreds of options to choose from, many with passionate followings. You probably have a web framework that you prefer and, while we wish we could write this book for every possible stack out there, in the interest of simplicity (and sanity) we can only pick one.

For this book we are going to use the following stack:

- #link("https://www.python.org/")[Python] as our programming language.
- #link("https://palletsprojects.com/p/flask/")[Flask] as our web framework, allowing us to connect HTTP requests to Python logic.
- #link("https://palletsprojects.com/p/jinja/")[Jinja2] for our server-side templating language, allowing us to render HTML responses using a familiar and intuitive syntax.

Why this particular stack?
Python is the most popular programming language in the world, as of this writing, according to the #link("https://www.tiobe.com/tiobe-index/")[TIOBE index], a respected measure of programming language popularity. More importantly, Python is easy to read even if you aren’t familiar with it. We chose the Flask web framework because it is simple and does not impose a lot of structure on top of the basics of HTTP request handling. This bare-bones approach is a good match for our needs: in other cases you might consider a more full-featured Python framework, such as #link("https://www.djangoproject.com/")[Django], which supplies much more functionality out of the box than Flask does. By using Flask for our book, we will be able to keep our code focused on _hypermedia exchanges_. We picked Jinja2 templates because they are the default templating language for Flask. They are simple enough and similar enough to most other server-side templating languages that most people who are familiar with any server-side (or client-side) templating library should be able to understand them quickly and easily. Even if this combination of technologies isn’t your preferred stack, please, keep reading: you will learn quite a bit from the patterns we introduce in the coming chapters and it shouldn’t be hard to map them into your preferred language and frameworks. With this stack we will be rendering HTML _on the server-side_ to return to clients, rather than producing JSON. This is the traditional approach to building web applications. However, with the rise of SPAs, this approach is not as widely used a technique as it once was. Today, as people are rediscovering this style of web applications, the term "Server-Side Rendering" or SSR is emerging as the way that people talk about it. This contrasts with "Client-Side Rendering", that is, rendering templates in the browser with data retrieved in JSON form from the server, as is common in SPA libraries. 
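The practical difference between the two rendering styles is simply what the response body contains. A tiny illustrative sketch (not Contact.app code; the sample data here is made up):

```python
import json

contact = {"first": "Joe", "last": "Blow"}

# Server-Side Rendering: the response is ready-to-display HTML.
def render_html(c):
    return f"<tr><td>{c['first']}</td><td>{c['last']}</td></tr>"

# Client-Side Rendering: the response is raw JSON, and the browser
# must run JavaScript to turn it into markup.
def render_json(c):
    return json.dumps(c)
```

The server does the same amount of work either way; what changes is whether the client receives a finished representation or raw data it must assemble itself.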
In Contact.app we will intentionally keep things as simple as possible to maximize the teaching value of our code: it won’t be perfectly factored code, but it will be easy to follow for readers, even if they have little Python experience, and it should be easy to translate both the application and the techniques demonstrated into your preferred programming environment.

=== Python <_python>

Since this book is for learning how to use hypermedia effectively, we’ll just briefly introduce the various technologies we use _around_ that hypermedia. This has some obvious drawbacks: if you aren’t comfortable with Python, for example, some example Python code in the book may be a bit confusing or mysterious at first. If you feel like you need a quick introduction to the language before diving into the code, we recommend the following books and websites:

- #link("https://nostarch.com/python-crash-course-3rd-edition")[Python Crash Course] from No Starch Press
- #link("https://learnpythonthehardway.org/python3/")[Learn Python The Hard Way] by <NAME>
- #link("https://www.py4e.com/")[Python For Everybody] by Dr. <NAME>

We think most web developers, even developers who are unfamiliar with Python, should be able to follow along with our examples. Most of the authors of this book hadn’t written much Python before writing it, and we got the hang of it pretty quickly.

=== Introducing Flask: Our First Route <_introducing_flask_our_first_route>

#index[Flask][about]
Flask is a simple but flexible web framework for Python. We’ll ease into it by touching on its core elements.

#index[Flask][routes]
#index[Flask][handlers]
#index[Flask][decorators]
A Flask application consists of a series of _routes_ tied to functions that execute when an HTTP request to a given path is made. It uses a Python feature called "decorators" to declare the route that will be handled, which is then followed by a function to handle requests to that route.
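If decorators are new to you, the core idea can be sketched in a few lines: a routing decorator simply records the decorated function in a lookup table keyed by path. This is a toy illustration of the mechanism, not Flask's actual implementation:

```python
routes = {}

def route(path):
    def register(handler):
        routes[path] = handler  # remember which function handles this path
        return handler
    return register

@route("/ping")
def ping():
    return "pong"

# Dispatching an incoming request is then just a dictionary lookup:
handler = routes["/ping"]
handler()  # returns "pong"
```

Flask's real router does far more (path parameters, HTTP methods, and so on), but the register-then-look-up shape is the same.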
We’ll use the term "handler" to refer to the functions associated with a route.

Let’s create our first route definition, a simple "Hello World" route. In the following Python code you will see the `@app` symbol. This is the flask decorator that allows us to set up our routes. Don’t worry too much about how decorators work in Python, just know that this feature allows us to map a given _path_ to a particular function (i.e., handler). The Flask application, when started, will take HTTP requests and look up the matching handler and invoke it.

#figure(caption: [A simple "Hello World" route],
```python
@app.route("/") <1>
def index(): <2>
    return "Hello World!" <3>
```)
1. Establishes we are mapping the `/` path as a route.
2. The next method is the handler for that route.
3. Returns the string "Hello World!" to the client.

The `route()` method on the Flask decorator takes an argument: the path you wish the route to handle. Here we pass in the root or `/` path, as a string, to handle requests to the root path.

This route declaration is then followed by a simple function definition, `index()`. In Python, decorators invoked in this manner apply to the function immediately following them. Therefore, this function becomes the "handler" for that route, and will be executed when an HTTP request to the given path is made.

Note that the name of the function doesn’t matter, we can call it whatever we’d like so long as it is unique. In this case we chose `index()` because that fits with the route we are handling: the root "index" of the web application.

So we have the `index()` function immediately following our route definition for the root, and this will become the handler for the root URL in our web application.

The handler in this case is dead simple, it just returns a string, "Hello World!", to the client. This isn’t hypermedia yet, but as we can see in @fig-helloworld, a browser will render it just fine.
#figure([#image("images/figure_2-1_hello_world.png")], caption: [
  Hello World!
])<fig-helloworld>

Great, there’s our first step into Flask, showing the core technique we are going to use to respond to HTTP requests: routes mapped to handlers.

For Contact.app, rather than rendering "Hello World!" at the root path, we are going to do something a little fancy: we are going to redirect to another path, the `/contacts` path. Redirects are a feature of HTTP that allow you to redirect a client to another location with an HTTP response.

#index[Flask][redirect]
We are going to display a list of contacts as our root page, and, arguably, redirecting to the `/contacts` path to display this information is a bit more consistent with the notion of resources with REST. This is a judgement call on our part, and not something we feel is too important, but it makes sense in terms of routes we will set up later in the application.

To change our "Hello World" route to a redirect, we only need to change one line of code:

#figure(caption: [Changing "Hello World" to a redirect],
```python
@app.route("/")
def index():
    return redirect("/contacts") <1>
```)
1. Update to a call to `redirect()`

Now the `index()` function returns the result of the Flask-supplied `redirect()` function with the path we’ve supplied. In this case the path is `/contacts`, passed in as a string argument. Now, if you navigate to the root path, `/`, our Flask application will forward you on to the `/contacts` path.

=== Contact.app Functionality <_contact_app_functionality>

#index[Contact.app][specs]
Now that we have some understanding of how to define routes, let’s get down to specifying and then implementing our web application. What will Contact.app do?
Initially, it will allow users to:

- View a list of contacts, including first name, last name, phone and email address
- Search the contacts
- Add a new contact
- View the details of a contact
- Edit the details of a contact
- Delete a contact

So, as you can see, Contact.app is a #indexed[CRUD] application, the sort of application that is perfect for an old-school web 1.0 approach.

Note that the source code of Contact.app is available on #link("https://github.com/bigskysoftware/contact-app")[GitHub].

==== Showing A Searchable List Of Contacts <_showing_a_searchable_list_of_contacts>

Let’s add our first real bit of functionality: the ability to show all the contacts in our app in a list (really, in a table).

This functionality is going to be found at the `/contacts` path, which is the path our previous route is redirecting to.

We will use Flask to route the `/contacts` path to a handler function, `contacts()`. This function will do one of two things:

- If there is a search term found in the request, it will filter down to only contacts matching that term
- If not, it will simply list all contacts

This is a common approach in web 1.0 style applications: the same URL that displays all instances of some resource also serves as the search results page for those resources. Taking this approach makes it easy to reuse the list display that is common to both types of request.

Here is what the code looks like for this handler:

#figure(caption: [A handler for server-side search],
```python
@app.route("/contacts")
def contacts():
    search = request.args.get("q") <1>
    if search is not None:
        contacts_set = Contact.search(search) <2>
    else:
        contacts_set = Contact.all() <3>
    return render_template("index.html", contacts=contacts_set) <4>
```)
1. Look for the query parameter named `q`, which stands for "query."
2. If the parameter exists, call the `Contact.search()` function with it.
3. If not, call the `Contact.all()` function.
4.
Pass the result to the `index.html` template to render to the client.

#index[query strings]
We see the same sort of routing code we saw in our first example, but we have a more elaborate handler function. First, we check to see if a search query parameter named `q` is part of the request.

/ Query Strings: #[
  A "query string" is part of the URL specification. Here is an example URL with a query string in it: `https://example.com/contacts?q=joe`. The query string is everything after the `?`, and has a name-value pair format. In this URL, the query parameter `q` is set to the string value `joe`. In plain HTML, a query string can be included in a request either by being hardcoded in an anchor tag or, more dynamically, by using a form tag with a `GET` request.
]

To return to our Flask route, if a query parameter named `q` is found, we call out to the `search()` method on a `Contact` model object to do the actual contact search and return all the matching contacts.

If the query parameter is _not_ found, we simply get all contacts by invoking the `all()` method on the `Contact` object.

Finally, we render a template, `index.html`, that displays the given contacts, passing in the results of whichever of these two functions we end up calling.

#sidebar[A Note On The Contact Class][
  The `Contact` Python class we’re using is the "domain model" or just "model" class for our application, providing the "business logic" around the management of Contacts.

  #index[Contact.app][model]
  It could be working with a database (it isn’t) or a simple flat file (it is), but we’re going to skip over the internal details of the model. Think of it as a "normal" domain model class, with methods on it that act in a "normal" manner. We will treat `Contact` as a _resource_, and focus on how to effectively provide hypermedia representations of that resource to clients.
]

===== The list & search templates <_the_list_search_templates>

Now that we have our handler logic written, we’ll create a template to render HTML in our response to the client.

At a high level, our HTML response needs to have the following elements:

- A list of any matching or all contacts.
- A search box where a user may type and submit search terms.
- A bit of surrounding "chrome": a header and footer for the website that will be the same regardless of the page you are on.

#index[Templates]
#index[Jinja2][about]
We are using the Jinja2 templating language, which has the following features:

- We can use double-curly braces, `{{ }}`, to embed expression values in the template.
- We can use curly-percents, `{% %}`, for directives, like iteration or including other content.

Beyond this basic syntax, Jinja2 is very similar to other templating languages used to generate content, and should be easy to follow for most web developers.

Let’s look at the first few lines of code in the `index.html` template:

#figure(caption: [Start of index.html],
```html
{% extends 'layout.html' %} <1>

{% block content %} <2>
  <form action="/contacts" method="get" class="tool-bar"> <3>
    <label for="search">Search Term</label>
    <input id="search" type="search" name="q"
      value="{{ request.args.get('q') or '' }}" /> <4>
    <input type="submit" value="Search"/>
  </form>
```)
1. Set the layout template for this template.
2. Delimit the content to be inserted into the layout.
3. Create a search form that will issue an HTTP `GET` to `/contacts`.
4. Create an input for a user to type search queries.

The first line of code references a base template, `layout.html`, with the `extends` directive. This layout template provides the layout for the page (again, sometimes called "the chrome"): it wraps the template content in an `<html>` tag, imports any necessary CSS and JavaScript in a `<head>` element, places a `<body>` tag around the main content and so forth.
All the common content wrapped around the "normal" content for the entire application is located in this file. The next line of code declares the `content` section of this template. This content block is used by the `layout.html` template to inject the content of `index.html` within its HTML. Next we have our first bit of actual HTML, rather than just Jinja directives. We have a simple HTML form that allows you to search contacts by issuing a `GET` request to the `/contacts` path. The form itself contains a label and an input with the name "q." This input’s value will be submitted with the `GET` request to the `/contacts` path, as a query string (since this is a `GET` request.) Note that the value of this input is set to the Jinja expression `{{ request.args.get('q') or '' }}`. This expression is evaluated by Jinja and will insert the request value of "q" as the input’s value, if it exists. This will "preserve" the search value when a user does a search, so that when the results of a search are rendered the text input contains the term that was searched for. This makes for a better user experience since the user can see exactly what the current results match, rather than having a blank text box at the top of the screen. Finally, we have a submit-type input. This will render as a button and, when it is clicked, it will trigger the form to issue an HTTP request. #index[Contact.app][table] This search interface forms the top of our contact page. Following it is a table of contacts, either all contacts or the contacts that match the search, if a search was done. 
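The `or ''` in that template expression is doing real work: when no search has been performed, `request.args.get('q')` returns `None`, and in Python `None or ''` evaluates to the empty string, so the input renders empty instead of showing the literal text "None". In plain Python, independent of Flask or Jinja2:

```python
def input_value(q):
    # Mirrors the template expression {{ request.args.get('q') or '' }}
    return q or ""

input_value("joe")  # returns "joe"
input_value(None)   # returns "" rather than the string "None"
```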
Here is what the template code for the contact table looks like:

#figure(caption: [The contacts table],
```html
<table>
  <thead>
  <tr>
    <th>First <th>Last <th>Phone <th>Email <th/> <1>
  </tr>
  </thead>
  <tbody>
  {% for contact in contacts %} <2>
    <tr>
      <td>{{ contact.first }}</td>
      <td>{{ contact.last }}</td>
      <td>{{ contact.phone }}</td>
      <td>{{ contact.email }}</td> <3>
      <td><a href="/contacts/{{ contact.id }}/edit">Edit</a>
        <a href="/contacts/{{ contact.id }}">View</a></td> <4>
    </tr>
  {% endfor %}
  </tbody>
</table>
```)
1. Output some headers for our table.
2. Iterate over the contacts that were passed in to the template.
3. Output the values of the current contact, first name, last name, etc.
4. An "operations" column, with links to edit or view the contact details.

This is the core of the page: we construct a table with appropriate headers matching the data we are going to show for each contact. We iterate over the contacts that were passed into the template by the handler method using the `for` loop directive in Jinja2. We then construct a series of rows, one for each contact, where we render the first and last name, phone and email of the contact as table cells in the row.

Additionally, we have a table cell that includes two links:

- A link to the "Edit" page for the contact, located at `/contacts/{{ contact.id }}/edit` (e.g., for the contact with id 42, the edit link will point to `/contacts/42/edit`)
- A link to the "View" page for the contact `/contacts/{{ contact.id }}` (using our previous contact example, the view page would be at `/contacts/42`)

Finally, we have a bit of end-matter: a link to add a new contact and a Jinja2 directive to end the `content` block:

#figure(caption: [The "add contact" link],
```html
  <p>
    <a href="/contacts/new">Add Contact</a> <1>
  </p>
{% endblock %} <2>
```)
1. Link to the page that allows you to create a new contact.
2. The closing element of the `content` block.

And that’s our complete template.
Using this simple server-side template, in combination with our handler method, we can respond with an HTML _representation_ of all the contacts requested. So far, so hypermedia.

@fig-contactapp is what the template looks like, rendered with a bit of contact information.

#figure(image("images/figure_2-2_table_etc.png"), caption: [Contact.app])<fig-contactapp>

Now, our application won’t win any design awards at this point, but notice that our template, when rendered, provides all the functionality necessary to see all the contacts and search them, and also provides links to edit them, view details of them or even create a new one.

And it does all this without the client (that is, the browser) knowing a thing about what contacts are or how to work with them. Everything is encoded _in_ the hypermedia. A web browser accessing this application just knows how to issue HTTP requests and then render HTML, nothing more about the specifics of our application’s endpoints or underlying domain model.

As simple as our application is at this point, it is thoroughly RESTful.

==== Adding A New Contact <_adding_a_new_contact>

The next bit of functionality that we will add to our application is the ability to add new contacts. To do this, we are going to need to handle that `/contacts/new` URL referenced in the "Add Contact" link above. Note that when a user clicks on that link, the browser will issue a `GET` request to the `/contacts/new` URL.

All the other routes we have so far use `GET` as well, but we are actually going to use two different HTTP methods for this bit of functionality: an HTTP `GET` to render a form for adding a new contact, and then an HTTP `POST` _to the same path_ to actually create the contact, so we are going to be explicit about the HTTP method we want to handle when we declare this route.
Here is the code:

#figure(caption: [The "new contact" GET route],
```python
@app.route("/contacts/new", methods=['GET']) <1>
def contacts_new_get():
    return render_template("new.html", contact=Contact()) <2>
```)
1. Declare a route, explicitly handling `GET` requests to this path.
2. Render the `new.html` template, passing in a new contact object.

Simple enough. We just render a `new.html` template with a new Contact. (`Contact()` is how you construct a new instance of the `Contact` class in Python, if you aren’t familiar with it.)

While the handler code for this route is very simple, the `new.html` template is more complicated.

#sidebar[][For the remaining templates we are going to omit the layout directive and the content block declaration, but you can assume they are the same unless we say otherwise. This will let us focus on the "meat" of the template.]

If you are familiar with HTML you are probably expecting a form element here, and you will not be disappointed. We are going to use the standard form hypermedia control for collecting contact information and submitting it to the server.

Here is what our HTML looks like:

#figure(caption: [The "new contact" form],
```html
<form action="/contacts/new" method="post"> <1>
  <fieldset>
    <legend>Contact Values</legend>
    <p>
      <label for="email">Email</label> <2>
      <input name="email" id="email" type="email"
        placeholder="Email" value="{{ contact.email or '' }}"> <3>
      <span class="error">
        {{ contact.errors['email'] }} <4>
      </span>
    </p>
```)
1. A form that submits to the `/contacts/new` path, using an HTTP `POST`.
2. A label for the first form input.
3. The first form input, of type email.
4. Any error messages associated with this field.

In the first line of code we create a form that will submit back _to the same path_ that we are handling: `/contacts/new`. Rather than issuing an HTTP `GET` to this path, however, we will issue an HTTP `POST` to it.
Using a `POST` in this manner will signal to the server that we want to create a new Contact, rather than get a form for creating one.

We then have a label (always a good practice!) and an input that captures the email of the contact being created. The name of the input is `email` and, when this form is submitted, the value of this input will be submitted in the `POST` request, associated with the `email` key.

Next we have inputs for the other fields for contacts:

#figure(caption: [Inputs and labels for the "new contact" form],
```html
<p>
  <label for="first_name">First Name</label>
  <input name="first_name" id="first_name" type="text"
    placeholder="<NAME>" value="{{ contact.first or '' }}">
  <span class="error">{{ contact.errors['first'] }}</span>
</p>
<p>
  <label for="last_name">Last Name</label>
  <input name="last_name" id="last_name" type="text"
    placeholder="<NAME>" value="{{ contact.last or '' }}">
  <span class="error">{{ contact.errors['last'] }}</span>
</p>
<p>
  <label for="phone">Phone</label>
  <input name="phone" id="phone" type="text"
    placeholder="Phone" value="{{ contact.phone or '' }}">
  <span class="error">{{ contact.errors['phone'] }}</span>
</p>
```)

Finally, we have a button that will submit the form, the end of the form tag, and a link back to the main contacts table:

#figure(caption: [The submit button for the "new contact" form],
```html
    <button>Save</button>
  </fieldset>
</form>
<p>
  <a href="/contacts">Back</a>
</p>
```)

It is easy to miss in this straight-forward example: we are seeing the flexibility of hypermedia in action. If we add a new field, remove a field, or change the logic around how fields are validated or work with one another, this new state of affairs would be reflected in the new hypermedia representation given to users. A user would see the updated new form and be able to work with these new features, with no software update required.
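When the Save button is pressed, the browser serializes each input’s `name` and value into the request body using the `application/x-www-form-urlencoded` format. Python’s standard library can reproduce what that payload looks like (purely illustrative; the field values below are made up, and in the real application the browser does this encoding for us):

```python
from urllib.parse import urlencode

# What the browser sends in the body of the POST to /contacts/new:
payload = urlencode({
    "first_name": "Joe",
    "last_name": "Blow",
    "phone": "555-1234",
    "email": "joe@example.com",
})
# payload == "first_name=Joe&last_name=Blow&phone=555-1234&email=joe%40example.com"
```

On the server side, Flask parses this body back into the `request.form` mapping that the `POST` handler reads from.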
===== Handling the post to /contacts/new <_handling_the_post_to_contactsnew> The next step in our application is to handle the `POST` that this form makes to `/contacts/new`. To do so, we need to add another route to our application that handles the `/contacts/new` path. The new route will handle an HTTP `POST` method instead of an HTTP `GET`. We will use the submitted form values to attempt to create a new Contact. If we are successful in creating a Contact, we will redirect the user to the list of contacts and show a success message. If we aren’t successful, then we will render the new contact form again with whatever values the user entered and render error messages about what issues need to be fixed so that the user can correct them. Here is our new request handler: #figure(caption: [The "new contact" controller code], ```python @app.route("/contacts/new", methods=['POST']) def contacts_new(): c = Contact( None, request.form['first_name'], request.form['last_name'], request.form['phone'], request.form['email']) <1> if c.save(): <2> flash("Created New Contact!") return redirect("/contacts") <3> else: return render_template("new.html", contact=c) <4> ```) 1. We construct a new contact object with the values from the form. 2. We try to save it. 3. On success, "flash" a success message & redirect to the `/contacts` page. 4. On failure, re-render the form, showing any errors to the user. The logic in this handler is a bit more complex than other methods we have seen. The first thing we do is create a new Contact, again using the `Contact()` syntax in Python to construct the object. We pass in the values that the user submitted in the form by using the `request.form` object, a feature provided by Flask. This `request.form` allows us to access submitted form values in an easy and convenient way, by simply passing in the same name associated with the various inputs. We also pass in `None` as the first value to the `Contact` constructor. 
This is the "id" parameter, and by passing in `None` we are signaling that it is a new contact, which needs to have an ID generated for it. (Again, we are not going into the details of how this model object is implemented; our only concern is using it to generate hypermedia responses.)

Next, we call the `save()` method on the Contact object. This method returns `True` if the save is successful, and `False` if the save is unsuccessful (for example, a bad email was submitted by the user).

If we are able to save the contact (that is, there were no validation errors), we create a _flash_ message indicating success, and redirect the browser back to the list page. A "flash" is a common feature in web frameworks that allows you to store a message that will be available on the _next_ request, typically in a cookie or in a session store.

Finally, if we are unable to save the contact, we re-render the `new.html` template with the contact. This will show the same template as above, but the inputs will be filled in with the submitted values, and any errors associated with the fields will be rendered to give the user feedback on which validations failed.

#sidebar[The Post/Redirect/Get Pattern][
#index[Post/Redirect/Get (PRG)]
This handler implements a common strategy in web 1.0-style development called the #link("https://en.wikipedia.org/wiki/Post/Redirect/Get")[Post/Redirect/Get] or PRG pattern. By issuing an HTTP redirect once a contact has been created and forwarding the browser on to another location, we ensure that the `POST` does not end up in the browser's request cache.

This means that if the user accidentally (or intentionally) refreshes the page, the browser will not submit another `POST`, potentially creating another contact. Instead, it will issue the `GET` that we redirect to, which should be side-effect free.

We will use the PRG pattern in a few different places in this book.
]

OK, so we have our server-side logic set up to save contacts.
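The "flash" mechanism is easy to picture in plain Python. This is not Flask's actual implementation (Flask keeps flashes in the user's session, which by default lives in a signed cookie); it is just a minimal sketch of the "available on the next request, then gone" behavior described above:

#figure(caption: [A minimal sketch of the flash mechanism (not Flask's real code)],
```python
class Session:
    def __init__(self):
        self._flashes = []

    def flash(self, message):
        # Called during the POST handler.
        self._flashes.append(message)

    def get_flashed_messages(self):
        # Called while rendering the *next* page (the GET after the
        # redirect); reading the messages also clears them.
        messages, self._flashes = self._flashes, []
        return messages

session = Session()
session.flash("Created New Contact!")  # during POST /contacts/new
session.get_flashed_messages()         # during GET /contacts: shows the message
session.get_flashed_messages()         # a refresh shows nothing
```)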
And, believe it or not, this is about as complicated as our handler logic will get, even when we look at adding more sophisticated htmx-driven behaviors.

==== Viewing The Details Of A Contact
<_viewing_the_details_of_a_contact>
The next piece of functionality we will implement is the detail page for a Contact. The user will navigate to this page by clicking the "View" link in one of the rows in the list of contacts. This will take them to the path `/contacts/<contact id>` (e.g., `/contacts/42`).

This is a common pattern in web development: contacts are treated as resources and the URLs around these resources are organized in a coherent manner.

- If you wish to view all contacts, you issue a `GET` to `/contacts`.
- If you want a hypermedia representation allowing you to create a new contact, you issue a `GET` to `/contacts/new`.
- If you wish to view a specific contact (with, say, an id of `42`), you issue a `GET` to `/contacts/42`.

#sidebar[The Eternal Bike Shed of URL Design][
It is easy to quibble about the particulars of the path scheme you use for your application: "Should we `POST` to `/contacts/new` or to `/contacts`?"

We have seen many arguments online and in person advocating for one approach versus another. We feel it is more important to understand the overarching idea of _resources_ and _hypermedia representations_, rather than getting worked up about the smaller details of your URL design.

We recommend you just pick a reasonable, resource-oriented URL layout you like and then stay consistent. Remember, in a hypermedia system, you can always change your endpoints later, because you are using hypermedia as the engine of application state!
]

Our handler logic for the detail route is going to be _very_ simple: we just look the Contact up by id, which is embedded in the path of the URL for the route.
To extract this ID we are going to need to introduce a final bit of Flask functionality: the ability to call out pieces of a path and have them automatically extracted and passed in to a handler function. Here is what the code looks like, just a few lines of simple Python: #figure(```python @app.route("/contacts/<contact_id>") <1> def contacts_view(contact_id=0): <2> contact = Contact.find(contact_id) <3> return render_template("show.html", contact=contact) <4> ```) 1. Map the path, with a path variable named `contact_id`. 2. The handler takes the value of this path parameter. 3. Look up the corresponding contact. 4. Render the `show.html` template. You can see the syntax for extracting values from the path in the first line of code: you enclose the part of the path you wish to extract in `<>` and give it a name. This component of the path will be extracted and then passed into the handler function, via the parameter with the same name. So, if you were to navigate to the path `/contacts/42`, the value `42` would be passed into the `contacts_view()` function for the value of `contact_id`. Once we have the id of the contact we want to look up, we load it up using the `find` method on the `Contact` object. We then pass this contact into the `show.html` template and render a response. ==== The Contact Detail Template <_the_contact_detail_template> Our `show.html` template is relatively simple, just showing the same information as the table but in a slightly different format (perhaps for printing). If we add functionality like "notes" to the application later on, this will give us a good place to do so. 
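As an aside on the path-variable extraction described above: conceptually, a route pattern like `/contacts/<contact_id>` behaves like a regular expression over the request path. The following is a rough sketch of the idea only — Flask's real routing is handled by the Werkzeug library, which also supports typed converters such as `<int:contact_id>`:

#figure(caption: [A conceptual sketch of path-variable matching (not Flask's real code)],
```python
import re

def compile_route(pattern):
    # Turn "/contacts/<contact_id>" into a regex with a named group.
    regex = re.sub(r"<(\w+)>", r"(?P<\1>[^/]+)", pattern)
    return re.compile("^" + regex + "$")

route = compile_route("/contacts/<contact_id>")
match = route.match("/contacts/42")
match.group("contact_id")  # "42", passed to the handler as contact_id
```)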
Again, we will omit the "chrome" of the template and focus on the meat: #figure(caption: [The "contact details" template], ```html <h1>{{contact.first}} {{contact.last}}</h1> <div> <div>Phone: {{contact.phone}}</div> <div>Email: {{contact.email}}</div> </div> <p> <a href="/contacts/{{contact.id}}/edit">Edit</a> <a href="/contacts">Back</a> </p> ```) We simply render a First Name and Last Name header, with the additional contact information below it, and a couple of links: a link to edit the contact and a link to navigate back to the full list of contacts. ==== Editing And Deleting A Contact <_editing_and_deleting_a_contact> Next up we will tackle the functionality on the other end of that "Edit" link. Editing a contact is going to look very similar to creating a new contact. As with adding a new contact, we are going to need two routes that handle the same path, but using different HTTP methods: a `GET` to `/contacts/<contact_id>/edit` will return a form allowing you to edit the contact and a `POST` to that path will update it. We are also going to piggyback the ability to delete a contact along with this editing functionality. To do this we will need to handle a `POST` to `/contacts/<contact_id>/delete`. Let’s look at the code to handle the `GET`, which, again, will return an HTML representation of an editing interface for the given resource: #figure(caption: [The "edit contact" controller code], ```python @app.route("/contacts/<contact_id>/edit", methods=["GET"]) def contacts_edit_get(contact_id=0): contact = Contact.find(contact_id) return render_template("edit.html", contact=contact) ```) As you can see this looks a lot like our "Show Contact" functionality. In fact, it is nearly identical except for the template: here we render `edit.html` rather than `show.html`. 
While our handler code looked similar to the "Show Contact" functionality, the `edit.html` template is going to look very similar to the template for the "New Contact" functionality: we will have a form that submits updated contact values to the same "edit" URL and that presents all the fields of a contact as inputs for editing, along with any error messages. Here is the first bit of the form: #figure(caption: [The "edit contact" form start], ```html <form action="/contacts/{{ contact.id }}/edit" method="post"> <1> <fieldset> <legend>Contact Values</legend> <p> <label for="email">Email</label> <input name="email" id="email" type="text" placeholder="Email" value="{{ contact.email }}"> <2> <span class="error">{{ contact.errors['email'] }}</span> </p> ```) 1. Issue a `POST` to the `/contacts/{{ contact.id }}/edit` path. 2. As with the `new.html` page, the input is tied to the contact’s email. This HTML is nearly identical to our `new.html` form, except that this form is going to submit a `POST` to a different path, based on the id of the contact that we want to update. (It’s worth mentioning here that, rather than `POST`, we would prefer to use a `PUT` or `PATCH`, but those are not available in plain HTML.) Following this we have the remainder of our form, again very similar to the `new.html` template, and our button to submit the form. 
#figure(caption: [The "edit contact" form body], ```html <p> <label for="first_name">First Name</label> <input name="first_name" id="first_name" type="text" placeholder="First Name" value="{{ contact.first }}"> <span class="error">{{ contact.errors['first'] }}</span> </p> <p> <label for="last_name">Last Name</label> <input name="last_name" id="last_name" type="text" placeholder="Last Name" value="{{ contact.last }}"> <span class="error">{{ contact.errors['last'] }}</span> </p> <p> <label for="phone">Phone</label> <input name="phone" id="phone" type="text" placeholder="Phone" value="{{ contact.phone }}"> <span class="error">{{ contact.errors['phone'] }}</span> </p> <button>Save</button> </fieldset> </form> ```) In the final part of our template we have a small difference between the `new.html` and `edit.html`. Below the main editing form, we include a second form that allows you to delete a contact. It does this by issuing a `POST` to the `/contacts/<contact id>/delete` path. Just as we would prefer to use a `PUT` to update a contact, we would much rather use an HTTP `DELETE` request to delete one. Unfortunately that also isn’t possible in plain HTML. To finish up the page, there is a simple hyperlink back to the list of contacts. #figure(caption: [The "edit contact" form footer], ```html <form action="/contacts/{{ contact.id }}/delete" method="post"> <button>Delete Contact</button> </form> <p> <a href="/contacts/">Back</a> </p> ```) Given all the similarities between the `new.html` and `edit.html` templates, you may be wondering why we are not _refactoring_ these two templates to share logic between them. That’s a good observation and, in a production system, we would probably do just that. For our purposes, however, since our application is small and simple, we will leave the templates separate. 
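The repeated label/input/error triple in `new.html` and `edit.html` is exactly the kind of thing a shared template fragment would factor out. In Jinja this would idiomatically be a macro or an `{% include %}`; here is a language-agnostic sketch of the idea in plain Python (the `field` helper is hypothetical, not part of the book's code):

#figure(caption: [A hypothetical sketch of factoring out the shared field markup],
```python
def field(name, label, value="", error=""):
    # One helper producing the label/input/error trio used by both
    # the "new" and "edit" forms.
    return (
        f'<p>'
        f'<label for="{name}">{label}</label>'
        f'<input name="{name}" id="{name}" type="text" '
        f'placeholder="{label}" value="{value}">'
        f'<span class="error">{error}</span>'
        f'</p>'
    )

# Both forms can now render identical markup from one place:
field("first_name", "First Name")
field("email", "Email", value="joe@example.com")
```)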
#sidebar[Factoring Your Applications][ #index[factoring] One thing that often trips people up who are coming to hypermedia applications from a JavaScript background is the notion of "components". In JavaScript-oriented applications it is common to break your app up into small client-side components that are then composed together. These components are often developed and tested in isolation and provide a nice abstraction for developers to create testable code. With Hypermedia-Driven Applications, in contrast, you factor your application on the server side. As we said, the above form could be refactored into a shared template between the edit and create templates, allowing you to achieve a reusable and DRY (Don’t Repeat Yourself) implementation. Note that factoring on the server-side tends to be coarser-grained than on the client-side: you tend to split out common _sections_ rather than create lots of individual components. This has benefits (it tends to be simple) as well as drawbacks (it is not nearly as isolated as client-side components). Overall, a properly factored server-side hypermedia application can be extremely DRY. ] ===== Handling the post to /contacts/\<contact\_id\>/edit <_handling_the_post_to_contactscontact_id> Next we need to handle the HTTP `POST` request that the form in our `edit.html` template submits. We will declare another route that handles the same path as the `GET` above. Here is the new handler code: #index[POST request] #figure( ```python @app.route("/contacts/<contact_id>/edit", methods=["POST"]) <1> def contacts_edit_post(contact_id=0): c = Contact.find(contact_id) <2> c.update( request.form['first_name'], request.form['last_name'], request.form['phone'], request.form['email']) <3> if c.save(): <4> flash("Updated Contact!") return redirect("/contacts/" + str(contact_id)) <5> else: return render_template("edit.html", contact=c) <6> ```) 1. Handle a `POST` to `/contacts/<contact_id>/edit`. 2. Look the contact up by id. 3. 
Update the contact with the new information from the form.
4. Attempt to save it.
5. On success, flash a success message & redirect to the detail page.
6. On failure, re-render the edit template, showing any errors.

The logic in this handler is very similar to the logic in the handler for adding a new contact. The only real difference is that, rather than creating a new Contact, we look the contact up by id and then call the `update()` method on it with the values that were entered in the form.

Once again, this consistency between our CRUD operations is one of the nice and simplifying aspects of traditional CRUD web applications.

==== Deleting A Contact
<_deleting_a_contact>
#index[Post/Redirect/Get (PRG)]
We piggybacked contact delete functionality into the same template used to edit a contact. This second form will issue an HTTP `POST` to `/contacts/<contact_id>/delete`, and we will need to create a handler for that path as well.

Here is what the controller looks like:

#figure(caption: [The "delete contact" controller code],
```python
@app.route("/contacts/<contact_id>/delete", methods=["POST"]) <1>
def contacts_delete(contact_id=0):
    contact = Contact.find(contact_id)
    contact.delete() <2>
    flash("Deleted Contact!")
    return redirect("/contacts") <3>
```)
1. Handle a `POST` to the `/contacts/<contact_id>/delete` path.
2. Look up and then invoke the `delete()` method on the contact.
3. Flash a success message and redirect to the main list of contacts.

The handler code is very simple since we don’t need to do any validation or conditional logic: we simply look up the contact the same way we have been doing in our other handlers and invoke the `delete()` method on it, then redirect back to the list of contacts with a success flash message. No need for a template in this case: the contact is gone.

==== Contact.app… Implemented!
<_contact_app_implemented>
And, well… believe it or not, that’s our entire contact application!
If you’ve struggled with parts of the code so far, don’t worry: we don’t expect you to be a Python or Flask expert (we aren’t!). You just need a basic understanding of how they work to benefit from the remainder of the book. This is a small and simple application, but it does demonstrate many of the aspects of traditional, web 1.0 applications: CRUD, the Post/Redirect/Get pattern, working with domain logic in a controller, organizing our URLs in a coherent, resource-oriented manner. And, furthermore, this is a deeply _Hypermedia-Driven_ web application. Without thinking about it very much, we have been using REST, HATEOAS and all the other hypermedia concepts we discussed earlier. We would bet that this simple little contact app of ours is more RESTful than 99% of all JSON APIs ever built! Just by virtue of using a _hypermedia_, HTML, we naturally fall into the RESTful network architecture. So that’s great. But what’s the matter with this little web app? Why not end here and go off to develop web 1.0 style applications? Well, at some level, nothing is wrong with it. Particularly for an application as simple as this one, the older way of building web apps might be a perfectly acceptable approach. However, our application does suffer from that "clunkiness" that we mentioned earlier when discussing web 1.0 applications: every request replaces the entire screen, introducing a noticeable flicker when navigating between pages. You lose your scroll state. You have to click around a bit more than you might in a more sophisticated web application. Contact.app, at this point, just doesn’t feel like a "modern" web application. Is it time to reach for a JavaScript framework and JSON APIs to make our contact application more interactive? No. No it isn’t. It turns out that we can improve the user experience of this application while retaining its fundamental hypermedia architecture. 
In the next few chapters we will look at #link("https://htmx.org")[htmx], a hypermedia-oriented library that will let us improve our contact application while retaining the hypermedia-based approach we have used so far. #html-note[Framework Soup][ #index[components] Components encapsulate a section of a page along with its dynamic behavior. While encapsulating behavior is a good way to organize code, it can also separate elements from their surrounding context, which can lead to wrong or inadequate relationships between elements. The result is what one might call _component soup_, where information is hidden in component state, rather than being present in the HTML, which is now incomprehensible due to missing context. Before you reach for components for reuse, consider your options. Lower-level mechanisms often (allow you to) produce better HTML. In some cases, components can actually _improve_ the clarity of your HTML. #blockquote( attribution: [<NAME>, #link( "https://www.matuzo.at/blog/2023/single-page-applications-criticism", )[Why I’m not the biggest fan of Single Page Applications]], )[ The fact that the HTML document is something that you barely touch, because everything you need in there will be injected via JavaScript, puts the document and the page structure out of focus. ] In order to avoid `<div>` soup (or Markdown soup, or Component soup), you need to be aware of the markup you’re producing and be able to change it. Some SPA frameworks, and some web components, make this more difficult by putting layers of abstraction between the code the developer writes and the generated markup. While these abstractions can allow developers to create richer UI or work faster, their pervasiveness means that developers can lose sight of the actual HTML (and JavaScript) being sent to clients. Without diligent testing, this leads to inaccessibility, poor SEO, and bloat. ]
https://github.com/jgm/typst-hs
https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/layout/align-03.typ
typst
Other
// Ref: false #test(type(center), "alignment") #test(type(horizon), "alignment") #test(type(center + horizon), "2d alignment")
https://github.com/sebastos1/light-report-uia
https://raw.githubusercontent.com/sebastos1/light-report-uia/main/template/main.typ
typst
MIT License
#import "@preview/light-report-uia:0.1.0": * // CHANGE THESE #show: report.with( title: "New project", authors: ( "<NAME>", "<NAME>", "<NAME>", ), group_name: "Group 14", course_code: "IKT123-G", course_name: "Course name", date: "august 2024", lang: "en", // use "no" for norwegian ) // neat code #import "@preview/codly:1.0.0": * #show: codly-init.with() = Introduction #lorem(25) = Examples == Citation This is something stated from a source @example-source. == Tables Here's a table: #figure( caption: [Table of numbers], table( columns: (auto, auto), inset: 10pt, align: horizon, table.header( [*Letters*], [*Number*], ), [Five], [5], [Eight], [8], ) ) == Code blocks Here's a rust code block: #figure( caption: [Epic code], ```rs fn main() { let name = "buddy"; let greeting = format!("Hello, {}!", name); println!("{}", greeting); } ``` ) == Math Here's some math: $ integral_0^infinity e^(-x^2) dif x = sqrt(pi)/2 $ And some more: $sigma / theta dot i$.
https://github.com/Tengs-Fan/Tengs-Penkwe
https://raw.githubusercontent.com/Tengs-Fan/Tengs-Penkwe/master/resume.typ
typst
#import "template.typ": *
#show: resume.with(
  author: (
    firstname: "<NAME>",
    lastname: "Wu",
    email: "<EMAIL>",
    phone: "236 308 6684",
    github: "Tengs-Penkwe",
    linkedin: "tengs-wu",
    positions: ()
  ),
  date: datetime.today().display()
)

#resume_section("Skills")
// #skill_item(
//   "Geology",
//   ("Stratigraphy",
//    "Paleontology",
//    "Hydrology",
//   )
// )
#skill_item(
  "Language",
  (
    strong[C/C++], strong[JavaScript], strong[TypeScript], strong[Verilog], strong[Java],
    "Rust", "Haskell", "Prolog", "Assembly"
  )
)
#skill_item(
  "OS",
  (
    strong[Linux], "Shell Scripting", "Systemd", "Makefile", "Compiling", "Git", "Docker",
  )
)
#skill_item(
  "Remote-Sensing",
  (strong[Image Processing/Analysis], )
)
#skill_item(
  "Geo-Informatics",
  (
    "GIS", "Disaster Monitoring/Prediction", "City Planning", "Mapping",
  )
)
/*
#resume_section("Experience")
#work_experience_item_header(
  "Aaaaaa Aaaaaaaa",
  "Aaaaaa Aaaaa Aaaaa, AA",
  "Software Engineer",
  "Jul. 0000 - Jul. 0000",
)
#resume_item[
  - *#lorem(10)*. #lorem(20)
  - #lorem(30)
  - #lorem(40)
  - #lorem(30)
]
*/
#resume_section("Projects")
#personal_project_item_header(
  "Environmental Monitoring Using Remote Sensing",
  "China University of Geosciences",
  "Participant Researcher",
  "July 2020 - Apr 2021",
)
#resume_item[
  - Participated in the scientific research project _Using Landsat time-series images to explore the impact of land cover on urban surface heat island change_. Rated as “excellent” in the Provincial College Students' innovation and entrepreneurship training program.
]
#personal_project_item_header(
  "Image Processing & Analysis System",
  "China University of Geosciences",
  "Course Project",
  "Dec 2020 - Feb 2021",
)
#resume_item[
  - Developed image-processing software in C++ that incorporates a diverse range of image-processing algorithms and advanced analysis techniques, empowering users to efficiently process RS images and extract valuable information.
]
#personal_project_item_header(
  "Remote Sensing Data Collection & Analysis",
  "China University of Geosciences",
  "Summer Internship",
  "May 2021 - July 2021",
)
#resume_item[
  - Gathered time-series remote sensing, DEM, and other multi-source data.
  - Conducted practical on-field data collection, refining skills.
  - Utilized GIS and RS software for thorough analysis of Remote Sensing (RS) and diverse source data, extracting valuable insights for improved understanding and actionable analyses.
]
#personal_project_item_header(
  "OpenMIPS CPU",
  "",
  "Personal Project",
  "Mar 2021 - Apr 2023",
)
#resume_item[
  - Created an OpenMIPS CPU in Verilog to deepen understanding of CPU architectures.
  - Worked through various tests to verify the functionality and correctness of the CPU design.
  - Made use of scripts and makefiles to streamline the project build and testing process, saving time and effort.
  - Ensured the project's reliability through detailed testing using Verilog testbenches, helping to catch and fix errors early in the development process.
]
#personal_project_item_header(
  "UBCInsight Software Project",
  "University of British Columbia",
  "Course Project",
  "Feb 2023 - May 2023",
)
#resume_item[
  - Collaborated within a team to design and develop a data analytics web platform that enables users to query, visualize, and analyze academic data from UBC courses and programs.
  - Utilized TypeScript for backend development and React for frontend implementation
  - Applied software engineering principles such as SOLID and design patterns to ensure the application was modular, extensible, and maintainable.
- Employed agile methodologies for iterative development and efficient project management - Conducted requirements analysis, design, implementation, testing, and thorough documentation - Presented the final project, showcasing features and functionalities to peers and instructors ] #personal_project_item_header( "Relational Databases Project", "University of British Columbia", "Course Project", "July 2023 - Aug 2023", ) #resume_item[ - Collaborated within a team to design, create, and manage a relational database using SQLite3 - Applied database normalization principles to ensure data integrity and efficiency - Developed a web interface utilizing Javascript, HTML, and CSS to for intuitive user interaction - Integrated PHP scripts to perform query operations like creating, adding, dropping, and updating data ] #resume_section("Education") #cvEntry( title: [B.E. in Geo-informatics], society: [China University of Geosciences], date: [Sept. 2019 - June. 2022], location: [Wuhan, China], logo: "./assets/CUG.svg", description: list( [Thesis: Flood Disaster Monitoring Based on Landsat-8 and Sentinel-1 Imagery - A Case Study of Dongting Lake], ) ) #cvEntry( title: [B.S. in Computer Science], society: [University of British Columbia], date: [Sept. 2022 - Expected Aug 2025], location: [Vancouver, Canada], logo: "./assets/UBC_COA.svg", description: list( // Additional details can be added here, such as relevant courses or projects ) ) #resume_section("Volunteer") #volunteer_item_header( "Class Committee In Charge of Study", "China University of Geosciences", "Wuhan, China", "Sept 2020 – June 2021" ) #resume_item[ - Assisted fellow students with academic coursework, serving as a liaison between students and faculty. - Provided instructors with insights into student academic performance and well-being. 
] #volunteer_item_header( "Member of Student Graduation Committee", "China University of Geosciences", "Wuhan, China", "Sept 2021 – June 2022" ) #resume_item[ - Facilitated communication between students and the university to address concerns and provide assistance. - Assisted faculty in understanding student challenges and providing timely support. ] #volunteer_item_header( "Volunteer", "Various Volunteer Roles", "Wuhan, China", "June 2022 - Sept 2022" ) #resume_item[ - Served as a volunteer at Vaccine Stations, assisting in efficient vaccine distribution. - Aided student learning at the school's learning center. - Taught geology science popularization courses at nearby primary schools. - Actively participated in suicide intervention programs to offer timely support and resources. ] #volunteer_item_header( "Volunteer", "UBC Chinese Language Program", "Vancouver, BC", "Sept 2022 - Dec 2022" ) #resume_item[ - Facilitated conversational practice sessions to enhance students' spoken Chinese proficiency. - Assisted students in reviewing and mastering grammatical structures and course content. ] #volunteer_item_header( "Volunteer", "UBC Vancouver's Move-In Day", "Vancouver, BC", "Aug 2023 - Sept 2023" ) #resume_item[ - Assisted in the student check-in process, helping new and returning students smoothly transition to campus life. - Acted as a welcoming face for UBC, providing general guidance and answering queries to enhance the move-in experience for students. ] #volunteer_item_header( "Community Volunteer", "<NAME> Church", "Vancouver, BC", "Aug 2023 - Sept 2023" ) #resume_item[ - Assisted in organizing and participating in church events, contributing to community engagement and relationship-building. ]
https://github.com/luiswirth/bsc-thesis
https://raw.githubusercontent.com/luiswirth/bsc-thesis/main/src/title.typ
typst
#import "setup.typ": * #page[ #image("../res/ethz-logo.svg") #hide[#heading()[Title]] #align(center + horizon)[ #[ #set text(size: 25pt, weight: "bold") Rust Implementation of \ Finite Element Exterior Calculus on \ Coordinate-Free Simplicial Complexes \ ] #v(1cm) #text(size: 20pt, style: "italic")[ <NAME> ] \ #weblink("mailto:<EMAIL>", "<EMAIL>") \ #weblink("https://ethz.lwirth.com", "ethz.lwirth.com") #v(0.75cm) #text(size: 15pt)[ #datetime.today().display("[day]th [month repr:long] [year]"). ] ] ]
https://github.com/JulioJPinto/curriculum
https://raw.githubusercontent.com/JulioJPinto/curriculum/main/main.typ
typst
#import "template.typ": *
#let today = datetime.today()
#show: resume.with(
  author: (
    firstname: "Júlio",
    lastname: "Pinto",
    email: "<EMAIL>",
    github: "JulioJPinto",
    linkedin: "JulioJPinto",
    positions: (
      "Marketing Director @ CeSIUM",
      "Co-Champion @ CoderDojo Braga",
    )
  ),
  date: today.display("[month repr:long] [day], [year]")
)

#resume_section("Education")
#work_experience_item_header(
  "University of Minho",
  "Braga, Portugal",
  "B.S. in Software Engineering",
  "Oct. 2021 - Currently",
)
#work_experience_item_header(
  "International House",
  "Braga, Portugal",
  "First Certificate Cambridge English - 181/190 (C1)",
  "Oct. 2018 - July 2020",
)

#resume_section("Experience")
#work_experience_item_header(
  "CeSIUM",
  "Braga, Portugal",
  "Director of the Marketing Department",
  "Nov. 2021 - Currently",
)
#resume_item[
  - *CeSIUM - Centro de Estudantes de Engenharia Informática*.
  - I started my journey in CeSIUM in 2021 as a collaborator of the Marketing Department and over the years became a member of the organization: first as co-director of the department and, in the last year, as director.
  - As Director of Marketing, I had multiple responsibilities, from managing a team of about 15 people to managing CeSIUM's social media.
  - Developed a lot of proficiency with tools like Adobe Photoshop, Adobe Illustrator, Adobe After Effects and Figma. I also worked on a couple of projects in #link("https://github.com/cesium")[*CeSIUM's organization*] by either helping in the development of a design or coding in HTML, CSS/TailwindCSS or NextJS.
  - Part of my job at CeSIUM is taking part in the organization teams of different events, such as the #link("https://seium.org/")[*Semana da Engenharia Informática (SEI)*], #link("https://join.di.uminho.pt/")[*Jornadas da Informática (JOIN)*] and #link("https://bugsbyte.org/")[*Hackathon Bugsbyte*]
]
#work_experience_item_header(
  "<NAME>",
  "Braga, Portugal",
  "Mentor & Co-Champion",
  "Mar.
2022 - Currently",
)
#resume_item[
  - *<NAME>* is an initiative started by *CeSIUM*. It teaches kids from 7 to 17 years old the basics of programming and how to develop projects.
  - I started my journey in <NAME>a as a Mentor and volunteer in 2022, joined the core team that same year, and became Co-Champion in 2023.
  - As a Co-Champion, my main responsibilities are guiding other mentors and helping out with logistics and public relations.
  - I also organized DojoCon Braga, a small event with the objective of introducing more people to CoderDojo Braga, as well as discussing how computing and programming should be integrated into education.
]

#resume_section("Projects")
#personal_project_item_header(
  [#link("https://github.com/HexaTable/LearningChain")[LearningChain]],
  "",
  "Hexatable Group",
  "Apr. 2024",
)
#resume_item[
  - With a couple of friends, developed a small prototype of *LearningChain*: a Web3 platform, just like Udemy or Skillshare, where anyone can publish their own course and emit a certification using Web3 technologies such as NFTs.
  - This project was developed at Hackathon Bugsbyte '23.
  - We used TypeScript, Tailwind, Prisma and Solidity.
]
#personal_project_item_header(
  [#link("https://github.com/JulioJPinto/CP-Project")[Program Design by Calculation]],
  "",
  "University of Minho",
  "Nov. 2023 - Jan. 2023",
)
#resume_item[
  - For a college project, a couple of colleagues and I solved several programming problems using concepts of Program Design by Calculation, such as catamorphisms, anamorphisms, functors, etc.
  - For this project we used Haskell.
]

#resume_section("Skills")
#v(weak: false, 8pt)
#skill_item(
  [Prog.
Languages],
  (
    strong[Haskell],
    strong[C],
    strong[Java],
    strong[HTML & CSS],
    strong[SQL],
    strong[Python],
    "C++",
    "TypeScript",
    "JavaScript/NextJS",
    "Go",
  )
)

#skill_item(
  "Editing Tools",
  (
    strong[Figma],
    strong[Adobe Illustrator],
    strong[Adobe After Effects],
    "Adobe Photoshop",
    "Adobe Premiere",
    "OBS"
  )
)

#skill_item(
  "Languages",
  (
    strong[Portuguese],
    strong[English],
    "Spanish"
  )
)
https://github.com/kdog3682/mathematical
https://raw.githubusercontent.com/kdog3682/mathematical/main/0.1.0/src/expanded-factorial-multiplication.typ
typst
#import "@local/typkit:0.1.0": * // currently a hack #let expanded-factorial-multiplication(..sink) = { let arr = rotate(sym.arrow, 90deg) let rows = sink.pos() let start = rows.at(0).flatten().at(0) let top-row = range(start, 0, step: -1) let e = top-row.map(x => [#x]).intersperse($sym.times$) e.push(hide([1])) e.push(hide([1])) e.push(table.cell(align: center, colspan: 3, arr)) e.push(hide([1])) e.push(table.cell(align: center, colspan: 3, arr)) e.push(hide([1])) e.push(hide([1])) e.push(hide([1])) e.push(hide([1])) e.push(table.cell(align: center, colspan: 3, [20])) e.push(hide([1])) e.push(table.cell(align: center, colspan: 3, [6])) e.push(hide([1])) e.push(hide([1])) e.push(table.cell(inset: 10pt, align: center, colspan: 11, $6 times 20 times 6 times 1= bold(720)$)) table(columns: 11, ..e, stroke: none) }
https://github.com/AliothCancer/AppuntiUniversity
https://raw.githubusercontent.com/AliothCancer/AppuntiUniversity/main/capitoli_fisica/trasmissione_non_stazionaria.typ
typst
= Heat Transfer: Non-Stationary Regime <conduzione-del-calore-regime-non-stazionario>

== Biot Number <numero-di-biot>

$ upright("Bi") = R_k / R_h = frac(h L_(c h a r a c t e r i s t i c), k) $

- $h med med frac(W, m^2 K)$ : convective heat transfer coefficient (fluids)
- $k med med frac(W, m K)$ : conductive heat transfer coefficient (solids)
- L: characteristic length
- $R_k$ : resistance to conduction
- $R_h$ : resistance to convection

== Characteristic Length <lunghezza-caratteristica>

$ L_(upright("char.")) = V / S $

- V : volume of the object
- S: surface of the object \ #underline[in contact with the heat-transfer fluid]

== Cooling Time <tempo-di-raffreddamento>

N.B. \ Valid only if \ the Biot number $lt.eq 0.1$

$ t = tau ln frac(T_i - T_oo, T_f - T_oo) $ \
$T_i$: initial \
$T_f$: final \
$T_oo$: temperature of the fluid in which the body is immersed. \ \
With $ tau = frac(rho c, h) L_(upright("characteristic")) $ $ tau = frac(M dot.op c, h dot.op S) $

- $c:$ specific heat of the material of the surface considered

== Final Temperature at Time t <temperatura-finale-al-tempo-t>

Given the instant t, the temperature $T_oo$ of the convective fluid, and $tau$, the final temperature is:~~$ T_f = lr((T_i - T_oo)) dot.op e^(- t / tau) + T_oo $
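As a quick numeric sanity check, the lumped-capacitance relations above (Bi = hL/k, L = V/S, τ = ρcL/h, t = τ·ln((Tᵢ−T∞)/(T_f−T∞))) can be evaluated in a short sketch. All numeric material values below are illustrative assumptions, not taken from these notes:

```python
import math

# Illustrative sketch of the lumped-capacitance relations above.
# Material values are assumed (steel-like), for demonstration only.

def biot(h, L, k):
    # Bi = h * L / k
    return h * L / k

def cooling_time(T_i, T_f, T_inf, tau):
    # t = tau * ln((T_i - T_inf) / (T_f - T_inf)), valid for Bi <= 0.1
    return tau * math.log((T_i - T_inf) / (T_f - T_inf))

r = 0.01                 # m, radius of a small sphere (assumed)
L = r / 3                # m, characteristic length V/S of a sphere
h, k = 50.0, 45.0        # W/(m^2 K) convective, W/(m K) conductive
rho, c = 7800.0, 490.0   # kg/m^3, J/(kg K)

tau = rho * c * L / h
print(biot(h, L, k))                           # well below 0.1, model applies
print(cooling_time(400.0, 350.0, 300.0, tau))  # seconds to cool halfway to T_inf
```

Since the body here cools halfway toward the fluid temperature, the result is exactly τ·ln 2, which is a handy cross-check.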
https://github.com/mumblingdrunkard/mscs-thesis
https://raw.githubusercontent.com/mumblingdrunkard/mscs-thesis/master/src/modern-hardware-design/index.typ
typst
= Modern Hardware Design <ch:modern-hardware-design>

Throughout this thesis, we discuss the relationships between abstract representations of circuits and their implementation as transistors and wires. This chapter briefly describes a standard flow of modern hardware development.

== Register-Transfer Level and Hardware Description Languages

Modern processors are not designed as sequences of logic gates and registers in large diagrams. They are far too complex for that. Logic gates are also too concrete for the algorithms used when optimising designs, described later in this chapter. Instead, modern hardware is designed using _hardware description languages_ (HDL): code that describes registers and logic and how they are connected. Popular languages include the _VHSIC#footnote[Very High Speed Integrated Circuit was a research program by the United States Department of Defense to develop high-speed integrated circuits.] hardware description language_ (VHDL) @bib:vhdl-standard, and _SystemVerilog_ @bib:systemverilog-standard.

=== Language Primitives

HDLs define various concepts, types, and components. Some concepts have very clear, direct mappings to real hardware. Other concepts exist to enhance development of the hardware. Even at this level, HDLs concern themselves with abstract machines and operations within those abstract machines.

Generally, there is a strict divide between _synthesisable_ and _unsynthesisable_ features of HDLs. Synthesisable features are those that can be _synthesised_ by a compiler. Synthesis means translating the abstract representation (the code) into a physical form (a network of transistors and wires). When we discuss hardware design, we are most concerned with the synthesisable features.

The elements described here come from SystemVerilog. @lst:systemverilog-example shows an example of SystemVerilog code.
#figure([```systemverilog typedef logic[31:0] word_t; typedef enum { ADD, SUB, AND, OR, XOR } op_t; module alu ( input logic clk, input logic nrst, input word_t in_a, input word_t in_b, input op_t in_op, input logic in_valid, output logic out_valid, output word_t out_value, ); word_t value; logic valid; word_t result; always_comb begin case (in_op) ADD : result = in_a + in_b; SUB : result = in_a - in_b; AND : result = in_a & in_b; OR : result = in_a | in_b; XOR : result = in_a ^ in_b; default : result = 'x; endcase out_value = value; out_valid = valid; end always_ff @(posedge clk or negedge nrst) begin if (!nrst) begin valid <= '0; end else begin if (in_valid) begin value <= result; end valid <= in_valid; end end endmodule : alu ```], caption: [SystemVerilog code displaying several features of the language], kind: raw, )<lst:systemverilog-example> It has to be noted that SystemVerilog is a high-level language. Logic and flip-flops are _inferred_ and are not explicitly declared by the programmer. The programmer only controls how the circuit should behave and not how the final circuit has to be implemented. First off, the code in @lst:systemverilog-example declares a module called `alu` that has six inputs: `clk`, `nrst`, `in_a`, `in_b`, `in_op`, and `in_valid`. The types of these inputs can be user-defined like `word_t` and `op_t` or one of the primitive types like `logic`. The module also declares two outputs `out_valid` and `out_value`. Within the module, more variables are created for `value`, `valid` and `result`. A procedure is declared with `always_comb` indicating the intent that the logic contained within is purely combinational. Here, a `case`-statement uses the `in_op` value to select between one of several operators and assigns the result to the `result` variable. In SystemVerilog `=` is a _blocking assignment_, meaning all subsequent operations in the procedure use the new value. 
The default case ensures `result` is always assigned to something, no matter the value of `in_op`. The outputs are also set to match the internal values `value` and `valid`. Next follows another procedure `always_ff` (always flip-flop) indicating the intent that the code within will require flip-flop circuits to implement. This logic only triggers on the positive edge of `clk` or on the negative edge of `nrst`. Then, if `nrst` is false (if `!nrst` is true), set the `valid` value to 0 using a _non-blocking assignment_ (`<=`). A non-blocking assignment is different in that the assignment only takes effect at the end of the simulation cycle. Other operations continue using the same `valid` until the next simulation cycle. Otherwise, if `!nrst` is false, the code checks whether the `in_valid` signal is high, and if it is, sets the `value` to the `result`. `valid` is always written to be the `in_valid` value as long as the circuit is not reset by `nrst` going low. === Mapping High-Level Constructs to Logic Gates As mentioned, we are concerned with the synthesisable subset of the discussed HDLs. As part of that, we describe how different HDL code can be synthesised into hardware. ==== Logic For example, the case-statement can be represented as a series of `if`-`else if`-`else` statements, which can be constructed in hardware as a series of muxes, as shown in @fig:case-synthesis. There are five units that perform the respective operations. There are also comparison units to determine whether the incoming `op` is one of the given operations, represented as `*?` where `*` is the operation. The results of these comparisons are used to control several muxes represented by `M`. When the control signal is low, the mux selects the top output, and when the signal is high, it selects the bottom output. 
#figure( ```monosketch a b op │ │ │ │ │ └──────┬───┬───┬───┐ │ ├─▶─┐ ┌─▼┐┌─▼┐┌─▼┐┌─▼┐ │ │ │+┼─┐│-?││&?││|?││^?│ ├─│─▶─┘ │└─┬┘└─┬┘└─┬┘└─┬┘ │ ├─▶─┐ └─▶▼┐ │ │ │ │ │ │-┼─┐ │M├┐ │ │ │ ├─│─▶─┘ └─▶─┘└▶▼┐ │ │ │ ├─▶─┐ │M├┐ │ │ │ │ │&├───────▶─┘│ │ │ ├─│─▶─┘ └▶▼┐ │ │ ├─▶─┐ │M├┐ │ │ │ │|│───────────▶─┘│ │ ├─│─▶─┘ └▶▼┐ │ └─▶─┐ │M├───▶result │ │^├───────────────▶─┘ └───▶─┘ ```, caption: [Implementation of the `case`-statement from @lst:systemverilog-example], kind: image, )<fig:case-synthesis> ==== Maybe, DontCare, and Unknown Notice that the value `'x` is assigned to `result` in the default case. This is not a "real" value and is treated as "unknown" or "invalid". When assigning `'x` like this, the programmer says that they do not care about the result in that case. All values must eventually resolve to high or low in a physical implementation; `x` is thus an unsynthesisable value as it does not actually exist as a value. "Don't care" values convey the semantic meaning that, in this case, the implementation of the circuit is allowed to do anything. This liberty has been taken by merging the `+` case with the default case. Such values are often referred to as "maybe", "don't care", or "unknown". They are neither true nor false, but possibly both. The truth tables for gates can be modified to account for unknown values and produce sensible results. For example: the output of an OR-gate is always true if at least one of its inputs are true, it is only false if both inputs are false, and otherwise, it is unknown. The output of an AND-gate is true only if both inputs are true, it is false if at least one of the inputs are false, and it is unknown otherwise. This kind of three-valued logic is available in SystemVerilog, though standard practice seems to avoid them. In this case, it may be better to explicitly specify that the default case should return the same result as the `+`-case. 
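The three-valued AND/OR rules described above can be sketched in a few lines. This is a hedged illustration in Python rather than SystemVerilog, using the string `'x'` to stand for the unknown value:

```python
# Sketch of three-valued logic: values are 0, 1, or 'x' (unknown).
# Mirrors the rules above: OR is false only when both inputs are false;
# AND is true only when both inputs are true; otherwise 'x' may result.

def tri_or(a, b):
    if a == 1 or b == 1:
        return 1          # a true input dominates OR
    if a == 0 and b == 0:
        return 0
    return 'x'

def tri_and(a, b):
    if a == 0 or b == 0:
        return 0          # a false input dominates AND
    if a == 1 and b == 1:
        return 1
    return 'x'

print(tri_or(1, 'x'))   # 1: the result is known despite the unknown input
print(tri_and(1, 'x'))  # x: the result depends on the unknown input
```

Note how a dominant input (true for OR, false for AND) makes the output known even when the other input is unknown, exactly as in the truth-table description above.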
==== Latches and Flip-Flops The implication of `always_ff` is that the logic contained inside should require flip-flops for the logic. Non-blocking assignments can be implemented using flip-flops as described in the previous chapter. By using a flip-flop style circuit, the output is only updated once the clock-signal goes low again. Latches are inferred at synthesis when a signal is not assigned in all possible cases. For example if the `case`-statement was missing cases and did not have a `default` case, the value of `result` would be indeterminate. SystemVerilog is defined such that variables retain their previously assigned value unless updated. A latch can be used to accomplish this and only enable the latch when there is an updated value available. A latch continuously reads and outputs the input value while the enable-signal is active. However, this behaviour is commonly undesirable as it is often unintentional and adds more delay on account of needing to pass through a latch, which is why the `always_comb` block exists to warn the programmer. If something inside an `always_comb` block results in an inferred latch, the tooling for the language gives an error or a warning. === Scaling Circuits With high-level HDLs, it may be easy to forget that the circuit is destined for synthesis. For example: variable indexing (where the index might change every cycle) in arrays requires a network of muxes for each place the array is indexed. Adding more ports to read from an array of values can be even more expensive as it complicates wiring. This is more true for arrays of stored values (large flip-flop structures). Ports dedicated to writing to registers are more expensive to implement than ports for reading. For multiple instructions to write back their results to the PRF, it must have multiple ports for writing. Naive scaling is expensive and much research has been done to reduce the number of ports needed in the register file @bib:banked-register-files. 
== Testing Designs

There are several ways to go about testing hardware designs. Unsynthesisable features of HDLs are included because they are useful for testing and debugging circuit behaviour.

=== Simulation of Register-Transfer Level

One alternative is to run the code in a simulator. A simulator takes the code and runs it according to the language standard. A popular simulator is _Verilator_ @bib:verilator which accepts SystemVerilog and translates it to a multithreaded model that can be executed on the host system. The freedom of a software simulator allows for great support for various testing and debugging.

=== Field-Programmable Gate Arrays

When the design has been tested in a simulator and correct behaviour is confirmed, it is common to prototype the circuit on a _field-programmable gate array_ (FPGA). An FPGA is a large canvas of _look-up tables_ (LUT) and various other components on a _fabric_. The LUTs can be programmed to provide a given output for any given input. The simplest possible LUT has two inputs and a single output. Within it are four register cells that can be selected by using the two inputs as an address. The values of the four register cells can be programmed to any values. A two-input LUT can act as any logic gate by programming it with the same behaviour as the appropriate truth table. The fabric consists of wires between the components and can be programmed in a similar fashion to decide which components are connected together. With a proper bit-stream and enough components, an FPGA can be programmed to act like any circuit.

FPGAs are much faster than simulators, and even though they are slower than creating an integrated circuit, they are still representative of metrics like IPC. An FPGA is less flexible than a simulator in that the code has to go through synthesis and unsynthesisable features therefore become unavailable. Additionally, FPGAs are less flexible in how certain language elements are implemented.
This means that some features that are synthesisable in one technology may be unsynthesisable in a different technology. One example is tri-state buffers that allow an output to both source and sink current, or be electrically disconnected. FPGAs usually have tri-state ports on the chip interface (the input and output ports to the chip) and generally don't support tri-state logic internally, which then has to be implemented some other way. == Logic Synthesis: From Register-Transfer Level to Logic Gates As shown in @fig:case-synthesis, translation from HDL to a circuit can be simple. This is the job of synthesis. The abstract circuit behaviour described by the HDL must be translated to a concrete implementation in terms of logic gates made from transistors. Logic synthesis tools will use primitives with various different implementations depending on timing requirements. A physically larger circuit can often have a shorter delay#footnote[See carry look-ahead adders.]. === Circuit Optimisation The circuit in @fig:case-synthesis was generated naively and contains a path that has to go through more logic than necessary; the maximal delay of the circuit is higher than it needs to be. There are many transformations that can be performed on the circuit that preserve correct behaviour. ==== Restructuring One example is turning a cascaded sequence of muxes---like the one in @fig:case-synthesis ---into a tree as shown in @fig:mux-tree. This reduces the number of muxes that a signal must pass through to get to the final output without increasing the number of components needed. The important thing here is that the longest possible delay is reduced. #figure( ```monosketch │ ─┬┴┐ │ │M├─┬┴┐ ─┴─┘ │M├┐ │ ─┴─┘└┬┴┐ ─┬─┐ │M├─ │M├─┴─┘ ─┴┬┘ │ ```, caption: "Turning the staggered muxes into a tree", kind: image, )<fig:mux-tree> The longest possible path a signal can take between a flip-flop output and another flip-flop input---in terms of delay---is called the _critical path_. 
There may be physically long wires with short delays. Optimisation will iteratively focus on shortening the longest path by replacing circuits along it with ones that shorten the delay. ==== Retiming Another important optimisation is _retiming_ where flip-flops and latches are moved, inserted, or removed in the circuit in a way that preserves behaviour at the output @bib:retiming. For example, in the circuit shown, the result is assigned to an output in a way that should infer a flip-flop. However, if the output of this circuit is immediately assigned to a flip-flop again by a consumer, there is an imbalance where a lot of logic is done in the first circuit, but no logic (and thus, inconsequential delay) is performed between the output flip-flop and the next flip-flop. A synthesising process will recognise this situation and move the output flip-flop into the circuit, so that some of the logic occurs before the flip-flop, and some of it happens after it. This way, the clock frequency can be increased because the longest path between flip-flops is shortened. Because of this retiming, it is often not necessary to be explicit about manually balancing logic between flip-flops. It is possible to do complex, slow logic, then assign the result to a chain of flip-flops and let the retiming algorithm deal with balancing the timing of the circuit. Optimisations like these are only possible because the code conveys _intent_. The language standard does not require that each signal used by the programmer actually exists in the final implementation, only that the circuit behaves _as if_. I.e., behaviour is only required to be preserved at the inputs and outputs of the system. === Place and Route The final step in the process is _place and route_ in which the individual transistors are placed in space and are connected by wires. The rules of place and route are dictated by the underlying technology to be used. 
Certain designs that are simple to implement when creating integrated circuits can be complicated to implement when using an FPGA. High-level designs can be optimised for the underlying technology, but this requires knowledge of how constructs translate to that technology. Place and route ties together with higher-level logic synthesis, and optimising a circuit is an iterative process. It is normal for certain common circuits to be designed by hand; place and route will use these circuits as building blocks for the larger circuit. _Static random access memory_ (SRAM) blocks are often hand-designed to optimise for area and power usage.
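Returning to the FPGA section above: the two-input LUT it describes is easy to model as four programmable cells addressed by the inputs. This sketch (Python, illustrative only) shows the same cells acting as an AND or an XOR gate depending on how they are programmed:

```python
# Minimal model of a 2-input LUT: four stored bits, addressed by the
# two inputs. Reprogramming the bits changes which gate it implements.

class Lut2:
    def __init__(self, bits):
        assert len(bits) == 4
        self.bits = list(bits)  # bits[2*a + b] is the output for inputs (a, b)

    def __call__(self, a, b):
        return self.bits[2 * a + b]

and_gate = Lut2([0, 0, 0, 1])  # programmed with the AND truth table
xor_gate = Lut2([0, 1, 1, 0])  # the same hardware, programmed as XOR

print(and_gate(1, 1), xor_gate(1, 1))  # 1 0
```

The design point this illustrates is that the hardware is fixed (four cells plus an address decoder) while the function is pure configuration, which is why one bit-stream can turn the same fabric into arbitrarily different circuits.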
https://github.com/Myriad-Dreamin/tinymist
https://raw.githubusercontent.com/Myriad-Dreamin/tinymist/main/crates/tinymist-query/src/fixtures/docs/blocky2.typ
typst
Apache License 2.0
/* This is X Note: This is not Y */ #let x /* ident */ = 1;
https://github.com/donRumata03/aim-report
https://raw.githubusercontent.com/donRumata03/aim-report/master/lib/HW-template.typ
typst
#import "presentation-utilities.typ": codeblock, set-code
#import "@preview/showybox:1.0.0": showybox
#import "todos.typ": *
#import "generic-utils.typ": *

#let point-counter = counter("points")

#let comment(body, width: 20%, color: green.lighten(90%), dy: 0em) = place(right, dy: dy - 1em, dx: 25%, {
  set text(size: 0.85em)
  showybox(align: right, width: width, frame: (body-color: color), body)
})

#let template(body, name: "", deadline: none, p: none, bonus-p: none) = {
  set raw(lang: "rs")
  set text(lang: "ru")
  show raw.where(block: false): box.with(fill: gray.lighten(80%), outset: (y: 0.2em), radius: 0.2em)

  let codeblock = codeblock.with(size: 1.2em)
  show raw.where(block: true, lang: "rs"): codeblock
  show raw.where(block: true, lang: "cpp"): codeblock
  show raw.where(lang: "error"): it => codeblock(nums: false, raw(it.text, lang: "rs"))
  show raw: set text(font: "Fira Code")

  show emph: set text(fill: red.darken(20%))
  show link: set text(blue)
  show link: underline
  show link: emph

  set heading(numbering: "1.1.")
  set page(margin: (right: 20%))

  // "Домашнее задание" = "Homework"
  align(center, text(size: 2em)[Домашнее задание])
  align(center, text(size: 1.6em, name))

  comment(width: 40%, color: none)[
    // "Всего баллов" = "Total points"
    #text(blue.darken(50%), size: 1.2em)[
      Всего баллов: #h(1fr) #if p == none { locate(loc => point-counter.final(loc).at(0)) } else { p }
    ]
    // "Бонусные баллы" = "Bonus points"
    #if bonus-p != none {
      text(red.darken(30%), size: 1.2em)[
        Бонусные баллы: #h(1fr) #bonus-p
      ]
    }
  ]

  v(2em)
  body
}

// "Баллы" = "Points"
#let points(n) = {
  point-counter.update(c => c + n)
  comment(text(fill: blue.darken(30%), size: 1.05em)[Баллы: #h(1fr)] + str(n))
}

#let notice(fill: red, body) = block(stroke: fill, fill: fill.lighten(90%), inset: 1em, body)
https://github.com/MLAkainu/Network-Comuter-Report
https://raw.githubusercontent.com/MLAkainu/Network-Comuter-Report/main/components/header.typ
typst
Apache License 2.0
#import "../metadata.typ": meta #locate(loc => { set text(font: "Iosevka NF", size: 10pt) show: block.with( stroke: (bottom: 1pt), inset: (bottom: 0.5em), ) // skip first page header if loc.page() == 1 { return } stack( dir: ltr, image("./assets/hcmut.jpg", height: 2.5em), 0.5cm, align( horizon, stack( dir: ttb, upper("Trường Đại học Bách Khoa - ĐHQG-HCM"), 0.75em, upper("Khoa Khoa học và Kỹ thuật Máy tính"), ) ), 1fr, ) })
https://github.com/jgm/typst-hs
https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/closure-19.typ
typst
Other
// Error: 10-14 expected identifier, found `none` #let foo(none: b) = key
https://github.com/hugoledoux/msc_geomatics_thesis_typst
https://raw.githubusercontent.com/hugoledoux/msc_geomatics_thesis_typst/main/front/abstract.typ
typst
MIT License
#import "../template.typ": * #heading(outlined: false)[Abstract] #info[ Should fit on one page. ] Lemongrass frosted gingerbread bites banana bread orange crumbled lentils sweet potato black bean burrito green pepper springtime strawberry ginger lemongrass agave green tea smoky maple tempeh glaze enchiladas couscous. Cranberry spritzer Malaysian cinnamon pineapple salsa apples spring cherry bomb bananas blueberry pops scotch bonnet pepper spiced pumpkin chili lime eating together kale blood orange smash arugula salad. Bento box roasted peanuts pasta Sicilian pistachio pesto lavender lemonade elderberry Southern Italian citrusy mint lime taco salsa lentils walnut pesto tart quinoa flatbread sweet potato grenadillo. Lemongrass frosted gingerbread bites banana bread orange crumbled lentils sweet potato black bean burrito green pepper springtime strawberry ginger lemongrass agave green tea smoky maple tempeh glaze enchiladas couscous. Cranberry spritzer Malaysian cinnamon pineapple salsa apples spring cherry bomb bananas blueberry pops scotch bonnet pepper spiced pumpkin chili lime eating together kale blood orange smash arugula salad. Bento box roasted peanuts pasta Sicilian pistachio pesto lavender lemonade elderberry Southern Italian citrusy mint lime taco salsa lentils walnut pesto tart quinoa flatbread sweet potato grenadillo.
https://github.com/WinstonMDP/math
https://raw.githubusercontent.com/WinstonMDP/math/main/knowledge/SLAEs.typ
typst
#import "../cfg.typ": cfg
#show: cfg

= Systems of linear algebraic equations (SLAEs)

A linear algebraic equation with coefficients $arrow(a)$ in a field $F$ and a constant term $b in F := a_1 x_1 + ... + a_n x_n = b$.

A linear equation is homogeneous $:= b = 0$.

A SLAE is consistent $:= exists$ a solution.

Solution sets of two SLAEs are equal $<->$ their matrices are equivalent.

A leading element of a row $:=$ the first nonzero element of the row.

An unknown is main $(overline("free"))$ $:=$ its column contains (does not contain) a leading element.

A matrix is echelon $:=$ the leading elements form a strictly increasing sequence. It's possible to convert each matrix to echelon form by elementary transformations.

A SLAE can be represented in matrix form: $A vec(x_1, dots.v, x_n) = vec(b_1, dots.v, b_m)$.

*The Kronecker-Capelli theorem:* A SLAE is consistent $<->$ the rank of its matrix $=$ the rank of the extended matrix.

The solution set of a homogeneous SLAE with $n$ unknowns is a subspace of $K^n$.

The solution set of a consistent SLAE is the sum of one of its solutions and the solution subspace of the corresponding homogeneous SLAE.

The dimension of the solution space of a homogeneous SLAE $A$ with $n$ unknowns $= n - op("rk")A$.

A fundamental system of solutions (FSS) $:=$ a basis of the solution space of a homogeneous SLAE.

*Cramer's rule:* $A_i$ is a matrix obtained by replacing the $i$-th column with the column of constant terms $-> det A != 0 -> x_i = (det A_i)/(det A)$.
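As a small worked example of Cramer's rule, here is a sketch in Python for a 2×2 system, with the determinant computed directly:

```python
# Cramer's rule for a 2x2 system A x = b: x_i = det(A_i) / det(A),
# where A_i is A with column i replaced by the constant-term column.

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def cramer2(a, b):
    d = det2(a)
    assert d != 0, "Cramer's rule requires det A != 0"
    a0 = [[b[0], a[0][1]], [b[1], a[1][1]]]  # replace column 0 with b
    a1 = [[a[0][0], b[0]], [a[1][0], b[1]]]  # replace column 1 with b
    return (det2(a0) / d, det2(a1) / d)

# x + y = 3, x - y = 1  =>  x = 2, y = 1
print(cramer2([[1, 1], [1, -1]], [3, 1]))  # (2.0, 1.0)
```

Here det A = −2, det A₀ = −4 and det A₁ = −2, giving x = 2 and y = 1, which indeed satisfies both equations.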
https://github.com/polarkac/MTG-Stories
https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/012%20-%20Conspiracy/002_The%20Black%20Rose.typ
typst
#import "@local/mtgstory:0.2.0": conf #show: doc => conf( "The Black Rose", set_name: "Conspiracy", story_date: datetime(day: 21, month: 05, year: 2014), author: "<NAME>", doc ) #figure(image("002_The Black Rose/01.jpg", height: 40%), caption: [], supplement: none, numbering: none) The home was more ornate than it needed to be. Marchesa's mansion towered over the palatial homes of her neighbors, each extra story a mark of her success. While the rich afforded three to four stories on their homes, Marchesa had nine, seven of which were mostly unused, although they served their purposes. Situated among the elite of Paliano, the High City, Marchesa was entertaining a guest and business partner from the lowlands, Ervos Trax. Marchesa and Ervos were longstanding business partners. Marchesa's network of spies and rogues controlled much of the High City, while Ervos's criminal empire stretched from the lowlands all the way to the city of Talon and the docks beyond. Despite his power in the lowlands, Ervos was still not of the High City. His best clothes, which he had clearly worn, were fancy to lowlanders, but out of date and less impressive to a High City noble. Ervos had made the arduous trek up the Thousand Steps into the High City from the lowlands. Marchesa had invited him to dinner but she didn't send a ship to bring him to her—although she owned several staffed with pilots. Marchesa and Ervos were sitting in Marchesa's third-best dining room, which afforded them a more intimate meal. Instead of sitting at one end of a massive table meant to entertain two dozen, Ervos sat across from Marchesa. Ervos was not yet middle aged, although in his line of work that would make him ancient. He was arguably handsome, with sandy brown hair and straighter teeth than most. With his good looks and undeniable charm, he had exploited his first victims. 
Although he wore last season's fashion, a somewhat garish suit made of golden cloth, Marchesa did note that Ervos still looked pleasing to the eye. Marchesa wore her raven-black hair pinned up with ornate pins. Nobles and thieves all wondered why Marchesa insisted upon wearing the fashion of the older women in Paliano, although she was only slightly older than Ervos. Even now, at a more casual dinner, she wore the dress one would typically see in the High Chamber during a vote worn by a senator falling asleep as the call was tallied. Some suspected she dressed this way to assert the role she wanted upon others, while others whispered the Black Rose thought herself the ruler of the city. Ervos always smiled at these rumors, for he knew Marchesa dressed that way simply because she liked the clothes, and although she was a woman of grand ulterior motives, her clothing had none. She wore the style of the elders well, Ervos thought, somehow remaining fluid in her movements, using her arms to speak and walking quickly when talking, although the style was typically worn by the slow and rigid. #figure(image("002_The Black Rose/02.jpg", width: 100%), caption: [Paliano, the High City | Art by Adam Paquette], supplement: none, numbering: none) Marchesa also wore a ring on each finger, each expensive and ornate. The biggest was the ruby she wore on her left middle finger. Each ring housed a different poison, but the ruby contained the deadliest on Fiora. Here they sat, regal and dignified, two killers slowly eating their meal of roasted lamb and steamed exotic vegetables. The only sound in the room was the clank of silverware against plates, the knives cutting the lamb through and scratching the plates underneath. Then Ervos, without looking up at his hostess, spoke. "I think I am going to have you killed," Ervos said, then took a bite of a hunk of over-buttered bread. Marchesa stopped cutting her food, but only briefly, and continued to carefully cleave her pork. "Oh?" 
she replied after the silence. She took a bite of her food, eyes fixed on her plate. "How would you go about that?"

Ervos looked up at Marchesa and pushed himself back in his chair, sitting up straight. "It would be a challenge, I'm sure, but I do have a plan," Ervos said, confidently.

Marchesa took a sip of wine and then broke some bread from the basket in front of her. "And why would you wish to kill me?"

"Business, pure and simple. I tire of making the trek up the stairs, and my network is now steadily moving into the High City. You, Dear Friend, are my only obstacle. And I know you would never allow a rival to have that much power in #emph[your] city."

"I see. But please, do not tantalize me with vague notions," Marchesa said, almost teasing. "I must know how you would plan to end my life. Share the details."

Ervos placed both hands on the table and smiled. "Well, of course I could not attack now. You have at least two... no, three men, in your walls. I don't hear any breathing, although I do notice that this palace of yours has a strong smell of yantal root. That means you are trying to cover up a smell, so I would guess zombies, most likely bound to protect you if you or they sense danger."

Marchesa leaned back in her chair, smiling, as she sipped her wine, holding the wine glass nonchalantly to the side as she rested her arm on the armrest.

"I would never make it out alive," Ervos continued, "even if I did strike you down where you sit right now and used a spell to render the zombies inert, I would still need to leave the house. I would have two avenues of exit, the yard or sewers—which I know, after murdering the city registrar and stealing the plans to your home, connect to your basement. The yard would be covered by the archers perched on your rooftop, and the sewers no doubt run me afoul of that damnable Grenzo you have arrangements with.
Likewise, I highly suspect that if I were to murder you I would, of course, be afflicted with some sort of dark curse that would leave me in a state of horrible pain, but never allowing me to die." Ervos chuckled. Marchesa took a drink of wine. "Why would I leave real plans of my house with the registrar?" Marchesa asked. "Of course, they are not the real plans, although no doubt you would have had enforcers threaten the registrar so he would think they were real, and keep eyes on the man so if he was approached by another you would know. Which would mean the basement wouldn't even lead to the sewers, or if it did, might drop me into a chute that would have me fall out of the city, plunging to sure death into the lowlands beneath." #figure(image("002_The Black Rose/03.jpg", width: 100%), caption: [Art by <NAME>], supplement: none, numbering: none) "You give me much credit, Ervos. I thank you for the kindness." Marchesa placed her glass on the table and leaned forward, resting her head on the arch she formed with her hands. "Please, do go on." Ervos smiled and continued. "Knowing that the registrar would be a dead end, pardon the pun, I would instead have to think about how to strike from a distance. Now, my first guess would be to poison your food, but as that is one of your favorite motifs, you would be well prepared for this maneuver. I imagine you get your food from different locations, some even from the lowlands, using different couriers each time, so as to not give anyone the opportunity to tamper with your meals. I am also fairly certain you would feed your food to—no you are not cruel enough to do this to an employee—but maybe to rats or goblins, to see if they keel over. So, killing you through your food would be out of the question." "It's good to know this wasn't my last supper," Marchesa commented. "I would have preferred a better vintage of wine." "Quite," Ervos agreed. He leaned back in his chair. 
"And as I've already mentioned, your home is a safehold. You do not travel regularly, but when you do, you travel with armed guards and agents dressed as nobles and street folk, with some running along the rooftops. A direct assault on you would leave many dead, and you have enough contacts that garnering support would be difficult. Word of my sedition would eventually reach your ears. Even if I tried to recruit a gang of goblins or Custodi guards, you would most likely know." "It seems like I have nothing to fear," Marchesa said, still smiling. "Oh, but you do, for there is your weakness," Ervos said, now taking a large drink of wine. "We both, as a hazard of our business, rely far too much on others. What is a spider when it cannot trust its web? People can be broken, people can be made to turn. So with those who protect you and act as your agents throughout the city, all I would need to do is find someone in your organization I could own." "Very true, of course, but which player would you invest into this role?" "It would be a matter of access. Those in your personal guard and your house servants would be harder to meet with; I imagine each spying on the others as part of their position. I would need to find someone on the outside of your operations, someone who would get orders from those you give orders to, but not so far removed from the top they don't know anything. I would need someone like a foreman who oversees shipments or a bookkeeper who distributes funds to your assassins. I would need someone like..." "<NAME>?" Marchesa interrupted. Ervos coughed and drank some wine to calm his throat. Marchesa took the opportunity to take more bites of her food, moving from meat to vegetables, which were slightly cold now but still expensive and delicious. "Yes," Ervos said, still fighting a cough, his face slightly redder from his fit. "As one of your sub-lieutenants, <NAME> would be the sort of person I would use. 
I would use an agent of my own to find out his weaknesses, like his family. And then I would extort him, with threat of violence, into giving me information about how you move your personnel. I'd gather information over the course of a few weeks to see where you would be most vulnerable, even if it would just be an attack against your pocketbook." #figure(image("002_The Black Rose/04.jpg", width: 100%), caption: [Marchesa's Infiltrator | Art by <NAME>], supplement: none, numbering: none) Ervos began to cough again, this time producing blood into his hands, which he quickly wiped up with a cloth napkin that had been on his lap. Marchesa saw this, although she did not acknowledge that fact. She spoke while he coughed. "I would, of course, suspect such a subterfuge and end Pietro Lokosh's life as a precaution. Likewise, I would locate your spy and flip his allegiance with the promise of gold, allowing me to keep better tabs on you, feeding back the information I would want you to hear, until I decide to kill the spy and retrieve my gold. For good measure." Ervos nodded as she spoke, still coughing into his bloody napkin, face redder than before, and held up a finger asking her to pause. "I would, of course, know that the spy would be used against me," he said, speaking through the coughing, blood now splattering onto his plate of unfinished food. "I also know that any person in my organization would ultimately be corrupted by your promises, and I could never trust someone who had ever been in your employ. I also know I am just not as adept at knowing people as you, seeing all the variables. I admit that as my flaw. I would know I would not be able to kill you, but as our businesses continue to square off against the other, one of us would have to die. So instead of letting you kill me, I would poison myself, knowing I would be dead despite any schemes I might plan." Marchesa nodded, the smile now gone from her face. "I am impressed, Old Friend. 
I will say that I am shocked by this play. I had planned to have you killed at your secret penthouse in your sleep two nights from now. It seems I will be blamed for your death and face retaliation from your associates." She leaned forward. "This was a good play." Ervos smiled, now shaking as he tried to hold himself up in his chair, but then slumped forward, face into his plate, dead. Marchesa sighed and fidgeted with her rings. She stood up, pushing her chair back, and walked over to Ervos's body. She wanted to kiss him on the forehead, but she knew Ervos would have put poison on his skin to prey upon any compassion she might show. Instead, she walked out of the room to summon her butler, who had been in the backyard since before Ervos arrived, digging a hole for his body. Marchesa knew her rival would take his own life, but she wanted him to have the final victory as he died, even if she had known his play all along. #figure(image("002_The Black Rose/05.jpg", width: 100%), caption: [Marchesa, the Black Rose | Art by <NAME>], supplement: none, numbering: none)
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/math/syntax_01.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": * #show: test-page // Test a few shorthands. $ underline(f' : NN -> RR) \ n |-> cases( [|1|] &"if" n >>> 10, 2 * 3 &"if" n != 5, 1 - 0 thick &..., ) $
https://github.com/Myriad-Dreamin/tinymist
https://raw.githubusercontent.com/Myriad-Dreamin/tinymist/main/crates/tinymist-query/src/fixtures/hover/builtin_module.typ
typst
Apache License 2.0
#(/* ident after */ sys);
https://github.com/dangh3014/postercise
https://raw.githubusercontent.com/dangh3014/postercise/main/themes/basic.typ
typst
MIT License
#import "../utils/scripts.typ": * #let focus-box( color: none, body ) = { locate(loc => { let primary-color = color-primary.get() show heading: it => [ #block(width: 100%, height: 1em, // stroke: primary-color, align(center+bottom)[ #it.body #v(-1.2em) #line(length: 100%, stroke: 0.0625em)] ) ] if color != none [ #let focus-color = color #box( width: 100%, stroke: black+0.0625em, fill: color, inset: 0%, [ #box( inset: (top: 4%, left: 4%, right: 4%, bottom: 4%), body ) ] ) ] else [ #let focus-color = color-accent.get() #box( width: 100%, stroke: black+0.0625em, fill: focus-color, inset: 0%, [ #box( inset: (top: 4%, left: 4%, right: 4%, bottom: 4%), body ) ] ) ] }) } #let normal-box( color: none, body ) = { locate(loc => { let primary-color = color-primary.get() // show heading: set text(fill: primary-color) show heading: it => [ #block(width: 100%, height: 1em, // stroke: primary-color, align(center+bottom)[ #it.body #v(-1.2em) #line(length: 100%, stroke: 0.0625em)] ) ] if color != none [ #let focus-color = color #box( width: 100%, stroke: black+0.0625em, fill: focus-color, inset: 0%, [ #box( inset: (top: 4%, left: 4%, right: 4%, bottom: 4%), body ) ] ) ] else [ #let focus-color = color #box( width: 100%, stroke: none,//primary-color+0.2em, fill: color, inset: 0%, [ #box( inset: (top: 0%, left: 4%, right: 4%, bottom: 4%), body ) ] ) ] }) } #let poster-content( col: 3, body )={ locate(loc => { let primary-color = color-primary.get() let bg-color = color-background.get() let titletext-color = color-titletext.get() let titletext-size = size-titletext.get() let current-title = context title-content.get() let current-subtitle = context subtitle-content.get() let current-author = context author-content.get() let current-affiliation = context affiliation-content.get() let current-logo-1 = context logo-1-content.get() let current-logo-2 = context logo-2-content.get() let current-footer = context footer-content.get() // Table captions go above // TO DO: Numbering is not 
working properly show figure.where(kind:table) : set figure.caption(position:top) show figure.caption: it => [ // #context it.counter.display(it.numbering) #it.body ] // Need to call body (hidden) to update header and footer block(height: 0pt, hide[#body]) v(0pt, weak: true) grid( columns: 1, rows: (16%, 80%, 4%), // Top = title row [ #box( stroke: none, fill: primary-color, height: 100%, width: 100%, inset: 4%, grid( columns: (10%, 80%, 10%), rows: 100%, stroke: none, // Left [ #place(horizon+left)[#current-logo-2] ], // Center [ #place(horizon+center)[ #set text(size: titletext-size, fill: titletext-color, ) *#current-title* #current-subtitle \ #set text(size: 0.5em) #current-author \ #current-affiliation ] ], [ #place(horizon+right)[#current-logo-1] ] ) ) ], // Middle = body [ #box( height: 100%, inset: 4%, fill: bg-color, columns(col)[#body] ) ], // Bottom = footer [ #box( stroke: none, fill: primary-color, height: 100%, width: 100%, inset: 4%, align(horizon+center)[#current-footer] ) ] ) }) }
https://github.com/mattyoung101/uqthesis_eecs_hons
https://raw.githubusercontent.com/mattyoung101/uqthesis_eecs_hons/master/util/macros.typ
typst
ISC License
// Contains general macros to be used across the template #let uqHeaderSize = 26pt #let uqHeaderNoChapter(it) = { v(3em) text(uqHeaderSize)[* #it.body * ] v(1em) } #let uqHeaderChapter(it) = { v(3em) // #counter(heading).display( it.numbering ) text(uqHeaderSize)[* Chapter #counter(heading).display("1") *] v(-15pt) text(uqHeaderSize)[* #it.body * ] v(1em) }
https://github.com/0x1B05/algorithm-journey
https://raw.githubusercontent.com/0x1B05/algorithm-journey/main/practice/note/content/滑动窗口.typ
typst
#import "../template.typ": * #pagebreak() = Sliding Window A sliding window is a window of variable size whose two ends `L` and `R` slide forward in the same direction (`R` fixed while `L` slides; `L` fixed while `R` slides). == Classic problem Given an integer array `arr` and a window of size `w` that slides from the leftmost to the rightmost end of the array, moving one position to the right at a time. #example("Example")[ With the array `[4,3,5,4,3,3,6,7]` and window size `3`: ``` [4 3 5]4 3 3 6 7 |window maximum is 5 4[3 5 4]3 3 6 7 |window maximum is 5 4 3[5 4 3]3 6 7 |window maximum is 5 4 3 5[4 3 3]6 7 |window maximum is 4 4 3 5 4[3 3 6]7 |window maximum is 6 4 3 5 4 3[3 6 7] |window maximum is 7 ``` ] If the array has length `n` and the window size is `w`, there are `n-w+1` window maxima in total. Implement a function: - Input: an integer array `arr` and a window size `w`. - Output: an array res of length `n-w+1`, where res[i] is the maximum under the i-th window state. For the example above, the result should be `{5,5,5,4,6,7}`. == Monotonic deque For an array arr with a window sliding at the two ends L and R (forward only, L <= R), report the maximum inside the window without re-scanning it. Use a double-ended queue (entering and leaving at the head, entering and leaving at the tail) that stores indices (an index carries more information: both the position and the value). For a maximum structure: head large -> small tail. When R slides forward, values enter from the tail: a smaller value enters directly; if the incoming value is larger than the tail, the tail is popped first. When L slides forward, check whether the expired position is the head of the queue; if so, pop it, otherwise do nothing. Example: ``` |0|1|2|3|4|5|6|7| |-|-|-|-|-|-|-|-| |3|2|4|6|3|5|4|5| ``` ->R (0,3) enters |0 ->R (1,2)<(0,3), (1,2) enters |0 1 ->R (2,4) wants to enter: (2,4)>(1,2), pop (1,2); (2,4)>(0,3), pop (0,3); (2,4) enters |2 ->R (3,6) wants to enter: pop (2,4), (3,6) enters |3 ->R (4,3) enters |3 4 ->R (5,5) wants to enter: pop (4,3), (5,5) enters |3 5 ->R (6,4) enters |3 5 6 ->R (7,5) wants to enter: pop (6,4), pop (5,5) (monotonicity is kept strict), (7,5) enters |3 7 ``` |0|1|2|3|4| |-|-|-|-|-| |6|4|2|5|3| ``` ->R ->R ->R | 0 1 2 ->L check whether the expired position is the head node of the deque; if so, pop it from the head |1 2 ->L |2 ->R (2,2) is popped, (3,5) enters ->R |3 4 ->L 2 expires; the head is not 2, so no action is needed. Meaning of the indices held in the deque: if R stops expanding and L keeps moving forward, they are the values that would successively become the maximum and then be evicted by L. A value popped from the deque can never become the maximum again, because the element that evicted it has both a larger index and a larger value. ```java public static int[] maxSlidingWindow(int[] nums, int k) { int[] res = new int[nums.length - k + 1]; int index = 0; LinkedList<Integer> doubleEndQueue = new LinkedList<>(); for(int i = 0;i<nums.length;i++){ while(!doubleEndQueue.isEmpty()&&nums[doubleEndQueue.peekLast()]<nums[i]){ // while non-empty and the incoming value is greater than the tail of the deque, keep popping doubleEndQueue.pollLast(); } doubleEndQueue.add(i); if(i-k==doubleEndQueue.peekFirst()){ // if the expired position is the head of the deque, pop it doubleEndQueue.pollFirst(); } if(i>=k-1){ // once we reach position k-1 or later (the window has formed) res[index++]=nums[doubleEndQueue.peekFirst()]; } } return res; } ``` === Complexity analysis Each element enters the deque at most once and leaves it at most once, so the total cost is O(n), amortized O(1) per step.
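The monotonic-deque routine in the note above can be cross-checked with a minimal Python sketch. This is a direct port of the Java method, not part of the original note; the function name is ours.

```python
from collections import deque


def max_sliding_window(nums, k):
    """Sliding-window maximum via a monotonic deque of indices.

    The deque holds indices whose values are decreasing from head to
    tail, so the head is always the maximum of the current window.
    """
    res = []
    dq = deque()  # indices; nums[dq[0]] >= nums[dq[1]] >= ...
    for i, x in enumerate(nums):
        # Pop smaller values from the tail: evicted by a larger, newer element,
        # they can never become a window maximum again.
        while dq and nums[dq[-1]] < x:
            dq.pop()
        dq.append(i)
        # Drop the head if it just slid out of the window [i-k+1, i].
        if dq[0] == i - k:
            dq.popleft()
        # Once the first full window has formed, record its maximum.
        if i >= k - 1:
            res.append(nums[dq[0]])
    return res
```

Running it on the note's example, `max_sliding_window([4, 3, 5, 4, 3, 3, 6, 7], 3)` reproduces the expected `[5, 5, 5, 4, 6, 7]`.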
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/math/opticalsize_05.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": * #show: test-page // Test spaces between $a' ' '$, $' ' '$, $a' '/b$
https://github.com/MrToWy/Bachelorarbeit
https://raw.githubusercontent.com/MrToWy/Bachelorarbeit/master/Code/main.component.typ
typst
```html <div class="main"> <app-topbar></app-topbar> <router-outlet /> </div> ```
https://github.com/Dherse/typst-brrr
https://raw.githubusercontent.com/Dherse/typst-brrr/master/samples/masterproef/content/a_annexes.typ
typst
#import "../elems/acronyms.typ": * #import "../elems/infos.typ": * #import "../elems/template.typ": * #import "../elems/hexagonal.typ": hexagonal_interconnect #set heading(numbering: "A", supplement: [Annex]) #counter(figure.where(kind: table)).update(0) #counter(figure.where(kind: image)).update(0) #counter(figure.where(kind: raw)).update(0) #set page(flipped: true) = Programming paradigm poster <anx_paradigms> #counter(figure.where(kind: table)).update(0) #counter(figure.where(kind: image)).update(0) #counter(figure.where(kind: raw)).update(0) #set figure(numbering: (x) => [A.#x]) #figurex( title: [ Programming paradigms poster. ], caption: [ Programming paradigms poster, showing the different programming paradigms and their relationships. Created by _<NAME>_ @van_roy_classification_nodate. ], )[ #image("../figures/programming_paradigms.png", width: 71%) ] #set page(flipped: true, columns: 1) = AST data structure: overview <anx_ast_overview> #counter(figure.where(kind: table)).update(0) #counter(figure.where(kind: image)).update(0) #counter(figure.where(kind: raw)).update(0) #set figure(numbering: (x) => [B.#x]) #figurex( title: [ UML diagram of parts of the @ast relevant for @sec_ast. ], caption: [ UML diagram of parts of the @ast relevant for @sec_ast. It is incomplete since phos contains 120 data structures to fully represent the @ast. ] )[ #image("../figures/drawio/ex_ast.png", width: 88%) ] #set page(flipped: true) = Bytecode execution <anx_bytecode_execution> #counter(figure.where(kind: table)).update(0) #counter(figure.where(kind: image)).update(0) #counter(figure.where(kind: raw)).update(0) #set figure(numbering: (x) => [C.#x]) #figurex( title: [ Execution diagram of the stack of @sec_ex_bytecode_exec. ], caption: [ Execution diagram of the stack of @sec_ex_bytecode_exec, showing the stack before and after the execution of each of the bytecode instructions. 
] )[ #image("../figures/drawio/execution.png", width: 85%) ] <fig_annex_execution> #set page(flipped: true) = Graph representation of a mesh <anx_bytecode_instruction_set> #counter(figure.where(kind: table)).update(0) #counter(figure.where(kind: image)).update(0) #counter(figure.where(kind: raw)).update(0) #set figure(numbering: (x) => [D.#x]) #figurex( title: [ Graph representation of a mesh. ], caption: [ Graph representation of a mesh, showing the direction that light is travelling in, and all of the possible connections. Based on the work of <NAME>, et al. @chen_graph_2020. This visualisation was created with the collaboration of <NAME>, as mentioned in #link(<sec_ack>)[the acknowledgements]. ], )[ #hexagonal_interconnect(side: 13cm, hex-side: 1.5cm, 10, 14) ]<fig_graph_representation_mesh> #set page(flipped: false) = Marshalling library example <anx_marshalling_library_example> #counter(figure.where(kind: table)).update(0) #counter(figure.where(kind: image)).update(0) #counter(figure.where(kind: raw)).update(0) #set figure(numbering: (x) => [E.#x]) #figurex( title: [ @phos example used in @lst_marshalling_comp. ], caption: [ @phos code performing splitting and modulation, used in @lst_marshalling_comp. ], )[ ```phos syn main(input: optical, modulated: electrical) -> (optical, optical) { input |> split((0.5, 0.5)) |> (modulator(modulated, type_: ModulationType::Amplitude), _) } ``` ] <lst_marshalling_phos> #figurex( title: [ Example of the marshalling library. ], caption: [ Example of the marshalling library, showing the configuration of the different components of the synthesis toolchain. 
], )[ ```python # Import the marshalling library import phos as ph # Import the platform-support package import prg_device as prg # Create the device with the specific support package device = ph.Device(prg.DeviceDescription()) # Try and load a module, this will compile the module module = device.load_module("module.phos") # Create the I/O, each `io` calls returns an input and an output electrical = device.electrical_input(0) (input, _) = device.io(0) (_, output0) = device.io(1) (_, output1) = device.io(2) # Instantiate the module, first passing in the inputs and parameters # and then the outputs. This run evaluation of the module. instance = module(input, electrical, name="Module Instance") .output(output0, output1) # Build the design, this will run synthesis, with area optimisation built = device.synthesise(instance, optimisation="area") # Create the user HAL in the `./iq_modulator` directory built.generate_hal("./iq_modulator") # Create the firmware in the `./iq_modulator.bin` file build.generate_firmware("./iq_modulator.bin") ``` ] <lst_marshalling_comp> #pagebreak(weak: true) #figurex( title: [ Example of the marshalling library for simulation. ], caption: [ Example of the marshalling library for simulation, showing the simulation of a module. 
], )[ ```python # Import numpy import numpy as np # Import plotting library import matplotlib.pyplot as plt # We set the simulation parameters dt = 1e-12 tstop = 1e-6 bitrate = 10e9 t = np.arange(0, tstop, dt) bit_timing = 1 / bitrate # generate a test PRBS sequence prbs = gen_prbs(12, tstop / bit_timing, 0x17D) prbs = np.array([1.0 if x else 0.0 for x in prbs]) # Create the simulator simulator = device.simulator() # Create the source with some noise noise = simulator.noise_source(0, 0.01) source = simulator.source(nm(1550), noise=noise) # Simulate the module (output0, output1) = simulator.simulate(module, t).with_input(source, prbs) # Plot `output0` and `output1` with respect to `t` plt.plot(t, output0) plt.plot(t, output1) plt.show() ``` ] <lst_marshalling_sim> #set page(flipped: true) #figurex( title: [ Layout of the circuit in the marshalling library example.], caption: [ Layout of the circuit in the marshalling library example, showing the path that the light takes inside of the photonic processor, as well as the state of each photonic gate. It also shows the path of the modulated light in red, and highlight the splitter. ] )[ #image( "../figures/drawio/chip_marshalling_ex.png", height: 90%, alt: "Shows a photonic chip made of a rather large hexagonal mesh, with modulators on the bottom, detectors on the top, and optical I/O on either remaining sides." ) ] <fig_marshalling_circ> #set page(flipped: false) = Example: Beam forming system <anx_beam_forming> #counter(figure.where(kind: table)).update(0) #counter(figure.where(kind: image)).update(0) #counter(figure.where(kind: raw)).update(0) #set figure(numbering: (x) => [F.#x]) #figurex( title: [ Example in @phos of beamforming system. ], caption: [ Example in @phos of beamforming system, parametric over the number of channels. ], )[ ```phos // Create a simple beamforming system // This system takes an input optical signal and a set of electrical signals // 1. 
It splits the input optical signal into N optical signals // 2. It ensures that the phase of each of the optical signals is the same // 3. It modulates each of the optical signals with the electrical signals // 4. It ensures that the delay of each of the optical signals is the same syn beam_forming( input: optical, phase_shifts: (electrical...), ) -> (optical...) { input // optical |> split(splat(1.0, phase_shifts.len())) // (optical...) |> constrain(d_phase = 0) // (optical...) |> zip(phase_shifts) // ((optical, electrical)...) |> map(set modulate(type: Modulation::Phase)) // (optical...) |> constrain(d_delay = 0) // (optical...) } ``` ] = Example: coherent 16-QAM transmitter <anx_coherent_transmitter> #counter(figure.where(kind: table)).update(0) #counter(figure.where(kind: image)).update(0) #counter(figure.where(kind: raw)).update(0) #set figure(numbering: (x) => [G.#x]) #figurex( title: [ Example in @phos of a 16-#gloss("qam", short: true) modulator. ], caption: [ Example in @phos of a 16-#gloss("qam", short: true) modulator. The four binary sources are modulated on a common laser source, and then interfered together. ], )[ ```phos // Coherent transmitter, modulates four binary signals into a 16-QAM signal. // 1. the signal is split into four, each signal is a fraction of the input signal // 2. each signal is zipped with its corresponding electrical signal // 3. each signal is modulated using amplitude modulation // 4. the phase difference between the four signals is constrained to 90° between each // other // 5. the four signals are merged back into one // Note: the splitting ratios and order of modulation are chosen to match the modulation // order for the coherent transmission syn coherent_transmitter( input: optical, (a, b, c, d): (electrical, electrical, electrical, electrical), ) -> optical { input // optical |> split((1.0, 1.0, 0.5, 0.5)) // (optical, optical, optical, optical) |> zip((a, c, b, d)) // ((optical, electrical), ...) 
|> modulate(type = Modulation::Amplitude) // (optical, optical, optical, optical) |> constrain(d_phase = 90°) // (optical, optical, optical, optical) |> merge() // optical } ``` ] <lst_modulation> = Example: lattice filter <anx_lattice_filter> #counter(figure.where(kind: table)).update(0) #counter(figure.where(kind: image)).update(0) #counter(figure.where(kind: raw)).update(0) #set figure(numbering: (x) => [H.#x]) #figurex( title: [ Example in @phos of a parametric lattice filter. ], caption: [ Example in @phos of a parametric lattice filter. ], )[ ```phos // The kinds of filter that can be used. // For this example, we only support Chebyshev and Butterworth. enum FilterKind { Chebyshev(uint), Butterworth(uint), } // Computes the coefficient of a given kind of filter fn filter_kind_coefficients( filter_kind: FilterKind, ) -> ((Fraction, Phase)...) { ... } // Implements the latice filter syn lattice_filter( a: optical, b: optical, filter_kind: FilterKind, ) -> (optical, optical) { // Steps: // 1. we compute the coefficients of the filter in the form ((Fraction, Phase)...) // that is: a list of coefficients and phases // 2. we fold over the list of coefficients and phases, that is, we iterate over // the list, and for each element, we apply a function to the current value // and the element, and return the result as the new current value, turning a // list of coefficients and phases into a single value, our starting value is // the a tuple of both input signals. In the fold we: // 1. couple the signals together with the computed coefficient // 2. we constrain the phase difference between the two signals, imposing the computed // phase difference filter_kind_coefficients(filter_kind) // ((Fraction, Phase)...) 
|> fold((a, b), |acc, (coeff, phase)| { // Fold over the list of coefficients: acc // (optical, optical) |> coupler(coeff) // (optical, optical) |> constrain(d_phase = phase) // (optical, optical) }) // (optical, optical) } ``` ] = Example: MVM <anx_matrix_vector> #counter(figure.where(kind: table)).update(0) #counter(figure.where(kind: image)).update(0) #counter(figure.where(kind: raw)).update(0) #set figure(numbering: (x) => [I.#x]) #figurex( title: [ Example in @phos of an analog matrix-vector multiplier. ], caption: [ Example in @phos of an analog matrix-vector multiplier. ], )[ ```phos // A single Mach-Zehnder interferometer based gate syn mzi_gate( a: optical, b: optical, (beta, theta): (Phase, Phase), ) -> (optical, optical) { (a, b) |> coupler(0.5) |> constrain(d_phase = beta) |> coupler(0.5) |> constrain(d_phase = theta) } // Produces a 4x4 matrix-vector multiplier syn matrix_vector_multiply( source: optical, (a, b, c, d): (electrical, electrical, electrical, electrical), coefficients: ( (Phase, Phase), (Phase, Phase), (Phase, Phase), (Phase, Phase), (Phase, Phase), (Phase, Phase) ) ) -> (electrical, electrical, electrical, electrical) { let (ref_a, ref_b, ref_c, ref_d, rest...) = source |> split(splat(1.0, 8)); let (a, b, c, d) = (a, b, c, d) |> zip((ref_a, ref_b, ref_c, ref_d)) |> modulate(type_: Modulation::Amplitude) let (c1, d1) = mzi_gate(c, d, coefficients.0); let (b1, c2) = mzi_gate(b, c1, coefficients.1); let (y1, b2) = mzi_gate(a, b1, coefficients.3); let (c3, d2) = mzi_gate(c2, d1, coefficients.2); let (y2, c4) = mzi_gate(b2, c3, coefficients.4); let (y3, y4) = mzi_gate(c4, d2, coefficients.5); (y1, y2, y3, y4) |> zip(rest) |> demodulate(type_: Modulation::Coherent) } ``` ]
https://github.com/justinvulz/document
https://raw.githubusercontent.com/justinvulz/document/main/summer_camp/DMA_slides.typ
typst
#import "@preview/polylux:0.3.1": * #import "@preview/ctheorems:1.1.2": * #import "/typst_packages/lecture.typ": * #import "@preview/pinit:0.1.4": * #import "@preview/cuti:0.2.1": show-fakebold, regex-fakeitalic #import "@preview/fletcher:0.4.5" as fletcher: diagram,node,edge #import "@preview/cetz:0.2.2" #import themes.university: * #show :show-fakebold #let uncv = uncover #set list(marker: listal) #set text(font: ("Times New Roman","DFKai-SB")) #show: thmrules #show math.equation: set text(weight: "extralight") #show math.equation.where(block: true): e => [ // #set block(fill: lime) #block(width: 100%, inset: 0.3em)[ #set align(center) #set par(leading: 0.65em) #e ] ] #let pinit-highlight = pinit-highlight.with(dy: -0.7em) #let pinit-highlight-equation-from(height: 2em, pos: bottom, fill: rgb(0, 180, 255), highlight-pins, point-pin, body) = { pinit-highlight(..highlight-pins, dy: -0.6em, fill: rgb(..fill.components().slice(0, -1), 40)) pinit-point-from( fill: fill, pin-dx: -0.6em, pin-dy: if pos == bottom { 0.8em } else { -0.6em }, body-dx: 0pt, body-dy: if pos == bottom { -1.7em } else { -1.6em }, offset-dx: -0.6em, offset-dy: if pos == bottom { 0.8em + height } else { -0.6em - height }, point-pin, rect( inset: 0.5em, stroke: (bottom: 0.12em + fill), { set text(fill: fill) body } ) ) } #let new-section-slide(title,subtitle: none) = slide([ #set align(horizon) #block(width: 100%,height: 100%)[ #text(title,size:35pt) #rect(width: 100%, height: 0.1em, fill: rgb("405577")) #align(right)[ #text(subtitle,size:25pt) ] ] ]) #show: university-theme.with( color-a :rgb("405577"), color-b :rgb("122264"), color-c :rgb("C2EBD9"), ) #title-slide( authors: "Applied Mathematics Camp, National Yang Ming Chiao Tung University", title: "Abstract Algebra", subtitle: "Group Theory", institution-name: " ", ) #slide(title: "Group Theory")[ #set align(horizon) #block(width: 100%,height: 100%)[ A group is a set equipped with a well-behaved binary operation, and group theory is the branch of mathematics that studies this structure. Group theory has broad applications in many fields; some of them are introduced below. ] ] #slide(title: "Applications of Group Theory")[ Compass-and-straightedge construction problems, such as doubling the cube, squaring the circle, and trisecting an angle. #grid( columns: (auto, auto, auto), rows: (3fr,2fr,5fr), align: horizon+center, grid.cell(rowspan:2,image("pic/doublecube.png")),[], grid.cell(rowspan:2,image("pic/angle_trisection.jpg")), grid.cell(rowspan:2,image("pic/squaring_circle.png")) ) ] #slide(title: "Applications of Group Theory")[ We all know that the quadratic equation $a x^2 + b x + c =0$ has the solutions $ x = (-b plus.minus sqrt(b^2 - 4 a c))/(2 a) $ But for the quintic equation $x^5 + a x^4 + b x^3 + c x^2 + d x + e = 0$, group theory can be used to prove that the solutions cannot be expressed in radicals. ] #slide(title: "Applications of Group Theory")[ #grid( gutter: 0.65em, columns: (auto, 1fr), rows: (auto, 1fr,1fr), align: horizon, grid.cell([ Beyond mathematics, group theory also has broad applications in other fields, for example ],colspan: 2), grid.cell([ - Cryptography (the RSA encryption algorithm) - Symmetries in the Standard Model of particle physics]), grid.cell([#pause#image("pic/standard_model.png")],rowspan: 2) ) ] #new-section-slide("Groups",subtitle:"Group") #let slide = slide.with(title: "Groups") #slide(title: "Groups",new-section: "Group")[ #set text(size: 19pt) #definition(number:"1.1")[ $angle.l G , * angle.r$ is a set $G$ together with a binary operation $* : G times G |-> G$ satisfying the following conditions: #set enum(numbering: al(n => [$cal(G)_#n$:])) + For all $a,b,c in G$, $ (a*b)*c = a*(b*c) quad textb("associativity") $ + There exists an element $e in G$ such that for all $a in G$, $ a*e = e*a = a quad textb("identity element") $ + For every $a in G$ there exists an element $a^(-1) in G$ such that $ a*a^(-1) = a^(-1)*a = e quad textb("inverse element") $ ] ] #slide[ #set table(stroke: (x,y) =>( bottom: if y==0 {1pt}, right: if x==0 {1pt}, )) #let t(e,a,b,op) = table( columns: (2em,2em,2em,2em), rows: auto, align: center, op, e, a, b, e, e, a, b, a, a, b, e, b, b, e, a ) #set text(size: 21pt) Example: - The set of integers $ZZ$ with addition $+$ forms a group $angle.l ZZ,+ angle.r$.\ The identity element is $0$, and the inverse of $a$ is $-a$. - The set of integers $ZZ$ with multiplication $*$ is not a group.\ Under multiplication, integers have no inverses. - $angle.l QQ, + angle.r$ and $angle.l RR, + angle.r$ are groups. - $C_3 = {e,a,b}$ with the operation below is a group. #grid( columns: (1fr,1fr), rows: (auto), align: center, t($e$,$a$,$b$,"*"), t($0$,$1$,$2$,"+") ) - The additive group of integers modulo $3$, $ZZ_3 = {0,1,2}$, with addition $+$ is a group. ] #slide[ #set text(size: 19pt) #definition(number:"1.2")[ Let $G$ be a group. Define $abs(G)$ to be the number of elements of $G$, called the *order* of $G$. ] #definition(number:"1.3")[ A group $G$ is called an *abelian group* if it satisfies commutativity, i.e. for all $a,b in G$, $ a*b = b*a $. ] // #set text(size: 25pt) #pause Example: - The set of integers $ZZ$ with addition $+$ is an abelian group. - The order of $C_3 ={e,a,b}$ is $3$. - The set of invertible matrices with matrix multiplication is a group, but not an abelian group. ] #let slide = slide.with(title: "Basic Properties of Groups") #slide(new-section: "Properties of Groups")[ #set text(size: 19pt) #theorem(number:"1.4")[ If $G$ is a group, then the *cancellation law* holds: for all $a,b,c in G$, $ a*b = a*c => b = c \ b*a = c*a => b = c $ ] #pause #proof[ Let $G$ be a group and $a,b,c in G$. Suppose $a*b = a*c$. #uncv("3-")[ Since $a in G$, the inverse $a^(-1)$ of $a$ exists, and $a*a^(-1) = a^(-1)*a = e$. ] $ &a*b = a*c \ #uncv("3-")[ $=> &#pin(1)a^(-1)*a#pin(2)*b = #pin(3)a^(-1)*a#pin(4)*c \ $ ] #uncv("5-")[ $=> &e*b = e*c\ $ ] #uncv("2-")[ $=> &b = c$ ] $ ] #uncv("4-")[ #pinit-highlight(1,2) #pinit-highlight(3,4) ] ] #slide(title:"Example")[ // #set text(size: 19pt) #uncv("1-")[ Let $x,y in ZZ$ and suppose $3+x= 3+y$; then $x=y$ ] #uncv("2-")[ Let $A,B,C$ be $n times n$ matrices. If $A B = A C$, then #pin(11)$B=C$#pin(12) ? ] #uncv("3-")[ #pinit-line(12,11,start-dy: -0.3em,end-dy: -0.3em,stroke: 5pt+red) ] #uncv("4-")[ Let $A,B,C$ be *invertible* $n times n$ matrices. If $B A = C A$, then $B=C$ ] #uncv("5-")[ Using the cancellation law, we can prove that inverses are unique. ] ] #slide(new-section: "Properties of Groups")[ #set text(size: 19pt) #theorem(number: "1.5")[ The identity element $e$ of a group $G$ is unique. ] #pause #proof[ Suppose there exists a second identity element $e_2$ satisfying, for all $a in G$, $ e_2*a = a*e_2 = a $ Since $e in G$, we have #only(2)[ $ e_2 * a = a $ ] #only("3-")[ $ #pin(1)e_2 * e#pin(2) = e $ ] #uncv("4-")[ #pinit-highlight-equation-from((1,2),2)[$= e_2$] ] #uncv("5-")[ We conclude that $e_2 = e$ ] ] ] #slide[ #set text(size: 19pt) #theorem(number:"1.6")[ Let $G$ be a group and $a, b in G$. Then $ (a b)^(-1) = b^(-1) a^(-1) $ ] #only("1")[We sometimes omit the operation symbol and write $a b$ for $a*b$.] #only("2-")[ #proof[ We multiply directly: $ (a b)b^(-1) a^(-1) &= a (b b^(-1)) a^(-1) \ &= a e a^(-1)\ &= a a^(-1)\ &= e $ By the definition of the inverse, $(a b)^(-1) = b^(-1) a^(-1)$ ] We have only shown that $(a b)b^(-1) a^(-1) =e$, but $b^(-1) a^(-1)(a b) =e$ holds as well. ] ] #let slide = slide.with(title: none) #new-section-slide("Permutation Groups",subtitle:"Permutation Group") #slide(title:"Permutations")[ $ A = {1,2,3,4,5}\ #uncv("2-")[ $ arrow.b sigma textr("rearrange")\ A = {3,1,5,2,4} $ ] $ #pause #pause #figure( $ 1 -> 3\ 2 -> 1\ 3 -> 5\ 4 -> 2\ 5 -> 4 $, caption: [$sigma$], ) <fig1> ] #slide(title:"Permutations")[ #set text(size: 19pt) #definition(number:"2.1")[ A *permutation* of $A$ is a one-to-one correspondence $phi : A -> A$. (one-one and onto) ] #set text(size: 25pt) #grid( columns: (1fr,1fr), rows: (auto), align: center, [#figure( $ 1 -> 3\ 2 -> 4\ 3 -> 5\ 4 -> 2\ 5 -> 1 $, caption: [A permutation $sigma$], ) <fig1>], [#figure( $ 1 -> 2\ 2 -> 3\ 3 -> 2\ 4 -> 5\ 5 -> 1\ $, caption: "Not a permutation", )<fig2>] ) ] #slide(title:"Composition of Permutations")[ #set text(size: 19pt) #definition(number:none)[ Let $sigma$ and $tau$ be two permutations. The *composition* of $sigma$ and $tau$ is the new permutation $sigma cir tau$ such that for all $a in A$, $ (sigma cir tau)(a) = sigma(tau(a)) $ ] #pause #set text(size: 25pt) $ (sigma cir tau)(x) = sigma(tau(x)) \ A -->^tau A -->^sigma A $ Since $sigma$ and $tau$ are both one-to-one correspondences, $sigma cir tau$ is also a one-to-one correspondence.\ Hence $sigma cir tau$ is a permutation. ] #let msigma = $mat( 1, 2, 3, 4, 5; 3, 4, 5, 2, 1 )$ #let mtau = $mat( 1, 2, 3, 4, 5; 2, 3, 4, 5, 1 )$ #slide(title:"Example: Permutations")[ #grid( columns: (1fr,1fr), rows: (auto), align: center, $ sigma = msigma $, $ tau = mtau $ ) #pause #v(-0.5em) $ sigma cir tau = msigma cir mtau = mat( 1, 2, 3, 4, 5; 4, 5, 2, 1, 3 ) $ #pause #align(center)[ #diagram(spacing: (5em,0.3em),{ let nd(x) = ((x,0), (x,1), (x,2), (x,3), (x,4)) for x in range(3){ for y in range(5){ node((x,y), [#(y+1)]) } } node((0.5,0),[#show math.equation: set text(fill: teal);$tau$]) node((1.5,0),[#show math.equation: set text(fill: red);$sigma$]) node(enclose:nd(0)+nd(1),stroke: teal) node(enclose:nd(1)+nd(2),stroke: red) let edgearr = ( ((0,0),(1,1)), ((0,1),(1,2)), ((0,2),(1,3)), ((0,3),(1,4)), ((0,4),(1,0)), ((1,0),(2,2)), ((1,1),(2,3)), ((1,2),(2,4)), ((1,3),(2,1)), ((1,4),(2,0)) ) for e in edgearr{ edge(e.at(0),e.at(1),"->") } }) ] ] #slide(title: "Cycle Notation")[ $ sigma = mat(1, 2, 3, 4, 5; 3, 4, 5, 2, 1) $ #pause #v(1em) #only(2)[ #grid( columns: (1fr,1fr), rows: (auto), align: center, diagram( { // triangle let (p1,p3,p5) = 
((0,0),(1,0),(0.5,0.866)) node(p1,[*1*]) node(p3,[*3*]) edge(p1,p3,"->",bend: 55deg) } ), diagram( { // traingle let (p2,p4) = ((0,0),(1,0)) node(p2,[*2*]) node(p4,[*4*]) edge(p2,p4,"->",bend: 55deg) } ) ) ] #only(3)[ #grid( columns: (1fr,1fr), rows: (auto), align: center, diagram( { // traingle let (p1,p3,p5) = ((0,0),(1,0),(0.5,0.866)) node(p1,[*1*]) node(p3,[*3*]) node(p5,[*5*]) edge(p1,p3,"->",bend: 55deg) edge(p3,p5,"->",bend: 55deg) } ), diagram( { // traingle let (p2,p4) = ((0,0),(1,0)) node(p2,[*2*]) node(p4,[*4*]) edge(p2,p4,"->",bend: 55deg) edge(p4,p2,"->",bend: 55deg) } ) ) ] #only("4-")[ #grid( columns: (1fr,1fr), rows: (auto), align: center, diagram( { // traingle let (p1,p3,p5) = ((0,0),(1,0),(0.5,0.866)) node(p1,[*1*]) node(p3,[*3*]) node(p5,[*5*]) edge(p1,p3,"->",bend: 55deg) edge(p3,p5,"->",bend: 55deg) edge(p5,p1,"->",bend: 55deg) } ), diagram( { // traingle let (p2,p4) = ((0,0),(1,0)) node(p2,[*2*]) node(p4,[*4*]) edge(p2,p4,"->",bend: 55deg) edge(p4,p2,"->",bend: 55deg) } ) ) ] #uncv("5-")[ $ sigma = (1,3,5)(2,4) $ ] ] #slide(title: "循環表示法(Cycle)")[ $ sigma = mat(1, 2, 3, 4, 5; 3, 4, 5, 2, 1) = (1,3,5)(2,4) = (3,5,1)(4,2) $ $ tau = mat(1,2,3,4,5;2,3,4,5,1) = (1,2,3,4,5) = (3,4,5,1,2) $ $ phi = mat(1,2,3,4,5;1,2,4,5,3)= (3,4,5)(1)(2) = (3,4,5) $ ] #slide(title: "循環表示法(Cycle)")[ $ sigma = (1,3,5)(2,4) $ #only("2")[#grid( columns: (1fr,1fr), rows: (auto), align: center, diagram( { // traingle let (p1,p3,p5) = ((0,0),(1,0),(0.5,0.866)) node(p1,[*1*]) node(p3,[*3*]) node(p5,[*5*]) edge(p1,p3,"->",bend: 55deg) edge(p3,p5,"->",bend: 55deg) edge(p5,p1,"->",bend: 55deg) } ), diagram( { // traingle let (p2,p4) = ((0,0),(1,0)) node(p2,[*2*]) node(p4,[*4*]) edge(p2,p4,"->",bend: 55deg) edge(p4,p2,"->",bend: 55deg) } ) )] #only("3-")[#grid( columns: (1fr,1fr), rows: (auto), align: center, diagram( { // traingle let (p1,p3,p5) = ((0,0),(1,0),(0.5,0.866)) node(p1,[*1*]) node(p3,[*3*]) node(p5,[*5*]) edge(p3,p1,"->",bend: -55deg) edge(p5,p3,"->",bend: -55deg) 
edge(p1,p5,"->",bend: -55deg) } ), diagram( { // traingle let (p2,p4) = ((0,0),(1,0)) node(p2,[*2*]) node(p4,[*4*]) edge(p2,p4,"->",bend: -55deg) edge(p4,p2,"->",bend: -55deg) } ) )] #only("3-")[ $ sigma^(-1) = (5,3,1)(2,4) $ ] ] #slide(title: "置換群")[ #set text(size: 19pt) #definition(number:"2.2")[ 一個集合(有限)$A$的所有置換構成一個_群_,稱為$A$的*置換群*,記為$S_(A)$。 ] #pause 我們驗證$S_A$確實是一個群。 (單位元素、結合律、反元素) #pause #set text(size: 19pt) #remark[ $n$個元素的集合的置換群計為$S_n$的order為$n!$。 ] #pause #example[ \ 上述的例子中,$tau$和$sigma$是$S_5$的元素。\ $S_5$的order為$5! = 120$。 並且$sigma$和$tau$的反元素 $ sigma^(-1) = (5,3,1)(2,4) \ tau^(-1) = (5,4,3,2,1) $ ] ] #new-section-slide("對稱群",subtitle:"Symmetry Group") #let (p1,p2,p3) = ((0,0),(1,0),(0.5,-0.866)) #slide(title:"對稱群")[ 我們接下來考慮一個正三角形,他有那些對稱性? ] #slide[ #set figure(supplement: none) #let tri(a,b,c,script:none) = block[#diagram({ node(p1,[*#a*]) node(p2,[*#b*]) node(p3,[*#c*]) edge(p1,p2,"-") edge(p2,p3,"-") edge(p3,p1,"-") script })] #grid( columns :(1fr,1fr,1fr), rows: (1fr,1fr), align: center, [#figure( tri(1,2,3), )], [#figure( tri(2,3,1), )], [#figure( tri(3,1,2), )], [#figure( tri(2,1,3,script:{edge(p3,(0.5,0.1),stroke: red)}) )], [#figure( tri(3,2,1,script:{ let mid = ((p3.at(0)+p1.at(0))/1.9,(p3.at(1)+p1.at(1))/1.9) edge(p2,mid,stroke: red) }) )], [#figure( tri(1,3,2,script:{ let mid = ((p3.at(0)+p2.at(0))/1.9,(p3.at(1)+p2.at(1))/1.9) edge(p1,mid,stroke: red) }) )], ) ] #slide[ #set figure(supplement: none) #let tri(a,b,c,script:none) = diagram({ node(p1,[*#a*]) node(p2,[*#b*]) node(p3,[*#c*]) edge(p1,p2,"-") edge(p2,p3,"-") edge(p3,p1,"-") script }) #grid( columns :(1fr,1fr,1fr), rows: (1fr,1fr), align: center, [#figure( tri(1,2,3), caption:$(1)(2)(3)$ )], [#figure( tri(2,3,1), caption:$(1,2,3)$ )], [#figure( tri(3,1,2), caption:$(3,2,1)$ )], [#figure( tri(2,1,3,script:{edge(p3,(0.5,0.1),stroke: red)}), caption:$(1,2)$ )], [#figure( tri(3,2,1,script:{ let mid = ((p3.at(0)+p1.at(0))/1.9,(p3.at(1)+p1.at(1))/1.9) edge(p2,mid,stroke: red) }), caption:$(1,3)$ 
)], [#figure( tri(1,3,2,script:{ let mid = ((p3.at(0)+p2.at(0))/1.9,(p3.at(1)+p2.at(1))/1.9) edge(p1,mid,stroke: red) }), caption:$(2,3)$ )], ) ] #slide(title:[$D_3$])[ 把上面正三角形的對稱性的置換收集起來,我們得到一個群,稱為正三角形的*對稱群*$D_3$。 那$D_3$的order是多少?只有$6$個嗎? ] #slide(title:[$D_4$])[ #grid( columns: (1fr,1fr), rows: (auto,auto), align: center, [ #v(20%) #figure( diagram({ node((0,0),[1]) node((1,0),[2]) node((1,1),[3]) node((0,1),[4]) edge((0,0),(1,0),"-") edge((1,0),(1,1),"-") edge((1,1),(0,1),"-") edge((0,1),(0,0),"-") node((0.5,-0.5),[#set text(fill: red);$tau_3$]) edge((0.5,-0.1),(0.5,1.1),"-",stroke: red) node((-0.5,0.5),[#set text(fill: lime);$tau_4$]) edge((-0.1,0.5),(1.1,0.5),"-",stroke: lime) node((1.5,-0.5),[#set text(fill: teal);$tau_2$]) edge((1,0),(0,1),"-",stroke: teal) node((-0.5,-0.5),[#set text(fill: purple);$tau_1$]) edge((0,0),(1,1),"-",stroke: purple) }) ) ], [ #set text(size: 21pt) $ e &= (1)(2)(3)(4) \ rho_1 &= (1,2,3,4)\ rho_2 &= (1,3)(2,4)\ rho_3 &= (1,4,3,2)\ tau_1 &= (1)(2,4)(3)\ tau_2 &= (1,3)(2)(4)\ tau_3 &= (1,2)(4,3)\ tau_4 &= (1,4)(2,3)\ $ ], [#pause 那$D_4$的order是多少? \ 只有$8$個嗎?] 
) ] #slide(title: "如何計算對稱群")[ + 先找到圖形的不動點 + 畫一條通過不動點的直線。 + 假設有$m$個對稱稱使得這條*線上的點*不動,而條*線上的點*在對稱性下會被打到$n$個不同的位子。 + 那麼這個對稱群的order就是$n times m$。 下一節會證明這個方法是正確的。 ] #slide(title: "Exercise")[ #let tri = cetz.canvas( length: 10em,{ import cetz.draw:*; line((0,0),(1,0),(0.5,0.866),close: true) }) #grid( columns: (1fr,1fr), rows: (auto), align: center, tri, tri, ) ] #slide(title: "Exercise")[ #let squrae = cetz.canvas( length: 10em,{ import cetz.draw:*; rect((0,0),(1,1)) }) #grid( columns: (1fr,1fr), rows: (auto), align: center, squrae, squrae, ) ] #slide(title:"如何計算對稱群")[ #grid( columns: (1fr,1fr), rows: (auto), align: center, image("./pic/gimage.jpg",height: 250pt), cetz.canvas(length: 4em,{ import cetz.draw:*; ortho(x:20deg,y:50deg,z:0deg,{ on-xy(z:-1,{ rect((-1,-1),(1,1)) }) on-xy(z:1,{ rect((-1,-1),(1,1), fill: silver) }) on-yz(x:-1,{ rect((-1,-1),(1,1)) }) on-yz(x:1,{ rect((-1,-1),(1,1), fill : rgb("e8e8f8").transparentize(20%)) }) on-xz(y:-1,{ rect((-1,-1),(1,1), stroke: (dash: "dashed")) }) on-xz(y:1,{ rect((-1,-1),(1,1)) }) }) }) ) ] #slide(title: "Exercise")[ + 判斷下列圖形的對稱群的order: #grid( columns: (1fr,1fr), rows: (auto), align: center, image("./pic/tetrahedron.png",height: 210pt), image("./pic/smile_face.jpg",height: 210pt), ) ] #new-section-slide("群作用",subtitle:"Group Action") #let gset = $G negspace text("-set")$ #slide(title: "群作用")[ #set text(size: 19pt) #definition(number:"4.1")[ 一個*群*$angle.l G,* angle.r$對一個集合$A$的*作用*是一個映射 $phi : G times A -> A$,滿足以下條件: #set enum(numbering: al("1.")) + 對於所有 $a in A quad phi(e,a) = a$ + 對於所有 $a in A$ 和 $g,h in G$,$phi(g*h,a) = phi(g,phi(h,a))$ 在這個情況下,我們稱$A$是一個$G negspace textb("-set")$。 ] 為了簡化,我們有時候會省略運算符號,寫成$g a$代表$phi(g,a)$。 所以上述的條件可以寫成 $ e a = a \ (g h) a = g (h a) $ 像是在上一章節中,我們考慮了對稱群$D_3$對正三角形的作用。 ] #let tri(a,b,c) = block[#diagram({ node(p1,[*#a*]) node(p2,[*#b*]) node(p3,[*#c*]) edge(p1,p2,"-") edge(p2,p3,"-") edge(p3,p1,"-") })] #slide(title:"Example")[ $rho_1 = (1,2,3) in D_3$ $ rho_1 #tri(1,2,3) = #tri(2,3,1) $ $tau = 
(1)(2,3) in D_3$ $ tau #tri(1,2,3) = #tri(1,3,2) $ ] #slide(title:"群作用")[ #set text(size: 19pt) #theorem(number:"4.2")[ 讓$X$是一個#gset。如果$g x_1 = g x_2$,那$x_1 = x_2$ ] #proof[ 假設 $g x_1 = g x_2$,那麼 $g^(-1)g x_1 = g^(-1) g x_2$,所以 $e x_1 = e x_2$,所以 $x_1 = x_2$。 ] #remark[ 如果$x != y$,那$g x != g y$ ] ] #slide(title: "軌道 Orbits")[ #set text(size: 19.5pt) #theorem(number:"4.3")[ 讓$X$是一個#gset,我們定義一個在$X$上的關係$tilde.op$,對於所有的$x,y in X$,$x tilde.op y$若且唯若存在$g in G$,使得$g x = y$。這個關係是一個等價關係。 ] <relation> #pause #proof[ 自反性、對稱性、傳遞性 #uncv("3-")[\ *自反性*:對於所有的$x in X$,$x tilde.op x$,因為$e x = x$。] #uncv("4-")[\ *對稱性*:如果$x tilde.op y$,那麼存在$g in G$,使得$g x = y$,所以$g^(-1) y = x$,所以$y tilde.op x$。] #uncv("5-")[\ *傳遞性*:如果$x tilde.op y$且$y tilde.op z$,那麼存在$g,h in G$,使得$g x = y$且$h y = z$,所以$h g x = z$,所以$x tilde.op z$。] ] ] #slide(title: "軌道 Orbits")[ #set text(size: 19pt) #definition(number:"4.4")[ 讓$X$是一個#gset,每一個在 Theorem 4.3 下的等價類稱為一個*軌道*。如果$x in X$,包含$x$的等價類是$x$的軌道,記作$G_x$。 ] #remark()[ 讓 $X$ 是一個 #gset,$x in X$,那麼 $x$ 的軌道 $G_x = {g x mid(|) g in G}$。 ] ] #let Stab = math.op("Stab") #slide(title: "不動點、穩定子")[ *Fixed point, Stabilizers* #set text(size: 19pt) #definition(number: "4.5")[ 讓$X$是一個#gset,讓$x in X$,$g in G$。我們定義: $ Stab_G (x) = {g in G | g x = x} \ X^g = {x in X | g x = x} $ $Stab_G (x)$稱為$x$的*穩定子*,$X^g$稱為$g$的*不動點*。 ] ] #let tri(c1,c2,c3) = { set text(size: 11pt); block[ #diagram( { node(p1,$space$,fill:c1,shape: circle) node(p2,$space$,fill:c2,shape: circle) node(p3,$space$,fill:c3,shape: circle) edge(p1,p2,"-") edge(p2,p3,"-") edge(p3,p1,"-") }) ] } #let (r,b,g) = (red,teal,lime) #slide(title:"Example")[ #set text(size: 23pt) 讓$G = D_3$,$X$是用$3$種顏色為三角形的頂點上色的所有結果所成的集合。 e.g. 
$ (#tri(red,teal,lime)),(#tri(red,red,red)),(#tri(red,teal,teal)) in X $ $abs(X) = 3 times 3 times 3 $ ] #slide(title: "Example: 軌道")[ #set text(size: 10pt) #let c = (red,teal,lime) #grid( columns: (1fr,)*9, column-gutter: 0em, rows: (1fr,)*3, fill: (x,y) => { let c = white; if x ==0{ if y == 0{ c = rgb("#b2c8eb"); } if y == 1{ c = rgb("#b2ebc8"); } if y == 2{ c = rgb("#ebc8b2"); } } if x==1{ c = rgb("#e9eba2"); } if x==2{ c = rgb("#b2ebc8"); } if x==3{ c = rgb("#ebc8b2"); } if x==4{ c = rgb("#b2c8eb"); } if x==5{ c = rgb("#b2ebc8"); } if x==6{ c = rgb("#ebc8b2"); } if (x==7 or x==8){ c = rgb("#e9eba2"); } return c.desaturate(50%) }, align: center+horizon, tri(r,r,r),tri(r,r,g),tri(g,g,r),tri(b,b,g),tri(g,g,b),tri(r,r,b),tri(b,b,r),tri(r,g,b),tri(r,b,g), tri(g,g,g),tri(g,r,r),tri(r,g,g),tri(g,b,b),tri(b,g,g),tri(b,r,r),tri(r,b,b),tri(b,r,g),tri(b,g,r), tri(b,b,b),tri(r,g,r),tri(g,r,g),tri(b,g,b),tri(g,b,g),tri(r,b,r),tri(b,r,b),tri(g,b,r),tri(g,r,b) ) ] #slide(title: "Example: Stabilizer")[ #place( top+ right, diagram( { node(p1,[*1*]) node(p2,[*2*]) node(p3,[*3*]) edge(p1,p2,"-") edge(p2,p3,"-") edge(p3,p1,"-") }) ) $Stab_G (#tri(red,red,red)) =D_3$ $Stab_G (#tri(g,g,b)) = {e,(1,2)}$ $Stab_G (#tri(r,g,b)) = {e}$ $Stab_G (#tri(b,r,r)) = {e,(2,3)}$ ] #slide(title: "Example: Fixed Point")[ #place( top+ right, diagram( { node(p1,[*1*]) node(p2,[*2*]) node(p3,[*3*]) edge(p1,p2,"-") edge(p2,p3,"-") edge(p3,p1,"-") }) ) $g = e$ , $X^g = X$ $g = (1,2,3)$ , $X^g = {#tri(r,r,r),#tri(g,g,g),#tri(b,b,b)}$ $g = (2,3)$ , $X^g =$ ${#tri(r,r,r),#tri(g,g,g),#tri(b,b,b), #tri(r,b,b),#tri(g,b,b),#tri(b,r,r),#tri(g,r,r),#tri(r,g,g),#tri(b,g,g)}$ ] #slide(title: "軌道-穩定子定理")[ #set text(size: 20pt) #theorem([軌道-穩定子定理 (Orbit-Stabilizer Theorem)],number:"4.6")[ 讓$G$是一個有限群,讓 $X$ 是一個 #gset,$x in X$,那麼 $abs(G) = abs(G_x) abs(Stab_G (x))$。 ] <orbit-stabilizer> ] #slide(title:"Proof")[ #set text(size: 19pt) 定義$f:G -> G_x$,$f(g) = g x$。我們證明每一個在$G_x$裡的元素都被打到$abs(Stab_G (x))$這麼多次。\ 給定一個$y in 
G_x$,那麼存在$h in G$使得$y = h x$。\ #pause 我們先證明這個引理: $f(g) = y <==> h^(-1) g in Stab_G (x)$。 #pause \ $=>$:如果$f(g) = y$,那麼$g x = h x$,所以$h^(-1)g x = x$,所以$h^(-1)g in Stab_G (x)$。 #pause \ $arrow.l.double$:如果$h^(-1) g in Stab_G (x)$,那麼$h^(-1) g x = x$,所以$g x = h x$,所以$f(g) = y$。\ ] #slide[ #set text(size: 19pt) 接著我們來討論有多少 $g in G$ 使得 $h^(-1)g in Stab_G (x)$。 #pause $ h^(-1) g in Stab_G (x) &<==> exists tilde(g) in Stab_G (x) st h^(-1) g = tilde(g)\ &<==> exists tilde(g) in Stab_G (x) st g = h tilde(g)\ &<==> g in {h tilde(g) | tilde(g) in Stab_G (x)} $ #pause 所以,#pin(1)$f(g) = y <==> g in {h tilde(g) | tilde(g) in Stab_G (x)}$ #pin(2),並且,對於所有$tilde(g) in Stab_G(x)$, #pinit-highlight(1,2, fill: teal.transparentize(70%)) $f(h tilde(g)) = h tilde(g)x = h x =y $。 因此,每個#pin(1)$y in G_x$ 都有 $abs(Stab_G (x))$ 個 $g in G$ 使得 $f(g) = y$#pin(2)。所以,$abs(G) = abs(G_x) abs(Stab_G (x))$。 #only(4)[ #pinit-line(1,2,start-dy:0.3em,end-dy:0.3em, stroke: red+2pt) ] ] #slide(title:"Exercise")[ #set text(size:22pt) #set enum(numbering: al("1.")) - 為什麼${h tilde(g) | tilde(g) in Stab_G (x)}$ 的基數(cardinality)和 $Stab_G (x)$ 的基數相等? 
] #slide(title: "伯恩賽德引理 (Burnside’s Lemma)")[ #set text(size: 19pt) #lemma([*伯恩賽德引理*],number:"4.7")[ 讓$G$是一個有限群,讓$X$是一個#gset。讓$r$是$X$的軌道數,那麼 $ r dot abs(G) = sum_(g in G) abs(X^g) $ ] #set text(size: 23pt) #pause 我們通過雙重計數來證明這個引理。考慮所有滿足$g x = x$的序組$(g,x)$,我們用兩種方式計數這些序組,這樣就會有一個很自然的等式。 ] #slide(title: "Proof")[ #set text(size: 19pt) 我們考慮序組$(g,x)$,其中$g x = x$。假設這樣的序組有$N$個。 給定 一個$g in G$,我們計算$(g,x)$的數量,這個數量是$abs(X^g)$。所以 $ N = sum_(g in G) abs(X^g) $ #pause 另一方面,給定一個$x in X$,我們計算$(g,x)$的數量,這個數量是$abs(Stab_G (x))$。所以 $ N = sum_(x in X) abs(Stab_G (x)) $ ] #slide[ #set text(size: 19pt) 根據 @orbit-stabilizer[*軌道穩定子定理* Thm],$abs(Stab_G (x))abs(G_x) = abs(G)$,所以, $ N = sum_(x in X) abs(Stab_G (x)) = sum_(x in X) abs(G) / abs(G_x) = abs(G) sum_(x in X) 1 / abs(G_x) $ 對於在相同軌道的元素,$abs(G_x)$是相同的。讓$cal(O)$是一個軌道,我們有 $ sum_(x in cal(O)) 1 / abs(G_x) = sum_(x in cal(O)) 1 / abs(cal(O)) = 1 $ 因此,$ sum_(x in X) 1 / abs(G_x) = (textr("軌道的數量")) $ ] #slide[ $ N = abs(G) dot (textr("軌道的數量")) = abs(G) dot r $ $ r dot abs(G) = sum_(g in G) abs(X^g) $ ] #slide(title: "Example")[ #set text(size: 19pt) 用$4$個顏色對一個正三角形的三個邊進行著色,有幾種不同的著色方法?(兩種著色方式被認為是相同的,如果他們可以通過旋轉、鏡射相互變換) #pause #text(size: 21pt)[*Method1*:] 分別討論有$1,2,3$個顏色時的著色方法數量。 $ C^4_1 times 1 + C^4_2 times 2 + C^4_3 times 1 = 20 $ e.g. 
#grid( columns: (1fr,1fr,1fr), rows: (auto), align: center, tri(r,r,r),grid.cell({tri(r,r,g);tri(g,g,r)}),tri(r,g,b) ) ] #slide(title: "Method2: Burnside's lemma")[ #set text(size: 21pt) 我們讓$G = D_3$是三角形的對稱群,$X$是所有著色的結果($abs(X) = 4^3$),所以我們要求$X$在$G$下有幾個軌道。根據先前的討論,我們知道$abs(G) = 6$,然後我們計算不動點的個數: #grid( columns: (1fr,1fr), $ abs(X^(rho_0)) = 4^3\ abs(X^(rho_1)) = 4\ abs(X^(rho_2)) = 4\ $, $ abs(X^(tau_1)) = 4^2\ abs(X^(tau_2)) = 4^2\ abs(X^(tau_3)) = 4^2\ $ ) 根據*伯恩賽德引理*,我們有 $ 6r &= 4^3 +4 +4 +4^2 +4^2 + 4^2 = 120\ r &= 20 $ 所以正三角形的相異著色方法有$20$種。 ] #slide(title: "著色多項式")[ #set text(size: 19pt) 我們考慮我們有$n$個顏色,幫一個有對稱性的圖形上色,我們假設在對稱性下有$r$種上色方式。 讓$X$是所有上色方法的集合,讓$G$是該圖形的對稱群,根據伯恩賽德引理,我們有 $ r = 1/abs(G) sum_(g in G) abs(X^g) $ 其中$X^g$是在$g$下的不動點的集合。 #pause $ g = underbrace((1,2,3)(5,4) dots (\#,\#),m_g) $ 「每個循環內的顏色都一樣」 $abs(X^g) = n^(m_g)$ #pause $ r = 1/abs(G) sum_(g in G) abs(X^g) = 1/abs(G) sum_(g in G) n^(m_g) $ ] #slide(title: "Example")[ #set text(size: 19pt) #let mg = $m_g$ 我們考慮有$n$個顏色,對一個正四邊形的頂點上色,我們要求在對稱性下有幾種不同的著色方法。 #pause 我們讓$G = D_4$是正四邊形的對稱群,$X$是所有著色的結果($abs(X) = n^4$),我們知道$abs(G) = 8$ #pause #set text(size: 21pt) #v(3em) - $1$個 $4$ cycle的單位變換,$e = (1)(2)(3)(4)$ - $2$個 $1$ cycle的旋轉($90 degree, 270 degree$),e.g. $g = (1,2,3,4)$ - $1$個 $2$ cycle的旋轉($180 degree$),e.g. $g = (1,3)(2,4)$ - $2$個 $3$ cycle的鏡射(對角線的鏡射),e.g. $g = (1)(3)(2,4)$ - $2$個 $2$ cycle的鏡射(中線的鏡射),e.g. $g = (1,2)(3,4)$ ] #slide[ #set text(size: 21pt) 所以我們有 $ r &= 1/8 (n^4 + 2n + n^2 + 2n^3 +2n^2) \ r &= 1/8 (n^4 + 2n^3 + 3n^2 +2n) $ ] #slide(title: "Example")[ 我們現在有$n$個顏色,幫一個正六面體上色,可以通過旋轉變換得到視為相同的著色方式。總共有多少種不同的著色方式? 
#grid( columns: (1fr,1fr), rows: (auto), align: center, cetz.canvas(length: 3em,{ import cetz.draw:* ortho(x:20deg,y:45deg,z:0deg,{ on-xy(z:-1,{ rect((-1,-1),(1,1),fill: rgb("e8e8f8")) }) on-xy(z:1,{ rect((-1,-1),(1,1),fill: rgb(silver)) }) on-yz(x:-1,{ rect((-1,-1),(1,1)) }) on-yz(x:1,{ rect((-1,-1),(1,1)) }) on-xz(y:-1,{ rect((-1,-1),(1,1)) }) on-xz(y:1,{ rect((-1,-1),(1,1)) }) }) }), cetz.canvas(length: 3em,{ import cetz.draw:* rect((0,0),(1,1),name: "1") rect((1,0),(2,1)) rect((2,0),(3,1)) rect((3,0),(4,1)) rect((1,1),(2,2)) rect((1,0),(2,-1)) content((0.5,0.5),[*1*]) content((1.5,0.5),[*2*]) content((2.5,0.5),[*3*]) content((3.5,0.5),[*4*]) content((1.5,1.5),[*5*]) content((1.5,-0.5),[*6*]) }) ) ] #slide[ #let mg = $m_g$ #set text(size: 21pt) 我們讓$G = D$是正六面體的對稱群,$X$是所有著色的結果($abs(X) = n^6$),我們知道$abs(G) = 24$ #pause #v(3em) + 單位變換:$(1)(2)(3)(4)(5)(6)$ + 過對面中點轉軸旋轉$90degree,270degree$,如:$(1,2,3,4)(5)(6)$,共 6 個。 + 過對面中點轉軸旋轉$180degree$,如:$(1,3)(2,4)(5)(6)$,共 3 個。 + 過對邊中點轉軸旋轉$180degree$,如:$(1,5)(3,6)(2,4)$,共 6 個。 + 過對頂點轉軸旋轉$120degree,240degree$,如:$(1,5,4)(2,3,6)$,共 8 個。 ] #slide[ #set text(size: 21pt) 所以我們有 $ r &= 1/24 (n^6 + 6n^3 + 3n^4 + 6n^3 + 8n^2) \ r &= 1/24 (n^6 + 3n^4 + 12n^3 + 8n^2) $ ] #slide(title:"Example")[ #set text(size: 21pt) 在旋轉的對稱性下,用$n$個顏色對一個正四面體的*邊*上色,總共有多少種不同的著色方式? #figure[ #image("/pic/image.png",width: 11em) ] 我們讓$G$是正四面體的對稱群,我們通過軌道-穩定子定理,我們可以得到$abs(G) =12$ ] #slide[ 我們討論其中的置換: - 單位變換:$(1)(2)(3)(4)(5)(6)$ - $8$個以一面中點的垂線為轉軸的旋轉:$(1,2,3)(4,5,6)$ - $3$個以過兩對邊中點的轉軸旋轉:$(1)(6)(2,4)(5,3)$ 所以我們有 $ r = 1/12 (n^6 + 8n^2 + 3n^4) $ ] #slide(title:"Exercise")[ #exercise[ 對於正$n$邊形的對稱群$D_n$,$abs(D_n)$是多少? ] #text(size:19pt)[5pt] ] #slide(title:"Exercise")[ #set text(size: 19pt) #exercise[ 有$n$個不同顏色的珠子,我們要把這些珠子串成一串$6$個珠子的項鍊,可以通過旋轉變換得到視為相同的項鍊。總共有多少種不同的項鍊? 
#figure[ #diagram( node-stroke: 1pt, { for t in range(6).map(i => i/6*360deg) { node((calc.cos(t),calc.sin(t)),[#v(0.1em)],shape: circle) edge((calc.cos(t),calc.sin(t)),(calc.cos(t+60deg),calc.sin(t+60deg)), bend: 30deg) } }) ] #set enum (numbering: al("a)")) + 對稱群的order是多少? + 對稱群的元素有哪些? 每個元素有幾個循環? + 有多少種不同的著色方式? ] #text(size:19pt)[1pt ;2pt; 2pt] ] #slide(title:"Exercise")[ #exercise[ 在旋轉的對稱性下,用$n$個顏色對一個正四面體的*面*上色。 #set enum (numbering: al("a)")) + 對稱群的order是多少? + 對稱群的元素有哪些? 每個元素有幾個循環? + 有多少種不同的著色方式? ] #text(size:19pt)[1pt; 2pt; 2pt] ] #slide(title:"Exercise")[ #exercise[ 有$3$個顏色,幫一個正六面體上色,*每個顏色上兩個面*,可以通過旋轉變換得到視為相同的著色方式。總共有多少種不同的著色方式? ] #text(size:19pt)[5pt] ]
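The Burnside computations in the deck above are easy to spot-check by brute force. Below is a minimal Python sketch (not part of the slides): `D3` lists the six symmetries of the triangle as index maps, and `orbit_count` applies Burnside's lemma by counting fixed colorings.

```python
from itertools import product

def orbit_count(perms, colorings):
    # Burnside's lemma: number of orbits = average number of
    # colorings fixed by each group element.
    total = 0
    for p in perms:
        total += sum(1 for c in colorings
                     if all(c[p[i]] == c[i] for i in range(len(p))))
    assert total % len(perms) == 0
    return total // len(perms)

# D3 acting on the triangle's three vertices (equivalently, its edges)
D3 = [
    (0, 1, 2),                        # identity
    (1, 2, 0), (2, 0, 1),             # rotations
    (0, 2, 1), (2, 1, 0), (1, 0, 2),  # reflections
]

# 3 colors on the vertices: |X| = 27, (27 + 3 + 3 + 9 + 9 + 9) / 6 = 10 orbits
print(orbit_count(D3, list(product(range(3), repeat=3))))  # 10

# The deck's 4-color edge-coloring example: (n^3 + 3n^2 + 2n) / 6 = 20 at n = 4
print(orbit_count(D3, list(product(range(4), repeat=3))))  # 20
```

The same `orbit_count` reproduces the square and cube polynomials from the later slides once the corresponding permutation lists are written out.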
https://github.com/bradmartin333/TypstNotes
https://raw.githubusercontent.com/bradmartin333/TypstNotes/main/README.md
markdown
# TypstNotes See [main.pdf](main.pdf) for the compiled version. # VS Code extensions - Typst-LSP (nvarner.typst-lsp) - vscode-pdf (tomoki1207.pdf)
https://github.com/gianzamboni/cancionero
https://raw.githubusercontent.com/gianzamboni/cancionero/main/acordes.typ
typst
#import "theme/music.typ": *; #set align(center) == Notas musicales === Guitarra #grid(columns: (1fr, 2.5fr), inset: 0.75em)[ #table( columns: 2, rows: auto, table.header([Inglés], [Latino]))[A][La][B][Si][C][Do][D][Re][E][Mi][F][Fa][G][Sol] ][ #image("assets/notas-guitarra.png") ] == Acordes === Guitarra #grid(columns: (1fr, 1fr, 1fr), row-gutter: 1em, ..chordsData.keys().map(drawChord) ) #pagebreak() === Ejercicios #set align(left) #v(2em) #beatDiagram((("G", 4), ("Am", 4), ("D7", 4), ("G", 4))) #beatDiagram((("G", 4), ("Am", 2), ("D7", 2)))
https://github.com/hillbillybones/job-search
https://raw.githubusercontent.com/hillbillybones/job-search/main/cv.typ
typst
#let ( tel, email, linkedin, skills, experience, education, ..data ) = yaml("./cv.yaml") #let reset(x) = text(weight: "regular", size: 11pt, x) = Gregory (Bear) Shuford #link("tel:" + tel.replace(" ", ""), tel) #sym.circle.filled.small #link("mailto:" + email, email) #sym.circle.filled.small #link("https://linkedin.com/in/" + linkedin, "linkedin.com/in/" + linkedin) #if "objective" in data [ == Objective #v(3.5pt) #data.objective #v(5.6pt) ] == Experience #for (title, dates, company, location, duration, responsibilities) in experience [ === #title · #reset[_#(company)_ · #location #h(1fr) #dates · _#(duration)_] #v(3.4pt) #for responsibility in responsibilities [ - #responsibility ] ] #v(7pt) == Skills #let tableProps = (columns: 2, stroke: none, inset: (left: 0pt), column-gutter: 8pt) #let skills = skills.map(((x, y)) => ([*#x*], y.join(" · "))).flatten() #table(..tableProps, ..skills) #v(3pt) == Education #for (degree, dates, institution, ..edu) in education [ === #degree #reset[ · _#(institution)_ #if "details" in edu [· #edu.details ] #h(1fr) #dates] #if "description" in edu { v(2.4pt) edu.description } ]
https://github.com/Aadamandersson/typst-analyzer
https://raw.githubusercontent.com/Aadamandersson/typst-analyzer/main/components/syntax/test_data/parser/ok/if_else_if_else_expr.typ
typst
Apache License 2.0
#if false {} else if false {} else if false {} else {}
https://github.com/OverflowCat/BUAA-Digital-Image-Processing-Sp2024
https://raw.githubusercontent.com/OverflowCat/BUAA-Digital-Image-Processing-Sp2024/master/chap03/main.typ
typst
#import "@preview/unify:0.4.3": num, qty #import "@preview/gentle-clues:0.7.0": clue #set text(font:("STIX Two Text", "Noto Serif CJK SC")) #set page(paper: "iso-b5", numbering: "1") #let problemCounter = counter("mycounter") #let problem(icon: emoji.quest , ..args) = clue( accent-color: green, title: [问题#problemCounter.display("一")], icon: icon, ..args ) #show heading.where(level: 1): it => [ #set text(font: ("Noto Sans CJK SC"), size: 18pt) #it.body ] #show heading.where(level: 2): it => [ #problem[ #set text(font: ("Noto Sans CJK SC"), size: 11pt, weight: "medium") #it.body ] ] = 数字图像处理 第三章作业 #problemCounter.step() == 为了展开一幅图像的灰度,使其最低灰度为 $C$、最高灰度为 $L-1$,试给出一个单调的变换函数. 记图像中的灰度值的最大值为 $r_max$,最小值为 $r_min$,则可给出一个符合要求的函数 $ T(r) = C + (L - 1 - C)/(r_max - r_min) (r-r_min). $ #problemCounter.step() == 试解释为什么离散直方图均衡技术一般不能得到平坦的直方图? 原图像一般只有有限个灰度级,部分灰度级像素较多,部分灰度级中没有像素.由于均衡化时的变换函数是单调递增的,每个灰度级只能映射到同一个灰度级,所以均衡化后的图像的灰度级数小于等于原图像的灰度级数. #problemCounter.step() == 假设对一幅数字图像进行直方图均衡处理.试证明(对直方图均衡后的图像)进行第二次直方图均衡处理的结果与第一次直方图均衡处理的结果相同. 由 $ s_k = T(r_k) = (L-1) sum^k_(j=0) p_r(r_j), quad k = 0,1,2, dots.c ,L-1 $ 和 $ p_r(r_k) = n_k / (M N), $ 得第一次直方图均衡化 $ s_k = T(r_k) = (L-1) sum^k_(j=0) p_r(r_j) = (L-1) sum^k_(j=0) n_k/(M N) = (L-1)/(M N) sum^k_(j=0) n_j. $ 其中,$M N$ 是图像中的像素总数,$n_k$ 表示灰度值为 $r_k$ 的像素数. 记 $n'_k$ 表示灰度值为 $s_k$ 的像素数,则第二次直方图均衡化的结果为 $ s'_k = s' = T(s) = (L - 1) / (M N) sum^k_(j=0) n'_j. $ 由于直方图均衡化使原有的每个灰度级的像素数量保持不变,只是灰度值发生了变化,因此,映射后的每个新灰度级依然保有相同数量的像素,即 $n'_k = n_k$,所以 $ s'_k = s' = T(s) = (L - 1) / (M N) sum^k_(j=0) n_j = s_k. $ #problemCounter.step() == 4、(a)证明式(3.3.8)中给出的离散变换函数对直方图均衡处理满足3.3.1节中的条件(a)和(b). 对于 $ s_k = T(r_k) = (L-1)/(M N) sum^k_(j=0) n_j, quad k = 0,1,2, dots.c ,L-1, $ // 其导数 $ s'_k () = T(r_k) = (L-1)/(M N) sum^k_(j=0) n_j, // quad k = 0,1,2, dots.c ,L-1, $ 设 $r_k_1 < r_k_2$, // 设 $r_1< r_2$, $ s_k_2 - s_k_1 =& T(r_(k_2)) - T(r_(k_1)) \ =& (L-1)/(M N) sum^(k_2)_(j=0) n_j - (L-1)/(M N) sum^(k_1)_(j=0) n_j\ =& (L-1)/(M N) sum^(k_2)_(j=k_1+1) n_j >= 0. 
$ 故 $T(r)$ 在区间 $0 <= r <= L-1$ 上是一个单调递增函数,条件 (a) 得证. $ s_(k max)& = T(r)_max =&(L-1)/(M N) M N &= L-1,\ s_(k min)& = T(r)_min =&(L-1)/(M N) dot 0 &= 0, $ 故对于 $0 < r < L-1$,有 $0 <= T(r) <= L-1$,条件 (b) 得证. == (b)证明只有在灰度不丢失的情况下,式(3.3-9)表示的离散直方图反变换才满足3.3.1节中的条件(a′)和(b). 对于 $0 < s_k < L-1$,有 $0 <= r_k = T^(-1)(s_k) <= L-1$,条件 (b) 得证. 由 (a),设 $r_k_1 < r_k_2$ 时,若 $r_k_2 = r_k_1 + 1$ 且 $r_k_2$ 丢失,则 $n_k_2 = 0$, $ s_k_2 - s_k_1 =& T(r_(k_2)) - T(r_(k_1)) \ =& (L-1)/(M N) sum^(k_2)_(j=k_1+1) n_j = 0. $ $T(r_k_1)$ 可能等于 $T(r_k_2)$,不符合单调递增的要求. 而当灰度不丢失时,$n_k_2>0$,$ s_k_2 - s_k_1 =& T(r_(k_2)) - T(r_(k_1)) \ =& (L-1)/(M N) sum^(k_2)_(j=k_1+1) n_j \ =& (L-1)/(M N) (n_(k_1+1) + dots.c + n_k_2) \ >=& (L-1)/(M N) (n_k_2) \ >& 0, $ 此时 $T(r)$ 在区间 $0 <= r <= L-1$ 上是一个严格单调递增函数. 下面证明严格单调递增函数 $T(r)$ 的反函数 $r = T^(-1)(s)$ 严格单调递增.取 $0 <= s_k_1 < s_k_2 <= L-1$,有 $r_k_1 = T^(-1)(s_k_1),$ $r_k_2 = T^(-1)(s_k_2)$,因为 $T(r)$ 严格递增,所以 $r_k_1 < r_k_2$,所以 $T^(-1)(s)$ 在 $0 <= s <= L-1$ 上严格递增. #problemCounter.step() == 在给定应用中,一个均值模板被用于输入图像以减少噪声,然后再用一个拉普拉斯模板来增强图像中的细节.如果交换一下这两个步骤的顺序,结果是否会相同? 均值模板与拉普拉斯模板都是线性空间不变的卷积运算,而卷积满足交换律与结合律,因此顺序不影响最终结果,即交换前后结果相同. /* g(x, y) = f(x - 1, y - 1) + f(x, y - 1) + f(x + 1, y - 1) + f(x - 1, y) + f(x, y) + f(x + 1, y) + f(x - 1, y + 1) + f(x, y + 1) + f(x + 1, y + 1) h(x, y) = 5 g(x, y) - g(x + 1, y) - g(x - 1, y) - g(x, y + 1) - g(x, y - 1) = */ #problemCounter.step() == 使用式(3.6-6)给出的拉普拉斯定义,证明从一幅图像中减去相应的拉普拉斯图像等同于对图像进行非锐化模板处理. 非锐化处理中, $ g_"1mask"(x, y) = f(x, y) - overline(f) (x, y) $ $ g_1(x, y) = f(x, y) + k_1 dot g_"1mask"(x, y) = (1 + k_1) f(x, y) - k_1 overline(f) (x, y). $ 由拉普拉斯定义 $ nabla^2 f(x,y) = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y), $ $ g_2(x, y) = &f(x, y) - nabla^2 f(x,y) \ = &f(x, y) - (f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)) \ =&5f(x, y) - (f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1)) \ =&5f(x, y) - k_2 overline(f)(x, y). $
有 $g_1(x, y)bar_(k_1 = 4) = g_2(x, y)bar_(k_2 = 4)$,即从一幅图像中减去相应的拉普拉斯图像等同于对图像进行非锐化模板处理.
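Problem 3 above claims that applying histogram equalization a second time reproduces the first result. This is easy to verify numerically; the following is an illustrative Python sketch (toy 8-level image, not from the assignment), implementing $s_k = (L-1)/(M N) sum_(j<=k) n_j$ with rounding:

```python
def equalize(pixels, L=8):
    # Discrete histogram equalization:
    # s_k = round((L - 1) / (M N) * sum_{j <= k} n_j)
    MN = len(pixels)
    hist = [0] * L
    for p in pixels:
        hist[p] += 1
    cdf, mapping = 0, []
    for k in range(L):
        cdf += hist[k]
        mapping.append(round((L - 1) * cdf / MN))
    return [mapping[p] for p in pixels]

img = [0, 0, 1, 1, 1, 2, 3, 7]   # toy 8-pixel, 8-level image
once = equalize(img)
twice = equalize(once)
print(once == twice)             # True: re-equalizing changes nothing
```

The second pass leaves the image unchanged because equalization preserves the pixel count at each mapped level, exactly the argument used in the proof above.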
https://github.com/OverflowCat/BUAA-Data-and-Error-Analysis-Sp2024
https://raw.githubusercontent.com/OverflowCat/BUAA-Data-and-Error-Analysis-Sp2024/neko/5-regression/regression.typ
typst
#import "helper.typ": hr, c #import "@preview/unify:0.5.0": num,qty,numrange,qtyrange #let regression = (x, _y, x_unit: "", y_unit: "", estimate: none, control: none) => { let N = x.len() let y = () let REP = false let m = 1 if type(_y.at(0)) == array { // 重复测量 REP = true m = _y.len() for i in range(N) { let avg = _y.map(y_group => y_group.at(i)).sum() / m y.push(avg) } } else { y = _y } // = 一元线性回归 assert(x.len() == y.len()) [== 原始数据] table( columns: (.8cm, ..range(N).map(x => 1fr)), table.header("N", ..range(N).map(x => [#(x+1)])), $x$, ..x.map(x => [#x]), $y$, ..y.map(y => [#y]), ) [== 回归方程的确定] let DS = $sum^N_(t=1)$ let XU = x_unit let XU2 = "" let YU = y_unit let YU2 = "" let XYU = [] let YXU = [] if XU != "" { XU2 = $XU^2$ XYU += XU YXU += "/" + XU } if YU != "" { YU2 = $" "#y_unit^2$ if XU != "" { XYU += $dot.c$ } XYU += YU YXU = YU + YXU } if XYU != "" { XYU = " " + XYU } // 计算x的均值 let sum_x = x.sum() let x_avg = sum_x / N $ DS x_t = #sum_x XU, overline(x) = #c(x_avg) XU $ // 计算x的平方的平均值 let x_sq = x.map(x => calc.pow(x, 2)) let sum_x_sq = x_sq.sum() let x_sq_avg = calc.pow(sum_x, 2) / N $ DS x_t^2 = #c(sum_x_sq) XU2, quad (DS x_t)^2 / N = #c(x_sq_avg) XU2 $ let l_xx = sum_x_sq - x_sq_avg $ l_(x x) = DS x_t^2 - (DS x_t)^2 / N = #c(l_xx) XU2 $ hr // 计算y的均值 let sum_y = y.sum() let y_avg = sum_y / N $ DS y_t = #sum_y YU, overline(y) = #y_avg YU $ // 计算y的平方的平均值 let y_sq = y.map(y => y*y) let sum_y_sq = y_sq.sum() let y_sq_avg = calc.pow(sum_y, 2) / N $ DS y_t^2 = #c(sum_y_sq) YU2, quad (DS y_t)^2 / N = #c(y_sq_avg) YU2 $ let l_yy = sum_y_sq - y_sq_avg $ l_(y y) = DS y_t^2 - (DS y_t)^2 / N = #c(l_yy) YU2 $ hr let sum_xy = x.zip(y).map(((x, y)) => x * y ).sum() let xy_sum_avg = (sum_x * sum_y) / N $ DS x_t y_t = #c(sum_xy) XYU, quad ((DS x_t)(DS y_t)) / N = #c(xy_sum_avg) XYU $ let l_xy = sum_xy - xy_sum_avg $ l_(x y) = DS x_t y_t - ((DS x_t)(DS y_t)) / N = #c(l_xy) XYU $ [=== 计算 $b$ 和 $b_0$] let b = l_xy / l_xx $ b = l_(x y) / l_(x x) = #b YXU $ let b_0 = y_avg - 
b * x_avg $ b_0 = overline(y) - b overline(x) = #c(b_0) YU $ [最终的回归直线为] $ hat(y) = #b_0 YU + (#b YXU) x $ [== 方差分析] let DM = $sum^m_(i=1)$ let (S, U, Q) = (0., 0., 0.) if REP { S = _y.flatten().map(yti => calc.pow(yti - y_avg, 2)).sum() U = m * b * l_xy Q = S - U let vU = 1 let QL = m * l_yy - U let QE = range(N).map(t => _y.map(y_group => calc.pow(y_group.at(t) - y.at(t), 2)).sum()).sum() let vL = N - 2 let vE = N * (m - 1) [ / 总离差平方和: $ S = #c(S) YU2. $ / 回归平方和: $ U = m b l_(x y) = #c(U) YU2. $ / 残余平方和: $ Q &= l_(y y) - b l_(x y) = S - U = #c(Q) YU2,\ Q_E &= DS DM (y_(t i) - overline(y)_t)^2 = #c(QE) YU2,\ Q_L &= m l_(y y) - U = #c(QL) YU2. $ ] let F1 = (QL / vL) / (QE / vE) let F2 = (U / vU) / ((QE + QL) / (vE + vL)) $ F_1 = (Q_L / v_L) / (Q_E / v_E) = #c(F1), quad F_2 = (U / v_U) / ((Q_E + Q_L) / (v_E + v_L)) = #c(F2). \ sigma_E = sqrt(Q_E / v_E) = #c(calc.sqrt(QE / vE)). quad sigma_L = sqrt(Q_L / v_L) = #c(calc.sqrt(QL / vL)). $ } else { S = l_yy U = calc.pow(l_xy, 2) / l_xx Q = S - U [ / 总离差平方和: $ S = DS (y_t - overline(y))^2 = l_(y y) = #c(S) YU2. $ / 回归平方和: $ U = DS (hat(y)_t - overline(y))^2 = b l_(x y) = l_(x y)^2 / l_(x x) = #c(U) YU2. $ / 残余平方和: $ Q = l_(y y) - b l_(x y) = S - U = #c(Q) YU2. $ ] } [ == 显著性检验 === $bold(F)$ 检验 ] let v_U = 1 let v_Q = N - 2 let F = calc.round((U/v_U)/(Q/v_Q), digits: 2) $ F =& (U "/" v_U) / (Q "/" v_Q) = (U "/" 1) / (Q "/" (N - 2)) \ =& #c(U) / (#c(Q) "/" #v_Q) \ =& #c(F). 
$ [ == 方差分析表 ] let sigma2 = Q / (N - 2) table( align: center, columns: (auto, auto, auto, auto, 1fr, auto), table.header([来源], [平方和], [自由度], [方差], $F$, [显著性水平]), [回归], $U = #c(U)$, $#v_U$, table.cell(rowspan: 2, $ sigma^2 &= Q / (N - 2)\ &= #c(sigma2) $), table.cell(rowspan: 2, $#F$), table.cell(rowspan: 2, $alpha = 0.01$), [残余], $Q = #c(Q)$, $#v_Q$, [总计], $S = #c(S)$, $#(v_Q + v_U)$, [#line(length: 1em)], [#line(length: 1em)], [#line(length: 1em)] ) let sigma_ = calc.sqrt(sigma2) /* 在定点的值 */ if estimate != none { [ == 预测问题 当 $x_0 = estimate$ 时,有 ] let y_0 = b_0 + b * estimate $ y_0 = #c(b_0) + (#c(b)) times estimate = #c(y_0) $ $ sqrt(Q / (N - 2)) = #c(sigma_) $ if N > 90 { [因 $alpha$ = 0.05,查附表 2.1 正态分布表可得 $Z_alpha approx 1.95$,得到] let d = 1.95 * sigma_ $ d = 1.95 times sigma = #c(d) $ [$y_0$ 的 95% 近似预测区间为 $(#c(y_0 - d), #c(y_0 + d)).$] } else { let t = 2.31 [查附表2.3可得 $t_alpha (N-2)= #t.$] let d = t * sigma_ * calc.sqrt(1 + 1/N + calc.pow(estimate - x_avg, 2) / l_xx) $ delta(x_0) = t_alpha (N-2) sigma sqrt(1 + 1/N + (x_0 - overline(x))^2/l_(x x)) = #c(d). $ [即预测水平为0.95的区间为 $(#c(y_0 - d), #c(y_0 + d))$.] } } if control != none { let (y01, y02) = control [ == 控制问题 $ y_"0 1" = y01, y_"0 2" = y02, $ ] $ cases( x_1 = 1/b ((y_0)_1 - b_0 plus/* .minus */ Z_alpha sqrt(Q/(N-2))), x_2 = 1/b ((y_0)_2 - b_0 minus/* .plus */ Z_alpha sqrt(Q/(N-2))) ) $ if b > 0 { $ b > 0, x_1 > x > x_2, $ } else { $ b < 0, x_1 < x < x_2, $ } let Z = 1.95 let x1 = 1/b*(y01 - b_0 + Z * sigma_) let x2 = 1/b*(y02 - b_0 - Z * sigma_) [其中 $x_1 = x1, x_2 = x2.$] } }
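The core quantities this regression template derives ($l_(x x)$, $l_(x y)$, $b$, $b_0$, $U$, $Q$, $F$) can be cross-checked outside Typst. A small Python sketch with made-up data — variable names mirror the template's:

```python
def linreg_anova(x, y):
    # Least-squares line and ANOVA quantities, following the template:
    # l_xx, l_yy, l_xy; b = l_xy / l_xx; b0 = ybar - b * xbar;
    # U = l_xy^2 / l_xx (regression SS), Q = l_yy - U (residual SS).
    N = len(x)
    sx, sy = sum(x), sum(y)
    l_xx = sum(v * v for v in x) - sx * sx / N
    l_yy = sum(v * v for v in y) - sy * sy / N
    l_xy = sum(a * b for a, b in zip(x, y)) - sx * sy / N
    b = l_xy / l_xx
    b0 = sy / N - b * sx / N
    U = l_xy ** 2 / l_xx
    Q = l_yy - U
    F = (U / 1) / (Q / (N - 2))  # v_U = 1, v_Q = N - 2
    return b, b0, U, Q, F

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
b, b0, U, Q, F = linreg_anova(x, y)
print(round(b, 2), round(b0, 2))  # slope ~ 1.99, intercept ~ 0.05
```

The prediction-interval and control computations in the template then follow from these quantities with the appropriate $t$ or normal quantile.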
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/chronos/0.1.0/src/sync.typ
typst
Apache License 2.0
#import "sequence.typ" #let _sync(elmts) = { return (( type: "sync", elmts: elmts ),) } #let render(pars-i, x-pos, participants, elmt, y, lifelines) = { let draw-seq = sequence.render.with(pars-i, x-pos, participants) let shapes = () let end-y = y for e in elmt.elmts { let yi let shps (yi, lifelines, shps) = draw-seq(e, y, lifelines) shapes += shps end-y = calc.min(end-y, yi) } let r = (end-y, lifelines, shapes) return r }
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/meta/figure-localization_00.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": * #show: test-page // Test French #set text(lang: "fr") #figure( circle(), caption: [Un cercle.], )
https://github.com/polarkac/MTG-Stories
https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/014%20-%20Khans%20of%20Tarkir/010_Victory.typ
typst
#import "@local/mtgstory:0.2.0": conf #show: doc => conf( "Victory", set_name: "<NAME>", story_date: datetime(day: 26, month: 11, year: 2014), author: "<NAME>", doc ) #emph[Zurgo, khan of the Mardu, knows how to nurse a grudge. And there's no one he hates more than the Planeswalker Sarkhan Vol, a former Mardu who burned his own clanmates with dragonfire when his spark ignited.] #emph[But what lengths will he go to for vengeance?] #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em) <NAME> stood on a rocky outcropping at the edge of a jagged plateau, surveying the assembled multitudes of the Mardu below him on the plain. Spread out among them were the corpses of many warriors. Some were Mardu, but the great majority were Temur. To the left of the army lay endless windswept scrubland, the home territory of his people. To the right lay the beginnings of the Temur foothills, where the Temur force he had just defeated had come from. While he surveyed his army, his army watched him as well. They looked at him with triumph, and weariness, and expectation. "We are Mardu!" he shouted. #figure(image("010_Victory/01.jpg", width: 100%), caption: [Mardu Ascendancy | Art by Jason Chan], supplement: none, numbering: none) "MARDU!" they returned, and they cheered as one for several seconds. He drank in their unified exaltation until the roar quieted. "Surrak has tested our borders," he shouted, "and we have shown him that they are strong. Perhaps he thought that we were sitting idle at Wingthrone. He is wrong! We are Mardu, and we rule these plains!" Zurgo stomped his great foot and the multitudes cheered again. As they cheered, a low cracking sound came from the rock below him. He looked down and saw that a jagged line had appeared beneath his feet. The cracking sound continued. Zurgo took two steps backward, and a moment later the forward part of the outcropping broke off and fell to the ground below with a great thump. 
As the cheer died down, a shrill voice coming from down on the plain reached Zurgo's ears. Warriors near to its source were turning to face it with worried, confused faces. Zurgo turned to Varuk, an old but clever orc standing nearby who served as Zurgo's closest advisor, and asked, "What is that?"

Varuk swiveled his ears forward. "It is a goblin, my khan. He is angry."

Zurgo sniffed. "Bring him here."

Varuk gave him a quick but nervous look. "Your will, my khan." He looked at a nearby human guard and snapped his fingers, and she took off running toward the disturbance.

By the time she returned to Zurgo with the goblin, the plain was silent once again, and the army watched as Zurgo peered down at the little ball of fuzz.

#figure(image("010_Victory/02.jpg", width: 100%), caption: [Goblin token | Art by Kev Walker], supplement: none, numbering: none)

Zurgo opened his mouth to speak, but the goblin was too fast. "My sister died to take this rock, and you broke it!" The goblin's squeaky voice somehow carried across the silent plain as the crowd began to shift uncomfortably.

Zurgo stood as tall as he could. "We were victorious over the Temur because we fought as one mind, one body, one clan. Death in battle is a great glory if it serves the clan! Your sister's brave sacrifice saved many Mardu lives!"

Varuk raised his weapon and shouted a cheer. In answer, the multitude raised its weapons to the sky and matched it. Their unified voice washed over the plateau, and dissipated.

"Then you broke it!" The goblin looked down at where the section now lay at the foot of the plateau, then returned its defiant gaze to him. "It was a good rock!"

The pathetic goblin stared up at Zurgo as its squeaky voice rang out across the silent plain. The warriors closest to Zurgo were edging forward, and their faces were cold and angry as they began to mutter among themselves. Rage welled in Zurgo's heart. "You think I do not command what is best for the Mardu?"

"My sister died for nothing!"
it squeaked.

Zurgo raised his left foot as high as he could, then stomped on the goblin, dropping all his weight onto it. It flattened nearly to the ground with a satisfying crunch.

Zurgo returned his attention to his multitudes. "I have no need for this rock, or any other! We move, we take, we eat! We are Mardu, and we have shown Surrak our might!"

The army roared one more time, although this time it was not quite as loud. Zurgo turned away from the crowd, and the dull roar of conversation began below him.

As the army's attention dispersed, Varuk approached Zurgo with a slightly lowered head and indicated the crushed goblin's corpse. "I am not certain it was wise to kill the goblin."

#figure(image("010_Victory/03.jpg", width: 100%), caption: [Mardu Warshrieker | Art by <NAME>], supplement: none, numbering: none)

"It threatened my authority, and without unity we are nothing."

Something flashed in Varuk's eyes. "Is this more important than your position? His family will resent you."

A warrior wearing a messenger's banner cut her way through the crowd around Zurgo, and stopped breathlessly in front of him. "I know why," she panted, "they attacked us. A Temur scout saw one of us in the forests, beyond our borders."

#figure(image("010_Victory/04.jpg", width: 100%), caption: [War-Name Aspirant | Art by David Gaillet], supplement: none, numbering: none)

Zurgo snapped to face her. "What?"

She took a step back. "They surrounded him. He called himself 'Sarkhan.' The Temur were insulted that he claimed rule over them and they demanded that he surrender…" She stood there, saying nothing.

Zurgo snorted. "And?"

"He…they say he turned into a dragon. And breathed fire on them, took off, and flew farther into Temur territory."

Vol. It could only be Vol. Zurgo's eyes narrowed.

"They assumed it was the new khan of the Mardu, and so they attacked, while their enemy's leader was elsewhere. Except you weren't. And you can't turn into a dragon."
She looked down for a moment, then back up with questioning eyes. "Right?"

"You are dismissed," Zurgo bellowed.

As she scurried away, Varuk approached closely, head bowed. "You should not chase him."

Zurgo looked down at him. "He has threatened this clan enough. He must die."

Varuk tilted his head to one side, a bit bolder now. "You forget how long I have been at your side. I remember when you were just a wing leader. I was there when Vol deserted, and expected to be welcomed with open arms when he returned. I was there when you sent him into battle against the Sultai. I was there when he turned into a great flying beast of flame and roasted your army with his breath. I know what he can do, and he is too much for you."

"He called himself Sarkhan, and that is why Surrak attacked us. Do you think the next khan who hears this name will laugh and slap her thigh when she hears this claim? No. This will not be the last time we are attacked because of his treachery."

"After a defeat of this size, Surrak must leave us alone for some time. Our horses do not do well in the mountains. And Vol is moving away from us."

"He is a traitor and a threat, and I will see him dead."

Varuk turned his head to face the army, which was now a good deal of the way into pitching camp. "How will you convince the rest of them to go? They do not share your history."

Zurgo sneered. "Tonight, we celebrate. Tomorrow, we prepare. The next day, we punish Surrak for his impudence. Tell the rest of them."

Varuk nodded, and disappeared into the raucous crowd.

Zurgo's horde spent the night in celebration. Zurgo himself remained in his tent, allowing them their triumph. He was livid with Vol, and any warrior who saw him in this state would assume that he was angry at one of the Mardu instead. Only a veteran few of his warriors still desired revenge on Vol, and so Surrak's head would have to be enough to lead his army into the mountains.
He could say now that he was angry with Surrak, but that would not work until the glory of the victory had faded, so he remained alone.

The next day, the Mardu prepared to move. Zurgo's warriors scoured the corpses of the fallen for supplies and made great piles of their bodies. Shamans created great chasms underneath the piles and closed them again once the mass graves were full. Scouts probed the edges of the wooded foothills adjacent to the plains. And the three top wing leaders of his army attended Zurgo in his tent.

"Tomorrow, we move into the mountains," he said to them. "We will punish Surrak for his impudence."

"The Temur fare best in their mountains," Varuk said. "This path is dangerous."

"We have scouts," Zurgo said. "We will be prepared when the enemy strikes."

"They do not know Temur lands," said a female orc named Rufaz, her eyes wide with confusion. "We will be blind in comparison to our enemies."

Zurgo glared at her. "You should have more confidence in our warriors."

"We have already punished Surrak enough," said a male human named Batar, his lowered black eyebrows and mustached sneer thick with disdain. "Risking so much to punish him more is foolish."

Zurgo's face twisted. "I am the khan of the Mardu. You will do as I say."

Varuk nodded, and then Rufaz nodded. After a few moments, Batar nodded too and they all left. By the time he rejoined the army, all three of them had begun to prepare his horde for the next day's travel.

The next morning, Zurgo's army packed its tents, mounted its horses and riding-beasts, and began to move. He sent scouts ahead to probe the forest for the Temur. "I also heard reports of a Mardu deserter," he said to the scouts. "If you find him, do not chase him but tell me." They nodded and dispersed into the woods.

#figure(image("010_Victory/05.jpg", width: 100%), caption: [Wooded Foothills | Art by Jonas De Ro], supplement: none, numbering: none)

Zurgo traveled in the center of the horde, his riding-beast towering over the horses of the army around him. It struggled a little in the hills, although not as much as the horses.

His first wave of scouts returned with vague but disquieting news. The Temur were nearby, it was certain, but none had actually been seen. The scouts had only found broken branches, snapped twigs, fresh footprints that the Mardu had not made. Surrak was sure to know where they were.

Three hours later, the Mardu army entered a valley that zigzagged up the mountain. A sudden chill fell over them and it began to snow. It was an unnatural, driving, insistent snow that coated the ground in minutes, even though they were far below the elevation where one would expect snow. His horde's mounts, horses and beasts alike, struggled to slog through the piled powder.

A few scouts returned from forays into the forest with little to no information. One of them caught a glance of a Temur shaman doing what looked like some kind of weather magic, but this was hardly a revelation to anyone.

Batar rode up next to Zurgo, his horse shifting uncomfortably in the snow. "My khan, we must turn back. This is absurd. We are riding into a trap."

Zurgo considered him for a moment. "A threat to the unity of this clan hides in these mountains. Would you not see it stamped out?"

Batar sneered. "The snow threatens our unity."

Zurgo sat up in his saddle and glared at Batar with all of his might. "A little snow should not threaten a Mardu warrior, Batar Throatslasher."

Batar huffed and rode away from Zurgo. After only fifteen feet Zurgo could not see him anymore.

A scout ran up to him, her whole body covered with a fine layer of snow. "There are Temur nearby. They were massing at the top of a hill, above us. Perhaps a hundred of them."

Zurgo's breath clouded in the unnatural cold.
"Tell the others to prepare for—"

The sounds of battle surrounded them. The clash of steel on steel, shouts of triumph and death, the great wet sounds of slain riding-beasts came from both behind him and in front of him in the near distance. He couldn't see far enough in the snow to know what was going on.

He dismounted and ran forward. Perhaps two hundred feet ahead of where he had been, fifteen fur-clad Temur stood surrounded by many Mardu corpses and more Mardu warriors. The Mardu closed in, and soon all of the Mardu had been slain, and then all was quiet. The snowfall stopped.

"What happened?" Zurgo bellowed.

Sounds of running came from behind Zurgo. He turned and saw a scout approaching him. "Two breaches," she said, panting. "This one here and another one five hundred feet back. Fifty Temur arranged in a column broke into our line, killed fifty-six, and disappeared back into the woods. We were not prepared to chase them. They left eleven corpses behind."

Zurgo turned back to the scene in front of him. "And what happened here?"

"The same," said a female orc who stood nearby with two bright red cuts across her face. She surveyed what was now a clearing full of corpses in the center of the Mardu marching line. "I'd say about fifty dead Mardu, and I only see eight Temur."

"You…and you," he said, pointing to each of them. "Show me where they came from. The rest of you, clean this up."

Both the scout and the orc led him to the edge of the valley, where each path led up a steep slope. Each was steeper than any Mardu horse could climb and only wide enough for perhaps five warriors across. The Temur had hit him twice in the dead center of his army with a small enough force to fit through that passage, and they had disappeared back into the woods like water. He squinted and held his hand above his eyes, but could not see any farther up either path.

When he returned to his lines, a scout was waiting for him. "What would you have us do?"

"Collect them," he said.
"Mass the army here and I will address them."

The scout scurried away. Nearby, three young warriors sat in the snow, talking.

"They came out of the woods, out of nowhere," one young man said, "and then were gone as quickly."

"My brother sprouted four arrows and died in front of me, and I could not reach his killer!" cried a second young man.

"This could happen five more times, and it would work just as well," said a young woman next to him. "We do not know this terrain."

Zurgo pushed his way through the crowd and swaggered up to them. They stopped talking, and stood.

"Tell me," Zurgo said. "Was this your first battle?"

All three looked up to him and nodded.

"And did you each kill an enemy?"

They nodded again and stood, their faces now expectant.

"You," Zurgo bellowed, pointing at one of them. "How did you slay your foe?"

Silence began to spread around them.

"I removed her head," he said, "with one clean cut."

"Headtaker," Zurgo decreed. He turned to the next, who trembled with wide eyes. "And you?"

They stood taller now. "I put three arrows in her chest," she said.

"Heartpiercer." Zurgo turned to the last.

"We had lost our weapons, and were wrestling," he said. "I crushed his throat with my bare hands."

"Neckwringer!" Zurgo bellowed.

The three of them bowed, each glowing. By then, much of the army had massed around him, and many warriors were filling in around the edges of what he could see.

Zurgo raised his sword to the sky. "To the warriors of the Mardu, and their victory!"

The horde cheered on command, but not as loudly as Zurgo had hoped.

"No!" came a shout from nearby, and Batar stepped out of the crowd. His face was red, his muscles were tight, and his eyes were angry. "These young warriors were right. You led us into this forest to punish Surrak, you say. But you do not know where he is. And this is bad ground. And this is unnatural snow. And yet we continue. You must have other reasons. And you have not spoken of them to us. And now many of us have died.
"I challenge you for the right to lead this clan."

All motion stopped. All eyes came to rest on the two of them.

Zurgo took his measure. The man was angry and stupid in his rage. Were he thinking about the good of the clan around him, he would not have done this. Zurgo had no choice now but to kill him.

"Fine." Zurgo shrugged and drew his sword.

The little man was defiant, a shield in each hand. Three great bone dragon claws were lashed to each shield. His weapons were impressive to the eye, but for a little human they would be heavy and slow.

"Come show us," Zurgo said, "how great a warrior you are."

#figure(image("010_Victory/06.jpg", width: 100%), caption: [Bloodsoaked Champion | Art by <NAME>], supplement: none, numbering: none)

Batar sneered. With his heavy weapons, he must have wished Zurgo would come to him. But Zurgo would not. Batar could not wait, lest he look weak. The man loped forward, holding both shields at his sides. Zurgo waited for him.

When he got close, he thrust at Zurgo with his right shield. But Zurgo dodged left, putting himself nearly behind the man. He cut for Batar's neck with the sword in his left hand, but Batar raised the hand that had just thrusted for Zurgo's chest with surprising speed. Zurgo's sword impacted on the man's forearm armor, denting it but doing no real damage.

Then the other shield came hurtling toward Zurgo from under Batar's raised right arm, one claw pointed at his face and the other at his groin. Zurgo spun away from the attack fast enough that it impacted only the armor on his leg and shoulder, tearing a few plates out of each. He kept moving further behind Batar, putting the man's awkwardly raised right shield further out of position. As he moved, he cocked his right arm for a punch.

Batar kept spinning to match him, guarding his face with his right shield. But the instant he let his guard drop, Zurgo's fist slammed into his chin. Batar slumped to the ground, groaning.
Zurgo grasped Batar by the neck and lifted him off the ground. Batar struggled some, dangling like a child's doll as he gasped for air. Zurgo ran his sword straight through Batar's chest, threw the limp form to the ground, and stomped his great foot on the man's head. Bright red gore splattered in the white snow around them.

He turned slowly, surveying all around him. "See what happens to those who challenge the khan of the Mardu!"

Varuk rode into the clearing. "It will not happen again," he said.

"I will kill anyone who dares!" Zurgo roared, thrusting his blood-soaked blade to the sky.

"No," Varuk said, dismounting. "Because there is nothing more to challenge." His eyes were hard and cold, and he stood straighter than ever before. In defiance, not submission.

Zurgo's eyes narrowed. "I am right here," he bellowed.

Varuk motioned with one arm toward what remained of the horde. "Look at them, Zurgo." His voice echoed throughout the valley. "They once served you. Now they only fear you. And that means that you are not truly their khan."

"You challenge my authority!" Zurgo bellowed.

"There is nothing to challenge," he said. He turned his whole body to face the horde. "The Mardu have no quarrel with Surrak! Return to our home at Wingthrone with me," Varuk said, "and we will no longer risk our lives in service of this one foolish orc's desire for revenge!"

The horde cheered its assent. Zurgo stared at them with wide eyes and a gaping jaw.

Varuk turned to look at Zurgo once again. There was a moment of what might have been remorse, but then there was nothing. Varuk climbed back onto his mount and rode back down the valley through the center of the army.

Zurgo stood and watched as his army turned away from him and slowly followed behind Varuk. And then they were only banners in the distance.

#figure(image("010_Victory/07.jpg", width: 100%), caption: [Nomad Outpost | Art by Noah Bradley], supplement: none, numbering: none)

The clan was gone, and Varuk was right.
They were not truly Zurgo's anymore. He only had one thing left to give the Mardu, and that was Vol's head lying motionless in the snow.

He looked down at his sword, which was still covered in Batar's glistening blood. He loped toward a corpse that had a dry shirt and ripped a piece of it off with his right hand…but stopped just short of wiping his blade clean. That blood was all he had left. He would not clean it until it had mixed with Vol's.

A nearby fur-clad body with three arrows stuck in it shifted and groaned. He padded over to it and held his dripping sword at the dying human's throat. "You," he said. "Tell me, when your people last saw the khan of the Mardu, where was he going?"

Her eyes bugged. She feebly pointed a finger further up the mountains. "The Spirit…" she croaked, "Dragon's…tomb," she heaved.

He plunged his sword into the woman's throat and she stopped moving. Zurgo returned to his mount, climbed into the saddle, and rode for the chasm.

Zurgo knew where the dragon's tomb was rumored to be, but it would be a dangerous trip. If Vol could turn into a dragon, though, it made some sense that he would seek it out.

The ground grew increasingly treacherous as he rode in the direction of the chasm where the dragon's body lay. He rode over several steep hills, and into the beginning of the night. Soon after twilight, his mount lurched and heaved, groaned and stopped, and he nearly fell off. He dismounted. The beast had missed a step and broken a front leg, which now bent in an unnatural direction. Great shards of bone protruded from its skin and shifted slightly as the thing yowled in pain.

Zurgo left it to die and continued alone.
https://github.com/KmaEE/ee
https://raw.githubusercontent.com/KmaEE/ee/main/notes.typ
typst
* Present findings in the introduction
* Motivate the question at the start
* Figures need to be labeled
* Retitle Group Theory
* Don't put too many tangents
* Footnotes vs appendix
* Determine speed vs. size experimentally
https://github.com/KireinaHoro/research-plan
https://raw.githubusercontent.com/KireinaHoro/research-plan/master/main.typ
typst
#let my-name = [<NAME>]
#let my-title = [Co-designing HW and OS for Efficient Datacenter Communication]
#let my-supervisor = [Prof. Dr. <NAME>]
#let my-second-advisor = [To Be Determined]
#let my-start-date = datetime(year: 2023, month: 12, day: 1)

#import "/systems-cover/systems-cover.typ": cover-page
#show: cover-page.with(
  doc-type: "doctoralplan",
  title: my-title,
  author: my-name,
  authorinfo: [
    STF G 222 \
    <EMAIL>
  ],
  contract-date: my-start-date,
  admission-date: datetime(year: 2021, month: 9, day: 20),
  supervisor: my-supervisor,
  second-advisor: my-second-advisor,
)

#import "/infk-doctoral-plan/infk-doctoral-plan.typ": document, todo, work-package, is-glossary
#show: document.with(
  student-name: my-name,
  student-number: [21-951-876],
  supervisor-name: my-supervisor,
  second-advisor-name: my-second-advisor,
  // XXX: do we count from the beginning of contract, or DD enroll?
  start-date: my-start-date,
  title: my-title,
)

#import "@preview/glossarium:0.4.1": make-glossary, print-glossary, gls, glspl
#show: make-glossary
#import "glossary.typ": glossary

#import "@preview/timeliney:0.0.1"

#let show-page-limits = false
#let lim(len) = if show-page-limits { text(blue, fractions: true)[(#len)] }

// replace microsecond with Greek letter
#let us_rgx = regex("(\d) us\b")
#show us_rgx: it => [#it.text.match(us_rgx).captures.first() #{sym.mu}s]

// comment function for mothy
#let mothy = todo.with(fill: blue, prefix: [Mothy])

= Research Proposal

// #mothy[example comment from mothy. *strong text* _emph text_]

== Abstract
#lim[max 1/2 page]

Datacenter communication patterns are becoming increasingly oriented towards smaller transactions with the recent trend of micro-services and serverless computing. However, datacenter communication systems have been traditionally designed around PCIe #gls("dma", long: false), an interconnect standard designed and highly optimized for high-throughput workloads.
The design of PCIe @dma disproportionately penalizes small transactions with various overheads, contradicting the trend of pursuing higher efficiency. In this research plan, we present our vision of achieving efficient datacenter communication through a co-design of hardware and operating system, utilizing emerging cache-coherent interconnect standards between CPUs and custom-built #glspl("nic", long: false).

We focus on three main aspects of building a successful solution: efficiency, deployability, and provable security. We pursue high efficiency by building a cache-coherent smart @nic with protocol-offloading capabilities, aiming to eliminate all existing communication overheads. We ensure deployability by designing our software and hardware with attention to the requirements of production environments, such as multi-tenancy, telemetry and instrumentation, and debugging. We target provable security by formally verifying critical software and hardware components we introduce, as well as how they interact with existing components.

#pagebreak(weak: true)

== Introduction
#lim[ca 1 page] <intro>

Virtually all workloads running in datacenters require communication with other systems in some way; one of the most commonly used paradigms for such communication is the @rpc. @rpc[s] are the cornerstone of virtually all networked systems, including micro-services, serverless computing, networked filesystems, and many more. Previous work on characterizing @rpc workloads~@seemakhupt_cloud-scale_2023 demonstrated that short @rpc invocations in the ballpark of 1 us make up a significant portion of all @rpc workloads. Despite the high frequency of short @rpc workloads, traditional datacenter @rpc architectures using PCIe @dma @nic[s] incur high latency and CPU overhead.
We identify three classes of overhead in the traditional PCIe @dma @rpc architecture: _protocol overhead_ from marshaling and unmarshaling, session maintenance, encryption and decryption, and more; _@dma overhead_ from the need to set up descriptor rings, various queues, and @dma buffers; and _schedule overhead_ from the need to multiplex CPU cores between the normal workload and handling events from the @nic via @irq[s], and to deliver packet data to the correct user space application. All of these overheads come on top of the CPU cycles spent executing the actual @rpc handler. Many of these overheads are fixed, not scaling with the size of the request and response, meaning that they disproportionately impact short invocations and significantly lower the efficiency of their processing. These overheads contradict the ever-increasing demand for higher processing efficiency in datacenters.

We recognize that the aforementioned overheads stem from the fundamental assumptions PCIe makes about the system architecture: requests are long; bus latency is high; a server has few cores; there are many tasks other than network processing. The modern datacenter architecture, however, looks vastly different from these traditional assumptions. In most hyper-scalers, entire servers are dedicated to handling @rpc requests rather than shared with other tasks. In addition, a single CPU server can come with over a hundred cores, making obsolete the mechanisms for multiplexing CPU time between multiple tasks. Emerging cache-coherent interconnects allow communication between the CPU and @nic with lower latency and higher throughput, without the need for huge batch sizes.

Our vision for resolving this problem is to co-design the @nic and OS, taking full advantage of cache-coherent interconnects. We will build a cache-coherent offloading smart @nic to free the CPU cores that run @rpc service handlers from all overheads due to network and protocol processing.
The @nic would be tightly integrated with various aspects of the OS, such as task scheduling and buffer management. Since most @rpc protocols are designed to be processed on the CPU, we also need to explore what types of network protocols are suitable for efficient implementation in hardware. For a successful solution with real-world impact, we also tackle important concerns of production environments such as multi-tenancy, inspectability, and accounting. As we integrate deeply and fundamentally with the OS, security is of utmost importance; we plan to employ various formal-methods approaches to verify the functional correctness and isolation properties of critical components and how they interact with the rest of the system.

#pagebreak(weak: true)

== Current State of Research in the Field
#lim[ca 2-3 pages]

We group prior works related to this thesis by topic and explain our vision for improving the status quo. We cite only the most relevant papers due to space limitations.

=== Cache-Coherent Interconnects

Various industry standards for cache-coherent interconnects in datacenters have been under development and are gradually seeing wider adoption. Examples in this field include OpenCAPI, Gen-Z, and CCIX; these protocols are based on different physical-layer standards and upper protocols. They have been superseded by and largely absorbed into CXL, which aims to be the one interconnect standard interoperable across vendors. While CXL has received much attention from researchers, adoption has been slow due to the lack of hardware implementations. Other notable coherent interconnects include TileLink for RISC-V systems as well as AMBA ACE from ARM, both of which are mainly implemented in low-power embedded systems rather than server-scale hardware. NVLink 2.0 from NVIDIA features cache coherence in high-performance hardware but is closed and proprietary.
As a result of various restrictions in existing protocols, research on cache-coherent interconnects is largely performed on experimental systems like Intel HARP and Enzian~@cock_enzian_2022. We plan to implement our prototype systems on Enzian, but are also open to adopting new CXL hardware suitable for our purposes as it becomes available.

=== Communication Pattern between CPU and @nic

The communication pattern between the CPU and a peripheral device has been extensively studied. Previous works such as hXDP and kPIO+WC have shown the high overhead of PCIe @dma for smaller transactions and attempt to mitigate it either by processing such transactions solely on the CPU or by using PCIe @pio for lower latency. Extra efficiency can be achieved with cache-coherent interconnects other than PCIe. Dagger~@lazarev_dagger_2021 builds an @fpga @nic for low-latency @rpc on the UPI/CCI-P implementation of Intel HARP, focusing mainly on using the UPI interconnect as a @nic interface to offload @rpc protocol processing. Previous work in the research group on @pio~@ruzhanskaia_rethinking_2024 showed that it is possible to achieve higher efficiency with @pio over cache-coherent interconnects. Our work builds on the basic @nic implementation in~@ruzhanskaia_rethinking_2024 towards a full solution for offloading @rpc processing.

Many works have long observed that a cache line is a better unit of transfer for workloads where small transfers are commonplace; notable examples are FastForward and Barrelfish UMP. Shinjuku~@kaffes_shinjuku_2019 and Concord~@iyer_achieving_2023 are more recent examples of employing this idea for low-latency scheduling via one polled cache line between the CPU and @nic. This observation coincides with findings from a recent study from Google~@seemakhupt_cloud-scale_2023, which highlights the importance of efficient small, #{sym.mu}s-scale transfers in datacenter @rpc due to their high frequency.
In this thesis, we target these small transfers to improve efficiency for a common case of datacenter communication.

/*
=== Offload-friendly Network Protocol Design <related-work-proto-design>

#todo[what are the related works in this field?]
*/

=== Integration with the OS

Improving the scheduling latency and efficiency of networking tasks is a topic extensively explored by previous work. Works like Shinjuku~@kaffes_shinjuku_2019, Caladan~@fried_caladan_2020, and DemiKernel~@zhang_demikernel_2021 improve tail latency by dedicating CPU cores to polling @nic contexts, using various kernel-bypass mechanisms to improve efficiency. More recently, Wave~@humphries_wave_2024 explores offloading scheduling policies to dedicated, smart #[@nic]-like @ipu[s] while maintaining low dispatch latency with @pio mechanisms. We believe that with techniques like lazy updates, enabled by cache-coherent interconnects, we can manipulate the internal state of existing OS schedulers and achieve a more efficient and ergonomic integration.

Buffer management is an important topic for offloading @rpc to smart @nic[s]. Zerializer~@wolnikowski_zerializer_2021 passes memory _arenas_ containing @rpc objects between the @nic and CPU over PCIe to achieve zero-copy serialization and deserialization; ProtoAcc~@karandikar_hardware_2021 from Berkeley adopts a similar approach over a tightly-coupled co-processor interface. We may be able to explore further in this direction with customized cache-line-level protocols on cache-coherent interconnects.

=== Deployability in Production Environments

Telemetry data is crucial for analyzing performance and efficiency issues in complex distributed systems in datacenters. Dapper~@sigelman_dapper_2010 from Google is a distributed tracing platform for monitoring various metrics like @rpc tail latency, network usage, etc. Fathom~@qureshi_fathom_2023 further integrates with Dapper, providing low-level network stack instrumentation for all connections in Google datacenters.
The ubiquity of tracing and instrumentation needs in production datacenters stands in stark contrast to many research prototype systems that treat tracing as the _exception_ rather than the _norm_: disregarding traced endpoints by excluding them from the fast path limits the deployability of these systems. We intend to invest in this direction to make our system deployable in production environments.

As of today, little attention is paid to multi-tenancy support for smart @nic[s], mainly because most cloud providers deploy them as @ipu[s] for offloading work from the hypervisor host. FairNIC~@grant_smartnic_2020 discussed performance isolation for the Cavium LiquidIO smart @nic. #box[OSMOSIS]~@khalilov_osmosis_2024 introduces a centralized hardware scheduler for multiplexing processing units on the smart @nic across multiple @sriov virtual functions. S-NIC~@zhou_smartnic_2024 focuses on security isolation between network functions on smart @nic[s] through virtualization of the data plane and accelerators. We believe that tight integration between the CPU and smart @nic over cache-coherent interconnects poses new challenges for virtualization and multi-tenancy, since conventional PCIe #[@sriov]-style virtualization technologies do not naively apply here.

=== Security and Verification

#[@rpc]-offloading smart @nic[s] are a potential new single point of failure introduced into datacenter systems; their critical nature warrants extensive verification effort for their functional correctness. Conventional hardware verification focuses on @abv~@witharana_survey_2022 against properties specified in various logic domains. These properties can either be written by hand or generated from higher-level behavioral models of the final system. They can then be checked in an automated fashion with simulation or formal methods.
We plan to integrate with prior models developed in the research group for cache-coherent interconnects to derive properties, and employ standard techniques to prove the functional correctness of our custom hardware.

_Specification synthesis_ is a method to extract behavioral specifications from a black-box component. Schuh et~al.~@schuh_cc-nic_2024 proposed a method to test cache-coherence protocol hardware implementations with partial specifications. Their method can be extended to synthesize full specifications for model-checking other components in a fully integrated system. We will work closely with the authors towards deriving specifications for all hardware components in the system.

== Goals of the Thesis #lim[ca 2-3 pages] <goals>

First and foremost, we need a base prototype system to demonstrate the feasibility of our approach to higher efficiency; we explain this in @goals-prototype. After attaining the prototype, we then have to tackle various real-world problems whose solutions would allow adoption of our system in production environments; we explain how we achieve this in @goals-deployability. Finally, we explore how we can prove that the resulting system is secure and reliable with formal methods in @goals-security.

=== Base System <goals-prototype>

The goal of the base prototype @rpc offloading system is to remove *all* overheads in @rpc processing from the host CPU cores. We use latency as a _proxy_ for efficiency: the lower the latency we are able to achieve, the higher the efficiency we will attain. Our system fuses the @nic and the OS with a smart @nic implemented on a coherently-attached @fpga to eliminate the three main types of overhead in conventional @rpc processing: _protocol overhead_, _@dma overhead_, and _scheduling overhead_.

To tackle _protocol overhead_, we offload @rpc protocol processing operations such as encryption/decryption, compression/decompression, and argument marshaling/unmarshaling to the @fpga as hardware accelerators.
We start with a very simple protocol, @oncrpc over UDP, which is easy to implement in hardware but still has popular applications built on it, like @nfs. We can make use of EasyNet, the HLS TCP accelerator developed in the Systems Group, to implement more complex @rpc protocols. Encryption and compression are orthogonal to the serialization format, and off-the-shelf implementations of common cryptography and compression IP cores allow quick integration into our prototype system.

We alleviate _@dma overheads_ by implementing the 2Fast2Forward message-passing protocol as discussed in our @pio paper~@ruzhanskaia_rethinking_2024 on top of ECI, the cache-coherence protocol in Enzian. Specifically, the CPU core polls in userspace on a pair of special control cache lines backed by @fpga memory for receiving packets. The @fpga blocks reload requests for these cache lines until a packet arrives, or a specific timeout occurs. This removes the overhead of setting up @dma descriptors and handling @irq[s] for transfers between the CPU and the @nic. Kernel bypass from userspace mitigates the overhead of traversing many queues and protection boundaries.

We make use of the latency and CPU cycle advantage of the cache-coherence protocol to deeply integrate the OS and the smart @nic and free the CPU from any _scheduling overhead_. This allows the smart @nic to make scheduling decisions on where to steer a packet: by placing the relevant context of the OS scheduler in @fpga memory, the smart @nic receives on-demand updates whenever a core changes state. The smart @nic can also dispatch @rpc requests to be handled on a specific core: the CPU core would load a cache line from the smart @nic containing all necessary information to start executing the @rpc workload directly, including pointers to @rpc handler code, data, unmarshaled arguments, and protection domain information.

We set an ambitious target for end-to-end @rpc latency of *1 us*.
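To make the decoding task concrete: an @oncrpc call message starts with a fixed sequence of big-endian 32-bit words, followed by 4-byte-aligned opaque credential and verifier fields. The following Python sketch performs, in software, the header unmarshaling that the @fpga pipeline would implement in hardware; it is purely illustrative, and handling of the procedure arguments themselves is elided.

```python
import struct

RPC_CALL = 0  # msg_type value for a call message (RFC 5531)

def unmarshal_call_header(buf: bytes):
    """Decode the fixed prefix of an ONC RPC call message.

    Every field is a big-endian 32-bit word and opaque data is padded to
    4-byte boundaries -- the regularity that makes this format a good
    first target for a hardware decoding pipeline.
    """
    xid, msg_type, rpcvers, prog, vers, proc = struct.unpack_from(">6I", buf, 0)
    if msg_type != RPC_CALL:
        raise ValueError("not a CALL message")
    if rpcvers != 2:
        raise ValueError("unsupported ONC RPC version")
    fields = {"xid": xid, "prog": prog, "vers": vers, "proc": proc}
    off = 24
    # Credential and verifier are (flavor, length, body) opaque blocks;
    # variable-length fields like these are what motivates arena-based
    # buffer management between the NIC and the CPU.
    for name in ("cred", "verf"):
        flavor, length = struct.unpack_from(">2I", buf, off)
        off += 8
        fields[name] = (flavor, buf[off:off + length])
        off += (length + 3) & ~3  # XDR pads opaque data to 4 bytes
    return fields, off  # `off` is where the procedure arguments start
```

A decoder like this maps naturally onto a pipelined hardware implementation, since every field offset up to the first variable-length field is static.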
Preliminary results~@ruzhanskaia_rethinking_2024 show that we can send and receive Ethernet frames from the CPU over the ECI cache coherence protocol in around 800 ns, making a promising case for our latency target.

=== Deployability <goals-deployability>

Apart from evaluating our system with synthetic benchmarks, we plan to show deployability of the system by porting existing workloads onto it. We can implement an @nfs server with the @oncrpc offloading support. Dandelion is a serverless scheduler and runtime developed in the Systems Group; the project would benefit from an offloaded @rpc smart @nic for implementing communication between worker nodes. We plan to work with their team such that the communication subsystems in Dandelion are built on our system. DeathStarBench~@gan_open-source_2019 is the _de facto_ standard for benchmarking micro-service systems and would be a good candidate as well. This process will expose practicality issues in our design and implementation, allowing us to further improve deployability for production systems.

The smart @nic we build needs to allow telemetry collection to help identify possible performance bottlenecks and efficiency issues. Hardware implemented on @fpga[s] is not as easily instrumented as software. We implement flexible and customizable event counters for every part of the packet processing pipeline, which allow a detailed breakdown of the latency introduced by the smart @nic. We design interfaces for configurable transaction-level tracing to allow higher-level analysis, profiling, and debugging of the application. We plan to build tooling to analyze the various sources of telemetry data and provide the same level of insight as in production environments. This will involve working with our industry partners to figure out the exact requirements real production environments have.
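As a software model of the counter-based breakdown, the sketch below (Python; the stage names are illustrative, not the final pipeline) shows how per-stage packet and cycle counters combine into an average per-stage latency report:

```python
from collections import Counter

class StageCounters:
    """Model of per-stage event counters in the packet pipeline.

    Each stage increments a packet counter and accumulates cycles spent;
    dividing accumulated cycles by the packet count attributes an average
    latency to each stage, giving the per-stage breakdown.
    """

    def __init__(self, stages):
        self.stages = list(stages)
        self.packets = Counter()
        self.cycles = Counter()

    def record(self, stage, cycles):
        self.packets[stage] += 1
        self.cycles[stage] += cycles

    def breakdown(self):
        # Average cycles per packet, in pipeline order; stages that saw
        # no packets are omitted.
        return {s: self.cycles[s] / self.packets[s]
                for s in self.stages if self.packets[s]}

counters = StageCounters(["rx_mac", "rpc_decode", "schedule", "dispatch"])
for _ in range(3):           # pretend three packets traversed the pipeline
    counters.record("rx_mac", 40)
    counters.record("rpc_decode", 120)
    counters.record("dispatch", 60)
```

In hardware these counters would be free-running registers sampled over a control interface; the division happens in the analysis tooling, not on the @fpga.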
Effective multi-tenancy support requires _multiplexing_ of processing elements as well as proper _performance isolation_ to avoid unwanted interference and fairness issues between tenants on the same smart @nic. We need to define clear interfaces between the @rpc application and the software runtime to allow for clean isolation. Once we attain the prototype system as described in @goals-prototype, we need to implement virtualization mechanisms on top of it to multiplex packet processing pipelines and on-chip memory. We also need to figure out scheduling policies to ensure fairness among tenants. In addition, we have to enforce performance isolation for traffic from different tenants on the same coherent interconnect; many open questions exist here.

=== Security <goals-security>

Security problems are also deployability problems: smart @nic[s] sit at the choke point between a server and the network, warranting high assurance in order to be deployed at large scale. First and foremost, we need to verify the smart @nic's _functional correctness_. We first need to formally specify the correct behavior of the smart @nic and the OS, by defining _contracts_ for each part of the system. Some of these contracts can be automatically derived from protocols imposed by existing components of the system, for example a model of the cache-coherence protocol. We will also need to specify some parts of the system by hand.

After acquiring specifications for each component, we then have to prove that the implementation upholds the formal contract. We verify hardware components with various @abv techniques and software components with program verifiers. We can then compose all specifications of components, abstract away implementation details, and prove the higher-level correctness property. Through specifying and verifying our prototype system, we can then derive design principles and verification techniques that can generalize to other systems in a broader domain.
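To illustrate what composing contracts buys us, the toy Python sketch below checks pre- and postconditions at runtime. This is only a stand-in for the actual plan, where contracts are formal specifications discharged by @abv tools and program verifiers; both the components and the properties here are hypothetical.

```python
class Contract:
    """Toy pre/postcondition contract for one component."""

    def __init__(self, requires, ensures):
        self.requires, self.ensures = requires, ensures

    def wrap(self, fn):
        def checked(x):
            assert self.requires(x), "precondition violated"
            y = fn(x)
            assert self.ensures(x, y), "postcondition violated"
            return y
        return checked

# Hypothetical components: a decoder whose contract guarantees a
# well-formed request, and a dispatcher whose contract assumes one.
decode = Contract(
    requires=lambda raw: isinstance(raw, bytes) and len(raw) > 0,
    ensures=lambda raw, req: set(req) == {"proc", "args"},
).wrap(lambda raw: {"proc": raw[0], "args": list(raw[1:])})

dispatch = Contract(
    requires=lambda req: set(req) == {"proc", "args"},  # = decode's postcondition
    ensures=lambda req, core: isinstance(core, int) and 0 <= core < 4,
).wrap(lambda req: req["proc"] % 4)

# Because decode's postcondition implies dispatch's precondition, the
# composition yields an end-to-end property: every non-empty packet
# maps to a valid core id.
```

The formal version of this argument replaces runtime assertions with proofs over each component and a composition step that abstracts away implementation details.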
== Progress to Date #lim[ca 1/2 page] <progress-to-date>

The Doctoral Student has previously finished his master's thesis on porting the PsPIN smart @nic platform to Xilinx @fpga[s], which are from the same vendor as the @fpga[s] in Enzian. FPsPIN~@schneider_fpspin_2024 combines the Corundum @fpga @nic platform with PsPIN to create a prototype Ethernet smart @nic. This work helped him acquire the necessary skills for @fpga development and for building Ethernet-based smart @nic[s].

The Doctoral Student has partaken in building a prototype @pio @nic for the @pio paper~@ruzhanskaia_rethinking_2024 using SpinalHDL, a hardware description language embedded in Scala. The basic @nic prototype passes raw Ethernet frames between a CPU core and the @fpga attached over ECI using a variant of the 2Fast2Forward protocol. It serves as a proof-of-concept for message-passing between CPU and device, as well as the foundation for the @rpc smart @nic prototype as we discussed in @goals-prototype.

The Doctoral Student has finished a basic smart @nic that can offload unmarshaling of @oncrpc requests based on the @pio @nic prototype; implementation of the supporting system software is in progress. Further supplied with the @rpc encoding pipeline, this @nic will be able to demonstrate offloading of very simple #[@oncrpc]-based applications like a calculator service. The rest of the prototype system builds upon this preliminary demo.

== Detailed Work Plan #lim[ca 1 page] <work-pkgs>

We list the exact work packages for each critical aspect of concern, as previously detailed in @goals.

=== Base System <work-pkgs-base-system>

#work-package([@oncrpc offloading], [6 months]) <basic-nic>

Build upon the ECI @nic described in @progress-to-date and offload @oncrpc marshaling and unmarshaling in hardware. The CPU core should be able to start the @rpc handler with information from a cache line, and be able to return the results by writing to a cache line.
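The dispatch mechanism can be viewed as a fixed-layout descriptor filling one 128-byte ECI cache line. The Python sketch below packs and unpacks a hypothetical layout (handler pointer, argument-data pointer, protection-domain id, and up to 13 inline 64-bit arguments); the actual field layout is yet to be designed.

```python
import struct

CACHE_LINE = 128  # an ECI cache line is 128 bytes

# Hypothetical layout: handler pointer, argument-data pointer,
# protection-domain id, inline argument count, then the unmarshaled
# 64-bit arguments themselves.
_HDR = struct.Struct("<QQII")  # 24 bytes, leaving room for 13 arguments

def pack_dispatch(handler, data_ptr, pd_id, args):
    assert _HDR.size + 8 * len(args) <= CACHE_LINE, "too many inline args"
    body = _HDR.pack(handler, data_ptr, pd_id, len(args))
    body += struct.pack(f"<{len(args)}Q", *args)
    return body.ljust(CACHE_LINE, b"\0")  # pad to a full cache line

def unpack_dispatch(line):
    handler, data_ptr, pd_id, n = _HDR.unpack_from(line, 0)
    args = list(struct.unpack_from(f"<{n}Q", line, _HDR.size))
    return handler, data_ptr, pd_id, args
```

Because the whole descriptor fits in one cache line, the worker core obtains everything it needs with a single cache-line fill, keeping the dispatch path off the @dma machinery.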
This includes porting a simple demo application that builds on @oncrpc to be accelerated by the smart @nic. We let the Linux kernel schedule the userspace @rpc application naively. During development, we specify expected behavior and verify correctness of the hardware we build with @abv paradigms. These specifications will facilitate later formal verification efforts in @specification.

#work-package([Integrated buffer management], [3 months]) <buffer-mgmt>

A solution for allocating memory in the decoding pipeline is needed for variable-length fields in the @rpc message. We exchange memory arenas between the CPU and smart @nic to allow decoding and encoding such fields. This will enable more complex @oncrpc services like an @nfs server; we tackle this in @real-systems.

/* #work-package([Protocol design for HW offloading], [to be determined]) #todo[remove this work package? otherwise, find proper related work in @related-work-proto-design] */

#work-package([Integrated task scheduling], [3 months]) <scheduling>

Integrate with the Linux scheduler to steer incoming @rpc requests to worker cores that are not busy, replacing the naive approach in @basic-nic. The smart @nic should keep track of the states of worker cores with help from internal scheduler states, acquired over the cache-coherent interconnect. We should be able to show improvements in tail latency and efficiency of core utilization.

=== Deployability <work-pkgs-deployability>

#work-package([Implement real systems with our smart @nic], [3 months]) <real-systems>

Start implementing real workloads with the demo system implemented in @basic-nic, while integrating @buffer-mgmt and @scheduling as they become ready. Candidates include an @nfs server, a communication subsystem for Dandelion, and micro-service benchmark suites like DeathStarBench~@gan_open-source_2019. We might need to implement other serialization protocols like ProtoBuf in @fpga or integrate existing IP cores.
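The steering decision in the task-scheduling package above reduces to a small policy over mirrored core states. The Python sketch below uses a placeholder policy (idle core first, otherwise shortest queue); the real smart @nic would read these states from scheduler context mirrored over the cache-coherent interconnect, and the eventual policy is an open design question.

```python
class Steering:
    """Placeholder model of the smart NIC's steering decision.

    `idle` and `queue_len` stand in for OS scheduler state that would be
    mirrored into FPGA memory over the coherent interconnect.
    """

    def __init__(self, num_cores):
        self.queue_len = [0] * num_cores
        self.idle = set(range(num_cores))

    def steer(self):
        """Pick a core for an incoming RPC request."""
        if self.idle:
            core = min(self.idle)  # any idle core works; be deterministic
            self.idle.discard(core)
        else:
            # All cores busy: pick the shortest queue (lowest index wins ties).
            core = min(range(len(self.queue_len)), key=self.queue_len.__getitem__)
        self.queue_len[core] += 1
        return core

    def complete(self, core):
        """Called when a core finishes a request."""
        self.queue_len[core] -= 1
        if self.queue_len[core] == 0:
            self.idle.add(core)
```

The tail-latency benefit comes from never parking a request behind a busy core while an idle one exists; the shortest-queue fallback is only one of several plausible policies.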
#work-package([Telemetry and instrumentation], [6 months]) <telemetry>

Basic telemetry through performance and event counters should already be integrated as the system is built in @basic-nic. We need to further explore what to collect for more in-depth tracing by talking to industry partners. Trace collection should mostly be done in hardware on the @fpga to avoid _degrading_ traced endpoints' performance. We then need to build tools to analyze the collected traces and provide insight.

#work-package([Multi-tenancy and virtualization], [6 months]) <multi-tenancy>

Map a @pio region into each VM to allow multiplexed access to the smart @nic. Implement multiple partitions on the hardware scheduler from @scheduling to allow dispatching to a specific group of cores running the tenant VM. The hardware scheduler in the smart @nic should be able to _wake up_ a specific VM by contacting the hypervisor scheduler. We reason about security and performance isolation properties between multiple tenants as part of @specification.

=== Security <work-pkgs-security>

#work-package([Specify all hardware and software components], [continuous target]) <specification>

We reuse specifications and models of existing components in the system, for example the cache-coherence components in the CPU and @fpga, from previous work in the research group on conformance testing and specification synthesis. We manually specify new components in the system. Many components should already be partially specified during development and testing in @basic-nic; we bridge the gap to allow all specifications to be composed to prove the collective correctness property. The specification process happens along with system design and implementation and thus does not have a fixed time goal.

#work-package([Prove that implementations match specification], [continuous target]) <correctness-proof>

We employ multiple paradigms to check that the implementation of each component matches the specification acquired in @specification.
We utilize conventional @abv methods like simulation and symbolic execution, as well as blackbox testing methods developed in the research group. The proof process happens along with specification extraction and thus does not have a fixed time goal.

== Publication Plan

The basic smart @nic prototype described in @work-pkgs-base-system, together with one real system implementation in @real-systems, should warrant one paper in top systems conferences like ASPLOS. Virtualization and multi-tenancy support is complicated enough for a separate publication in similar venues. Telemetry and instrumentation work from @telemetry coupled with network-level characterization can result in a good paper in conferences like NSDI. Specifying (@specification) and verifying (@correctness-proof) all system components can lead to a paper in formal methods conferences like FM. We might have to develop new specification and verification methods, which would result in a second paper there.

== Time Schedule #lim[ca 1/2 page]

#{
  show link: emph
  set text(size: 10pt)
  timeliney.timeline(
    show-grid: true,
    grid-style: (stroke: (
      dash: "dotted",
      thickness: .7pt,
      paint: rgb("#aaaaaa"),
    )),
    milestone-line-style: (stroke: (dash: "dashed")),
    {
      import timeliney: *

      let num-years = 5
      let start-year = 2023
      headerline(..range(num-years).map(n => group(([*#{start-year+n}*], 4))))
      headerline(..range(num-years).map(_ => group(..range(4).map(n => "Q" + str(n + 1)))))

      let actual-work = (stroke: 2pt + gray)
      let depend = (stroke: (paint: red, dash: "dotted"))

      taskgroup(title: link(<progress-to-date>, [*_Preparation_*]), {
        task("FPsPIN", (from: 1, to: 3.25, style: actual-work))
        task("ECI NIC", (from: 3.75, to: 6.25, style: actual-work))
      })
      taskgroup(title: [*@work-pkgs-base-system*], {
        task("RPC offload", (from: 6, to: 8, style: actual-work))
        task("Task sched.", (from: 8, to: 9, style: actual-work))
        task("Buffer mgmt.", (from: 9, to: 10, style: actual-work))
      })
      taskgroup(title: [*@work-pkgs-deployability*], {
        task("Real systems", (from: 10, to: 11, style: actual-work))
        task("Instrumentation", (from: 11, to: 13, style: actual-work))
        task("Multi-tenancy", (from: 13, to: 15, style: actual-work))
      })
      taskgroup(title: [*@work-pkgs-security*], {
        task("Specification", (from: 6, to: 10, style: actual-work), (from: 13, to: 16, style: actual-work))
        task("Verification", (from: 10, to: 12, style: actual-work), (from: 16, to: 18, style: actual-work))
      })

      milestone(at: 3.75, align(center, [
        *Start of doctorate*\
        Dec 2023
      ]))
      milestone(at: 7.25, align(center, [
        *Today*\
        Oct 2024
      ]))
      milestone(at: 11, [_ASPLOS #{sym.quote.single}26_#h(2em)])
      milestone(at: 12, [_FM #{sym.quote.single}26_])
      milestone(at: 13, [#h(2em)_NSDI #{sym.quote.single}26_])
      milestone(at: 15, [_ASPLOS #{sym.quote.single}27_])
      milestone(at: 18, [_FM #{sym.quote.single}27_])
    }
  )
}

== References #lim[ca 1 page]

#{
  set text(size: 10pt)
  bibliography("citations.bib", title: none, style: "ieee")
}

== Glossary

#{
  set text(size: 10pt)
  print-glossary(disable-back-references: true, glossary)
}

#pagebreak(weak: true)

= Teaching Responsibilities

The research group generally expects every doctoral student to teach one course per semester. In case this is not possible due to the lack of a suitable teaching position, he or she is expected to take on extra administrative responsibilities as described in @other-duties.

Since the beginning of doctoral studies, the Doctoral Student has served as the head teaching assistant in the following two courses:

#table(
  columns: (1fr, 5fr),
  stroke: none,
  align: (right, left),
  [Spring 2024], [_Advanced Operating Systems_],
  [Autumn 2024], [_Systems Programming and Computer Architecture_],
)

These two courses repeat annually. It is expected that the Doctoral Student will continue teaching these two courses for the remaining duration of the doctoral studies to maximize reuse of experience.
= Other Duties <other-duties>

The research group offers bachelor's theses, master's theses, and semester projects to students at ETH. Doctoral students in the research group are expected to supervise student projects related to their research area. Since the beginning of doctoral studies, the Doctoral Student has supervised the following theses and projects:

#table(
  columns: (1fr, 3.5fr, 1.5fr),
  stroke: none,
  align: (right, left, right),
  [Spring 2024], [_Hybrid FPGA-Accelerator Encryption and Compression_], [<NAME>],
  [Autumn 2024], [_USB Subsystem Support for an OS Course_], [<NAME>],
)

The Doctoral Student will continue to supervise future students in related research fields provided that the overall research and studying workload allows.

The Systems Group has several general administrative tasks, benefiting all of its research groups, that are handled by doctoral students in the group. These tasks are assigned to doctoral students on a semester basis for a fair distribution of workload. Since the beginning of doctoral studies, the Doctoral Student has been in charge of the following tasks:

#table(
  columns: (1fr, 5fr),
  stroke: none,
  align: (right, left),
  [Autumn 2024], [Organizing the _Systems Group Lunch Seminars_],
)

Further administrative tasks may be assigned to the Doctoral Student based on research, teaching, and studying workload, as agreed upon with the supervising professor.

#pagebreak(weak: true)

= Study Plan

The D-INFK department stipulates that every doctoral student shall finish 12 ECTS credits during the studies, with #link("https://inf.ethz.ch/doctorate/doctoral-study-program/credit_points.html", [requirements on recognition of credits]).
Since the beginning of doctoral studies, the Doctoral Student has partaken in the following courses:

#table(
  columns: (1fr, 4fr, 1fr),
  stroke: none,
  align: (right, left, right),
  [Spring 2024], [_Diskutieren und Präsentieren auf Deutsch; B2-C1_], [2 ECTS],
  [Autumn 2024], [_Kommunizieren auf Deutsch; B2_], [2 ECTS],
)

A provisional schedule of lectures the Doctoral Student will partake in is as follows. Since course offerings are subject to variation each year, more courses than necessary are listed.

#table(
  columns: (1fr, 4fr, 1fr),
  stroke: none,
  align: (right, left, right),
  [Spring 2025], [_Financial Engineering_ (UZH)], [6 ECTS],
  [Autumn 2025], [_Corporate Finance_ (UZH)], [3 ECTS],
  [Spring 2026], [_Advanced Banking_ (UZH)], [6 ECTS],
  [Autumn 2026], [_Asset Management: Advanced Investments_ (UZH)], [3 ECTS],
)

The Doctoral Student has verified that the provisional schedule satisfies the credit recognition requirements from the department.
https://github.com/k0tran/typst
https://raw.githubusercontent.com/k0tran/typst/sisyphus/vendor/hayagriva/CHANGELOG.md
markdown
# 0.5.1

- Fixed spacing around math blocks
- Fixed title case formatting next to `{verbatim}` blocks and apostrophes

# 0.5.0

- **Breaking change:** The API for archived styles has changed.
- **Breaking change:** The names of the GB/T 7714 family of styles have been corrected to `gb-7714-...` from `gb-7114-...`.
- **Breaking change:** The reexported `TypeErrorKind` and `ParseErrorKind` enums in `biblatex` have added variants and become non-exhaustive.
- Date parsing will not panic anymore (https://github.com/typst/typst/issues/2553).
- Anthos entries now properly recognize their parent (#72, https://github.com/typst/typst/issues/2572). Thanks, @zepinglee!
- Proceedings titles will now be printed correctly (#78). Thanks, @vtta!
- Citation numbers will now collapse if the style requests it.
- Escaping in format and chunked strings now works (https://github.com/typst/typst/issues/2669).
- The old behavior of the alphanumeric style has been restored.
- Bibliographies now forcibly print the alphanumeric `citation-label` instead of the `citation-number` if the cite only printed the former (and vice-versa; https://github.com/typst/typst/issues/2707).
- We dropped the dependency on `rkyv` in favor of code generation in a test. This should resolve build problems on some platforms.
- The retrieval of the volume variable is now more robust (#82). Thanks, @mpmdean!
- Fixed delimiter order for contributors (#73). Thanks, @zepinglee!
- Page ranges can now be strings (#83).
- Page ranges will now use the correct delimiter, even if printed with `cs:text`.
- Fixed a bug with the suppression of empty groups (https://github.com/typst/typst/issues/2548).
- Bumped `citationberg` to solve a CSL locale fallback issue that affected https://github.com/typst/typst/issues/2548
- Bumped the `biblatex` crate to 0.9.0 to fix BibLaTeX parsing bugs (e.g.
https://github.com/typst/biblatex/issues/41, https://github.com/typst/biblatex/issues/33, https://github.com/typst/biblatex/issues/40, https://github.com/typst/typst/issues/2751, #81) # 0.4.0 ## Breaking changes: Hayagriva now uses the [Citation Style Language](https://citationstyles.org) to encode formatting styles. This means that Hayagriva's own formatting styles have been deprecated. ### For users: - The YAML input format has changed. - Titles and formattable strings have been merged into one type. All formattable strings can have a shorthand now. - Formattable Strings do not have `title-case` and `sentence-case` keys anymore. `shorthand` has been renamed to `short`. To prevent changes of the text case of formattable strings, you can use braces. Enclose a part of a formattable string (or `short`) in `{braces}` to print it as-is. - The fields `doi`, `isbn`, and `issn` have been moved to `serial-number` which can now be a dictionary containing these and arbitrary other serial numbers like a `pmid` (PubMed ID) and `arxiv` (ArXiv Identifier). - The `tweet` entry type has been renamed to `post`. - All numeric variables can now also contains strings. Numbers can have string affixes. Refer to the updated [file format](https://github.com/typst/hayagriva/blob/main/docs/file-format.md) docs for examples. ### For developers: - To use a CSL style, you can either supply a CSL file or use an archive of provided styles with the `archive` feature. - The `from_yaml_str` function will now return the new `Library` struct, with the entries within. - The `Database` struct has been replaced by the easier to handle `BibliographyDriver`. - We switched from `yaml_rust` to `serde_yaml`. The `Entry` now implements `serde`'s `Serialize` and `Deserialize` traits. Hence, the `from_yaml` and `to_yaml` functions have been deleted. - Brackets are no longer individually overridable. Instead, use the new `CitePurpose`. - `Entry::kind` has been renamed to `Entry::entry_type`. 
- The citation styles `AuthorTitle` and `Keys` have been removed but can be realized with CSL. This release fixes many bugs and makes Hayagriva a serious contender for reference management. ## Other changes - We added the entry types `Performance` and `Original`. - We added the field `call-number`. # 0.3.2 Fixes a title case formatting bug introduced in the previous release. # 0.3.1 _Bug Fixes:_ - Added an option to turn off abbreviation of journals (thanks to @CMDJojo) - Fixed bugs with title case formatting (thanks to @jmskov) - Fixed off-by-one error with dates in APA style (thanks to @bluebear94) - Fixed supplements in the Alphanumeric and AuthorTitle styles (thanks to @lynn) - Fixed bugs with sentence case formatting - Fixed `verbatim` option - Fixed terminal formatting - Fixed some typos (thanks to @kianmeng and @bluebear94) # 0.3.0 *Breaking:* - Updated to `biblatex` 0.8.0 *Bug Fixes:* - Fixed string indexing for titles, removed panic - More permissive BibLaTeX parsing # 0.2.1 *Bug Fixes:* - Fixed APA bibliography ordering # 0.2.0 *Breaking:* - Replaced `NoHyphenation` formatting with `Link` formatting - Switched to newest BibLaTeX (which is part of the public API) *Bug Fixes:* - Fixed IEEE bibliography ordering - Fixed A, B, C, ... suffixes for Author Date citations - Removed `println` calls # 0.1.1 🐞 This release fixes the documentation of the CLI in the `README.md` file. ✨ There are new options for bracketed citations in the CLI. ✅ No breaking changes. # 0.1.0 🎉 This is the initial release!
https://github.com/FA555/ignite
https://raw.githubusercontent.com/FA555/ignite/main/readme.md
markdown
MIT License
Ignite is a sophisticated index document generating tool, suitable for open-book exams, documentation, and similar scenarios. Readme [中文](readme_zh.md) | English ## Prerequisites - Python 3.4+ (with pip) - Typst ## Usage ### Index File Syntax Begin by writing the index content into `data/index.txt`. The index file consists of several lines. Except for blank lines (which will be ignored), each line should conform to one of two formats: 1. Starting with `<` and ending with `>`, this denotes a chapter division. The content inside the angle brackets will be considered the chapter title. 2. Composed of any content followed by a number at the end, representing an entry. The number indicates the page number of the entry. ### Generating the Document ```bash python3 -m pip install -r requirements.txt python3 main.py typst compile index.typ # This will generate index.pdf ``` For adjustments and formatting details, please review and modify `index.typ` as needed. ## Example We have provided a set of examples under the `example/` directory. You can generate an example document with the following command: ```bash typst compile index.typ --input data-dir=example # This will generate index.pdf ``` ![Example Document P1](img/1.png) ![Example Document P2](img/2.png)
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/text/linebreak-obj_00.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": * #show: test-page // Test punctuation after citations. #set page(width: 162pt) They can look for the details in @netwok, which is the authoritative source. #bibliography("/assets/files/works.bib")
https://github.com/polarkac/MTG-Stories
https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/007%20-%20Theros/011_Building%20Toward%20a%20Dream%2C%20Part%202.typ
typst
#import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
  "Building Toward a Dream, Part 2",
  set_name: "Theros",
  story_date: datetime(day: 04, month: 12, year: 2013),
  author: "<NAME>",
  doc
)

My love—

I have lived to see my worst nightmares come true. No, not true... you and Lara still live, and there is still hope. There is still hope.

Today is the day we met with the leonin to sign the treaty, to pledge a truce between our people and those ferocious beast-men who have plagued and assaulted us for so long. Today is the day that the kingdom of Iretis has come to the precipice of destruction.

We met near the height of the sun, at a pavilion set up by both sides, far away from both our city and the leonin tribal areas. Udaen had spent many weeks negotiating the specifics of how many troops each side was allowed to bring, and how the ceremony would progress. Udaen and I were at the table, along with two of my bodyguard, Chelta and Vanin. There were six leonin tribal elders with us, although the only obvious sign of their age was the number of their scars. They were warriors all, and they looked at the bodyguards with disdain.

#figure(image("011_Building Toward a Dream, Part 2/01.jpg", width: 100%), caption: [Coordinated Assault | Art by <NAME>sell], supplement: none, numbering: none)

It is one thing to fight cats in the heat of battle, but to be up close to these feral animals was unnerving. Taller, bigger, thicker than the mightiest of men, with claws that could slice a man from throat to stomach, and sharp teeth as large as a finger. Their stench was difficult to bear in the close confines around the signing table. Truly, I thought at the time, there are few creatures as fearsome as these. If only I could go back to that time today and know how blessed my ignorance was.

There was no ominous wind, no dark heralding, no premonition like the stories would have you believe—in a single moment, we went from crafting a peace to living in war.
In a single moment, there were nightmares among us. A tall human figure encased completely in metal, everywhere on his body were spikes and sharp edges. His metallic form moved fluidly and easily as he swung his fist at one of the leonin, and the cat's head exploded in blood and bone. Another man was swathed in a greenish mist, and dripping from his hands and tongue were long drops of honey. He leaned in close to a leonin and blew out that honeyed, poisoned mist. The cat died, choking and gasping for air, as the sweet mist closed in. The cats weren't the only ones slaughtered. In front of me, Thoros's two-headed monstrosity of a cat appeared, four arms and all, like a child's nightmare of a cat come to life. I had a moment to think of how I had killed Thoros because I did not believe him, before the beast roared in front of me and raised its arms. Chelta bulled me out of the way and stabbed at the monster's chest. The sword penetrated, but the monster took no notice as it clawed at Chelta's head with two of its massive arms. Chelta's head stayed on but most of his face was gone as he dropped to the ground dead. He never even screamed. The monster then left the pavilion to seek out other humans, Chelta's sword still sticking through his body. Behind me I heard a growl, and turned to look at an inky cloud of darkness, roughly man-height, floating a few inches off of the ground, with a pair of golden glowing cat-eyes blinking from within the cloud. Vanin whirled to face this new threat. Udaen shouted at us both to flee, but Vanin charged the cloud, hoping perhaps to find something to kill within. A clawed hand came out of the darkness and grabbed Vanin's shoulder and dragged him into the cloud, where Vanin's screams and the sound of something eating and chewing could be heard. Nothing left the cloud, not even Vanin's remains. 
Those horrible golden eyes blinked once at me, and then the cloud moved in the opposite direction, killing and enveloping the humans, my soldiers, outside of the pavilion. #figure(image("011_Building Toward a Dream, Part 2/02.jpg", width: 100%), caption: [Time to Feed | Art by <NAME>], supplement: none, numbering: none) Even as I write down every detail, seeking to capture every improbability, I can hardly believe what I saw. But I did see them. These were our worst dreams come to life to slaughter us. Udaen and I looked at each other in horror, at how quickly all we had worked toward vanished in blood and violence. While I was still in shock, I was still a warrior, a king. I reached down to Chelta's body and grabbed one of his spears, hoping to fight against the monsters. But what the monsters had begun, cat and human were too willing to finish. Of the nightmares there was no sign, but everywhere were humans and cats slaughtering each other. Each side assumed the other had sought to betray. I stood there, rooted to the ground, my prayers to the gods unspoken, my mind refusing to work, refusing to decide. I was witnessing the death of almost everything I hold dear. With a scream I hurled a spear at a cat, one of the few leonin elders who had not fallen prey to the nightmares. He fell prey to me instead. Within minutes the battle was done, and we were the only ones left standing as a few cat stragglers made their escape. We were in no shape to pursue and finish the slaughter. Out of my original force, only twelve men, including Udaen and me, remained. Udaen had managed to hide under the table, and blessedly escaped death. I don't know why we were able to prevail so easily, given how even our numbers were at the start. Maybe the nightmares killed more of the cats before they vanished. Surely this must have been the work of some god. Mogis, or perhaps Phenax. But I have been a devout king, and have said my prayers and made my offerings to Heliod and Ephara and Iroas. 
Udaen thinks some other malevolent force is to blame, but admits he cannot say who. Regardless of who is to blame, Iretis is facing its darkest days. Udaen says he has received reports that the leonin tribes in the greater surrounding areas are already gathering and marching for war. Their reported numbers are in the many thousands, far more than my beleaguered army, especially without the expansionist forces. The cats' goal is likely the destruction of Iretis and its people. My people. And now I sit here in my throne room and write, as I have done for the last two hours. I have written letters to Meletis, and Akros, and Setessa, pleading for assistance. I have written letters to the other leonin tribes, for whatever good that might do. And I am writing this letter to you, Klytessa, my last letter of the evening. I know you were leaving from Meletis in a few days' time, but now you must stay there until this storm has passed. Add your voice to mine before the Twelve. While Iretis has always been jealous of Meletis, surely Meletis will not let its tiny cousin perish from the earth. I love you. The world is a dark and terrible place, but I will meet it with light and courage and hope. Though nothing makes sense to me today, I still have light. This flickering torch light above me, illuminating these words to you. I still have courage. The thumping in my chest as I continue to strive for Iretis's survival. And I still have hope. These words to you are the proof of that. We will see each other soon, in happiness. Kedarick   #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em)     To the lord of Iretis, currently Kedarick the VI— A topic of frequent debate in these hallowed halls is a beguilingly simple question—what is the nature of reality? How can we be sure that what our eyes, what our very senses are telling us, is in fact, a shared truth? Even if multiple people see the same thing, what if they are merely subject to the same delusion?
Some of us believe that the material world is an essential truth, and we can only distort that essential truth through our flawed lenses of perception. Others of us believe that we help create the world through our very act of perception. Of course, taken to an extreme, this view would lead to the ludicrous notion of... but we digress. #figure(image("011_Building Toward a Dream, Part 2/03.jpg", width: 100%), caption: [Traveling Philosopher | Art by <NAME>], supplement: none, numbering: none) None of this is directly relevant to your request for assistance. Your letter sponsored a fresh round of argument on the nature of reality, which was lively and contentious, but we all agreed to deny your request. What is beyond dispute is that your men, under your command, have slaughtered hundreds of leonin under the auspices of a peace treaty. We commissioned oracular study of the events you described, and found no evidence of these "monsters" you claim initiated the slaughter. Nor do your descriptions of these creatures match the knowledge we have of divine visitations. Either you are lying, or you are insane, or you face a new and terrifying threat. This in itself is a fascinating question, and one we spent some amount of time debating. We have more debates on the topic scheduled this afternoon, but regardless of the conclusion, none of these outcomes give us any reason to lend you support. We have formally declared our support for the leonin mission to overthrow you and end your tyranny. As of this moment, all diplomatic ties between Meletis and Iretis are severed, until your reign has ended. The Twelve, ruling philosophical council of Meletis     #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em)     Lara— When you were four years old, you refused to stay in your bed during the night. You would get up, walk past your sleeping nanny, and blink your large eyes at the guard, who would inevitably let you into our room. 
After a week of this I decided it was enough, and I told you to go back to your room. You refused, and I shouted, "Go back to your bed!" You looked up at me, and you blinked those now-even-larger eyes, and you said, "But my bed doesn't have daddy and mommy in it." I let you go back to sleep in our bed without another word, and you proceeded to sleep there for another few weeks until you declared, "I want my own bed," and you went back to your room and we never had a nighttime visit again. I think about that night often, Lara, and how you looked in that moment. I cherish that memory... I cherish all the memories I have of you. I love you very much. Your father,#linebreak King Kedarick of Iretis     #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em)     Klytessa— I have enclosed a letter to Lara in this letter to you. Please read it to her, and tell her how much I love her. There is a large army of leonin camped outside of the city's walls. Our earlier reports were correct, they number in the thousands. Meletis and Akros and Setessa have all refused our requests for aid. They claim I am crazy or worse. When everyone thinks you crazy, it can drive you insane. Udaen claims that there is still hope, that there is still the possibility of having the leonin stand down and return to their lands. I appreciate his efforts, but it does not matter. I know my destiny. If I could kill myself here, if I could know that doing so would save my people and my kingdom, I would do it. But I worry that the leonin would feel robbed of vengeance, robbed of justice. And in the leonin sating their need for blood, the cost to our people would be terrible. So soon I will leave my throne room, leave my palace, leave the city walls, and present myself to the leonin. The guards, my men, will not stop me. They can barely look at me now, their eyes drop to the ground when I pass. 
The ten men who returned with me are still loyal, but all of us carry the stink of failure and death. No, no one will stop me. My source of strength in this, as always, is you and Lara. Knowing the two of you are safe keeps me calm. I wish I could have had one more day of holding you, touching you, seeing your beautiful face. If I think on it any more, I will lose what resolve I have. I have had my life. It has been a good life. Perhaps Iretis will be able to recover from this disaster. Perhaps my sacrifice will allow a new rebirth for our kingdom. Perhaps one day people will understand all I did was for a lasting peace. Perhaps one day that peace will occur. My enduring legacy, my true legacy, are you and our daughter. There is knocking at the door. I hope it is Udaen with news. Kedarick     #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em)     Phenax stood in the Iretian throne room, still invisible, looking down at the body of the king. Phenax had existed for a very long time, and had seen and done many things both wondrous and terrible in that span of years. Until today, he had never seen a mortal stab both of his own eyes out. When the old man had entered the room and told the king of the murder of his wife and child at the hands of the leonin, the king had screamed. Phenax had heard tortured screams as anguished as the mortal's, but it was still a scream of a special timbre, the sound of utter despair with no possibility of succor or redemption. A rare flavor. The mortal had then drawn his dagger and plunged it into his eye. It was the second stab into his other eye, the mortal shrieking all the while, that Phenax found so impressive. Phenax looked at the old man, still standing there, and the old man's form began to waver and shimmer. It slowly disappeared, and in the place of the illusion of the old man was another strange sight, although one Phenax had seen once before. The revealed form was humanoid and mortal. Of that Phenax was sure. 
The form floated several inches from the ground and was dressed in black cloth and leather in a mockery of current fashions in the cities of #emph[Theros]. Ignoring the head, the hands were the most interesting feature—exquisitely long, slender fingers with even longer talon-like claws for fingernails. The closest Phenax had seen to hands like these were those of a siren, those winged humanoid seducers that Phenax had a special fondness for. But the form in front of him was not a siren. The form in front of him shouldn't even be able to exist. The bottom half of the form's head was normal—human, even. But the top half was impossible. Two large black horns, made of some rough rocky substance, framed...nothing. No upper face, no head, no eyes or nose. Nothing except wispy black smoke continuously emanating from where the figure's mouth and upper lip ended. The black smoke whirled around the figure's head, swirling down in a wider radius outside the body. When the two had first met, the figure had called itself Ashiok. #figure(image("011_Building Toward a Dream, Part 2/04.jpg", width: 100%), caption: [Ashiok, Nightmare Weaver | Art by Karla Ortiz], supplement: none, numbering: none) Ashiok floated over to the king's body, and saw the two letters the king had been writing. He, no, Phenax was not sure if the mortal even had a gender, #emph[Ashiok], bent down as if to read the letters, although Phenax did not know how a mortal could read without eyes. Ashiok stopped reading, picked up the letters, and brought them to the fireplace in the room. Ashiok held the letters above the fire for a second, then stopped and brought them back, undamaged, to lie on the desk. Ashiok smiled and a shudder passed through Ashiok's body. A few tiny pieces of Ashiok's cheek, so small they were discernible only to a god's senses, evaporated into wispy black smoke and joined the penumbra surrounding Ashiok.
Phenax became visible, and his voice rumbled in the room, "The wife and child, you had them killed?" Most mortals would have been driven to their knees from the power of the god's voice. Ashiok merely hovered, and turned to face the god. "No. Yes. Perhaps. The story of their deaths I made up. But if they're not dead now, they easily could be soon. They were on their way from Meletis to here, that part was true. And the current traveling conditions are," another smile, "difficult. My dear Phenax, do you actually care?" Phenax was surprised to realize that he was in fact curious. He was appreciative of Ashiok's trick, but still this familiarity would not serve. "One warning only, Mortal. I care not for our deal nor for your abilities. Presume with me again, and I will erase you from existence." Phenax raised his voice at the end, and this time Ashiok floated back, head bowed in subservience, as was proper. "My apology if I offended. I had not anticipated your question, and I am not used to being caught unaware. I assume my delivery of the terms of our deal was satisfactory?" Ashiok's words were silky and smooth and precise, without being overly unctuous. It was the mark of a good deceiver, as Phenax knew well. But Phenax was very satisfied. He wanted a third city for the Returned, part of longer-term plans. He needed a minor city-state, not too powerful or notable, nor under the protection of one of his kind. And he needed to not have a direct hand in the city's downfall, so that none of his brethren could accuse him of undue influence. "I assume there will be no Iretian rebirth?" Ashiok laughed. "The leonin sacking of the city has already begun. Their bloodlust is large and will not be satiated for some time. I doubt there will be a living citizen of Iretis by tomorrow. The leonin may occupy the city for a short while, but they won't want to stay. They'll go back to their hills and plains. 
I will leave a few of my creations here to deal with any stragglers or brave adventurers. The major cities want no part of this debacle. No, Iretis is now yours, to do with as you will." "And you? 'Ask me what I want when I have completed your task.' Those were your words that ended our first meeting. You have completed your task, and you have completed it well, Mortal. I am satisfied. So what boon do you wish of me?" Ashiok again shuddered, and a few more specks of cheek vanished into smoke. "I have so much of what I want, Phenax. #emph[Theros] is such a wonderful world, full of possibility. For so long I have sought to perfect my craft, pulling nightmares from the minds of dreamers and making them real. But here, here I can make more... ambitious nightmares come to life. Why be satisfied with simple creations that take bodily form, when I can take a man's darkest fears, the very ruin of his hopes and life's work, and turn the destruction of all he holds dear into a living nightmare? I have built a beautiful dream here in Iretis." Ashiok floated over to the dead king's body and bent down. Ashiok trailed one clawed finger up and down the length of the body's frame, before resting a finger on the hilt of the dagger plunged into the ruined eye. "I am pleased with my work today... but I have more beautiful art to achieve." Ashiok rose and floated back to Phenax. "What do I want? Let me tell you." As Ashiok leaned close and whispered, Phenax almost resolved to end the mortal regardless of the services delivered, but intrigued, he restrained himself. He heard Ashiok's request. And for the second time that day, Phenax was surprised. "You are sure, mortal? This is what you want?" "Look at me, god. Look at me truly. What do you see?" And Phenax, god of deception and lies, peered deep into Ashiok, into the essence of Ashiok's mortal being. Phenax laughed. A long, loud laugh that echoed down the halls, through the palace, and even out to the city beyond. 
Snarling leonin and the few humans that were left heard the laughter, and for a brief moment the bloodshed stopped, as every mortal paused before that horrible laugh. Phenax had not laughed like that in such a long time. Ashiok had so many #emph[grand] tricks. "So be it, mortal. You shall have your wish." And Phenax left Ashiok and the dead body of the ruined king in the former throne room of Iretis, his laughter echoing off the walls as he vanished.
https://github.com/htlwienwest/da-vorlage-typst
https://raw.githubusercontent.com/htlwienwest/da-vorlage-typst/main/lib/assertions.typ
typst
MIT License
#let assertType(val, typ, message: none) = {
  assert(type(typ) == "string", message: "The `typ` argument must be a `string`, but was " + repr(typ))
  if type(val) == typ { return }
  let msg = ""
  if message != none {
    msg += message + ". "
  }
  msg += "The expected type was `" + typ + "`, but `" + type(val) + "` was received for the value `" + repr(val) + "`"
  assert(type(val) == typ, message: msg)
}

#let _sentence(msg) = {
  if msg == none or msg == "" { return "" }
  if msg.ends-with(".") { return msg + " " }
  return msg + ". "
}

#let assertDictKeys(val, fields, message: none) = {
  assertType(val, "dictionary", message: message)
  assertType(fields, "array", message: "The fields argument must be an array")
  for f in fields {
    let msg = _sentence(message) + "Key `" + f + "` was expected in `" + repr(val) + "`"
    assert(val.keys().contains(f), message: msg)
  }
}

#let assertEnum(val, options, message: none) = {
  assertType(val, "string", message: message)
  assertType(options, "array", message: "The options argument must be an array")
  let msg = _sentence(message) + "Value `" + val + "` was not one of the following values `" + repr(options) + "`"
  assert(options.contains(val), message: msg)
}

#let assertNotNone(val, message: none) = {
  let msg = _sentence(message) + "The received value was `none`."
  assert(val != none, message: msg)
}
https://github.com/protohaven/printed_materials
https://raw.githubusercontent.com/protohaven/printed_materials/main/common-tools/sandblaster.typ
typst
#import "../environment/env-protohaven-class_handouts.typ": *

= Sandblaster

(Overview paragraph(s))

== Usage Notes

=== Safety

=== Media

// General and specific to Protohaven

== Parts of the Sandblaster

=== E-Stop

(Does our machine have an E-Stop?)

=== Power Switch

=== Viewport

=== Siphon Hose

=== Air Hose

=== Blasting Gun

=== Foot Pedal

=== Screen Filter

=== Media Hopper

=== Gauntlet Gloves

=== Dust Collector

== Basic Operation

=== Workholding

=== Setting Up

=== USE

=== Cleaning Up
https://github.com/Myriad-Dreamin/tinymist
https://raw.githubusercontent.com/Myriad-Dreamin/tinymist/main/crates/tinymist-query/src/fixtures/inlay_hints/pos_rest.typ
typst
Apache License 2.0
#let test(a, b, ..args) = args.pos()

#test([1], [4], [3], [2], [3])
https://github.com/linhduongtuan/DTU-typst-presentation
https://raw.githubusercontent.com/linhduongtuan/DTU-typst-presentation/main/dtu.typ
typst
// =========================================
// Duy Tân University theme for Typst slides.
// Made By <NAME>.
// https://github.com/linhduongtuan/DTU-typst-presentations
// =========================================
#import "slides.typ": *

#let slide_footnote_counter = page
#let sans = ("New Computer Modern Sans")
#set text(font: sans, lang: "vi")

#let dtu-theme(
  mail: "<EMAIL>",
  college: "<NAME>",
  usage: "Master Thesis Defense",
  color: rgb("#711A5F"),
  biglogo: "images/bgd.png", // left cover page logo
  watermark: "images/nankai-10.png", // watermark logo
  logo: "images/duoc.jpeg", // small logo for headers
  dtulogo: "images/dtu.png", // right cover page logo
) = data => {
  let my-dark = rgb("#192e41")
  let my-bright = rgb("#fafafa")
  let my-accent = rgb("#fc9278")

  let title-slide(slide-info, bodies) = {
    // setting watermark
    place(dx: 10%, dy: -13%,
      image(watermark, height: 1pt)
    )
    // setting right cover page logo
    place(dx: 0.4em + 500pt + 0.4em, // set the logo positions at the cover page
      dy: 0.4em,
      image(dtulogo, height: 80pt)
    )
    // setting left cover page logo
    place(dx: 0.4em + 50pt,
      dy: 0.4em,
      image(biglogo, height: 90pt)
    )
    v(82pt)
    // Setting title
    align(center + horizon)[
      #block(
        // border line color for the title presentation
        stroke: (y: 1mm + rgb("#FF0000"), x: 1mm + rgb("#FF0000")),
        //stroke: (y: 1mm + rgb("#3399FF"), x: 1mm + rgb("#3399FF")),
        inset: 1em,
        breakable: false,
        // fill color for the title presentation
        fill: rgb("#CCFFE5"),
        radius: 15pt,
        [
          #box()[#text(1.3em)[*#data.title*] \
            #{
              if data.subtitle != none {
                parbreak()
                text(.9em)[#data.subtitle]
              }
            }
          ]
        ]
      )
      // #h(1fr)
      #set text(size: 1em)
      #grid(
        columns: (1fr,) * calc.min(data.authors.len(), 3),
        column-gutter: 1em,
        row-gutter: 1em,
        ..data.authors
      )
      #block(
        // Setting vertical line for Author & Supervisor
        stroke: (left: 2mm + rgb("#FF9999")),
        inset: 0.4em,
        breakable: false,
        align(left)[
          #if bodies.len() > 1 {
            panic("title slide of default theme does not support too many bodies")
          } else if bodies.len() == 1 {
            let body = bodies.first()
            text(size: 1em, body)
          }
        ]
      )
      #parbreak()
      #text(0.8em)[#data.date]
      #v(15fr)
    ]
  }

  // globe font setting
  // Next Pages
  let displayed-title(slide-info) = if "title" in slide-info {
    // Setting text color for section on header
    text(fill: rgb("#FFFFFF"), slide-info.title)
  } else {
    []
  }

  let decoration(position, body) = {
    let border = color
    let strokes = (
      header: (bottom: border),
      footer: (top: border),
    )
    block(
      stroke: none,
      width: 100%,
      height: 1em,
      fill: color,
      outset: 0em,
      inset: 0em,
      breakable: false,
      align(left + horizon)[
        #h(0.2em)
        #box[#box(image(logo, width: .8em))]
        // Setting text color for the others on header and footer
        #text(fill: rgb("#FFFFFF"), 0.5em, body)
      ]
    )
  }

  let default(slide-info, bodies) = {
    if bodies.len() != 1 {
      panic("default variant of default theme only supports one body per slide")
    }
    let body = bodies.first()
    place(dx: 50%, dy: -13%,
      image(watermark, height: 510pt)
    )
    // header
    decoration("header", section.display() + h(1fr) + displayed-title(slide-info) + h(1fr) + data.date + h(1em) + usage + h(1em))
    if "title" in slide-info {
      block(
        width: 100%, inset: (x: 2em), breakable: false, outset: 0em,
        heading(level: 1, slide-info.title)
      )
    }
    v(1fr)
    block(
      width: 100%, inset: (x: 2em), breakable: false, outset: 0em,
      text(size: 0.8em)[#body]
    )
    v(2fr)
    // footer
    decoration("footer")[
      #h(1em) #data.short-authors #h(4em) #mail #h(4em) #college #h(1fr) #text(1.5em)[#logical-slide.display()] #h(1em)
    ]
  }

  let center-split(slide-info, bodies) = {
    if bodies.len() != 2 {
      panic("center split variant of bipartite theme only supports two bodies per slide")
    }
    let body-left = bodies.first()
    let body-right = bodies.last()
    box(
      width: 50%, height: 100%, outset: 0em, inset: (x: 1em), baseline: 0em, stroke: none,
      fill: my-bright,
      align(right + horizon, text(fill: my-dark, body-left))
    )
    box(
      width: 50%, height: 100%, outset: 0em, inset: (x: 1em), baseline: 0em, stroke: none,
      fill: my-dark,
      align(left + horizon, text(fill: my-bright, body-right))
    )
  }

  let center-split-white(slide-info, bodies) = {
    if bodies.len() != 2 {
      panic("center split variant of bipartite theme only supports two bodies per slide")
    }
    let body-left = bodies.first()
    let body-right = bodies.last()
    box(
      width: 50%, height: 100%, outset: 0em, inset: (x: 1em), baseline: 0em, stroke: none,
      fill: my-bright,
      align(right + horizon, text(fill: my-dark, body-left))
    )
    box(
      width: 50%, height: 100%, outset: 0em, inset: (x: 1em), baseline: 0em, stroke: none,
      fill: my-bright,
      align(left + horizon, text(fill: my-dark, body-right))
    )
  }

  let section-slide(slide-info, bodies) = {
    let body-left = bodies.first()
    box(
      width: 100%, height: 100%, outset: 0em, inset: (x: 1em), baseline: 0em, stroke: none,
      fill: my-dark,
      align(center + horizon, text(fill: my-bright, body-left))
    )
  }

  let wake-up(slide-info, bodies) = {
    if bodies.len() != 1 {
      panic("wake up variant of default theme only supports one body per slide")
    }
    let body = bodies.first()
    v(0em)
    // block(
    //   width: 100%, inset: (x: 2em), breakable: false, outset: 0em,
    //   text(size: 1.5em, fill: white, {v(1fr); body; v(1fr);})
    // )
    block(
      width: 100%, height: 100% - 1em, inset: 2em, breakable: false, outset: 0em,
      fill: color,
      text(size: 1.5em, fill: white, {v(1fr); body; v(1fr);})
    )
    v(1fr)
    decoration("footer")[
      #h(1fr) #text(1.5em)[#logical-slide.display()] #h(1em)
    ]
  }

  (
    "title slide": title-slide,
    "default": default,
    "wake up": wake-up,
  )
}
https://github.com/russelljjarvis/SpikeTime
https://raw.githubusercontent.com/russelljjarvis/SpikeTime/main/main.typ
typst
#import "template.typ": *

#show: ieee.with(
  title: "SpikeTime: Spiking Neuronal Network Simulation in Julia Language",
  abstract: [
  ],
  authors: (
    (name: "<NAME>", affiliation: "International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University"),
    (name: "<NAME>", affiliation: "International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University"),
    (name: "<NAME>", affiliation: "International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University"),
    (name: "<NAME>", affiliation: "International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University")
  ),
  bibliography-file: "refs.bib",
)

= Introduction

While there is much focus on hardware advances that accelerate the simulation of large-scale spiking neural networks, it is worthwhile to shift our attention to language advances that may also support accelerated large-scale spiking neural network simulation. Some gains in biologically faithful neuronal network simulation can be achieved by applying recent computer-language features. For example, the Julia language supports sparse compressed arrays and static arrays; furthermore, Julia provides very extensive support for CUDA GPUs, as well as a plethora of reduced-precision types. Julia also provides a high-level syntax that facilitates high code reuse while simplifying plotting and data analysis. These features lend themselves towards high-performance, large-scale spiking neural network simulation. Therefore, we are using Julia to develop an open-source software package that enables the simulation of networks with millions to billions of synapses on a computer with a minimum of $64$GB of memory and an NVIDIA GPU.

Another major advantage of implementing SNN simulations in the Julia language is reduced technical debt. The simulation code we are developing is both faster and less complicated to read compared with some other simulation frameworks.
The simplicity of the code base extends to a simple installation process. Ease of installation is an important aspect of neuronal simulators that is often overlooked when evaluating simulation merit: GPU simulation environments are notoriously difficult to install, and this technical burden harms model portability and reproducibility. The Julia language eases installation by solving the "two language problem" of scientific computing. The simulator encompasses a single-language environment, which includes a reliable, versatile, and monolithic package manager. Furthermore, the simulator installation involves no external language compilation tools or steps.

// In neuromorphic engineering literature, you can often find tables

To demonstrate the veracity and performance of this new simulation approach, we compare against the Potjans and Diesmann model as implemented in the NEST and GENN simulators. In a pending analysis, we compare simulation execution speeds and spike train raster plots to NEST and GENN using the discussed models as benchmarks. A review of the literature suggests that there is a desire to modernize pre-existing large-scale network simulators, but such efforts fall short of re-writing existing simulator code in the Julia language @awile2022modernizing. The discussed code repository started from a pre-existing GitHub code base @yaolu, and is similar in other ways to @arthur2022scalable and @illing2019biologically.

=== Intended Motivation/Intro.

In order to garner evidence for the "replay as network attractor" theory of memory encoding and memory recall, faster and more scalable methods are needed to transform spike raster plots into attractor trajectories and energy landscapes.
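As a toy illustration of the first step of such a transformation (turning a raster plot into a sequence of per-window network state vectors), the sketch below bins spikes into fixed time windows. This is hypothetical illustrative Python, not the package's Julia implementation; the function name `raster_to_state_vectors` and the fixed window length are assumptions.

```python
import math

def raster_to_state_vectors(spike_times, spike_ids, n_neurons, t_stop, window):
    """Bin a spike raster into per-window spike-count vectors.

    Each row counts, per neuron, the spikes falling inside one time
    window -- a crude stand-in for a spike2vec-style state encoding
    (hypothetical; the fixed window length is an assumed parameter).
    """
    n_windows = max(1, math.ceil(t_stop / window))
    states = [[0] * n_neurons for _ in range(n_windows)]
    for t, i in zip(spike_times, spike_ids):
        # Clamp spikes at exactly t_stop into the last window.
        w = min(int(t // window), n_windows - 1)
        states[w][i] += 1
    return states

# Toy raster: neuron 0 fires twice early; neuron 1 fires once early, once late.
times = [0.1, 0.2, 0.9, 1.1]
ids = [0, 0, 1, 1]
states = raster_to_state_vectors(times, ids, n_neurons=2, t_stop=2.0, window=1.0)
```

Each row of `states` is one candidate "network state"; downstream attractor or recurrence analysis would operate on the sequence of such vectors.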
A problem with converting spike train raster plots to attractor trajectories is that the oldest and most established system for deriving attractor trajectories (and energy landscapes) needs the system under investigation to be encoded as a continuous differentiable function. A dominant approach which satisfies the continuous-function requirement is to fit a differential equation that models a network's firing rate(s) in response to current injection. The assumption underlying this approach is that rate-coded information and network states are more important than, or even exclude, temporal codes.

Another approach to estimating attractor trajectories involves applying the Delay Coordinate Embeddings framework. The advantage of this approach is that a model equation is not required, and a time series of system observations satisfies the algorithm's requirements. Spike time raster plots are sparsely encoded collections of events that are naturally encoded by ragged arrays, whereas delay coordinate embeddings require a state-space map. Vector matrices that are output from spike2vec are sufficient to satisfy Delay Coordinate Embeddings; however, the framework is slow to evaluate, and the quality of the algorithm's output depends on many parameters (of both spike2vec and DCE).

Yet another approach is to use Recurrence Analysis. Recurrence Analysis is orders of magnitude faster than DCE, and the results of DCE usefully describe the network properties of state transition matrices. It is the authors' view that the fast algorithm described above is functionally similar to the Recurrence Analysis approach, and that it leads to faster and more interpretable network state transition matrices.

=== Intended caption for spike2vec document.

*The output of the framework is a sequential state transition network of the spike train.
Spontaneous network activity which didn't get repeated was simply not included in the state transition diagram.*

*Two state transition diagrams are output, one with non-repeating states, and one with repeating states.*

=== Intended Discussion

The attractor network view of the mammalian cortex is consistent with phenomenological observations about the mind, such as the "circular thinking" commonly described in obsessive-compulsive disorder. Furthermore, action and perception are theorized to occur in alternating cycles, during "action-perception" loops. Neuronal synaptic weight changes that happen as a result of STDP simply bias the brain in a manner which will make salient brain states more likely to occur.

It is possible that the windows which were disregarded because they didn't repeat may well repeat given long enough neuronal recordings. This is an unavoidable problem, caused by the fact that with limited or finite observations we cannot tell whether states are novel network states or repeating states. It is also possible that detected recurring states are really only recurring state transitions. From our perspective we are unable to distinguish between a state and a network state transition. Whatever the case, state or state transition, detecting periods of repeating patterns in a fast and scalable way still bolsters the attractor network view of the brain. The algorithm may also be helpful for feature reconstruction in neuromorphic data sets from event-based cameras.

Recurrence analysis was used to characterize vector-encoded spike train matrices for repeating patterns. Recurrence analysis was able to give us numbers quantifying the degree of repetition of states, and the entropy of state transition matrices.
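A minimal sketch of this kind of bookkeeping, labeling repeated state vectors and measuring the entropy of the empirical transition statistics, is given below. This is hypothetical illustrative Python, not the RecurrenceAnalysis.jl implementation; the tolerance parameter and the choice of entropy over the joint distribution of consecutive transitions are assumptions.

```python
import math

def transition_entropy(states, tol=1e-9):
    """Label each state vector (vectors within `tol` of each other share a
    label), then return the labels and the Shannon entropy (in bits) of the
    empirical distribution over consecutive label transitions.
    `tol` is an assumed parameter, not taken from the paper.
    """
    labels, reps = [], []  # reps holds one representative vector per label
    for s in states:
        for k, r in enumerate(reps):
            if max(abs(a - b) for a, b in zip(s, r)) <= tol:
                labels.append(k)
                break
        else:
            reps.append(list(s))
            labels.append(len(reps) - 1)
    pairs = list(zip(labels, labels[1:]))
    counts = {}
    for p in pairs:
        counts[p] = counts.get(p, 0) + 1
    total = len(pairs)
    # Entropy of the joint distribution over observed (from, to) transitions.
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return labels, entropy

# An alternating A-B-A-B-A-B sequence collapses onto two labels.
seq = [[1.0, 0.0], [0.0, 1.0]] * 3
labels, h = transition_entropy(seq)
```

With six states there are five transitions, three of one kind and two of the other, so the entropy reflects that 3/5 vs. 2/5 split rather than being zero; a conditional (per-source-state) entropy would be one natural refinement.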
Steps: convert the spike-encoded vector matrices to "state space sets" as defined in the Julia package DynamicalSystems.jl. Recurrence analysis did #TODO quantify the complexity of state transition matrices with the non-recurring states included, as this may give us insight about information in the brain at a different time scale.

state transition networks Delay Embeddings can
// dynamic systems view of the brain @scholarpedia attractor_network.
caused by the network transitioning to familiar states, Of course given longer recordings. *

= Theoretical Framework

// Nothing new is presented in terms of theoretical framework.
We use the forward Euler implementations of synaptic current weight updates and $V_M$ updates, as the forward Euler method is fast and sufficiently robust for use on well-known homogeneous leaky integrate-and-fire neurons.

= Methodological Framework

Forward Euler weight update rules and membrane potential update rules were applied.

== Virtual Experiment Simulation Protocol

We presented neuromorphic data to the Potjans network model. To do this we constructed a 2D population layer and gave every cell spatial coordinates. We systematically populated cell centres along a 2D grid.
/* randomly populated a 2D sheet with x,y coordinates so that we could give each cell a unique spatial position. We then applied a distance dependent wiring rule to the input layer, */
We then used a distance-dependent wiring rule such that neighbouring cells are more likely to be synaptically connected with each other with inhibitory synapses, according to a "winner-take-all" connection scheme. This 2D sheet population of cells was then mapped onto the regular 1D populations of cells in the Potjans balanced E/I model. In this way we were able to present several neuromorphic data types to the Potjans E/I network, and we allowed the network to train its synapses with STDP synaptic update rules
under exposure to 5 out of 10 different numbers of the NMNIST neuromorphic data set. We used the spike2vec algorithm to show that presenting familiar training items (0-2-4-6-8) to the trained model caused the network to enter attractors (caused the network to recall a repeated temporal-spatial pattern). Presenting unfamiliar NMNIST items (odd numbers 1-3-5-7-9) did not cause the network to enter those attractors as frequently. We also applied the recurrence analysis metrics to the found state transition matrices, for spike2vec-encoded raster plot evolutions under both familiar and unfamiliar item categories. Finally we converted the spike sequence vectors to word vectors, on trained familiar items and trained unfamiliar items. Converting spikes to words via vectors allowed us to compare model spike trains with empirical data, using the statistics of natural language @illing2019biologically.

= Result Analysis

#align(center + bottom)[
  #image("Potjans_connectome_no_input_layer.png", width: 70%)
  *Heatmap visualization of the Potjans and Diesmann static connectome @potjans2014cell, scaled such that the total number of cells, including the combined contribution of E and I populations, is 5450.*
]
/*
#align(center + bottom)[
  #image("Potjans_connectome_input_layer.png", width: 70%)
  *Heatmap visualization of the same connectome except with an added input layer that is used to impinge external spikes from the NMNIST dataset.*
]
*/
#align(center + bottom)[
  #image("Graph_embedding.png", width: 70%)
  *SGtSNEpi visualization of the Potjans and Diesmann static connectome @potjans2014cell. Graph partitioning of the connectome adjacency matrix can be used to compile network models in a way that minimises spike traffic between GPU thread locks. Although the SGtSNEpi dimensionality reduction technique provides a nice overview of network structure at scale, it is not as fast or useful as other techniques that distribute the network based on effective connectivity measures.
different technique we developed which is called Spike2Vec.* ] = Validation of Network Simulation Results Two common models of cortical spiking networks are the, Potjan's and Diesmon @potjans2014cell model and the Brunel model @brunel1996hebbian, both of these models exist within a fluctuation driven regime. When each of these respective network models are simulated, observed spike times are typically appear to be poisson distrubited psuedo random spike times. By design these models make it unlikely that fine grained recognizable repeating patterns also occur. The Potjan's model can be used to make data points seperable. However, new data sets prominently capture replayed states, and previously collected spike trains may, too, have latent and unpublished states of replay. The limited recordings from limited species may have biased previous recordings in a way that underrepresented the prevalence of replay. #align(center + bottom)[ #image("NMNIST_Impinged_onto_Potjans.png", width: 70%) *One labelled sample of the NMNIST data set was fed into a Potjans connectome SNN, and the readout from the whole network was recorded.* ] #align(center + bottom)[ #image("balanced_if_net_structure.png", width: 70%) *The adjacency matrix from a random connectome approximately balanced E/I network with 4 different populations EI, EE, IE, II.* ] #align(center + bottom)[ #image("balanced_random_spikes.png", width: 70%) *The spike trains read out after stimulating the above basic balanced connectome, after all of the neurons in the network were stimulated with direct current over the duration of the simulation recording.* ] // Take a look at the file `template.typ` in the file panel // to customize this template and discover how it works. // We generated the example code below so you can see how // your document will look. Go ahead and replace it with // your own content! 
//#bibliography("bibliography.bib")
//https://www.frontiersin.org/articles/10.3389/fninf.2022.884046/full
//Modernizing the NEURON Simulator for Sustainability, Portability, and Performance
// Omar Awile1†, Pramod Kumbhar1†, <NAME>1, <NAME>2,3, <NAME>1, <NAME>1, <NAME>, <NAME>4,5,6, <NAME>2,4, <NAME>, <NAME>1, <NAME>7‡, <NAME>3‡, <NAME>7‡ and <NAME>1*‡
//<NAME>, <NAME> & <NAME>, Biologically plausible deep learning - but how far can we go with shallow networks?, Neural Networks 118 (2019) 90-101
//The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model
//<NAME> 1 , <NAME> 2014 Mar
//doi: 10.1093/cercor/bhs358. Epub 2012 Dec 2.
//;24(3):785-806.
// https://pubmed.ncbi.nlm.nih.gov/23203991/
//= Appendix or notes to self:
//Similar Simulators With Different Goals
//https://github.com/FabulousFabs/Spike.jl (looks very inspired by Brian2, except in Julia; nice use of code generation)
//https://github.com/SpikingNetwork/TrainSpikingNet.jl (nice use of CUDA and reduced-precision types)
//https://github.com/leaflabs/WaspNet.jl (nice use of block arrays)
//https://github.com/darsnack/SpikingNN.jl (nice use of multiple dispatch, abstract types and type restriction)
//https://github.com/wsphillips/Conductor.jl (nice use of DiffEq.jl and code generation)
//https://github.com/FabulousFabs/AdEx (interesting and different)
//https://github.com/ominux/pub-illing2019-nnetworks (research-oriented code)
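The forward Euler leaky integrate-and-fire update used in the theoretical framework above can be sketched in a few lines. This is an illustrative Python sketch rather than the authors' Julia implementation; all parameter values (`tau_m`, `v_thresh`, `r_m`, the input current) are assumptions for demonstration only.

```python
# Minimal forward-Euler simulation of a single leaky integrate-and-fire
# neuron. Parameter values are illustrative assumptions, not from the paper.
tau_m = 20.0      # membrane time constant (ms)
v_rest = -65.0    # resting potential (mV)
v_reset = -70.0   # reset potential after a spike (mV)
v_thresh = -50.0  # spike threshold (mV)
r_m = 10.0        # membrane resistance (MOhm)
dt = 0.1          # Euler step (ms)


def simulate_lif(i_ext, t_stop):
    """Integrate dV/dt = (-(V - v_rest) + r_m * i_ext) / tau_m with forward Euler."""
    v = v_rest
    spikes = []
    t = 0.0
    while t < t_stop:
        dv = (-(v - v_rest) + r_m * i_ext) / tau_m
        v += dt * dv          # forward Euler update
        if v >= v_thresh:     # threshold crossing: record spike, reset
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes


spikes = simulate_lif(i_ext=2.0, t_stop=200.0)
print(len(spikes))
```

With a drive of `r_m * i_ext = 20` mV the steady-state potential sits above threshold, so the neuron fires regularly; halving the current keeps it subthreshold and silent.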
https://github.com/rlpundit/typst
https://raw.githubusercontent.com/rlpundit/typst/main/Typst/fr-Rapport/chaps/intro.typ
typst
MIT License
/* ------------------------------- DO NOT MODIFY ------------------------------ */
#import "../common/metadata.typ": title
#set page(header: smallcaps(title) + h(1fr) + emph("General introduction") + line(length: 100%))
#text(white)[= General introduction]#v(-1cm)
/* ------------------------------------------------------------------------------ */

*Motivations*

#lorem(64)

*Report outline*

#lorem(64)

/ @chp:chap1: #lorem(16)
/ @chp:chap2: #lorem(16)
/ @chp:chap3: #lorem(16)
https://github.com/ShapeLayer/ucpc-solutions__typst
https://raw.githubusercontent.com/ShapeLayer/ucpc-solutions__typst/main/docs/i18n.md
markdown
Other
---
title: ucpc.i18n
supports: ["0.1.0"]
---

ucpc-solutions includes a utility for generating sentences or words. An i18n module is also included to enable multilingual support for the content generated by this utility.

A utility function that generates a sentence or word receives a multilingual language map as the `i18n` parameter. If no value is passed, it uses the default US English map (`en-us`). The key of the language map passed to each generation function matches that function's name.

```typst
#import "/lib/i18n.typ": en-us

#make-prob-meta(
  i18n: en-us.make-prob-meta
)
```

Therefore, the keys for multilingual maps follow this rule:

```typst
[locale].[function-name]
```

## Adding an i18n language map for a new locale

If you want to add support for a new language, you must add a new language map under `/lib/i18n/` and register it with the package.

1. The new language map must have the same keys as `/lib/i18n/en-us.typ`. It is therefore recommended to copy the `en-us.typ` file as a starting point.
2. Import the new language map in `/lib/i18n.typ`.
3. Add an entry of the form `("locale name", language-map)` to the `supports` array in `/lib/i18n.typ`.
4. Test whether the language map you added works. Testing can be performed with `just test` or `typst-test run i18n`.
   - The tests only run in an environment where typst-test is installed.

> [!NOTE]
> Step 3 does not directly affect the use of each language map.
> However, it is used by the tests to check whether the newly added language map supports all items.
https://github.com/JanEhehalt/typst-demo
https://raw.githubusercontent.com/JanEhehalt/typst-demo/main/Chapter_Introduction.typ
typst
#import "utils.typ": todo
#import "glossary.typ": glossary

= Introduction <chapter_introduction>

== Subchapter 1

#todo[TODO: BACHELOR'S THESIS]

// Unfortunately the language server doesn't check this, hrmpf :(
CITATION: @cite_todo

I can also cite @hosney2022artificial. :)

When using an acronym for the first time it looks like this: @rest

When using an acronym a second time it looks like this: @rest

#lorem(100)

== Subchapter 2

As shown in @dummy_figure, this is wonderful.

Typst documentation #footnote(link("https://typst.app/docs")) or that way #footnote[Just google it].

Look at @einstein.

You can also take a look at @dummy_table.

Or we simply stick with

#figure(
  [#image("dummy_image.svg", width: 25%)],
  caption: [
    DUMMY IMAGE CAPTION
  ]
) <dummy_figure>

#figure(
  [
    #table(
      columns: 2,
      [*Amount*], [*Ingredient*],
      [360g], [Baking flour],
      [250g], [Butter (room temp.)],
      [150g], [Brown sugar],
      [100g], [Cane sugar],
      [100g], [70% cocoa chocolate],
      [100g], [35-40% cocoa chocolate],
      [2], [Eggs],
      [Pinch], [Salt],
      [Drizzle], [Vanilla extract],
    )
  ],
  caption: [
    DUMMY TABLE CAPTION
  ]
) <dummy_table>

MATH

$ E = M C^2 $ <einstein>
https://github.com/chubetho/THWS_Bachelor_Template
https://raw.githubusercontent.com/chubetho/THWS_Bachelor_Template/main/chapters/introduction.typ
typst
= Introduction Article @articlename Book @bookname Misc @miscname #figure(caption: "Caption")[ #image("/assets/image.png") ] #figure(caption: "Caption")[ ```ts console.log("Hello, World!"); ``` ] #figure(caption: "Caption")[ #table( columns: (1fr, 1fr), [A], [B], [C], [D], ) ] == Motivation #lorem(100) == Objective #lorem(100) == Structure #lorem(100) #pagebreak(weak: true)
https://github.com/lkoehl/typst-boxes
https://raw.githubusercontent.com/lkoehl/typst-boxes/main/examples/stickybox.typ
typst
MIT License
#import "@dev/colorful-boxes:1.3.1": * #set page(width: auto, margin: 0.5cm, height: auto) #stickybox(width: 5cm, rotation: 5deg)[ #lorem(20) ]
https://github.com/pank-su/typst-gost
https://raw.githubusercontent.com/pank-su/typst-gost/main/templates/utils.typ
typst
#let ch(content) = {
  show heading: it => {
    text(it, size: 14pt)
  }
  align(center, heading(upper(content), numbering: none))
}
https://github.com/dainbow/MatGos
https://raw.githubusercontent.com/dainbow/MatGos/master/themes/18.typ
typst
#import "../conf.typ": *

= Sufficient conditions for pointwise convergence of the trigonometric Fourier series

The proofs of some theorems in this ticket and the next use an interesting trick: given a chain of equalities $a = b$, we can easily extend it by writing $a = b = (a + b) / 2$. If you feel that something wild is happening in a proof involving integrals, remember this trick!

#definition[
  #eq[
    $L_(2 pi) := { f in L_1[-pi, pi] | f - 2 pi "periodic"}$
  ]
]

#definition[
  The *Dirichlet kernel* $D_n (u)$ is the expression
  #eq[
    $D_n (u) = 1 / 2 + sum_(k = 1)^n cos(k u) = (sin((n + 1 / 2) u)) / (2 sin (u / 2))$
  ]
]

#definition[
  Let $f in L_(2 pi)$. Then the *partial sum of the trigonometric Fourier series* is
  #eq[
    $S_n (f, x) := a_0 / 2 + sum_(k = 1)^n (a_k cos(k x) + b_k sin (k x))$
  ]
  where
  #eq[
    $a_k := 1 / pi integral_(- pi)^pi f(t) cos(k t) dif mu(t); quad b_k = 1 / pi integral_(-pi)^pi f(t) sin (k t) dif mu(t)$
  ]
]

#lemma(
  "On the representation of the partial sum",
)[
  If $f in L_(2 pi)$, then the $n$-th partial sum of the trigonometric Fourier series can be represented as
  #eq[
    $\ S_n (f, x) = 1 / pi integral_(-pi)^pi f(t) D_n (x - t) dif mu(t) = 1 / pi integral_(-pi)^pi f(x + u) D_n (u) dif mu(u)$
  ]
]

#theorem(
  "Riemann's oscillation theorem",
)[
  If $f in L_1(I)$, where $I$ is a finite or infinite interval, then
  #eq[
    $lim_(lambda -> oo) integral_I f(x) cos(lambda x) dif mu(x) = lim_(lambda -> oo) integral_I f(x) sin(lambda x) dif mu(x) = 0$
  ]
]

#theorem("<NAME>")[
  If $f in L_(2 pi)$ and $phi_x_0 in L_1(0, delta), delta > 0$, where
  #eq[
    $\ phi_x_0 (t) := (f(x_0 + t) + f(x_0 - t) - 2 S (x_0)) / t$
  ]
  then the trigonometric Fourier series of $f(x)$ converges to $S(x_0)$
] <dini>

#proof[
  Consider the difference $S_n (f, x_0) - S(x_0)$. Using the representation lemma, we can write it as
  #eq[
    $S_n (f, x_0) - S(x_0) attach(=, t: "trick") 1 / pi integral_0^pi (f(x + u) + f(x - u) - 2S(x_0))D_n (u) dif mu(u)$
  ]
  In this step we used several facts at once:
  - The integrand is even in $u$
  - The integral of the Dirichlet kernel over $[-pi, pi]$ equals $pi$
  - Replacing $t$ with $-t$ in the representation of the partial sum changes nothing.

  Let us continue the chain of transformations, expanding in the formula for the Dirichlet kernel
  #eq[
    $sin((n + 1 / 2) t) = sin(n t) cos(t / 2) + cos(n t) sin(t / 2)$
  ]
  and also adding and subtracting the integral
  #eq[
    $1 / pi integral_0^delta (f(x + t) + f(x - t) - 2S(x_0)) / t sin(n t) dif mu(t)$
  ]
  So, let us proceed:
  #eq[
    $S_n (f, x_0) - S(x_0) = \
    1 / pi integral_0^delta (f(x + t) + f(x - t) - 2S(x_0)) / t sin(n t) dif mu(t) + \
    1 / pi integral_0^pi (f(x + t) + f(x - t) - 2S(x_0)) cos(n t) / 2 dif mu(t) + \
    1 / pi integral_delta^pi (f(x + t) + f(x - t) - 2S(x_0)) (sin(n t) cos(t / 2)) / (2 sin(t / 2)) dif mu(t) + \
    1 / pi integral_0^delta (f(x + t) + f(x - t) - 2S(x_0)) sin(n t) (cos(t / 2) / (2 sin(t / 2)) - 1 / t) dif mu(t) $
  ]
  By hypothesis $phi_x_0$ is summable, so by Riemann's oscillation theorem the first term tends to zero.

  $f(x + t) + f(x - t) - 2S(x_0)$ is summable as the sum of summable functions and a constant, so by Riemann's oscillation theorem the second term tends to zero.

  In the third term $(f(x + t) + f(x - t) - 2S(x_0)) (cos(t / 2)) / (2 sin(t / 2)) in L_1[delta, pi]$, since we are bounded away from zero, so by Riemann's oscillation theorem the third term tends to zero.

  For the fourth term, consider the difference:
  #eq[
    $cos(t / 2) / (2 sin(t / 2)) - 1 /t attach(tilde.op, t: t -> 0) (1 - t^2 / 8) / (2 (t / 2 - t^3 / 48)) - 1 / t = (t - t^3 / 8 - t + t^3 / 24) / (t^2) = - t / 12 -> 0$
  ]
  So we have multiplied the summable function $f(x + t) + f(x - t) - 2S(x_0)$ by a function with a removable discontinuity at zero, hence
  #eq[
    $(f(x + t) + f(x - t) - 2S(x_0))(cos(t / 2) / (2 sin(t / 2)) - 1 /t) in L_1[0, delta]$
  ]
  and we apply the oscillation theorem once more.
]

#definition[
  We say that a function $f$ satisfies the *Hölder condition* of order $alpha in (0, 1]$ at a point $x_0$ if the finite one-sided limits $f(x_0 plus.minus 0)$ exist and there are constants $C > 0, delta > 0$ such that
  #eq[
    $forall t, 0 < t < delta : space abs(f(x_0 + t) - f(x_0 + 0)) <= C t^alpha and abs(f(x_0 - t) - f(x_0 - 0)) <= C t^alpha$
  ]
]

#theorem(
  "<NAME>",
)[
  If $f in L_(2 pi)$ satisfies the Hölder condition of order $alpha$ at a point $x_0$, then the trigonometric Fourier series of $f(x)$ converges at $x_0$ to $(f(x_0 + 0) + f(x_0 - 0)) / 2$
]

#proof[
  By the statement of the theorem, we want
  #eq[
    $S(x_0) = (f(x_0 + 0) + f(x_0 - 0)) / 2$
  ]
  So the function $phi_x_0$ from the Dini test takes the form
  #eq[
    $\ phi_x_0 (t) = (f(x_0 + t) - f(x_0 + 0) + (f(x_0 - t) - f(x_0 - 0))) / t$
  ]
  That $phi$ is measurable is obvious. It remains to show that the integral is bounded:
  #eq[
    $abs(integral_0^delta phi_x_0 (t) dif mu(t)) <= integral_0^delta abs(f(x_0 + t) - f(x_0 + 0)) / t dif mu(t) + integral_0^delta abs(f(x_0 - t) - f(x_0 - 0)) / t dif mu(t) <= \
    2 C integral_0^delta t^(alpha - 1) dif mu(t) = 2C delta^alpha / alpha$
  ]
  Hence the Dini test applies, and the theorem is proved.
]
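The closed form of the Dirichlet kernel quoted above, $D_n (u) = 1/2 + sum_(k=1)^n cos(k u) = sin((n + 1/2) u) \/ (2 sin(u\/2))$, can be checked numerically. A small Python sketch, not part of the original notes:

```python
import math

def dirichlet_sum(n, u):
    """Left-hand side: 1/2 + sum_{k=1}^{n} cos(k*u)."""
    return 0.5 + sum(math.cos(k * u) for k in range(1, n + 1))

def dirichlet_closed(n, u):
    """Right-hand side: sin((n + 1/2) u) / (2 sin(u/2))."""
    return math.sin((n + 0.5) * u) / (2.0 * math.sin(u / 2.0))

# The two expressions agree for u not a multiple of 2*pi.
for n in (1, 5, 20):
    for u in (0.3, 1.0, 2.5):
        assert abs(dirichlet_sum(n, u) - dirichlet_closed(n, u)) < 1e-9
print("identity verified")
```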
https://github.com/paugarcia32/CV
https://raw.githubusercontent.com/paugarcia32/CV/main/modules_es/certificates.typ
typst
Apache License 2.0
#import "../brilliant-CV/template.typ": *

#cvSection("Certificates")

#cvHonor(
  date: [2024],
  title: [Google Cybersecurity Professional Certificate],
  issuer: [Coursera],
)
https://github.com/hugo-b-r/insa-template-typst
https://raw.githubusercontent.com/hugo-b-r/insa-template-typst/master/examples/short-report.typ
typst
#import "../templates/short-report.typ": project

#show: project.with(
  title: "A spoonful of title",
  authors: ("<NAME>",),
)

#lorem(42)

= First Lorem

== First sub-lorem

=== First sub-sub-lorem

#lorem(42)

=== Second sub-sub-lorem

#lorem(42)

== Second sub-lorem

=== Third sub-sub-lorem

#lorem(42)

=== Fourth sub-sub-lorem

#lorem(42)
https://github.com/7sDream/fonts-and-layout-zhCN
https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/05-features/glyph-class.typ
typst
Other
#import "/template/template.typ": web-page-template
#import "/template/components.typ": note
#import "/lib/glossary.typ": tr

#show: web-page-template

// ## Glyph Classes and Named Classes
== #tr[glyph] classes and named classes

// Now let’s write a set of rules to turn lower case vowels into upper case vowels:
Now let's write a set of rules that turns the lowercase vowels into uppercase vowels:

```fea
feature liga {
    sub a by A;
    sub e by E;
    sub i by I;
    sub o by O;
    sub u by U;
} liga;
```

// That was a lot of work! Thankfully, it turns out we can write this in a more compact way. Glyph classes give us a way of grouping glyphs together and applying one rule to all of them:
That was rather long. Fortunately, we can write it in a more compact form: by grouping #tr[glyph]s into a #tr[glyph] class we can apply one rule to all of them:

```fea
feature liga {
    sub [a e i o u] by [A E I O U];
} liga;
```

// Try this in OTLFiddle too. You'll find that when a class is used in a substitution, corresponding members are substituted on both sides.
Try this example in `OTLFiddle` too. You will find that when a #tr[glyph] class is used in a #tr[substitution] rule, the members at the same position within the class are matched and substituted on the two sides respectively.

// We can also use a glyph class on the "match" side, but not in the replacement side:
We can also use a #tr[glyph] class on the match side only:

```fea
feature liga {
    sub f [a e i o u] by f_f;
} liga;
```

This is equivalent to writing:

```fea
feature liga {
    sub f a by f_f;
    sub f e by f_f;
    sub f i by f_f;
    # ...
} liga;
```

// Some classes we will use more than once, and it's tedious to write them out each time. We can *define* a glyph class, naming a set of glyphs so we can use the class again later:
Some classes are used more than once, and spelling out every #tr[glyph] each time would be tedious. In that case we can *define* a named #tr[glyph] class, so it can be reused later in the code:

```fea
@lower_vowels = [a e i o u];
@upper_vowels = [A E I O U];
```

// Now anywhere a glyph class appears, we can use a named glyph class instead (including in the definition of other glyph classes!):
Now anywhere a #tr[glyph] class may appear, you can use one of these names instead, even when defining other #tr[glyph] classes:

```fea
@vowels = [@lower_vowels @upper_vowels];

feature liga {
    sub @lower_vowels by @upper_vowels;
} liga;
```
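The pairwise, index-matched behaviour of class-to-class substitution described above can be modelled outside of FEA syntax. A tiny Python sketch, illustrative only and not how a real shaping engine works:

```python
# Model of "sub [a e i o u] by [A E I O U]": each member of the match class
# maps to the member at the same index in the replacement class.
match_class = ["a", "e", "i", "o", "u"]
replacement_class = ["A", "E", "I", "O", "U"]
rule = dict(zip(match_class, replacement_class))

def apply_single_sub(glyphs, rule):
    """Apply a single-substitution rule to a glyph sequence; unmatched glyphs pass through."""
    return [rule.get(g, g) for g in glyphs]

print(apply_single_sub(list("aeixou"), rule))
```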
https://github.com/augustebaum/petri
https://raw.githubusercontent.com/augustebaum/petri/main/README.md
markdown
MIT License
# petri Petri nets in Typst. ## Examples ![](/tests/fletcher/relative-positioning/ref/1.png) ![](/tests/fletcher/four-seasons/ref/1.png) ![](/tests/fletcher/two-concurrent-processes/ref/1.png) ![](/tests/fletcher/large-example/ref/1.png) Find the full list of examples in the [tests](/tests/README.md) directory. Yes, the examples are just rendered tests! ## TODO - [ ] Things like `node-defocus` seem to have no effect - [ ] Make the fact that the label moves not change the edge behaviour (in particular, make it possible for an edge to behave as though the label was not there) - [ ] Refactor places and transitions to custom CeTZ shapes, which can then be used to define custom fletcher shapes - [x] Add CeTZ shapes - [ ] Maybe don't focus so much on fletcher in general? - [ ] See if it's possible to change styles using CeTZ's `set-style` (hard to say out loud!) - [x] Example gallery (render the tests) - [x] Show 4 comprehensive examples in the top-level README
https://github.com/csimide/cuti
https://raw.githubusercontent.com/csimide/cuti/master/lib.typ
typst
MIT License
#let fakebold(base-weight: none, s, ..params) = { set text(weight: base-weight) if base-weight != none set text(weight: "regular") if base-weight == none set text(..params) if params != () context { set text(stroke: 0.02857em + text.fill) s } } #let regex-fakebold(reg-exp: ".", base-weight: none, s, ..params) = { show regex(reg-exp): it => { fakebold(base-weight: base-weight, it, ..params) } s } #let show-fakebold(reg-exp: ".", base-weight: none, s, ..params) = { show text.where(weight: "bold").or(strong): it => { regex-fakebold(reg-exp: reg-exp, base-weight: base-weight, it, ..params) } s } #let cn-fakebold(s, ..params) = { regex-fakebold(reg-exp: "[\p{script=Han}!-・〇-〰—]", base-weight: "regular", s, ..params) } #let show-cn-fakebold(s, ..params) = { show-fakebold(reg-exp: "[\p{script=Han}!-・〇-〰—]", base-weight: "regular", s, ..params) } #let regex-fakeitalic(reg-exp: "\b.+?\b", ang: -18.4deg, s) = { show regex(reg-exp): it => { box(skew(ax: ang, reflow: false, it)) } s } #let fakeitalic(ang: -18.4deg, s) = regex-fakeitalic(ang: ang, s) #let fakesc(s) = { show regex("[\p{Lu}]"): text.with((10 / 8) * 1em) text(0.8em, upper(s)) }
https://github.com/pavelzw/moderner-cv
https://raw.githubusercontent.com/pavelzw/moderner-cv/main/README.md
markdown
MIT License
# moderner-cv This is a typst adaptation of LaTeX's [moderncv](https://github.com/moderncv/moderncv), a modern curriculum vitae class. ## Requirements This template uses FontAwesome icons via the [fontawesome typst package](https://typst.app/universe/package/fontawesome). In order to properly use it, you need to have fontawesome installed on your system or have typst configured (via `--font-path`) to use the fontawesome font files. You can download fontawesome [here](https://fontawesome.com/download). ## Usage ```typst #import "@preview/moderner-cv:0.1.0": * #show: moderner-cv.with( name: "<NAME>", lang: "en", social: ( email: "<EMAIL>", github: "jane-doe", linkedin: "jane-doe", ), ) // ... ``` ## Examples ![Jane Doe's CV](assets/example.png) ## Building and Testing Locally To build and test the template locally, you can run `pixi run watch` in the root of this repository. Please ensure to have linked this package to your local typst packages, see [here](https://github.com/typst/packages#local-packages): ```bash # linux mkdir -p ~/.local/share/typst/packages/preview/moderner-cv ln -s $(pwd) ~/.local/share/typst/packages/preview/moderner-cv/0.1.0 # macos mkdir -p ~/Library/Application\ Support/typst/packages/preview/moderner-cv ln -s $(pwd) ~/Library/Application\ Support/typst/packages/preview/moderner-cv/0.1.0 ```
https://github.com/mvuorre/quarto-preprint
https://raw.githubusercontent.com/mvuorre/quarto-preprint/main/_extensions/preprint/typst-show.typ
typst
Creative Commons Attribution 4.0 International
#show: doc => preprint( $if(title)$ title: [$title$], $endif$ $if(running-head)$ running-head: [$running-head$], $endif$ $if(by-author)$ authors: ( $for(by-author)$ $if(it.name.literal)$ ( name: [$it.name.literal$], affiliation: [$for(it.affiliations)$$it.id$$sep$, $endfor$], $if(it.orcid)$orcid: "https://orcid.org/$it.orcid$",$endif$ $if(it.email)$email: [$it.email$]$endif$), $endif$ $endfor$ ), $endif$ $if(affiliations)$ affiliations: ( $for(affiliations)$( id: "$it.id$", name: "$it.name$", $if(it.department)$department: "$it.department$"$endif$ ), $endfor$ ), $endif$ $if(date)$ date: [$date$], $endif$ $if(leading)$ leading: $leading$, $endif$ $if(branding)$ branding: "$branding$", $endif$ $if(spacing)$ spacing: $spacing$, $endif$ $if(linkcolor)$ linkcolor: $linkcolor$, $endif$ $if(citation)$ citation: ( type: "$citation.type$", container-title: "$citation.container-title$", doi: "$citation.doi$", url: "$citation.url$" ), $endif$ $if(authornote)$ authornote: [$authornote$], $endif$ $if(lang)$ lang: "$lang$", $endif$ $if(region)$ region: "$region$", $endif$ $if(abstract)$ abstract: [$abstract$], $endif$ $if(keywords)$ keywords: [$for(keywords)$$it$$sep$, $endfor$], $endif$ $if(wordcount)$ wordcount: [$wordcount$], $endif$ $if(margin)$ margin: ($for(margin/pairs)$$margin.key$: $margin.value$,$endfor$), $endif$ $if(papersize)$ paper: "$papersize$", $endif$ $if(mainfont)$ font: ("$mainfont$",), $endif$ $if(fontsize)$ fontsize: $fontsize$, $endif$ $if(section-numbering)$ section-numbering: "$section-numbering$", $endif$ $if(toc)$ toc: $toc$, $endif$ $if(toc-depth)$ toc-depth: $toc-depth$, $endif$ $if(toc-title)$ toc-title: "$toc-title$", $endif$ $if(toc-indent)$ toc-indent: "$toc-indent$", $endif$ $if(cols)$ cols: $cols$, $endif$ $if(col-gutter)$ col-gutter: $col-gutter$, $endif$ $if(bibliography-style)$ bibliography-style: [$bibliography-style$], $endif$ $if(bibliography-title)$ bibliography-title: [$bibliography-title$], $endif$ doc, )
https://github.com/tiankaima/typst-notes
https://raw.githubusercontent.com/tiankaima/typst-notes/master/7e1810-algo_hw/hw2.typ
typst
#import "utils.typ": *

== HW 2 (Week 3)

Due: 2024.03.24

#rev1_note[
  + Review: max-heaps and min-heaps

    A max-heap is a binary tree satisfying $A["PARENT"(i)] >= A[i]$, where $"PARENT"(i) = floor(i\/2)$ and the two children of node $i$ are $2i, 2i+1$. A min-heap instead satisfies $A["PARENT"(i)] <= A[i]$.

    Insertion: add a new leaf at the bottom level, rightmost position (in practice, append to the end of the array), then sift it up until the max- (min-)heap property holds again. Adjustment cost: $O(log n)$.

    Deletion: sift down; take the largest (smallest) of the children, swap it with the current node, and adjust recursively. Adjustment cost: $O(log n)$.

    Building a heap: starting from the last non-leaf node, adjust each node in turn towards the front until the max- (min-)heap property holds. Time $O(n)$: think of it as merging already-built subheaps from the leaves up to the root, where each sift-down costs $O(log n - k)$, for a total of $O(n log n) - sum log i = O(n)$.

  + Review: counting sort

    A stable scheme, for $n$ items in the range $[0,k]$:
    - Allocate an array $C$ indexed over $[0,k]$, zeroed.
    - Traverse the input once, counting each value into $C$.
    - Compute prefix sums: $C[i] = C[i] + C[i-1]$.
    - Walk the input from the tail, placing each item at the position given by $C$ and decrementing that entry, so the positions drawn from $C$ are always distinct.
    Total time: $O(n+k)$.

  + Review: radix sort

    Suppose each item has $k$ keys, each with its own ordering. Sort starting from the first key, breaking ties on the second key, and so on through the last key. In the problem below, the inner sort is counting sort; each pass sorts the keys of $n$ items across $l$ groups, so each pass costs $O(n)$, and three passes are still $O(n)$.

  + Review: lower bound for comparison sorts

    Comparison-based sorting has the lower bound $Omega(n log n)$. The proof uses the decision-tree model: $n$ distinct items have $n!$ orderings, so the decision tree needs at least $n!$ leaves to represent every possible sorted outcome. The height $h$ therefore satisfies $h >= log_2(n!) = Theta(n log n)$; the worst case performs $h$ comparisons, so the lower bound on comparison sorting is $Omega(n log n)$.
]

=== Question 6.2-6

The code for MAX-HEAPIFY is quite efficient in terms of constant factors, except possibly for the recursive call in line 10, for which some compilers might produce inefficient code. Write an efficient MAX-HEAPIFY that uses an iterative control construct (a loop) instead of recursion.

#ans[
  Consider the following pseudocode:

  ```txt
  MAX-HEAPIFY(A, i)
      while true
          l = LEFT(i)
          r = RIGHT(i)
          if l <= A.heap-size and A[l] > A[i]
              largest = l
          else largest = i
          if r <= A.heap-size and A[r] > A[largest]
              largest = r
          if largest != i
              exchange A[i] with A[largest]
              i = largest
          else break
  ```
]

=== Question 6.5-9

Show how to implement a first-in, first-out queue with a priority queue. Show how to implement a stack with a priority queue. (Queues and stacks are defined in Section 10.1.3.)
#ans[
  - For a stack, insert each element with increasing priority and pop the element with the highest priority; in pseudocode:

  ```txt
  PUSH(S, x)
      S.priority = S.priority + 1
      MAX-HEAP-INSERT(S, x, S.priority)

  POP(S)
      if S.heap-size < 1
          error "underflow"
      else return HEAP-EXTRACT-MAX(S)
  ```

  - For a queue, insert each element with decreasing priority and pop the element with the highest priority; in pseudocode:

  ```txt
  ENQUEUE(Q, x)
      Q.priority = Q.priority - 1
      MAX-HEAP-INSERT(Q, x, Q.priority)

  DEQUEUE(Q)
      if Q.heap-size < 1
          error "underflow"
      else return HEAP-EXTRACT-MAX(Q)
  ```
]

=== Question 7.4-6

Consider modifying the PARTITION procedure by randomly picking three elements from subarray $A[p : r]$ and partitioning about their median (the middle value of the three elements). Approximate the probability of getting worse than an $alpha$-to-$(1 - alpha)$ split, as a function of $alpha$ in the range $0 < alpha < 1/2$.

#rev1_note[
  Assume the elements are drawn with replacement, or that $r-p$ is large enough that the three draws are effectively independent.

  Consider the distribution of the median of the three draws: whenever it falls in $[0,alpha] union [1-alpha,1]$, the chosen pivot $q$ is worse than any pivot in $(0,alpha) union (alpha,1)$. By symmetry we only need the left half. There are two ways the median can land in $[0,alpha]$:
  - two draws land in $[0,alpha]$ and the remaining one in $[alpha,1]$: $binom(3,2) times alpha^2(1-alpha)$
  - all three draws land in $[0,alpha]$: $alpha^3$
  It is easy to check that these cases are disjoint and exhaust all ways the median can land there, so we simply multiply by $2$:
]

#ans[
  *Assuming the same element could be picked more than once* (which should be the case in the real world), the probability of getting worse than an $alpha$-to-$(1 - alpha)$ split is the probability that the median of the three draws falls within $alpha$ of either end:

  $ P = 2 times [binom(3,2) times alpha^2(1 - alpha) + alpha^3] = 6 alpha^2 - 4 alpha^3 $
]

=== Question 8.2-7

Counting sort can also work efficiently if the input values have fractional parts, but the number of digits in the fractional part is small. Suppose that you are given n numbers in the range $0$ to $k$, each with at most $d$ decimal (base $10$) digits to the right of the decimal point. Modify counting sort to run in $Theta(n + 10^d k)$ time.
#ans[
  To achieve $Theta(n + 10^d k)$ time, we first use $Theta(n)$ time to multiply each number by $10^d$, then enlarge $C[0, k]$ to $C[0, 10^d k]$, and finally use $Theta(10^d k)$ time to sort the numbers. With the other parts of counting sort unchanged, the pseudocode is as follows:

  ```txt
  COUNTING-SORT(A, B, k, d)
      let C[0, 10^d k] be a new array
      for i = 0 to 10^d k
          C[i] = 0
      for j = 1 to A.length
          C[A[j] * 10^d] = C[A[j] * 10^d] + 1
      for i = 1 to 10^d k
          C[i] = C[i] + C[i - 1]
      for j = A.length downto 1
          B[C[A[j] * 10^d]] = A[j]
          C[A[j] * 10^d] = C[A[j] * 10^d] - 1
  ```

  This is the required $Theta(n + 10^d k)$-time algorithm.
]

=== Question 8.3-5

Show how to sort $n$ integers in the range $0$ to $n^3 - 1$ in $O(n)$ time.

#ans[
  First convert each number to base $n$, then use counting sort to sort the numbers digit by digit. Since each number now has at most $log_n n^3 = 3$ digits, 3 passes of counting sort are enough, and each pass takes $O(n)$ time since there are only $n$ numbers and $n$ possible digit values.
]

=== Question 9.3.9

Describe an $O(n)$-time algorithm that, given a set $S$ of $n$ distinct numbers and a positive integer $k <= n$, determines the $k$ numbers in $S$ that are closest to the median of $S$.

#rev1_note[
  In Step 2 below, $abs(y - x)$ is computed for each element $y$; this difference serves as the counting-sort key, with the original value attached as the payload, and the first $k$ elements of the result are then taken.
]

#ans[
  + $O(n)$: Using SELECT, find the median $x$ of $S$.
  + $O(n)$: For each element $y$ in $S$, compute $abs(y - x)$.
  + $O(n)$: Use COUNTING-SORT to sort the elements by these absolute differences.
  + $O(k)$: Return the first $k$ elements of the sorted array.

  This is the required $O(n)$-time algorithm.
]
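The priority-queue constructions from Question 6.5-9 can also be made concrete. Below is a Python sketch using `heapq` as the priority queue; since `heapq` is a min-heap, the stack assigns decreasing keys (the newest element has the smallest key) while the queue assigns increasing keys. The class names are illustrative, not from CLRS.

```python
import heapq

class PQStack:
    """LIFO stack via a min-heap priority queue: newer items get smaller keys."""
    def __init__(self):
        self._heap = []
        self._count = 0

    def push(self, x):
        self._count -= 1               # decreasing key => last in, first out
        heapq.heappush(self._heap, (self._count, x))

    def pop(self):
        return heapq.heappop(self._heap)[1]

class PQQueue:
    """FIFO queue via a min-heap priority queue: older items get smaller keys."""
    def __init__(self):
        self._heap = []
        self._count = 0

    def enqueue(self, x):
        self._count += 1               # increasing key => first in, first out
        heapq.heappush(self._heap, (self._count, x))

    def dequeue(self):
        return heapq.heappop(self._heap)[1]

s = PQStack()
for v in (1, 2, 3):
    s.push(v)
print([s.pop() for _ in range(3)])      # LIFO order

q = PQQueue()
for v in (1, 2, 3):
    q.enqueue(v)
print([q.dequeue() for _ in range(3)])  # FIFO order
```

Both operations cost $O(log n)$ per push/pop, matching the heap-based priority queue of Section 6.5.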