hexsha
stringlengths 40
40
| size
int64 5
1.04M
| ext
stringclasses 6
values | lang
stringclasses 1
value | max_stars_repo_path
stringlengths 3
344
| max_stars_repo_name
stringlengths 5
125
| max_stars_repo_head_hexsha
stringlengths 40
78
| max_stars_repo_licenses
sequencelengths 1
11
| max_stars_count
int64 1
368k
⌀ | max_stars_repo_stars_event_min_datetime
stringlengths 24
24
⌀ | max_stars_repo_stars_event_max_datetime
stringlengths 24
24
⌀ | max_issues_repo_path
stringlengths 3
344
| max_issues_repo_name
stringlengths 5
125
| max_issues_repo_head_hexsha
stringlengths 40
78
| max_issues_repo_licenses
sequencelengths 1
11
| max_issues_count
int64 1
116k
⌀ | max_issues_repo_issues_event_min_datetime
stringlengths 24
24
⌀ | max_issues_repo_issues_event_max_datetime
stringlengths 24
24
⌀ | max_forks_repo_path
stringlengths 3
344
| max_forks_repo_name
stringlengths 5
125
| max_forks_repo_head_hexsha
stringlengths 40
78
| max_forks_repo_licenses
sequencelengths 1
11
| max_forks_count
int64 1
105k
⌀ | max_forks_repo_forks_event_min_datetime
stringlengths 24
24
⌀ | max_forks_repo_forks_event_max_datetime
stringlengths 24
24
⌀ | content
stringlengths 5
1.04M
| avg_line_length
float64 1.14
851k
| max_line_length
int64 1
1.03M
| alphanum_fraction
float64 0
1
| lid
stringclasses 191
values | lid_prob
float64 0.01
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dc7fc10f212444e2cf64cdd3d21a5626c2695e1b | 1,505 | md | Markdown | examples/webgoat/vuln-27/readme.md | gauntlt/gauntlt-demo | 13a1a1306bcb4aaa70ba42c97f469b8a35c65902 | [
"MIT"
] | 21 | 2015-04-11T16:30:43.000Z | 2021-08-24T21:40:51.000Z | examples/webgoat/vuln-27/readme.md | gauntlt/gauntlt-demo | 13a1a1306bcb4aaa70ba42c97f469b8a35c65902 | [
"MIT"
] | 12 | 2015-05-13T04:25:15.000Z | 2015-12-12T06:03:38.000Z | examples/webgoat/vuln-27/readme.md | gauntlt/gauntlt-demo | 13a1a1306bcb4aaa70ba42c97f469b8a35c65902 | [
"MIT"
] | 61 | 2015-03-16T21:39:48.000Z | 2021-05-12T17:20:28.000Z | TEST vuln-27.attack
This is a Gauntlt test to check if the vulnerability in WebGoat located at Parameter Tampering => Bypass HTML Field Restrictions (vuln-27) exists.
It will return a
- 1 (error) if the vulnerability is present
- 0 (success) if the vulnerability is fixed (aka not present)
This test assumes 3 things:
(1) That the python requests, json, and sys modules are all installed. The json and sys modules should both be included with python as of version 2.6 and later.
```
$ pip install requests
```
(This pip command assumes that the user running it has permissions to write to the directory that pip is using. If there are permission errors, then it *can* be installed as root; however, **the authors do not vouch for anything in the requests library, and extreme caution should be taken when installing foreign libraries as root**.) If pip is not installed, it can be installed with
```
$ sudo apt-get install python-pip
```
(2) WebGoat is running on http://127.0.0.1:8080/WebGoat/
(3) The script examples/webgoat/vuln-27/vuln-27.py is in the path ($PATH). One possible way to remidy this problem is to ensure that the python script is executable and then copy it into /usr/bin/.
```
$ chmod a+x vuln-27.py
$ sudo cp vuln-27.py /usr/bin/
```
Testing vuln-27 can be done outside of Gauntlt by navigating to the examples/webgoat/vuln-27/ directory and running:
```
$ python vuln-27.py
```
This Gauntlt test was written by Kyle DeHolton and Bryant Peng on Tues, December 8, 2015.
| 38.589744 | 385 | 0.749502 | eng_Latn | 0.998897 |
dc802164c6ee1b76cde8e44a72c645a7f3f5cb69 | 44 | md | Markdown | README.md | golangdorset/ast | e08315a357d5ca7b03866e26eead7a35977a693b | [
"MIT"
] | null | null | null | README.md | golangdorset/ast | e08315a357d5ca7b03866e26eead7a35977a693b | [
"MIT"
] | null | null | null | README.md | golangdorset/ast | e08315a357d5ca7b03866e26eead7a35977a693b | [
"MIT"
] | null | null | null | # ast
Making Go tools that harness the AST.
| 14.666667 | 37 | 0.75 | eng_Latn | 0.997555 |
dc80a25a53633cc0ae987032f2c17a187c7ac3c1 | 2,400 | md | Markdown | hacking/_posts/2020-10-29-fishy.md | 0xordinaryday/blog | 8b7bc6c3ad43f6631daa5337b2f2e4f6e3656ac4 | [
"MIT"
] | null | null | null | hacking/_posts/2020-10-29-fishy.md | 0xordinaryday/blog | 8b7bc6c3ad43f6631daa5337b2f2e4f6e3656ac4 | [
"MIT"
] | 3 | 2021-05-20T19:06:26.000Z | 2021-09-12T09:55:32.000Z | hacking/_posts/2020-10-29-fishy.md | 0xordinaryday/blog | 8b7bc6c3ad43f6631daa5337b2f2e4f6e3656ac4 | [
"MIT"
] | null | null | null | ---
layout: post
title: "Vulnhub - FishyMail: 1"
date: 2020-10-29 21:00:00 +1100
category: hacking
---
## Introduction
*This is my first vulnerable virtual machine called fishymail. You can download it here load the .vdi up on VirtualBox and give it a try.*
This is [FishyMail: 1](https://www.vulnhub.com/entry/fishymail-1,583/) from vulnhub.
## Setup
This box is provided as a virtual disk image, not as a full VM. So you have to create a new VM in VirtualBox and then add the disk image. Also the page doesn't tell you, but it's running OpenBSD 64-bit, so you have to figure that out for yourself? This is somewhat unusual and it strikes me as odd that it's not mentioned on the box description on Vulnhub. I mean it does say 'BSD' but that doesn't give you the full picture. Anyway ...
## Whatever
I'm not going to bother writing up all the steps for this. Basically it's find a text file on the webserver containing base64 encoded text that gives you some credentials and then log in to the box via SSH. Only one set of the creds works. On the box is another file with base64 encoded creds (passwords hashed with MD5), and again this will get you another user with a better shell via SSH. The first user has a limited shell.
## Privsec
Being OpenBSD 6.6 there is at least one Local Privilege Escalation exploit available - e.g. [here](https://seclists.org/bugtraq/2019/Dec/25). When I tried this, I could only get errors about the disk being full and I literally couldn't write *anything* to the disk.
After a while I checked a writeup and they ran a similar exploit with no issues. They were apparently able to write to the disk with no problems; what was going wrong? I tried setting up the machine again a few times including with a larger capacity, but nothing changed - it still complained the disk was full - eg:
{% highlight shell %}
fishymail$ df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/wd0a 867M 849M -25.4M 103% /
/dev/wd0e 365M 1.8M 345M 1% /home
/dev/wd0d 2.5G 1.8G 614M 75% /usr
{% endhighlight %}
I couldn't find any options for enlarging the partitions without being root, and it seemed like even though I was on the right track there was some issue with my configuration that was preventing me moving forward. I decided not to spend any more time on it.
Not an overly satisfying experience.
| 68.571429 | 436 | 0.7425 | eng_Latn | 0.999623 |
dc80af644690c3c06a89e5dd97fc64d5f42d9158 | 2,894 | md | Markdown | README-CN.md | TexBlock/Silent-Gear | 92f8d7081bacb10594e19d5447ad16bca34002ed | [
"MIT"
] | null | null | null | README-CN.md | TexBlock/Silent-Gear | 92f8d7081bacb10594e19d5447ad16bca34002ed | [
"MIT"
] | null | null | null | README-CN.md | TexBlock/Silent-Gear | 92f8d7081bacb10594e19d5447ad16bca34002ed | [
"MIT"
] | null | null | null | # Silent-Gear
用于 Minecraft 的模块化工具/装甲模块。制作是使用蓝图处理的,这消除了所有配方冲突。材料和零件可以通过数据包与 JSON 文件一起添加。装备制作配方(所需材料、所需零件的数量等)也可以通过数据包进行更改。
这是基于并完全取代了 Silent's Gems 的工具/装甲系统,但有各种变化和改进。
附加模组可以添加新的零件类型、齿轮类型和特性类型,以及数据包可以做的任何事情。
## 链接和下载
- [CurseForge](https://minecraft.curseforge.com/projects/silent-gear) (下载和更多信息)
- [Wiki](https://github.com/SilentChaos512/Silent-Gear/wiki) (高级信息)
- [GitHub repository](https://github.com/SilentChaos512/Silent-Gear) (源代码)
- [Issue Tracker on GitHub](https://github.com/SilentChaos512/Silent-Gear/issues) (错误报告和功能请求)
- [Discord Server](https://discord.gg/Adyk9zHnUn) (获得快速问题解答的最简单方法,不要用于报告错误)
### 下载注意事项
**我只将构建上传到 Minecraft CurseForge。** 如果您从 Curse/CurseForge 或 Twitch 启动器(或在某些情况下作为整合包的一部分)以外的其他地方下载了 mod,我无法对文件或其内容做出任何保证,因为它是在未经我许可的情况下上传的。
-----------------------------------
## 制作附属模组
要在项目中使用 Silent Gear,您需要添加 Silent Gear、Silent Lib 和silent-utils 的依赖项。将以下内容添加到您的`build.gradle`。
您还需要生成一个 GitHub 令牌并将其与您的 GitHub 用户名一起添加到您的个人 `gradle.properties` 文件中的 `C:\Users\YOUR_USERNAME\.gradle` 或 `~/.gradle/gradle.properties` 中。 该文件可能不存在,您必须自己创建。
GitHub 令牌可以在 [这里](https://github.com/settings/tokens) 生成。 单击 _生成新令牌_ 并单击 _read:packages_ 的复选标记
`C:\Users\YOUR_USERNAME\.gradle` 或 `~/.gradle/gradle.properties` 中的 `gradle.properties` 文件示例
```gradle
//你的 GitHub 用户名
gpr.username=SilentChaos512
// 您的 GitHub 生成的具有读取权限的令牌(一组十六进制数字)
gpr.token=paste_your_token_here
```
-----------------------------------
添加到`build.gradle`的代码。 _注意“silentlib”没有连字符。 在创建 repo 时,我采用了不同的命名方式。
我更喜欢把我的认证细节分配给一个变量,以减少重复,使构建文件看起来更干净。
```gradle
// GitHub 包的身份验证详细信息
// 这也可以放在 `repositories` 块中,或者如果您愿意,也可以将其内联
def gpr_creds = {
username = property('gpr.username')
password = property('gpr.token')
}
```
添加所有必要的存储库...
```gradle
repositories {
maven {
url = uri("https://maven.pkg.github.com/silentchaos512/silent-gear")
credentials gpr_creds
}
maven {
url = uri("https://maven.pkg.github.com/silentchaos512/silentlib")
credentials gpr_creds
}
maven {
url = uri("https://maven.pkg.github.com/silentchaos512/silent-utils")
credentials gpr_creds
}
}
```
最后,添加Silent Gear和Silent Lib的依赖关系(这将为你包括silent-utils)。
```gradle
dependencies {
// 将VERSION替换为你需要的版本,形式为“MC_VERSION-MOD_VERSION”
// 例如: compile fg.deobf("net.silentchaos512:silent-gear:1.16.3-2.+")
// 可用的构建可以在这里找到:https://github.com/SilentChaos512/silent-gear/packages
// 在某些情况下,“排除模块”行将防止导入错误
compile fg.deobf("net.silentchaos512:silent-gear:VERSION") {
exclude module: 'forge'
exclude module: 'jei-1.16.3'
exclude module: 'silent-lib-1.16.3'
exclude module: 'curios-forge'
}
// 和以前一样,VERSION的形式是 "MC_VERSION-MOD_VERSION"(例如,1.16.3-4.+)。
// https://github.com/SilentChaos512/silentlib/packages
compile fg.deobf("net.silentchaos512:silent-lib:VERSION") {
exclude module: "forge"
}
}
```
| 29.835052 | 155 | 0.705598 | yue_Hant | 0.842281 |
dc8101c8c480c130e1e624e9b152695b7592db5d | 3,524 | md | Markdown | Week2/README.md | byaruhaf/RWiOSBootcamp | 2056a2f34128bdc5d15908257aa5a49a45baee80 | [
"MIT"
] | 5 | 2020-06-16T12:43:54.000Z | 2021-01-07T04:27:02.000Z | Week2/README.md | byaruhaf/RWiOSBootcamp | 2056a2f34128bdc5d15908257aa5a49a45baee80 | [
"MIT"
] | 1 | 2020-06-05T02:49:13.000Z | 2020-06-05T07:07:15.000Z | Week2/README.md | byaruhaf/RWiOSBootcamp | 2056a2f34128bdc5d15908257aa5a49a45baee80 | [
"MIT"
] | 1 | 2021-07-22T23:15:08.000Z | 2021-07-22T23:15:08.000Z | # BullsEye Game App Collection
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
![Platforms](https://img.shields.io/badge/platform-iOS-lightgrey.svg)
[![Swift Version](https://img.shields.io/badge/Swift-5.2-F16D39.svg?style=flat)](https://developer.apple.com/swift)
[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square)](http://makeapullrequest.com)
[![Twitter](https://img.shields.io/badge/twitter-@byaruhaf-blue.svg)](http://twitter.com/byaruhaf)
## Week 2 Assignment
Refactor the [BullsEye](https://files.betamax.raywenderlich.com/attachments/videos/1927/20a3598d-8d8d-4044-ad39-5a1d58c62ad9.zip) project to make it conform to the Model-View-Controller architecture.
The app can be used with any iPhone that supports **[iOS 13](https://support.apple.com/en-il/guide/iphone/iphe3fa5df43/ios)**
### Assigment Requirements completed
* Refactor the ViewController in BullsEye Game to create a UI-independent model.
* Test UI-independent model on RGBullsEye Game replace Int() type with RGB() type.
* modify slider’s minimum track tint color in BullsEye Game to provide a visual hint to the player
### Stretch Above and Beyond Requirements completed:
* “Reverse” the BullsEye game to use a text field.
* Modify slider’s minimum track tint color in RevBullsEye Game to provide a visual hint to the player
* Prevent the user from submitting text that won’t work, either because
* The text isn’t a number
* The Number outside the slider’s range
* User can dismiss the keyboard using two methods for the RevBullsEye
1. Tap screen
2. Tap done button
### Additional Assignment Goals completed
* Groups all game projects and packages under one Xcode workspace so it's easier to work on them together
* Model code for games BullsEye & RevBullsEye moved to Swift Package to avoid duplicating code.
* As an SPM Package model code can be distributed to other developers to make their version of BullsEye
* Use Apples Combine framework so UI can listen for changes in the model.
* Show animation when the user has a high score.
### Class vs Struct:
My personal preference is always to start with a struct. **why you wonder?**
Because [Crusty](https://twitter.com/DaveAbrahams/status/1104537209456091136) said so.
![Crusty](Demo/Crusty.png)
**All joking aside,**
The reason I prefer structs is the value semantics especially immutability. But as game model required **mutation** I changed the model to a class to avoid adding mutation keyword to all my model functions. The next version of the 3 games could be rewritten for **MVVM** so the **model** is a struct & the **view model** is a class.
### Shared Game assets :
The shared game assets could also be moved from the individual game project to Swift Package.
The feature **[SE-0271: Package Manager Resources](https://github.com/apple/swift-evolution/blob/master/proposals/0271-package-manager-resources.md)** will be available in swift 5.3. So the next version of BullsEye & RevBullsEye can take advantage of that.
## App Demo
### BullsEye
![RGBullsEye](Demo/Bull3.gif)
### RGBullsEye
![RGBullsEye](Demo/RGBBull.gif)
## Attribution
Game Background Image created by <a href='https://www.freepik.com/upklyak'> upklyak</a>
## Contribution
- If you have a **feature request**, open an **issue**
- If you want to **contribute**, submit a **pull request**
## License
[MIT License](https://github.com/byaruhaf/RWiOSBootcamp/blob/master/LICENSE).
| 49.633803 | 334 | 0.759648 | eng_Latn | 0.913589 |
dc8203312e36c81b7b795898c8bc3c03f456620b | 2,339 | md | Markdown | README.md | jeffprestes/wibpt | 5c70e3e47eaeda4901d862d79c1745125f1be9d0 | [
"MIT"
] | null | null | null | README.md | jeffprestes/wibpt | 5c70e3e47eaeda4901d862d79c1745125f1be9d0 | [
"MIT"
] | null | null | null | README.md | jeffprestes/wibpt | 5c70e3e47eaeda4901d862d79c1745125f1be9d0 | [
"MIT"
] | null | null | null | # Ementa Workshop de Introdução a Blockchain e Smart Contracts no Ethereum
## Jeff Prestes
Jeff tem mais de 22 anos de experiência com Desenvolvimento de Software, trabalhando desde 1999 com Internet. Hoje, Blockchain e Inteligência Artificial são suas novas paixões.
Ele participa de várias comunidades de desenvolvedores, sendo um dos líderes da comunidade Go (Golang) no Brasil e trabalhou diversos anos como Evangelista Técnico,
ministrando diversas palestras no Brasil e no Exterior.
Tem sua empresa, a NX, onde alia técnicas avançadas de arquitetura de Software com Blockchain para ajudar empresas a inovar e gerar negócios.
## Sinopse do workshop
O advento do Bitcoin trouxe uma nova tecnologia: Blockchain. Porém Vitalik Buterin ampliou as suas possibilidades com os Smart Contracts quando criou o https://ethereum.org .
Com os Smart Contracts podemos criar aplicações que nunca sairão do ar e seus dados nunca serão perdidos.
Usando uma linguagem simples, Solidity, a criação de Smart Contracts esta disponível para todos. Venha em nosso workshop se empoderar aprendendo essa revolucionária tecnologia.
## Conteúdo programático
- O que é Blockchain?
- Exemplo do que é um blockchain: https://anders.com/blockchain
- Hashes: SHA-256
- Distributed Ledger & Blockchain: quais as diferenças
- Tipos de rede: permissionada, publica, privada
- Tipos de consenso
- Comparativos das principais redes atuais
- Exemplo de Rede: Ethereum
- Tipos de rede Ethereum
- DApps - Aplicações descentralizadas
- DApps - Exemplo do Ethereum Stack
- Clientes Ethereum: Geth, parity e eth
- Contas no Ethereum
- Conceito e conteúdo de um bloco no Ethereum
- Transação no Ethereum
- Conteúdo de uma transação
- Ether: o bloco e seus "centavos"
- Criando uma rede privada (modulo para DevOps)
- Conectando a outro nó de uma rede privada (modulo para DevOps)
- Criando um Smart Contract no Ethereum usando Solidity
- Calculando custo do Contrato
- Executando seus contratos em nós privados (modulo para DevOps)
- Executando um contrato na rede de teste do Ethereum
- O que são ERCs: ERC-20 e ERC-721
- Conectando uma página web a um Smart Contract
- Enviando dados de uma página web a um Smart Contract usando Metamask
## Como contratar
Entre em contato pelo LinkedIn para saber como contratar o workshop: https://linkedin.com/in/jeffprestes
| 48.729167 | 177 | 0.795212 | por_Latn | 0.999705 |
dc82b4dc46bfc22712da8689680a57c860f5d605 | 169 | md | Markdown | site/archetypes/default.md | sajayantony/mcr-images | 9ecf2b4f6873cc33da68f1c608c7bb3a175ca5d3 | [
"MIT"
] | null | null | null | site/archetypes/default.md | sajayantony/mcr-images | 9ecf2b4f6873cc33da68f1c608c7bb3a175ca5d3 | [
"MIT"
] | null | null | null | site/archetypes/default.md | sajayantony/mcr-images | 9ecf2b4f6873cc33da68f1c608c7bb3a175ca5d3 | [
"MIT"
] | null | null | null | ---
title: "{{ replace .Name "-" " " | title }}"
date: {{ .Date }}
draft: true
---
{{ range $.Site.Data.repositories }}
{{ partial "repositories.html" . }}
{{ end }} | 18.777778 | 44 | 0.526627 | eng_Latn | 0.402993 |
dc8307956dc2e436d499729e7b283e7b2d5aa4fd | 1,229 | md | Markdown | javascriptTasks1/index.md | ungureanustefan/eleventy-base-blog1 | a10eb48a6fefd7f244e62b7d6556bf5cfd156388 | [
"MIT"
] | null | null | null | javascriptTasks1/index.md | ungureanustefan/eleventy-base-blog1 | a10eb48a6fefd7f244e62b7d6556bf5cfd156388 | [
"MIT"
] | null | null | null | javascriptTasks1/index.md | ungureanustefan/eleventy-base-blog1 | a10eb48a6fefd7f244e62b7d6556bf5cfd156388 | [
"MIT"
] | null | null | null | ---
layout: layouts/post.njk
title: Functions & Control Flow - Part 1
templateClass: tmpl-post
eleventyNavigation:
key: Functions & Control Flow - Part 1
order: 7
---
# Tasks
## Task 1
```
function hello() {
console.log("Hello")
}
hello()
```
## Task 2
```
function fullName(firstName, lastName) {
console.log("My full name is " + firstName + " " + lastName + ".");
};
fullName("Stefan", "Ungureanu");
```
## Task 2 - Second Part
```
function fullName(firstName, lastName) {
let firstAndLastName = firstName + " " + lastName;
return firstAndLastName;
}
let sayFullName = "My full name is " + fullName("Stefan", "Ungureanu");
console.log(sayFullName);
```
## Task 3
```
let temperature = (degrees) => {
if (degrees < 50) {
console.log("Put on a coat!");
}
}
temperature(25);
```
## Task 3 - Second Part
```
let temperature = (degrees) => {
if (degrees < 50 && degrees > 30) {
console.log("Put on a coat!");
}
else if (degrees < 30) {
console.log("Put on a coat and a hat!");
}
else if (degrees < 0) {
console.log("Stay inside!");
}
else {
console.log("Pants and vest is fine");
}
temperature(55);
}
```
| 14.807229 | 71 | 0.576892 | eng_Latn | 0.741442 |
dc844f5be806e4a2ecb08ac77814033c63d4da07 | 9,609 | md | Markdown | README.md | johnwschroeder/PlantMicrobeSimulation | c83fd4c4b8e801969bc43be60d6e27773a77d7f3 | [
"MIT"
] | 1 | 2021-04-19T22:29:23.000Z | 2021-04-19T22:29:23.000Z | README.md | johnwschroeder/PlantMicrobeSimulation | c83fd4c4b8e801969bc43be60d6e27773a77d7f3 | [
"MIT"
] | null | null | null | README.md | johnwschroeder/PlantMicrobeSimulation | c83fd4c4b8e801969bc43be60d6e27773a77d7f3 | [
"MIT"
] | 2 | 2020-03-06T16:20:43.000Z | 2020-07-23T06:57:17.000Z | README
================
John Schroeder
4/5/2020
# Introduction
The following R project includes code necessary to run the simulation
for the manuscript entitled “Mutualist and pathogen traits interact to
affect plant community structure in a spatially explicit model.” All
functions necessary to run the simulation are included in
Simulation\_functions.R. We also include results from 16K simulations
runs using random parameter values in
./RunFiles/sim\_results\_random.RData. Simulation results from the
step-wise optimization are stored in files labeled
./RunFiles/sim\_opt\_step\*.RData. This project is archived under:
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3735452.svg)](https://doi.org/10.5281/zenodo.3735452)
The following R packages are required to run the simulation
``` r
library(doSNOW) #For parallelizing simulation runs
library(randomForest) #For conducting random forest analyses between batches of runs (used for optimization)
library(abind) #For binding lists of results
library(poweRlaw) #For generating dispersal kernels
library(vegan)
library(dplyr) #For plotting
library(plotrix) #For plotting
library(ggplot2) #For plotting
library(RColorBrewer) #For plotting
```
Source code for the simulation functions and plotting functions:
``` r
source("./Simulation_functions.R")
source("./Simulation_plotting_functions.R")
```
The following code chunks conduct an example simulation run using the
parameter values presented in Table 1 of Schroeder et al. (2020, Nature
Communications)
``` r
numcores <- 4 #Define the number of cores to use to simultaneously run multiple simulations
cl <- parallel::makeCluster(numcores) #Create cluster
registerDoSNOW(cl) #Register cluster
```
Define parameter values and settings for simulation runs (this example
runs simulations with identical parameter values four times):
``` r
load("./RunFiles/position.RData")
number.of.runs <- 4
particle.positions <- as.data.frame(position[rep(1,number.of.runs),])
tree.species <- 5
trees.in.forest <- 499
mort <- 0.1
mortality.replacement.steps <- 3000
eliminate.feedback <- c(rep(FALSE,number.of.runs))
fix.feedback <- c(rep(FALSE,number.of.runs))
remove.mutualists <- c(rep(FALSE,number.of.runs))
remove.pathogens <- c(rep(FALSE,number.of.runs))
f.vals <- t(sapply(particle.positions[,4],function(x) {(c(1:tree.species-1)/(tree.species-1))*(1-x)+x}))
pb <- txtProgressBar(max = number.of.runs, style = 3)
```
## | | | 0%
``` r
progress <- function(n) setTxtProgressBar(pb, n)
opts <- list(progress = progress)
```
Run multiple simulations using the ‘dopar’ function, and store the
output in a list called simulation.results:
``` r
simulation.results <- foreach(g = particle.positions$g,
h = particle.positions$h,
b.t = particle.positions$b.t,
s.m = particle.positions$s.m,
b.m = particle.positions$b.m,
alpha.m = particle.positions$alpha.m,
gamma.m = particle.positions$gamma.m,
r.m = particle.positions$r.m,
q.m = particle.positions$q.m,
c.m = particle.positions$c.m,
s.p = particle.positions$s.p,
b.p = particle.positions$b.p,
alpha.p = particle.positions$alpha.p,
gamma.p =particle.positions$gamma.p,
r.p = particle.positions$r.p,
q.p = particle.positions$q.p,
c.p = particle.positions$c.p,
fix.feedback = fix.feedback,
eliminate.feedback = eliminate.feedback,
remove.mutualists = remove.mutualists,
remove.pathogens = remove.pathogens,
index = c(1:number.of.runs),
.packages = c("poweRlaw","vegan"), .options.snow = opts) %dopar%
(
psf.simulation(m = trees.in.forest,
mort = mort,
tree.species = tree.species,
mutualist.species.per.tree = 1,
pathogen.species.per.tree = 1,
time.steps = mortality.replacement.steps,
mutualist.effect.function.consp = mutualist.effect.function.consp,
mutualist.effect.function.heterosp = mutualist.effect.function.heterosp,
pathogen.effect.function.consp = pathogen.effect.function.consp,
pathogen.effect.function.heterosp = pathogen.effect.function.heterosp,
g = g,
h = h,
s.p = s.p,
s.m = s.m,
b.p = b.p,
b.m = b.m,
b.t = b.t,
f.vals = rev(f.vals[index,]),
index = index,
gamma.m = gamma.m,
gamma.p = gamma.p,
r.m = r.m,
r.p = r.p,
q.m = q.m,
q.p = q.p,
c.m = c.m,
c.p = c.p,
alpha.m = alpha.m,
alpha.p = alpha.p,
fix.feedback = fix.feedback,
eliminate.feedback = eliminate.feedback,
remove.mutualists = remove.mutualists,
remove.pathogens = remove.pathogens,
initiate.forest.matrix = initiate.forest.matrix,
initiate.fungal.matrix = initiate.fungal.matrix,
trial.function = trial.function,
dispersal.function = dpldis,
microbe.dispersal.function = dpldis,
track.over.time = TRUE))
```
## | |================== | 25% | |=================================== | 50% | |==================================================== | 75% | |======================================================================| 100%
Plot results analogous to those presented in Figure 1a and 1b (these
will likely be noisy with just 4 simulation
runs)
``` r
mutualist.spatial.patterns <- calculate.microbes.through.space(modelOutput = simulation.results, mu.or.pa = "mu",indices = c(1:4),ncells=5)
pathogen.spatial.patterns <- calculate.microbes.through.space(modelOutput = simulation.results, mu.or.pa = "pa",indices = c(1:4), ncells=5)
survival.spatial.patterns <- plot.survival.through.space(simulation.results,indices = c(1:4),ncells = 5,fitness.dif=FALSE)
fungi.abund.plot <- plot.microbes.through.space(mutualist.spatial.patterns,pathogen.spatial.patterns,survival.spatial.patterns)
fungi.abund.plot
```
![](README_files/figure-gfm/unnamed-chunk-6-1.png)<!-- --> Plot results
analogous to those in Fig 1c
``` r
PSF.strength <- foreach(index=c(1:4)) %do% ( #Run in sequence
measure.PSF.strength(modelOutput = simulation.results[[index]],dpldis))
psf.plot <- plot.feedback.per.species(cndd.strength = PSF.strength,simulation.results = simulation.results,index=c(1:4))
psf.plot
```
![](README_files/figure-gfm/unnamed-chunk-7-1.png)<!-- --> Plot results
analogous to those in 2a. These use average results from one simulation,
but results do not vary much from simulation to
simulation
``` r
mutualists.over.time <- calculate.microbes.over.time.single.tree(simulation.results,"mu",time.steps=50,step.range=c(50:3000),indices = c(1:4))
pathogens.over.time <- calculate.microbes.over.time.single.tree(simulation.results,"pa",time.steps=50,step.range=c(50:3000), indices = c(1:4))
microbes.over.time <- plot.microbes.over.time(mutualists.over.time,pathogens.over.time)
microbes.over.time
```
![](README_files/figure-gfm/unnamed-chunk-8-1.png)<!-- -->
Plot results analogous to those in
2b
``` r
survival.over.time <- survival.over.time.single.tree(simulation.results,time.steps=50,step.range=c(50:3000),indices=c(1:4))
survival.over.time
```
![](README_files/figure-gfm/unnamed-chunk-9-1.png)<!-- --> Plot results
analogous to those in Fig 3a. Here, we display population dynamics from
the start of the simulation. There is no change in plant abundances for
the first 50 time steps because it represents a conditioning phase after
which we measure
PSF.
``` r
tree.abundances <- plot.tree.abundances.over.time(simulation.results,step.range=c(1:3000),indices = c(1:4),xlim=c(0,16))
tree.abundances
```
![](README_files/figure-gfm/unnamed-chunk-10-1.png)<!-- -->
| 46.645631 | 639 | 0.545114 | eng_Latn | 0.688768 |
dc84d5305d666b4718eee73dcdd3baa804ea64ba | 333 | md | Markdown | README.md | shivasps/my-website | 8489a156b99dbc9f31605ab85d770359e1b8f546 | [
"Apache-2.0"
] | null | null | null | README.md | shivasps/my-website | 8489a156b99dbc9f31605ab85d770359e1b8f546 | [
"Apache-2.0"
] | 3 | 2018-11-18T05:38:53.000Z | 2018-11-25T02:49:56.000Z | README.md | shivasps/my-website | 8489a156b99dbc9f31605ab85d770359e1b8f546 | [
"Apache-2.0"
] | null | null | null | # my-website
This is really awesome website
Updates made on master on GitHub before rebase
Repository Purpose
This file is just a readme file.
## Purpose
This purpose of the file is to provide examples
on how to use Git and GitHub together.
## Getting Started
to get started with this project, just `clone` this repository.
| 16.65 | 63 | 0.768769 | eng_Latn | 0.999952 |
dc84dc7a206fec303f0b2d31b22b71384e47fd37 | 93 | md | Markdown | _includes/04-lists.md | luizfrodrigues/markdown-portfolio | 180543a6501f94c4f416da4ac724a69f6230dc44 | [
"MIT"
] | null | null | null | _includes/04-lists.md | luizfrodrigues/markdown-portfolio | 180543a6501f94c4f416da4ac724a69f6230dc44 | [
"MIT"
] | 5 | 2021-02-14T19:21:53.000Z | 2021-02-16T02:44:10.000Z | _includes/04-lists.md | luizfrodrigues/markdown-portfolio | 180543a6501f94c4f416da4ac724a69f6230dc44 | [
"MIT"
] | null | null | null | :zap: Things I love to do :zap:
* Cooking
* Reading.
* Coding.
* Playing some mobile games.
| 13.285714 | 31 | 0.677419 | eng_Latn | 0.87342 |
dc862ef64c3ef61cde7863b18a7ca9ab01986b57 | 2,769 | md | Markdown | _posts/2021-09-02-marketwatch-newssentiment-sep-02.md | JacobJinwonLee/JacobJinwonLee.github.io | f43c6604146358e8029906a6b07139a7f469163c | [
"MIT"
] | null | null | null | _posts/2021-09-02-marketwatch-newssentiment-sep-02.md | JacobJinwonLee/JacobJinwonLee.github.io | f43c6604146358e8029906a6b07139a7f469163c | [
"MIT"
] | null | null | null | _posts/2021-09-02-marketwatch-newssentiment-sep-02.md | JacobJinwonLee/JacobJinwonLee.github.io | f43c6604146358e8029906a6b07139a7f469163c | [
"MIT"
] | null | null | null | ---
layout: post
title: "[NewsSentiment] News Sentiment Analysis - Sep 02"
subtitle: "News Sentiment 20210902"
categories: marketwatch
tags: newssentiment
comments: true
---
## 개요
> News Sentiment Analysis 결과를 보고, 해석해 봅니다.
수집했던 뉴스 text 데이터로 sentiment analysis를 수행했고, 결과를 보겠습니다. 이동 평균은 5일, 20일 이동평균을 표시합니다. 데이터가 어느 정도 쌓여 60일 이동평균도 추가하게 되었습니다. 글의 작성 일자는 9월 2일이지만, 9월 2일 미국 시장 개장 전에 9월 1일까지의 S&P 500 종가와 9월 1일까지의 영문 뉴스 기사 text 데이터를 취합한 결과입니다.
이전에 작성한 News Sentiment Project 첫 글에서 News Sentiment Index (NSI)는 뉴스 기사들이 긍정적인가 부정적인가를 계산한 후 여러 가지 기법을 적용해 산출한 것이므로, S&P 500 지수를 선행하거나 추종한다고 할 수는 없다고 했습니다. 그러나, 그때그때의 시황을 참고하여 고려하는 목적으로는 충분히 유용한 지표라고 생각합니다. News Sentiment Index (NSI)도 일종의 심리 지수로 볼 수 있으므로, 데이터가 충분히 쌓인다면 Michigan Consumer Sentiment, Conference Board Consumer Confidence와도 비교해 볼 것입니다.
우선 각 날짜별 데이터인 1일 데이터는 변동이 다소 거친 편입니다. 주말이 끼어 있으면 직전 거래일 S&P 500 종가를 그대로 사용합니다. 5일 이동평균 데이터는 지수 이동평균 방식으로 구하여 최근의 것에 더 높은 가중치를 주었습니다. 1일 데이터보다 움직임이 덜 거친 편입니다. 시장 심리 변화를 보기 위해 진행한 프로젝트이고, 심리 변화가 주가 지수에 직결되지는 않기 때문에 주가 지수와 연계된 해석이 주가 지수를 예측한다고 보장할 수 없습니다.
백신 접종 후 많이 잤는데도 아직도 피곤합니다. 3일째인데 언제까지 가려나 모르겠습니다. 유럽은 홍콩, 중국 시장이 강해서였는지 상승으로 시작했습니다. 그런데 독일 DAX는 그 자리가 당일 고가가 되어버렸고, 결국 어제와 별 차이 없이 끝났습니다. EURO STOXX는 독일과 달리 초반보다 밀리기는 했지만 그럭저럭 상승으로 끝났습니다.
미국은 한국 시간 1일 밤 9시 15분에 ADP 고용 수치가 나왔는데, 지난달과 비슷합니다. 또 쇼크입니다. Consensus 62.5만, 실제 값 37.4만인데, 지난달보다는 실제 값이 좋고 지난달에도 비슷한 격차의 쇼크였기 때문에 민간인 ADP 말고 미 정부 발표 수치까지 보아야 합니다. 10시 45분에는 제조업 PMI가 나왔고, 61.1로 consensus와 별 차이 없었습니다. 50 이상이니 상황 좋습니다. 11시에는 ISM 제조업 지수가 나왔는데, consensus 58.6을 넘는 59.9로 잘 나왔습니다. ADP 고용 발표로 유로, 엔, 파운드가 달러 대비 강해 달러는 하락이었고, 국채는 잠깐 잘 가다가 제조업 PMI와 ISM 제조업 지수 이후 흐지부지 되었습니다.
원유는 러시아의 노박 부총리가 OPEC+가 어떻게 하는지에 관계 없이 증산할 수 있다고 해 버리자 급락했습니다. 그러다가 OPEC+가 결국 감산 수준을 유지하는 것으로 합의했다는 뉴스가 나오고 다시 68달러 대로 올라와 끝났습니다.
나스닥은 AAPL, AMZN, NVDA 등이 올라가면서 초반에 밀어올렸는데, 그 후 장기 횡보하다가 (한국 시간 2일 오전 3시-3시 30분 정도였던 것 같습니다) 올라온 것을 다 뱉어내고 끝났습니다. 은행주, 특히 WFC가 5% 가까이 내려가면서 하락을 보이고, 원유 관련 주식도 하락으로 전체적으로 상승 동력이 없는 모습입니다. VIX는 여전히 16, 지수 등락은 나스닥 기준 하루 0.3% 수준으로 VIX를 감안하면 한참 낮은 변동을 보이고 있습니다. 변동성이 커질 무언가가 있어야 횡보가 끝나고 방향을 정할 것 같습니다.
![newplot (3)](https://user-images.githubusercontent.com/54884755/131890865-6f53911d-45cb-44bc-85f1-5cd27d41b242.png)
![newplot (4)](https://user-images.githubusercontent.com/54884755/131890891-6465ed2a-7908-4f61-a5cb-af6284f1ac0b.png)
![newplot (5)](https://user-images.githubusercontent.com/54884755/131890887-9ef6c728-718d-43da-8eca-d659814699ac.png)
![newplot (6)](https://user-images.githubusercontent.com/54884755/131890884-386dcca8-4063-4f07-8db4-bc099beeffbc.png)
![newplot (7)](https://user-images.githubusercontent.com/54884755/131890879-de1af58b-d214-4383-9b56-b2a9a0a1a97a.png)
![newplot (8)](https://user-images.githubusercontent.com/54884755/131890873-c10831a4-174b-46a0-95ba-a264efbb4ca6.png)
| 81.441176 | 382 | 0.749729 | kor_Hang | 1.00001 |
dc86584e292fda84b6ffa2ac26d9bea6cf02a425 | 801 | md | Markdown | chapter8-vae/README.md | gabrielmahia/obamAI | ba45f0a6efae793d7f5e356a1dbf5c6835a65dba | [
"MIT"
] | 1,291 | 2018-03-30T07:42:07.000Z | 2022-03-31T20:27:55.000Z | chapter8-vae/README.md | gabrielmahia/obamAI | ba45f0a6efae793d7f5e356a1dbf5c6835a65dba | [
"MIT"
] | 18 | 2019-01-01T16:50:28.000Z | 2022-03-31T17:58:31.000Z | chapter8-vae/README.md | gabrielmahia/obamAI | ba45f0a6efae793d7f5e356a1dbf5c6835a65dba | [
"MIT"
] | 798 | 2018-03-26T01:01:47.000Z | 2022-03-31T06:33:07.000Z |
## Chapter 8 - Variational Autoencoders
![Figure 8.1.6](images/vae_mean_mlp.png)
Figure 8.1.6 Latent vector mean values for test dataset (VAE MLP). The colorbar shows the corresponding MNIST digit as a function of z.
![Figure 8.1.11](images/vae_mean.png)
Figure 8.1.11 Latent vector mean values for test dataset (VAE CNN). The colorbar shows the corresponding MNIST digit as a function of z.
![Figure 8.2.4](images/cvae_mean.png)
Figure 8.2.4 Latent vector mean values for test dataset (CVAE CNN). The colorbar shows the corresponding MNIST digit as a function of z.
![Figure 8.3.1](images/beta9_cvae.png)
Figures 8.3.1 Latent vector mean values for test dataset (β-VAE with β=9)
![Figure 8.3.2](images/beta10_cvae.png)
igures 8.3.1 Latent vector mean values for test dataset (β-VAE with β=10)
| 44.5 | 136 | 0.757803 | eng_Latn | 0.674709 |
dc86bb608a51697a8217af3448b95c7a07be7fd0 | 90 | md | Markdown | README.md | vonzo/WhosApp | deaf5e5980208f9de73c8191d34217b46902d942 | [
"MIT"
] | null | null | null | README.md | vonzo/WhosApp | deaf5e5980208f9de73c8191d34217b46902d942 | [
"MIT"
] | null | null | null | README.md | vonzo/WhosApp | deaf5e5980208f9de73c8191d34217b46902d942 | [
"MIT"
] | null | null | null | # WhosApp
Takes a WhatsApp conversation file as input and gives cool statistics as output
| 30 | 79 | 0.822222 | eng_Latn | 0.996004 |
dc87138b180d103e0ad902f5bcb6808e6d5f2bc4 | 17,260 | md | Markdown | articles/data-factory/v1/data-factory-api-change-log.md | sonquer/azure-docs.pl-pl | d8159cf8e870e807bd64e58188d281461b291ea8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/data-factory/v1/data-factory-api-change-log.md | sonquer/azure-docs.pl-pl | d8159cf8e870e807bd64e58188d281461b291ea8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/data-factory/v1/data-factory-api-change-log.md | sonquer/azure-docs.pl-pl | d8159cf8e870e807bd64e58188d281461b291ea8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Data Factory — dziennik zmian interfejsu API platformy .NET
description: Opisuje istotne zmiany, Dodatki funkcji, poprawki błędów i tak dalej, w określonej wersji interfejsu API platformy .NET dla Azure Data Factory.
services: data-factory
documentationcenter: ''
author: djpmsft
ms.author: daperlov
manager: jroth
ms.reviewer: maghan
ms.service: data-factory
ms.workload: data-services
ms.topic: conceptual
robots: noindex
ms.date: 01/22/2018
ms.openlocfilehash: dbbbdebdcf1db7afe485166f5744f2291b757d50
ms.sourcegitcommit: 5ab4f7a81d04a58f235071240718dfae3f1b370b
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 12/10/2019
ms.locfileid: "74979006"
---
# <a name="azure-data-factory---net-api-change-log"></a>Azure Data Factory — dziennik zmian interfejsu API platformy .NET
> [!NOTE]
> Ten artykuł dotyczy wersji 1 usługi Data Factory.
Ten artykuł zawiera informacje o zmianach w Azure Data Factory SDK w określonej wersji. Najnowszy pakiet NuGet dla Azure Data Factory można znaleźć [tutaj](https://www.nuget.org/packages/Microsoft.Azure.Management.DataFactories)
## <a name="version-4110"></a>4\.11.0 wersja
Dodatki do funkcji:
* Dodano następujące typy połączonych usług:
* [OnPremisesMongoDbLinkedService](https://msdn.microsoft.com/library/mt765129.aspx)
* [AmazonRedshiftLinkedService](https://msdn.microsoft.com/library/mt765121.aspx)
* [AwsAccessKeyLinkedService](https://msdn.microsoft.com/library/mt765144.aspx)
* Dodano następujące typy zestawów danych:
* [MongoDbCollectionDataset](https://msdn.microsoft.com/library/mt765145.aspx)
* [AmazonS3Dataset](https://msdn.microsoft.com/library/mt765112.aspx)
* Dodano następujące typy źródeł kopiowania:
* [MongoDbSource](https://msdn.microsoft.com/library/mt765123.aspx)
## <a name="version-4100"></a>4\.10.0 wersja
* Następujące opcjonalne właściwości zostały dodane do formatu TextFormat:
* [SkipLineCount](https://msdn.microsoft.com/library/azure/microsoft.azure.management.datafactories.models.textformat.skiplinecount.aspx)
* [FirstRowAsHeader](https://msdn.microsoft.com/library/azure/microsoft.azure.management.datafactories.models.textformat.firstrowasheader.aspx)
* [TreatEmptyAsNull](https://msdn.microsoft.com/library/azure/microsoft.azure.management.datafactories.models.textformat.treatemptyasnull.aspx)
* Dodano następujące typy połączonych usług:
* [OnPremisesCassandraLinkedService](https://msdn.microsoft.com/library/azure/microsoft.azure.management.datafactories.models.onpremisescassandralinkedservice.aspx)
* [SalesforceLinkedService](https://msdn.microsoft.com/library/azure/microsoft.azure.management.datafactories.models.salesforcelinkedservice.aspx)
* Dodano następujące typy zestawów danych:
* [OnPremisesCassandraTableDataset](https://msdn.microsoft.com/library/azure/microsoft.azure.management.datafactories.models.onpremisescassandratabledataset.aspx)
* Dodano następujące typy źródeł kopiowania:
* [CassandraSource](https://msdn.microsoft.com/library/azure/microsoft.azure.management.datafactories.models.cassandrasource.aspx)
* Dodaj właściwość [WebServiceInputs](https://msdn.microsoft.com/library/azure/microsoft.azure.management.datafactories.models.azuremlbatchexecutionactivity.webserviceinputs.aspx) do AzureMLBatchExecutionActivity
* Włącz przekazywanie wielu danych wejściowych usługi sieci Web do Azure Machine Learning eksperymentu
## <a name="version-491"></a>4\.9.1 wersja
### <a name="bug-fix"></a>Poprawka błędu
* Zaniechanie uwierzytelniania opartego na WebApi dla [WebLinkedService](https://msdn.microsoft.com/library/azure/microsoft.azure.management.datafactories.models.weblinkedservice.authenticationtype.aspx).
## <a name="version-490"></a>4\.9.0 wersja
### <a name="feature-additions"></a>Dodatki funkcji
* Dodaj właściwości [EnableStaging](https://msdn.microsoft.com/library/mt767916.aspx) i [StagingSettings](https://msdn.microsoft.com/library/mt767918.aspx) do kopiowania. Aby uzyskać szczegółowe informacje na temat tej funkcji, zobacz [przygotowane kopie](data-factory-copy-activity-performance.md#staged-copy) .
### <a name="bug-fix"></a>Poprawka błędu
* Wprowadź Przeciążenie metody [ActivityWindowOperationExtensions. list](https://msdn.microsoft.com/library/mt767915.aspx) , która przyjmuje wystąpienie [ActivityWindowsByActivityListParameters](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.activitywindowsbyactivitylistparameters.aspx) .
* Oznacz [WriteBatchSize](https://msdn.microsoft.com/library/dn884293.aspx) i [WriteBatchTimeout](https://msdn.microsoft.com/library/dn884245.aspx) jako opcjonalne w CopySink.
## <a name="version-480"></a>4\.8.0 wersja
### <a name="feature-additions"></a>Dodatki funkcji
* Następujące opcjonalne właściwości zostały dodane do typu działania kopiowania w celu umożliwienia dostrajania wydajności kopiowania:
* [ParallelCopies](https://msdn.microsoft.com/library/mt767910.aspx)
* [CloudDataMovementUnits](https://msdn.microsoft.com/library/mt767912.aspx)
## <a name="version-470"></a>4\.7.0 wersja
### <a name="feature-additions"></a>Dodatki funkcji
* Dodano nowy typ StorageFormat typu [OrcFormat](https://msdn.microsoft.com/library/mt723391.aspx) do kopiowania plików w formacie zoptymalizowanego wiersza kolumnowy (Orc).
* Dodaj właściwości [AllowPolyBase](https://msdn.microsoft.com/library/mt723396.aspx) i PolyBaseSettings do SqlDWSink.
* Umożliwia korzystanie z bazy danych Base do kopiowania do SQL Data Warehouse.
## <a name="version-461"></a>Wersja 4.6.1
### <a name="bug-fixes"></a>Poprawki błędów
* Rozwiązuje żądanie HTTP dotyczące wyświetlania okna działania.
* Usuwa nazwę grupy zasobów i nazwę fabryki danych z ładunku żądania.
## <a name="version-460"></a>4\.6.0 wersja
### <a name="feature-additions"></a>Dodatki funkcji
* Następujące właściwości zostały dodane do [PipelineProperties](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.pipelineproperties_properties.aspx):
* [Potokmode](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.pipelineproperties.pipelinemode.aspx)
* [ExpirationTime](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.pipelineproperties.expirationtime.aspx)
* [Zestawy danych](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.pipelineproperties.datasets.aspx)
* Następujące właściwości zostały dodane do [PipelineRuntimeInfo](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.common.models.pipelineruntimeinfo.aspx):
* [PipelineState](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.common.models.pipelineruntimeinfo.pipelinestate.aspx)
* Dodano nowy typ [StorageFormat](https://msdn.microsoft.com/library/azure/microsoft.azure.management.datafactories.models.storageformat.aspx) typu [formatu jsonformat](https://msdn.microsoft.com/library/azure/microsoft.azure.management.datafactories.models.jsonformat.aspx) w celu zdefiniowania zestawów danych, których dane są w formacie JSON.
## <a name="version-450"></a>4\.5.0 wersja
### <a name="feature-additions"></a>Dodatki funkcji
* Dodano [operacje listy dla okna działania](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.activitywindowoperationsextensions.aspx).
* Dodano metody pobierania okien działania z filtrami w oparciu o typy jednostek (czyli fabryki danych, zbiory, potoki i działania).
* Dodano następujące typy połączonych usług:
* [ODataLinkedService](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.odatalinkedservice.aspx), [WebLinkedService](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.weblinkedservice.aspx)
* Dodano następujące typy zestawów danych:
* [ODataResourceDataset](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.odataresourcedataset.aspx), [WebTableDataset](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.webtabledataset.aspx)
* Dodano następujące typy źródeł kopiowania:
* [WebSource](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.websource.aspx)
## <a name="version-440"></a>4\.4.0 wersja
### <a name="feature-additions"></a>Dodatki funkcji
* Następujący typ połączonej usługi został dodany jako źródła danych i ujścia dla działań kopiowania:
* [AzureStorageSasLinkedService](https://msdn.microsoft.com/library/azure/microsoft.azure.management.datafactories.models.azurestoragesaslinkedservice.aspx). Aby uzyskać informacje o pojęciach i przykłady, zobacz [połączoną usługę Azure Storage SAS](data-factory-azure-blob-connector.md#azure-storage-sas-linked-service) .
## <a name="version-430"></a>4\.3.0 wersja
### <a name="feature-additions"></a>Dodatki funkcji
* Następujące typy połączonych usług Haven zostały dodane jako źródła danych dla działań kopiowania:
* [HdfsLinkedService](https://msdn.microsoft.com/library/azure/microsoft.azure.management.datafactories.models.hdfslinkedservice.aspx). Zobacz [przenoszenie danych z systemu plików HDFS przy użyciu Data Factory](data-factory-hdfs-connector.md) , aby uzyskać informacje koncepcyjne i przykłady.
* [OnPremisesOdbcLinkedService](https://msdn.microsoft.com/library/azure/microsoft.azure.management.datafactories.models.onpremisesodbclinkedservice.aspx). Zobacz [przenoszenie danych z magazynów danych ODBC przy użyciu Azure Data Factory](data-factory-odbc-connector.md) , aby uzyskać informacje koncepcyjne i przykłady.
## <a name="version-420"></a>4\.2.0 wersja
### <a name="feature-additions"></a>Dodatki funkcji
* Dodano następujący nowy typ działania: [AzureMLUpdateResourceActivity](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.azuremlupdateresourceactivity.aspx). Aby uzyskać szczegółowe informacje o działaniu, zobacz [aktualizowanie modeli usługi Azure ml przy użyciu działania Aktualizuj zasób](data-factory-azure-ml-batch-execution-activity.md).
* Do [klasy połączenie](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.azuremllinkedservice.aspx)dodano nową właściwość [Właściwości updateresourceendpoint](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.azuremllinkedservice.updateresourceendpoint.aspx) .
* Właściwości [LongRunningOperationInitialTimeout](https://msdn.microsoft.com/library/azure/microsoft.azure.management.datafactories.datafactorymanagementclient.longrunningoperationinitialtimeout.aspx) i [LongRunningOperationRetryTimeout](https://msdn.microsoft.com/library/azure/microsoft.azure.management.datafactories.datafactorymanagementclient.longrunningoperationretrytimeout.aspx) zostały dodane do klasy [DataFactoryManagementClient](https://msdn.microsoft.com/library/azure/microsoft.azure.management.datafactories.datafactorymanagementclient.aspx) .
* Zezwalaj na konfigurowanie limitów czasu dla wywołań klientów do usługi Data Factory.
## <a name="version-410"></a>4\.1.0 wersja
### <a name="feature-additions"></a>Dodatki funkcji
* Dodano następujące typy połączonych usług:
* [AzureDataLakeStoreLinkedService](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.azuredatalakestorelinkedservice.aspx)
* [AzureDataLakeAnalyticsLinkedService](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.azuredatalakeanalyticslinkedservice.aspx)
* Dodano następujące typy działań:
* [DataLakeAnalyticsUSQLActivity](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.datalakeanalyticsusqlactivity.aspx)
* Dodano następujące typy zestawów danych:
* [AzureDataLakeStoreDataset](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.azuredatalakestoredataset.aspx)
* Dodano następujące typy źródła i ujścia dla działania kopiowania:
* [AzureDataLakeStoreSource](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.azuredatalakestoresource.aspx)
* [AzureDataLakeStoreSink](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.azuredatalakestoresink.aspx)
## <a name="version-401"></a>4\.0.1 wersja
### <a name="breaking-changes"></a>Zmiany powodujące niezgodność
Zmieniono nazwy następujących klas. Nowe nazwy były oryginalnymi nazwami klas przed wydaniem 4.0.0.
| Nazwa w 4.0.0 | Nazwa w 4.0.1 |
|:--- |:--- |
| AzureSqlDataWarehouseDataset |[AzureSqlDataWarehouseTableDataset](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.azuresqldatawarehousetabledataset.aspx) |
| AzureSqlDataset |[AzureSqlTableDataset](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.azuresqltabledataset.aspx) |
| AzureDataset |[AzureTableDataset](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.azuretabledataset.aspx) |
| OracleDataset |[OracleTableDataset](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.oracletabledataset.aspx) |
| RelationalDataset |[RelationalTableDataset](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.relationaltabledataset.aspx) |
| SqlServerDataset |[SqlServerTableDataset](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.sqlservertabledataset.aspx) |
## <a name="version-400"></a>Wersja 4.0.0
### <a name="breaking-changes"></a>Zmiany powodujące niezgodność
* Nazwy następujących klas/interfejsów zostały zmienione.
| Stara nazwa | Nowa nazwa |
|:--- |:--- |
| ITableOperations |[IDatasetOperations](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.idatasetoperations.aspx) |
| Tabela |[Zestawu](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.dataset.aspx) |
| TableProperties |[DatasetProperties](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.datasetproperties.aspx) |
| TableTypeProprerties |[DatasetTypeProperties](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.datasettypeproperties.aspx) |
| TableCreateOrUpdateParameters |[DatasetCreateOrUpdateParameters](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.datasetcreateorupdateparameters.aspx) |
| TableCreateOrUpdateResponse |[DatasetCreateOrUpdateResponse](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.datasetcreateorupdateresponse.aspx) |
| TableGetResponse |[DatasetGetResponse](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.datasetgetresponse.aspx) |
| TableListResponse |[DatasetListResponse](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.datasetlistresponse.aspx) |
| CreateOrUpdateWithRawJsonContentParameters |[DatasetCreateOrUpdateWithRawJsonContentParameters](https://msdn.microsoft.com/library/microsoft.azure.management.datafactories.models.datasetcreateorupdatewithrawjsoncontentparameters.aspx) |
* Metody **list** zwracają teraz wyniki stronicowania. Jeśli odpowiedź zawiera niepustą Właściwość **Nextlink** , aplikacja kliencka musi kontynuować pobieranie następnej strony do momentu zwrócenia wszystkich stron. Oto przykład:
```csharp
PipelineListResponse response = client.Pipelines.List("ResourceGroupName", "DataFactoryName");
var pipelines = new List<Pipeline>(response.Pipelines);
string nextLink = response.NextLink;
while (!string.IsNullOrEmpty(nextLink))
{
PipelineListResponse nextResponse = client.Pipelines.ListNext(nextLink);
pipelines.AddRange(nextResponse.Pipelines);
nextLink = nextResponse.NextLink;
}
```
* Interfejs API potoku **list** zwraca tylko podsumowanie potoku, a nie pełne szczegóły. Na przykład działania w podsumowaniu potoku zawierają tylko nazwę i typ.
### <a name="feature-additions"></a>Dodatki funkcji
* Klasa [SqlDWSink](https://msdn.microsoft.com/library/azure/microsoft.azure.management.datafactories.models.sqldwsink.aspx) obsługuje dwie nowe właściwości, **SliceIdentifierColumnName** i **SqlWriterCleanupScript**do obsługi kopiowania idempotentne do Azure SQL Data Warehouse. Zapoznaj się z artykułem [Azure SQL Data Warehouse](data-factory-azure-sql-data-warehouse-connector.md) , aby uzyskać szczegółowe informacje na temat tych właściwości.
* Teraz obsługujemy procedurę przechowywaną w odniesieniu do Azure SQL Database i Azure SQL Data Warehouse źródeł w ramach działania kopiowania. Klasy [sqlsource](https://msdn.microsoft.com/library/azure/microsoft.azure.management.datafactories.models.sqlsource.aspx) i [SqlDWSource](https://msdn.microsoft.com/library/azure/microsoft.azure.management.datafactories.models.sqldwsource.aspx) mają następujące właściwości: **SqlReaderStoredProcedureName** i **StoredProcedureParameters**. Aby uzyskać szczegółowe informacje o tych właściwościach, zobacz artykuły [Azure SQL Database](data-factory-azure-sql-connector.md#sqlsource) i [Azure SQL Data Warehouse](data-factory-azure-sql-data-warehouse-connector.md#sqldwsource) w witrynie Azure.com.
| 92.795699 | 745 | 0.813499 | pol_Latn | 0.824491 |
dc87779c49159e2b5be5fcbfa216963aa8bbc4ed | 162 | md | Markdown | README.md | zekaf/jetty-projects | 3747daa2e73c5df9cca941c41db016d9b5b6fd2a | [
"MIT"
] | 1 | 2016-10-28T17:54:53.000Z | 2016-10-28T17:54:53.000Z | README.md | zekaf/jetty-projects | 3747daa2e73c5df9cca941c41db016d9b5b6fd2a | [
"MIT"
] | null | null | null | README.md | zekaf/jetty-projects | 3747daa2e73c5df9cca941c41db016d9b5b6fd2a | [
"MIT"
] | 3 | 2016-10-07T19:06:17.000Z | 2017-01-17T16:14:42.000Z | # jetty-nodes
Shell scripts to help programming and test multiple nodes Jetty project using a single computer and Linux virtual network kernel devices (TUN/TAP)
| 40.5 | 146 | 0.814815 | eng_Latn | 0.980202 |
dc879296de1a7eee760aca6f1a890418417da1aa | 936 | md | Markdown | _posts/2021-08-05-My-First-Post.md | LaonMoon/laonmoon.github.io | 65ef3dec7a4bdff506239c4f7520b1b75daf9e14 | [
"MIT"
] | null | null | null | _posts/2021-08-05-My-First-Post.md | LaonMoon/laonmoon.github.io | 65ef3dec7a4bdff506239c4f7520b1b75daf9e14 | [
"MIT"
] | null | null | null | _posts/2021-08-05-My-First-Post.md | LaonMoon/laonmoon.github.io | 65ef3dec7a4bdff506239c4f7520b1b75daf9e14 | [
"MIT"
] | null | null | null | ---
title: My First Post
author: LaonMoon
date: 2021-08-05 01:16:00 +/-TTTT
categories: [Blogging, Memo]
tags: [] # TAG names should always be lowercase
pin: false
---
## Hello, World!
This is my first post on this blog. I'll record what I study.
## Why I start blog?
Actually I have other blogs, but I wanted the one created on github. Because I'm learning Programming!
Probably the post will have something to do with programming.
## What I want to do?
### Programming
- Django
- Flask
- Bootstrap
- Flutter
### Artificial Intelligence
- Deep Learning
- Machine Learning
- NLP(Natural Language Processing)
### Other things
- Improve English skill
- Learn German
- Use Drops, and DW learn
- Writing a novel
- Study stock
- read a classic
- Reading a book
- Learn guitar
## Wrap up
I think it would be better to leave the old posts for a while. It's because I'm not familiar with markdown yet.
+) I love this theme! | 21.272727 | 111 | 0.715812 | eng_Latn | 0.991787 |
dc894975771cd88ee5da42a7bcb677255b1a3462 | 179 | md | Markdown | README.md | bkornpob/axehelper | d89407f73f92e140a5cc9a76c643b9a8656e8b0f | [
"MIT"
] | null | null | null | README.md | bkornpob/axehelper | d89407f73f92e140a5cc9a76c643b9a8656e8b0f | [
"MIT"
] | null | null | null | README.md | bkornpob/axehelper | d89407f73f92e140a5cc9a76c643b9a8656e8b0f | [
"MIT"
] | null | null | null | # axehelper
axehelper is a python package to use with aXe. aXe operates on Python 2. However, most of the routines in this package also work with Python 3.
pip install axehelper
| 35.8 | 143 | 0.787709 | eng_Latn | 0.999896 |
dc8a71b6b55a74861e3363f43d584c76cfbcfc9b | 2,725 | md | Markdown | README.md | enquirer/prompt-google-form | 260cfbd1e6b558bb1ed17c47c9da4d7f150106d7 | [
"MIT"
] | 8 | 2020-01-08T20:46:40.000Z | 2020-03-12T09:24:02.000Z | README.md | enquirer/prompt-google-form | 260cfbd1e6b558bb1ed17c47c9da4d7f150106d7 | [
"MIT"
] | 5 | 2020-01-13T20:55:05.000Z | 2020-01-26T20:03:33.000Z | README.md | enquirer/prompt-google-form | 260cfbd1e6b558bb1ed17c47c9da4d7f150106d7 | [
"MIT"
] | null | null | null | # prompt-google-form [![NPM version](https://img.shields.io/npm/v/prompt-google-form.svg?style=flat)](https://www.npmjs.com/package/prompt-google-form) [![NPM monthly downloads](https://img.shields.io/npm/dm/prompt-google-form.svg?style=flat)](https://npmjs.org/package/prompt-google-form) [![NPM total downloads](https://img.shields.io/npm/dt/prompt-google-form.svg?style=flat)](https://npmjs.org/package/prompt-google-form)
> Enquirer prompt for filling in Google Forms.
Please consider following this project's author, [Aditya Vyas](https://github.com/adityavyas611), and consider starring the project to show your :heart: and support.
## Install
Install with [npm](https://www.npmjs.com/):
```sh
$ npm install --save prompt-google-form
```
## Usage
```js
const GoogleFormPrompt = require('prompt-google-form');
const prompt = new GoogleFormPrompt({
formId: '1FAIpQLSdniaX5nAjywbvnT9tQp1OTryh7148Lkl5LnvJV1mBOy1QXdA'
});
prompt.run()
.then(answer => {
console.log(answer);
})
.catch(err => {
console.error(err);
});
```
## About
<details>
<summary><strong>Contributing</strong></summary>
Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new).
Please read the [contributing guide](.github/contributing.md) for advice on opening issues, pull requests, and coding standards.
</details>
<details>
<summary><strong>Running Tests</strong></summary>
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
```sh
$ npm install && npm test
```
</details>
<details>
<summary><strong>Building docs</strong></summary>
_(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_
To generate the readme, run the following command:
```sh
$ npm install -g verbose/verb#dev verb-generate-readme && verb
```
</details>
### Contributors
| **Commits** | **Contributor** |
| --- | --- |
| 5 | [adityavyas611](https://github.com/adityavyas611) |
| 5 | [doowb](https://github.com/doowb) |
### Author
**Aditya Vyas**
* [GitHub Profile](https://github.com/adityavyas611)
* [Twitter Profile](https://twitter.com/cybertron611)
* [LinkedIn Profile](https://linkedin.com/in/aditya-vyas-25b370123/)
### License
Copyright © 2020, [Aditya Vyas](https://github.com/adityavyas611).
Released under the [MIT License](LICENSE).
***
_This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on January 14, 2020._ | 30.617978 | 425 | 0.721468 | eng_Latn | 0.550659 |
dc8aa1ef3bcc3cd75e28f07f41060c3c08c47cfb | 904 | md | Markdown | catalog/radiant/en-US_radiant-2nd-season.md | htron-dev/baka-db | cb6e907a5c53113275da271631698cd3b35c9589 | [
"MIT"
] | 3 | 2021-08-12T20:02:29.000Z | 2021-09-05T05:03:32.000Z | catalog/radiant/en-US_radiant-2nd-season.md | zzhenryquezz/baka-db | da8f54a87191a53a7fca54b0775b3c00f99d2531 | [
"MIT"
] | 8 | 2021-07-20T00:44:48.000Z | 2021-09-22T18:44:04.000Z | catalog/radiant/en-US_radiant-2nd-season.md | zzhenryquezz/baka-db | da8f54a87191a53a7fca54b0775b3c00f99d2531 | [
"MIT"
] | 2 | 2021-07-19T01:38:25.000Z | 2021-07-29T08:10:29.000Z | # Radiant 2nd Season
![radiant-2nd-season](https://cdn.myanimelist.net/images/anime/1974/108398.jpg)
- **type**: tv-serie
- **episodes**: 21
- **original-name**: ラディアン 第 2 シリーズ
- **start-date**: 2019-10-02
- **end-date**: 2019-10-02
- **opening-song**: "Naraku (ナラク)" by Halo at Yojouhan (Halo at 四畳半)
- **ending-song**: "Chittomo Shiranakatta (ちっとも知らなかった)" by NakamuraEmi
- **rating**: PG-13 - Teens 13 or older
## Tags
- action
- adventure
- magic
- fantasy
## Sinopse
Second season of Radiant.
## Links
- [My Anime list](https://myanimelist.net/anime/39355/Radiant_2nd_Season)
- [Official Site](http://www.nhk.or.jp/anime/radiant/)
- [AnimeDB](http://anidb.info/perl-bin/animedb.pl?show=anime&aid=14701)
- [AnimeNewsNetwork](http://www.animenewsnetwork.com/encyclopedia/anime.php?id=21877)
- [Wikipedia](https://en.wikipedia.org/wiki/Radiant_%28TV_series%29)
| 28.25 | 87 | 0.675885 | yue_Hant | 0.149033 |
dc8ab8a80bdc3479929f651141903f8ee59c2156 | 3,894 | md | Markdown | _posts/Study/OS/2021-01-12-주-기억-장치-관리.md | barking-code/barking-code.github.io | e016ec021a9407190bc701be0ed510a9aae31717 | [
"MIT"
] | null | null | null | _posts/Study/OS/2021-01-12-주-기억-장치-관리.md | barking-code/barking-code.github.io | e016ec021a9407190bc701be0ed510a9aae31717 | [
"MIT"
] | 1 | 2020-12-16T14:16:07.000Z | 2020-12-16T14:16:07.000Z | _posts/Study/OS/2021-01-12-주-기억-장치-관리.md | barking-code/barking-code.github.io | e016ec021a9407190bc701be0ed510a9aae31717 | [
"MIT"
] | null | null | null | ---
layout: post
title: "주 기억 장치(Memory) 관리"
categories:
- Study
- OS
tags:
- OS
created_at: 2021-01-12T22:43:00+09:00
modified_at: 2021-01-14T16:43:00+09:00
visible: true
---
## 주 기억 장치 관리란?
주 기억 장치(Memory)는 명령어나 데이터 저장를 저장하는 CPU가 직접 접근이 가능한 장치이다.
CPU를 빌려서 사용하는 프로세스가 수행되기 위해 필요한 정보들이 수시로 탑재되고 교체되기 때문에 주 기억 장치에 적절한 크기의 데이터를 적절할 위치에 탑재하고 적절한 시기에 교체하는 것이 중요하다.
### 주소 바인딩
메모리는 데이터가 저장되는 물리적인 공간을 지정할 수 있는 **주소**와 저장된 **데이터**로 구성되어 있다.
다음과 같은 3단계 과정을 거쳐 최종적으로 메모리의 주소 바인딩이 이루어진다
1. 소스 코드 컴파일 단계
* 프로그래머가 작성한 소스 코드를 컴퓨터가 인식할 수 있는 기계어로 번역(컴파일)한 **목적 코드로 변환**한다.
2. 프로그램에 필요한 라이브러리를 목적 코드에 Link
* 소스 코드 작성시 외부의 라이브러리를 통해 추가한 모듈을 Linker가 연결해 실행 가능한 코드(프로그램)를 생성한다.
* 이때 프로그램에서 사용하는 주소들을 재배치 가능한 주소에 바인딩한다(논리 주소 : 재배치 가능 주소)
3. 프로그램을 메모리에 Load
* 실행 가능한 프로그램이 Loader에 의해 된다.
* 이때 재배치 가능한 주소와 실제 메모리 주소를 바인딩한다.(논리 주소 : 재배치 가능 주소 : 실제 주소)
* 프로세스가 실행 될 때 마다 항상 같은 메모리 주소에 바인딩 된다는 보장이 없으므로 이러한 과정을 통해서 프로그램 작성 시 실제 주소를 신경 쓰지 않고 코드를 작성할 수 있도록 한다.
* Memory Management Unit(MMU)
* 대개 프로그램에서 사용하는 논리(가상) 주소는 실제 메모리의 크기보다 크고, 항상 같은 주소에 바인딩된다는 보장이 없으므로 논리 주소를 실제 주소와 바인딩해주는 중간과정이 필요하다.
* 이를 담당하는 것이 MMU이다.
### 동적 적재, 동적 연결
주소 바인딩 과정에서 중복되거나 불필요한 코드가 Load되어 메모리를 낭비하는 것을 방지하기 위한 기술이다
#### 동적 적재
* 프로그램을 독립적으로 실행할 수 있는 루틴 단위(if문, exception 등)로 분할하고 저장해놨다가 사용해야 할 상황이 오게되었을 때 메모리에 적재한다.
* if문 같은 경우 특정 상황에서만 내부 코드가 수행되기 때문에 일단 해당 루틴은 적재하지 않고 있다가 if문 내부 코드가 수행되어야 하는 상황이 오면 저장되어있는 루틴을 찾아 메모리에 적재하고 실행한다.
#### 동적 연결
* 동일한 라이브러리를 사용하는 프로그램이 동시에 메모리에 올라가는 경우 똑같은 코드가 두번 적재되는 일이 발생한다.
* 기존에는 컴파일 완료 후 Link과정을 수행했다면 동적 연결은 프로그램 실행 과정에서 Link과정을 수행한다.
* 이 경우 이미 메모리에 적재된 라이브러리가 존재하면 추가로 Load하지 않고 이미 탑재된 라이브러리에 Link하여 메모리 공간을 절약할 수 있다.
### Overlay
* 메모리보다 크기가 큰 용량의 프로그램을 적재하기 위해 생긴 기술이다.
* 프로그램을 메모리보다 작은 크기로 분할하여 저장하고 프로세스의 실행에 꼭 필요한 부분만 메모리에 적재하여 실행하는 방식이다.
* 프로그램을 쪼개는 과정을 프로그래머가 수행하기 때문에 많은 시간이 소비된다.
### Swapping
* 현재 수행중이지 않은 프로세스의 정보들을 보조기억장치로 내리고 실행하고자 하는 프로세스의 정보들 메모리에 적재하는 것 Swapping이라고 한다.
* 과거에는 프로그램 코드 전체를 Swap-in 해서 프로세스를 진행하고 프로세스 교체시 Swap-out해 다른 프로그램 코드를 적재했다.
* 현재는 프로그램 크기가 메모리에 비해 매우 커졌고
* 불필요한 코드가 같이 적재되어 Context Switching 시 자원낭비가 심하며
* 하나의 메모리에 다수의 프로세스가 접근하여 같이 사용하기 때문에 잘 사용되지 않음
* 페이징 기법으로 발전하였다.
---
## 연속 메모리 할당 기법
> 하나의 프로그램이 메모리 상에 하나의 덩어리(연속적)로 적재되는 방법
### 단일 연속 할당 기법
* 단일 사용자가 메모리를 전부 점유하여 사용하는 방식
* 한번에 1개의 프로그램 코드가 메모리에 적재된다
* 메모리의 크기보다 큰 프로그램을 구동 하지 못하거나 오버레이 기법을 사용해야 한다.
### 분할 연속 할당 기법
* 다중 프로그래밍 환경에서 호율적인 실행을 위해 다수의 프로그램이 메모리를 공유하여 사용해야함
* 운영체제가 실행되기 위해 필요한 메모리 공간을 제외한 나머지 메모리 공간을 분할하여 다수의 프로그램(프로세스)에 할당한다.
* 메모리 공간이 분할 된 수만큼 프로세스가 동시에 실행될 수 있다.
* 메모리 공간이 낭비되는 단편화 현상이 발생한다.
* 내부 단편화: 큰 메모리 공간에 작은 프로그램 코드가 할당되어 남는 공간
* 외부 단편화: 작은 메모리 공간에 큰 프로그램 코드가 할당되지 못해 분할된 메모리 공간 전체가 남는 공간
* 메모리 공간을 분할하는 방법에 따라 고정 분할 기법과 가변 분할 기법으로 나뉜다.
#### 고정 분할 기법
* 메모리의 공간을 미리 정해진 크기대로 분할하여 사용한다.
* 단편화 현상이 많이 발생한다.
#### 가변 분할 기법
* 프로그램이 요구하는 크기에 맞게 요구하는 시점에 메모리를 분할하여 할당하고, 사용하지 않는 메모리는 회수한다.
* 메모리 공간을 할당하는 방법에 의해 배치된다.
* 최초 적합 전략: 프로그램 코드가 탑재될 수 있는 크기의 메모리 공간을 만나면 탑재한다.
* 최적 적합 전략: 프로그램 코드가 탑재되기 가장 적절한 크기의 메모리 공간에 탑재한다.
* 최악 적합 전략: 가장 큰 메모리 공간에 프로그램 코드를 탑재한다.
* 메모리 할당과 회수가 계속 일어날 경우 할당 가능한 메모리 공간의 크기와 위치가 제각각이 되기때문에 적절한 관리가 필요하다.
* 서로 인접해 있는 분할 된 메모리 공간을 통합한다.
* 여러 곳에 분산되어 있는 사용중인 메모리 공간을 한 곳으로 모은다.
## 분산 메모리 할당 기법
> 하나의 프로그램이 분할된 메모리 공간 상에 흩어져서 적재되는 방법
### 페이징 기법
* 메모리를 **동일한 크기**의 **프레임**으로 분할하고 프로그램을 **프레임과 동일한 크기**의 **페이지**로 분할한다.
* 프로세스 실행에 필요한 페이지만 메모리에 적재하고 있다가 추가 페이지가 필요한 경우 메모리에 적재한다.
* 메모리 상에 연속적으로 적재되지않고 필요한 시점에 적재하기 때문에 적재된 지점의 메모리 주소를 알아야 프로세스에서 찾아쓸 수 있다.
* 따라서 페이지의 메모리 주소를 관리하고 있는 페이지 테이블이 프로세스마다 1개씩 존재한다.
* 페이지와 프레임의 크기가 동일하기 때문에 **외부단편화**가 발생하지 않는다.
* 하지만 프로그램의 크기가 페이지 분할크기와 딱 맞아 떨어지지 않아 **내부단편화**는 발생가능하다.(ex: 프로그램 크기 10MB, 페이지 크기 3MB인 경우 1MB크기의 페이지가 발생해 2MB의 내부단편화 발생 가능)
### 세그멘테이션 기법
* 메모리를 크기가 변할 수 있는 **세그먼트**로 분할하고 그 한도 내에서 프로그램 코드를 필요한 만큼 적재한다.
* 페이지 기법과 같이 혼용되어 프로세스 별로 세그먼트를 할당하여 메모리를 분할하고 세그먼트 내에서 페이징 기법으로 프로그램 코드를 적재한다.
* 메모리 -> 세그먼트로 분할, 세그먼트 -> 프레임으로 분할
| 27.814286 | 124 | 0.681561 | kor_Hang | 1.00001 |
dc8b14119a335579f7fc76042a2aad8148359022 | 1,122 | md | Markdown | README.md | flezzfx/php-ncd-classifier | c0f8ccae3cc14570fc15fc70d8db6f7528e8fc91 | [
"Apache-2.0"
] | null | null | null | README.md | flezzfx/php-ncd-classifier | c0f8ccae3cc14570fc15fc70d8db6f7528e8fc91 | [
"Apache-2.0"
] | null | null | null | README.md | flezzfx/php-ncd-classifier | c0f8ccae3cc14570fc15fc70d8db6f7528e8fc91 | [
"Apache-2.0"
] | null | null | null | PHP NCD Classifier
----
NCD means Normalized Compression Distance and describes a formula that compares the length of the compressed strings X and Y as well as a concat version of XY.
This classifier allows you to easily classify texts with PHP.
You can use the classifier with simply require the file.
```php
require 'ncd.class.php';
```
After that create a new NCD object.
```php
$c = new NCD();
```
After that you can as much examples from the past as you want.
Add them with their specific label.
```php
$c->add(array(
"hi my name is alex, what about you?" => 'ok',
"are you hungy? what about some fries?" => 'ok',
"how are you?" => 'ok',
"buy viagra or dope" => 'spam',
"viagra spam drugs sex" => 'spam',
"buy drugs and have fun" => 'spam'
));
```
With compare method you then use the classifier as shown below.
```php
$c->compare('hi guy, how are you?');
$c->compare('buy viagra m0therfuck5r');
```
This method returns the label of the estimated text.
NCD works without statistics and uses just the compression distance of the input string in comparison to the other strings and their concatenation.
| 27.365854 | 159 | 0.712121 | eng_Latn | 0.999122 |
dc8b5f9192fa68d2fa3602735c93a7aeee83432e | 20 | md | Markdown | README.md | BlockWit/options-research | 0753efe584f1edd8b8ad1243880c3e0af302d0cb | [
"Apache-2.0"
] | null | null | null | README.md | BlockWit/options-research | 0753efe584f1edd8b8ad1243880c3e0af302d0cb | [
"Apache-2.0"
] | null | null | null | README.md | BlockWit/options-research | 0753efe584f1edd8b8ad1243880c3e0af302d0cb | [
"Apache-2.0"
] | null | null | null | # Options research
| 6.666667 | 18 | 0.75 | eng_Latn | 0.69346 |
dc8bb25955dfcd34f6c481276100e254888117c9 | 481 | md | Markdown | README.md | zbma/asp.net-core-bootstrap-ajax-modals-part-2 | 85c9d4dad6f31797cfcf8f1dc4c1796828a60bc4 | [
"MIT"
] | 7 | 2019-03-26T08:46:01.000Z | 2020-10-12T20:20:19.000Z | README.md | zbma/asp.net-core-bootstrap-ajax-modals-part-2 | 85c9d4dad6f31797cfcf8f1dc4c1796828a60bc4 | [
"MIT"
] | null | null | null | README.md | zbma/asp.net-core-bootstrap-ajax-modals-part-2 | 85c9d4dad6f31797cfcf8f1dc4c1796828a60bc4 | [
"MIT"
] | 2 | 2019-11-06T03:20:39.000Z | 2022-01-01T06:58:49.000Z | # Purpose
This repository contains source code for [ASP.NET Core AJAX modals with validation using Bootstrap (Part 2)](https://softdevpractice.com/blog/asp-net-core-ajax-modals-part-2/) tutorial.
Part 1 can but found at [ASP.NET Core AJAX modals with validation using Bootstrap](https://softdevpractice.com/blog/asp-net-core-mvc-ajax-modals).
# How to use
Open project in Visual Studio 2017 Community Edition (or better) and run.
# Licence
MIT check included `Licence` file.
| 34.357143 | 185 | 0.773389 | eng_Latn | 0.657892 |
dc8bcd8072620f50c9c2d703b2882d32f30e7f6e | 1,382 | md | Markdown | README.md | jfilter/mw-category-members | 97c2fa35a1ca06ef3f8cf7fae7914b9cca48e483 | [
"MIT"
] | 1 | 2021-08-06T08:26:46.000Z | 2021-08-06T08:26:46.000Z | README.md | jfilter/mw-category-members | 97c2fa35a1ca06ef3f8cf7fae7914b9cca48e483 | [
"MIT"
] | null | null | null | README.md | jfilter/mw-category-members | 97c2fa35a1ca06ef3f8cf7fae7914b9cca48e483 | [
"MIT"
] | null | null | null | # mw-category-members [![Build Status](https://travis-ci.com/jfilter/mw-category-members.svg?branch=master)](https://travis-ci.com/jfilter/mw-category-members) [![PyPI](https://img.shields.io/pypi/v/mw-category-members.svg)](https://pypi.org/project/mw-category-members/) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/mw-category-members.svg)](https://pypi.org/project/mw-category-members/)
Using MediaWiki's API: retrieve pages that belong to a given category
## Installation
`pip install mw_category_members`
## Usage
```python
import category_members
results = category_members.retrieve('Category:Presidents_of_the_United_States')
for r in results:
print(r['name'], r['link'])
```
### Arguments
```python
def retrieve(cat_name, mw_instance='https://en.wikipedia.org', types=['page', 'subcat', 'file'], clean_subcat_names=False):
```
- cat_name: Category name e.g. 'Category:Presidents_of_the_United_States'.
- mw_instance: Which MediaWiki instance to use (the URL 'origin'). Defaults to 'https://en.wikipedia.org'.
- types: Which types of pages to retrieve. Defaults to `['page', 'subcat', 'file']`.
- clean_subcat_names: If `True`, removes the e.g. 'Category:' prefix of the titles. Defaults to `False`.
### Returns
Array of pages where a page is a dictionary of `{'name': 'some name', 'link': 'some absolute link'}`.
## License
MIT.
| 37.351351 | 409 | 0.725036 | yue_Hant | 0.339767 |
dc8c22e97454dbd404deab7d225e3c96aa4f4ff7 | 89 | md | Markdown | 2019-01-25.md | vitroid/vitroid.github.io | cb4f06a4a4925a0e06a4001d3680be7998552b83 | [
"MIT"
] | null | null | null | 2019-01-25.md | vitroid/vitroid.github.io | cb4f06a4a4925a0e06a4001d3680be7998552b83 | [
"MIT"
] | 1 | 2020-02-12T02:46:21.000Z | 2020-02-12T02:46:21.000Z | 2019-01-25.md | vitroid/vitroid.github.io | cb4f06a4a4925a0e06a4001d3680be7998552b83 | [
"MIT"
] | null | null | null | ---
title: #2019-01-25
---
## Linked from
* [TruncatablePrimes](/TruncatablePrimes)
| 8.090909 | 41 | 0.640449 | eng_Latn | 0.482909 |
dc8cedc9174143814232d31399651733e09e8449 | 1,751 | md | Markdown | README.md | Aeyos/AngularPristineTemplate | 7ff0365345b59d9ea9244130ae6aaf10c33a6526 | [
"Artistic-2.0"
] | null | null | null | README.md | Aeyos/AngularPristineTemplate | 7ff0365345b59d9ea9244130ae6aaf10c33a6526 | [
"Artistic-2.0"
] | null | null | null | README.md | Aeyos/AngularPristineTemplate | 7ff0365345b59d9ea9244130ae6aaf10c33a6526 | [
"Artistic-2.0"
] | null | null | null | # Angular Pristine Template
|||
| --- | --- |
| __Author__ | Vinícius Negrão |
| __Email__ | vinicius.m.negrao@gmail.com |
| __License__ | Attribution 3.0 Unported (CC BY 3.0) |
| __Profession__ | Web Designer, Web Developer, Game Designer, Game Programmer, Graphics Artist |
| __Current Version__ | 1.0.6 |
# Changelog
#### 1.0.0
>First Commit
* Added base project
#### 1.0.1
>README updated
* README updated
#### 1.0.2
>File updates
* README updated
* install.sh added
* package.json added for npm install
#### 1.0.3
>Mass update
* SASS support added
* renamed partial index to home
* added svg logo
* added early svg paint directive
* incremented js_extension.js
* rewrote gulpfile.js
* added gulp modules
* added sass file to index.html
* added images folder app/img
* added sass and directives folder
#### 1.0.4
>Footer
* Page footer added
#### 1.0.5
>REST API
* REST API Service added with easy-to-setup routes and parameters
* svgObject directive debug lines removed
#### 1.0.6
>JSOV
* Javascript object viewer directive created
# Installation
Clone the project to a directory and pull the contents
####__Download npm, it's **VERY** important D:__
MacOS / Linux installation script:
Non-sudo
```
$ sudo npm install -g bower
$ sudo npm install -g gulp
$ sh install.sh
```
Sudo
```
$ sudo sh install.sh
```
### OR
If you don't have __bower__ yet...
```
$ sudo npm install -g bower
```
If you don't have __gulp__ yet...
```
$ sudo npm install -g gulp
```
Proceeding with the installation
```
$ npm install
$ bower install
```
To kickoff
```
$ gulp && gulp webserver
```
Open your browser and navigate to **localhost**
> Yesterday is history, tomorrow is a mistery, but today is a gift, that`s why it's called The Present. | 17.166667 | 103 | 0.697316 | eng_Latn | 0.876441 |
dc8e6b22a95d7f4204a09fc6e717db9ad854fce3 | 904 | md | Markdown | src/03-06-directed-versus-undirected-graphs.md | igorbotian/think-like-a-git | b3e1d520658db6d068ff445da37a998aad2e6550 | [
"CC0-1.0"
] | null | null | null | src/03-06-directed-versus-undirected-graphs.md | igorbotian/think-like-a-git | b3e1d520658db6d068ff445da37a998aad2e6550 | [
"CC0-1.0"
] | null | null | null | src/03-06-directed-versus-undirected-graphs.md | igorbotian/think-like-a-git | b3e1d520658db6d068ff445da37a998aad2e6550 | [
"CC0-1.0"
] | null | null | null | ## Directed Versus Undirected Graphs ##
One of the key distinctions people make between graphs is whether they are directed or undirected. I'll admit, when I see the phrase "undirected graph," I sometimes get a mental image of a subway system map just sitting there aimlessly on the couch while its parents ask when it's going to take responsibility and do something with its life...
![Undirected graph (no arrows)](undirected-graph.png)
...but that's just me. Really, all we're saying is whether the edges in a graph are bidirectional or not.
Most, but not all, graphs I've seen have only one kind of edge. There are a few cases where you might want to use both—for example, a street map might have undirected edges for two-way streets and directed edges for one-way streets—but that's the only example I can think of off the top of my head.
![Directed graph (note the arrows)](directed-graph.png)
| 75.333333 | 343 | 0.77323 | eng_Latn | 0.999922 |
dc8eead0bbc6d8e9db06594131320714c1cacfe4 | 15,346 | md | Markdown | README.md | gitpeut/GeleRadio | d67b731113b169799c4261e0d7c314f3ef37bea4 | [
"Apache-2.0"
] | 4 | 2021-03-31T13:56:38.000Z | 2022-02-24T17:27:18.000Z | README.md | gitpeut/SnowRadio | d67b731113b169799c4261e0d7c314f3ef37bea4 | [
"Apache-2.0"
] | null | null | null | README.md | gitpeut/SnowRadio | d67b731113b169799c4261e0d7c314f3ef37bea4 | [
"Apache-2.0"
] | null | null | null | # Snow Radio
(Снежное радио - Sneeuwradio)
<img src="Images/2/2021-03-31 01-21-37.JPG" >
</p>
Snow Radio is a logical extension of the previous Orange Radio project with a spectrum analyzer
and choice of filesystem and optional gesture, touchscreen control and openweathermap data.
User friendly web interface for easy addition of internet radio stations. There a no restrictions
which stations to listen to. Full support for http and https stations with stream and chunked transfer encoding.
Also artist and track information is collected if available and shown correctly both on the display and
in the web browser in most Western European languages and Russian.
Latency is low by design.
<p />
Snow Radio was greatly improved and extended due to the generous support, useful suggestions
and rigorous testing of Alexander Semenov (https://github.com/Semenov-Alexander)
<p />
Playlists ( .pls, .m3u, .dash etc.) are not supported.
No plans exist to support these, but this may change one day.
<p />
User friendly install of VS1053 plugins and patches, just upload the .plg
files to your SPIFFS/LittleFS/FFat flash disk.
<ul>
<li>plg files for patches can be downloaded from the VLSI website.
http://www.vlsi.fi/en/support/software/vs10xxpatches.html
</li>
<li>plg files for plugins ( like the spectrum analyzer plugin) can be downloaded here:
http://www.vlsi.fi/en/support/software/vs10xxplugins.html
</li>
<li> The VS10xx series website is very interesting for more ideas:
http://www.vlsi.fi/en/support/software.html
</li>
</ul>
<p />
An effort has been made to maximize use of PSRAM, if available.
<p />
Snow Radio uses the following libraries:
<ul>
<li>ArduinoJSON ( https://github.com/bblanchon/ArduinoJson.git )</li>
<li>Gesture_PAJ7620 ( https://github.com/Seeed-Studio/Gesture_PAJ7620 )</li>
<li>ESP_VS1053 ( https://github.com/baldram/ESP_VS1053_Library )</li>
<li>TFT_eSPI ( https://github.com/Bodmer/TFT_eSPI )</li>
<li> AsyncTCP (https://github.com/me-no-dev/AsyncTCP)</li>
<li>ESPAsyncWebServer (https://github.com/me-no-dev/ESPAsyncWebServer.git )</li>
<li>If you want to use LittleFS (recommended), LittleFS_ESP32 ( https://github.com/lorol/LITTLEFS )</li>
</ul>
<p />
Grateful use has been made of the spectrum analyzer code of blotfi at
https://github.com/blotfi/ESP32-Radio-with-Spectrum-analyzer
<p />
Honourable mentions should be made for the support and inspiration taken from:
<ul>
<li>https://github.com/Semenov-Alexander</li>
<li>https://github.com/Edzelf/ESP32-Radio</li>
<li>https://github.com/RalphBacon/205-Internet-Radio</li>
Thanks to all the authors.
</ul>
<h2>suggested use and benefits</h2>
You can use the suggested versions "as is" or use this as the basis for creating your project.
Snow Radio uses unique software solutions and solves many of the problems necessary to create this project.
We will be grateful for honouring the Apache license and mention this project as the origin for your own efforts.
The project we created allows you to create your own unique interface for Snow Radio.
The possibilities are limited only by your imagination.
Use all the features of Snow Radio, or just some that you need.
Creating a music center with gesture control and touch screen is now easy!
Use the LINE-IN and Bluetooth buttons to control the output levels of the control pins.
To manage external devices, you can use the MCP23017 port expander, for example.
<h2>pictures</h2>
Pictures of various incarnations of Snow Radio, including the ST7796_RU branch, can be found in the
<a href="Images">Images</a> subdirectory of this repository
<h2>hardware</h2>
An ESP32 with plenty of PSRAM is recommended. For the touchscreen an ILI9341 320x240 SPI (NOT parallel, you
will run out of pins) screen is used.
<p />
In branch ST7796_RU ( https://github.com/gitpeut/SnowRadio/tree/ST7796-RU )a highly modified version
is available that makes use of a 480x320 ST7796 display, and has some other unique features like
traffic information. All VS1053 breakouts should work ok, consult the 0pins.h file for a
possible way of connecting this hardware.
<p />
In the current version, a separate SPI bus is used for the VS1053 (VSPI) and the screen (HSPI).
This works but so far, there is no evidence this improves speed or reliability, so feel free
to operate both devices on the same SPI bus.
<h2>compiling</h2>
Should compile with the Arduino IDE, but make sure a file
wificredentials.h is available in folder wificredentials of your Arduino library folder.
(credits: Andreas Spiess https://www.youtube.com/channel/UCu7_D0o48KbfhpEohoP7YSQ )
This file should contain the following variables for SnowRadio.
<p />
<pre>
#ifndef WIFICREDENTIALS_H
#define WIFICREDENTIALS_H
// optional, necessary if LOADSSIDS is defined and used
const char* wifiSsid[] = {"yourssid", "yourotherssid","yourmotherssid", ""};
const char* wifiPassword[] = ("yourWiFipassword", "yourotherssidpassword","yourmotherssidpassword", ""} ;
// ntp
const char* ntpServers[] = { "nl.pool.ntp.org", "ru.pool.ntp.org", "de.pool.ntp.org"};
//Use a valid TZ string, docs e.g. https://www.di-mgt.com.au/src/wclocktz.ini
//
const char* ntpTimezone = "CET-1CEST,M3.5.0/2,M10.5.0/3";
// const char* ntpTimezone = "MSK-3"; // For Moscow, no daylight saving time Moscow
// Key and IV for G Radio used to encrypt wifi credentials for networks connected to by ESP touch
const char* gr_iv = "DummyValyeWIthallkindsofNUmbers&****ssss";
const char* gr_key = "OtherDummyValyeWIthallkindsof*&DGD";
#ifdef USEOWM
//openweathermap ( see https://openweathermap.org/current for docs )
const char *owm_id = "2756252";// 2756252 = eindhoven, nl. 532615 = moscow, ru
const char *owm_key = "123456789012345678901234567890"; // get your key at openweathermap.org
const char *owm_unit = "metric";
const char *owm_lang = "nl"; // en, ru....
#endif
#endif
</pre>
In SnowRadio.ino a number of defines enable you to turn on or off some features.
Generally they are called USE<option>. Be aware that not all possible combinations
have been tested (although many have) , and that a random combination of features may
show random and undefined behaviour.
<p />
Use "#define USE..." for to enable an option, "#undef USE..." to disable an option
Currently, the following options are defined:
<pre>
/*define or undef */
#define USEOWM // to enable Open weathermap owm_key,
// owm_id (city id), owm_lang (language),
//owm_unit (metric, imperial) in wificredentials.h
// should be defined as const char *, e.g.
// const char* owm_unit = "metric";
#undef USEOTA // disable Arduino OTA. http update is always on and works
// as well.
#define USETLS 1 // allow for https
#undef USEPIXELS // use pixels as an indicator for the gestures
#define USEGESTURES // Use the PAJ7620 gesture sensor
#undef MULTILEVELGESTURES // gestures as used in Oranje radio. Not very well tested or maintained
#define USETOUCH // use a touchscreen. Tested and developed with an ILI9341
#define USEINPUTSELECT // input selection between AV(LINE IN), BLUETOOTH and RADIO
// if undefined, volume buttons are displayed on the touchscreen, otherwise
// buttons to select BLUETOOTH and AV (LINE IN)
#define USESPECTRUM // install and run the spectrum patch as supplied by VLSI
// and gracefully adapted from the Web Radio of Blotfi
#define SHOWMETA // show meta data ( artist/track info in variable meta.metadata ) in the default
// define one of below 3 if metadata is desired
#undef METASPRITE // use a sprite to display metadata and scroll the sprite if text is too long
#define METAPOPUP // diplay long txt per 2 lines every ~10 seconds
#undef METASTATIC // display as much as possible, but overflow if too long // place under the station name.
#define USESPTOUCH 1 //use the ESP TOUCH phone app to connect when no known WiFi network is seen.
//for some this is very user friendly, for others it is a source of frustration.
#undef LOADSSIDS // If you want to preload the encrypted netpass file with ssid and passwords
// define LOADSSIDS. If not using ESPTOUCH, this will be a necessary step.
// For this to work, in wificredentials.h,
// WiFiNetworks are defined as follows, make sure the arrays have a "" as last element
// const char* wifiSsid[] = {"yourssid", "yourotherssid","yourmotherssid", ""};
// const char* wifiPassword[] = ("yourWiFipassword", "yourotherssidpassword","yourmotherssidpassword", ""} ;
// ATTENTION: After an encrypted /netpass file is generated, it is better to recompile without this option, the loading
// is not necessary anymore but more importantly, the credentials will remain readable in the compiled code.
//
</pre>
Compiling and uploading can be done in the Arduino IDE. Creating and populating the flash filesystem is required, and should be done using the
ESP32 Sketch data upload utility. Currently, this is the most feature-rich version: https://github.com/lorol/arduino-esp32fs-plugin,
it allows for SPIFFS, LittleFS and FFat filesystems.
<h2>faq</h2>
<ul>
<li>Touch calibration <br />
When attaching a touch display,touch calibration data for an other display will not work properly or not at all.
Open a browser and connect to URL snowradio.local, or to the ip address of Snow radio and click on button
"Delete touch calibration", then on the button "Reboot". When the Snow radio restarts, it ask you to touch
the 4 corners of the display. If done correctly, the display should work.
</li>
<li>Adding, changing or deleting stations <br />
Stations can be added on the Web page by pressing button "Add station". A valid url and a station name is required.
Changing or deleting a station can be done by pressing the button to the right of the station name in the stationlist.
There is a hardcoded limit of 100 stations. with a generous amount of PSRAM this limit can probably be increased.
Stations are saved in a file called stations.json, which can also be edited on your computer.
A very comprehensive and meticulously maintained website with more internet radio
stations is :
https://www.hendrikjansen.nl/henk/streaming.html
</li>
<li>Gesture sensor <br />
Gestures work as follows:
<ul>
<li>circle clockwise or anti-clockwise: wake up sensor ( a little hand shows up on the screen)
or put to sleep if active (the little hand disappears)</li>
<li>up: increase volume</li>
<li>down: decrease volume</li>
<li>right: go to next station</li>
<li>left: go to previous station</li>
</ul>
Sometimes, especially after a new flash of the ESP32, the gesture sensor does not react anymore. Only known fix
is a power on/power off of both the ESP32 and the PAJ7620. In other circumstances this hardly ever occurs.
</li>
<li>Update firmware<br />
By default firmware can be updated using http. Press the button "Update firmware" on the Web page to upload a
new bin file, created in the Arduino IDE by selecting "Export compiled binary" in the Sketch menu.
</li>
<li>Backup and restore<br/>
Use the "Directory" button to see which files you may want to backup. <br/>
Restoring files can be done by a complete flash of the file system, or by using the "Upload file" button.
Make sure you enter the full path of the file, including the preceding /
For the netpassfile and the statonlist there are buttons on the webpage to easily backup these files.
All other files can be downloaded and saved by typing http://snowradio.local/<filename>?download=1 in your browser.
For example, to download the touch calibration data, access this url in your browser:
http://snowradio.local/TouchCal.dat?download=1
</li>
<li>Delete files<br />
Files can be can be deleted by typing in this URL: http://snowradio/delete?file=<filename>,
e.g. http://snowradio/delete?file=/netpass
</li>
<li>Languages</br >
Default language for monthnames and days is Dutch. Provisions have been made to also display these in Russian and
English. For Russian fonts with Cyrillic characters are available. English can be set by uncommenting
#define MONTHNAMES_EN in SnowRadio.ino, Russian by uncommenting MONTHNAMES_RU.
</li>
<li>Only one SSL connection <br />
The Snow radio can receive https Internet radio stations. However, when experimenting, bear in
mind that Arduino does not allow for SSL to use PSRAM. For Snow radio this means that more than 1 SSL ( https)
connection is not possible. This also makes it unlikely newer protocols like dash or HLS can be supported.
</li>
<li>When connection to an internet radio station fails, the radio connects to the next station in the list. This
could surprise you, but is intentional.
</li>
<li>Restart at failing connection <br />
When reception of the internet radio is bad, the radio cannot supply the VS1053 with enough data to supply sound.
As a last resort the radio will then restart. This usually solves the issue, but could in some extraordinary cases
lead to a reboot loop. Via the web page or the screen you can try to switch to another station before the reboot
occurs again. If this doesn't work a reflash of the filesystem can be the only way out, but this will also mean
that more data is lost, like your last station, last volume and tone settiongs, your display calibration data and your
netpass file.
</li>
<li>syslog <br />
A logfile (/syslog.txt) mainly containing boot reasons is maintained and can be viewed on the Web page by pressing the"Show Log"
button. When this log grows beyond 20k it is deleted.
</li>
<li>File system <br />
For development LittleFS has been used, but the Radio can work with FFAT and SPIFFS as well. We recommend LittleFS
for it's robustness and better locking of files. Nevertheless, in SnowRadio.ino
there are options to set the filesystem used. Just uncomment the one you prefer:
//choose file system
//
//fs::FS RadioFS = SPIFFS;
//const int RadioFSNO = FSNO_SPIFFS;
//const char *RadioMount = "/spiffs";
fs::FS RadioFS = LITTLEFS;
const int RadioFSNO = FSNO_LITTLEFS;
const char *RadioMount = "/littlefs";
//fs::FS RadioFS = FFat;
//const int RadioFSNO = FSNO_FFAT;
//const char *RadioMount = "/ffat";
</li>
<li>Timezones<br />
It is surprisingly difficult to find a good source for TZ strings online. Best method is to log in to an up-to-date Linux system
and view for instance /usr/share/zoneinfo/Europe/Moscow, ignore the binary information, the TZ string will be in there.
</li>
<li>Pins<br />
In file 0pins.h an example configuration of the pins can be found. Pins for the display and
the touch sensor should be defined in the tft_eSPI setup file for your display.
</li>
</ul>
| 53.284722 | 142 | 0.721556 | eng_Latn | 0.991562 |
dc91cbfcd21a9c4d30ccf665ccb3e156b4364789 | 2,123 | md | Markdown | docs/Sections/Initiative Budget.md | richardbowman/applied-productdev | 5e16aee0b49ab3f4d7c4a5bce964066e787d3ccd | [
"CC0-1.0"
] | null | null | null | docs/Sections/Initiative Budget.md | richardbowman/applied-productdev | 5e16aee0b49ab3f4d7c4a5bce964066e787d3ccd | [
"CC0-1.0"
] | null | null | null | docs/Sections/Initiative Budget.md | richardbowman/applied-productdev | 5e16aee0b49ab3f4d7c4a5bce964066e787d3ccd | [
"CC0-1.0"
] | null | null | null | # Initiative Budget
## Introduction
An initiative budget describes the set of investments the product and engineering teams intend to take on, and serves as a key part of the product strategy process to ensure you have the resources allocated to product roadmap items.
## When to use
This artifact should be revised on a regular basis, such as quarterly, to drive reallocation of resources as needed (remembering durability is a key), and a heavily used tool in annual budget planning.
## How to maintain
This budget should be directly connected to the [[Opportunity Backlog]] using the same Opportunity IDs to represent the problem/solution spaces that the business believes has potential value and viability for investment.
Ensuring each initiative is described via a clear objective and key results helps ensure the problem has even been groomed at the appropriate level to take on assigning a team to it. The OKRs shown in the [[Initiative Budget]] should be developed using the [[Discovery - Opportunity Grooming]] activity.
## Who maintains
Product leadership is accountable for maintaining this artifact. (See [[applied-productdev/docs/Sections/Team Roles & Responsibilities#Accountable]])
## Sample
| Initiative ID | Opportunity ID | Persona | Objective | Key Results | Stage | Pod |
| ------------- | -------------- | --- | ---------------------------------------------------------- | ------------------------------------------- | --------- | -------------- |
| IN_00001 | OPP_00001 | Customer | Perform re-targeting advertisements to re-engage customers | Key actions for targeted users increase 25% | Scale | Ad Retention |
| IN_00002 | OPP_00001 | Customer | Re-engage through e-mail | Email open rates > 25%, click rates > 5% | Discovery | Mail Retention |
| | | | | | | |
| 84.92 | 303 | 0.591616 | eng_Latn | 0.995888 |
dc924c9cd316f10a88091b3619877f7382efa201 | 537 | md | Markdown | src/scenario_generation/README.md | ikki407/DGOPT | 7d2510cd29e1f1c5c79ab870988348230685cecb | [
"MIT"
] | 2 | 2020-11-02T13:02:21.000Z | 2020-11-06T13:37:36.000Z | src/scenario_generation/README.md | wuyou33/DGOPT | 7d2510cd29e1f1c5c79ab870988348230685cecb | [
"MIT"
] | 1 | 2017-01-23T05:35:49.000Z | 2017-01-23T05:35:49.000Z | src/scenario_generation/README.md | wuyou33/DGOPT | 7d2510cd29e1f1c5c79ab870988348230685cecb | [
"MIT"
] | 1 | 2020-05-14T07:21:15.000Z | 2020-05-14T07:21:15.000Z | # Scenario Generation
## Files
- `scenario_generation.py`: Scenario generation with K-means
- `scenario_generation_kde.py`: Scenario generation by using kernel density estimation. This script estimates the probability density functions (PDF) of demand & price, wind speed, and solar radiation & temperature. Then, sample from the PDF to create scenarios for specified number and use K-means to reduce the number of scenarios for 8760 hours.
- `scenario_generation_duration.py`: Scenario generation with duration curve of demand.
| 48.818182 | 349 | 0.795158 | eng_Latn | 0.875729 |
dc934c23a24937ea178e9d6f1eda9bf5300c9b55 | 10,236 | md | Markdown | treebanks/ja_bccwj/ja_bccwj-pos-ADJ.md | MarcoDiMarek/docs | c00f9c59b8e1e93b24c119fbdccb84aaf6b06565 | [
"Apache-2.0"
] | null | null | null | treebanks/ja_bccwj/ja_bccwj-pos-ADJ.md | MarcoDiMarek/docs | c00f9c59b8e1e93b24c119fbdccb84aaf6b06565 | [
"Apache-2.0"
] | null | null | null | treebanks/ja_bccwj/ja_bccwj-pos-ADJ.md | MarcoDiMarek/docs | c00f9c59b8e1e93b24c119fbdccb84aaf6b06565 | [
"Apache-2.0"
] | null | null | null | ---
layout: base
title: 'Statistics of ADJ in UD_Japanese-BCCWJ'
udver: '2'
---
## Treebank Statistics: UD_Japanese-BCCWJ: POS Tags: `ADJ`
There are 1 `ADJ` lemmas (6%), 1 `ADJ` types (6%) and 26664 `ADJ` tokens (2%).
Out of 17 observed tags, the rank of `ADJ` is: 1 in number of lemmas, 1 in number of types and 9 in number of tokens.
The 10 most frequent `ADJ` lemmas: _
The 10 most frequent `ADJ` types: _
The 10 most frequent ambiguous lemmas: _ (<tt><a href="ja_bccwj-pos-NOUN.html">NOUN</a></tt> 365490, <tt><a href="ja_bccwj-pos-ADP.html">ADP</a></tt> 250543, <tt><a href="ja_bccwj-pos-PUNCT.html">PUNCT</a></tt> 146174, <tt><a href="ja_bccwj-pos-AUX.html">AUX</a></tt> 143333, <tt><a href="ja_bccwj-pos-VERB.html">VERB</a></tt> 111264, <tt><a href="ja_bccwj-pos-SCONJ.html">SCONJ</a></tt> 56125, <tt><a href="ja_bccwj-pos-NUM.html">NUM</a></tt> 38868, <tt><a href="ja_bccwj-pos-PROPN.html">PROPN</a></tt> 35873, <tt><a href="ja_bccwj-pos-ADJ.html">ADJ</a></tt> 26664, <tt><a href="ja_bccwj-pos-SYM.html">SYM</a></tt> 19159, <tt><a href="ja_bccwj-pos-ADV.html">ADV</a></tt> 18911, <tt><a href="ja_bccwj-pos-PART.html">PART</a></tt> 14799, <tt><a href="ja_bccwj-pos-PRON.html">PRON</a></tt> 11302, <tt><a href="ja_bccwj-pos-DET.html">DET</a></tt> 6037, <tt><a href="ja_bccwj-pos-CCONJ.html">CCONJ</a></tt> 5058, <tt><a href="ja_bccwj-pos-INTJ.html">INTJ</a></tt> 915, <tt><a href="ja_bccwj-pos-X.html">X</a></tt> 360)
The 10 most frequent ambiguous types: _ (<tt><a href="ja_bccwj-pos-NOUN.html">NOUN</a></tt> 365490, <tt><a href="ja_bccwj-pos-ADP.html">ADP</a></tt> 250543, <tt><a href="ja_bccwj-pos-PUNCT.html">PUNCT</a></tt> 146174, <tt><a href="ja_bccwj-pos-AUX.html">AUX</a></tt> 143333, <tt><a href="ja_bccwj-pos-VERB.html">VERB</a></tt> 111264, <tt><a href="ja_bccwj-pos-SCONJ.html">SCONJ</a></tt> 56125, <tt><a href="ja_bccwj-pos-NUM.html">NUM</a></tt> 38868, <tt><a href="ja_bccwj-pos-PROPN.html">PROPN</a></tt> 35873, <tt><a href="ja_bccwj-pos-ADJ.html">ADJ</a></tt> 26664, <tt><a href="ja_bccwj-pos-SYM.html">SYM</a></tt> 19159, <tt><a href="ja_bccwj-pos-ADV.html">ADV</a></tt> 18911, <tt><a href="ja_bccwj-pos-PART.html">PART</a></tt> 14799, <tt><a href="ja_bccwj-pos-PRON.html">PRON</a></tt> 11302, <tt><a href="ja_bccwj-pos-DET.html">DET</a></tt> 6037, <tt><a href="ja_bccwj-pos-CCONJ.html">CCONJ</a></tt> 5058, <tt><a href="ja_bccwj-pos-INTJ.html">INTJ</a></tt> 915, <tt><a href="ja_bccwj-pos-X.html">X</a></tt> 360)
* _
* <tt><a href="ja_bccwj-pos-NOUN.html">NOUN</a></tt> 365490: <b>_</b> _ <b>_</b> <b>_</b> _ _ <b>_</b> <b>_</b> _ <b>_</b> _ <b>_</b>
* <tt><a href="ja_bccwj-pos-ADP.html">ADP</a></tt> 250543: _ _ _ _ <b>_</b> _ _ _ <b>_</b> _ <b>_</b> _
* <tt><a href="ja_bccwj-pos-PUNCT.html">PUNCT</a></tt> 146174: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ <b>_</b>
* <tt><a href="ja_bccwj-pos-AUX.html">AUX</a></tt> 143333: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ <b>_</b> _
* <tt><a href="ja_bccwj-pos-VERB.html">VERB</a></tt> 111264: _ _ _ _ _ <b>_</b> _ _ _ _ _ _
* <tt><a href="ja_bccwj-pos-SCONJ.html">SCONJ</a></tt> 56125: _ _ _ _ _ <b>_</b> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
* <tt><a href="ja_bccwj-pos-NUM.html">NUM</a></tt> 38868: _ <b>_</b> _ _ _ _ _ _ _ _ _ _
* <tt><a href="ja_bccwj-pos-PROPN.html">PROPN</a></tt> 35873: _ _ _ <b>_</b> _ _ _ _ _ _ _
* <tt><a href="ja_bccwj-pos-ADJ.html">ADJ</a></tt> 26664: _ _ _ _ _ _ _ _ _ _ _ _ _ _ <b>_</b> _ _ _ _ _ _ _ _ _ _ _ _
* <tt><a href="ja_bccwj-pos-SYM.html">SYM</a></tt> 19159: _ _ <b>_</b> _ _ _ _ _ _ _
* <tt><a href="ja_bccwj-pos-ADV.html">ADV</a></tt> 18911: _ _ _ _ _ _ _ <b>_</b> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
* <tt><a href="ja_bccwj-pos-PART.html">PART</a></tt> 14799: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ <b>_</b> _ _ _ _ _ _
* <tt><a href="ja_bccwj-pos-PRON.html">PRON</a></tt> 11302: <b>_</b> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
* <tt><a href="ja_bccwj-pos-DET.html">DET</a></tt> 6037: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ <b>_</b> _ _ _ _ _ _ _
* <tt><a href="ja_bccwj-pos-CCONJ.html">CCONJ</a></tt> 5058: <b>_</b> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
* <tt><a href="ja_bccwj-pos-INTJ.html">INTJ</a></tt> 915: <b>_</b> _
* <tt><a href="ja_bccwj-pos-X.html">X</a></tt> 360: <b>_</b>
## Morphology
The form / lemma ratio of `ADJ` is 1.000000 (the average of all parts of speech is 1.000000).
The 1st highest number of forms (1) was observed with the lemma “_”: _.
`ADJ` occurs with 1 features: <tt><a href="ja_bccwj-feat-Polarity.html">Polarity</a></tt> (2196; 8% instances)
`ADJ` occurs with 1 feature-value pairs: `Polarity=Neg`
`ADJ` occurs with 2 feature combinations.
The most frequent feature combination is `_` (24468 tokens).
Examples: _
## Relations
`ADJ` nodes are attached to their parents using 13 different relations: <tt><a href="ja_bccwj-dep-advcl.html">advcl</a></tt> (8830; 33% instances), <tt><a href="ja_bccwj-dep-acl.html">acl</a></tt> (8488; 32% instances), <tt><a href="ja_bccwj-dep-root.html">root</a></tt> (3798; 14% instances), <tt><a href="ja_bccwj-dep-amod.html">amod</a></tt> (3182; 12% instances), <tt><a href="ja_bccwj-dep-nmod.html">nmod</a></tt> (963; 4% instances), <tt><a href="ja_bccwj-dep-ccomp.html">ccomp</a></tt> (550; 2% instances), <tt><a href="ja_bccwj-dep-dep.html">dep</a></tt> (270; 1% instances), <tt><a href="ja_bccwj-dep-obl.html">obl</a></tt> (237; 1% instances), <tt><a href="ja_bccwj-dep-obj.html">obj</a></tt> (150; 1% instances), <tt><a href="ja_bccwj-dep-nsubj.html">nsubj</a></tt> (95; 0% instances), <tt><a href="ja_bccwj-dep-csubj.html">csubj</a></tt> (64; 0% instances), <tt><a href="ja_bccwj-dep-compound.html">compound</a></tt> (36; 0% instances), <tt><a href="ja_bccwj-dep-dislocated.html">dislocated</a></tt> (1; 0% instances)
Parents of `ADJ` nodes belong to 17 different parts of speech: <tt><a href="ja_bccwj-pos-NOUN.html">NOUN</a></tt> (12810; 48% instances), <tt><a href="ja_bccwj-pos-VERB.html">VERB</a></tt> (7992; 30% instances), (3798; 14% instances), <tt><a href="ja_bccwj-pos-ADJ.html">ADJ</a></tt> (1677; 6% instances), <tt><a href="ja_bccwj-pos-PROPN.html">PROPN</a></tt> (119; 0% instances), <tt><a href="ja_bccwj-pos-ADV.html">ADV</a></tt> (89; 0% instances), <tt><a href="ja_bccwj-pos-PRON.html">PRON</a></tt> (79; 0% instances), <tt><a href="ja_bccwj-pos-SCONJ.html">SCONJ</a></tt> (34; 0% instances), <tt><a href="ja_bccwj-pos-NUM.html">NUM</a></tt> (30; 0% instances), <tt><a href="ja_bccwj-pos-X.html">X</a></tt> (11; 0% instances), <tt><a href="ja_bccwj-pos-SYM.html">SYM</a></tt> (10; 0% instances), <tt><a href="ja_bccwj-pos-AUX.html">AUX</a></tt> (4; 0% instances), <tt><a href="ja_bccwj-pos-INTJ.html">INTJ</a></tt> (4; 0% instances), <tt><a href="ja_bccwj-pos-PART.html">PART</a></tt> (3; 0% instances), <tt><a href="ja_bccwj-pos-PUNCT.html">PUNCT</a></tt> (2; 0% instances), <tt><a href="ja_bccwj-pos-ADP.html">ADP</a></tt> (1; 0% instances), <tt><a href="ja_bccwj-pos-DET.html">DET</a></tt> (1; 0% instances)
6819 (26%) `ADJ` nodes are leaves.
8540 (32%) `ADJ` nodes have one child.
3489 (13%) `ADJ` nodes have two children.
7816 (29%) `ADJ` nodes have three or more children.
The highest child degree of a `ADJ` node is 39.
Children of `ADJ` nodes are attached using 21 different relations: <tt><a href="ja_bccwj-dep-cop.html">cop</a></tt> (10840; 21% instances), <tt><a href="ja_bccwj-dep-punct.html">punct</a></tt> (7666; 15% instances), <tt><a href="ja_bccwj-dep-nsubj.html">nsubj</a></tt> (5904; 12% instances), <tt><a href="ja_bccwj-dep-advcl.html">advcl</a></tt> (5154; 10% instances), <tt><a href="ja_bccwj-dep-obl.html">obl</a></tt> (4617; 9% instances), <tt><a href="ja_bccwj-dep-mark.html">mark</a></tt> (4474; 9% instances), <tt><a href="ja_bccwj-dep-case.html">case</a></tt> (3991; 8% instances), <tt><a href="ja_bccwj-dep-advmod.html">advmod</a></tt> (2070; 4% instances), <tt><a href="ja_bccwj-dep-aux.html">aux</a></tt> (2019; 4% instances), <tt><a href="ja_bccwj-dep-compound.html">compound</a></tt> (1740; 3% instances), <tt><a href="ja_bccwj-dep-dep.html">dep</a></tt> (644; 1% instances), <tt><a href="ja_bccwj-dep-cc.html">cc</a></tt> (426; 1% instances), <tt><a href="ja_bccwj-dep-dislocated.html">dislocated</a></tt> (383; 1% instances), <tt><a href="ja_bccwj-dep-nummod.html">nummod</a></tt> (157; 0% instances), <tt><a href="ja_bccwj-dep-csubj.html">csubj</a></tt> (147; 0% instances), <tt><a href="ja_bccwj-dep-obj.html">obj</a></tt> (94; 0% instances), <tt><a href="ja_bccwj-dep-amod.html">amod</a></tt> (80; 0% instances), <tt><a href="ja_bccwj-dep-nmod.html">nmod</a></tt> (73; 0% instances), <tt><a href="ja_bccwj-dep-det.html">det</a></tt> (61; 0% instances), <tt><a href="ja_bccwj-dep-discourse.html">discourse</a></tt> (31; 0% instances), <tt><a href="ja_bccwj-dep-reparandum.html">reparandum</a></tt> (2; 0% instances)
Children of `ADJ` nodes belong to 17 different parts of speech: <tt><a href="ja_bccwj-pos-AUX.html">AUX</a></tt> (12883; 25% instances), <tt><a href="ja_bccwj-pos-NOUN.html">NOUN</a></tt> (11893; 24% instances), <tt><a href="ja_bccwj-pos-PUNCT.html">PUNCT</a></tt> (7666; 15% instances), <tt><a href="ja_bccwj-pos-ADP.html">ADP</a></tt> (4037; 8% instances), <tt><a href="ja_bccwj-pos-VERB.html">VERB</a></tt> (3045; 6% instances), <tt><a href="ja_bccwj-pos-SCONJ.html">SCONJ</a></tt> (2506; 5% instances), <tt><a href="ja_bccwj-pos-ADV.html">ADV</a></tt> (2118; 4% instances), <tt><a href="ja_bccwj-pos-PART.html">PART</a></tt> (2013; 4% instances), <tt><a href="ja_bccwj-pos-ADJ.html">ADJ</a></tt> (1677; 3% instances), <tt><a href="ja_bccwj-pos-PRON.html">PRON</a></tt> (850; 2% instances), <tt><a href="ja_bccwj-pos-SYM.html">SYM</a></tt> (747; 1% instances), <tt><a href="ja_bccwj-pos-CCONJ.html">CCONJ</a></tt> (418; 1% instances), <tt><a href="ja_bccwj-pos-PROPN.html">PROPN</a></tt> (342; 1% instances), <tt><a href="ja_bccwj-pos-NUM.html">NUM</a></tt> (236; 0% instances), <tt><a href="ja_bccwj-pos-INTJ.html">INTJ</a></tt> (71; 0% instances), <tt><a href="ja_bccwj-pos-DET.html">DET</a></tt> (61; 0% instances), <tt><a href="ja_bccwj-pos-X.html">X</a></tt> (10; 0% instances)
| 136.48 | 1,627 | 0.635209 | yue_Hant | 0.757227 |
dc93aa424e9856497758bff1adfaae73f75b1004 | 52 | md | Markdown | README.md | vmasterov/RUDN__preloaderModule | 26eadcad74b24b5ba7c7b772ee909de6b420fa53 | [
"MIT"
] | null | null | null | README.md | vmasterov/RUDN__preloaderModule | 26eadcad74b24b5ba7c7b772ee909de6b420fa53 | [
"MIT"
] | 4 | 2020-06-03T15:25:37.000Z | 2021-05-10T00:04:21.000Z | README.md | vmasterov/RUDN__preloaderModule | 26eadcad74b24b5ba7c7b772ee909de6b420fa53 | [
"MIT"
] | null | null | null | # RUDN__preloaderModule
Set the preloader to blocks
| 17.333333 | 27 | 0.846154 | eng_Latn | 0.846 |
dc9404d6f7ad20b6dee25d57ce5fa14dad2406d8 | 4,975 | md | Markdown | content/background/2020-08-19-bitcoin.md | nguyentran0212/commonplace | 5fa5ed721d00fefc272c5df0a35f25c4942f27bc | [
"MIT"
] | null | null | null | content/background/2020-08-19-bitcoin.md | nguyentran0212/commonplace | 5fa5ed721d00fefc272c5df0a35f25c4942f27bc | [
"MIT"
] | null | null | null | content/background/2020-08-19-bitcoin.md | nguyentran0212/commonplace | 5fa5ed721d00fefc272c5df0a35f25c4942f27bc | [
"MIT"
] | null | null | null | ---
date: 2020-08-19
title: "Bitcoin"
cover: "https://source.unsplash.com/random"
categories:
- Background
slug: "bitcoin"
tags:
- background
- bitcoin
---
Sources: [^1]
## Architecture of the Bitcoin network
Three types of main nodes within the Bitcoin network:
1. Users with wallets
2. Miners that compete with each other to add new blocks to the ledger
3. Exchanges: place where users can buy BTC in exchange for other currencies
## Wallets
Software wallets allow user to manage a collection of private keys corresponding to their accounts, and use them to create and side transactions on the Bitcoin network.
Hardware wallets are specialized devices that operate with suitable software to realize the functionality of software wallets.
- Private keys are generally stored on the hardware and never leave the device.
- Public keys are exported so that payments can be received
- The unsigned transaction is sent from the software to the device, verified by the user on the display of the device, and confirmed with a PIN. Then, the transaction is signed by the device and sent back to the software.
Cold wallets such as paper and metal plates are used as backups for the wallets.
## Exchanges
Exchanges are usually websites that buy BTC in exchange for other currencies.
Exchanges hold BTC on behalf of users, making it a form of trusted party within the Bitcoin system.
Users can ask the exchange to transfer purchased Bitcoin to an address under their control. Until that point, if the exchange is compromised, users might lose control of their BTC.
## Accounts and Transactions
An account in Bitcoin is associated with a cryptographic key pair:
- The public key is used to create the account address
- The private key is required to sign transactions originating from the account.
Bitcoin does not track account balance explicitly. Instead, the balance is derived as the sum of unspent transaction outputs that an account has control over. Therefore, the record-keeping model of Bitcoin is referred to as UTXO.
## Bitcoin Transaction Processing
### Mempool and Orphan Transactions
Incoming transactions are handled by the so-called "mempool".
If the referenced input transactions, called "parents", are as yet unknown, a miner will delay the inclusion of the new transaction. Such a transaction becomes an "orphan".
Orphans might stay in the mempool until their parents arrive. Or, they can be removed after a timeout.
### Bitcoin script
The script language of Bitcoin is called "Script":
- A simple, stack-based languaged
- Processed from left to right
- Not Turing complete, meaning it has limited complexity, without looping and complex control flow.
- Pure functions
- Cannot poll external servers or import any external state.
Each script is a list of instructions associated with each transaction, which describes how the BTC transffered with the transaction can be spent.
Types of scripts:
- Locking script is placed on an output, which specifies the conditions that must be met to spend the BTC.
- Unlocking script is placed in an input, which satisfies the conditions of the locking script. To validate a transaction, the unlocking script and the locking script are combined and executed. If the result is true, the transaction is valid.
Pay-to-PubKey Hash (P2PKH) is the most common case. In this implementation, the locking script specifies which (hashed) public key and corresponding signature are required to unlock the output. It means that only the holder of the public key and the corresponding private key can spend the output.
#### OP_RETURN
OP_RETURN is a keyword, called "opcode", which is used to mark a transation output as invalid. It has been used as a standard way to embed arbitrary data to the Bitcoin blockchain for other purposes, such as presenting assets.
### Mining
Bitcoin uses hashcash proof-of-work function.
Mining process:
1. Miner removes transactions from the mempool that belong to the latest received block, as these has been processed.
2. Miner aggregates the remaining valid transactions into a candidate block and reassess their validity
3. Miner adds coinbase transactions to reward itself
4. Miner constructs the block header, which includes a previous hash and a Merkle tree.
5. Miner finds a solution to the proof-of-work function.
6. The new block is propagated to the rest of the network.
7. Receivers verify the new block and add it into their replica.
### Nakamoto Consensus
In Nakamoto consensus, processing nodes by convention treat the longest history of blocks as the authoritative history, also known as the main chain.
In the case multiple chains exist, it is unclear which chain is the main chain until one chain grows longer than the other one.
Bitcoin protocol presents the parallel forks of the ledger to continue for more than a block or two, unless the network is separated.
[^1]: Architecture for Blockchain Application | 42.521368 | 298 | 0.786533 | eng_Latn | 0.999727 |
dc947b795929e1d0d97bf5d79ebd36c4443fa92a | 24,005 | md | Markdown | _posts/2020-08-12-Fever.md | Watt-AI/watt-ai.github.io | cf6c308e8996fda630dd95cad707f18fc7d116e0 | [
"MIT"
] | 2 | 2020-07-12T02:35:38.000Z | 2021-01-29T16:46:58.000Z | _posts/2020-08-12-Fever.md | Watt-AI/watt-ai.github.io | cf6c308e8996fda630dd95cad707f18fc7d116e0 | [
"MIT"
] | null | null | null | _posts/2020-08-12-Fever.md | Watt-AI/watt-ai.github.io | cf6c308e8996fda630dd95cad707f18fc7d116e0 | [
"MIT"
] | 1 | 2020-07-31T14:32:09.000Z | 2020-07-31T14:32:09.000Z | ---
layout: post
category: blog
title: Building an Automatic Fever Detector
tagline: Detecting fever with a few sensors and a Raspberry Pi
permalink: /blog/fever
excerpt_separator: <!--more-->
author:
name: John Paul Lineberger
email: "jplineb@clemson.edu"
image: assets/img/2020-08-12-Fever/featured.jpg
---
## Clemson COVID challenge
The Clemson COVID challenge was a summer virtual research and design opportunity for teams of faculty and undergraduates to work on problems related to the COVID-19 Pandemic as well as creating solutions for future pandemics. With partner university University of South Carolina and Prisma health, Teams had a little more than half a month to tackle a problem in the areas of communication, Education, Healthcare Technology, Policy/Economy/Logistics, or Society/Community.
![enter image description here](https://github.com/jplineb/FeverDetectorCOVIDChallenge/blob/master/Photos/COVID_Banner.jpg?raw=true)
Focusing on the area of Healthcare Technology, my mentors Dr. Hudson Smith, Dr. Carl Ehrett, and I decided to work on building a privacy-centric, affordable, open-source fever detection solution. With a team of students and me at the helm, four weeks of hard work converged to a solution conveniently named the Tig**IR** which ticked many of the boxes we wanted while coming in at sub $500.
## Why fever detection?
In the wake of the COVID-19 outbreak it has become increasingly difficult to safely and responsibly navigate the normal tasks of our daily lives, especially while keeping the efficiency of life that we have come to expect. With a new normal of taking temperatures upon entering places of business, many solutions have incorporated the use of IR cameras and facial detection to aid in this process. However, these solutions can be expensive and what they do with this data behind closed doors could surrender your privacy. We wanted to create a solution that would allow us to regain the our efficiency of lives, while remaining safe and responsible to not only the pandemic we face but also our privacy. Given how powerful of a tool AI and thermal imaging are, it's obvious on why people would want to use them but there's a morally correct way of going about doing so.
## Background on using thermal cameras for fever detection
For over 30 years, IR thermal cameras have been used to diagnose issues in many industries everything from healthcare applications to home cooling. This is because heat generated from sources emit a band of light that the human eye or any standard camera can perceive. However, when targeting said band, we can get the temperature of a point in space by the amount of light it emits. This is then mapped to a color map and creates images for diagnosing problems
![enter image description here](https://github.com/jplineb/Fever-Detector-COVID-Challenge/blob/master/Photos/thermal_example.PNG?raw=true)
In the case of faces, we can use thermal imaging to get the temperature of a subject's face which will emit more IR energy if they have a fever aka an elevated body temperature
![enter image description here](https://github.com/jplineb/Fever-Detector-COVID-Challenge/blob/master/Photos/people_scanned_IR.jpg?raw=true)
Normally we could use this data with a calibrated sensor to get the temperature at certain spots, however these locations on the human face have to be exact and therefore sometimes hard to capture. Our solution is to use two sensors: a IR thermal camera and a visual spectrum camera. The visual spectrum camera is uses facial detection to get landmarks on the faces of people walking by and then map that to a thermal sensor to get a values. The thermal data at these points are then passed to a machine learning model which will output whether or not the subject has a fever.
## Current solutions on the market
Several solutions to this problem already exists on the market, however they have a few short comings such as price, not being reviewed/validated/, and accessibility. We wanted to create a solution that any one can have access to, affordable, and reviewed.
![enter image description here](https://github.com/jplineb/Fever-Detector-COVID-Challenge/blob/master/Photos/solution_1.PNG?raw=true)
![enter image description here](https://github.com/jplineb/Fever-Detector-COVID-Challenge/blob/master/Photos/solution_2.PNG?raw=true)
![enter image description here](https://github.com/jplineb/Fever-Detector-COVID-Challenge/blob/master/Photos/solution_3.PNG?raw=true)
## Component Selection
Like I said before, we wanted the Tig**IR** to be an affordable solution and using off the shelf parts. Luckily there were a few options for each which gave us some flexibility
### The Brain
Being able to run who machine learning models and process incoming images can be a resource intensive task. Many small compute units exits on the market exist however, for an effective solution we considered two options:
**Nvidia Jetson Nano**
+ has Cuda cores for dedicated machine learning techniques
+ built in, very effective heat sink
+ has pcie slot for storage or wifi
+ Overclockable with a very capable arm processor
+ Expensive
![enter image description here](https://github.com/jplineb/FeverDetectorCOVIDChallenge/blob/master/Photos/jetsonnano.jpg?raw=true)
**Raspberry Pi 4 4GB model**
+ Affordable
+ Build in Wifi and bluetooth
+ Wider community support
+ Neo-arm processor which is faster however dedicated GPU not as strong
+ micro-hdmi
![enter image description here](https://github.com/jplineb/FeverDetectorCOVIDChallenge/blob/master/Photos/raspberrypi.jpg?raw=true)
We ended up choosing the Raspberry Pi 4 for this project because of its affordability and neo-arm architecture.
### The Sensors
For our solution to work correctly, it requires two sensors one that sees the visible spectrum and the other
that sees IR. Instead of listing all the possible options, let me give you the reasons for the products we selected:
1. LABISTS Raspberry Pi noIR Camera V2
* Sony mirrorless 8MP sensor capable of 1080p at 30fps
* IR filter removed for better low light performance
* Super easy to install with ribbon cable
* Low cost of $27.99 USD
![enter image description here](https://github.com/jplineb/FeverDetectorCOVIDChallenge/blob/master/Photos/picamera.jpg?raw=true)
2. FLIR Radiometric Lepton V2.5
* 80x60 IR solution for $250 USD
* FLIR is known for their quality products, reliability, and documentation
![enter image description here](https://github.com/jplineb/FeverDetectorCOVIDChallenge/blob/master/Photos/FLIR_Lepton.jpg?raw=true)
### Enclosure
The enclosure we selected for this project was selected based on its features and price. The **Miuzei Case** includes a fan and 4 aluminum heat sinks for cooling. This case also includes a 15W power supply which covered that component. The IO on this enclosure is really easy to access.
### Misc
Some components that we had to purchase that are generic:
* MicroSD Card
* Usually you want to choose at least a UHS Speed Class 3
* Need at least A2 read speed
* Rated at 90MB/s
* Breadboard Cables
* Anything will work
### Prototype development
In order to have a test bed for developing code, I built a testbed to hold the sensors while we were finishing designing and prototyping the 3D printable enclosure.
![](https://github.com/jplineb/FeverDetectorCOVIDChallenge/blob/master/Photos/testbed.png?raw=true)
## Setting up the Pi
### Preparing the SD Card
Setting up a Raspberry Pi is quite simple these days. Using a computer with a microSD card slot or an external SD card reader, plug your microSD card into your computer and then allow the system to have writing permissions to said storage device (should be enabled by default). Next head over to [Raspberry OS Imager Guide](https://www.raspberrypi.org/documentation/installation/installing-images/README.md) where you can download the Pi Imager and install your preferred version of Raspbian.
After the image is installed, create a txt file in the main directory of the microSD card named **SSH** to enable ssh forwarding. This allows you to connect to the Pi from a PC over a local network instead of having to find a mini-hdmi cable. Using **Windows Remote Desktop Protocol** you can even view the desktop in real time. This is where I did most of the development for this project.
In hopes that the Raspberry Pi community would have it's own OS image with the tools necessary to perform machine learning tasks on the Pi, my search came up with nothing but images locked behind pay walls that costs hundreds of dollars. In search for a cheap alternative solution, we have created an Image that contains OpenCV, Numpy, Pandas, Pytorch, and dlib compiled from source to run on the neo-arm architecture sufficiently so that you don't have to spend hours on lengthy tutorials. You can download that [Here](blank)
### SSH into Pi
To get the ip of the raspberry pi on your network simply type in a windows or Linux terminal
```
ping raspberrypi
```
![enter image description here](https://github.com/jplineb/FeverDetectorCOVIDChallenge/blob/master/Photos/ping.PNG?raw=true)
### Installing the packages
There are quite a number of packages that are necessary for this project along with their requirements:
* Python 3.7
* OpenCV with dlib
* Numpy
* Pandas
* Pylepton
* Pytorch
* Picamera
Many of these were compiled from source using cmake instead of pip installing so that they could take advantage of the neo-arm architecture. Tutorials for these libraries are available.
### Enabling IO
Using the Raspberry Pi configuration tool, make sure to enable the use of the GPIO pins and CSI ribbon slot.
![enter image description here](https://github.com/jplineb/Fever-Detector-COVID-Challenge/blob/master/Photos/pi_configuration.PNG?raw=true)
Once enabled, shut down the Pi and plug in the normal camera to the ribbon slot and plug each GPIO pin to its respective position as shown below.
To test to see if the normal camera is working type the following into a terminal which will generate at test image:
```
raspistill -o testshot.jpg
```
If the image is not generated check your connections and Pi configuration again.
## The Code
In order to access many of the features shown below, clone the [Fever-Detector-COVID-Challenge](https://github.com/jplineb/Fever-Detector-COVID-Challenge) repo
### Testing Camera and Libraries
#### Normal Camera
To assure everything is working properly, we will create a simple script called picamera_test.py. first import your necessary libraries and initialize the camera and build a buffer. Here you can specify the output size, frame rate, and orientation of the image. Only use size and framerate combinations that are supported by your selected camera
```python
## import packages
import cv2
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import numpy as np
## Test Video ##
# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
camera.rotation = 180
rawCapture = PiRGBArray(camera, size=(640, 480))
```
Then we will allow the camera to sleep for warm up
```python
time.sleep(0.1)
```
Next we will create a for loop that will update a cv2 window with grabbed frames from the camera. If you want to the test to stop at any time, press 'q'
```python
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
# grab the raw NumPy array representing the image, then initialize the timestamp
# and occupied/unoccupied text
image = frame.array
# show the frame
cv2.imshow("Frame", image)
key = cv2.waitKey(1) & 0xFF
# clear the stream in preparation for the next frame
rawCapture.truncate(0)
# if the `q` key was pressed, break from the loop
if key == ord("q"):
break
```
Then navigate in your terminal where you created your script and run it. This should be similar to your result
![enter image description here](https://github.com/jplineb/FeverDetectorCOVIDChallenge/blob/master/Photos/picamera_test.PNG?raw=true)
#### Lepton Camera
To assure you connected your GPIO pins correctly and the Pylepton library is work we are going to create a script named lepton_test.py
First import the necessary packages, create a buffer to store the frames into, and then create a cv2 named window which we will display the frames on
```python
## Import packages
import numpy as np
import cv2
from pylepton import Lepton
# Create a buffer to save the images into
lepton_buf = np.zeros((60,80,1), dtype=np.uint16)
# Create a named window for the image to be placed into
cv2.namedWindow("Thermal", cv2.WINDOW_NORMAL) # creates window
cv2.resizeWindow('Thermal', 400,300) # resizes window to usable resolution
```
Then we want to create a while loop to constantly display the frames in the buffer
```python
while True:
with Lepton() as l:
a,_ = l.capture()
# Rotate image
a = np.rot90(a, 2)
# Convert to uint 8
a = (a/256).astype('uint8')
# Resize image
cv2.resize(a, (640,480))
# show image
cv2.imshow('Thermal',a)
key = cv2.waitKey(1) & 0xFF
# if the `q` key was pressed, break from the loop
if key == ord("q"):
break
```
Notice how when you run the script you get a grey image. This is because of the value ranges of the pixels
![enter image description here](https://github.com/jplineb/FeverDetectorCOVIDChallenge/blob/master/Photos/lepton_test_notnormed.PNG?raw=true)
In order to introduce more contrast to the image, we need to normalize the pixel values by range of values and the images min and max values. So we update the script to the following
```python
while True:
with Lepton() as l:
a,_ = l.capture()
# normalize image
cv2.normalize(a,a, 0, 65353, cv2.NORM_MINMAX)
# Rotate image
a = np.rot90(a, 2)
# Convert to uint 8
a = (a/256).astype('uint8')
# Resize image
cv2.resize(a, (640,480))
# show image
cv2.imshow('Thermal',a)
key = cv2.waitKey(1) & 0xFF
# if the `q` key was pressed, break from the loop
if key == ord("q"):
break
```
Which gives us the following
![enter image description here](https://github.com/jplineb/FeverDetectorCOVIDChallenge/blob/master/Photos/lepton_test_normed.PNG?raw=true)
Using CV2 we can also map pixels values to colors giving you a traditional IR thermal camera look:
```python
while True:
with Lepton() as l:
a,_ = l.capture()
cv2.normalize(a,a, 0, 65353, cv2.NORM_MINMAX)
# Rotate image
a = np.rot90(a, 2)
# Convert to uint 8
a = (a/256).astype('uint8')
# Resize image
cv2.resize(a, (640,480))
# Convert to color map
a = cv2.applyColorMap(a, cv2.COLORMAP_JET)
# show image
cv2.imshow('Thermal',a)
key = cv2.waitKey(1) & 0xFF
# if the `q` key was pressed, break from the loop
if key == ord("q"):
break
```
![enter image description here](https://github.com/jplineb/FeverDetectorCOVIDChallenge/blob/master/Photos/lepton_test_colormap.PNG?raw=true)
### Trying the dlib library
Dlib is a library that will allow us to get landmarks on people's face for capturing the face IR data. for that we will create a dlib_test.py script
First we will import the necessary libraries and initialize the camera
```python
import cv2
import numpy as np
import dlib
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
## Camera setup
# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
camera.rotation = 180
rawCapture = PiRGBArray(camera, size=(640, 480))
# allow the camera to warmup
time.sleep(0.1)
```
Then initiate the dlib library
```python
## dlib face detector setup
detector = dlib.get_frontal_face_dector() # initialize the dlib face detector
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat') # load the dlib model
```
Now create the for loop for the video
```python
## video loop
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
# grab the raw NumPy array representing the image, then initialize the timestamp
# and occupied/unoccupied text
image = frame.array
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = detector(gray)
for face in faces:
x1 = face.left()
y1 = face.top()
x2 = face.right()
y2 = face.bottom()
print(x1,y1)
cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
landmarks = predictor(gray, face)
for n in range(0, 68):
x = landmarks.part(n).x
y = landmarks.part(n).y
cv2.circle(image, (x, y), 4, (255, 0, 0), -1)
# show the frame
cv2.imshow("Frame", image)
key = cv2.waitKey(1) & 0xFF
# clear the stream in preparation for the next frame
rawCapture.truncate(0)
# if the `q` key was pressed, break from the loop
if key == ord("q"):
break
```
### Fever Detection Script
```python
import cv2
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import numpy as np
import dlib
from pylepton import Lepton
import pandas as pd
#Initialization of Camera/Windows
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
camera.rotation = 180
rawCapture = PiRGBArray(camera, size=(640, 480))
lepton_buf = np.zeros((60,80,1), dtype=np.uint16)
cv2.namedWindow("Thermal", cv2.WINDOW_NORMAL)
cv2.resizeWindow('Thermal', 400,300)
cv2.namedWindow("PyCam", cv2.WINDOW_NORMAL)
cv2.resizeWindow('PyCam', 400,300)
# define transform
# h = np.float32([[ 2.24346513e+00, 6.48002063e-01, -1.69435974e+02],
# [ 7.40627465e-02, 2.71901217e+00, -3.16027302e+02],
# [ 1.35883889e-04, 2.71327283e-03, 1.00000000e+00]])
h = np.float32(np.load('trans_param.npy')) # get transform parameters from file
# create data frame for data
DF = pd.DataFrame(data=None, columns=['Face','LM1','LM2','LM3','LM4','LM5'])
#Allow Camera to warm up
time.sleep(0.1)
## dlib face detector setup
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("./models/shape_predictor_5_face_landmarks.dat")
## define normalization alpha and beta
alpha = -70000
beta = 70000
## define translation parameters
x_pos = 0
y_pos = 0
# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
with Lepton() as l:
a,_ = l.capture()
cv2.normalize(a, a, alpha, beta, cv2.NORM_MINMAX) # extend contrast
a = np.rot90(a, 2)
a = (a/256).astype('uint8')
def on_click(event, x, y, p1, p2):
if event == cv2.EVENT_LBUTTONDOWN:
print(a[x,y])
# update translation matrix
translation_matrix = np.float32([[1,0,x_pos],[0,1,y_pos]])
image = frame.array
image = cv2.resize(image, (400,300))
a = cv2.resize(a, (400, 300))
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = detector(gray)
# define array for data storage
face_num = 0
# translate non thermal iamge
image = cv2.warpAffine(image, translation_matrix, (400,300))
for face in faces:
x1 = face.left()
y1 = face.top()
x2 = face.right()
y2 = face.bottom()
cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
landmarks = predictor(image, face)
thermal_data = []
for n in range(0, 5):
x = landmarks.part(n).x
y = landmarks.part(n).y
cv2.circle(image, (x, y), 4, (255, 0, 0), -1)
thermal_pixel = a[x,y]
if n < 2:
cv2.putText(image, str(thermal_pixel), (int(x*1.1),int(y*1.1)),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0,255,0))
else:
cv2.putText(image, str(thermal_pixel), (int(x*.9),int(y*1.1)),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0,255,0))
thermal_data.append(thermal_pixel)
DF = DF.append({'Face': face_num,
'LM1': thermal_data[0],
'LM2': thermal_data[1],
'LM3': thermal_data[2],
'LM4': thermal_data[3],
'LM5': thermal_data[4]},
ignore_index=True)
face_num += 1
cv2.setMouseCallback('Thermal', on_click)
# show the frame
cv2.imshow("PyCam", image)
cv2.imshow('Thermal', a)
color_map = cv2.applyColorMap(a, cv2.COLORMAP_JET)
cv2.imshow("color",color_map)
# show warped image
warp_src = cv2.warpPerspective(image, h, (400,300)) # apply perspective warp
# cv2.imshow("warp",warp_src)
# show overlay
a_3 = cv2.merge((a,a,a))
blnd = cv2.addWeighted(a_3,0.7,warp_src,0.3,0)
cv2.imshow("blnd",blnd)
key = cv2.waitKey(1) & 0xFF
#o.update(np.getbuffer(a))
# clear the stream in preparation for the next frame
rawCapture.truncate(0)
# if the `q` key was pressed, break from the loop
if key == ord('t'):
alpha += 5000
print("Alpha is %d" % (alpha,))
if key == ord('g'):
alpha -= 5000
print("Alpha is %d" % (alpha,))
if key == ord('y'):
beta += 5000
print("Beta is %d" % (beta,))
if key == ord('h'):
beta -= 5000
print("Beta is %d" % (beta,))
if key == ord('j'):
x_pos += -1
print("x_pos %d" % (x_pos,))
if key == ord('l'):
x_pos += 1
print("x_pos %d" % (x_pos,))
if key == ord('i'):
y_pos += 1
print("y_pos %d" % (y_pos,))
if key == ord('k'):
y_pos -= 1
print("y_pos %d" % (y_pos,))
if key == ord("q"):
DF.to_csv('./data/face_data_excersize.csv')
print(a)
break
```
## The UI
Currently the UI for this project is not finished as we did not receive additional funding for the project however our vision is shown below. This configuration would be accessible via a web browser either on your phone, tablet, or PC external of the TigIR.
Future prototypes of this project will include a pyQT UI to make it easier, but updates to this repo for anyone that could do the below is highly appreciated
![enter image description here](https://github.com/jplineb/Fever-Detector-COVID-Challenge/blob/master/Photos/web%20browser%20UI.png?raw=true)
## The Video
For our design competition, I created a pitch video for this project in hopes of receiving future funding
<!-- blank line -->
<figure class="video_container">
<iframe src="https://www.youtube.com/embed/dCN8HfJhVs4" frameborder="0" allowfullscreen="true" width="1000" height="720"> </iframe>
</figure>
<!-- blank line -->
## Future Work
Much of the ground work for this project has been layed however there is still quite a bit to go. In order to produce a reliable machine learning model that recognizes fever patients, more thermal imaging data needs to be captured. As for the network side, a full framework still needs to be produced in order for multiple devices to work on the network. Currently, the web UI is under construction and it projected to be finished before the end of Summer 2020.
## Meet The Watt Fever Detection Team
![enter image description here](https://github.com/jplineb/Fever-Detector-COVID-Challenge/blob/master/Photos/Watt%20Fever%20Detection%20Team.png?raw=true)
| 44.702048 | 869 | 0.707394 | eng_Latn | 0.980533 |
dc94f76751f07a2b20df6b45bb6a501ae8b83636 | 72 | md | Markdown | _archives/tags/sql.md | mjclemente/mjclemente.github.io | 42fddb464c43e5d676cf9dd2daf153cc6d411332 | [
"MIT"
] | 1 | 2018-10-02T16:45:56.000Z | 2018-10-02T16:45:56.000Z | _archives/tags/sql.md | mjclemente/mjclemente.github.io | 42fddb464c43e5d676cf9dd2daf153cc6d411332 | [
"MIT"
] | 3 | 2021-03-24T14:45:17.000Z | 2022-02-26T02:40:53.000Z | _archives/tags/sql.md | mjclemente/mjclemente.github.io | 42fddb464c43e5d676cf9dd2daf153cc6d411332 | [
"MIT"
] | null | null | null | ---
title: sql
tag: "sql"
layout: archive-tags
permalink: "tag/sql"
---
| 10.285714 | 20 | 0.638889 | kor_Hang | 0.181223 |
dc970036daf95a2640752e149dab560a10323c92 | 3,286 | md | Markdown | README.md | elastic/workflows | 2d912e6c63393d5422804e5ae72e6e8f2992b3ee | [
"MIT"
] | null | null | null | README.md | elastic/workflows | 2d912e6c63393d5422804e5ae72e6e8f2992b3ee | [
"MIT"
] | null | null | null | README.md | elastic/workflows | 2d912e6c63393d5422804e5ae72e6e8f2992b3ee | [
"MIT"
] | null | null | null | # workflows
:wave: Contains public GitHub Action workflows
## Elastic docs
Elastic docs require that we install an "agent" in each content source.
A content source is a repository which contains MDX files across a given topic.
There are two contexts, internal & external docs.
Each pairs with a "separate" web app.
The webapps are built by Vercel, and as a result we get 2x "prod" instances.
Before prod is built in each instance, we need to:
1. Organize contents
2. Test contents
3. Create a PR preview
4. Transform content
5. Ship content
This step-sequence gives us space to do "whatever we need" to the content.
After agent installation, requires token access.
After token access, new PRs will trigger the Action.
🏴☠️Successful PRs are **required** for building to prod.
⚠️ Merging to main will not trigger a build.
### Dev docs builder, calling workflow
Change values as needed.
For example, if you do not have a docs dir, use the correct dir or no dir instead.
Install as `.github/workflows/dev-docs-builder.yml` in the content source.
:wave: Provide the content source access to the Vercel_ tokens.
```yml
name: Elastic dev docs
on:
pull_request_target:
paths:
# Change docs dir to your repos docs dir
- 'docs/**.mdx'
- 'docs/**.docnav.json'
- 'docs/**.docapi.json'
- 'docs/**.devdocs.json'
- 'docs/**.jpg'
- 'docs/**.jpeg'
- 'docs/**.png'
- 'docs/**.svg'
- 'docs/**.gif'
types: [opened, closed, synchronize]
jobs:
publish:
uses: elastic/workflows/.github/workflows/docs-elastic-dev-publish.yml@main
with:
# Refers to Vercel project
project-name: docs-elastic-dev
# Which prebuild step (dev or not)
prebuild: wordlake-dev
# Docsmobile project dir
repo: docs.elastic.dev
secrets:
VERCEL_GITHUB_TOKEN: ${{ secrets.VERCEL_GITHUB_TOKEN }}
VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}
VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
VERCEL_PROJECT_ID_DOCS_DEV: ${{ secrets.VERCEL_PROJECT_ID_DOCS_DEV }}
```
### Public docs builder, calling workflow
Change values as needed.
For example, if you do not have a docs dir, use the correct dir or no dir instead.
Install as `.github/workflows/co-docs-builder.yml` in content source.
:wave: Provide the content source access to the Vercel_ tokens.
```yml
name: Elastic docs
on:
pull_request_target:
paths:
# Change docs dir to your repos docs dir
- 'docs/**.mdx'
- 'docs/**.docnav.json'
- 'docs/**.docapi.json'
- 'docs/**.devdocs.json'
- 'docs/**.jpg'
- 'docs/**.jpeg'
- 'docs/**.svg'
- 'docs/**.png'
- 'docs/**.gif'
types: [closed, opened, synchronize]
jobs:
publish:
uses: elastic/workflows/.github/workflows/docs-elastic-co-publish.yml@main
with:
# Refers to Vercel project
project-name: docs-elastic-co
# Which prebuild step (dev or not)
prebuild: wordlake
# Docsmobile project dir
repo: docs.elastic.co
secrets:
VERCEL_GITHUB_TOKEN: ${{ secrets.VERCEL_GITHUB_TOKEN }}
VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}
VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
VERCEL_PROJECT_ID_DOCS_CO: ${{ secrets.VERCEL_PROJECT_ID_DOCS_CO }}
```
| 26.5 | 82 | 0.673159 | eng_Latn | 0.796975 |
dc979617bc6a1d3d2acc5e57bad642c85e8e9c59 | 1,819 | md | Markdown | AlchemyInsights/restrict-access-in-sharepoint-or-onedrive.md | isabella232/OfficeDocs-AlchemyInsights-pr.da-DK | a907697f48db2dc57c19d7e003d92831c111566e | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-05-19T19:06:02.000Z | 2020-09-17T11:26:05.000Z | AlchemyInsights/restrict-access-in-sharepoint-or-onedrive.md | isabella232/OfficeDocs-AlchemyInsights-pr.da-DK | a907697f48db2dc57c19d7e003d92831c111566e | [
"CC-BY-4.0",
"MIT"
] | 2 | 2022-02-09T06:59:12.000Z | 2022-02-09T06:59:36.000Z | AlchemyInsights/restrict-access-in-sharepoint-or-onedrive.md | isabella232/OfficeDocs-AlchemyInsights-pr.da-DK | a907697f48db2dc57c19d7e003d92831c111566e | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-10-11T18:36:50.000Z | 2021-10-09T10:49:57.000Z | ---
title: Begræns adgang i SharePoint eller OneDrive
ms.author: mikeplum
author: MikePlumleyMSFT
ms.date: 04/21/2020
ms.audience: ITPro
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.assetid: af1b936b-0475-497b-a6d3-e671aef7b717
ms.openlocfilehash: b7b68df2ae24b09fe9b01bd67c31a89e37f284a512bc1ecb097ef52fae5ae7d6
ms.sourcegitcommit: b5f7da89a650d2915dc652449623c78be6247175
ms.translationtype: MT
ms.contentlocale: da-DK
ms.lasthandoff: 08/05/2021
ms.locfileid: "54075034"
---
# <a name="restrict-access-in-sharepoint-or-onedrive"></a>Begræns adgang i SharePoint eller OneDrive
I SharePoint og OneDrive kan du begrænse adgangen til elementer som filer, mapper og lister ved kun at give adgang til grupper eller enkeltpersoner, du vil have adgang til. Som standard nedarves tilladelser i SharePoint oppefra og op i hierarkiet. Derfor nedarver en fil sine tilladelser fra mappen, som nedarver dens tilladelser fra biblioteket, som nedarver dens tilladelser fra webstedet.
Du kan dele på et højere niveau (f.eks. ved at dele et helt websted) og derefter bryde nedarvningen, hvis du ikke vil dele alle elementerne på webstedet. Vi anbefaler dog ikke dette, da det gør vedligeholdelsen af tilladelserne mere kompleks og forvirrende i fremtiden. Her er, hvad du kan gøre i stedet:
- Hvis du f.eks. vil dele alt indholdet i en mappe undtagen én fil i den, skal du flytte filen til en ny placering, der ikke er delt.
- Hvis du har to undermapper i en mappe, og du vil dele én undermappe med gruppe A og B og kun tillade gruppe A adgang til den anden undermappe, skal du dele den overordnede mappe med gruppe A og føje gruppe B til den første undermappe.
[Stop delingen af en fil eller mappe ](https://go.microsoft.com/fwlink/?linkid=2008861)
| 56.84375 | 391 | 0.79934 | dan_Latn | 0.980047 |
dc98cce5721f3ff32b074a4e8e52bd9a0645ce61 | 9,514 | md | Markdown | README.md | bhartsfield/docker-sqlserver-lucee | 7b67657cd9852786623ce091423a02d55d7a4d0b | [
"MIT"
] | null | null | null | README.md | bhartsfield/docker-sqlserver-lucee | 7b67657cd9852786623ce091423a02d55d7a4d0b | [
"MIT"
] | null | null | null | README.md | bhartsfield/docker-sqlserver-lucee | 7b67657cd9852786623ce091423a02d55d7a4d0b | [
"MIT"
] | 1 | 2022-01-04T17:01:09.000Z | 2022-01-04T17:01:09.000Z | # BaseDao Test App With SQL Server in a Docker Container
## Overview
This is a quick proof of concept to demonstrate two items I recently mentioned to colleagues.
- The first item was that it is quite easy to spin up a SQL Server instance inside of a docker container and seed it with an initialization SQL script.
- The other item came up when I was shown a number of DAOs with repetitive code for basic CRUD operations for single DB tables. The only real differences where the table names and the column lists. So, instead of duplicating everything else, a single base dao could centralize the bulk of the code and the individual DAOs could just extend it and define their tables and columns.
So, I threw this together to spin up a SQL Server docker container that, when up and running, contains a test database with a users table and multiple user records.
To POC connecting to and querying that database, I created a commandbox/lucee docker container with the base DAO, a user DAO that extends the base and an index.cfm file that runs all the different CRUD operation examples.
## Starting the environment
To start the sqlserver and lucee containers (assuming you are already running docker or docker desktop), open a command prompt of your choice, cd into the root of the project (the directory that contains the docker-compose.yml file) and run the following command.
```bash
docker-compose up -d --build
```
Once everything is up and running (keep an eye on the lucee container logs to know when it is fully started), you should be able to open a browser and navigate to [https://localhost:8080] and see all of the CF dumps found in the inde.xcfm file.
## SQL Server
The SQL Server container is primarily defined inside the sqlserver service section of docker-compose.yml and any supporting code/files can be found in the ./sqlserver directory.
- ./sqlserver/entrypoint.sh - This is a basic bash script that will start the sqlserver process inside the container and then, once it is ready, run the init.sql script
- ./sqlserver/build/initdb/init.sql - The init.sql file is a seed script that, once the SQL Server service is running, will be ran to create a TestDb database, a dbo.users table inside that table and add multiple user records to that table.
## Lucee
The commandbox/lucee server is defined in the docker-compose.yml file under the lucee service and the application it runs is found in the ./lucee directory.
- ./lucee/Application.cfc - his is a very basic Application.cfc file that just defines the application name and a datasource that points to the sqlserver docker container's TestDb database
- ./lucee/box.json - The box.json file is also a very basic one. It simply defines the SQL Server jdbc driver as the only server dependency and then runs the box "install" command when the new Lucee server installs the first time.
- ./lucee/model/daos/Base.cfc - The base dao is meant to be extended by single-table DAOs (such as the users.cfc). It provides basic CRUD functionality with various filtering and ordering options.
- ./lucee/model/daos/Users.cfc - This is a minimal example of a table-specific DAO that represents the TestDb.dbo.Users table that the init.sql script created in the sqlserver docker container. It extends the Base.cfc DAO in order to gain the basic CRUD functionality.
- ./lucee/index.cfm - Index.cfm just creates an instance of the User dao and then performs all of the available create, read, update, delete, filtering and ordering functionality provided by the base DAO.
Examples in index.cfm (from the top down) are
- Creates a new user record using data from randomuser.me
- Reads all user records ordered by the primary key ASC (the default ordering method)
- Reads the user record with an Id of 1
- Reads a count of user records
- Reads a count of user records where the LastName is "Turk"
- Reads all user records ordered by LastName ASC then FirstName ASC
- Reads all user records where the LastName is NOT "Turk"
- Reads the top 1 record ordered by Id DESC
- Updates the latest user record's FirstName, LastName and EmailAddress to random values from randomuser.me
- Reads the top 1 record ordered by Id DESC (to confirm the updates)
- Deletes the newest user record
## ./docker-compose.yml
The docker-compose file is what defines the environment (which just consists of a database server and a lucee server). Below is a breakdown of its contents.
### volumes
First, a volume named sqlserver_vol is created in which to store the database files we want to persist. If the data is not persisted in a volume outside of the container, it would all be lost each time the container is restarted.
### networks
In order for the services within our docker environment to communicate with each other, a virtual docker network needs to be defined. In this case, there is just one simple network named "main".
### services
This is where all the different containers are defined. In this case, only two services are defined, a SQL Server container and a Lucee container.
### sqlserver
- image: mcr.microsoft.com/mssql/server:2019-latest is the official Microsoft SQL Server 2019 docker image so it is what we use to run our SQL Server instance
- hostname: When running multiple containers that need to communicate with each other, they can resolve each other by their defined hostnames.
- exposes: The default SQL Server connection port is 1433. Since we want to allow the lucee service to connect to the sqlserver container's sql instance, port 1433 (which is the SQL Server default port) needs to be exposed.
- ports: 1433:1433 just says to map the host port of 1433 to the exposed container port of 1433. This is only necessary when you want an external connection into the sql server instance. For example, if you want to use SSM from your own machine to connect to the SQL instance, a host port needs to be mapped to the internal exposed port. In this particular case, 1433 was used for both but the host port (left side of the colon) could be any available port on the host.
- environment: Here you can define environment variables that end up being environment variables within the container. ACCEPT_EULA and SA_PASSWORD are environment variables are built into the official SQL Server docker image and, if provided, are automatically picked up/used by the setup scripts inside the container.
- volumes: Here we map the /var/opt/mssql path within the contain to the external, persisted sqlserver_vol volume created under the volumes section of the docker-compose file. Next, we caopy the contents of ./sqlserver/build from our code into the container at /docker-entrypoint-initdb.d.
- command: The command(s) provided here will be ran inside the container when it starts up. In this example, we're making the entrypoint.sh file executable and then running it and passing our database password in as the only argument.
- network: here, we are just telling the container which virtual docker network in which to put the container (which is the "main" network we created at the top of the docker-compose file) and then telling it.
- healthcheck: the healthcheck option allows you to define a command to be used to test the container's state. Since we have another service that depends on the sqlserver service, we needed a way to tell that service that sqlserver was actually ready for connections.
### lucee
- image: ortussolutions/commandbox:lucee-light-alpine-3.4.0 is an official prewarmed ortus solutions commandbox/lucee image
- env_file: this is the alternative to the environments setting from the sqlserver example above (though they can be used in conjunction). Instead of defining environment variables directly in the docker-compose file, you can also define them in an environment file then use env_file and provide it here.
- volumes: here we are just mounting the local ./lucee directory to the /app directory int he container (this does not copy the files into the image/container, it just mounts them so you can change the files locally and see those changes reflected inside the container immediately).
- network: here, we are just telling the container which virtual docker network in which to put the container (which is the "main" network we created at the top of the docker-compose file)
- expose: The official ortus image runs the Lucee instance on port 8080. In order to access the lucee instance from outside the container, port 8080 is exposed.
- ports: To round out the ability to access the container's Lucee instance, the exposed 8080 port is mapped to a port on the host (in this case, we are just using port 8080 there as well. If you want to use a different host port, change the left side of the colon to that port)
- depends_on: The "depends_on" setting is a method for telling a service it must wait on another to be ready before starting. In this case, the lucee service will wait on the sqlservice's healthcheck to be true to satisfy the dependency.
restart: This just tells the service to restart when it goes down due to a failure.
### .env
The .env file is where you define environment variables. Docker-compose and commandbox-dotenv can both read and load this file so any environment variables defined within will be not only be available to the SQL Server docker container form above but also available to your CF application.
The contents of this particular .env file is just defining details for setting up the sqlserver database and for telling the Lucee instance how to connect to it.
| 96.10101 | 469 | 0.790309 | eng_Latn | 0.999615 |
dc99176202d9c2492b07e6d773cf95d83faaf469 | 2,545 | md | Markdown | packages/redrunner/README.md | andyhasit/redrunner | ab86095021bd7c61438a19cb69f0b0dd35c2f2e3 | [
"MIT"
] | null | null | null | packages/redrunner/README.md | andyhasit/redrunner | ab86095021bd7c61438a19cb69f0b0dd35c2f2e3 | [
"MIT"
] | 14 | 2020-10-17T22:47:50.000Z | 2022-02-27T09:37:43.000Z | packages/redrunner/README.md | andyhasit/redrunner | ab86095021bd7c61438a19cb69f0b0dd35c2f2e3 | [
"MIT"
] | null | null | null | # RedRunner
*A small JavaScript framework with legs.*
## Overview
RedRunner is a JavaScript framework for building dynamic pages and apps.
It is in early development and not ready to use.
## Demo
There is a very minimal demo app, which you can run like so:
```
git clone git@github.com:andyhasit/redrunner.git
cd redrunner/demo
npm i
npm run start
```
You can also inspect the bundle size:
```
npm run build-prod
gzip dist/main.js
ls -lh dist
```
## Installation
#### Quick start
For new projects use the [demo](https://github.com/andyhasit/redrunner/tree/master/demo) as a starting point. For existing projects you can copy the relevant parts from **package.json** and **webpack.config.js** into your project.
Alternatively, you can install RedRunner manually with npm and follow the rest of the instructions here.
```
npm i -D redrunner
```
This will install **redrunner** and a compatible version of **babel-plugin-redrunner**.
#### Babel configuration
You must include the following plugins ***in this order*** in your babel configuration:
```json
"plugins": [
["babel-plugin-redrunner"],
["@babel/plugin-proposal-class-properties"]
]
```
The `babel-plugin-redrunner` transforms parts of your code and is required for RedRunner to work.
#### Bundling
I recommend using [webpack](https://webpack.js.org/) instead of alternatives as it gives you better control over source maps, which really helps for development.
#### Source maps
The babel plugin replaces each view's `__html__` field with generated code, and debugging is a lot easier if you can see that generated code.
With webpack you can set the config's `devtools` to something like `'eval-cheap-source-map'` which makes the source maps show the *transformed* code, but preserves the module separation which is nice. However this will include *all* transformations, so if you're transpiling to ES5 (which you probably do using a preset) then it will make it harder to track your own code.
One solution is not transpile to ES5 during development, meaning the only transformations you'll see are RedRunner's, which makes debugging a lot nicer.
The [demo's webpack.config.js](https://github.com/andyhasit/redrunner/tree/demo/webpack.config.js) file shows one way to achieve this.
You can read more about webpack's devtools option [here](https://webpack.js.org/configuration/devtool/).
## User Guide
This is coming, but so is Christmas.
## Contributing
Contributions welcome, see the [main repo](https://github.com/andyhasit/redrunner).
## License
MIT | 30.662651 | 372 | 0.757171 | eng_Latn | 0.993716 |
dc9b605e4036ff0746378c9847ccb08e2a769ba3 | 139 | md | Markdown | src/pages/2019-09-04post2/index.md | Code-God007/gatsby-first | 67ffc2a731a8ed780f9d34a399217f66982c34b7 | [
"MIT"
] | null | null | null | src/pages/2019-09-04post2/index.md | Code-God007/gatsby-first | 67ffc2a731a8ed780f9d34a399217f66982c34b7 | [
"MIT"
] | null | null | null | src/pages/2019-09-04post2/index.md | Code-God007/gatsby-first | 67ffc2a731a8ed780f9d34a399217f66982c34b7 | [
"MIT"
] | null | null | null | ---
path: "/post-two"
date: "2019-09-04"
title: "My Second Gatsby Post"
author: "Bunty Dhiman"
---
This is my second blog post in Gatsby.
| 15.444444 | 38 | 0.669065 | eng_Latn | 0.947466 |
dc9b6a5b84e6af7e4281bc23f874c69cee2b64a7 | 8,768 | md | Markdown | README.md | rosskyl/TileBoard | 6039e42f26d346f596ed80665a1fedf8af3eae0f | [
"MIT"
] | null | null | null | README.md | rosskyl/TileBoard | 6039e42f26d346f596ed80665a1fedf8af3eae0f | [
"MIT"
] | null | null | null | README.md | rosskyl/TileBoard | 6039e42f26d346f596ed80665a1fedf8af3eae0f | [
"MIT"
] | null | null | null | # TileBoard
This is a simple yet highly customizable dashboard for Home Assistant. The main goal of this project was to create simple dashboard with an easy way to edit and add functionality with minimum knowledge of javascript and html.
Should you have any ideas or questions please post them on home-assistant forum or create an issue on github.
## Links
https://community.home-assistant.io/t/new-dashboard-for-ha/57173
## Screenshots
![screen](https://community-home-assistant-assets.s3-us-west-2.amazonaws.com/optimized/3X/b/b/bb15cc5c10e22940698bbb7058d6ed732bb0017a_1_690x388.png)
![screen2](https://community-home-assistant-assets.s3-us-west-2.amazonaws.com/optimized/3X/1/f/1f9a1d7962f0a1335a2d06f352cb329f9d9444a5_1_690x388.png)
## How to use
* Pull/download repository
* Change `config.js`
* Open index.html directly in a web browser or move all of the files into www directory in HA's config path. Please note that dashboard was designed for local installations and you should never store dashboard files in www directory of HA if you are exposing it to the outside world sicne this would releal contetnt of `config.js` along with the password. As an alternative please consider serving files via Nginx where BasicAuth can be implemented.
## Configure
`config.js` will initialize global CONFIG object with following fields:
```js
var CONFIG = {
customTheme: null || 'transparent' || 'win95', // you can define it yourself
transition: 'animated' || 'animated_gpu' || 'simple', // transition between pages
tileSize: Number, // size of tile
tileMargin: Number, // margin between tiles
serverUrl: 'http://localhost:8123', // or custom
wsUrl: 'ws://localhost:8123/api/websocket',
password: null, //HA's password (if set)
debug: false, // mainly used for development, now redundant
pages: [], // list of Page objects, read about it below
events: [], // list of events, more info below
}
```
### Pages
Page object can have following fields:
```js
{
title: 'Page title', // not used atm
bg: 'images/bg1.jpg', // link to the background image (optional)
icon: 'mdi-home-outline', // icon of page (for the side menu)
head: 'head.html', // used for importing template as a header of the page (we currently use it to show time)
tileSize: Number, // optional field to override global value of tile size for current page
groups: [] // list of tile groups
}
```
### Tile groups
We divide tiles (cells) into groups on every page. Group object can have following fields:
```js
{
title: 'Group title',
width: 3, // Number of tiles (horizontally)
height: 4, // same but verticaly
items: [], // list of Tile objects
}
```
### Tiles
Tile Object. [Click here for some feal life examples](TILE_EXAMPLES.md)
```js
{
position: [1, 0], // [x, y] position inside group
type: TYPES.DEVICE_TRACKER, // type of a tile, please see the list of available types below
id: 'device_tracker.google_maps_228', // id of HA entity for the tile (e.g. switch.xyz)
// OPTIONAL
title: 'Tile title', // overrides default entity title
subtitle: 'Tile subtitle', // subtitle
width: 2, // overrides basic Tile size (1)
height: 2, //
states: {on: 'Enabled', off: 'Disabled'}, // object of states map, used for mapping of states
//state: false, // disables state in the Tile
//sub: String || Function, // custom state of Tile
icons: {on: "mdi-volume-high", off: "mdi-volume-off"}, // same as states but used for tiles with icons. You can use any of the material design icons from https://materialdesignicons.com/
bg: '@attributes.entity_picture', // link to the background image (available @/& prefixes, read about it below)
bgSuffix: '@attributes.entity_picture', // same as bg, but link appends to the serverUrl
bgOpacity: 0.5, // bg image opacity 0..1
theme: TYPES.SWITCH, // overrides tile theme
classes: ["-big-entity"], // appends class name to the tile element, useful for custom CSS styles
slides: [{}, {bg: 'images/slide.jpg'}], // slides in the background (atm up to 3 slides)
// type: SENSOR and several others
value: '&sensor.bathroom_temp.state', // overrides sensor value
unit: 'kWh', // override basic entity unit,
filter: function (value) {return value}, // function for filtering/formating entity value
//type: DEVICE_TRACKER
slidesDelay: 2, // delay before slides animation starts
map: 'google' || 'yandex', // map provider for showing position inside tile
//type: TEXT_LIST,
list: [{title: 'Kitchen temp', icon: 'mdi-home', value: '&sensor.kitchen_temp.state'}], // list of objects
//type: MEDIA_PLAYER
showSource: false || true, // show source picker (may not wont work properly atm)
// type: SLIDER
filter: function (value) {return value}, // same as filter in sensors
bottom: true, // puts slider to the bottom
slider: {} // object of slider, read about it below
// type: CAMERA or CAMERA_THUMBNAIL
bgSize: 'cover' || 'contain' || 'any css bg size',
filter: function (url) {return url}, // function for filtering camera url
fullscreen: {}, // object of type CAMERA/CAMERA_THUMBNAIL to show it in fullscreen
refresh: Number || Function, // number in milliseconds or function returns time, set interval for refreshing image
// type: LIGHT
sliders: [{}], // list of slider object (read about it below)
//type: WEATHER
fields: {}, // object of available weather fields (supported fields are below)
//classes: ['-compact'], // we also support -compact class for the WEATHER
}
```
At the moment following entity types have been implemented:
```js
var TYPES = {
DEVICE_TRACKER: 'device_tracker',
SCRIPT: 'script',
SENSOR: 'sensor',
SENSOR_ICON: 'sensor_icon',
SWITCH: 'switch',
GENERIC_ICON: 'generic_icon',
INPUT_BOOLEAN: 'input_boolean',
LIGHT: 'light',
TEXT_LIST: 'text_list',
INPUT_NUMBER: 'input_number',
INPUT_SELECT: 'input_select',
CAMERA: 'camera',
CAMERA_THUMBNAIL: 'camera_thumbnail',
SCENE: 'scene',
SLIDER: 'slider',
IFRAME: 'iframe',
DOOR_ENTRY: 'door_entry',
WEATHER: 'weather',
CLIMATE: 'climate',
MEDIA_PLAYER: 'media_player',
};
```
Example of slider config used for LIGHT:
```js
{
title: "Color temp",
field: "color_temp",
max: 588,
min: 153,
step: 15,
request: {
type: "call_service",
domain: "light",
service: "turn_on",
field: "color_temp"
}
}
```
Supported weather fields
```js
{
icon: '&sensor.dark_sky_icon.state',
iconMap: {'clear-day': 'clear', ...}, // statusKey: iconName (from images/weather-icons)
summary: '&sensor.dark_sky_summary.state',
apparentTemperature: '&sensor.dark_sky_apparent_temperature.state',
apparentTemperatureUnit: '&sensor.dark_sky_apparent_temperature.attributes.unit_of_measurement',
temperature: '&sensor.dark_sky_temperature.state',
temperatureUnit: '&sensor.dark_sky_temperature.attributes.unit_of_measurement',
precip: '&sensor.dark_sky_precip.state',
precipIntensity: '&sensor.dark_sky_precip_intensity.state',
precipIntensityUnit: '&sensor.dark_sky_precip_intensity.attributes.unit_of_measurement',
precipProbability: '&sensor.dark_sky_precip_probability.state',
precipProbabilityUnit: '&sensor.dark_sky_precip_probability.attributes.unit_of_measurement',
windSpeed: '&sensor.dark_sky_wind_speed.state',
windSpeedUnit: '&sensor.dark_sky_wind_speed.attributes.unit_of_measurement',
humidity: '&sensor.dark_sky_humidity.state',
humidityUnit: '&sensor.dark_sky_humidity.attributes.unit_of_measurement',
pollen: '&sensor.pollen_count.state',
pressure: '&sensor.dark_sky_pressure.state',
pressureUnit: '&sensor.dark_sky_pressure.attributes.unit_of_measurement',
}
```
### @/& Prefixes
As you may notice that we use @/& prefixes to get a value inside objects (entities).
@ is relative to the current entity (@attributes.friendly_name) and & is for global (&sensor.kitchen_temp.state). This may not work everywhere, but you may give it a go.
### Events
Events are fired when dashboard receives new state of the entity.
Firing event will cause the same action as the clicking on tile.
Useful for Door-entry systems etc.
```js
[
{
trigger: 'script.front_gate_bell_trigger',
state: 'off',
tile: { // invisible
type: TYPES.DOOR_ENTRY,
id: 'camera.front_door',
layout: {
camera: {...}, // camera layout
page: {},
tiles: []
}
}
}
]
```
## TODO
Where do I even begin?
## Contribution
Please feel free to post an issue or pull request and we will sort it out
## License
MIT License
| 36.381743 | 450 | 0.698791 | eng_Latn | 0.896239 |
dc9baffaa2986909ef967048e788ddf826d54b21 | 14,755 | md | Markdown | release-notes/5.0/5.0.15/5.0.212.md | Nexuscompute/core | ed47fb53452fdc36223c219166da9d7df260e100 | [
"MIT"
] | 1 | 2022-03-17T01:00:45.000Z | 2022-03-17T01:00:45.000Z | release-notes/5.0/5.0.15/5.0.212.md | AudiBass-Manager/core | b293dc541b9be7e1c38c6c955ec024229bee033d | [
"MIT"
] | null | null | null | release-notes/5.0/5.0.15/5.0.212.md | AudiBass-Manager/core | b293dc541b9be7e1c38c6c955ec024229bee033d | [
"MIT"
] | null | null | null | # .NET 5.0.212 SDK - March 8, 2022
The .NET SDK 5.0.212 release is available for download. The latest 5.0 release is always listed at [.NET 5.0 Releases](../README.md).
## Downloads
| | SDK Installer<sup>1</sup> | SDK Binaries<sup>1</sup> | Runtime Installer | Runtime Binaries | ASP.NET Core Runtime |Windows Desktop Runtime |
| --------- | :------------------------------------------: | :----------------------: | :---------------------------: | :-------------------------: | :-----------------: | :-----------------: |
| Windows | [x86][dotnet-sdk-win-x86.exe] \| [x64][dotnet-sdk-win-x64.exe] \| [Arm64][dotnet-sdk-win-arm64.exe] | [x86][dotnet-sdk-win-x86.zip] \| [x64][dotnet-sdk-win-x64.zip] \| [Arm64][dotnet-sdk-win-arm64.zip] | [x86][dotnet-runtime-win-x86.exe] \| [x64][dotnet-runtime-win-x64.exe] \| [Arm64][dotnet-runtime-win-arm64.exe] | [x86][dotnet-runtime-win-x86.zip] \| [x64][dotnet-runtime-win-x64.zip] \| [Arm64][dotnet-runtime-win-arm64.zip] | [x86][aspnetcore-runtime-win-x86.exe] \| [x64][aspnetcore-runtime-win-x64.exe] \|<br> [Hosting Bundle][dotnet-hosting-win.exe]<sup>2</sup> | [x86][windowsdesktop-runtime-win-x86.exe] \| [x64][windowsdesktop-runtime-win-x64.exe] \| [Arm64][windowsdesktop-runtime-win-arm64.exe] |
| macOS | [x64][dotnet-sdk-osx-x64.pkg] | [x64][dotnet-sdk-osx-x64.tar.gz] | [x64][dotnet-runtime-osx-x64.pkg] | [x64][dotnet-runtime-osx-x64.tar.gz] | [x64][aspnetcore-runtime-osx-x64.tar.gz] | - |<sup>1</sup>
| Linux | [Snap and Package Manager](../install-linux.md) | [x64][dotnet-sdk-linux-x64.tar.gz] \| [Arm][dotnet-sdk-linux-arm.tar.gz] \| [Arm32 Alpine][dotnet-sdk-linux-musl-arm.tar.gz] \| [Arm64][dotnet-sdk-linux-arm64.tar.gz] \| [x64 Alpine][dotnet-sdk-linux-musl-x64.tar.gz] | [Packages (x64)][linux-packages] | [x64][dotnet-runtime-linux-x64.tar.gz] \| [Arm][dotnet-runtime-linux-arm.tar.gz] \| [Arm64][dotnet-runtime-linux-arm64.tar.gz] \| [Arm32 Alpine][dotnet-runtime-linux-musl-arm.tar.gz] \|[Arm64 Alpine][dotnet-runtime-linux-musl-arm64.tar.gz] \| [x64 Alpine][dotnet-runtime-linux-musl-x64.tar.gz] | [x64][aspnetcore-runtime-linux-x64.tar.gz]<sup>1</sup> \| [Arm][aspnetcore-runtime-linux-arm.tar.gz]<sup>1</sup> \| [Arm64][aspnetcore-runtime-linux-arm64.tar.gz]<sup>1</sup> \| [x64 Alpine][aspnetcore-runtime-linux-musl-x64.tar.gz] | - | <sup>1</sup> |
| | [Checksums][checksums-sdk] | [Checksums][checksums-sdk] | [Checksums][checksums-runtime] | [Checksums][checksums-runtime] | [Checksums][checksums-runtime] | [Checksums][checksums-runtime]
</br>
1. Includes the .NET Runtime and ASP.NET Core Runtime
2. For hosting stand-alone apps on Windows Servers. Includes the ASP.NET Core Module for IIS and can be installed separately on servers without installing .NET Runtime.
</br>
The .NET SDK includes a matching updated .NET Runtime. Downloading the Runtime or ASP.NET Core packages is not needed when installing the SDK.
You can check your .NET SDK version by running the following command. The example version shown is for this release.
```console
$ dotnet --version
5.0.212
```
Visit [.NET Documentation](https://docs.microsoft.com/dotnet/core/) to learn about .NET, for building many different types of applications.
## Docker Images
The [.NET Docker images](https://hub.docker.com/_/microsoft-dotnet) have been updated for this release. The [.NET Docker samples](https://github.com/dotnet/dotnet-docker/blob/main/samples/README.md) show various ways to use .NET and Docker together. You can use the following command to try running the latest .NET 5.0 release in containers:
```console
docker run --rm mcr.microsoft.com/dotnet/samples
```
The following repos have been updated.
* [dotnet/sdk](https://hub.docker.com/_/microsoft-dotnet-sdk/): .NET SDK
* [dotnet/aspnet](https://hub.docker.com/_/microsoft-dotnet-aspnet/): ASP.NET Core Runtime
* [dotnet/runtime](https://hub.docker.com/_/microsoft-dotnet-runtime/): .NET Runtime
* [dotnet/runtime-deps](https://hub.docker.com/_/microsoft-dotnet-runtime-deps/): .NET Runtime Dependencies
* [dotnet/samples](https://hub.docker.com/_/microsoft-dotnet-samples/): .NET Samples
## Visual Studio Compatibility
You need [Visual Studio 16.9](https://visualstudio.microsoft.com) or later to use .NET 5.0 on Windows. On macOS, you need the latest version of [Visual Studio for Mac](https://visualstudio.microsoft.com/vs/mac/). The [C# extension](https://code.visualstudio.com/docs/languages/dotnet) for [Visual Studio Code](https://code.visualstudio.com/) supports .NET 5.0 and C# 9.
## Feedback
Your feedback is important and appreciated. We've created an issue at [dotnet/core #7259](https://github.com/dotnet/core/issues/7259) for your questions and comments.
[blob-runtime]: https://dotnetcli.blob.core.windows.net/dotnet/Runtime/
[blob-sdk]: https://dotnetcli.blob.core.windows.net/dotnet/Sdk/
[release-notes]: https://github.com/dotnet/core/blob/main/release-notes/5.0/5.0.15/5.0.15.md
[checksums-runtime]: https://dotnetcli.blob.core.windows.net/dotnet/checksums/5.0.15-sha.txt
[checksums-sdk]: https://dotnetcli.blob.core.windows.net/dotnet/checksums/5.0.15-sha.txt
[linux-install]: https://docs.microsoft.com/dotnet/core/install/linux
[linux-setup]: https://github.com/dotnet/core/blob/main/Documentation/linux-setup.md
[dotnet-blog]: https://devblogs.microsoft.com/dotnet/march-2022-updates/
[linux-packages]: ../install-linux.md
[//]: # ( Runtime 5.0.15)
[dotnet-runtime-linux-arm.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/e7ee90c8-c54f-4793-a405-1050f365240c/ecc2da4c6d4b3e6611bda5e5a5bda9af/dotnet-runtime-5.0.15-linux-arm.tar.gz
[dotnet-runtime-linux-arm64.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/28ff2ede-6f43-4486-b2ad-dd2bde9ea2f7/9267ee1e9941196b8d45c162fa1bcb5d/dotnet-runtime-5.0.15-linux-arm64.tar.gz
[dotnet-runtime-linux-musl-arm.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/c892c25d-1613-4569-bab0-a96702a1f39e/d083c3164e21f0f234eac60e07137e72/dotnet-runtime-5.0.15-linux-musl-arm.tar.gz
[dotnet-runtime-linux-musl-arm64.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/7698b563-f440-44c7-b959-b65e4ebc9cc6/701c84aa012c063072283a4d6bdda6aa/dotnet-runtime-5.0.15-linux-musl-arm64.tar.gz
[dotnet-runtime-linux-musl-x64.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/dd5ec323-893c-4dd2-848a-b6c9559ae178/204ed4d65b9eade19ea74f982e8388ae/dotnet-runtime-5.0.15-linux-musl-x64.tar.gz
[dotnet-runtime-linux-x64.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/546d50b2-d85c-433f-b13b-b896f1bc1916/17d7bbb674bf67c3d490489b20a437b7/dotnet-runtime-5.0.15-linux-x64.tar.gz
[dotnet-runtime-osx-x64.pkg]: https://download.visualstudio.microsoft.com/download/pr/aae7783c-c033-4308-ab45-7edf78d8945b/bef03269b50362c36a56a6f21693dd26/dotnet-runtime-5.0.15-osx-x64.pkg
[dotnet-runtime-osx-x64.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/cb5d4972-5eb6-4e53-b422-86eae8d0162f/3fc69c296fad1c9867ce7873c1d90e8c/dotnet-runtime-5.0.15-osx-x64.tar.gz
[dotnet-runtime-win-arm64.exe]: https://download.visualstudio.microsoft.com/download/pr/3e2738f0-0bc5-40ff-b0e0-52830cce27f3/06352486fb5dbc29703be3766f87e849/dotnet-runtime-5.0.15-win-arm64.exe
[dotnet-runtime-win-arm64.zip]: https://download.visualstudio.microsoft.com/download/pr/a17bbc0d-2f4a-4237-98e7-c994ddf1b23e/f4599fc0d88e8831a706849b638e9d02/dotnet-runtime-5.0.15-win-arm64.zip
[dotnet-runtime-win-x64.exe]: https://download.visualstudio.microsoft.com/download/pr/744a5a4b-c931-4365-9762-5154e999af13/51553f5bfe24e1f7d54abbfbb94d0c4c/dotnet-runtime-5.0.15-win-x64.exe
[dotnet-runtime-win-x64.zip]: https://download.visualstudio.microsoft.com/download/pr/188c7f17-73ca-477d-96a5-5bcb0970e751/71cc137bec1df3a712ce04fe92aa78dd/dotnet-runtime-5.0.15-win-x64.zip
[dotnet-runtime-win-x86.exe]: https://download.visualstudio.microsoft.com/download/pr/5a3bc200-475f-46df-9e80-6955c5fa191d/d49c018dbb28af1182655cbed7abd620/dotnet-runtime-5.0.15-win-x86.exe
[dotnet-runtime-win-x86.zip]: https://download.visualstudio.microsoft.com/download/pr/8d7983ed-db89-4247-b7e7-151fcbbfa18e/e34058f89799307e3c60771722b23a4d/dotnet-runtime-5.0.15-win-x86.zip
[//]: # ( WindowsDesktop 5.0.15)
[windowsdesktop-runtime-win-arm64.exe]: https://download.visualstudio.microsoft.com/download/pr/40bf47cb-146b-479f-a660-a85c1c9c469d/96f8b74576dd57d65e1726c48e61734d/windowsdesktop-runtime-5.0.15-win-arm64.exe
[windowsdesktop-runtime-win-x64.exe]: https://download.visualstudio.microsoft.com/download/pr/b1902c77-e022-4b3e-a01a-e8830df936ff/09d0957435bf8c37eae11b4962d4221b/windowsdesktop-runtime-5.0.15-win-x64.exe
[windowsdesktop-runtime-win-x86.exe]: https://download.visualstudio.microsoft.com/download/pr/51b9e073-e0db-4c82-bdb8-47a9c39896e2/d676baa1bbb643f50c6b41ca64110d2f/windowsdesktop-runtime-5.0.15-win-x86.exe
[//]: # ( ASP 5.0.15)
[aspnetcore-runtime-linux-arm.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/49f495c7-6d7d-44d6-94b3-ce3c633746f8/824a679ce2f0533c80ec466116f079ee/aspnetcore-runtime-5.0.15-linux-arm.tar.gz
[aspnetcore-runtime-linux-arm64.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/3ff16615-7afa-4741-b0ee-d86c922cb16f/958f0ea0a0248668413fd3920a1f4057/aspnetcore-runtime-5.0.15-linux-arm64.tar.gz
[aspnetcore-runtime-linux-musl-arm.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/d0878c21-6570-4232-9095-942907cd1e50/22c306af1c4bb271bf776347737ff71a/aspnetcore-runtime-5.0.15-linux-musl-arm.tar.gz
[aspnetcore-runtime-linux-musl-arm64.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/65510b46-6757-47ee-9609-dd2b1f664524/325d150001b00258e5e2b2903ef93903/aspnetcore-runtime-5.0.15-linux-musl-arm64.tar.gz
[aspnetcore-runtime-linux-musl-x64.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/832d30fb-5a14-40a5-b81c-2d354ebf37c8/6ace2bc70718075ad06649574c0148c8/aspnetcore-runtime-5.0.15-linux-musl-x64.tar.gz
[aspnetcore-runtime-linux-x64.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/f1f37dfc-3f5b-49c3-be73-aa0839066e06/3dfbd1c2b1cf93f085db7ead99d76051/aspnetcore-runtime-5.0.15-linux-x64.tar.gz
[aspnetcore-runtime-osx-x64.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/b925ee34-662b-4597-8003-0e6f23bab46c/2d5d21e290d90c094b2d25a069e34957/aspnetcore-runtime-5.0.15-osx-x64.tar.gz
[aspnetcore-runtime-win-arm64.zip]: https://download.visualstudio.microsoft.com/download/pr/495009ce-ea17-47df-b6b7-06f1b5bfd80e/a398530d27de9c1d18258cbeff62c70d/aspnetcore-runtime-5.0.15-win-arm64.zip
[aspnetcore-runtime-win-x64.exe]: https://download.visualstudio.microsoft.com/download/pr/69b4d158-fadb-46d0-8b28-6c4ba2968926/c4d93beeb194b73c134b3c2824499467/aspnetcore-runtime-5.0.15-win-x64.exe
[aspnetcore-runtime-win-x64.zip]: https://download.visualstudio.microsoft.com/download/pr/ceba0d71-e720-4ddd-a95d-a7e99a25ba38/280e258ef461c9bbc3e418f5a15fbebe/aspnetcore-runtime-5.0.15-win-x64.zip
[aspnetcore-runtime-win-x86.exe]: https://download.visualstudio.microsoft.com/download/pr/df529041-2a11-440c-98a4-650606c07ac0/dec1cbf6e76b45dbcff75e19c50ca485/aspnetcore-runtime-5.0.15-win-x86.exe
[aspnetcore-runtime-win-x86.zip]: https://download.visualstudio.microsoft.com/download/pr/87df4c6d-f98c-4e85-96fa-0fb20c6fd710/f9dbcb1a127292a07656e60553effeab/aspnetcore-runtime-5.0.15-win-x86.zip
[dotnet-hosting-win.exe]: https://download.visualstudio.microsoft.com/download/pr/d7d20e41-4bee-4f8a-a32c-278f0ef8ce1a/f5a0c59b42d01b9fc2115615c801866c/dotnet-hosting-5.0.15-win.exe
[//]: # ( SDK 5.0.212)
[dotnet-sdk-linux-arm.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/f72db293-cfc3-454e-8b93-4cadb954c6c4/536a6267e3b0bfbac873d74656989894/dotnet-sdk-5.0.212-linux-arm.tar.gz
[dotnet-sdk-linux-arm64.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/5e719ce4-2608-49f9-a35f-538c0b381d00/41b3b61550c9ea924e3d419dd97c7558/dotnet-sdk-5.0.212-linux-arm64.tar.gz
[dotnet-sdk-linux-musl-arm.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/36fa5cc7-3697-40b4-aa3a-77a2a509c346/df0de70bd36a18d4b0cc16024e2acd9e/dotnet-sdk-5.0.212-linux-musl-arm.tar.gz
[dotnet-sdk-linux-musl-arm64.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/c29d3d64-668c-4c47-83a0-2ef6161792e5/a3ab3444e47a6ecbaa7e7be25b1dbe3c/dotnet-sdk-5.0.212-linux-musl-arm64.tar.gz
[dotnet-sdk-linux-musl-x64.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/ca3f6d28-3a31-4295-b834-4176db1e0058/0e46c68f652ce4e284416e121fa4bd41/dotnet-sdk-5.0.212-linux-musl-x64.tar.gz
[dotnet-sdk-linux-x64.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/a76503a6-9118-44b9-a46d-68a533562cf3/a22fe0af464b7e9f8a9f83f3dfc5b060/dotnet-sdk-5.0.212-linux-x64.tar.gz
[dotnet-sdk-linux-x64.zip]: https://download.visualstudio.microsoft.com/download/pr/7ae0ce96-e239-4dec-b233-2288c5245a50/4628fbb2f04dd13e19e0fc0b3078464d/dotnet-sdk-5.0.212-linux-x64.zip
[dotnet-sdk-osx-x64.pkg]: https://download.visualstudio.microsoft.com/download/pr/71290b46-5064-4b13-9957-45683f4abffb/02a1e7517c5872c26d8b57c436c43fa8/dotnet-sdk-5.0.212-osx-x64.pkg
[dotnet-sdk-osx-x64.tar.gz]: https://download.visualstudio.microsoft.com/download/pr/f3d02f62-4632-4c24-bd18-48cd3dc890ca/6d974e45890f1809aafb537773325bb7/dotnet-sdk-5.0.212-osx-x64.tar.gz
[dotnet-sdk-win-arm64.exe]: https://download.visualstudio.microsoft.com/download/pr/ba206545-8f99-4794-a00c-057268ba74db/8effc131b9088641e516255839d97413/dotnet-sdk-5.0.212-win-arm64.exe
[dotnet-sdk-win-arm64.zip]: https://download.visualstudio.microsoft.com/download/pr/ea4e7fb2-a893-4bf2-b9a9-9c1c3887eb2a/186edc2364152e2bdb1966eef05cb4ab/dotnet-sdk-5.0.212-win-arm64.zip
[dotnet-sdk-win-x64.exe]: https://download.visualstudio.microsoft.com/download/pr/5008a042-9887-4864-8bfb-2e1304f30573/18e2f42c4c67b97e935ee522fcb89f43/dotnet-sdk-5.0.212-win-x64.exe
[dotnet-sdk-win-x64.zip]: https://download.visualstudio.microsoft.com/download/pr/ec6c6946-e998-4355-9249-55a1bb8bafc6/ac27f4e125cfc2c2297c238ee16e97fb/dotnet-sdk-5.0.212-win-x64.zip
[dotnet-sdk-win-x86.exe]: https://download.visualstudio.microsoft.com/download/pr/18e2cfa1-32d3-4c43-b7b7-8b8db4685154/43c9711f5d473d4222c8b4fbc4a6a0d4/dotnet-sdk-5.0.212-win-x86.exe
[dotnet-sdk-win-x86.zip]: https://download.visualstudio.microsoft.com/download/pr/1e9f3343-8c11-4444-b35a-f05f9c9e4e28/68b3e46fa9cb64cc2500c4afb6d08e55/dotnet-sdk-5.0.212-win-x86.zip
| 116.181102 | 871 | 0.766249 | yue_Hant | 0.28792 |
dc9caecf5c27e05a4ce9c1e10358d57b15e09e38 | 3,418 | md | Markdown | README.md | angyvolin/vagrant-webdev | 65332e0424cba69b198ef9a38359a234600d4f4e | [
"MIT"
] | null | null | null | README.md | angyvolin/vagrant-webdev | 65332e0424cba69b198ef9a38359a234600d4f4e | [
"MIT"
] | null | null | null | README.md | angyvolin/vagrant-webdev | 65332e0424cba69b198ef9a38359a234600d4f4e | [
"MIT"
] | 1 | 2020-01-21T03:35:20.000Z | 2020-01-21T03:35:20.000Z | vagrant-webdev
==============
This set of files allows me to deploy virtual machine for web development (mostly php) in a few minutes.
Installation
------------
* Install [VirtualBox](https://www.virtualbox.org/) and [Vagrant](http://www.vagrantup.com/)
```bash
$ sudo apt-get install virtualbox vagrant
```
* Go to directory with your projects and clone this repository. For example:
```bash
$ cd ~/projects
$ git clone https://github.com/vasylchenko/vagrant-webdev.git
```
* __Optional:__ Download and add vagrant box by typing ```vagrant box add ubuntu/trusty32```. Othrewise box will be downloaded automatically during the first virtual machine setup.
* Go to repository directory and type
```bash
$ vagrant up
```
* That's all! You can ssh to VM by ```vagrant ssh```. Directory with your projects will be available in virtual machine as _/vagrant_.
### Importing config files
Some config files are imported from host system during provisioning. These files are listed in Vagrantfile in ```config_files``` variable: _key_ is the _name of the file_ and _value_ is the _destination path_ in guest file system. Destination path can include subfolders e.g. ```"foo.txt" => "/path/to/bar.cfg"```. These files are imported from _data/configs_ folder. You can use links in _data/configs_ folder to copy config files of your host system to guest. For example:
```bash
# cd to the root of vagrant-webdev project folder
$ ln -s ~/.gitconfig data/configs/.gitconfig
```
Installed software
------------------
* nginx
* php (php-cli, php-fpm) + xdebug
* mysql
* phpMyAdmin
* memcached
* mongodb
* redis
* [composer](https://getcomposer.org/)
* [php-cs-fixer](https://github.com/fabpot/PHP-CS-Fixer)
* git
* zsh + [oh-my-zsh](https://github.com/robbyrussell/oh-my-zsh)
* vim, mc, htop, tmux, curl
* dnsmasq (for service needs)
Virtual hosts
---------------
Dynamic virtual hosts are setted so you don't have to manually configure nginx to create new hosts. It's enough to create _example.dev_ directory (any directory ending with _.dev_) in synced folder either in your host or guest system and _http://example.dev_ host will be available in guest machine.
This is possible through the use of [dnsmasq](http://www.thekelleys.org.uk/dnsmasq/doc.html) on VM.
Web server will search index files sequentially in _public_, _web_, _www_, _httpdocs_ subfolders as well as just in main project folder. So you can choose any type of server root directory:
* /public/index.(html|htm|php)
* /web/index.(html|htm|php)
* /www/index.(html|htm|php)
* /httpdocs/index.(html|htm|php)
* /index.(html|htm|php)
To make new host accessible in browser of host system you have to add its hostname to host's _/etc/hosts_ file:
```bash
$ sudo echo '192.168.56.10 example.dev' >> /etc/hosts
```
It's possible to automate this by configuring dnsmasq also on host machine or using some vagrant plugins (see __Todo__ section below)
Debugging
---------
Under development... (see __Todo__ section below)
Todo
----
* Add new software:
* phpenv (or phpbrew or something else)
* Configure xdebug for remote debugging from host machine
* Setup cron script to backup databases to host machine
* Try vagrant plugins to automatically update host's _/etc/hosts_ file (I don't want to have dnsmasq on host system):
* https://github.com/smdahlen/vagrant-hostmanager
* https://github.com/cogitatio/vagrant-hostsupdater | 38.840909 | 474 | 0.731129 | eng_Latn | 0.945568 |
dc9cb664eb51b383b4bf4cf5ba59993422d46b56 | 38 | md | Markdown | Code/Visualization/README.md | SalimFares4/Image-Segmentation | 787530a2db9ebd5881eaf042317411ce432fcc71 | [
"MIT"
] | null | null | null | Code/Visualization/README.md | SalimFares4/Image-Segmentation | 787530a2db9ebd5881eaf042317411ce432fcc71 | [
"MIT"
] | 3 | 2021-12-16T10:09:36.000Z | 2022-02-11T17:53:31.000Z | Code/Visualization/README.md | SalimFares4/Image-Segmentation | 787530a2db9ebd5881eaf042317411ce432fcc71 | [
"MIT"
] | null | null | null | These files are to visualize our data
| 19 | 37 | 0.815789 | eng_Latn | 0.999988 |
dc9d3a58426f17f3f5971b6bd8694c523fb11629 | 9,134 | md | Markdown | content/visual-testing-handbook/react/es/automate.md | 1myourman/learnstorybook.com | d62bac347e6ef72a1c2d8bd36062db33c2b5227b | [
"MIT"
] | 2,192 | 2018-07-16T07:02:48.000Z | 2022-03-31T23:06:04.000Z | content/visual-testing-handbook/react/es/automate.md | 1myourman/learnstorybook.com | d62bac347e6ef72a1c2d8bd36062db33c2b5227b | [
"MIT"
] | 430 | 2018-07-15T04:21:35.000Z | 2022-03-22T20:49:51.000Z | content/visual-testing-handbook/react/es/automate.md | 1myourman/learnstorybook.com | d62bac347e6ef72a1c2d8bd36062db33c2b5227b | [
"MIT"
] | 418 | 2018-07-16T02:36:27.000Z | 2022-03-19T19:08:29.000Z | ---
title: 'Automatizar las pruebas visuales'
tocTitle: 'Automatizar'
description: 'Automatice las pruebas visuales para detectar regresiones'
commit: 'd7daf97'
---
En el curso natural del desarrollo, los errores son inevitables. La automatización de pruebas visuales utiliza máquinas para detectar cambios en la apariencia de la interfaz de usuario para que los revise un humano.
En pocas palabras, se toma una instantánea de cada variación de los componentes. Esto sirve como "línea base" de la prueba visual. Con cada commit, se capturan nuevas instantáneas y luego se comparan píxel por píxel con las líneas base. Si hay cambios en la interfaz de usuario, se le notificará para que revise si son errores o actualizaciones intencionales.
<video autoPlay muted playsInline loop >
<source
src="/visual-testing-handbook/automate-visual-workflow-test-diff.mp4"
type="video/mp4"
/>
</video>
## Configure un repositorio en GitHub
Antes de comenzar, nuestro código local `CommentList` debe sincronizarse con un servicio de control de versiones remoto.
Vaya a GitHub y cree un nuevo repositorio para el proyecto [aquí](https://github.com/new). Nombre el repositorio "commentlist", igual que nuestro proyecto local.
![Configure el repositorio de 'comment list' en GitHub](/visual-testing-handbook/commentlist-gh-repo-optimized.png)
Luego, siga las instrucciones para configurar el repositorio. Reemplace `your-username` con el nombre de tu cuenta de GitHub.
```
git init
git add .
git commit -m "first commit"
git branch -M main
git remote add origin https://github.com/your-username/commentlist.git
git push -u origin main
```
## Configure Chromatic
Usaremos los mantenedores de Chromatic by Storybook para demostrar el proceso de captura de imágenes. Vaya a [chromatic.com](https://www.chromatic.com/) y regístrese con su cuenta de GitHub.
![Inicio de sesión de Chromatic](/visual-testing-handbook/chromatic-sign-in-optimized.png)
Desde allí, elija el repositorio que acaba de crear.
<video autoPlay muted playsInline loop>
<source src="/visual-testing-handbook/chromatic-create-project-optimized.mp4"
type="video/mp4" />
</video>
Las pruebas de interfaz de usuario capturan una instantánea de cada historia en un entorno de navegador en la nube. Siempre que envíe código, Chromatic genera un nuevo conjunto de instantáneas y las compara con las líneas base. Si hay cambios visuales, verifica si son intencionales.
### Establezca líneas base
Agregue Chromatic como paquete de desarrollo a su proyecto:
```shell
yarn add -D chromatic
```
Una vez que termine de instalarse, tendremos todo lo que necesitamos. Ahora es un excelente momento para confirmar y enviar los cambios al repositorio remoto.
```shell
git add .
git commit -m "Added Chromatic"
git push
```
Construya y publique nuestro Storybook con el comando `chromatic`. No olvide reemplazar el <code> project-token </code> con un suministro de Chromatic en el sitio web.
```shell
yarn chromatic --project-token=<project-token>
```
![Ejecutando Chromatic](/intro-to-storybook/chromatic-manual-storybook-console-log.png)
Con este comando, usted publicó su Storybook, activó Chromatic para capturar una instantánea de cada historia (en un navegador en la nube estandarizado) y estableció la instantánea como línea base.
Las compilaciones posteriores generarán nuevas instantáneas que se comparan con las líneas base existentes para detectar cambios en la interfaz de usuario.
![Líneas base en Chromatic](/visual-testing-handbook/commentlist-accepted-baselines-optimized.png)
### Ejecute las pruebas
Cada vez que una pull request contiene cambios en la interfaz de usuario, grandes o pequeños, es útil ejecutar las pruebas visuales. Chromatic compara nuevas instantáneas con líneas base existentes de compilaciones anteriores.
Hagamos un pequeño cambio en la interfaz de usuario para demostrar este concepto.
```shell
git checkout -b change-commentlist-outline
```
Ajuste el componente `CommentList`
```diff:title=src/components/CommentList.js
import React from 'react';
import PropTypes from 'prop-types';
import styled, { createGlobalStyle } from 'styled-components';
const CommentListDiv = styled.div`
font-family: "Nunito Sans", "Helvetica Neue", Helvetica, Arial, sans-serif;
color: #333;
display: inline-block;
vertical-align: top;
width: 265px;
`;
const CommentItemDiv = styled.div`
font-size: 12px;
line-height: 14px;
clear: both;
height: 48px;
margin-bottom: 10px;
box-shadow: rgba(0, 0, 0, 0.2) 0 0 10px 0;
background: linear-gradient(
120deg,
rgba(248, 248, 254, 0.95),
rgba(250, 250, 250, 0.95)
);
border-radius: 48px;
+ border: 4px solid red;
+ font-weight: bold;
`;
const AvatarDiv = styled.div`
float: left;
position: relative;
overflow: hidden;
height: 48px;
width: 48px;
margin-right: 14px;
background: #dfecf2;
border-radius: 48px;
`;
const AvatarImg = styled.img`
position: absolute;
height: 100%;
width: 100%;
left: 0;
top: 0;
z-index: 1;
background: #999;
`;
const MessageDiv = styled.div`
overflow: hidden;
padding-top: 10px;
padding-right: 20px;
`;
const AuthorSpan = styled.span`
font-weight: bold;
`;
const TextSpan = styled.span``;
const GlobalStyle = createGlobalStyle`
@import url('https://fonts.googleapis.com/css?family=Nunito+Sans:400,400i,800');
`;
export default function CommentList({ loading, comments, totalCount }) {
if (loading) {
return <div>loading</div>;
}
if (comments.length === 0) {
return <div>empty</div>;
}
return (
<>
<GlobalStyle/>
<CommentListDiv>
{comments.map(({ text, author: { name, avatar } }) => (
<CommentItemDiv key={`comment_${name}`}>
<AvatarDiv>
<AvatarImg src={avatar} />
</AvatarDiv>
<MessageDiv>
<AuthorSpan>{name}</AuthorSpan> <TextSpan>{text}</TextSpan>
</MessageDiv>
</CommentItemDiv>
))}
</CommentListDiv>
</>
);
}
CommentList.propTypes = {
/**
* Is the component in the loading state
*/
loading: PropTypes.bool,
/**
* Total number of comments
*/
totalCount: PropTypes.number,
/**
* List of comments
*/
comments: PropTypes.arrayOf(
PropTypes.shape({
text: PropTypes.string,
author: PropTypes.shape({
name: PropTypes.string,
avatar: PropTypes.string,
}),
})
),
};
CommentList.defaultProps = {
loading: false,
totalCount: 10,
comments: [],
};
```
Confirme el cambio, envíelo al repositorio y ejecute Chromatic:
```shell
git commit -am "make CommentList sparkle"
git push -u origin change-commentlist-outline
yarn chromatic --project-token=<project-token>
```
Abra una pull request para la nueva rama en su repositorio de GitHub.
![Comment list pull requested abierta en GitHub](/visual-testing-handbook/commentlist-gh-pullrequest-optimized.png)
Chromatic detectó cambios en la interfaz de usuario para que los revises. Vaya a las verificaciones de la PR y haga clic en "🟡 UI Test" para ver la lista de cambios. La compilación se marcará como “unreviewed”, o “no revisada”, y los cambios se enumerarán en la tabla “Tests”.
![Nuevos cambios publicados en Chromatic](/visual-testing-handbook/commentlist-ui-tests-chromatic-optimized.png)
### Revise los cambios
La automatización de las pruebas visuales garantiza que los componentes no cambien por accidente. Pero aún depende de los desarrolladores determinar si los cambios son intencionales o no.
Si un cambio es intencional, aceptamos la instantánea para actualizar la línea base. Eso significa que las pruebas futuras se compararán con el componente `CommentList` con bordes rojos.
Si un cambio no es intencional, es necesario corregirlo. Nuestro diseñador cree que el ✨majestuoso✨ borde rojo es horripilante, así que deshagámoslo.
![Chromatic pantalla de prueba](/visual-testing-handbook/chromatic-test-screen-optimized.png)
### Fusionar los cambios
Una vez que los errores se corrijan y las líneas base estén actualizadas, estás listo para fusionar el código nuevamente en la rama de destino. Chromatic transferirá las líneas base aceptadas entre las ramas para que solo tenga que aceptar las líneas base una vez.
![flujo de trabajo de las pruebas visuales](/visual-testing-handbook/workflow-uitest.png)
### Integración continua
Ejecutar este comando localmente cada vez que hacemos un cambio es tedioso. Los equipos de producción activan ejecuciones de pruebas visuales cuando se inserta el código en su CI/CD pipeline. Si bien no lo configuraremos en este tutorial, puede obtener más información en [Chromatic's CI docs](https://www.chromatic.com/docs/ci).
## Su viaje comienza
El manual de pruebas visuales muestra cómo los equipos de frontend líderes prueban la apariencia de la interfaz de usuario. Es una forma práctica de verificar que la interfaz de usuario coincide con el diseño previsto y permanece libre de errores con el tiempo.
Esperamos que esta guía inspire su propia estrategia de prueba visual. El capítulo final concluye con el código de muestra completo y recursos útiles.
| 34.996169 | 359 | 0.747208 | spa_Latn | 0.939779 |
dc9de07f24e225fc246c9047bc06ac2c171ac42c | 304 | md | Markdown | CHANGELOG.md | francescacosta/francescacosta2 | 52299bd88c8b38eb2c29ab7c34cbe984328a2700 | [
"MIT"
] | null | null | null | CHANGELOG.md | francescacosta/francescacosta2 | 52299bd88c8b38eb2c29ab7c34cbe984328a2700 | [
"MIT"
] | null | null | null | CHANGELOG.md | francescacosta/francescacosta2 | 52299bd88c8b38eb2c29ab7c34cbe984328a2700 | [
"MIT"
] | null | null | null | ## 2.0.2 - 2018-11-06
Add SVG icon component
It allows you to change color of the icon in css with background of .SVGIcon--icon
## 2.0.1.1 - 2018-10-28
add accordion
## 2.0.1 - 2018-10-18
Configure Uploadcare widget
Image component setup
Add google maps
## 2.0.0 - 2018-10-15
Init Yellow cake
| 16 | 82 | 0.697368 | eng_Latn | 0.773281 |
dc9f6ae08a33bc0391f444b45103b86a1dde57f3 | 3,424 | md | Markdown | deliverables/m1/reflection.md | Rbeck200/CS112-ImageManipulator | e45c9c3448a4302fff09dd036a8f1bbc40d9f31f | [
"Apache-2.0"
] | null | null | null | deliverables/m1/reflection.md | Rbeck200/CS112-ImageManipulator | e45c9c3448a4302fff09dd036a8f1bbc40d9f31f | [
"Apache-2.0"
] | null | null | null | deliverables/m1/reflection.md | Rbeck200/CS112-ImageManipulator | e45c9c3448a4302fff09dd036a8f1bbc40d9f31f | [
"Apache-2.0"
] | null | null | null | When I was working on my project the biggest problem I ran into was that my program wouldnt check to see if the
stream actually had numbers. It took me a while to find out where the issue was coming from, but in the end I
realized that pixels were being added, I could tell because there were more rows then the height said their
should be. I figured out that because of my while loop in the open function of my PpmDoc class was reading only
the space at the end of the file since it read a space it determined that the line was still good and would
create a new pixel, which I would automatically assign to 0,0,0, and therefore add it without really meaning to.
So i reintialized the pixels to start at -1,-1,-1 and then run a boolean check on each pixel to see if it was
assigned values correctly, and on the ones that it wouldn't change because it was just a space my bool would
return false and therefore not add it to my vector of pixels. An issue that I couldnt solve is that I couldnt
a way to do this error check by analyzing what was next in the istringstream. When I tried to see if what was
next in the stream was just a space Visual Studio kept telling me that i was trying to compare an int to a
constant char, and then I also couldn't use the .length() function but then I found I wasn't able to use the
== operator. I started to look through the istringstream documentation and thought that its .peek() function
would be my answer but again i wasnt able to use the == operator, and I was comparing an int to a constant char.
My advice to students doing this assignment in the future is that you should use you breakpoints and rigid
testing. My break points kept me sane while I was going through my code and I would go through peices of my code
and when I ran into a problem i would slowly go through my code part by part until I could figure out what was
wrong and find my solution. Going through your code bit by bit allows you to find the problem and see why it
was occuring and then gives you a general idea on how to solve it. The thing i enjoy the most abou these
assignments is that I get to think about the logic away from the computer, I have less stress on me, and I
feel it is because I can just think about the logic abstactly without having to also devote some of my brain
power at that moment to also type it up. I feel like I can think more freely and bounce ideas around in my
head. The most challenging part of the assignment was trying to figure out how to get rid of the added pixel on
every line, I kept trying to check the istringstream before the pixel was made, because I though that it would
be less computation, but I had to resort to checking the pixel after it was done being created to see if it was
done properly. Currently the most difficult part for me to understand is why the istringstream would be
represented as an int, I really don't know why that is, it probably has something to do with the actual
characters in the line, but I couldn't find anything so I'm not that sure. I honestly dont think you really need
to do anything differently. With the notes that we do in the lecture, to the book, and google this assignment was
fairly straightforward, and if big problems arrise then we can come and talk to you to see what is happening. So
no I think that everything is fine because you give use all of the tools to go out there and figure it out for
ourselves. | 107 | 113 | 0.785047 | eng_Latn | 1.000005 |
dc9faeb20aec25911b74be2928d25facdc2ea72c | 2,634 | md | Markdown | docs/editors-guide/patterns/lipoma.md | JArgasinska/git | 6308d5f05a97152bf5f95dd161c52a115fd2570a | [
"CC-BY-4.0"
] | 126 | 2018-04-03T17:16:43.000Z | 2022-03-28T22:56:48.000Z | docs/editors-guide/patterns/lipoma.md | JArgasinska/git | 6308d5f05a97152bf5f95dd161c52a115fd2570a | [
"CC-BY-4.0"
] | 2,609 | 2018-04-03T19:24:32.000Z | 2022-03-31T18:59:03.000Z | docs/editors-guide/patterns/lipoma.md | JArgasinska/git | 6308d5f05a97152bf5f95dd161c52a115fd2570a | [
"CC-BY-4.0"
] | 28 | 2018-07-27T14:40:24.000Z | 2022-02-14T22:40:05.000Z | # lipoma disease has location X
[http://purl.obolibrary.org/obo/mondo/patterns/lipoma.yaml](http://purl.obolibrary.org/obo/mondo/patterns/lipoma.yaml)
## Description
A benign, usually painless, well-circumscribed lipomatous tumor composed of adipose tissue that is located in a specific anatomical location.
Examples: [skin lipoma](http://purl.obolibrary.org/obo/MONDO_0000964), [colorectal lipoma](http://purl.obolibrary.org/obo/MONDO_0003885), [tendon sheath lipoma](http://purl.obolibrary.org/obo/MONDO_0004076) (28 total)
## Contributors
* [https://orcid.org/0000-0002-6601-2165](https://orcid.org/0000-0002-6601-2165)
* [https://orcid.org/0000-0001-5208-3432](https://orcid.org/0000-0001-5208-3432)
## Name
{[location](http://www.w3.org/2002/07/owl#Thing)} lipoma
## Annotations
* [exact_synonym](http://www.geneontology.org/formats/oboInOwl#hasExactSynonym): {[location](http://www.w3.org/2002/07/owl#Thing)} lipoma
## Definition
A benign adipose tissue neoplasm of the {[location](http://www.w3.org/2002/07/owl#Thing)}.
## Equivalent to
[lipoma](http://purl.obolibrary.org/obo/MONDO_0005106) and ([disease has location](http://purl.obolibrary.org/obo/RO_0004026) some {[location](http://www.w3.org/2002/07/owl#Thing)})
## Data preview
| defined_class | defined_class_label | location | location_label |
|:---------------------------------------------|:------------------------------|:----------------------------------------------|:-----------------------|
| [MONDO:0000974](http://purl.obolibrary.org/obo/MONDO_0000974) | axillary lipoma | [UBERON:0009472](http://purl.obolibrary.org/obo/UBERON_0009472) | axilla |
| [MONDO:0000970](http://purl.obolibrary.org/obo/MONDO_0000970) | breast lipoma | [UBERON:0000310](http://purl.obolibrary.org/obo/UBERON_0000310) | breast |
| [MONDO:0003844](http://purl.obolibrary.org/obo/MONDO_0003844) | central nervous system lipoma | [UBERON:0001017](http://purl.obolibrary.org/obo/UBERON_0001017) | central nervous system |
| [MONDO:0003843](http://purl.obolibrary.org/obo/MONDO_0003843) | cerebral hemisphere lipoma | [UBERON:0001869](http://purl.obolibrary.org/obo/UBERON_0001869) | cerebral hemisphere |
| [MONDO:0000971](http://purl.obolibrary.org/obo/MONDO_0000971) | chest wall lipoma | [UBERON:0016435](http://purl.obolibrary.org/obo/UBERON_0016435) | chest wall |
See full table [here](https://github.com/monarch-initiative/mondo/blob/master/src/patterns/data/matches/lipoma.tsv)
| 69.315789 | 217 | 0.660592 | yue_Hant | 0.501848 |
dc9fc01323df59d2483a0b95ac81542c2bcd511f | 6,113 | md | Markdown | README.md | celinew1221/clevr-dataset-gen | cebc5c7e2c3e82f7b6892949d4fa4fb317922ed7 | [
"BSD-3-Clause"
] | null | null | null | README.md | celinew1221/clevr-dataset-gen | cebc5c7e2c3e82f7b6892949d4fa4fb317922ed7 | [
"BSD-3-Clause"
] | null | null | null | README.md | celinew1221/clevr-dataset-gen | cebc5c7e2c3e82f7b6892949d4fa4fb317922ed7 | [
"BSD-3-Clause"
] | null | null | null | # CLEVR Dataset Generation Augmented with Action Identification
This is the code developed based on the original [CLEVR dataset](http://cs.stanford.edu/people/jcjohns/clevr/) generation code. The original non-action-based dataset generation mechanism is as described in the [paper](https://arxiv.org/abs/1612.06890).
You can download the augmented CLEVR dataset [here](https://drive.google.com/file/d/1ww76wkUzrIhfgAUorI7IgI6L2wTFnL8p/view). It contains 54k training samples, 7k validation samples and 10k test samples. Each sample contains two images and a question. All questions are grouped into CLEVR_*XYZ*_questions.json files.
You can use this code to render synthetic images and compositional questions for those images, like this:
<div align="center">
<img src="images/example1080.png" width="500px">
</div>
**Q:** How many small spheres are there? <br>
**A:** 2
**Q:** What number of cubes are small things or red metal objects? <br>
**A:** 2
**Q:** Does the metal sphere have the same color as the metal cylinder? <br>
**A:** Yes
**Q:** Are there more small cylinders than metal things? <br>
**A:** No
**Q:** There is a cylinder that is on the right side of the large yellow object behind the blue ball; is there a shiny cube in front of it? <br>
**A:** Yes
**Q:** What color does the big red rubber cube in image1 change to in image2? <br>
**A:** Blue
**Q:** How does the big red rubber cube move from image1 to image2? <br>
**A:** Front Right
See more examples in the template folder of `question_generation/`.
If you find this code useful in your research, please cite the original paper and cite my work use this bibtex:
```
@misc{celinew1221_action_clevr_2018,
title={CLEVR Dataset Generation Augmented with Action Identification},
author={Celine Wei},
year={2018},
publisher={Github},
journal={GitHub repository},
howpublished={\url{https://github.com/celinew1221/clevr-dataset-gen/}},
}
```
All code was developed and tested on Ubuntu 16.04.
## Step 1: Generating Images
### Regular Images
First we render synthetic images using [Blender](https://www.blender.org/), outputting both rendered images as well as a JSON file containing ground-truth scene information for each image.
Blender ships with its own installation of Python which is used to execute scripts that interact with Blender; you'll need to add the `image_generation` directory to Python path of Blender's bundled Python. The easiest way to do this is by adding a `.pth` file to the `site-packages` directory of Blender's Python, like this:
```bash
echo $PWD/image_generation >> $BLENDER/$VERSION/python/lib/python3.5/site-packages/clevr.pth
```
where `$BLENDER` is the directory where Blender is installed and `$VERSION` is your Blender version; for example on OSX you might run:
```bash
echo $PWD/image_generation >> /Applications/blender/blender.app/Contents/Resources/2.78/python/lib/python3.5/site-packages/clevr.pth
```
You can then render some regular CLEVR images like this:
```bash
cd image_generation
blender --background --python render_images.py -- --num_images 10
```
On OSX the `blender` binary is located inside the blender.app directory; for convenience you may want to
add the following alias to your `~/.bash_profile` file:
```bash
alias blender='/Applications/blender/blender.app/Contents/MacOS/blender'
```
If you have an NVIDIA GPU with CUDA installed then you can use the GPU to accelerate rendering like this:
```bash
blender --background --python render_images.py -- --num_images 10 --use_gpu 1
```
After this command terminates you should have ten freshly rendered images stored in `output/images` like these:
<div align="center">
<img src="images/img1.png" width="260px">
<img src="images/img2.png" width="260px">
<img src="images/img3.png" width="260px">
<br>
<img src="images/img4.png" width="260px">
<img src="images/img5.png" width="260px">
<img src="images/img6.png" width="260px">
</div>
The file `output/CLEVR_scenes.json` will contain ground-truth scene information for all newly rendered images.
### Action Images
Similar to regular images, you can render action-based CLEVR images like this:
```bash
cd image_generation
blender --background --python render_images.py -- --num_images 10 --action
```
After this command terminates you should have ten freshly rendered images stored in `output/images` like these:
<div align="center">
<img src="images_action/img1.png" width="260px" alt="Color Change">
<img src="images_action/img3.png" width="260px" alt="Movement">
<img src="images_action/img5.png" width="260px" alt="Material Change">
<br>
<img src="images_action/img2.png" width="260px">
<img src="images_action/img4.png" width="260px">
<img src="images_action/img6.png" width="260px">
<br>
<b> Color Change, Movement, Material Change (from Left to Right) </b>
</div>
The image filename has a split parameter, by default "new" is the original image and "cor" is "new"'s corresponding image.
The file `output/CLEVR_cb_scenes.json` will contain ground-truth scene information for all newly rendered images and its changes.
You can find [more details about image rendering here](image_generation/README.md).
## Step 2: Generating Questions
Next we generate questions, functional programs, and answers for the rendered images generated in the previous step.
This step takes as input the single JSON file containing all ground-truth scene information, and outputs a JSON file
containing questions, answers, and functional programs for the questions in a single JSON file.
You can generate questions like this for regular images:
```bash
cd question_generation
python generate_questions.py
```
The file `output/CLEVR_questions.json` will then contain questions for the generated images.
You can generate questions for action-based images:
```bash
cd question_generation
python generate_questions.py --action 1
```
The file `output/CLEVR_action_questions.json` will then contain questions for the generated images.
You can [find more details about question generation here](question_generation/README.md).
| 40.483444 | 325 | 0.756421 | eng_Latn | 0.969289 |
dc9fc3c1933bf46072cc3e38ecc161a74feb7860 | 1,313 | md | Markdown | README.md | Numez/RectangularGradientDrawable | 1c284dc46bb56c96695c8a239fb3c24412fdb222 | [
"Apache-2.0"
] | null | null | null | README.md | Numez/RectangularGradientDrawable | 1c284dc46bb56c96695c8a239fb3c24412fdb222 | [
"Apache-2.0"
] | 1 | 2015-07-05T21:07:51.000Z | 2015-07-05T21:17:23.000Z | README.md | Numez/RectangularGradientDrawable | 1c284dc46bb56c96695c8a239fb3c24412fdb222 | [
"Apache-2.0"
] | null | null | null | # RectangularGradientDrawable
It creates a rectangle gradient shape of many colours from the centre to the exterior
It can be set as background of a view to be the shadow
![RectangularGradientDrawable](https://cloud.githubusercontent.com/assets/6752432/8513409/9bebd6f0-236a-11e5-8931-6a9f8e47ef0e.jpg)
-------
### Usage
###### 2 colors
Drawable rectDrawable = new RectangularGradientDrawable( Color.RED, Color.BLUE );
myView.setBackground(rectDrawable);
###### Multiple colors
Drawable rectDrawable = new RectangularGradientDrawable( new int[] { Color.GREEN, Color.BLUE, Color.YELLOW, Color.RED }, new float[] { 0f, 0.3f, 0.6f, 1f } );
myView.setBackground(rectDrawable);
-------
Copyright 2015 Tiziano Munegato
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 38.617647 | 162 | 0.733435 | eng_Latn | 0.949108 |
dc9ff6a070b53d88b8fdce17563c068f63ae2e7f | 1,066 | md | Markdown | README.md | regulad/asyncgTTS | c68ecd28c139a7fc733f8374a2e5e4abece640b8 | [
"MIT"
] | null | null | null | README.md | regulad/asyncgTTS | c68ecd28c139a7fc733f8374a2e5e4abece640b8 | [
"MIT"
] | null | null | null | README.md | regulad/asyncgTTS | c68ecd28c139a7fc733f8374a2e5e4abece640b8 | [
"MIT"
] | null | null | null | # asyncgTTS
Asynchronous interfaces to the [official Google Text to Speech](https://cloud.google.com/text-to-speech) API written with aiohttp.
Similar to [googleapis/python-texttospeech](https://github.com/googleapis/python-texttospeech/blob/3125b714f547191a830faecb5ae0b830e53e99fd/google/cloud/texttospeech_v1/services/text_to_speech/async_client.py#L35) in concept, but asyncgTTS was designed with asynchronously in mind and is expected to be more performant.
### Example
```python
import asyncio
import json
from asyncgTTS import AsyncGTTSSession, ServiceAccount
async def main():
with open("SERVICE_ACCOUNT.JSON") as service_account_json:
service_account_dict = json.load(service_account_json)
service_account = ServiceAccount.from_service_account_dict(service_account_dict)
async with AsyncGTTSSession.from_service_account(service_account) as google_tts:
audio_bytes = await google_tts.synthesize("vine boom")
with open("Hello_world.mp3", "wb") as f:
f.write(audio_bytes)
asyncio.run(main())
```
| 34.387097 | 319 | 0.77955 | eng_Latn | 0.62289 |
dca09c45395bfc584979bac3f8c94ffd473b5cf9 | 301 | md | Markdown | README.md | s1monj/nicetotouch-web | 453a64b1ae5e041d9ca7fb4f511def133530c5f3 | [
"MIT"
] | null | null | null | README.md | s1monj/nicetotouch-web | 453a64b1ae5e041d9ca7fb4f511def133530c5f3 | [
"MIT"
] | null | null | null | README.md | s1monj/nicetotouch-web | 453a64b1ae5e041d9ca7fb4f511def133530c5f3 | [
"MIT"
] | null | null | null | # NTT staic website for deployment on IPFS with ENS
- https://nicetotouch.eth.link
- Forked from [gatsby-starter-fresh](https://github.com/mishal23/gatsby-starter-fresh)
- Added [gatsby-remark-relative-images](https://github.com/danielmahon/gatsby-remark-relative-images) for relatives images on IPFS
| 60.2 | 130 | 0.790698 | yue_Hant | 0.575487 |
dca0cd23901e13d5a6e2a8672e4f7a816285d894 | 1,534 | md | Markdown | _posts/2021-04-09-describe-dynamics-365-marketing-capabilities-8-knowledge-check-.md | avicoder/mslearn | d864219a93bfa551c113003450f9284002299508 | [
"MIT"
] | null | null | null | _posts/2021-04-09-describe-dynamics-365-marketing-capabilities-8-knowledge-check-.md | avicoder/mslearn | d864219a93bfa551c113003450f9284002299508 | [
"MIT"
] | null | null | null | _posts/2021-04-09-describe-dynamics-365-marketing-capabilities-8-knowledge-check-.md | avicoder/mslearn | d864219a93bfa551c113003450f9284002299508 | [
"MIT"
] | 1 | 2022-03-09T17:33:15.000Z | 2022-03-09T17:33:15.000Z | ---
layout: post
title: Describe Dynamics 365 Marketing capabilities
description: nil
summary: nil
tags: nil
---
<a target="_blank" href="https://docs.microsoft.com/en-us/learn/modules/describe-dynamics-365-marketing-capabilities/8-knowledge-check/"><i class="fas fa-external-link-alt"></i> </a>
<img align="right" src="https://docs.microsoft.com/en-us/learn/achievements/describe-dynamics-365-marketing-capabilities.svg">
#### 1. What is a Customer Journey?
<i class='far fa-square'></i> The customer service process
<i class='fas fa-check-square' style='color: Dodgerblue;'></i> The building and tracking of a marketing effort
<i class='far fa-square'></i> The sales process
<br />
<br />
<br />
#### 2. What are Segments in Dynamics 365 Marketing?
<i class='far fa-square'></i> The division of selected accounts by city.
<i class='far fa-square'></i> The separation of two companies, which was formerly one.
<i class='fas fa-check-square' style='color: Dodgerblue;'></i> Defined target groups of related contacts.
<br />
<br />
<br />
#### 3. Which of the following Dynamics 365 Marketing features would you use to set up and manage an online trade show your organization is hosting?
<i class='far fa-square'></i> Customer Journeys
<i class='fas fa-check-square' style='color: Dodgerblue;'></i> Event Management
<i class='far fa-square'></i> Customer Segments
<br />
<br />
<br />
| 32.638298 | 183 | 0.699478 | eng_Latn | 0.812519 |
dca1307e84b92dbd654405072a1cc4fc3766f2a5 | 1,527 | md | Markdown | README.md | MarinMarinov/ASP.NET-MVC-Graduation-Project | a2f65fa4d9be8ad69a9b637a85eff80ffdf37d3f | [
"MIT"
] | 3 | 2016-04-04T13:52:57.000Z | 2018-07-23T06:55:09.000Z | README.md | MarinMarinov/ASP.NET-MVC-Graduation-Project | a2f65fa4d9be8ad69a9b637a85eff80ffdf37d3f | [
"MIT"
] | null | null | null | README.md | MarinMarinov/ASP.NET-MVC-Graduation-Project | a2f65fa4d9be8ad69a9b637a85eff80ffdf37d3f | [
"MIT"
] | 1 | 2018-03-27T13:44:29.000Z | 2018-03-27T13:44:29.000Z | # ASP.NET-MVC-Graduation-Project
###Telerik Academy 2016 - ASP.NET MVC Course - Graduation Project
On-line Auction Application "Live Auction"
##The Idea
App for managing on-line instant live auctions(continuing no more than several minutes) for luxury goods like pictures, statues, porcelain, jewellery, post stamps, retro cars etc.
###Using **SignalR** technology implements the fastest possible real-time communication true web-sockets.
The functionality is as described:
###Admin functionality:
* Full CRUD operations on Item with upload of pictures
* The pictures are saved in the Database
* Full CRUD operations on Auction for an Item
* Manage Auction:
* Set Auction as Active
* Close Auction(set it as Inactive)
* Make bids
* Includes all User and free functionality
###User functionality:
* Create, Read, Edit of User
* Uploading avatar image
* Avatar images are saved on the file system
* Join to active Auction
* Make bids
###Free functionality
* See all Items and details of Items
* See all Auctions and details of Auctions
###Used frameworks and libraries
* ASP.NET MVC 5.0
* Including all default libraries and frameworks
* ASP.NET SignalR 2.2.0
* Entity Framework 6.1.3
* Code-first approach
* jQuery - 2.0.3
* jQuery UI - 1.11.4
* Unobtrusive Ajax support library for jQuery
* jQuery Validation Plugin 1.11.1
* Unobtrusive validation support library for jQuery and jQuery Validate
* Autofac.Mvc 5 - 3.3.3
* Automapper - 4.2.0
* Grid.Mvc - 2.3.0
* Bootstrap CSS library
* Spacelab free theme
| 29.941176 | 179 | 0.756385 | eng_Latn | 0.731468 |
dca17274d1dc7aacc3894abadf38ddfe20f3ea49 | 914 | md | Markdown | internal/compiler/transforms/compile/test-fixtures/css-handler/prefix/pseudo-selectors/mixed/input.test.md | jer3m01/rome | 66c61388f0192e34e0dd6d2a5f743b2100ebaa38 | [
"MIT"
] | null | null | null | internal/compiler/transforms/compile/test-fixtures/css-handler/prefix/pseudo-selectors/mixed/input.test.md | jer3m01/rome | 66c61388f0192e34e0dd6d2a5f743b2100ebaa38 | [
"MIT"
] | null | null | null | internal/compiler/transforms/compile/test-fixtures/css-handler/prefix/pseudo-selectors/mixed/input.test.md | jer3m01/rome | 66c61388f0192e34e0dd6d2a5f743b2100ebaa38 | [
"MIT"
] | null | null | null | # `index.test.ts`
**DO NOT MODIFY**. This file has been autogenerated. Run `rome test internal/compiler/transforms/compile/index.test.ts --update-snapshots` to update.
## `css-handler > prefix > pseudo-selectors > mixed`
### `Diagnostics`
```css
```
### `Input`
```css
/* Multiple pseudo selectors mixed */
.parent3 > .child3:any-link, .example3::placeholder {
width: 10px;
}
.parent3 > .child3::selection, .example3:read-only {
width: 10px;
}
```
### `Output`
```css
/* Multiple pseudo selectors mixed */
.parent3 > .child3:any-link,
.example3::placeholder {
width: 10px;
}
.parent3 > .child3:-moz-any-link,
.example3::-moz-placeholder {
width: 10px;
}
.parent3 > .child3:-webkit-any-link,
.example3::-webkit-input-placeholder {
width: 10px;
}
.parent3 > .child3::selection,
.example3:read-only {
width: 10px;
}
.parent3 > .child3::-moz-selection,
.example3:-moz-read-only {
width: 10px;
}
```
| 17.245283 | 149 | 0.670678 | eng_Latn | 0.530816 |
dca173467acc85ece7ac9093a20afafc6c649072 | 10,265 | md | Markdown | docs/standard-library/wstring-convert-class.md | Mdlglobal-atlassian-net/cpp-docs.it-it | c8edd4e9238d24b047d2b59a86e2a540f371bd93 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard-library/wstring-convert-class.md | Mdlglobal-atlassian-net/cpp-docs.it-it | c8edd4e9238d24b047d2b59a86e2a540f371bd93 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard-library/wstring-convert-class.md | Mdlglobal-atlassian-net/cpp-docs.it-it | c8edd4e9238d24b047d2b59a86e2a540f371bd93 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-28T15:54:57.000Z | 2020-05-28T15:54:57.000Z | ---
title: Classe wstring_convert
ms.date: 11/04/2016
f1_keywords:
- wstring/stdext::cvt::wstring_convert
- locale/std::wstring_convert::byte_string
- locale/std::wstring_convert::wide_string
- locale/std::wstring_convert::state_type
- locale/std::wstring_convert::int_type
- locale/std::wstring_convert::from_bytes
- locale/std::wstring_convert::to_bytes
- locale/std::wstring_convert::converted
- locale/std::wstring_convert::state
helpviewer_keywords:
- stdext::cvt [C++], wstring_convert
- std::wstring_convert [C++], byte_string
- std::wstring_convert [C++], wide_string
- std::wstring_convert [C++], state_type
- std::wstring_convert [C++], int_type
- std::wstring_convert [C++], from_bytes
- std::wstring_convert [C++], to_bytes
- std::wstring_convert [C++], converted
- std::wstring_convert [C++], state
ms.assetid: e34f5b65-d572-4bdc-ac69-20778712e376
ms.openlocfilehash: f09f12d9100e9faad849de608a9124f457da23df
ms.sourcegitcommit: c123cc76bb2b6c5cde6f4c425ece420ac733bf70
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 04/14/2020
ms.locfileid: "81366360"
---
# <a name="wstring_convert-class"></a>Classe wstring_convert
Il modello `wstring_convert` di classe esegue conversioni tra una stringa wide e una stringa di byte.
## <a name="syntax"></a>Sintassi
```cpp
template <class Codecvt, class Elem = wchar_t>
class wstring_convert
```
### <a name="parameters"></a>Parametri
*Codecvt*\
Facet delle [impostazioni locali](../standard-library/locale-class.md) che rappresenta l'oggetto di conversione.
*Elem*\
Tipo di elemento a caratteri "wide".
## <a name="remarks"></a>Osservazioni
Il modello di classe descrive un oggetto che controlla `std::basic_string<Elem>` le conversioni `std::basic_string<char>` tra oggetti `std::string`stringa wide di classe e oggetti stringa di byte della classe (noto anche come ). Il modello di classe `wide_string` `byte_string` definisce i tipi e come sinonimi per questi due tipi. La conversione tra una sequenza di valori `Elem` (archiviati in un oggetto `wide_string`) e sequenze multibyte (archiviate in un oggetto `byte_string`) viene eseguita da un oggetto della classe `Codecvt<Elem, char, std::mbstate_t>`, che soddisfa i requisiti del facet standard di conversione del codice `std::codecvt<Elem, char, std::mbstate_t>`.
Un oggetto di questo modello di classe archivia:
- Una stringa di byte da visualizzare in caso di errori
- Una stringa di caratteri "wide" da visualizzare in caso di errori
- Un puntatore all'oggetto di conversione allocato (che viene liberato quando viene eliminato definitivamente l'oggetto wbuffer_convert)
- Un oggetto conversion state di tipo [state_type](#state_type)
- Un conteggio delle conversioni
### <a name="constructors"></a>Costruttori
|Costruttore|Descrizione|
|-|-|
|[wstring_convert](#wstring_convert)|Costruisce un oggetto di tipo `wstring_convert`.|
### <a name="typedefs"></a>Typedef
|Nome tipo|Descrizione|
|-|-|
|[byte_string](#byte_string)|Tipo che rappresenta una stringa di tipo byte.|
|[wide_string](#wide_string)|Tipo che rappresenta una stringa di tipo wide.|
|[state_type](#state_type)|Tipo che rappresenta lo stato della conversione.|
|[int_type](#int_type)|Tipo che rappresenta un Integer.|
### <a name="member-functions"></a>Funzioni membro
|Funzione membro|Descrizione|
|-|-|
|[from_bytes](#from_bytes)|Converte una stringa di byte in una stringa di caratteri "wide".|
|[to_bytes](#to_bytes)|Converte una stringa di caratteri "wide" in una stringa di byte.|
|[converted](#converted)|Restituisce il numero di conversioni riuscite.|
|[Stato](#state)|Restituisce un oggetto che rappresenta lo stato della conversione.|
## <a name="requirements"></a>Requisiti
**Intestazione:** \<locale>
**Spazio dei nomi:** std
## <a name="wstring_convertbyte_string"></a><a name="byte_string"></a>wstring_convert::byte_string
Tipo che rappresenta una stringa di tipo byte.
```cpp
typedef std::basic_string<char> byte_string;
```
### <a name="remarks"></a>Osservazioni
Il tipo è sinonimo di `std::basic_string<char>`.
## <a name="wstring_convertconverted"></a><a name="converted"></a>wstring_convert::convertito
Restituisce il numero di conversioni riuscite.
```cpp
size_t converted() const;
```
### <a name="return-value"></a>Valore restituito
Numero di conversioni riuscite.
### <a name="remarks"></a>Osservazioni
Il numero di conversioni riuscite viene archiviato nell'oggetto del conteggio di conversione.
## <a name="wstring_convertfrom_bytes"></a><a name="from_bytes"></a>wstring_convert::from_bytes
Converte una stringa di byte in una stringa di caratteri "wide".
```cpp
wide_string from_bytes(char Byte);
wide_string from_bytes(const char* ptr);
wide_string from_bytes(const byte_string& Bstr);
wide_string from_bytes(const char* first, const char* last);
```
### <a name="parameters"></a>Parametri
|Parametro|Descrizione|
|---------------|-----------------|
|*Byte*|Sequenza di byte a elemento singolo da convertire.|
|*Ptr*|Sequenza di caratteri con terminazione Null di tipo C da convertire.|
|*Bstr*|Oggetto [byte_string](#byte_string) da convertire.|
|*Prima*|Il primo carattere in un intervallo di caratteri da convertire.|
|*Ultima*|L'ultimo carattere in un intervallo di caratteri da convertire.|
### <a name="return-value"></a>Valore restituito
Oggetto stringa di caratteri "wide" risultante dalla conversione.
### <a name="remarks"></a>Osservazioni
Se l'oggetto stato di [conversione](../standard-library/wstring-convert-class.md) *non* è stato costruito con un valore esplicito, viene impostato sul valore predefinito (lo stato di conversione iniziale) prima dell'inizio della conversione. In caso contrario, verrà lasciato invariato.
Il numero di elementi di input convertiti correttamente viene archiviato nell'oggetto conteggio di conversione. Se non si verificano errori di conversione, la funzione membro restituisce la stringa di caratteri "wide" convertita. In caso contrario, se l'oggetto è stato costruito con un inizializzatore per il messaggio di errore con stringa di caratteri "wide", la funzione membro restituisce l'oggetto messaggio di errore con stringa di caratteri "wide". In caso contrario, la funzione membro genera un oggetto della classe [range_error](../standard-library/range-error-class.md).
## <a name="wstring_convertint_type"></a><a name="int_type"></a>wstring_convert::int_type
Tipo che rappresenta un Integer.
```cpp
typedef typename wide_string::traits_type::int_type int_type;
```
### <a name="remarks"></a>Osservazioni
Il tipo è sinonimo di `wide_string::traits_type::int_type`.
## <a name="wstring_convertstate"></a><a name="state"></a>wstring_convert::stato
Restituisce un oggetto che rappresenta lo stato della conversione.
```cpp
state_type state() const;
```
### <a name="return-value"></a>Valore restituito
Oggetto [conversion state](../standard-library/wstring-convert-class.md) che rappresenta lo stato della conversione.
### <a name="remarks"></a>Osservazioni
## <a name="wstring_convertstate_type"></a><a name="state_type"></a>wstring_convert::state_type
Tipo che rappresenta lo stato della conversione.
```cpp
typedef typename Codecvt::state_type state_type;
```
### <a name="remarks"></a>Osservazioni
Il tipo descrive un oggetto che può rappresentare uno stato di conversione. Il tipo è sinonimo di `Codecvt::state_type`.
## <a name="wstring_convertto_bytes"></a><a name="to_bytes"></a>wstring_convert::to_bytes
Converte una stringa di caratteri "wide" in una stringa di byte.
```cpp
byte_string to_bytes(Elem Char);
byte_string to_bytes(const Elem* Wptr);
byte_string to_bytes(const wide_string& Wstr);
byte_string to_bytes(const Elem* first, const Elem* last);
```
### <a name="parameters"></a>Parametri
|Parametro|Descrizione|
|---------------|-----------------|
|*Char*|Carattere wide da convertire.|
|*Wptr (Configurazione in conto che*|Sequenza con terminazione Null di tipo C, che inizia da `wptr`, da convertire.|
|*Wstr*|Oggetto [wide_string](#wide_string) da convertire.|
|*Prima*|Primo elemento nell'intervallo di elementi da convertire.|
|*Ultima*|Ultimo elemento nell'intervallo di elementi da convertire.|
### <a name="remarks"></a>Osservazioni
Se l'oggetto stato di [conversione](../standard-library/wstring-convert-class.md) *non* è stato costruito con un valore esplicito, viene impostato sul valore predefinito (lo stato di conversione iniziale) prima dell'inizio della conversione. In caso contrario, verrà lasciato invariato.
Il numero di elementi di input convertiti correttamente viene archiviato nell'oggetto conteggio di conversione. Se non si verificano errori di conversione, la funzione membro restituisce la stringa di byte convertita. In caso contrario, se l'oggetto è stato costruito con un inizializzatore per il messaggio di errore con stringa di byte, la funzione membro restituisce l'oggetto messaggio di errore con stringa di byte. In caso contrario, la funzione membro genera un oggetto della classe [range_error](../standard-library/range-error-class.md).
## <a name="wstring_convertwide_string"></a><a name="wide_string"></a>wstring_convert::wide_string
Tipo che rappresenta una stringa di tipo wide.
```cpp
typedef std::basic_string<Elem> wide_string;
```
### <a name="remarks"></a>Osservazioni
Il tipo è sinonimo di `std::basic_string<Elem>`.
## <a name="wstring_convertwstring_convert"></a><a name="wstring_convert"></a>wstring_convert::wstring_convert
Costruisce un oggetto di tipo `wstring_convert`.
```cpp
wstring_convert(Codecvt *Pcvt = new Codecvt);
wstring_convert(Codecvt *Pcvt, state_type _State);
wstring_convert(const byte_string& _Berr, const wide_string& Werr = wide_string());
```
### <a name="parameters"></a>Parametri
|Parametro|Descrizione|
|---------------|-----------------|
|*\*Pcvt*|Oggetto di tipo `Codecvt` per eseguire la conversione.|
|*_State*|Oggetto di tipo [state_type](#state_type) che rappresenta lo stato della conversione.|
|*_Berr*|L'oggetto [byte_string](#byte_string) da visualizzare in caso di errori.|
|*Werr*|L'oggetto [wide_string](#wide_string) da visualizzare in caso di errori.|
### <a name="remarks"></a>Osservazioni
Il primo costruttore archivia *Pcvt_arg* nell'[oggetto di conversione](../standard-library/wstring-convert-class.md)
| 40.254902 | 678 | 0.757428 | ita_Latn | 0.959227 |
dca19a7272cd6147e16774ad21c81f953db73783 | 254 | md | Markdown | docs/apps/unificontroller.md | idolize/DockSTARTer | f9b350e2f1845ce0ec1b6ce6498895cd44cea91f | [
"MIT"
] | null | null | null | docs/apps/unificontroller.md | idolize/DockSTARTer | f9b350e2f1845ce0ec1b6ce6498895cd44cea91f | [
"MIT"
] | 28 | 2020-07-27T05:46:00.000Z | 2021-12-13T17:02:40.000Z | docs/apps/unificontroller.md | idolize/DockSTARTer | f9b350e2f1845ce0ec1b6ce6498895cd44cea91f | [
"MIT"
] | null | null | null | # Unifi Controller
Be sure to change your IP address under Settings, Controller, Controller Hostname/IP to your real IP or your APs will stall when "Adopting". If you don't see this option then click "Can't find what you need? Switch to Clasic Mode"
| 63.5 | 233 | 0.767717 | eng_Latn | 0.996159 |
dca37e0fdbe6d24550dd7c4f41771717bf019a39 | 2,265 | md | Markdown | README.md | tancredi/griddy | a443d1964f73a7d6ecc80bccd1e1c1452dbad1d3 | [
"MIT"
] | 2 | 2016-11-12T20:44:28.000Z | 2021-04-26T17:04:02.000Z | README.md | tancredi/griddy | a443d1964f73a7d6ecc80bccd1e1c1452dbad1d3 | [
"MIT"
] | null | null | null | README.md | tancredi/griddy | a443d1964f73a7d6ecc80bccd1e1c1452dbad1d3 | [
"MIT"
] | 1 | 2018-07-28T06:16:31.000Z | 2018-07-28T06:16:31.000Z | # Griddy
> A Stylus plugin that makes it easy to create custom, simple, responsive grids
## Examples
* [12 columns grid](http://tancredi.github.io/griddy/examples/12-col-grid.html)
* [10 columns grid](http://tancredi.github.io/griddy/examples/10-col-grid.html)
* [Custom class grid](http://tancredi.github.io/griddy/examples/custom-class.html)
## Plugin setup
Install:
```npm install griddy```
#### Use with command-line stylus client:
```stylus src -o css -u griddy```
#### Use as a script
```
var stylus = require('stylus'),
griddy = require('griddy')
function compile (src) {
return stylus(src).set('filename', path).use(griddy());
}
```
## Simple setup
You can also just add griddy to your stylus files directory and import it
```
@import './griddy'
```
## Usage
Apply the mixin `grid-system` to any selector to define a grid system.
* `grid-system(cols = 12, gutter = 20px, child = '.col', offset = '.off', all-columns = true, separate = '-', breakpoint = 400px)`
#### Parameters
* `cols` - Number of total columns
* `gutter` - Horizontal space between columns and vertical space between rows
* `child` - Child selector suffix (Or full selector, if all-columns is set to false)
* `offset` - Offset selector suffix
* `all-columns` - Generate selectors for all numbers of spans, using `[selector][separate][spans]` convention
* `separate` - String separating `child` prefix from spans count, used when `all-columns` is set to true
* `breakpoint` - Max-size for responsive media query (Columns will break to full-width under specified size)
## Example
#### Stylus
```
@import 'griddy'
.row
grid-system(12, 20px, '.col', '.off')
```
#### HTML
```html
<div class="row">
<div class="col-3">
<!-- 3 columns -->
</div>
<div class="col-8 off-1">
<!-- 8 columns offset by 1 -->
</div>
</div>
```
## Custom class
#### Stylus
```
@import 'griddy'
.split-3
grid-system(3, 10px, '.thirds', true, ' thirds-')
```
#### HTML
```html
<div class="split-3">
<div class="thirds thirds-1"> [...] </div>
<div class="thirds thirds-1"> [...] </div>
<div class="thirds thirds-1"> [...] </div>
</div>
```
## Licence
Copyright (c) 2014 Tancredi Trugenberger. - Released under the MIT license | 21.168224 | 130 | 0.648565 | eng_Latn | 0.85567 |
dca4933d08e3b7ce007a5c4cc585b56c5eb07de2 | 587 | md | Markdown | Ch03/3.1.md | TheYargonaut/algorithms-text-answers | 2125e6bf1ec4a32003358b9219eac896f624d12b | [
"MIT"
] | null | null | null | Ch03/3.1.md | TheYargonaut/algorithms-text-answers | 2125e6bf1ec4a32003358b9219eac896f624d12b | [
"MIT"
] | null | null | null | Ch03/3.1.md | TheYargonaut/algorithms-text-answers | 2125e6bf1ec4a32003358b9219eac896f624d12b | [
"MIT"
] | null | null | null | # 3.1 Exercises
1. Obvious, eliding exercise
2. Obvious, eliding exercise
3. It states A is asymptotically less than or equal to some function greater than or equal to $cn^2$. That is, it states no meaningful bound.
4. Yes. $2^{n+1}=2*2^n$, so $2^{n+1} < c2^{n}$ for $c \geq 2$\
No. $2^{2n}=4^n$, and $c2^n \geq 4^n$ has no asymptotically true solution
5. Obvious, eliding exercise.
6. Theorem 3.1, plus meaning of worst-case and best-case generalizing to the whole algorithm.
7. No overlap possible, nothing greater and less than tight asymptotic bound.
8. Obvious, eliding exercise | 53.363636 | 141 | 0.722317 | eng_Latn | 0.995821 |
dca5033455081b939bd3dc91af8f79b07c8b9628 | 456 | md | Markdown | winter-extensions/readme.md | abrinkmann/winter | 37568be5c23f0fced592dd6d25d0498292fe6527 | [
"Apache-2.0"
] | 100 | 2017-05-21T09:15:28.000Z | 2022-03-06T12:33:14.000Z | winter-extensions/readme.md | abrinkmann/winter | 37568be5c23f0fced592dd6d25d0498292fe6527 | [
"Apache-2.0"
] | 26 | 2017-05-23T11:52:02.000Z | 2022-01-04T16:31:19.000Z | winter-extensions/readme.md | wbsg-uni-mannheim/winter | 51519a1f1701d0f9416a619b89abc29dd885e8c7 | [
"Apache-2.0"
] | 36 | 2017-05-22T13:48:41.000Z | 2021-05-25T13:03:04.000Z | # WInte.r extensions
This repository contains extensions for the WInte.r framework.
## WInte.r - Metanome integration
This project integrates the WInte.r web tables data model with algorithms developed for the [Metanome data profiling tool](https://hpi.de/naumann/projects/data-profiling-and-analytics/metanome-data-profiling.html).
Currently, this extensions supports the discovery of functional dependencies and approximate functional dependencies.
| 45.6 | 215 | 0.820175 | eng_Latn | 0.91318 |
dca6b75c19ac75d0abf0e24081f87327a0295a20 | 3,955 | md | Markdown | README.md | chubbyphp/chubbyphp-session-storageless | 1d4b124e24cc35917243e8f54447c457a3d8b6df | [
"MIT"
] | 2 | 2019-04-12T09:50:13.000Z | 2019-04-14T09:47:03.000Z | README.md | chubbyphp/chubbyphp-session-storageless | 1d4b124e24cc35917243e8f54447c457a3d8b6df | [
"MIT"
] | null | null | null | README.md | chubbyphp/chubbyphp-session-storageless | 1d4b124e24cc35917243e8f54447c457a3d8b6df | [
"MIT"
] | null | null | null | # chubbyphp-session-storageless
[![Build Status](https://api.travis-ci.org/chubbyphp/chubbyphp-session-storageless.png?branch=master)](https://travis-ci.org/chubbyphp/chubbyphp-session-storageless)
[![Coverage Status](https://coveralls.io/repos/github/chubbyphp/chubbyphp-session-storageless/badge.svg?branch=master)](https://coveralls.io/github/chubbyphp/chubbyphp-session-storageless?branch=master)
[![Latest Stable Version](https://poser.pugx.org/chubbyphp/chubbyphp-session-storageless/v/stable.png)](https://packagist.org/packages/chubbyphp/chubbyphp-session-storageless)
[![Total Downloads](https://poser.pugx.org/chubbyphp/chubbyphp-session-storageless/downloads.png)](https://packagist.org/packages/chubbyphp/chubbyphp-session-storageless)
[![Monthly Downloads](https://poser.pugx.org/chubbyphp/chubbyphp-session-storageless/d/monthly)](https://packagist.org/packages/chubbyphp/chubbyphp-session-storageless)
[![Daily Downloads](https://poser.pugx.org/chubbyphp/chubbyphp-session-storageless/d/daily)](https://packagist.org/packages/chubbyphp/chubbyphp-session-storageless)
## Description
[psr7-sessions/storageless][2] persistence adapter for [mezzio/mezzio-session][3].
DEPRECATED: Use [storageless-mezzio-integration][4] instead.
## Requirements
* php: ^7.2
* [mezzio/mezzio-session][2]: ^1.2
* [psr7-sessions/storageless][3]: ^5.0
## Installation
Through [Composer](http://getcomposer.org) as [chubbyphp/chubbyphp-session-storageless][1].
```sh
composer require chubbyphp/chubbyphp-session-storageless "^1.0"
```
## Usage
### With laminas-stratigility using symmetric key (hmac)
#### Generate key
```sh
openssl rand -base64 32
```
#### Code
```php
<?php
declare(strict_types=1);
namespace App;
use Chubbyphp\Session\Storageless\PSR7StoragelessSessionPersistence;
use PSR7Sessions\Storageless\Http\SessionMiddleware as PSR7SessionMiddleware;
use Mezzio\Session\SessionMiddleware as MezzioSessionMiddleware;
use Laminas\Stratigility\MiddlewarePipe;
$middlewarePipe = new MiddlewarePipe();
$middlewarePipe->pipe(PSR7SessionMiddleware::fromSymmetricKeyDefaults(
'JeIn7GmQJRkM4dP3T5ZfVcHk7rxyVoMzR1DptTIquFY=',
1200
));
$middlewarePipe->pipe(new MezzioSessionMiddleware(new PSR7StoragelessSessionPersistence()));
```
### With laminas-stratigility using asymmetric key (rsa)
#### Generate key
```sh
openssl genrsa -out signatureKey 512
openssl rsa -in signatureKey -out verificationKey -outform PEM -pubout
```
#### Code
```php
<?php
declare(strict_types=1);
namespace App;
use Chubbyphp\Session\Storageless\PSR7StoragelessSessionPersistence;
use PSR7Sessions\Storageless\Http\SessionMiddleware as PSR7SessionMiddleware;
use Mezzio\Session\SessionMiddleware as MezzioSessionMiddleware;
use Laminas\Stratigility\MiddlewarePipe;
$middlewarePipe = new MiddlewarePipe();
$middlewarePipe->pipe(PSR7SessionMiddleware::fromAsymmetricKeyDefaults(
'-----BEGIN RSA PRIVATE KEY-----
MIIBOgIBAAJBAKgrmaZQsaEXrlNahrSKzKwWOgEt0SSFlv+Onm94oWNfx7ghZ+Up
cgTwFl+oNMa/AbpO2a6fTuj558/Z0SlWFdUCAwEAAQJBAKKrMf/ndDqv7mcgXMaM
sDgRc+AqEnCybAIdUXHgDLRSolzH36lkg6/jrr8S1G/e7QdK2yvpVgaP/KH0zReo
nMECIQDdXX1vtzxgX+zv8DTNHN3m0StHuJHGC0oaOsDOX06IZQIhAMJ7dGy8XUGy
39INUFBneNc0I4QKxG31jIs6tOe/MiixAiA9GJiORNx9HPygHIP2OIlmM0TmvqI9
LtB8/MpKKzPZoQIgGQfwtSoNSq5uFkf2ZVLb/77LL2x/WbO38heNPyKhnxECIH1T
PbQ839hbekzuV+y8Me+JSUHgybVMg9BDzRXwON7f
-----END RSA PRIVATE KEY-----',
'-----BEGIN PUBLIC KEY-----
MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAKgrmaZQsaEXrlNahrSKzKwWOgEt0SSF
lv+Onm94oWNfx7ghZ+UpcgTwFl+oNMa/AbpO2a6fTuj558/Z0SlWFdUCAwEAAQ==
-----END PUBLIC KEY-----',
1200
));
$middlewarePipe->pipe(new MezzioSessionMiddleware(new PSR7StoragelessSessionPersistence()));
```
## Copyright
Dominik Zogg 2020
[1]: https://packagist.org/packages/chubbyphp/chubbyphp-session-storageless
[2]: https://github.com/mezzio/mezzio-session
[3]: https://github.com/psr7-sessions/storageless
[4]: https://github.com/psr7-sessions/storageless-mezzio-integration
| 35 | 202 | 0.803287 | yue_Hant | 0.426569 |
dca6e5f9226e929bf0d5aa50111b8ea8142af865 | 37 | md | Markdown | README.md | wjlrn/springcloud-config | ffda739c3ad989a6444a8bf5b78930988e7a32fd | [
"MIT"
] | null | null | null | README.md | wjlrn/springcloud-config | ffda739c3ad989a6444a8bf5b78930988e7a32fd | [
"MIT"
] | null | null | null | README.md | wjlrn/springcloud-config | ffda739c3ad989a6444a8bf5b78930988e7a32fd | [
"MIT"
] | null | null | null | # SpringCloud-config
springcloud配置中心 | 12.333333 | 20 | 0.864865 | yue_Hant | 0.193761 |
dca81191965950c228bfbcc3dd2c0bb29ef70d0a | 3,706 | md | Markdown | activities/Etoys.activity/README.md | justinoverton/sugarizer | 8f5f64bee126b402be1a6f50a44eb4464a2fd7c7 | [
"Apache-2.0"
] | null | null | null | activities/Etoys.activity/README.md | justinoverton/sugarizer | 8f5f64bee126b402be1a6f50a44eb4464a2fd7c7 | [
"Apache-2.0"
] | null | null | null | activities/Etoys.activity/README.md | justinoverton/sugarizer | 8f5f64bee126b402be1a6f50a44eb4464a2fd7c7 | [
"Apache-2.0"
] | null | null | null | SqueakJS: A Lively Squeak VM
============================
This GitHub repository contains mainly the interpreter. The HTML user interface is written using the Lively Kernel.
The "demo" directory contains a bare-bones html page, just enough to run the VM, and a "mini.image" (a stripped-down Squeak 2.2).
The "etoys" directory has an html page to run an Etoys image from an external server.
Please visit the [project home page][homepage]!
Running it
----------
**Simplest**
* [click here][simple] to run a minimal version. This is the simple demo included in this repo.
* or [click here][etoys] to run Etoys. Also included in this repo.
**Run your own Squeak image**
* Go to the [full SqueakJS][full] page. Drag an image from your local files into the Lively page. Click "Run".
Chrome works best for development at the moment. But current Safari and IE versions outperform Chrome and Firefox by a significant margin. YMMV.
Fixes to improve browser compatibility are highly welcome!
Installing locally
------------------
**Without Lively (simpler)**
* download and unpack the [ZIP archive][zip] (or clone the [github repo][repo])
* serve the SqueakJS directory using a local web server.
TIP:If you have python try out something like
```python
python -m SimpleHTTPServer 9090
```
* in your web browser, open the SqueakJS/demo/simple.html file
Now Squeak should be running.
The reason for having to run from a web server is because the mini.image is loaded with an XMLHttpRequest which does not work with a file URL.
**In Lively (nicer)**
* install [Lively][lively]
* inside the Lively directory, make a "users/bert" folder and put the SqueakJS directory there
* open the blank.html page using your web browser
* get a Squeak morph from the PartsBin
* save the world under a different name
How to modify it
----------------
**In Lively**
* if you installed with Lively, use that to change the code
* all changes take effect immediately
**Without Lively**
* use any text editor
* you have to reload the page for your changes to take effect
How to share your changes
-------------------------
* easiest for me is if you create a [pull request][pullreq]
* otherwise, send me patches, or a Lively Changeset
Contributions are very welcome!
Things to work on
-----------------
Running a current Squeak image would be nice - beyond the stuff needed for Etoys it should really just be a handful changes to correctly interpret closures.
As for optimizing I think the way to go is a JIT compiler that creates actual Javascript functions from Squeak methods. And to make BitBlt fast, we could probably use WebGL.
There's also interesting stuff I probably won't be working on. Like a kind-of FFI that lets you call Javascript libraries directly. Or a plugin that gives you access to the DOM (I do have the mechanism for VM plugins in place already). With that you could write a native HTML UI which would certainly be much faster than BitBlt.
Networking would be interesting, too. How about implementing the SocketPlugin via WebSockets? Parallelize the VM with WebWorkers?
There's a gazillion exciting things to do :)
-- Bert Freudenberg, July 2014
[repo]: https://github.com/bertfreudenberg/SqueakJS
[homepage]: http://bertfreudenberg.github.io/SqueakJS/
[simple]: http://bertfreudenberg.github.io/SqueakJS/demo/simple.html
[etoys]: http://bertfreudenberg.github.io/SqueakJS/demo/simple.html
[full]: http://lively-web.org/users/bert/squeak.html
[zip]: https://github.com/bertfreudenberg/SqueakJS/archive/master.zip
[lively]: https://github.com/LivelyKernel/LivelyKernel
[pullreq]: https://help.github.com/articles/using-pull-requests
| 41.640449 | 328 | 0.736373 | eng_Latn | 0.989723 |
dca935f726d1d582029a2cbecf132e41d2ce3943 | 127 | md | Markdown | packages/EPIC-Core.package/EpicPushButton.class/README.md | hpi-swa-teaching/EPIC-Digitalsimulator | f6c04cefc35e6bc1745e2c6f72020f861d1e6145 | [
"MIT"
] | null | null | null | packages/EPIC-Core.package/EpicPushButton.class/README.md | hpi-swa-teaching/EPIC-Digitalsimulator | f6c04cefc35e6bc1745e2c6f72020f861d1e6145 | [
"MIT"
] | null | null | null | packages/EPIC-Core.package/EpicPushButton.class/README.md | hpi-swa-teaching/EPIC-Digitalsimulator | f6c04cefc35e6bc1745e2c6f72020f861d1e6145 | [
"MIT"
] | null | null | null | I return a value when I am pressed.
Instance Variables
pressed: <Boolean>
pressed
- returns whether the button is pressed
| 15.875 | 40 | 0.76378 | eng_Latn | 0.992428 |
dcaad793e65b2966941b9ff19a893727e9a868c7 | 110 | md | Markdown | README.md | d-H-/covid_quebec | c4703b4a13f6b7fcbd62675de37c01ad3b8cfade | [
"MIT"
] | 1 | 2020-03-20T13:48:03.000Z | 2020-03-20T13:48:03.000Z | README.md | d-H-/covid_quebec | c4703b4a13f6b7fcbd62675de37c01ad3b8cfade | [
"MIT"
] | null | null | null | README.md | d-H-/covid_quebec | c4703b4a13f6b7fcbd62675de37c01ad3b8cfade | [
"MIT"
] | null | null | null | # covid_quebec
Makes a plot of COVID-19 cases in Quebec using data from the Montreal Gazette and Santé Quebec
| 36.666667 | 94 | 0.809091 | eng_Latn | 0.993492 |
dcaadddf2cd1bf772d0f9333f8a61a7dc24169c9 | 85 | md | Markdown | pile/.md/d.vars.u.md | GoLangsam/twos | a70d3753461cced0b10b2ec3a314b3dc59b1dc40 | [
"MIT"
] | 1 | 2020-05-12T10:16:52.000Z | 2020-05-12T10:16:52.000Z | pile/.md/d.vars.u.md | GoLangsam/twos | a70d3753461cced0b10b2ec3a314b3dc59b1dc40 | [
"MIT"
] | null | null | null | pile/.md/d.vars.u.md | GoLangsam/twos | a70d3753461cced0b10b2ec3a314b3dc59b1dc40 | [
"MIT"
] | null | null | null | var TypeOf = core.TypeOf ...
var Cardinal = core.Cardinal
var Ordinal = core.Ordinal
| 21.25 | 28 | 0.741176 | yue_Hant | 0.713453 |
dcac9bb7b3d2d21611830020a7eb3e710918c0a7 | 600 | md | Markdown | content/strategies/evacuation-routes/example1.md | plot-and-scatter/living-breakwaters-web | 9d63bfaecfc2e706a859627aa9d571d1fb91abf7 | [
"MIT"
] | 1 | 2020-07-20T17:31:29.000Z | 2020-07-20T17:31:29.000Z | content/strategies/evacuation-routes/example1.md | plot-and-scatter/living-breakwaters-web | 9d63bfaecfc2e706a859627aa9d571d1fb91abf7 | [
"MIT"
] | 71 | 2020-08-12T21:18:32.000Z | 2021-11-26T22:06:14.000Z | content/strategies/evacuation-routes/example1.md | plot-and-scatter/living-breakwaters-web | 9d63bfaecfc2e706a859627aa9d571d1fb91abf7 | [
"MIT"
] | null | null | null | ### Windsor Flood Evacuation Route
#### Windsor SYD, Australia
In 2003, the development of a new flood evacuation route was proposed by the Roads and Traffic Authority of Syndey northwest.[^1] The proposed design included a 2.6 km elevated roadway that is 12 meters above the floodplain that stretches across the South Creek.[^2] The project was designed to address the challenges related to environmental sensitivities, soil type, and erosion.[^3] The evacuation route was officially opened in 2007 and now provides egress during flood events and relief for traffic congestion on Windsor Road.[^4] | 150 | 536 | 0.8 | eng_Latn | 0.999869 |
dcacd7d06d4923d4168e24c43d271986998f3656 | 1,390 | md | Markdown | src/3rdparty/libraw/README.md | fmeschia/pixinsight-class-library | 11b956e27d6eee3e119a7b1c337d090d7a03f436 | [
"JasPer-2.0",
"libtiff"
] | null | null | null | src/3rdparty/libraw/README.md | fmeschia/pixinsight-class-library | 11b956e27d6eee3e119a7b1c337d090d7a03f436 | [
"JasPer-2.0",
"libtiff"
] | null | null | null | src/3rdparty/libraw/README.md | fmeschia/pixinsight-class-library | 11b956e27d6eee3e119a7b1c337d090d7a03f436 | [
"JasPer-2.0",
"libtiff"
] | null | null | null | Building LibRaw for PCL Development
-----------------------------------
Since version 1.8.8 of PixInsight, LibRaw no longer forms part of the PCL distribution. This is because the authors of LibRaw have made changes to their code distribution that prevent its inclusion as a standard PCL static library.
To build LibRaw for PCL development, follow these steps:
1. Download LibRaw's source code. We recommend cloning the [https://github.com/LibRaw/LibRaw](official LibRaw GitHub repository) to a directory on your local file system. Let's represent this directory as $LIBRAWDIR.
2. Copy the required Makefile.pcl.xxx file for your platform (where xxx is one of linux, macosx or windows; unfortunately LibRaw cannot be compiled on FreeBSD) to $LIBRAWDIR. These files are available on [https://gitlab.com/pixinsight/PCL/tree/master/src/3rdparty/libraw]($PCLSRCDIR/3rdparty/libraw).
3. Use the appropriate Makefile.pcl.xxx file to build LibRaw. For example:
$ cd $LIBRAWDIR
$ make -j -f Makefile.pcl.linux
4. You'll find the LibRaw static library file on the $LIBRAWDIR/lib subdirectory:
On Linux and macOS: liblibraw_r-pxi.a
On Windows: libraw_r-pxi.lib
Copy this file to your $PCLLIBDIR directory.
5. Copy the $LIBRAWDIR/libraw subdirectory to your $PCLINCDIR directory.
Now you can build the RAW module on your platform.
******
###### Copyright (C) 2003-2020 Pleiades Astrophoto
| 46.333333 | 300 | 0.76259 | eng_Latn | 0.951218 |
dcacd91b63533553560e5209df184e99aac0561d | 25 | md | Markdown | README.md | marangisto/Scales | 880b99df86e1f38a1408531d2761987f4b021be4 | [
"MIT"
] | 1 | 2020-03-10T05:20:22.000Z | 2020-03-10T05:20:22.000Z | README.md | marangisto/Scales | 880b99df86e1f38a1408531d2761987f4b021be4 | [
"MIT"
] | null | null | null | README.md | marangisto/Scales | 880b99df86e1f38a1408531d2761987f4b021be4 | [
"MIT"
] | null | null | null | # Scales
Scale Generator
| 8.333333 | 15 | 0.8 | kor_Hang | 0.354535 |
dcad21b6abe4a44c506cba289bf973983ae46c86 | 195 | md | Markdown | _posts/2021-05-14-FirstText.md | jakalroni/jakalroni.github.io | 2836a165dade78d973e7101cef8bfb5039de7c15 | [
"MIT"
] | null | null | null | _posts/2021-05-14-FirstText.md | jakalroni/jakalroni.github.io | 2836a165dade78d973e7101cef8bfb5039de7c15 | [
"MIT"
] | null | null | null | _posts/2021-05-14-FirstText.md | jakalroni/jakalroni.github.io | 2836a165dade78d973e7101cef8bfb5039de7c15 | [
"MIT"
] | null | null | null | ---
title: GitHub.io 상단 탭 설정
tags: etc
key: page-202105140135
summary : GitHub.io 상단 탭 설정하는 방법을 알아보자
---
## GitHub 블로그의 상단 탭 설정 방법?
jakalroni.github.io의 _data 디렉토리 <b> -> </b> navigation.yml 설정
| 19.5 | 61 | 0.687179 | kor_Hang | 0.998464 |
dcae23effb8d0e4a5c09e719673bbf826659a3f1 | 7,889 | md | Markdown | _posts/Book/modern-java/2021-08-29-ch8.md | mongsilJeong/mongsilJeong.github.io | b1e14f9edcb9de461428c65569178ff5f4aff1b6 | [
"MIT"
] | null | null | null | _posts/Book/modern-java/2021-08-29-ch8.md | mongsilJeong/mongsilJeong.github.io | b1e14f9edcb9de461428c65569178ff5f4aff1b6 | [
"MIT"
] | null | null | null | _posts/Book/modern-java/2021-08-29-ch8.md | mongsilJeong/mongsilJeong.github.io | b1e14f9edcb9de461428c65569178ff5f4aff1b6 | [
"MIT"
] | 2 | 2022-02-01T18:46:42.000Z | 2022-02-02T13:10:17.000Z | ---
title: "[Ch8] 컬렉션 API 개선"
date: 2021-08-29
excerpt: "[모던 자바 인 액션] 책을 읽고 개인적으로 정리한 내용입니다."
tags: [book, modern-java-in-action]
classes: narrow
toc: true
toc_sticky: true
categories: [book/modern-java-in-action]
---
이번 챕터에서는 자바8, 자바9에 추가되어 편리해진 새로운 컬렉션 API기능에 대해 알아본다.
## 컬렉션 팩토리
자바9에서는 리스트, 집합, 맵과 같은 컬렉션 객체를 쉽게 만드는 팩토리 메서드를 제공한다.
### 리스트 팩토리
`List.of` 팩토리 메서드를 이용해서 간단하게 리스트를 만들 수 있다.
``` java
List<String> friends = List.of("Raphael", "Olivia", "Thibaut");
System.out.println(friends);
```
`Arrays.asList()` 와 같은 기능을 한다. 고정 크기의 리스트를 만들었기 때문에, 요소를 추가하려 하면 `UnsupportedOperationException`이 발생한다.
스트림 API를 이용해 리스트를 만들 수도 있다. `Collectors.toList()` 컬렉터로 스트림을 리스트로 변환할 수 있다. 다만, 데이터 처리 형식을 설정하거나 데이터를 변환할 필요가 없다면 사용하기 간편한 팩토리 메서드를 이용할 것을 권장한다.
### 집합 팩토리
`Set.of` 팩토리 메서드를 이용해서 바꿀 수 없는 집합을 만들 수 있다.
``` java
Set<String> friends = Set.of("Raphael", "Olivia", "Thibaut");
System.out.prinln(friends);
```
Set은 고유의 요소만 포함할 수 있기 때문에, 중복된 요소를 추가하고자 하면 `IllegalArgumentException` 이 발생한다.
### 맵 팩토리
`Map.of` 를 사용해서 맵을 만들 수 있다. 맵을 만들려면 key 와 value 가 필요한데, 두 가지 방법으로 이를 지정할 수 있다.
- 키와 값을 번갈아 제공한다.
``` java
Map<String, Integer> ageOfFriends = Map.of("Sunmin", 28, "Hyerin", 30, "SeJeong", 29, "KwangWoon", 31);
System.out.println(ageOfFriends);
```
열 개 이하의 키와 값 쌍을 가진 맵을 만들때는 이 메서드가 유용하다.
- Map.Entry<K, V> 객체 생성
``` java
import static java.util.Map.entry;
Map<String, Integer> ageOfFriends = Map.ofEntries(Map.entry("Sunmin", 28),
Map.entry("Hyerin", 30),
Map.entry("Kwangwoon", 31),
Map.entry("SeJeong", 29));
```
## 리스트와 집합 처리
자바 8에서는 List, Set 인터페이스에 다음과 같은 메서드를 추가했다.
- removeIf : predicate 를 만족하는 요소를 제거한다. List나 Set을 구현하거나 그 구현을 상속받은 모든 클래스에서 이용할 수 있다.
- replaceAll : 리스트에서 이용할 수 있는 기능으로 UnaryOperator 함수를 이용해 요소를 바꾼다.
- sort : List 인터페이스에서 제공하는 기능으로 리스트를 정렬한다.
이들 메서드는 호출한 컬렉션 자체를 바꾼다. 기존에는 컬렉션을 바꾸려면 복잡한 코드가 필요했는데, 이를 간소화하기 위해서 이러한 메서드가 추가된 것이다.
### removeIf 메서드
``` java
for(Transaction transaction : transactions) {
if(Character.isDigit(transaction.getReferenceCode().charAt(0))) {
transactions.remove(transaction);
}
}
```
위 코드는 transactions 리스트를 for-each 로 돌면서, 특정 0 번 트랜잭션을 리스트에서 삭제하는 코드이다.
하지만 `ConcurrentModificationException` 에러를 일으킨다. 왜냐하면 Iterator 객체와 Collection 객체가 리스트 컬렉션에 대한 참조를 하고 있기 때문이다. 에러를 없애려면 Iterator 객체를 명시적으로 사용해야 한다.
``` java
for(Iterator<Transaction> iterator = transactions.iterator(); iterator.hasNext();) {
Transaction transaction = iterator.next();
if(Character.isDigit(transaction.getReferenceCode().charAt(0))) {
iterator.remove(transaction);
}
}
```
코드가 많이 복잡해졌다. 위의 코드를 자바8의 removeIf 로 수정하면 간결할뿐만 아니라 버그도 예방할 수 있다.
``` java
transactions.removeIf(transaction -> Character.isDigit(transaction.getReferenceCode().charAt(0)));
```
### replaceAll 메서드
List 인터페이스의 `replaceAll` 메서드를 이용해 리스트의 각 요소를 새로운 요소로 바꿀 수 있다.
``` java
referenceCodes.stream() // [a12, C14, b13]
.map(code -> Character.toUpperCase(code.charAt(0)) + code.substring(1))
.collect(Collectors.toList())
.forEach(System.out::println); // [A12, C14, B13]
```
하지만 이 코드는 새 문자열 컬렉션을 만든다 (Collectors.toList()) `replaceAll` 메서드를 쓰면 기존의 컬렉션을 바꿀수 있다.
``` java
referenceCodes.replaceAll(code -> Character.toUpperCase(code.charAt(0)) + code.substring(1)));
```
## 맵 처리
자바8에서는 Map 인터페이스에 몇 가지 디폴트 메서드가 추가되었다.
### forEach 메서드
맵에서 키와 값을 반복하면서 확인하는 작업은 정말 귀찮은 일이었다.
자바 8서부터 BiConsumer를 인수로 받는 `forEach` 메서드를 지원해서 <키, 값> 엔트리를 쉽게 반복할 수 있다.
``` java
ageOfFriends.forEach((friend, age) -> System.out.println(friend + " is " + age + " years old"));
```
### 정렬 메서드
다음 두 개의 새로운 유틸리티를 이용하면 맵의 항목을 값 또는 키를 기준으로 정렬할 수 있다.
- Entry.comparingByValue
- Entry.comparingByKey
``` java
ageOfFriends.entrySet().stream()
.sorted(Map.Entry.comparingByKey())
.forEachOrdered(System.out::println);
// Hyerin=30
// Kwangwoon=31
// SeJeong=29
// Sunmin=28
```
### getOrDefault 메서드
기존에는 찾으려는 키가 존재하지 않으면 null 을 반환하므로 NPE를 방지하려면 요청 결과가 널인지 확인해야 한다. `getOrDefault` 메서드를 사용하면 쉽게 null 을 처리할 수 있다.
``` java
System.out.println(ageOfFriends.getOrDefault("JongSun", -1));
```
### 계산 패턴
맵에 키가 존재하는지 여부에 따라 어떤 동작을 실행하고 저장해야 하는 상황이 필요할 때가 있다. 보통 if 문을 써서 키의 존재를 확인한 뒤, 있으면 냅두고 없으면 맵에 값을 넣는 형식으로 구현했다. 자마8에서는 다음 세 가지 연산을 제공했고, 이런 상황에서 도움을 준다.
- computeIfAbsent : 제공된 키에 해당하는 **값**이 없으면(값이 없거나 널), 키를 이용해 새 값을 계산하고 맵에 추가한다.
- computeIfPresent : 제공된 키가 존재하면 새 값을 계산하고 맵에 추가한다.
- compute : 제공된 키로 새 값을 계산하고 맵에 저장한다.
위와 같은 상황인 정보를 캐시할 때 computeIfAbsent를 활용할 수 있다.
``` java
Map<String, byte[]> dataToHash = new HashMap<>();
lines.forEach(line -> dataToHash.computeIfAbsent(line, this::calculateDigest)); // line을 반복하면서, dataToHash에 Key로 존재하지 않으면 동작을 실행한다.
private byte[] calculateDigest(String key){
// SHA256 으로 암호화한 바이트
}
```
calculateDigest 로 계산된 값을 맵에 키(line)로 추가한다.
### 삭제 패턴
키가 특정한 값과 연관되었을 때만 항목을 제거하는 메서드를 제공한다.
``` java
favouriteMovies.remove(key, value);
```
### 교체 패턴
맵의 항목을 바꾸는데 사용할 수 있는 두 개의 메서드가 맵에 추가되었다.
- replaceAll : BiFunction을 적용한 결과로 각 항목의 값을 교체한다.
- `BiFunction<? super K, ? super V, ? extends V> function`
- Replace : 키가 존재하면 맵의 값을 바꾼다.
``` java
Map<String, String> favouriteMovies = new HashMap<>();
favouriteMovies.put("Raphael", "Star Wars");
favouriteMovies.put("Olivia", "james bond");
favouriteMovies.replaceAll((freind, movie) -> movie.toUpperCase());
System.out.println(favouriteMovies);
```
### Merge
두 개의 맵을 합친다고 가정할 때, `putAll` 을 사용하면 (중복된 키가 없다면) 정상적으로 작동한다. 값을 좀더 유연하게 합쳐야 한다면 `merge` 메서드를 이용할 수 있다.
중복된 키를 어떻게 합칠지 결정하는 `Bifunction`을 인수로 받는다.
``` java
Map<String, String> family = Map.ofEntries(entry("Teo", "Star Wars"), entry("Cristina", "James Bond"));
Map<String, String> friends = Map.ofEntries(entry("Raphael", "Star Wars"), entry("Cristina", "Matrix"));
```
위 두개의 맵에는 Cristina 가 중복으로 있다. 충동을 해결하기 위해서는 아래와 같이 합칠 수 있다.
``` java
Map<String, String> everyone = new HashMap<>(family);
friends.forEach((k, v) -> everyone.merge(k, v, (movie1, movie2) -> movie1 + " & " + movie2)); // 중복된 키가 있으면 두 값을 연결
System.out.println(everyone);
```
merge 메서드
```
default V merge(K key, V value,
BiFunction<? super V, ? super V, ? extends V> remappingFunction)
```
자바독에 따르면, 두 번째 인수로 주어지는 value 는 "키와 연과된 기존 값에 합쳐질 **널이 아닌 값 또는 값이 없거나 키에 널 값이 연관되어 있다면 이 값을 키와 연결**" 이라고 적혀있다.
즉, 카의 반환값이 null 이면 (디폴트로) value 값을 넣어준다는 뜻이다.
## 개선된 ConcurrentHashMap
ConcurrentHashMap 클래스는 동시성 친화적이며 최신 기술을 반영한 HashMap 이다.
HashMap 과 ConcurrentHashMap 의 기본적인 동작원리와 차이점에 대한 보충 자료를 기록해둔다.
https://pplenty.tistory.com/17
https://happy-coding-day.tistory.com/151
---
### 리듀스와 검색
ConcurrentHashMap 은 세가지 새로운 연산을 지원한다.
- forEach : 각 (키, 값) 쌍에 주어진 액션을 수행
- reduce : 모든 (키, 값) 쌍을 제공된 리듀스 함수를 이용해 결과로 합침
- search : 널이 아닌 값을 반환할 때까지 각 (키, 값) 쌍에 함수를 적용
이들 연산은 ConcurrentHashMap 의 상태를 잠그지 않고 연산을 수행한다.
따라서 이들 연산에 제공한 함수는 계싼이 진행되는 동안 바뀔 수 있는 객체, 값, 순서 등에 의존하지 않아야 한다.
또한, 병렬성 threshold를 지정해야 한다. 맵의 크기가 주어진 기준값보다 작으면 순차적으로 연산을 실행한다. (기준값 이상이면 그 기준으로 병렬화 한다는 뜻)
``` java
ConcurrentHashMap<String, Long> map = new ConcurrentHashMap<>();
long parallelismThreshold = 1;
Optional<Integer> maxValue = Optional.ofNullable(map.reduceValues(parallelismThreshold, Long::max));
```
맵의 값을 모두 순회하여 최대값을 뽑아 내는 예제이다.
기본값에는 전용 `each`, `reduce` 연산이 제공되므로 박싱작업을 할 필요 없이 효율적으로 작업을 처리할 수 있다.
- reduceValuesToInt, reduceKeysToLong
### 계수
`mappingCount` 맵의 매핑 개수를 반환하는 메서드를 제공한다.
``` java
public long mappingCount() {
long n = sumCount();
return (n < 0L) ? 0L : n; // ignore transient negative values
}
```
자바독에 따르면 ConcurrentHashMap의 매핑 개수는 `size` 메서드 대신 `mappingCount` 를 사용하는 것을 권장한다.
> Returns the number of mappings. This method should be used instead of size because a ConcurrentHashMap may contain more mappings than can be represented as an int.
### 집합뷰
`keySet` : ConcurrentHashMap을 집합 뷰(Set)로 반환한다.
| 26.742373 | 165 | 0.675117 | kor_Hang | 1.000008 |
dcafc08480600659f1114741b288de45f28a62e5 | 35 | md | Markdown | README.md | missing009/homepage | 23583cc5a629fbfff747a5f8c77a8eaf43a30111 | [
"MIT"
] | null | null | null | README.md | missing009/homepage | 23583cc5a629fbfff747a5f8c77a8eaf43a30111 | [
"MIT"
] | 7 | 2021-01-11T19:55:54.000Z | 2021-01-11T21:11:06.000Z | README.md | missing009/homepage | 23583cc5a629fbfff747a5f8c77a8eaf43a30111 | [
"MIT"
] | null | null | null | # Resume
resume for Binary Studio
| 11.666667 | 25 | 0.771429 | kor_Hang | 0.608266 |
dcb1538d01c1bfb875995c5ab76975a30de7da0c | 37 | md | Markdown | README.md | earljay-caoile-401-advanced-javascript/lab-02 | ba2a085c7098125e182dbeba16cce15944edeb6e | [
"MIT"
] | null | null | null | README.md | earljay-caoile-401-advanced-javascript/lab-02 | ba2a085c7098125e182dbeba16cce15944edeb6e | [
"MIT"
] | null | null | null | README.md | earljay-caoile-401-advanced-javascript/lab-02 | ba2a085c7098125e182dbeba16cce15944edeb6e | [
"MIT"
] | null | null | null | # lab-02
Lab 01 for CF JS 401 Nights
| 12.333333 | 27 | 0.702703 | kor_Hang | 0.392703 |
dcb2b3180e4ecd95cdca125b5159abe3ed55e369 | 1,030 | md | Markdown | _posts/2015-07-03-bihar-ssc-health-department-job-posts.md | anchalrani/getopportunity | f45b2ea40ac4c52167181454e966fa9e2d7ee8a2 | [
"MIT"
] | null | null | null | _posts/2015-07-03-bihar-ssc-health-department-job-posts.md | anchalrani/getopportunity | f45b2ea40ac4c52167181454e966fa9e2d7ee8a2 | [
"MIT"
] | null | null | null | _posts/2015-07-03-bihar-ssc-health-department-job-posts.md | anchalrani/getopportunity | f45b2ea40ac4c52167181454e966fa9e2d7ee8a2 | [
"MIT"
] | null | null | null | ---
layout: post
title: Bihar SSC Health Department Job posts last date 14th July-2015
date: 2015-07-03 18:31
comments: true
tags: Bihar Commission Health Laboratory Online SSC Technician
archive: false
---
Bihar SSC invites Online application from Indian nationals for the following posts :
X-Ray Technician : 215 posts, Pay Scale : Rs. 5200 - 20220 grade Pay Rs. 2800/- , Age : 21-37 years, relaxation in age as per rules.
Operation Theater Assistant : 236 posts, Pay Scale : Rs. 5200 - 20220 grade Pay Rs. 2400/- , Age : 21-37 years, relaxation in age as per rules.
Laboratory Attendant : 1772 posts, Pay Scale : Rs. 5200 - 20220 grade Pay Rs. 2800/- , Age : 21-37 years, relaxation in age as per rules.
Application Fee : Rs.375/- (Rs.100/- for SC./ST of Bihar only) to be deposited in any branch of SBI through a payment challan.
**How to Apply** : Apply Online at Bihar SSC website on or before 14/07/2015 only.
Please visit <http://bssc.bih.nic.in> for details and online submission of application.
| 34.333333 | 144 | 0.725243 | eng_Latn | 0.895606 |
dcb2d0b9b08376edda82886c0ff170016962390c | 2,924 | md | Markdown | README.md | levyx/mapkeeper | adb2061031a2a8759771afb401c10c108e975ce0 | [
"Apache-2.0"
] | 1 | 2015-09-18T06:18:08.000Z | 2015-09-18T06:18:08.000Z | README.md | levyx/mapkeeper | adb2061031a2a8759771afb401c10c108e975ce0 | [
"Apache-2.0"
] | null | null | null | README.md | levyx/mapkeeper | adb2061031a2a8759771afb401c10c108e975ce0 | [
"Apache-2.0"
] | 2 | 2015-09-18T06:18:13.000Z | 2019-03-01T04:26:32.000Z | # MapKeeper
## Links
* [MapKeeper Homepage](https://github.com/m1ch1/mapkeeper/wiki)
* [Getting Started](https://github.com/m1ch1/mapkeeper/wiki/Getting-Started)
## License Info
Mapkeeper uses several 3rd party open source libraries and tools. The remainder
of this file summarizes the tools used, their purpose, and the licenses under
which they're released.
### Thrift
Thrift is a serialization and RPC framework:
http://thrift.apache.org/
[Apache 2.0 license](http://svn.apache.org/viewvc/thrift/trunk/LICENSE?view=markup)
### Boost
Boost is a collection of utility libraries for C++:
http://www.boost.org/
[Boost Software License](http://www.boost.org/users/license.html)
### libevent
Libevent is an asynchronous event notification library.
http://libevent.org
[3-clause BSD license](http://libevent.org/LICENSE.txt)
### Oracle Berkeley DB
Oracle Berkeley DB is a B-tree based transactional key-value store.
http://www.oracle.com/technetwork/database/berkeleydb/overview/index.html
[Open Source License for Oracle Berkeley DB](http://www.oracle.com/technetwork/database/berkeleydb/downloads/oslicense-093458.html)
### Berkeley DB Java Edition
Oracle Berkeley DB Java Edition is log structured transactional key-value store.
http://www.oracle.com/technetwork/database/berkeleydb/overview/index-093405.html
[Open Source License for Oracle Berkeley DB Java Edition](http://www.oracle.com/technetwork/database/berkeleydb/downloads/jeoslicense-086837.html)
### HandlerSocket
HandlerSocket is a MySQL plugin that supports simple key-value operations
without going through SQL layer.
https://github.com/DeNADev/HandlerSocket-Plugin-for-MySQL
[HandlerSocket License](https://github.com/DeNADev/HandlerSocket-Plugin-for-MySQL/blob/master/COPYING)
### LevelDB
LevelDB is a LSM-Tree based key-value store library.
http://code.google.com/p/leveldb/
[2-clause BSD license](http://www.opensource.org/licenses/bsd-license.php)
### RocksDB
RocksDB is an optimized key-value store library based on LevelDB.
http://rocksdb.org/
https://github.com/facebook/rocksdb
[2-clause BSD license](http://www.opensource.org/licenses/bsd-license.php)
### MySQL
MySQL is a popular SQL database.
http://mysql.com/
[MySQL Licensing Policy](http://www.mysql.com/about/legal/licensing/index.html)
### YCSB
YCSB is a framework and common set of workloads for evaluating the performance
of different key-value stores.
https://github.com/brianfrankcooper/YCSB
[Apache 2.0 License](https://github.com/brianfrankcooper/YCSB/blob/master/LICENSE.txt)
### Kyoto Cabinet
Kyoto Cabinet is a key-value store library.
http://fallabs.com/kyotocabinet
[Kyoto Cabinet License](http://fallabs.com/kyotocabinet/spex-en#license)
### OpenLDAP LMDB
OpenLDAP LMDB is a B-tree based memory-mapped transactional key-value store.
http://symas.com/mdb/
[OpenLDAP License](http://www.openldap.org/software/release/license.html)
| 26.107143 | 146 | 0.775992 | kor_Hang | 0.401543 |
dcb5670a0c07b1c19bf531bb1e457bc52de50a79 | 2,709 | md | Markdown | docs/build/reference/experimental-module.md | Mdlglobal-atlassian-net/cpp-docs.cs-cz | 803fe43d9332d0b8dda5fd4acfe7f1eb0da3a35e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/build/reference/experimental-module.md | Mdlglobal-atlassian-net/cpp-docs.cs-cz | 803fe43d9332d0b8dda5fd4acfe7f1eb0da3a35e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/build/reference/experimental-module.md | Mdlglobal-atlassian-net/cpp-docs.cs-cz | 803fe43d9332d0b8dda5fd4acfe7f1eb0da3a35e | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-28T15:53:26.000Z | 2020-05-28T15:53:26.000Z | ---
title: '/Experimental: Module (povolení podpory modulu)'
description: 'Pomocí možnosti kompilátoru/Experimental: Module můžete povolit experimentální podporu kompilátoru pro moduly.'
ms.date: 09/03/2019
f1_keywords:
- module
- /experimental:module
helpviewer_keywords:
- module
- /experimental:module
- Enable module support
ms.openlocfilehash: 82cce127ad5a2f87d11e4a653035bd80ea9f5001
ms.sourcegitcommit: fd0f8839da5c6a3663798a47c6b0bb6e63b518bd
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 09/04/2019
ms.locfileid: "70294427"
---
# <a name="experimentalmodule-enable-module-support"></a>/Experimental: Module (povolení podpory modulu)
Umožňuje experimentální podporu kompilátoru pro moduly, jak je uvedeno v konceptu C++ 20 Standard.
## <a name="syntax"></a>Syntaxe
> **/Experimental: modul** [ **-** ]
## <a name="remarks"></a>Poznámky
Podporu experimentálních modulů můžete povolit pomocí možnosti kompilátoru **/Experimental: Module** společně s možností [/std: c + + nejnovější](std-specify-language-standard-version.md) . Můžete použít **/Experimental: Module-** k explicitnímu vypnutí podpory modulu.
Tato možnost je k dispozici od začátku v aplikaci Visual Studio 2015 Update 1. Od verze Visual Studio 2019 verze 16,2 nejsou koncepty C++ 20 Standard zcela implementovány v kompilátoru společnosti Microsoft C++ . Pomocí funkce moduly můžete vytvořit moduly s jedním oddílem a importovat standardní moduly knihovny poskytované společností Microsoft. Modul a kód, který ho využívá, musí být kompilovány se stejnými možnostmi kompilátoru.
Další informace o modulech a o tom, jak je lze použít a vytvořit, naleznete v tématu [Přehled modulů v C++ ](../../cpp/modules-cpp.md).
Tady je příklad možností příkazového řádku kompilátoru, které slouží k vytvoření modulu exportu ze zdrojového souboru *Module. IXX*:
```cmd
cl /EHsc /MD /experimental:module /module:export /module:name ModuleName /module:wrapper C:\Output\path\ModuleName.h /module:output C:\Output\path\ModuleName.ifc -c ModuleName.ixx
```
### <a name="to-set-this-compiler-option-in-the-visual-studio-development-environment"></a>Nastavení tohoto parametru kompilátoru ve vývojovém prostředí Visual Studio
1. Otevřete dialogové okno **stránky vlastností** projektu. Podrobnosti najdete v tématu [nastavení C++ vlastností kompilátoru a sestavení v sadě Visual Studio](../working-with-project-properties.md).
1. Nastavte rozevírací seznam **Konfigurace** na **všechny konfigurace**.
1. Vyberte stránku vlastností **Konfigurace** > **C/C++** > **jazyk** .
1. Upravte vlastnost **Povolit C++ moduly (experimentální)** a pak zvolte **OK**.
## <a name="see-also"></a>Viz také:
[/Zc (shoda)](zc-conformance.md)
| 50.166667 | 435 | 0.773717 | ces_Latn | 0.992987 |
dcb5ff50c9c283cd3af72644d67bd8517545caba | 531 | md | Markdown | Documentation/Api/Heirloom.Desktop/Heirloom.Desktop.Hardware/CpuVendor.md | Chamberlain91/heirloom | c3eaebb5a259386576d6912b39b6b012723cd60c | [
"Zlib"
] | 15 | 2019-09-01T13:14:49.000Z | 2022-01-24T08:14:30.000Z | Documentation/Api/Heirloom.Desktop/Heirloom.Desktop.Hardware/CpuVendor.md | Chamberlain91/heirloom | c3eaebb5a259386576d6912b39b6b012723cd60c | [
"Zlib"
] | 8 | 2019-09-12T04:50:28.000Z | 2020-07-24T06:36:21.000Z | Documentation/Api/Heirloom.Desktop/Heirloom.Desktop.Hardware/CpuVendor.md | Chamberlain91/heirloom | c3eaebb5a259386576d6912b39b6b012723cd60c | [
"Zlib"
] | 1 | 2020-05-13T14:55:12.000Z | 2020-05-13T14:55:12.000Z | # Heirloom.Desktop
> **Framework**: .NETStandard,Version=v2.1
> **Assembly**: [Heirloom.Desktop][0]
## CpuVendor (Enum)
> **Namespace**: [Heirloom.Desktop.Hardware][0]
Represents CPU vendors.
```cs
public enum CpuVendor : IComparable, IFormattable, IConvertible
```
| Name | Summary |
|---------|--------------------------------|
| AMD | The CPU was produced by AMD. |
| Intel | The CPU was produced by Intel. |
| Unknown | The vendor was unknown. |
[0]: ../../Heirloom.Desktop.md
| 23.086957 | 63 | 0.564972 | eng_Latn | 0.513452 |
dcb72c45ff3025b0022da2c29214fe381d3e7225 | 3,527 | md | Markdown | README.md | https-github-com-goodman-ops/permapaste | de435e4b84c6809d87a3b3c1acd11ccc5f745c7f | [
"MIT"
] | 4 | 2020-07-04T18:58:45.000Z | 2021-03-04T07:49:44.000Z | README.md | https-github-com-goodman-ops/permapaste | de435e4b84c6809d87a3b3c1acd11ccc5f745c7f | [
"MIT"
] | 4 | 2020-09-04T03:32:38.000Z | 2021-05-10T09:17:52.000Z | README.md | aidanok/permapaste | de435e4b84c6809d87a3b3c1acd11ccc5f745c7f | [
"MIT"
] | 2 | 2020-01-15T14:08:13.000Z | 2021-01-24T10:23:03.000Z |
## PermaPaste
Store plain text ascii documents, markdown documents on the Arweave Permaweb. Store pastes publicy or encrypted, using a password or secret link to encrypt the page.
You do not need your private wallet key to open previously encrypted pastes, only the tx id (url) and passphrase.
The current version is deployed at: https://arweave.net/z_NhVkfe-qeuHhc3i4GZewK-tLgwhFdF-S74-v8rC7A
Other features
- Permaweb App so the version you use now will always be available.
- Lightweight & Mobile friendly
- Supports GitHub flavour Markdown (v0.29)
- Recover and edit previous pastes by searching by wallet address or block number
- Use as mobile app (PWA), pick the Add to Home screen option from your mobile browser (tested in Mobile Firefox & Mobile Chrome)
## Use cases
- Publish public pastes & documents
- Private notepad
- Private sharing of pastes & documents
- Private storage of key files, wallet backups, etc.
- Publish ascii art, e-zines or other ascii based content on the Arweave blockchain
## Potential Future Features & Improvments
- Some form of bookmarking password protected documents to allow them to be looked up easier
- Publish Mode, publish previously saved documents.
- Render to HTML and publish as stand-alone web page
- Potentially publish to other arweave apps such as Scribe, Weavez, AskWeave etc. This wouldnt require any integration on their side but would need additional tags/metadata etc added depending on the app.
- Improve editor to insert markdown snippets for tables etc.
- Add support for more markdown extensions such as charts, uml diagrams, etc.
- More content types supported
- Password strength checking, heuristic and against haveibeenpwned database.
- File attachments
- Re-introduce hightlight.js and support code snippets more explicitly
- Use a wasm module for scrypt to decrease encrypt/decrypt time.
## Privacy
Documents are encrypted with AES256-GCM, with the key being created from a user supplied password or a randomly
generated 224bit value that is passed in the URL.
The password or key is passed through a KDF (key derivation function, or key stretching function) with a unique salt to make brute force attacks impractical. The KDF used is `PBKDF2(scrypt(PASSWORD))` with R=2^16, P=2 for scrypt and 250,000 iterations of PBKDF2.
The KDF and parameters were selected with influence from https://keybase.io/warp/ & https://blog.filippo.io/the-scrypt-parameters/ and considering mobile clients.
**All encryption and decryption is done client side in the browser** Your password or content never leaves your machine and only encrypted data is transmitted over the network to be stored or retrieved by the Arweave blockchain
**IMPORTANT**: This makes brute-forcing passwords difficult, but trivial passwords like 12345 or common phrases could still be cracked easily, so **make sure to use a strong & unique password**
For scrypt we select the [scrypt-async](https://github.com/dchest/scrypt-async-js) npm library, due it having zero dependencies, being quite widely used, and documented clearly for browser use.
## Reproducible build
To ensure the deployed verion is actually the same version as in this repo, the build is reproducible, see [REPRODUCIBLE.md](REPRODUCIBLE.md)
## Development
Built with Vue & Parcel Bundler
To run dev-mode with live-reload:
`npm run dev`
or
`npm run dev-lan` to run with https so usable from lan clients. This breaks firefox live-reload.
To build for production:
`npm run build`
| 45.805195 | 263 | 0.782535 | eng_Latn | 0.996032 |
dcb80f8d16954a7ed0801bdba041b68efb2b8378 | 3,709 | md | Markdown | _posts/2019-06-19-Download-out-of-chernobyl-a-girl-named-olga.md | Anja-Allende/Anja-Allende | 4acf09e3f38033a4abc7f31f37c778359d8e1493 | [
"MIT"
] | 2 | 2019-02-28T03:47:33.000Z | 2020-04-06T07:49:53.000Z | _posts/2019-06-19-Download-out-of-chernobyl-a-girl-named-olga.md | Anja-Allende/Anja-Allende | 4acf09e3f38033a4abc7f31f37c778359d8e1493 | [
"MIT"
] | null | null | null | _posts/2019-06-19-Download-out-of-chernobyl-a-girl-named-olga.md | Anja-Allende/Anja-Allende | 4acf09e3f38033a4abc7f31f37c778359d8e1493 | [
"MIT"
] | null | null | null | ---
layout: post
comments: true
categories: Other
---
## Download Out of chernobyl a girl named olga book
Was it not he who said. To the for you? "This isn't absolutely final as yet" she'd have this third snake to worry about? But it's a the whale, he was certain that she was dead. "Who doesn't? To the window in the driver's door, of the same breadth, and the "Yes, a thirty-year-old mother of two. Once, i, and then the micromini? " wouldn't know it to watch them chase balls, resorting to reckless measures that endangered In order to find them, early May," said Sinsemilla. to out of chernobyl a girl named olga everywhere I am, ii. Sure, furious about your optimism, i. Destroy this hill. Here comes the second reason, is a great power. 165, who wishes to speak directly to whoever is in charge there. " periods of the civilised nations. student. "Unfortunately, Mary Quant-of all things. "What makes you think it isn't?" She avoided his eyes! alongside the highway. Yenisej; in 1876, tracing the snowflake scars, improvements though nature herself trembled in trepidation of what Junior Cain might do, Junior had arrived late the previous evening, Polly continues north on Highway 93 In retrospect, not with clear distaste, still live in the Polar Sea, which hoping it'll get a piece of pie. seemed as though some curious personal relationship with time had allowed him shatter the very foundation of the universe. "Where the wise might come to learn from one another, arcing jets "And it was useful knowledge," Tern said, and then I saw him talking to you-the gentleman in the London Fog and the tux-and now I've lost him again, it was rough-sawn with a blade of grief, F, which it was impossible be making light of the subject if I were actually being out of chernobyl a girl named olga, and his toenails Surprised. Might that be possible?" anchored successfully in the Tigil. But before I proceed to give an by sea and storm but by their defenses that disguised the island and sent ships astray, a gray piece of dirty cloth that babble together spun a powerful gravity that could pull you toward oblivion if walked the last three blocks, and then rapidly to books meant for young adults, but I do remember hard Pernak and Jean looked at each other, they race into a dry slough of soft sand. If she stated and choice collection of ethnographical articles. lap, while the old wizard was up at Bog Lake gathering simples, he tugged a mass of tissues from the box with his left Chukches' mode of life, marked him as one who'd be hungry a minute after standing up from a daylong feast. " Merrick was speaking casually in a way that seemed to assume the subject to be common knowledge although Bernard still hadn't been told anything else about it officially; but at the same lime he was eyeing Bernard curiously, he believed that the American Top 40 out of chernobyl a girl named olga to feature American music exclusively. Through her eyes and mind he could see, this place offered out of chernobyl a girl named olga turn-of-the-century magazines? " She smiled a promise and winked. " Around and under more prep tables, whereupon the plaintiff, and what the work was used for was none of their concern! by them all having out of chernobyl a girl named olga to read and write and profess the that she was wrong, and some bracelets of iron or less skinned the bare earth and sheared green tresses from trees! They're all quite insane. " to conceal the true power of his feelings and actually thought he succeeded, Joe Lampion brooded about every known medical that supposed to mean something. And you will have children. Who ever heard of a colony without babies. They were surrounded at the Kara Sea. | 412.111111 | 3,601 | 0.787005 | eng_Latn | 0.999966 |
dcb82fa0f59712408da1f870e20d21c33bb52058 | 16,029 | md | Markdown | articles/sql-database/sql-database-data-discovery-and-classification.md | ialeksander1/azure-docs.pt-br | d5a7a2c2d4a31282f49bd1e35036cb1939911974 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/sql-database/sql-database-data-discovery-and-classification.md | ialeksander1/azure-docs.pt-br | d5a7a2c2d4a31282f49bd1e35036cb1939911974 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/sql-database/sql-database-data-discovery-and-classification.md | ialeksander1/azure-docs.pt-br | d5a7a2c2d4a31282f49bd1e35036cb1939911974 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Descoberta e Classificação de Dados
description: Classificação de & de descoberta de dados para banco de dados SQL do Azure e Análise sinapse do Azure
services: sql-database
ms.service: sql-database
ms.subservice: security
ms.custom: ''
titleSuffix: Azure SQL Database and Azure Synapse
ms.devlang: ''
ms.topic: conceptual
author: DavidTrigano
ms.author: datrigan
ms.reviewer: vanto
ms.date: 04/21/2020
tags: azure-synapse
ms.openlocfilehash: f05b4d4fec99aaa2fb79da46e2167d883d1f15ec
ms.sourcegitcommit: d57d2be09e67d7afed4b7565f9e3effdcc4a55bf
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 04/22/2020
ms.locfileid: "81766950"
---
# <a name="data-discovery--classification-for-azure-sql-database-and-azure-synapse-analytics"></a>Classificação de & de descoberta de dados para banco de dados SQL do Azure e Análise sinapse do Azure
A classificação & de descoberta de dados é incorporada ao Banco de Dados Do Azure SQL. Ele fornece recursos avançados para descobrir, classificar, rotular e relatar os dados confidenciais em seus bancos de dados.
Seus dados mais confidenciais podem incluir informações comerciais, financeiras, de saúde ou pessoais. Descobrir e classificar esses dados pode desempenhar um papel fundamental na abordagem de proteção de informações da sua organização. Esse recurso pode funcionar como a infraestrutura para:
- Ajudando a atender aos padrões de privacidade de dados e requisitos para conformidade normativa.
- Vários cenários de segurança, como monitoramento (auditoria) e alertas sobre acesso anômalo a dados confidenciais.
- Controlar o acesso e endurecer a segurança de bancos de dados que contenham dados altamente confidenciais.
A Classificação de & de Descoberta de Dados faz parte da oferta de Segurança Avançada de [Dados,](sql-database-advanced-data-security.md) que é um pacote unificado para recursos avançados de segurança SQL. Você pode acessar e gerenciar a Classificação de & de Descoberta de Dados através da seção central **sql advanced de segurança** de dados do portal Azure.
> [!NOTE]
> Este artigo refere-se ao Azure SQL Database e ao Azure Synapse Analytics. Para simplificar, usamos *o Banco de Dados SQL* aqui para se referir tanto ao Banco de Dados SQL quanto ao Azure Synapse. Para obter informações sobre o SQL Server (no local), consulte [SQL Data Discovery and Classification](https://go.microsoft.com/fwlink/?linkid=866999).
## <a name="what-is-data-discovery--classification"></a><a id="what-is-dc"></a>O que é classificação de & de descoberta de dados?
Data Discovery & Classification introduz um conjunto de serviços avançados e novos recursos de banco de dados SQL. Ele forma um novo paradigma de proteção de informações para o Banco de Dados SQL, destinado a proteger os dados e não apenas o banco de dados. O paradigma inclui:
- **Descoberta e recomendações:** O mecanismo de classificação verifica seu banco de dados e identifica colunas que contêm dados potencialmente sensíveis. Em seguida, fornece-lhe uma maneira fácil de rever e aplicar classificação recomendada através do portal Azure.
- **Rotulagem:** Você pode aplicar rótulos de classificação de sensibilidade persistentemente às colunas usando novos atributos de metadados que foram adicionados ao mecanismo de banco de dados SQL. Esses metadados podem ser usados para cenários avançados de auditoria e proteção baseados em sensibilidade.
- **Sensibilidade do conjunto de resultados da consulta:** A sensibilidade de um conjunto de resultados de consulta é calculada em tempo real para fins de auditoria.
- **Visibilidade:** Você pode visualizar o estado de classificação do banco de dados em um painel detalhado no portal Azure. Além disso, você pode baixar um relatório em formato Excel para usar para fins de conformidade e auditoria e outras necessidades.
## <a name="discover-classify-and-label-sensitive-columns"></a><a id="discover-classify-columns"></a>Descubra, classifique e rotule colunas sensíveis
Esta seção descreve as etapas para:
- Descobrir, classificar e rotular colunas que contenham dados confidenciais em seu banco de dados.
- Visualizando o estado de classificação atual do seu banco de dados e exportando relatórios.
A classificação inclui dois atributos de metadados:
- **Rótulos**: Os principais atributos de classificação, utilizados para definir o nível de sensibilidade dos dados armazenados na coluna.
- **Tipos de informações**: Atributos que fornecem informações mais granulares sobre o tipo de dados armazenados na coluna.
### <a name="define-and-customize-your-classification-taxonomy"></a>Definir e personalizar sua taxonomia de classificação
A Classificação & de Descoberta de Dados vem com um conjunto integrado de rótulos de sensibilidade e um conjunto integrado de tipos de informações e lógica de descoberta. Agora é possível personalizar essa taxonomia e definir um conjunto e uma classificação de constructos de classificação especificamente para seu ambiente.
Você define e personaliza sua taxonomia de classificação em um lugar central para toda a sua organização Azure. Esse local está no [Azure Security Center,](https://docs.microsoft.com/azure/security-center/security-center-intro)como parte de sua política de segurança. Somente alguém com direitos administrativos no grupo de gerenciamento raiz da organização pode fazer essa tarefa.
Como parte do gerenciamento de políticas para proteção de informações SQL, você pode definir rótulos personalizados, classificá-los e associá-los a um conjunto selecionado de tipos de informações. Você também pode adicionar seus próprios tipos de informações personalizadas e configurá-los com padrões de strings. Os padrões são adicionados à lógica de descoberta para identificar esse tipo de dados em seus bancos de dados.
Saiba mais sobre a personalização e o gerenciamento de sua política na [política de proteção de informações SQL como orientar](https://go.microsoft.com/fwlink/?linkid=2009845&clcid=0x409).
Depois que a política em toda a organização foi definida, você pode continuar classificando bancos de dados individuais usando sua política personalizada.
### <a name="classify-your-sql-database"></a>Classifique seu banco de dados SQL
1. Vá para o [Portal do Azure](https://portal.azure.com).
2. Vá para **a Segurança Avançada de Dados** sob o título **Segurança** no painel do banco de dados SQL do Azure. Selecione **Segurança avançada de dados**e selecione o cartão Data Discovery & **Classification.**
![Painel avançado de segurança de dados no portal Azure](./media/sql-data-discovery-and-classification/data_classification.png)
3. Na página **de classificação & de detecção de dados,** a guia **Visão geral** inclui um resumo do estado de classificação atual do banco de dados. O resumo inclui uma lista detalhada de todas as colunas classificadas, que você também pode filtrar para mostrar apenas partes específicas do esquema, tipos de informações e rótulos. Se você ainda não classificou nenhuma coluna, [pule para o passo 5](#step-5).
![Resumo do estado atual de classificação](./media/sql-data-discovery-and-classification/2_data_classification_overview_dashboard.png)
4. Para baixar um relatório no formato Excel, **selecione Exportar** no menu superior do painel.
5. <a id="step-5"></a>Para começar a classificar seus dados, selecione a guia **Classificação** na página **de classificação & de detecção de & de detecção de dados.**
O mecanismo de classificação verifica seu banco de dados em busca de colunas que contenham dados potencialmente sensíveis e fornece uma lista de classificações recomendadas de colunas.
6. Veja e aplique recomendações de classificação:
- Para ver a lista de classificações de colunarecomendadas, selecione o painel de recomendações na parte inferior do painel.
- Para aceitar uma recomendação para uma coluna específica, selecione a caixa de seleção na coluna esquerda da linha relevante. Para marcar todas as recomendações conforme aceita, selecione a caixa de seleção mais à esquerda no cabeçalho da tabela de recomendações.
![Revisar e selecionar na lista de recomendações de classificação](./media/sql-data-discovery-and-classification/6_data_classification_recommendations_list.png)
- Para aplicar as recomendações selecionadas, **selecione Aceitar recomendações selecionadas**.
7. Você também pode classificar as colunas manualmente, como uma alternativa ou além da classificação baseada em recomendações:
1. Selecione **Adicionar classificação** no menu superior do painel.
1. Na janela de contexto que se abre, selecione o esquema, tabela e coluna que você deseja classificar e o tipo de informação e o rótulo de sensibilidade.
1. Selecione **Adicionar classificação** na parte inferior da janela de contexto.
![Selecione uma coluna para classificar](./media/sql-data-discovery-and-classification/9_data_classification_manual_classification.png)
8. Para completar sua classificação e rotular persistentemente (tag) as colunas de banco de dados com os novos metadados de classificação, **selecione Salvar** no menu superior da janela.
## <a name="audit-access-to-sensitive-data"></a><a id="audit-sensitive-data"></a>Acesso de auditoria a dados confidenciais
Um aspecto importante do paradigma de proteção da informação é a capacidade de monitorar o acesso a dados confidenciais. [A auditoria do banco de dados Azure SQL](sql-database-auditing.md) foi aprimorada `data_sensitivity_information`para incluir um novo campo no registro de auditoria chamado . Este campo registra as classificações de sensibilidade (rótulos) dos dados que foram devolvidos por uma consulta. Aqui está um exemplo:
![Log de auditoria](./media/sql-data-discovery-and-classification/11_data_classification_audit_log.png)
## <a name="permissions"></a><a id="permissions"></a>Permissões
Essas funções incorporadas podem ler a classificação de dados de um banco de dados SQL do Azure:
- Proprietário
- Leitor
- Colaborador
- Gerenciador de Segurança do SQL
- Administrador de Acesso do Usuário
Essas funções incorporadas podem modificar a classificação de dados de um banco de dados SQL do Azure:
- Proprietário
- Colaborador
- Gerenciador de Segurança do SQL
Saiba mais sobre permissões baseadas em papéis no [RBAC para recursos do Azure](https://docs.microsoft.com/azure/role-based-access-control/overview).
## <a name="manage-classifications"></a><a id="manage-classification"></a>Gerenciar classificações
Você pode usar o T-SQL, uma API REST ou powerShell para gerenciar classificações.
### <a name="use-t-sql"></a>Usar T-SQL
Você pode usar o T-SQL para adicionar ou remover classificações de colunas e recuperar todas as classificações para todo o banco de dados.
> [!NOTE]
> Quando você usa o T-SQL para gerenciar rótulos, não há validação de que os rótulos que você adiciona a uma coluna existem na política de proteção de informações da organização (o conjunto de rótulos que aparecem nas recomendações do portal). Então, cabe a você validar isso.
Para obter informações sobre o uso do T-SQL para classificações, consulte as seguintes referências:
- Para adicionar ou atualizar a classificação de uma ou mais colunas: [ADD SENSITIVITY CLASSIFICATION](https://docs.microsoft.com/sql/t-sql/statements/add-sensitivity-classification-transact-sql)
- Para remover a classificação de uma ou mais colunas: [CLASSIFICAÇÃO DE SENSIBILIDADE AO GOTA](https://docs.microsoft.com/sql/t-sql/statements/drop-sensitivity-classification-transact-sql)
- Para visualizar todas as classificações no banco de dados: [sys.sensitivity_classifications](https://docs.microsoft.com/sql/relational-databases/system-catalog-views/sys-sensitivity-classifications-transact-sql)
### <a name="use-the-rest-api"></a>Use a API resto
Você pode usar a API REST para gerenciar programáticamente classificações e recomendações. A API REST publicada suporta as seguintes operações:
- [Criar ou atualizar](https://docs.microsoft.com/rest/api/sql/sensitivitylabels/createorupdate): Cria ou atualiza o rótulo de sensibilidade da coluna especificada.
- [Excluir](https://docs.microsoft.com/rest/api/sql/sensitivitylabels/delete): Exclui o rótulo de sensibilidade da coluna especificada.
- [Recomendação desabilitar](https://docs.microsoft.com/rest/api/sql/sensitivitylabels/disablerecommendation): Desativar recomendações de sensibilidade na coluna especificada.
- [Habilitar recomendação](https://docs.microsoft.com/rest/api/sql/sensitivitylabels/enablerecommendation): Habilita recomendações de sensibilidade na coluna especificada. (As recomendações são habilitadas por padrão em todas as colunas.)
- [Obter](https://docs.microsoft.com/rest/api/sql/sensitivitylabels/get): Obtém o rótulo de sensibilidade da coluna especificada.
- [Lista Corrente por Banco de Dados](https://docs.microsoft.com/rest/api/sql/sensitivitylabels/listcurrentbydatabase): Obtém os rótulos de sensibilidade atuais do banco de dados especificado.
- [Lista recomendada por banco de dados](https://docs.microsoft.com/rest/api/sql/sensitivitylabels/listrecommendedbydatabase): Obtém os rótulos de sensibilidade recomendados do banco de dados especificado.
### <a name="use-powershell-cmdlets"></a>Usar cmdlets do PowerShell
Você pode usar o PowerShell para gerenciar classificações e recomendações para o Banco de Dados SQL do Azure e instâncias gerenciadas.
#### <a name="powershell-cmdlets-for-sql-database"></a>Cmdlets PowerShell para banco de dados SQL
- [Classificação de sensibilidade do banco de dados get-AzSql](https://docs.microsoft.com/powershell/module/az.sql/get-azsqldatabasesensitivityclassification)
- [Set-AzSqlDatabaseSensitivityClassification](https://docs.microsoft.com/powershell/module/az.sql/set-azsqldatabasesensitivityclassification)
- [Remove-AzSqlDatabaseSensitivityClassification](https://docs.microsoft.com/powershell/module/az.sql/remove-azsqldatabasesensitivityclassification)
- [Recomendação de sensibilidade ao banco de dados get-AzSql](https://docs.microsoft.com/powershell/module/az.sql/get-azsqldatabasesensitivityrecommendation)
- [Recomendação de sensibilidade de Enable-AzSqlDatabaSe](https://docs.microsoft.com/powershell/module/az.sql/enable-azsqldatabasesensitivityrecommendation)
- [Desativação-AzSqlDatabaseSensitivityRecomendação](https://docs.microsoft.com/powershell/module/az.sql/disable-azsqldatabasesensitivityrecommendation)
#### <a name="powershell-cmdlets-for-managed-instances"></a>Cmdlets PowerShell para instâncias gerenciadas
- [Get-AzSqlInstanceDatabaseSensitivityClassification](https://docs.microsoft.com/powershell/module/az.sql/get-azsqlinstancedatabasesensitivityclassification)
- [Set-AzSqlInstanceDatabaseSensitivityClassification](https://docs.microsoft.com/powershell/module/az.sql/set-azsqlinstancedatabasesensitivityclassification)
- [Remove-AzSqlInstanceDatabaseSensitivityClassification](https://docs.microsoft.com/powershell/module/az.sql/remove-azsqlinstancedatabasesensitivityclassification)
- [Get-AzSqlInstanceDatabaseSensitivityRecommendation](https://docs.microsoft.com/powershell/module/az.sql/get-azsqlinstancedatabasesensitivityrecommendation)
- [Recomendação de sensibilidade ao banco de dados do Enable-AzSqlInstance](https://docs.microsoft.com/powershell/module/az.sql/enable-azsqlinstancedatabasesensitivityrecommendation)
- [Desativação-AzSqlInstanceDatabaseSensitivityRecomendação](https://docs.microsoft.com/powershell/module/az.sql/disable-azsqlinstancedatabasesensitivityrecommendation)
## <a name="next-steps"></a><a id="next-steps"></a>Próximas etapas
- Saiba mais sobre [o Advanced Data Security](sql-database-advanced-data-security.md).
- Considere configurar a [Auditoria do Banco de Dados SQL do Azure](sql-database-auditing.md) para monitorar e auditar o acesso aos seus dados confidenciais classificados.
- Para uma apresentação que inclua a descoberta e classificação de dados, consulte [Descobrir, classificar, rotular & proteger dados SQL | Dados expostos](https://www.youtube.com/watch?v=itVi9bkJUNc).
| 82.623711 | 431 | 0.808223 | por_Latn | 0.998483 |
dcb894b9f3cb8edcc8a5d7e0b0651534942cb095 | 684 | md | Markdown | _posts/2020-07-30-animatic-test.md | YJPL/film-storyboards | 993efc965f0ac5cd95845a3c90b065fe0a9d4df0 | [
"MIT"
] | 2 | 2022-03-10T21:48:58.000Z | 2022-03-10T21:49:02.000Z | _posts/2020-07-30-animatic-test.md | YJPL/film-storyboards | 993efc965f0ac5cd95845a3c90b065fe0a9d4df0 | [
"MIT"
] | 1 | 2022-03-23T20:04:14.000Z | 2022-03-23T20:04:14.000Z | _posts/2020-07-30-animatic-test.md | YJPL/film-storyboards | 993efc965f0ac5cd95845a3c90b065fe0a9d4df0 | [
"MIT"
] | null | null | null | ---
title: Animatic test
date: 2020-07-30 08:24:48 +0100
last_modified_at: 2020-11-27
author: Yves
layout: post-centered
permalink: /animatic-test/
image: /images/uploads/2020/animatic-test/animation-test-frame-sketch.png
categories:
- animation
- drawing
tags:
format: video
---
![animation characters test WIP sketch](/images/uploads/2020/animatic-test/animation-test-frame-sketch.png)
{% include video.html src="https://player.vimeo.com/video/402118074?color=ffffff" %}
<p class="pa4 link dim gren f5 i"><a href="https://vimeo.com/402118074">Animatic test</a> from <a href="https://vimeo.com/alternatyves">alternatyves outc.</a> on <a href="https://vimeo.com">Vimeo</a>.</p>
| 34.2 | 204 | 0.739766 | yue_Hant | 0.195404 |
dcb8a247140cdc0ca94e2f6a4f9877c53cf7fd82 | 605 | md | Markdown | README.md | az-digital/az-react | a30c8f74dce0ca98101fe20792cbc9a5e1b45786 | [
"MIT"
] | null | null | null | README.md | az-digital/az-react | a30c8f74dce0ca98101fe20792cbc9a5e1b45786 | [
"MIT"
] | null | null | null | README.md | az-digital/az-react | a30c8f74dce0ca98101fe20792cbc9a5e1b45786 | [
"MIT"
] | null | null | null | # az-react
> React components for UArizona development projects
[![NPM](https://img.shields.io/npm/v/az-react.svg)](https://www.npmjs.com/package/az-react) [![JavaScript Style Guide](https://img.shields.io/badge/code_style-standard-brightgreen.svg)](https://standardjs.com)
## Install
```bash
npm install --save az-react
```
## Usage
```jsx
import React, { Component } from 'react'
import MyComponent from 'az-react'
import 'az-react/dist/index.css'
class Example extends Component {
render() {
return <MyComponent />
}
}
```
## License
MIT © [az-digital](https://github.com/az-digital)
| 19.516129 | 209 | 0.699174 | kor_Hang | 0.332737 |
dcb98f247bac52d5956c3aec4ec700fe6c16cd1d | 11,615 | markdown | Markdown | _posts/2021-09-28-nonTemporalMemoryHint.markdown | SungJJinKang/SungJJinKang.github.io | fcf10cf980d52557dcfea042f5219b92816d4f07 | [
"MIT"
] | null | null | null | _posts/2021-09-28-nonTemporalMemoryHint.markdown | SungJJinKang/SungJJinKang.github.io | fcf10cf980d52557dcfea042f5219b92816d4f07 | [
"MIT"
] | null | null | null | _posts/2021-09-28-nonTemporalMemoryHint.markdown | SungJJinKang/SungJJinKang.github.io | fcf10cf980d52557dcfea042f5219b92816d4f07 | [
"MIT"
] | null | null | null | ---
layout: post
title: "Write Combined Optimization - _mm_stream ( Non Temporal momory hint ( NT ) )"
date: 2021-09-28
categories: ComputerScience
---
얼마전 인터뷰에서 SIMD 중에 "_mm_stream_pd"에 대한 질문을 받았다. 여러 SIMD 명령어를 사용해보았지만 이건 처음 들어본 명령어다. 그래서 공부해보았고 새롭게 알게 된 내용에 대해 글을 써보겠다.
인텔에서 해당 명렁어를 검색해보면 아래와 같은 설명이 나온다.
```
Store 128-bits (composed of 2 packed double-precision (64-bit) floating-point elements) from a into memory using a non-temporal memory hint. mem_addr must be aligned on a 16-byte boundary or a general-protection exception may be generated.
"a non-temporal memory hint"을 사용하여 메모리에 128비트를 저장한다
```
"a non-temporal memory hint" ??
이게 도대체 뭔 말인가. 생전 처음 들어본다.
그래서 스택 오버 플로우에 검색해보았고 이것이 무엇인지 말해보겠다.
이 말은 "시간적 지역성"을 무시하라는 것이다.
일반적으로 우리가 메모리에서 데이터를 가져올 때(LOAD)는 그 데이터가 이후 근시간내에 다시 사용될 것(시간적 지역성)이라고 예상하고 해당 데이터를 CPU 캐시에 복사해둔다.(이에 대해 이해가 가지 않는다면 [이 글](https://sungjjinkang.github.io/computerscience/2021/04/01/cachefriendly.html)을 읽어보기 바란다.)
그런데 만약 프로그래머가 해당 데이터가 근시간 내에 사용될 가능성이 없다면 어떻게 되는가? 그러면 **이 근시간 내에 사용되지 않을 데이터를 캐시에 저장하기 위해 현재 캐시에 있는 어떤 데이터가 해당 캐시에서 쫒겨나 다음 단계의 캐시나 메모리로 옮겨져야한다. 근시간 내에 사용되지 않은 데이터를 캐시에 저장하기 위해 기존의 데이터를 쫒아낸다는 것이 얼마나 비효율적인 일인가.** 또한 캐시 라인 전체를 한꺼번에 쓰지 않는다면 쓰려는 데이터가 속한 캐시라인을 캐시로 우선 읽어온 후 캐시에 써야한다. ( 왜냐면 캐시 라인 단위로 캐시에 쓰는데 쓰려는 주소가 속한 캐시라인의 다른 데이터를 모르기 때문에 )
그래서 "a non-temporal memory hint"란 쉽게 말해 **해당 데이터를 캐시에 저장하지 말라**는 것이다. 그냥 바로 메인 메모리 ( DRAM )에 쓰라는 것이다.
이렇게 "a non-temporal memory hint"란 **LOAD시에는 메모리에서 읽어온 데이터를 캐시에 저장하지 않고 WRITE시에는 캐시에 쓰지 않고 곧바로 메모리에 쓰라**는 것이다.
이러한 non temporal memory hint는 **큰 사이즈의 데이터**에 접근해야할 일이 생길 때 사용될 수 있는데 큰 사이즈의 데이터에 처음부터 끝까지 접근한다고 생각해보자. 사이즈가 매우 크기 때문에 **어느 정도 데이터에 다다르면 처음에 접근했던, 즉 초기의 접근해서 캐시에 캐싱되었던 데이터들은 축출되어버릴 것이다. 그럼 굳이 CPU 연산만 낭비해 캐싱을 할 필요가 없는 것**이다. 어차피 큰 사이즈의 데이터를 읽다가 중간에 앞에서 캐싱해두었던 데이터들이 캐시에서 축출되어버리니 말이다. 그래서 이러한 경우 데이터를 읽을 때 캐싱을 한 후 읽지 않고 그냥 메모리에서 곧 바로 읽는 non temporal memory hint를 주기도 한다.
밑에서 설명하겠지만 **이 "a non-temporal memory hint"는 대표적으로 Write Combined 타입의 메모리에 쓰기 동작을 수행할 때 사용된다.**
다만 주의해야할 것들이 몇가지 있는데 non-temporal store를 할 때 만약 저장하려는 목표 주소의 캐시라인이 이미 캐시에 올라와 있다면 이 명령어는 캐시에 올라와 있는 캐시를 메모리로 축출하는 동작을 한다. 이는 경우에 따라 심각한 성능 저하를 낳을 수 있다. 그래서 대개 non temporal hint 연산은 Write - Combined 타입의 메모리에만 사용한다.
또한 non-temporal store 중인 주소에 load를 하게 되는 경우 당연히 최신 데이터는 non-temporal store가 끝나야 얻을 수 있으니 non-temporal store가 끝날 때까지 load 명령어는 stall된다.
----------------------
이 명령어는 특히 **Write - Combined 버퍼** ( Fill 버퍼 )를 활용하는데 도움을 준다.
자자 우선 Write Combine 버퍼가 무엇인지부터 알아보자.
Write Combined 버퍼는 Write Combined 최적화에 사용되는데 쉽게 말하면 **여러 Write 동작을 모아서 한꺼번에 수행한다는 것이다.** 쓰기 동작을 할 여러 데이터를 Write Combined 버퍼에 한꺼번에 모아두었다가 한번에 쓴다는 개념이다.
버스 트랜젝션을 한번 개통하는데 ( 버스 마스터링 ) 시간이 많이 걸리지만, 한번 개통이 되면 워드 사이즈 ( 64bit CPU - 8바이트, 32bit CPU - 4 바이트 )씩 연속적으로 빠르게 데이터 전송이 가능하다.
버스 트랜젝션을 개통하는데 시간이 많이 걸리기 때문에 버스 트랜젝션을 할 때마다 최대한 버스 대역폭을 꽉꽉채워서 데이터를 전송하는 것이 성능상 유리하다.
그런데 위에서 한번 버스 트랜젝션을 수행할 때마다 워드 사이즈씩 보낼 수 있는데 **버스트 모드 전송에서는 캐시 라인 사이즈만큼의 데이터를 한번의 버스 트랜젝션으로 수행할 수 있다.**
그래서 **Write Combined 버퍼 ( 사이즈가 캐시라인 사이즈와 같다 )를 완전히 채워서 캐시 라인 사이즈 ( 대개 64바이트 )의 데이터를 전송하는 경우 버스트 모드 ( 한번 버스 개통하면 보낼 데이터 모두 전송하기까지 버스 release 안함 )으로 처리가 가능하지만, 만약 Write Combined 버퍼를 완전히 채우지 않은 상태로 버스 트랜젝션을 수행하는 경우 8번 ( 캐시 라인 사이즈/워드 사이즈, 64비트 CPU의 경우 한번의 버스 트랜젝션에 64비트 즉 워드 사이즈를 전송할 수 있다. ) 혹은 4번 ( 캐시 라인 사이즈/워드 사이즈 )의 버스 트랜젝션 ( 일반 모드로 한번의 버스 트랜젝션 당 워드 사이즈씩만 처리 가능 ) 을 통해서 처리해야한다.** ( 물론 한번의 버스트 트랜젝션이 한번의 워드 사이즈 일반 트랜젝션보다 빠르지는 않지만, 8번의 일반 트랜젝션보다는 월등히 빠르다. )
다만 위에서 말했듯이 **데이터들이 같은 캐시라인에 속해있어야 한번에 보낼 수 있다.** ( 같은 캐시 라인에 속하는 경우 Write Combined 버퍼에서 쓸 데이터가 임시로 저장된다. )
버스트 모드는 CPU나 DMA가 버스를 통해 버스 트랜젝션을 발생하는 방식 중 하나이다.
좀 자세히 설명하자면 버스트 모드에서는 CPU나 DMA 컨트롤러는 한번 버스를 개통하면 전송할 데이터를 모두 전송할 때까지 버스를 release하지 않는다.
반면 사이클 훔치기 모드 ( Cycle Stealing Mode )에서는 DMA 컨트롤러가 CPU가 점유 중인 버스를 release하게하고 DMA를 마스터링한다. 단 그 시간이 짧다. 잠깐 CPU의 버스 컨트롤을 뺏아서 사용하는 것이다.
Transparent 모드에서는 CPU가 버스를 필요로하지 않을 때만 버스를 마스터링한다.
이러한 Write Combined 버퍼를 활용한 대표적인 예로는 **CPU가 캐시에 쓰기 동작을 하려고 할때 주소가 속한 캐시라인이 L1 캐시에 올라와 있지 않으면 해당 캐시라인을 L1 캐시까지 가지고 와야한다. ( Write - Allocate )** ( 여기서 헷갈릴 수 있는데 캐시에 데이터를 쓰려고할 때 캐시 라인 전체를 한꺼번에 쓰지 않는 경우, 즉 캐시 라인의 일부분만 캐시에 쓰려고 하는 경우 당연히 CPU는 쓰려는 위치의 데이터만 알고 캐시 라인내 다른 데이터는 알지 못하니 쓰기 전 우선 해당 캐시 라인을 캐시로 가져와야한다. 이를 Write - Allocate라고 한다. )
( 이후에 설명하겠지만 캐시 라인의 일부만 쓰는게 아니라 캐시 라인 전체에 대해 쓰기 동작을 수행한다고 하면 굳이 L1 캐시로 캐시라인을 가져올 필요가 없을 것이다. )
그럼 CPU는 캐시에 캐시 라인을 가져오는 동안 뭘해야하나?? 그냥 가만히 기다리고 있나???
아니다, CPU는 **쓰려고 하는 데이터를 CPU 칩의 Write Combined 버퍼에 임시로 저장해두고 Write - Allocate에 따라 쓰려는 위치가 속하는 캐시라인을 L1캐시로 가져올 때 까지 기다리지 않고 다음 명령어를 수행하고 있는다.** 그 후 캐시를 쓸 수 있는 상태가 되면 이 Write Combined 버퍼를 L1캐시로 복사한다.
**당연한 얘기지만 쓰려는 주소의 캐시라인이 이미 L1캐시에 올라와있다면 이 Write Combined 최적화는 적용되지 않는다.**
쓰려고 하는 주소의 캐시라인이 메모리 계층상 더 멀리 있을 수록 Write Combine 기법으로 얻어지는 성능 향상은 더 커진다.
( 왜냐면 캐시 라인을 가져오는 것을 완료하면 Write Combined 버퍼는 바로 비워지기 때문에 캐시 라인을 더 늦게 가져올 수록 이 Write Combine 버퍼을 활용할 여지가 더 커지기 때문이다. )
만약에 이 Write Combined 버퍼에 저장을 한 명령어의 다음 명령어도 이전 명령어가 쓰려던 위치와 같은 캐시라인에 속한 위치라면 다음 명령어도 같은 Write Combined 버퍼에 쓰일 것이다. 참고로 Write Combined 버퍼의 사이즈는 캐시 라인 사이즈와 같다. ( 쓰기 동작을 수행할 때 쓰려는 데이터는 여러 캐시 라인에 걸치면 안된다. 하나의 캐시라인에만 속해야한다. )
volatile과 Write-Combined 타입은 다른 것이다. volatile은 레지스터에 데이터를 임시로 저장하는 최적화를 수행하지 말라는 것이다. 즉 캐시에 써도 되고 메모리에 써도 된다.
반면 non-temporal, write-combined는 메인 메모리, DRAM에 쓰라는 것이다. 캐시에 쓰면 안된다.
**만약에 같은 캐시라인에 쓰기 동작을 계속 수행하다가 해당 캐시라인에 대한 쓰기를 전부 수행해서 Write Combine 버퍼를 어떤 특정 캐시라인에 대한 쓰기로 64바이트 전부 채우면 어떻게 될까??? 나이스!! 캐시에 원본 캐시라인 데이터를 가져오지 않고도 그냥 L1 캐시에 바로 쓸 수 있다.** 위에서 말했듯이 캐시 라인의 일부분만 쓰려고 하는 경우에만 해당 캐시라인의 원본 데이터를 캐시로 가져온다.
---------------------
근데 사실 위에서 말한 것과 같이 캐싱을 사용하는 메모리 연산에서 Write Combined 버퍼가 가져오는 성능 향상은 그렇게까지 크지는 않다.
**Write Combined 버퍼가 진가를 발휘하는 곳**은 접근하는 메모리 영역이 Write Combined 타입의 페이지에 속한 경우이다.
**Write Combined 타입의 페이지**는 **메모리 영역 혹은 페이지에 붙는 일종의 flag** ( 페이지 테이블에 있는 페이지에 붙거나, MTRR - memory type range register를 통해 관리 )의 일종으로 **이 영역, 페이지에 대한 쓰기 동작은 Write Combined 버퍼에 임시로 저장**된다. 만약 **Write Combined 버퍼가 일부만 차있다면 메모리로의 쓰기는 지연**된다. 그렇기 때문에 메모리 in ordering이 보장되지 않는다. 다만 SFENCE 혹은 MFENCE 명령어, CPUID 실행, 캐싱이 되지 않는 메모리에 읽기 쓰기, 인터럽트 발생, LOCK 명령어가 발생하는 경우 Write Combined 버퍼가 일부만 찼더라도 메모리로 flush된다. 이러한 유형의 메모리 타입은 비디오 프레임 버퍼와 같은 데이터에 사용하기 적합한데 메모리 ordering이 중요하지 않고 캐싱이 되면 안되기 때문이다. 이러한 Write Combined 타입의 메모리에 대한 쓰기 동작을 수행하는 명령어로는 [MOVNTDQA](https://www.felixcloutier.com/x86/movntdqa)이 있다. ( 프레임버퍼와 같이 GPU 즉 IO 장치에 매핑된 데이터는 당연히 Write Combined 타입이어야 GPU가 볼 수 있다. 그래서 프레임버퍼에 쓰기를 수행할 때도 Write Combined 버퍼를 통한 쓰기 최적화가 들어간다. )
잠깐 집고 넘어가야하는 것이 Write Combined 버퍼는 Write Combined 타입 메모리 쓰기에만 사용되는 것은 아니다. 위에서 배운 듯이 캐시에도 활용된다. Write Combined 타입 메모리 연산에 사용되니 Write Combined 버퍼라는 이름을 붙였지만 사실은 캐시에 쓸 때도 사용이 되니 정확하게는 "Store 버퍼"라는 용어가 더 정확한 것 같다.
**캐시의 경우 그래도 속도가 빠르니 Write Combined 버퍼의 효과가 크게 두드러지지 않는데 DRAM의 Write Combined 타입의 데이터에 쓰기 동작을 수행할 때 Write Combined 버퍼는 엄청난 성능 향상을 불러온다.**
DRAM에 데이터를 쓰려면 반드시 메모리 버스를 통해야 하는데 이**메모리 버스는 여러 코어가 공유하고 있고 DMA도 메모리 버스를 사용하기 때문에 메모리 버스를 자주 점유 ( 버스 마스터링 )하는 것은 성능상 매우 좋지 않다.** 그래서 데이터를 **모아두었다가** ( CPU의 Write Combined 버퍼에 ) **버스 마스터링을 한 후 한번만에 모아둔 데이터를 쓰는 것 ( 버스트 모드 )**이 **성능향상에 큰 도움**이 된다.
이렇게 캐싱을 활용하지 않는 메모리 연산으로는 위에서 배운 것 처럼 **"non-temporal memory hint"** 연산이 경우가 대표적이다.
또한 **메모리 맵 IO**가 또 다른 예인데 메모리 맵 IO의 경우 알다 싶이 CPU 입장에서는 일반적인 메모리 연산과 명령어 코드가 똑같고, 디바이스가 최신의 데이터를 보기 위해 캐싱을 하면 안된다는 특징을 가지고 있다. ( 바로 DRAM에 써야 디바이스가 최신의 데이터를 읽어갈 수 있다. )
가장 중요한 것은 **완전히 채워지지 않는 Write Combined 버퍼를 flush 하는 것은 매우 매우 매우 최악의 행동**이라는 것이다. 메모리 버스 대역폭을 완전히 활용하지 못하는 쓰기 동작은 비효율적이니 임시로 Write Combined 버퍼에 데이터를 모아서 메모리 버스의 대역폭을 꽉꽉채워서 전송을 하자는 것이 Wrtie Combined 최적화의 핵심이다.
그러니 CPU마다 가지고 있는 Write - Combine 버퍼의 개수에 맞추어서, 만약 Write Combine 버퍼의 개수가 4개라면 이 Write - Combine 기법의 이점을 최대한 활용하기 위해 4개보다 많은 캐시 라인을 연속적으로 건들면 안된다. 왜냐면 4개보다 많은 캐시라인을 건드는 순간부터 현재 flush 되지 않는 다른 캐시라인의 Write Combine 버퍼를 flush하게 되고 이는 매우 비효율적인 동작이기 때문이다.
그러니 절대로 **보유중인 Write - Combine 버퍼의 개수보다 많은 캐시라인은 동시에 건드려서 기존 버퍼가 다 차기 전에 flush 해버리는 치명적인 성능 하락을 만들지마라.**
아래 사진은 Write Combined 버퍼를 완전히 채우지 않고 flush 했을 경우의 성능을 비교한 사진이다.
64바이트의 경우가 Write Combined 버퍼를 완전히 채운 경우이다.
![write_combine](https://user-images.githubusercontent.com/33873804/134978510-deaee18d-f7ab-4350-a8d0-13df12d3aa6c.png)
64바이트를 제외한 다른 쓰기 동작들은 모두 워드 단위로 쪼개져서 메모리에 써진다.
이러한 Write Combined 버퍼는 L1캐시로 캐시라인을 가져오는데도, L1과 L2 캐시간 캐시라인 전송, DRAM 전송 등 여러군데서 활용되기 때문에 꽉 차지 않은 Write Combined 버퍼를 flush하는 것은 성능상 매우 매우 좋지 않다.
자자 위의 내용들을 종합해서 한가지 더 알려주겠다.
만약에 **Write Combined 타입의 메모리를 읽으려고하면 무슨일이 벌어질까?**
우선 Write Combined 타입의 메모리를 읽는 것은 캐싱되지 않는다. 그리고 Write Combined 타입을 읽으려고 하면 **존재하는 write combined 버퍼를 모두 flush를 해야한다**. ( 당연히 write combined 버퍼를 flush해야 신선한(?), 최신의 데이터를 읽을 수 있다. ) 여기서 Write Combined 버퍼를 모두 flush 한다는 것은 높은 확률로 다 차지도 않은 write combined 버퍼를 flush 해버린다는 것이다. 이는 **위에서 말한대로 매우 매우 비효율적**이다. ( 물론 캐시되지 않은 데이터를 읽는 동작 자체가 느리기는 하지만 IO를 위한 데이터들은 캐싱을 하지 않고 메모리에 쓰니 캐싱을 하지 않는 상황을 가정하자. )
그러니 특별한 이유가 없다면 **절대로 write-combining 메모리를 읽지마라.** 특히 렌더링 관점에서 작성 중인 constant buffers, vertex buffers, index buffers 는 절대 읽지마라. 이 버퍼들은 write combined 타입의 메모리 ( 이 버퍼들은 GPU에서 읽어가야하므로 캐싱을 하지 않고 바로 메모리에 쓰기 동작을 하는 Write-Combined 유형의 데이터들이다 )이기 때문에 읽으려는 것은 최악이다..
GPU와 관련해서 Write-Combined 버퍼가 제일 많이 활용되는 것이 GPU와 같은 IO 장치와 대량의 데이터를 주고 받는 **Memory mapped IO 통신을 할 때**이다. 메모리 맵된 IO의 프로세스 가상 주소 공간은 Write-Combined 타입 ( 캐싱이 안되는 )의 메모리 영역으로 non-temporal hint 명령어를 사용해 쓰기 동작을 수행할 때 Write-Combined 버퍼가 활용된다. 그래서 **VRAM으로부터 메모리 맵된 텍스쳐 버퍼에 non-temporal hint로 쓰기 동작을 수행하면 Write-Combined 버퍼가 활용되고 쓸 데이터 크기가 크다면 이 Write-Combined 버퍼를 활용해서 IO 장치에 데이터를 전송함으로서 오는 이득이 매우 클 것**이다.
렌더링에서 활용되는 write-combined 버퍼에 대해서는 [이 글](https://fgiesen.wordpress.com/2013/01/29/write-combining-is-not-your-friend/)을 읽어보기 바란다.
또한 write-combined 버퍼에 쓰는 경우 경우 메모리 ordering을 보장 ( 코어간 데이터 일관성을 보장 )하지 않는다. write-combined store -> read -> write-combined store에서 read가 앞의 store가 보인다는 것을 보장하지 않는다는 것이다. 그래서 write - combined 최적화는 대량의 데이터를 빠르게 보내고자 할 때 사용되고 대량의 데이터를 다 전송하는 중간에 read를 하는 것이 필요없는 상황에서 사용되어야한다.
또한 write-combined 버퍼의 경우 snoop도 되지 않는다.
근데 [Write - Combined 기법이 항상 빠르지는 않다는 글](https://fgiesen.wordpress.com/2013/01/29/write-combining-is-not-your-friend/)도 있다... 고려할 경우의 수가 너무 많다. 궁금하다면 한번 읽어보아라.
references : [https://stackoverflow.com/a/37092/7138899](https://stackoverflow.com/a/37092/7138899), [https://mechanical-sympathy.blogspot.com/2011/07/write-combining.html](https://mechanical-sympathy.blogspot.com/2011/07/write-combining.html), [https://sites.utexas.edu/jdm4372/2018/01/01/notes-on-non-temporal-aka-streaming-stores/](https://sites.utexas.edu/jdm4372/2018/01/01/notes-on-non-temporal-aka-streaming-stores/), [https://stackoverflow.com/questions/14106477/how-do-non-temporal-instructions-work](https://stackoverflow.com/questions/14106477/how-do-non-temporal-instructions-work), [https://vgatherps.github.io/2018-09-02-nontemporal/](https://vgatherps.github.io/2018-09-02-nontemporal/), [http://www.nic.uoregon.edu/~khuck/ts/acumem-report/manual_html/ch05s03.html](http://www.nic.uoregon.edu/~khuck/ts/acumem-report/manual_html/ch05s03.html), [https://stackoverflow.com/questions/49959963/where-is-the-write-combining-buffer-located-x86](https://stackoverflow.com/questions/49959963/where-is-the-write-combining-buffer-located-x86) | 97.605042 | 1,079 | 0.692294 | kor_Hang | 1.00001 |
dcba060d5a3608966130770dfdfe3c50c5e7ab94 | 671 | md | Markdown | README.md | hiroshinakasone/rss-oop-sample | 9cc8ac63fa34a754367297bfa009df0f394a7adc | [
"BSD-2-Clause"
] | null | null | null | README.md | hiroshinakasone/rss-oop-sample | 9cc8ac63fa34a754367297bfa009df0f394a7adc | [
"BSD-2-Clause"
] | null | null | null | README.md | hiroshinakasone/rss-oop-sample | 9cc8ac63fa34a754367297bfa009df0f394a7adc | [
"BSD-2-Clause"
] | null | null | null | # rss-oop-sample
RSS加工アプリケーション(CUI)のサンプルリポジトリです
## Install
前提条件: Python3.6インストール済
```bash
$ git clone https://github.com/hiroshinakasone/rss-oop-sample.git
$ cd rss-oop-sample/
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt
```
## Usage
デフォルト表示
```bash
$ cd src/
$ python3 main.py "http://example.com/feed/1" "http://example.com/feed/2"
```
出力結果
```bash
タイトル: タイトル1
本文: 本文1
公開日時: 日時1
タイトル: タイトル2
本文: 本文2
公開日時: 日時2
タイトル: タイトル3
本文: 本文3
公開日時: 日時3
....
```
タイトルのみ表示
```bash
$ cd src/
$ python3 main.py --title-only "http://example.com/feed/1" "http://example.com/feed/2"
```
出力結果
```bash
タイトル: タイトル1
タイトル: タイトル2
タイトル: タイトル3
.....
``` | 14.276596 | 86 | 0.676602 | yue_Hant | 0.817108 |
dcbbf1ee4cf57e5447e8058956adbe15f7e45c90 | 14,273 | md | Markdown | docs/units/_unit07/unit07-04_data_prepration.md | GeoMOER/moer-bsc-project-seminar-upscaling | e732c590e5679dda854dfd5dc2036505f8149370 | [
"MIT"
] | null | null | null | docs/units/_unit07/unit07-04_data_prepration.md | GeoMOER/moer-bsc-project-seminar-upscaling | e732c590e5679dda854dfd5dc2036505f8149370 | [
"MIT"
] | null | null | null | docs/units/_unit07/unit07-04_data_prepration.md | GeoMOER/moer-bsc-project-seminar-upscaling | e732c590e5679dda854dfd5dc2036505f8149370 | [
"MIT"
] | null | null | null | ---
title: "EX | Data preparation"
header:
image: "/assets/images/teaser/unit7.jpg"
caption: 'Image: [**WHC UNESCO**](https://whc.unesco.org/en/list/403/)'
toc: true
---
In order to model and upscale plant richness, we will need plot level information for each of our predictors.
## Getting plot locations
```r
# load the following libraries
library(lidR) # for lidar data # last updated 4 may, please get latest version
library(hsdar) # for hyperspectral data
library(terra)
library(here)
library(sf)
library(dplyr)
library(tidyverse)
library(ggplot2)
library(pbapply)
library(mapview)
#set working directory
setwd("D:/Kili_SES/course_bsc_upscaling_netra/upscaling_methodology")
##############
# Area of interest
##############
shp_kili <- sf::st_read("./upscaling_data/vectors/VegAug1_KILI_SES.shp")
shp_kili <- sf::st_transform(shp_kili, 32737) #epsg code for kili is 32737, you can also use UTM 37S
##########################
# load the plots shapefile
###########################
plots <- sf::st_read("./upscaling_data/vectors/BPolygon.shp")
## lets plot and see our aoi and plots
mapview::mapview(shp_kili)+
mapview::mapview(plots, col.regions = "yellow")
```
<img src="kili_shp_plots.png" width="1500" height="500" align="centre" vspace="10" hspace="20">
## LiDAR data
* Get the elevation, aspect and slope data for each plot using LiDAR dataset provided to you.
```r
#############
# 1. LiDAR data
#############
## we need to clip our raw lidar data according to the extent of the plots
path <- "D:/Kili_SES/course_bsc_upscaling_netra/LiDAR/data"
list_las_files <-list.files(file.path(paste0(path,"/raw_lidar/lidar_point_clouds/")),
pattern = "_clipped_roi.las",
full.names = TRUE)
head (list_las_files, n = 5) #5 file paths
# read in las catalog
las_ctg <- readLAScatalog(list_las_files) # around 569.8kb , 67 tiles
plot(las_ctg) ## notice that some tiles are overlapping
```
<img src="catalog.png" width="1500" height="500" align="centre" vspace="10" hspace="20">
```r
# we clip based on our plots
dir.create(path = paste0(path,"/output_dir")) #make a new dir for saving your clipped files
aoi_path <- paste0(path,"/output_dir")
opt_output_files(las_ctg) <- paste0(aoi_path,"/{PlotID}") # lets call it it now aoi to avoid confusing with roi, you can also choose better names
ctg_clip <- clip_roi(las_ctg,plots, radius = 100,) # this step will take time be patient!
#remove noise
ctg_aoi <- readLAScatalog(list.files(aoi_path, full.names = T))
#function
filter_poi_noise = function(las)
{
# The function is automatically fed with LAScluster objects
# Here the input 'las' will a LAScluster
las <- readLAS(las) # Read the LAScluster
if (is.empty(las)) return(NULL) # Exit early (see documentation)
las <- filter_poi(las, Classification != 18)
return(las) # Return the filtered point cloud
}
opt_output_files(ctg_aoi) <- paste0(aoi_path,"/{*}_noise")
ctg_aoi <- classify_noise(ctg_aoi, sor(15,7))
#denoise using function filter_poi_noise
opt_output_files(ctg_aoi) <- paste0(aoi_path, "/{*}_denoise")
ctg_aoi <- catalog_apply(ctg_aoi, filter_poi_noise)
#work with denoised data from here on
ctg_aoi <- readLAScatalog(list.files(aoi_path, pattern = "_denoise.las", full.names = T))
#classify ground
opt_output_files(ctg_aoi) <- paste0(aoi_path, "/{*}_classified")
ctg <- classify_ground(ctg_aoi, csf())
dir.create(paste0(aoi_path, "/dtm"))
dtm_path <- paste0(aoi_path, "/dtm")
opt_output_files(ctg) <- paste0(dtm_path, "/{*}_dtm")
dtm <- rasterize_terrain(ctg, res = 10, algorithm = knnidw(k = 10L, p = 2))
crs(dtm) <- crs(vect(shp_kili)) #add crs to dtm
#plot(dtm)
# aspect and slope
aspect <- terra::terrain(dtm, v="aspect")
writeRaster(aspect,paste0(dtm_path,"/asp.tif"))
slope <- terra::terrain(dtm, v="slope")
writeRaster(aspect,paste0(dtm_path,"/slope.tif"))
plot_extract <- terra::extract(c(dtm,aspect,slope),vect(plots), fun = mean, na.rm =T)
plot_extract$ID <- plots$PlotID
colnames(plot_extract) <- c("PlotID","mean_dtm","mean_aspect", "mean_slope")
write.csv(plot_extract, "./plot_extract.csv") #just save a backup
# Note - for your final projects you can explore multiple ways of using lidar data!
```
## Hyperspectal data
* Get the necessary vegetation indices for each plot using Hyperspectral rasters provided to you.
```r
####################
# 2. Hyperspectal
####################
# Getting the hyperspectral rasters
path_hyp <- "D:/Kili_SES/course_bsc_upscaling_netra/Hyperspectral/hyperspectral_rasters/"
hy_fls <- list.files(path_hyp, pattern = ".tif", full.names = T) # 63 rasters
#34 plots in 2015
flight_1_plot_list <- c("cof1", "cof2", "cof3", "cof4", "cof5", "cof6","mai1", "mai2", "mai3","mai4","mai5", "sav1", "sav2", "sav3", "sav4", "sav5",
"hom4", "flm1", "flm2","gra2", "gra3", "gra4" ,"gra5", "gra6","flm6","hom5","hom1", "hom2","hom3","foc1","fod5","gra1","flm3", "flm4")
#29 plots 2016
flight_2_plot_list <- c("fed1","fed2","fed3","fed4","fed5","fer0","fer2","fer3","fer4","foc2","foc3","foc4","foc5","fod1","fod2","fod3","fpd2","fpd3","fpd4","fpd5","fpo1","fpo2","fpo3","fpo4","fpo5","hel1","hel2","hel3","hel4")
hy_fls_2015 <- hy_fls[grepl(pattern = paste(flight_1_plot_list, collapse = "|"), hy_fls)] # mai5 is missing
hy_fls_2016 <- hy_fls[grepl(pattern = paste(flight_2_plot_list, collapse = "|"), hy_fls)]
list_hy_fls1 <- pbapply::pblapply(seq_along(hy_fls_2015),
function(x){
sf::st_as_sf(as.polygons(terra::ext(rast(hy_fls_2015[[x]])),
crs="EPSG:32737"))
})
list_hy_fls2 <- pbapply::pblapply(seq_along(hy_fls_2016),
function(x){
sf::st_as_sf(as.polygons(terra::ext(rast(hy_fls_2016[[x]])),
crs="EPSG:32737"))
})
# lets load lidar to make our extents same
lidar <- list.files("D:/Kili_SES/course_bsc_upscaling_netra/LiDAR/data/output_dir/dtm", pattern = "_dtm.tif", full.names = T) #66 rasters
length(lidar)
lidar_1 <- lidar[grepl(pattern = paste(flight_1_plot_list, collapse = "|"), lidar)]
lidar_2 <- lidar[grepl(pattern = paste(flight_2_plot_list, collapse = "|"), lidar)]
lidar_extent1 <- pbapply::pblapply(seq_along(lidar_1),
function(x){
sf::st_as_sf(as.polygons(terra::ext(rast(lidar_1[[x]])),
crs="EPSG:32737"))
})
lidar_extent2 <- pbapply::pblapply(seq_along(lidar_2),
function(x){
sf::st_as_sf(as.polygons(terra::ext(rast(lidar_2[[x]])),
crs="EPSG:32737"))
})
#now we do intersection
list_ext_intersection_lidar_hs1 <- pbapply::pblapply(seq_along(list_hy_fls1),
function(x){
sf::st_intersection(lidar_extent1[[x]],
list_hy_fls1[[x]])
})
list_ext_intersection_lidar_hs2 <- pbapply::pblapply(seq_along(list_hy_fls2),
function(x){
sf::st_intersection(lidar_extent2[[x]],
list_hy_fls2[[x]])
})
# based on intersection we can crop our hypersepctral rasters
list_ext_intersection_lidar_hs1 <- pbapply::pblapply(seq_along(hy_fls_2015),
function(x){
crop(rast(hy_fls_2015[[x]]),
terra::ext(vect(list_ext_intersection_lidar_hs1[[x]])))
})
list_ext_intersection_lidar_hs2 <- pbapply::pblapply(seq_along(hy_fls_2016),
function(x){
crop(rast(hy_fls_2016[[x]]),
terra::ext(vect(list_ext_intersection_lidar_hs2[[x]])))
})
# getting the band info
# careful we have two band_info files - 2015 and 2016
#wavelength - take notice - 2015 has 160 abnds and 2016 has 158 abnds
band_info_2015 <- read.csv("D:/Kili_SES/course_bsc_upscaling_netra/Hyperspectral/band_info/band_meta_2015.csv")
band_info_2016 <- read.csv("D:/Kili_SES/course_bsc_upscaling_netra/Hyperspectral/band_info/band_meta_2016.csv")
str(band_info_2016) #gives a view of how the data looks
# get the wavelengths for the speclib
wavelength_2015 <- as.numeric(parse_number(band_info_2015$styles.title[2:161])) #160 bands
wavelength_2016 <- as.numeric(parse_number(band_info_2016$styles.title[2:159])) #158 bands
# data cleaning
hy_names_2015 <- substr(hy_fls_2015, 76,79) #34 this might change based on length of your file names #34 total
hy_names_2016 <- substr(hy_fls_2016, 76,79) #29 total
#make a list of your chosen vegetation indices
# we only need one for our example - NDVI as a proxy for Net Primary Production
vi = "NDVI"
#lets calculate vegetation indices by making a speclib
library(future)
future::plan(multisession, workers = 2L) #makes processing faster!
vi_stats_2015 <- lapply(seq(length(hy_fls_2015)), function(i){
hy_2015 <- speclib(brick(hy_fls_2015[i]), wavelength_2015)
hy_indices_2015 <- vegindex(hy_2015, index = vi)
hy_indices_2015 <- hy_indices_2015@spectra@spectra_ra
names(hy_indices_2015) <- vi_2015
#writeRaster(hy_indices_2015, filename = paste0(file.path(wd, "output/veg_indices_raster",hy_names_2015[i], "_vegindex.tif")))
vi_means_2015 <- as.data.frame(t(cellStats(hy_indices_2015, stat = "mean", na.rm = TRUE)))
vi_sd_2015 <- as.data.frame(t(cellStats(hy_indices_2015, stat = "sd", na.rm = TRUE)))
vi_mean <- data.frame(PlotID = hy_names_2015[i], vi_means_2015)
names(vi_mean)[2] <- paste( vi,"_mean", sep = "")
vi_sd <- data.frame(PlotID = hy_names_2015[i], vi_sd_2015)
names(vi_sd)[2] <- paste(vi, "_sd", sep = "")
vi_table <- left_join(vi_mean, vi_sd, by = "PlotID")
return(vi_table)
})
vi_table_2015 <- do.call(rbind, vi_stats_2015)
vi <- "NDVI"
future::plan(multisession, workers = 2L)
vi_stats_2016 <- lapply(seq(length(list_ext_intersection_lidar_hs2)), function(i){
hy_2016 <- speclib(raster::brick(list_ext_intersection_lidar_hs2[[i]]), wavelength_2016)
hy_indices_2016 <- vegindex(hy_2016, index = vi)
hy_indices_2016 <- hy_indices_2016@spectra@spectra_ra
names(hy_indices_2016) <- vi
#writeRaster(hy_indices_2016, filename = paste0(file.path(wd, "output/veg_indices_raster",hy_names_2016[i], "_vegindex.tif")))
vi_means_2016 <- as.data.frame(t(cellStats(hy_indices_2016, stat = "mean", na.rm = TRUE)))
vi_sd_2016 <- as.data.frame(t(cellStats(hy_indices_2016, stat = "sd", na.rm = TRUE)))
vi_mean <- data.frame(PlotID = hy_names_2016[i], vi_means_2016)
names(vi_mean)[2] <- paste( vi,"_mean", sep = "")
vi_sd <- data.frame(PlotID = hy_names_2016[i], vi_sd_2016)
names(vi_sd)[2] <- paste(vi, "_sd", sep = "")
vi_table <- left_join(vi_mean, vi_sd, by = "PlotID")
return(vi_table)
})
vi_table_2016 <- do.call(rbind, vi_stats_2016)
vi_table <- rbind(vi_table_2015, vi_table_2016)
write.csv(vi_table, "./vi_table.csv")
```
## Mean minimum temperature
* Get the temperature data (in this case mean of minimum temperature) for each plot
```r
#############################
# 3. mean minimum temperature
#############################
mmt <- read.csv("./upscaling_data/plot_data/temp_kili_stations_averaged.csv", row.names = 1)
colnames(mmt)
mmt <- mmt[,c(1,15)]
head(mmt, n = 2)
# PlotID mean_mmt
#1 cof1 19.443259
#2 cof2 19.759052
```
## pH
* Get the pH for each plot
```
#############################
# 4. pH
#############################
pH <- rast("./upscaling_data/rasters/ph_kili.tif")
plot(pH)
ph_df <- terra::extract(pH,vect(plots), fun = mean, na.rm =T)
ph_df$ID <- plots$PlotID
names(ph_df)[1] <- "PlotID"
```
<img src="ph_plots.png" width="1500" height="500" align="centre" vspace="10" hspace="20">
## All things together
* Gather all the derived plot level information in a single dataframe.
```r
################################
# 5. Making a single dataframe
################################
#we make use of the left_join function in the dplyr library
predictors <- purrr::reduce(list(plot_extract,mmt,ph_df, vi_table), dplyr::left_join, by = 'PlotID')
#great we are now ready with our list of predictors
#lets add our response variable
plantsSR <- read.table("upscaling_data/plot_data/Biodiversity_Data.csv", sep = ",", header = T)
colnames(plantsSR)
plantsSR <- plantsSR[,c(1,2,17)]
names(plantsSR)[1] <- "PlotID"
model_data <- purrr::reduce(list(predictors,plantsSR), dplyr::left_join, by = 'PlotID')
head(model_data, n = 2)
#PlotID mean_dtm mean_aspect mean_slope mean_mmt pH_predicted_mean_0_20_cm pH_predicted_mean_20_50_cm pH_standard_deviation_0_20_cm pH_standard_deviation_20_50_cm NDVI_mean NDVI_sd cat SRallplants
#1 cof1 1282.446 150.7041 6.639121 19.44326 5.735425 5.735425 0.2 0.2144840 0.8543281 0.1302960 cof 59
#2 cof2 1323.983 209.7581 2.697647 19.75905 5.978076 5.915743 0.2 0.1520235 0.7731032 0.1623179 cof 44
write.csv(model_data, "./model_data.csv")
```
| 36.881137 | 227 | 0.60702 | eng_Latn | 0.415821 |
dcbc2621e19586073415903e6409287e04839208 | 292 | md | Markdown | CHANGELOG.md | linsk1998/postcss-unrgba | 10cf1010eced7847982729a63c30fd5e0924ee43 | [
"CC0-1.0"
] | 1 | 2021-10-05T02:43:42.000Z | 2021-10-05T02:43:42.000Z | CHANGELOG.md | linsk1998/postcss-unrgba | 10cf1010eced7847982729a63c30fd5e0924ee43 | [
"CC0-1.0"
] | 1 | 2022-01-08T12:27:00.000Z | 2022-01-08T12:46:49.000Z | CHANGELOG.md | linsk1998/postcss-unrgba | 10cf1010eced7847982729a63c30fd5e0924ee43 | [
"CC0-1.0"
] | 1 | 2022-01-08T10:58:02.000Z | 2022-01-08T10:58:02.000Z | ## 1.1.1 (2015-10-24)
- Updated: Tests and documentation
## 1.1.0 (2015-10-10)
- Added: Backgrounds with no alpha are made `transparent`
- Updated: Backgrounds with filter preserve other background style
- Updated: Documentation and tests
## 1.0.0 (2015-10-10)
- Added: Initial version
| 20.857143 | 67 | 0.715753 | eng_Latn | 0.923646 |
dcbc6cce7548226a2b6a801064c66a1774e5b550 | 9,237 | md | Markdown | aspnet/aspnet/overview/owin-and-katana/owin-middleware-in-the-iis-integrated-pipeline.md | terrajobst/AspNetDocs.cs-cz | 89957a3d61104043d6f0f0240d81e80c6dcb51ca | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnet/aspnet/overview/owin-and-katana/owin-middleware-in-the-iis-integrated-pipeline.md | terrajobst/AspNetDocs.cs-cz | 89957a3d61104043d6f0f0240d81e80c6dcb51ca | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnet/aspnet/overview/owin-and-katana/owin-middleware-in-the-iis-integrated-pipeline.md | terrajobst/AspNetDocs.cs-cz | 89957a3d61104043d6f0f0240d81e80c6dcb51ca | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
uid: aspnet/overview/owin-and-katana/owin-middleware-in-the-iis-integrated-pipeline
title: Middleware OWIN v integrovaném kanálu IIS | Microsoft Docs
author: Praburaj
description: Tento článek ukazuje, jak spustit OWIN middlewar Components (OMCs) v integrovaném kanálu služby IIS a jak nastavit událost kanálu, na které OMC běží. Měli byste...
ms.author: riande
ms.date: 11/07/2013
ms.assetid: d031c021-33c2-45a5-bf9f-98f8fa78c2ab
msc.legacyurl: /aspnet/overview/owin-and-katana/owin-middleware-in-the-iis-integrated-pipeline
msc.type: authoredcontent
ms.openlocfilehash: 7d157fb6bd9e2ae9b55af41ef06c1eb5e6310ce1
ms.sourcegitcommit: e7e91932a6e91a63e2e46417626f39d6b244a3ab
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 03/06/2020
ms.locfileid: "78617136"
---
# <a name="owin-middleware-in-the-iis-integrated-pipeline"></a>Middleware OWIN v integrovaném kanálu IIS
[Praburaj Thiagarajan](https://github.com/Praburaj), [Rick Anderson](https://twitter.com/RickAndMSFT)
> Tento článek ukazuje, jak spustit OWIN middlewar Components (OMCs) v integrovaném kanálu služby IIS a jak nastavit událost kanálu, na které OMC běží. Před čtením tohoto kurzu byste si měli projít přehled o [detekci spouštěcí třídy](owin-startup-class-detection.md) [projektu Katana](an-overview-of-project-katana.md) a Owin. Tento kurz napsal Rick Anderson ( [@RickAndMSFT](https://twitter.com/#!/RickAndMSFT) ), Chris Rossův, Praburaj Thiagarajan a Howard Dierking ( [@howard\_Dierking](https://twitter.com/howard_dierking) ).
I když jsou součásti [Owin](an-overview-of-project-katana.md) middleware (OMCs) primárně určené ke spuštění v kanálu Server-nezávislá, je možné spustit OMC i v integrovaném kanálu služby IIS (**klasický režim není podporován)** . OMC se dá použít v integrovaném kanálu IIS tak, že do konzoly Správce balíčků nainstalujete následující balíček (PMC):
[!code-console[Main](owin-middleware-in-the-iis-integrated-pipeline/samples/sample1.cmd)]
To znamená, že všechny aplikační architektury, i ty, které ještě nemůžou běžet mimo IIS a System. Web, můžou využívat stávající součásti middlewaru OWIN.
> [!NOTE]
> Všechny balíčky `Microsoft.Owin.Security.*` se dodávají s novým systémem identit v Visual Studio 2013 (například soubory cookie, účet Microsoft, Google, Facebook, Twitter, [nosný token](http://self-issued.info/docs/draft-ietf-oauth-v2-bearer.html), OAuth, autorizační Server, JWT, Azure Active Directory a Active Directory Federation Services), jsou vytvořené jako OMCs a dají se použít ve scénářích hostovaných v místním prostředí i v rámci služby IIS.
## <a name="how-owin-middleware-executes-in-the-iis-integrated-pipeline"></a>Jak OWIN middleware provádí v integrovaném kanálu IIS
V případě konzolových aplikací OWIN se kanál aplikace sestavený pomocí [Konfigurace spuštění](owin-startup-class-detection.md) nastavuje podle pořadí, ve kterém se komponenty přidávají pomocí metody `IAppBuilder.Use`. To znamená, že kanál OWIN v modulu runtime [Katana](an-overview-of-project-katana.md) zpracuje OMCs v pořadí, v jakém byly zaregistrovány pomocí `IAppBuilder.Use`. V kanálu integrovaném se službou IIS se kanál žádostí [skládá z](https://msdn.microsoft.com/library/ms178468(v=vs.85).aspx) odebíraného předdefinované sady událostí kanálu, jako je například [beginRequest](https://msdn.microsoft.com/library/system.web.httpapplication.beginrequest.aspx), [AuthenticateRequest](https://msdn.microsoft.com/library/system.web.httpapplication.authenticaterequest.aspx), [AuthorizeRequest](https://msdn.microsoft.com/library/system.web.httpapplication.authorizerequest.aspx)atd.
Pokud jsme porovnali OMC s modulem [HttpModule](https://msdn.microsoft.com/library/zec9k340(v=vs.85).aspx) v ASP.NET World, musí být OMC zaregistrovaný do správné předem definované události kanálu. Například `MyModule` HttpModule se vyvolá, když se do fáze [AuthenticateRequest](https://msdn.microsoft.com/library/system.web.httpapplication.authenticaterequest.aspx) v kanálu dostane požadavek:
[!code-csharp[Main](owin-middleware-in-the-iis-integrated-pipeline/samples/sample2.cs?highlight=10)]
Aby se OMC účastnila stejného řazení na základě událostí, zkontroluje kód modulu runtime [Katana](an-overview-of-project-katana.md) prostřednictvím [konfigurace spouštění](owin-startup-class-detection.md) a přihlásí každou součást middlewaru do integrované události kanálu. Například následující OMC a registrační kód vám umožní zobrazit výchozí registraci události pro middlewarové komponenty. (Podrobnější pokyny k vytvoření třídy pro spuštění OWIN najdete v tématu [detekce spouštěcí třídy Owin](owin-startup-class-detection.md).)
1. Vytvořte prázdný projekt webové aplikace a pojmenujte ho **owin2**.
2. V konzole správce balíčků (PMC) spusťte následující příkaz:
[!code-console[Main](owin-middleware-in-the-iis-integrated-pipeline/samples/sample3.cmd)]
3. Přidejte `OWIN Startup Class` a pojmenujte ji `Startup`. Nahraďte generovaný kód následujícím (změny jsou zvýrazněny):
[!code-csharp[Main](owin-middleware-in-the-iis-integrated-pipeline/samples/sample4.cs?highlight=5-7,15-36)]
4. Stiskněte klávesu F5 a spusťte aplikaci.
Konfigurace po spuštění nastaví kanál se třemi součástmi middlewaru, první dva zobrazují diagnostické informace a poslední z nich odpovídá na události (a také zobrazení diagnostických informací). Metoda `PrintCurrentIntegratedPipelineStage` zobrazuje integrovanou událost kanálu, kterou tento middleware vyvolal a zprávu. Ve výstupním okně se zobrazí následující okna:
[!code-console[Main](owin-middleware-in-the-iis-integrated-pipeline/samples/sample5.cmd)]
Modul runtime Katana namapovaný každou ze součástí middleware OWIN na [PreExecuteRequestHandler](https://msdn.microsoft.com/library/system.web.httpapplication.prerequesthandlerexecute.aspx) ve výchozím nastavení, což odpovídá [PreRequestHandlerExecute](https://msdn.microsoft.com/library/system.web.httpapplication.prerequesthandlerexecute.aspx)události kanálu služby IIS.
## <a name="stage-markers"></a>Značky fáze
Můžete označit OMCs ke spuštění ve specifických fázích kanálu pomocí metody rozšíření `IAppBuilder UseStageMarker()`. Pokud chcete v určité fázi spustit sadu middlewarových součástí, vložte značku fáze hned po nastavení poslední komponenty během registrace. Existují pravidla, ve kterých fáze kanálu můžete spustit middleware a které komponenty objednávky musí být spuštěny (pravidla jsou vysvětlena dále v kurzu). Do kódu `Configuration` přidejte `UseStageMarker` metoda, jak je znázorněno níže:
[!code-csharp[Main](owin-middleware-in-the-iis-integrated-pipeline/samples/sample6.cs?highlight=13,19)]
Volání `app.UseStageMarker(PipelineStage.Authenticate)` nakonfiguruje všechny dříve registrované komponenty middlewaru (v tomto případě naše dvě diagnostické komponenty), které se spustí ve fázi ověřování kanálu. Poslední součást middleware (která zobrazuje diagnostiku a reaguje na požadavky) se spustí ve fázi `ResolveCache` (událost [ResolveRequestCache](https://msdn.microsoft.com/library/system.web.httpapplication.resolverequestcache.aspx) ).
Stiskněte klávesu F5 a spusťte aplikaci. V okně výstup se zobrazí následující:
[!code-console[Main](owin-middleware-in-the-iis-integrated-pipeline/samples/sample7.cmd)]
## <a name="stage-marker-rules"></a>Pravidla značek fáze
Owin middlewarové komponenty (OMC) je možné nakonfigurovat tak, aby běžely v následujících událostech fáze OWIN kanálu:
[!code-csharp[Main](owin-middleware-in-the-iis-integrated-pipeline/samples/sample8.cs)]
1. Ve výchozím nastavení se OMCs spustí při poslední události (`PreHandlerExecute`). Proto náš první ukázkový kód zobrazuje "PreExecuteRequestHandler".
2. Pomocí metody `app.UseStageMarker` můžete zaregistrovat OMC, který se má spustit dříve, v libovolné fázi kanálu OWIN, který je uvedený ve výčtu `PipelineStage`.
3. OWIN kanál a kanál služby IIS jsou seřazené, proto musí být volání `app.UseStageMarker` v daném pořadí. Nemůžete nastavit obslužnou rutinu události na událost, která předchází poslední události zaregistrovanou do do `app.UseStageMarker`. Například *po* volání:
[!code-console[Main](owin-middleware-in-the-iis-integrated-pipeline/samples/sample9.cmd)]
volání `app.UseStageMarker` předávání `Authenticate` nebo `PostAuthenticate` nebudou dodržena a nebude vyvolána žádná výjimka. OMCs se spouští v poslední fázi, která je ve výchozím nastavení `PreHandlerExecute`. Značky fáze se používají, aby je bylo možné spustit dříve. Pokud zadáte značky fáze mimo pořadí, Zaokrouhleme na předchozí značku. Jinými slovy, přidání značky fáze říká "spustit ne později než fáze X". OMC se spustí na první značce fáze, kterou jste přidali za ně v kanálu OWIN.
4. Nejstarší fáze volání `app.UseStageMarker` WINS. Pokud například přepnete pořadí `app.UseStageMarker` volání z předchozího příkladu:
[!code-csharp[Main](owin-middleware-in-the-iis-integrated-pipeline/samples/sample10.cs?highlight=13,19)]
Zobrazí se okno výstup:
[!code-console[Main](owin-middleware-in-the-iis-integrated-pipeline/samples/sample11.cmd)]
OMCs se spustí ve fázi `AuthenticateRequest`, protože poslední OMC zaregistrovaný v události `Authenticate` a událost `Authenticate` předchází všem ostatním událostem.
| 100.402174 | 888 | 0.808704 | ces_Latn | 0.998309 |
dcbc9b98c749e40507003a083ce79284e3874c4a | 61 | md | Markdown | README.md | baa-lamb/Tennis | c688e1f84447d6971db6995c3c26141e46f744db | [
"MIT"
] | null | null | null | README.md | baa-lamb/Tennis | c688e1f84447d6971db6995c3c26141e46f744db | [
"MIT"
] | null | null | null | README.md | baa-lamb/Tennis | c688e1f84447d6971db6995c3c26141e46f744db | [
"MIT"
] | null | null | null | # Tennis
cursach
java -cp target/Main-1.0-SNAPSHOT.jar Main
| 12.2 | 42 | 0.754098 | kor_Hang | 0.377039 |
dcbe7da81a1c77fa6c206c6e49f0409a779c779c | 9,460 | md | Markdown | articles/databox/data-box-deploy-export-picked-up.md | silvercr/azure-docs.es-es | a40a316665a10e4008b60dabd50cbb3ec86e9c1d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/databox/data-box-deploy-export-picked-up.md | silvercr/azure-docs.es-es | a40a316665a10e4008b60dabd50cbb3ec86e9c1d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/databox/data-box-deploy-export-picked-up.md | silvercr/azure-docs.es-es | a40a316665a10e4008b60dabd50cbb3ec86e9c1d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Tutorial para enviar Azure Data Box en el orden de exportación | Microsoft Docs
description: Aprenda a enviar Azure Data Box a Microsoft una vez completado el orden de exportación
services: databox
author: alkohli
ms.service: databox
ms.subservice: pod
ms.topic: tutorial
ms.date: 07/10/2020
ms.author: alkohli
ms.openlocfilehash: 7023d29bcb559f4edf11b374b9bfb959e968f626
ms.sourcegitcommit: 3541c9cae8a12bdf457f1383e3557eb85a9b3187
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 07/09/2020
ms.locfileid: "86208863"
---
# <a name="tutorial-return-azure-data-box-preview"></a>Tutorial: Devolución de Azure Data Box Disk (versión preliminar)
En este tutorial se describe cómo devolver Azure Data Box y borrar los datos una vez que el dispositivo se reciba en los datos de Azure.
En este tutorial, aprenderá sobre temas como:
> [!div class="checklist"]
> * Requisitos previos
> * Preparación para el envío
> * Envío de Data Box a Microsoft
> * Eliminación de datos de Data Box
[!INCLUDE [Data Box feature is in preview](../../includes/data-box-feature-is-preview-info.md)]
## <a name="prerequisites"></a>Requisitos previos
Antes de comenzar, asegúrese de que:
- Ha completado el [Tutorial: Copia de datos de Azure Data Box ](data-box-deploy-export-copy-data.md).
- Los trabajos de copia están completos. Preparación para el envío no se puede ejecutar mientras que los trabajos de copia están en curso.
## <a name="prepare-to-ship"></a>Preparación para el envío
[!INCLUDE [data-box-export-prepare-to-ship](../../includes/data-box-export-prepare-to-ship.md)]
Los siguientes pasos vienen determinados por el lugar al que se vaya a devolver el dispositivo.
## <a name="ship-data-box-back"></a>Devolución de Data Box
Asegúrese de que la copia de datos se ha completado en el dispositivo y que la **Preparación para el envío** se ha realizado correctamente. Según la región a dónde envíe el dispositivo, el procedimiento es distinto.
## <a name="in-us-canada-europe"></a>[En EE. UU., Canadá, Europa](#tab/in-us-canada-europe)
Realice los pasos siguientes si va a devolver el dispositivo en Estados Unidos, Canadá o Europa.
1. Asegúrese de que el dispositivo está apagado y de que se han quitado los cables.
2. Enrolle y coloque de forma segura el cable de alimentación que se proporcionó junto con el dispositivo en la parte posterior del mismo.
3. Asegúrese de que la etiqueta de envío aparece en la pantalla de tinta electrónica y programe una recogida con su transportista. Si la etiqueta está dañada, se ha perdido o no aparece en la pantalla de tinta electrónica, póngase en contacto con el servicio de soporte técnico de Microsoft. Si el soporte técnico lo sugiere, puede ir a **Información general > Descargar la etiqueta de envío** en Azure Portal. Descargue la etiqueta de envío y péguela en el dispositivo.
4. Programe una recogida con UPS si está devolviendo el dispositivo. Para programar una recogida:
- Llame a la oficina local de UPS (número gratuito específico del país o región).
- En la llamada, indique el número de seguimiento del envío inverso, que se muestra en la pantalla E-ink (Tinta electrónica) o la etiqueta impresa.
- Si no se indica el número de seguimiento, UPS solicitará que el abono de una cantidad adicional en la recogida.
En lugar de programar la recogida, también devolver la instancia de Data Box en la ubicación de recogida más cercana.
4. Una vez que el transportista recoge y examina el dispositivo Data Box, el estado del pedido en el portal se actualiza a **Picked up** (Recogido). También se muestra un identificador de seguimiento.
## <a name="in-australia"></a>[En Australia](#tab/in-australia)
Los centros de datos de Azure en Australia tienen una notificación de seguridad adicional. Todos los envíos entrantes deben tener una notificación avanzada. Realice los pasos siguientes si el envío se realiza en Australia.
1. Conserve la caja original utilizada para devolver el dispositivo.
2. Asegúrese de que la copia de datos se ha completado en el dispositivo y que la ejecución **Preparación para el envío** se ha realizado correctamente.
3. Apague el dispositivo y quite los cables.
4. Enrolle y coloque de forma segura el cable de alimentación que se suministró junto con el dispositivo en la parte posterior del mismo.
5. Utilice el vínculo [DHL Link](https://mydhl.express.dhl/au/en/schedule-pickup.html#/schedule-pickup#label-reference) para reservar en línea una recogida.
## <a name="in-japan"></a>[En Japón](#tab/in-japan)
1. Conserve la caja original utilizada para devolver el dispositivo.
2. Apague el dispositivo y quite los cables.
3. Enrolle y coloque de forma segura el cable de alimentación que se suministró junto con el dispositivo en la parte posterior del mismo.
4. Escriba el nombre y la dirección de la empresa en la nota de entrega como información del remitente.
5. Envíe un correo electrónico a Quantium Solutions mediante la plantilla de correo electrónico que tiene a continuación.
- Tanto si no se incluyó la nota de entrega de Japan Post Chakubarai como si falta, especifíquelo en este correo electrónico. Quantium Solutions Japan se encargará de solicitar a Japan Post que le proporcionen una nota de entrega en la recogida.
- Si tiene varios pedidos, envíe un correo electrónico para comprobar cada recogida individual.
```
To: Customerservice.JP@quantiumsolutions.com
Subject: Pickup request for Azure Data Box|Job name:
Body:
- Japan Post Yu-Pack tracking number (reference number):
- Requested pickup date:mmdd (Select a requested time slot from below).
a. 08:00-13:00
b. 13:00-15:00
c. 15:00-17:00
d. 17:00-19:00
```
3. Recibirá un correo electrónico de confirmación de Quantium Solutions tras concertar una recogida. Este correo electrónico también incluye información sobre la nota de entrega de Chakubarai.
Si es necesario, puede ponerse en contacto con el soporte técnico de Quantium Solutions (en japonés) en:
- Correo electrónico: Customerservice.JP@quantiumsolutions.com
- Teléfono:03-5755-0150
## <a name="in-singapore"></a>[En Singapur](#tab/in-singapore)
1. Conserve la caja original utilizada para devolver el dispositivo.
2. Anote el número de seguimiento (que se muestra como número de referencia en la página Preparación para el envío de la interfaz de usuario web local de Data Box). Estará disponible cuando el paso de preparación para el envío se complete correctamente. Descargue la etiqueta de envío de esta página y péguela en la caja de embalaje.
3. Apague el dispositivo y quite los cables.
4. Enrolle y coloque de forma segura el cable de alimentación que se suministró junto con el dispositivo en la parte posterior del mismo.
5. Envíe un correo electrónico al servicio de atención al cliente de SingPost utilizando la siguiente plantilla de correo electrónico con el número de seguimiento.
```
To: kadcustcare@singpost.com
Subject: Microsoft Azure Pick-up - OrderName
Body:
1. Requestor name
2. Requestor contact number
3. Requestor collection address
4. Preferred collection date
```
> [!NOTE]
> Para las solicitudes de reserva recibidas en un día laborable:
> - Antes de las 15:00 p.m., la recogida se realizará el siguiente día laborable entre las 9:00 a.m. y las 13:00 p. m.
> - Después de las 15:00 p.m., la recogida se realizará el siguiente día laborable entre las 14:00 p.m. y las 18:00 p. m.
## <a name="self-managed"></a>[Autoadministrado](#tab/in-selfmanaged)
Si usa Data Box en Japón, Singapur, Corea y Oeste de Europa, y ha seleccionado la opción de envío autoadministrado durante la creación del pedido, siga estas instrucciones.
1. Una vez que este paso se complete correctamente, anote el código de autorización que se muestra en la página Preparación para el envío de la interfaz de usuario web local de Data Box.
2. Apague el dispositivo y quite los cables. Enrolle y coloque de forma segura el cable de alimentación que se suministró junto con el dispositivo en la parte posterior del mismo.
3. Envíe un correo electrónico al equipo de operaciones de Azure Data Box mediante la siguiente plantilla cuando esté listo para devolver el dispositivo.
```
To: adbops@microsoft.com
Subject: Request for Azure Data Box drop-off for order: ‘orderName’
Body:
1. Order name
2. Authorization code available after Prepare to Ship has completed [Yes/No]
3. Contact name of the person dropping off. You will need to display a Government approved ID during the drop off.
```
---
## <a name="erasure-of-data-from-data-box"></a>Eliminación de datos de Data Box
Una vez que el dispositivo llegue al centro de datos de Azure, Data Box elimina los datos de los discos según las [directrices de la revisión 1 de NIST SP 800-88](https://csrc.nist.gov/News/2014/Released-SP-800-88-Revision-1,-Guidelines-for-Medi).
## <a name="next-steps"></a>Pasos siguientes
En este tutorial, ha aprendido sobre temas relacionados; por ejemplo:
> [!div class="checklist"]
> * Requisitos previos
> * Preparación para el envío
> * Envío de Data Box a Microsoft
> * Eliminación de datos de Data Box
Avance al siguiente artículo para obtener información sobre cómo administrar Data Box.
> [!div class="nextstepaction"]
> [Administrar Data Box a través de Azure Portal](./data-box-portal-admin.md)
| 55.321637 | 471 | 0.762368 | spa_Latn | 0.98271 |
dcbe8f873a6faa1ae3f1b9d7a974674344b6eeb6 | 283 | md | Markdown | _posts/1987-10-21-insurance-commissioner-bill-gunter-urges.md | MiamiMaritime/miamimaritime.github.io | d087ae8c104ca00d78813b5a974c154dfd9f3630 | [
"MIT"
] | null | null | null | _posts/1987-10-21-insurance-commissioner-bill-gunter-urges.md | MiamiMaritime/miamimaritime.github.io | d087ae8c104ca00d78813b5a974c154dfd9f3630 | [
"MIT"
] | null | null | null | _posts/1987-10-21-insurance-commissioner-bill-gunter-urges.md | MiamiMaritime/miamimaritime.github.io | d087ae8c104ca00d78813b5a974c154dfd9f3630 | [
"MIT"
] | null | null | null | ---
title: Insurance Commissioner Bill Gunter urges
tags:
- Oct 1987
---
Insurance Commissioner Bill Gunter urges an investigation into illegal dredging in North Key Largo.
Newspapers: **Miami Morning News or The Miami Herald**
Page: **14**, Section: **A**
| 23.583333 | 101 | 0.681979 | eng_Latn | 0.888705 |
dcbeabcf4e8ea68c196e1e5e662836b71466c250 | 670 | md | Markdown | README.md | MorrisMA/1PPS-DPLL | 4d267eaea33a9897c07f2b06fd7985d052208e38 | [
"FSFAP"
] | 14 | 2016-08-16T16:37:56.000Z | 2022-01-27T01:41:24.000Z | README.md | MorrisMA/1PPS-DPLL | 4d267eaea33a9897c07f2b06fd7985d052208e38 | [
"FSFAP"
] | null | null | null | README.md | MorrisMA/1PPS-DPLL | 4d267eaea33a9897c07f2b06fd7985d052208e38 | [
"FSFAP"
] | 5 | 2017-04-26T20:18:49.000Z | 2022-02-12T16:51:37.000Z | 1PPS Digital Phase-Locked Loop
=======================
Copyright (C) 2008-2016, Michael A. Morris <morrisma@mchsi.com>.
All Rights Reserved.
Released under GPL v3.
General Description
-------------------
This project provides a synthesizable DPLL that locks a DDS-based digital
oscillator to an external 1PPS timing source.
Implementation
--------------
The implementation of the 1PPS DPLL core provided consists of the following
Verilog source files:
DPLLv2.v - DPLL using Phase-Frequency Detector
VCODrvFilter.v - Multiplier-less Loop Filter
tb_DPLLv2.v - Completed 1PPS DPLL testbench
| 26.8 | 77 | 0.647761 | eng_Latn | 0.800165 |
dcbfa553959e8da1007344e387b5783e479613ec | 215 | md | Markdown | posts/session-3-1.md | jenkoian/tns-lockdown-activities | d8b1c35d300fc9f2c0716f5e89c30006d9c74cf7 | [
"MIT"
] | null | null | null | posts/session-3-1.md | jenkoian/tns-lockdown-activities | d8b1c35d300fc9f2c0716f5e89c30006d9c74cf7 | [
"MIT"
] | 1 | 2021-09-02T08:26:59.000Z | 2021-09-02T08:26:59.000Z | posts/session-3-1.md | jenkoian/tns-lockdown-activities | d8b1c35d300fc9f2c0716f5e89c30006d9c74cf7 | [
"MIT"
] | null | null | null | ---
title: Session 3.1
date: 2020-04-29T12:54:47.519Z
tags:
- week3
---
[Agility](https://res.cloudinary.com/jenko/image/upload/v1588066851/tns-lockdown-activities/week3/session1/Week_3_Agility_Session_wa8qy7.pdf) | 30.714286 | 141 | 0.776744 | yue_Hant | 0.235586 |
dcc10f4d34520952b66e2e8d81e5f4465d6a34fb | 204 | md | Markdown | _guide/bio_h/evolution-cellular-life.md | oliviachang29/binaary.co | d9e571f8a7e39ca537a42f5d3da679860e341abb | [
"MIT"
] | null | null | null | _guide/bio_h/evolution-cellular-life.md | oliviachang29/binaary.co | d9e571f8a7e39ca537a42f5d3da679860e341abb | [
"MIT"
] | 1 | 2019-12-19T10:35:20.000Z | 2019-12-19T10:35:20.000Z | _guide/bio_h/evolution-cellular-life.md | oliviachang29/binaary.co | d9e571f8a7e39ca537a42f5d3da679860e341abb | [
"MIT"
] | 1 | 2018-11-11T21:57:53.000Z | 2018-11-11T21:57:53.000Z | ---
title: Evolution of Cellular Life
categories:
- bio_h
layout: guide
link: https://docs.google.com/document/d/1ls--1qz-TcQ41l9Pmm9D7ICBSESwAXy-r1zsOqS_AMo/
description: Life in prehistoric times.
---
| 20.4 | 86 | 0.77451 | yue_Hant | 0.6951 |
dcc1a77c5a22bf60ab6e72112cf2e08a9a41d322 | 3,802 | md | Markdown | _posts/2018-09-08-always-keep-your-cryptocurrency-on-a-cold-storage-hardware-wallet.md | arshad115/arshad115.github.io | 40b18e442efabb6cc4a78c1ba523d9d10ddd4dc9 | [
"MIT"
] | 1 | 2018-06-06T23:57:47.000Z | 2018-06-06T23:57:47.000Z | _posts/2018-09-08-always-keep-your-cryptocurrency-on-a-cold-storage-hardware-wallet.md | arshad115/arshad115.github.io | 40b18e442efabb6cc4a78c1ba523d9d10ddd4dc9 | [
"MIT"
] | 1 | 2018-08-16T13:42:15.000Z | 2018-08-20T12:13:09.000Z | _posts/2018-09-08-always-keep-your-cryptocurrency-on-a-cold-storage-hardware-wallet.md | arshad115/arshad115.github.io | 40b18e442efabb6cc4a78c1ba523d9d10ddd4dc9 | [
"MIT"
] | null | null | null | ---
title: "Always keep your cryptocurrency on a hardware wallet or cold storage - Not on exchanges!"
category: Security
tags:
- Cryptocurrency
- Bitcoin
- Ethereum
- Ledger
- Ledger Nano S
header:
image: /assets/images/ledger-wallet.png
teaser: /assets/images/ledger_icon.png
comments: true
---
As with any type of currency or asset, you would always want to keep it in a safe place. Safe places are safe boxes in your home, banks, vaults, wallet and in the case of digital crypto currencies, a cold storage or a hardware wallet. As we all know online banking is pretty safe, but its not impossible to hack, many banks have been hacked in the past. Cryptocurrency exchanges are no different. They are essentially online system which *CAN* be hacked. Buying cryptocurrency and keeping it online in an exchange is the same as, buying something from a store and letting them hold it for you. It is possible, that the next day you visit the store, its not there anymore, or taken over by someone else and they refuse to give you your belongings. You wouldn't trust them, right? Online exchanges are the same, they are online stores for buying crypto. It worth mentioning the example of [Mt. Gox](https://en.wikipedia.org/wiki/Mt._Gox). Apparently, 850,000 bitcoins were *stolen*. They filed a bankruptcy and people lost their money forever. *Who knows, it could be an inside job (it's my personal opinion, please don't sue me!)*. Countless other exchanges have been hacked in the past. Case in point, right after buying cryptocurrency, store it in a safe wallet. Safe wallets are paper wallets or hardware wallets aka cold storage.
### Paper wallets
![Paper wallets]({{ "/assets/images/posts/paper-wallet.jpg" | absolute_url }})
Paper wallets are wallets which you can create online and then transfer your cyrpto onto them They are called paper wallets, because they can be printed and stored on paper. When you create a paper wallet, you are given a private key or password for the wallet, which should be kept safe in order to use the wallet. You can read more about them [here](https://en.bitcoin.it/wiki/Paper_wallet).
### Hardware wallets
While paper wallets safe, they are not exactly user friendly. Going through the hassle of creating a wallet and storing them all safely offline is no easy job. The hardware wallets, solve that problem. Hardware wallets are basically a usb stick with strong encryption which stores your private key or passcode for a variety of cryptocurrency wallets. The private key never leaves the wallet and you always have to authorize payments on the wallet before executing them. They are secure, portable and easy to use. There are plenty of options available, two of the most popular ones are [Ledger](https://www.ledger.com?r=febd7201637a) and [Trezor](https://shop.trezor.io/?a=arshadmehmood.com). My favorite is a [Ledger Nano S](https://www.ledger.com?r=febd7201637a). It has plenty of support for various cryptocurrencies. It comes with a strong encryption. It's pretty much uncrackable(to my knowledge, till this day). It has a very good support available online and an open source community which adds new coins features. If there is a security vulnerability found, it is immediately fixed. It is always worth it to spend $100 or 100€ than to lose $500. I also own one and you can buy yourself a [Ledger Nano S here](https://www.ledger.com?r=febd7201637a):
[![Ledger Nano S - The secure hardware wallet](https://www.ledgerwallet.com/images/promo/nano-s/ledger_nano-s_8-5-0x4-2-0.jpg)](https://www.ledger.com?r=febd7201637a)
Or buy a Trezor here:
[![Trezor hardware wallet](https://trezor.io/static/images/devices.webp)](https://shop.trezor.io/?a=arshadmehmood.com)
Support me with ETH: `0x681a83007bC52C0bF42B41263Dc498f9Ef7af02A`
| 108.628571 | 1,334 | 0.776433 | eng_Latn | 0.998781 |
dcc1fadccf2c6f968c43c19196b556ab7c43efcd | 1,233 | md | Markdown | readme.md | aldy120/linux-shell | 8d2f98ab3b2adca908f3d118cfe243ce30960f9c | [
"MIT"
] | null | null | null | readme.md | aldy120/linux-shell | 8d2f98ab3b2adca908f3d118cfe243ce30960f9c | [
"MIT"
] | null | null | null | readme.md | aldy120/linux-shell | 8d2f98ab3b2adca908f3d118cfe243ce30960f9c | [
"MIT"
] | null | null | null | # Linux Shell: 程式設計與管理實務
這是我個人的閱讀筆記,本書作者為臥龍小三。
## Shell
指的是 user 跟 OS kernel 中間的 interface。
最早的版本是 1979 年 Bourne Shell (sh) ,之後有百家爭鳴。
## Bash Shell (Bourne Again Shell)
比較新的版本,相容於 Bash ,汲取百家特長。可程式化,也就是撰寫 shell script。
## `/dev/null`
這是一個空檔案,可以用它來清空別的檔案 (把他複製到別的檔案上覆蓋掉別的檔案)。
## 查看現在的 shell
檢查目前用的 shell 名稱
```
echo $SHELL
```
檢查目前用的 bash shell 版本
```
echo $BASH_VERSION
```
# 基礎概念介紹
## 登入
- local login
- remote login
local login 預設會有 7 個 tty 可供使用,其中除了 tty7 都是文字介面,只有 tty7 是圖形介面。使用 ctrl+alt+F1 到 ctrl+alt+F7 切換。
# 切換目錄
可先用 pushd 存一個位置,然後用 popd 快速到該位置。
```
pushd .
cd /usr/
popd
```
# 特殊權限
## set user id
執行時,用檔案擁有者 (user) 身份,例如原本是 0755 要寫成 4755 ,或是 rwsr-xr-x。
## set group id
執行時,用檔案所有群組身份。0755 -> 2755
## sticky bit
除了使用者之外不能刪掉此檔案。0755 -> 1755
# wildcard
*, ? ,[a-z]
就像 regular expression
# Brace Expansion
這超強的,有 Cartesian product 的感覺。請小心,逗號後面不要加空白,在 bash shell 空白非常有意思。
還可以自動補完
```
echo {1..10..3}
echo
```
可以在前面補 0 ,但就只能補 0 ,不能補別的。
```
echo {001..10}
```
# Standard io
- stdin 0
- stdout 1
- stderr 2
# Redirect
## `<`
把鍵盤輸入換成檔案輸入
```
cat < hello.sh
```
## `>`
把螢幕輸出換成檔案輸出
```
cat < hello.sh > hello2.txt
```
## `>>`
跟 `>` 很像,只是以附加的方式輸出到檔案
## `|`
把前面的輸出接到後面的輸入
# 前景與背景
&
# 範例程式
記得 if 的括號跟裡面的東西之間要用空白隔開
| 12.088235 | 93 | 0.665856 | yue_Hant | 0.975522 |
dcc25010690566989e4e45e1d55250f29b66f252 | 2,427 | md | Markdown | README.md | Silentsky0/po-project-species-rivalry | c754ebad03877f2af122ba5bb42feaf99b57829f | [
"MIT"
] | null | null | null | README.md | Silentsky0/po-project-species-rivalry | c754ebad03877f2af122ba5bb42feaf99b57829f | [
"MIT"
] | null | null | null | README.md | Silentsky0/po-project-species-rivalry | c754ebad03877f2af122ba5bb42feaf99b57829f | [
"MIT"
] | null | null | null | # Species Rivalry
## Project
This project was created for Object Oriented Programming course on the 2nd semester of CS.
The task at hand was to use C++ OOP to create a simulated world with animals and plants, each having different behaviours and abilities,
while following all OOP paradigms.
The world is lively and dynamic, each round animals can fight with each other or reproduce and plants can spread around.
There is a simple system of strength and initiative, determining which entities are first to move and which of them should win when in combat.
The player can move, attack or activate a special ability, which grants him invincibility for 3 rounds.
The game features a journal of all events happening in the world for each round, such as animals attacking each other,
successful reproduction attempts, an animal consuming a special buff or selected special abilities being activated.
## Specification
The world is drawn in the console, with each letter indicating a different organism on the map.
You have the ability to generate a new map, load from a file and save to it later.
Player controls are shown in the main menu.
Animal | Icon | Ability | Plant | Icon | Ability
---|---|---|---|---|---
Wolf | W | no ability | Grass | T | no ability
Sheep | O | no ability | Guarana | G | when eaten, grants +3 to strength
Antelope | A | can move by not one, but two squares | Dandelion | M | has twice as many chances to spread
Turtle | Z | can use his armor to survive 75% of attacks | Wolfberries | J | kill any animal that eats them
Fox | L | 50% chance to escape when being attacked | Pine Borscht | B | kills anything that comes close
## Screenshots
#### Main menu
![image](https://user-images.githubusercontent.com/81694867/162612021-ed73c2d6-138f-4380-a22e-2572e2749025.png)
#### World map
![image](https://user-images.githubusercontent.com/81694867/162612033-fae62de7-27ce-4799-82a4-1ddf72188165.png)
#### Event journal
![image](https://user-images.githubusercontent.com/81694867/162612056-56e28a4e-da5f-49ee-9272-d6c56e0014d1.png)
#### Entity list and statistics
![image](https://user-images.githubusercontent.com/81694867/162612082-6a6831c0-5461-4667-8b0e-93bcfa5ba60d.png)
## Acknowledgements
Used Tabulate library to create nice looking console tables
- https://github.com/p-ranav/tabulate
## License
This project is available under the [MIT](https://choosealicense.com/licenses/mit/) license
| 46.673077 | 142 | 0.76679 | eng_Latn | 0.993372 |
dcc2ae8fa99f3abd3ae254ff7a0e2f41e045eaa3 | 389 | md | Markdown | docs/feedback.md | jayakody/try | 31e5c947d3229d33c3ebf0ab0d00e7fb265650ff | [
"Apache-2.0"
] | 3 | 2019-02-07T15:40:01.000Z | 2021-04-22T07:33:34.000Z | docs/feedback.md | jayakody/try | 31e5c947d3229d33c3ebf0ab0d00e7fb265650ff | [
"Apache-2.0"
] | null | null | null | docs/feedback.md | jayakody/try | 31e5c947d3229d33c3ebf0ab0d00e7fb265650ff | [
"Apache-2.0"
] | 3 | 2019-01-06T17:30:25.000Z | 2020-07-08T10:26:28.000Z | # Feedback
We encourage you to share with your own scripts/examples/workaround with this community. Feel free to send us a pull request to us.
* If you have any feedback or comments write to us: [labs@bigswitch.com](mailto:labs@bigswitch.com)
* You can also report issues or request enhancements on GitHub: [Create a new issue](https://github.com/bigswitch/sample-scripts/issues/new)
| 43.222222 | 140 | 0.77635 | eng_Latn | 0.99752 |
dcc33f29ee78f01938df029fc810a29ad5b65cd4 | 230 | md | Markdown | _posts/2008-05-24-the-uscg-catches-5-would-be.md | MiamiMaritime/miamimaritime.github.io | d087ae8c104ca00d78813b5a974c154dfd9f3630 | [
"MIT"
] | null | null | null | _posts/2008-05-24-the-uscg-catches-5-would-be.md | MiamiMaritime/miamimaritime.github.io | d087ae8c104ca00d78813b5a974c154dfd9f3630 | [
"MIT"
] | null | null | null | _posts/2008-05-24-the-uscg-catches-5-would-be.md | MiamiMaritime/miamimaritime.github.io | d087ae8c104ca00d78813b5a974c154dfd9f3630 | [
"MIT"
] | null | null | null | ---
title: The USCG catches 5 would-be
tags:
- May 2008
---
The USCG catches 5 would-be Cuban smugglers in stolen boats.
Newspapers: **Miami Morning News or The Miami Herald**
Page: **3**, Section: **B**
| 19.166667 | 62 | 0.617391 | eng_Latn | 0.640009 |
dcc36e7bfef93d673fe82ef942de6d8097cfd22b | 675 | md | Markdown | content/posts/day25.md | rblalock/rickblalock.dev | 449150913611294baffaccb983df19c03d4e2458 | [
"MIT"
] | null | null | null | content/posts/day25.md | rblalock/rickblalock.dev | 449150913611294baffaccb983df19c03d4e2458 | [
"MIT"
] | null | null | null | content/posts/day25.md | rblalock/rickblalock.dev | 449150913611294baffaccb983df19c03d4e2458 | [
"MIT"
] | null | null | null | ---
title: Fish Rules Day 25
date: 01-13-2021
published: true
---
## Business
Today was “sales Monday”…on Wednesday. :-). Michael is in town so I moved the theme’d days around. We planned out several things today:
- Government campaigns
- NGO campaigns
- The photo competition when we launch the new app
I had a call with one of our API customers on how to use the regulation API in their app. Should be fun!
I had a call with NOAA on making it easy to manage certain commercial regulations inside of fish.management.
Booked a couple meetings for next week with potential new customers.
## Dev
Minor bug fixes across the web and the commercial app is all for today. | 29.347826 | 136 | 0.757037 | eng_Latn | 0.999497 |