| Column | Type | Min | Max |
|---|---|---|---|
| hexsha | string (length) | 40 | 40 |
| size | int64 | 5 | 1.04M |
| ext | string (6 classes) | | |
| lang | string (1 class) | | |
| max_stars_repo_path | string (length) | 3 | 344 |
| max_stars_repo_name | string (length) | 5 | 125 |
| max_stars_repo_head_hexsha | string (length) | 40 | 78 |
| max_stars_repo_licenses | sequence (length) | 1 | 11 |
| max_stars_count | int64 | 1 | 368k |
| max_stars_repo_stars_event_min_datetime | string (length) | 24 | 24 |
| max_stars_repo_stars_event_max_datetime | string (length) | 24 | 24 |
| max_issues_repo_path | string (length) | 3 | 344 |
| max_issues_repo_name | string (length) | 5 | 125 |
| max_issues_repo_head_hexsha | string (length) | 40 | 78 |
| max_issues_repo_licenses | sequence (length) | 1 | 11 |
| max_issues_count | int64 | 1 | 116k |
| max_issues_repo_issues_event_min_datetime | string (length) | 24 | 24 |
| max_issues_repo_issues_event_max_datetime | string (length) | 24 | 24 |
| max_forks_repo_path | string (length) | 3 | 344 |
| max_forks_repo_name | string (length) | 5 | 125 |
| max_forks_repo_head_hexsha | string (length) | 40 | 78 |
| max_forks_repo_licenses | sequence (length) | 1 | 11 |
| max_forks_count | int64 | 1 | 105k |
| max_forks_repo_forks_event_min_datetime | string (length) | 24 | 24 |
| max_forks_repo_forks_event_max_datetime | string (length) | 24 | 24 |
| content | string (length) | 5 | 1.04M |
| avg_line_length | float64 | 1.14 | 851k |
| max_line_length | int64 | 1 | 1.03M |
| alphanum_fraction | float64 | 0 | 1 |
| lid | string (191 classes) | | |
| lid_prob | float64 | 0.01 | 1 |
f49355c3202e600b7372b82e79f54defcbe9558c
2,827
md
Markdown
wdk-ddi-src/content/d3dkmddi/ns-d3dkmddi-_dxgkarg_signalmonitoredfence.md
vladp72/windows-driver-docs-ddi
1ebee61ed547f2cad203f8bff9a8b9bd1a8ab480
[ "CC-BY-4.0", "MIT" ]
null
null
null
wdk-ddi-src/content/d3dkmddi/ns-d3dkmddi-_dxgkarg_signalmonitoredfence.md
vladp72/windows-driver-docs-ddi
1ebee61ed547f2cad203f8bff9a8b9bd1a8ab480
[ "CC-BY-4.0", "MIT" ]
null
null
null
wdk-ddi-src/content/d3dkmddi/ns-d3dkmddi-_dxgkarg_signalmonitoredfence.md
vladp72/windows-driver-docs-ddi
1ebee61ed547f2cad203f8bff9a8b9bd1a8ab480
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
UID: NS:d3dkmddi._DXGKARG_SIGNALMONITOREDFENCE
title: _DXGKARG_SIGNALMONITOREDFENCE (d3dkmddi.h)
description: Arguments used to add a GPU instruction to signal the paging monitored fence object to the DMA buffer.
ms.assetid: 15e5d633-9227-4ada-a7bc-91d8e1983e02
ms.date: 10/19/2018
ms.topic: struct
f1_keywords:
- "d3dkmddi/_DXGKARG_SIGNALMONITOREDFENCE"
ms.keywords: _DXGKARG_SIGNALMONITOREDFENCE, DXGKARG_SIGNALMONITOREDFENCE, *INOUT_PDXGKARG_SIGNALMONITOREDFENCE
req.header: d3dkmddi.h
req.include-header:
req.target-type:
req.target-min-winverclnt: Windows 10, version 1809
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.lib:
req.dll:
req.ddi-compliance:
req.unicode-ansi:
req.max-support:
req.typenames: DXGKARG_SIGNALMONITOREDFENCE
topic_type:
- apiref
api_type:
- HeaderDef
api_location:
- d3dkmddi.h
api_name:
- _DXGKARG_SIGNALMONITOREDFENCE
product:
- Windows
targetos: Windows
tech.root: display
ms.custom: RS5
---

# _DXGKARG_SIGNALMONITOREDFENCE structure

## -description

Arguments used by the [DXGKDDI_SIGNALMONITOREDFENCE](nc-d3dkmddi-dxgkddi_signalmonitoredfence.md) callback function to add a GPU instruction to signal the paging monitored fence object to the DMA buffer.

## -struct-fields

### -field KernelSubmissionType

Describes what type of kernel submission is being performed. The driver can use this information to choose the right synchronization class and to ensure that the fence write instruction is generated in a format compatible with the buffer type being used.

### -field pDmaBuffer

A pointer to the start of the DMA buffer, which is aligned on 4 KB.

### -field DmaBufferGpuVirtualAddress

A <b>D3DGPU_VIRTUAL_ADDRESS</b> data type that indicates the virtual address where the DMA buffer was paged in. If the physical address is zero, the DMA buffer is not correctly paged in.

### -field DmaSize

The size, in bytes, of the DMA buffer that *pDmaBuffer* points to.

### -field pDmaBufferPrivateData

A pointer to a driver-resident private data structure that is used for generating the DMA buffer that *pDmaBuffer* points to.

### -field DmaBufferPrivateDataSize

The number of bytes that remain in the private data structure that *pDmaBufferPrivateData* points to for the current operation.

### -field MultipassOffset

A value that specifies the progress of the rendering operation.

### -field MonitoredFenceGpuVa

The GPU virtual address of the monitored fence value to be updated.

### -field MonitoredFenceValue

The fence value to write from the DMA buffer being built.

### -field MonitoredFenceCpuVa

The kernel-mode CPU virtual address of the monitored fence value to be updated.

### -field hHwQueue

A handle to the hardware queue that the DMA buffer will be submitted to.

## -remarks

## -see-also
30.074468
256
0.768659
eng_Latn
0.907439
f4937ff7c8afb8e60d2f13a61a312d5b9ebe6909
84
md
Markdown
README.md
KingMarine-GH/marine-package
73cb0ff02ba54d946c3759250a914d34ee3946ce
[ "BSD-3-Clause" ]
null
null
null
README.md
KingMarine-GH/marine-package
73cb0ff02ba54d946c3759250a914d34ee3946ce
[ "BSD-3-Clause" ]
null
null
null
README.md
KingMarine-GH/marine-package
73cb0ff02ba54d946c3759250a914d34ee3946ce
[ "BSD-3-Clause" ]
null
null
null
![npm](https://img.shields.io/npm/v/@kingmarine/marine-package?style=for-the-badge)
42
83
0.75
yue_Hant
0.109982
f493a44e73eb1ed08ad6503ac7af374218c690d1
12
md
Markdown
README.md
dwifirmansyah1/dwi
c359de345ebce79ee3d7e5eb4888db754d9f1ad3
[ "Apache-2.0" ]
null
null
null
README.md
dwifirmansyah1/dwi
c359de345ebce79ee3d7e5eb4888db754d9f1ad3
[ "Apache-2.0" ]
null
null
null
README.md
dwifirmansyah1/dwi
c359de345ebce79ee3d7e5eb4888db754d9f1ad3
[ "Apache-2.0" ]
null
null
null
# dwi

Haiii
4
5
0.666667
pol_Latn
0.957745
f493ecb8a5c2b61e09e05b8c3d273a08e8549e9e
90
md
Markdown
_featured_tags/algorithm-SWEA.md
Dong-wook94/Dong-wook94.github.io
1dec55edd4629f4c2ab048670a5ef6cdb04c9706
[ "MIT" ]
1
2020-09-09T01:27:04.000Z
2020-09-09T01:27:04.000Z
_featured_tags/algorithm-SWEA.md
Dong-wook94/Dong-wook94.github.io
1dec55edd4629f4c2ab048670a5ef6cdb04c9706
[ "MIT" ]
null
null
null
_featured_tags/algorithm-SWEA.md
Dong-wook94/Dong-wook94.github.io
1dec55edd4629f4c2ab048670a5ef6cdb04c9706
[ "MIT" ]
1
2020-09-09T01:27:05.000Z
2020-09-09T01:27:05.000Z
---
layout: tag-blog
title: SWEA
slug: SWEA
category: algorithm
menu: false
order: 3
---
9
19
0.688889
eng_Latn
0.258681
f4957d8a4c48441605f439e43a638e5b58167076
2,952
md
Markdown
docs/Reference_site_and_doc.md
kosslab-kr/Tizen-NN-Framework
132fc98ed57e4b19ad1f4cb258ad79fa9df1db7a
[ "Apache-2.0" ]
8
2018-09-10T01:32:26.000Z
2020-05-13T06:05:40.000Z
docs/Reference_site_and_doc.md
kosslab-kr/Tizen-NN-Framework
132fc98ed57e4b19ad1f4cb258ad79fa9df1db7a
[ "Apache-2.0" ]
28
2018-09-10T05:01:09.000Z
2021-03-04T10:07:12.000Z
docs/Reference_site_and_doc.md
kosslab-kr/Tizen-NN-Framework
132fc98ed57e4b19ad1f4cb258ad79fa9df1db7a
[ "Apache-2.0" ]
4
2018-09-13T04:16:08.000Z
2018-12-03T07:34:44.000Z
* Developer site (https://source.tizen.org)
* Download rpm files and image files (https://download.tizen.org)
  * releases or snapshots folders, images and repos for GBS builds, etc.
  * Tizen 4.0: /snapshots/tizen/4.0-unified
  * Tizen 5.0: /snapshots/tizen/unified
* Project git used by Tizen (https://review.tizen.org)
  * Signing up and configuring ssh are required to clone repositories (https://source.tizen.org/documentation/developer-guide/environment-setup)
  * Reference project list on the Tizen git (check the origin/tizen or origin/upstream branch)
    * nnfw: https://git.tizen.org/cgit/platform/core/ml/nnfw https://review.tizen.org/gerrit/#/admin/projects/platform/core/ml/nnfw
    * tensorflow: https://git.tizen.org/cgit/platform/upstream/tensorflow https://review.tizen.org/gerrit/#/admin/projects/platform/upstream/tensorflow
    * armcl: https://git.tizen.org/cgit/platform/upstream/armcl https://review.tizen.org/gerrit/#/admin/projects/platform/upstream/armcl
  * Run a gbs build, e.g. gbs build --include-all --arch armv7l
  * Installing packages
    * rpm -Uvh xxx.rpm
  * Remounting the Tizen target platform read/write
    * mount -o rw,remount /
* TensorFlow Lite (https://www.tensorflow.org/mobile/tflite/)
  * tflite examples (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/examples)
  * [TensorFlow Lite overview](https://jaehwant.github.io/machinelearning/2018/01/04/9/)
  * [Download pre-trained models](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/g3doc/models.md)
  * TensorFlow Lite Optimizing Converter
    * A converter for using trained TensorFlow models with TensorFlow Lite
    * https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/toco/g3doc/python_api.md
    * [toco github page](http://blog.canapio.com/tag/FlatBuffer)
    * http://gmground.tistory.com/14
* Arm Compute Library (https://developer.arm.com/technologies/compute-library)
  * armCL examples (https://github.com/ARM-software/ComputeLibrary/tree/master/examples)
* Arm NN (https://github.com/ARM-software/armnn)
* About nnfw
  * [Tizen 5.0 Public M1 release note](https://developer.tizen.org/tizen/release-notes/tizen-5.0-public-m1)
    * Released 2018.05.31 (Experimental Release)
    * CPU/GPU acceleration based on ACL
    * Partial compatibility with the Android NN API
    * Partial compatibility with TensorFlow Lite
    * Supports the Inception V3 model
  * From the nnfw README.md
    * A high-performance on-device neural network inference framework
    * Provides high-performance on-device neural network inference that runs a given NN model on processors such as CPU/GPU/NPU on target platforms such as Tizen or Smart Machine Platform (SMP)
    * As an experimental version, it provides only limited functionality; only Inception V3 can be run
    * Backward compatibility with previous versions may not be guaranteed in future releases
  * NNFW Architecture ![nnfw architecture](./fig/nnfw_architecture.png)
  * NNFW Behavior ![nnfw behavior](./fig/nnfw_behavior.png)
* Inception V3
  * An improved version of GoogLeNet (Inception V1), which won first place at ILSVRC (ImageNet Large Scale Visual Recognition Challenge) 2014
  * V2 changed the network to use only 3x3 convolutions, and V3 applies several additional techniques to improve performance
  * Architecture diagram ![Inception V3](https://cloud.google.com/tpu/docs/images/inceptionv3onc--oview.png)
47.612903
127
0.723238
kor_Hang
0.973165
f4958ea4cb8a6b4448e85d231f8ba67b27229d41
883
md
Markdown
CHANGELOG.md
mremi/ContactBundle
4ba6fec3f58633a8978ef6d1325bb9c48dfc92de
[ "MIT" ]
25
2015-01-15T16:57:08.000Z
2021-01-29T23:33:18.000Z
CHANGELOG.md
mremi/ContactBundle
4ba6fec3f58633a8978ef6d1325bb9c48dfc92de
[ "MIT" ]
34
2015-01-04T20:10:05.000Z
2019-11-20T14:34:37.000Z
CHANGELOG.md
mremi/ContactBundle
4ba6fec3f58633a8978ef6d1325bb9c48dfc92de
[ "MIT" ]
29
2015-01-23T19:07:49.000Z
2018-07-12T16:45:39.000Z
CHANGELOG
=========

### master (2018-07-02)

* [BC BREAK] add Symfony 3 support
* Travis CI : remove hhvm support
* Travis CI : support php 7.2

### master (2015-10-02)

* [BC BREAK] The Doctrine manager registry is now injected in the ``ContactManager`` to fix Symfony 3.0 compatibility.

### master (2015-09-25)

* The fields are now optional for Doctrine, so you can remove any fields you don't want to keep and store in the database.

### master (2015-09-15)

* [BC BREAK] Removed ``mremi/bootstrap-bundle`` dependency.

### master (2015-01-28)

* Allow multiple To for Swift mailer
* [BC BREAK] Removed ``mremi_contact.email.recipient_adress`` config, replaced by ``mremi_contact.email.to``.

### master (2015-01-05)

* Allow any captcha form field type.
* [BC BREAK] Removed Genemu dependency.
* [BC BREAK] Removed ``mremi_contact.form.captcha_disabled`` configuration option.
27.59375
120
0.713477
eng_Latn
0.846676
f4963f08dfe66a250fe9cbcfb6f85f01364387b5
284
md
Markdown
Technology/DevOps/Security/pscp/README.md
MislavJaksic/Knowledge-Repository
3bab8b00d79a776725bcce88b5a1b66a24ecea23
[ "MIT" ]
3
2019-07-09T09:46:58.000Z
2020-12-10T12:46:12.000Z
Technology/DevOps/Security/pscp/README.md
MislavJaksic/Knowledge-Repository
3bab8b00d79a776725bcce88b5a1b66a24ecea23
[ "MIT" ]
null
null
null
Technology/DevOps/Security/pscp/README.md
MislavJaksic/Knowledge-Repository
3bab8b00d79a776725bcce88b5a1b66a24ecea23
[ "MIT" ]
2
2019-11-14T23:07:10.000Z
2019-11-30T19:12:34.000Z
## pscp

A way to transfer data between a remote server and a Windows machine.

Source: https://www.ssh.com/ssh/putty/putty-manuals/0.68/Chapter5.html

### Commands

From a local machine to a remote one:
```
$: pscp C:\path\to\your\file\File-Name.zip _username@_hostname:/_user
```
21.846154
70
0.721831
eng_Latn
0.424563
f4966b60a2528d9bc436f4c540a073743c3a311b
6,760
md
Markdown
articles/store-sendgrid-nodejs-how-to-send-email.md
ChangJoo-Park/azure-docs.ko-kr
eda2616bf019f7e3da6b5664d6de219a5fb8c8df
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/store-sendgrid-nodejs-how-to-send-email.md
ChangJoo-Park/azure-docs.ko-kr
eda2616bf019f7e3da6b5664d6de219a5fb8c8df
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/store-sendgrid-nodejs-how-to-send-email.md
ChangJoo-Park/azure-docs.ko-kr
eda2616bf019f7e3da6b5664d6de219a5fb8c8df
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: SendGrid 메일 서비스를 사용하는 방법(Node.js) | Microsoft Docs description: Azure에서 SendGrid 메일 서비스를 사용하여 메일을 보내는 방법을 알아봅니다. 코드 샘플은 Node.js API를 사용하여 작성되었습니다. services: '' documentationcenter: nodejs author: erikre manager: wpickett editor: '' ms.assetid: cac444b4-26b0-45ea-9c3d-eca28d57dacb ms.service: multiple ms.workload: na ms.tgt_pltfrm: na ms.devlang: nodejs ms.topic: article ms.date: 01/05/2016 ms.author: erikre ms.openlocfilehash: f2d653441598a47986913d525057672eed24b435 ms.sourcegitcommit: d4dfbc34a1f03488e1b7bc5e711a11b72c717ada ms.translationtype: MT ms.contentlocale: ko-KR ms.lasthandoff: 06/13/2019 ms.locfileid: "60931720" --- # <a name="how-to-send-email-using-sendgrid-from-nodejs"></a>Node.js에서 SendGrid를 사용하여 메일을 보내는 방법 이 가이드에서는 Azure에서 SendGrid 전자 메일 서비스로 일반 프로그래밍 작업을 수행하는 방법을 보여 줍니다. 샘플은 Node.js API를 사용하여 작성되었습니다. **전자 메일 생성**, **전자 메일 보내기**, **첨부 파일 추가**, **필터 사용**, **속성 업데이트** 등의 시나리오를 다룹니다. SendGrid 및 전자 메일 보내기에 대한 자세한 내용은 [다음 단계](#next-steps) 섹션을 참조하세요. ## <a name="what-is-the-sendgrid-email-service"></a>SendGrid 전자 메일 서비스 정의 SendGrid는 사용자 지정 통합을 쉽게 만드는 유연한 API와 함께 신뢰할 만한 [트랜잭션 전자 메일 발송], 확장성 및 실시간 분석을 제공하는 [클라우드 기반 전자 메일 서비스]입니다. 일반적인 SendGrid 사용 시나리오는 다음과 같습니다. * 고객에게 확인 메일 자동으로 보내기 * 월간 전자 전단 및 판촉 행사를 고객에게 보내기 위한 분산 목록 관리 * 차단된 전자 메일, 고객 응답 같은 항목의 실시간 메트릭 수집 * 경향을 식별하는 데 도움이 되도록 보고서 생성 * 고객 문의 전달 * 애플리케이션의 전자 메일 알림 자세한 내용은 [https://sendgrid.com](https://sendgrid.com)을 참조하세요. ## <a name="create-a-sendgrid-account"></a>SendGrid 계정 만들기 [!INCLUDE [sendgrid-sign-up](../includes/sendgrid-sign-up.md)] ## <a name="reference-the-sendgrid-nodejs-module"></a>SendGrid Node.js 모듈 참조 Node.js용 SendGrid 모듈은 다음 명령을 사용하여 NPM(Node Package Manager)을 통해 설치할 수 있습니다. ```bash npm install sendgrid ``` 설치 후에는 다음 코드를 사용하여 애플리케이션에서 이 모듈을 요청할 수 있습니다. ```javascript var sendgrid = require('sendgrid')(sendgrid_username, sendgrid_password); ``` SendGrid 모듈은 **SendGrid** 및 **Email** 함수를 내보냅니다. **SendGrid**는 Web API를 통해 전자 메일을 보내는 역할을 맡고, **Email**은 전자 메일 메시지를 캡슐화합니다. ## <a name="how-to-create-an-email"></a>방법: 이메일 만들기 SendGrid 모듈을 사용하여 전자 메일 메시지를 만들려면 먼저 Email 함수로 전자 메일 메시지를 만든 후, SendGrid 함수를 사용하여 이 메시지를 보내면 됩니다. 다음 예제는 Email 함수를 사용하여 새 메시지를 만드는 과정을 보여 줍니다. ```javascript var email = new sendgrid.Email({ to: 'john@contoso.com', from: 'anna@contoso.com', subject: 'test mail', text: 'This is a sample email message.' }); ``` 또한 html 속성을 설정하여 HTML 메시지를 지원하는 클라이언트를 위해 HTML 메시지를 지정할 수도 있습니다. 예를 들면 다음과 같습니다. ```javascript html: This is a sample <b>HTML<b> email message. ``` 텍스트 속성과 html 속성을 모두 설정하면 HTML 메시지를 지원할 수 없는 클라이언트에서 텍스트 콘텐츠로 안정적으로 대체됩니다. Email 함수에서 지원하는 모든 속성에 대한 자세한 내용은 [sendgrid-nodejs][sendgrid-nodejs]를 참조하세요. ## <a name="how-to-send-an-email"></a>방법: 이메일 보내기 Email 함수를 사용하여 전자 메일 메시지를 만든 후에는 SendGrid에서 제공하는 Web API를 사용하여 해당 메시지를 보낼 수 있습니다. ### <a name="web-api"></a>Web API ```javascript sendgrid.send(email, function(err, json){ if(err) { return console.error(err); } console.log(json); }); ``` > [!NOTE] > 위 예제에서는 email 개체 및 콜백 함수를 전달하고 있지만, email 속성을 직접 지정하여 send 함수를 바로 호출할 수도 있습니다. 예를 들면 다음과 같습니다. > > ```javascript > sendgrid.send({ > to: 'john@contoso.com', > from: 'anna@contoso.com', > subject: 'test mail', > text: 'This is a sample email message.' > }); > ``` > ## <a name="how-to-add-an-attachment"></a>방법: 첨부 파일 추가 **files** 속성에서 파일 이름 및 경로를 지정하여 메시지에 첨부 파일을 추가할 수 있습니다. 다음 예제에서는 첨부 파일을 보내는 방법을 보여 줍니다. 
```javascript sendgrid.send({ to: 'john@contoso.com', from: 'anna@contoso.com', subject: 'test mail', text: 'This is a sample email message.', files: [ { filename: '', // required only if file.content is used. contentType: '', // optional cid: '', // optional, used to specify cid for inline content path: '', // url: '', // == One of these three options is required content: ('' | Buffer) // } ], }); ``` > [!NOTE] > **files** 속성을 사용할 경우, 해당 파일은 [fs.readFile](https://nodejs.org/docs/v0.6.7/api/fs.html#fs.readFile)을 통해 액세스할 수 있어야 합니다. 첨부하려는 파일이 Blob 컨테이너 등의 Azure Storage에서 호스트되는 경우, 먼저 파일을 로컬 스토리지 또는 Azure 드라이브에 복사해야 **files** 속성을 사용하여 해당 파일을 첨부 파일로 보낼 수 있습니다. > > ## <a name="how-to-use-filters-to-enable-footers-and-tracking"></a>방법: 필터를 사용 하도록 설정 바닥글 및 추적 사용 SendGrid는 필터 사용을 통해 추가 전자 메일 기능을 제공합니다. 클릭 추적, Google 분석, 구독 추적 등을 사용하도록 설정하는 것과 같이 특정 기능을 사용하도록 설정하기 위해 전자 메일 메시지에 추가할 수 있는 설정입니다. 전체 필터 목록은 [필터 설정][Filter Settings](영문)을 참조하십시오. 필터는 **filters** 속성을 사용하여 메시지에 적용할 수 있습니다. 각 필터는 필터별 설정을 포함하는 해시에 의해 지정됩니다. 다음 예에서는 바닥글 및 클릭 추적 필터를 보여 줍니다. ### <a name="footer"></a>바닥글 ```javascript var email = new sendgrid.Email({ to: 'john@contoso.com', from: 'anna@contoso.com', subject: 'test mail', text: 'This is a sample email message.' }); email.setFilters({ 'footer': { 'settings': { 'enable': 1, 'text/plain': 'This is a text footer.' } } }); sendgrid.send(email); ``` ### <a name="click-tracking"></a>클릭 추적 ```javascript var email = new sendgrid.Email({ to: 'john@contoso.com', from: 'anna@contoso.com', subject: 'test mail', text: 'This is a sample email message.' }); email.setFilters({ 'clicktrack': { 'settings': { 'enable': 1 } } }); sendgrid.send(email); ``` ## <a name="how-to-update-email-properties"></a>방법: 전자 메일 속성 업데이트 일부 메일 속성을 **setProperty**를 사용하여 덮어쓰거나 **addProperty**를 사용하여 추가할 수 있습니다. 예를 들어 다음을 사용하여 받는 사람을 더 추가할 수 있습니다. ```javascript email.addTo('jeff@contoso.com'); ``` 또는 다음을 사용하여 필터를 설정할 수 있습니다. ```javascript email.addFilter('footer', 'enable', 1); email.addFilter('footer', 'text/html', '<strong>boo</strong>'); ``` 자세한 내용은 [sendgrid-nodejs][sendgrid-nodejs]를 참조하세요. ## <a name="how-to-use-additional-sendgrid-services"></a>방법: 추가 SendGrid 서비스 사용 SendGrid는 Azure 애플리케이션에서 추가 SendGrid 기능을 활용하는 데 사용할 수 있는 웹 기반 API를 제공합니다. 자세한 내용은 [SendGrid API 설명서][SendGrid API documentation](영문)를 참조하십시오. ## <a name="next-steps"></a>다음 단계 SendGrid 전자 메일 서비스에 관한 기본적인 사항들을 익혔으며 자세한 내용을 보려면 다음 링크를 따라가십시오. * SendGrid Node.js 모듈 리포지토리: [sendgrid-nodejs][sendgrid-nodejs] * SendGrid API 설명서: <https://sendgrid.com/docs> * Azure 고객을 위한 SendGrid 특별 제공: [http://sendgrid.com/azure.html](https://sendgrid.com/windowsazure.html) [special offer]: https://sendgrid.com/windowsazure.html [sendgrid-nodejs]: https://github.com/sendgrid/sendgrid-nodejs [Filter Settings]: https://sendgrid.com/docs/API_Reference/SMTP_API/apps.html [SendGrid API documentation]: https://sendgrid.com/docs [클라우드 기반 전자 메일 서비스]: https://sendgrid.com/email-solutions [트랜잭션 전자 메일 발송]: https://sendgrid.com/transactional-email
30.178571
248
0.664497
kor_Hang
0.99998
f496a1966cc2bfa07912f036010ff1a47e2f90c1
1,513
md
Markdown
README.md
AndresCampuzano/Node-and-Express-Movie-API
181a46d1eb9a6050e4a10c36f6ea0189b73d7ccc
[ "MIT" ]
null
null
null
README.md
AndresCampuzano/Node-and-Express-Movie-API
181a46d1eb9a6050e4a10c36f6ea0189b73d7ccc
[ "MIT" ]
null
null
null
README.md
AndresCampuzano/Node-and-Express-Movie-API
181a46d1eb9a6050e4a10c36f6ea0189b73d7ccc
[ "MIT" ]
null
null
null
# Node and Express Movie API

CRUD REST API for movies; results can be sorted by genre or by the id of each movie. Using **Joi** for validations.

## DOCUMENTATION

### GET

##### Getting all genres and movies
```
http://localhost:9000/api/movies
```

##### Getting a genre
```
http://localhost:9000/api/movies/drama
```

##### Getting a movie inside a genre
```
http://localhost:9000/api/movies/drama/5
```

### DELETE

We can delete an entire genre or a movie inside a genre. The _id_ of each movie is required.

### Update

##### Updating a genre
```
http://localhost:9000/api/movies/drama
```
then
```
{ "genre": "new genre name here (drama02)" }
```

##### Updating a movie inside a genre
```
http://localhost:9000/api/movies/drama/5
```
then
```
{ "name": "new movie name here (Harry Potter)" }
```

### CREATE

##### Creating a genre
```
http://localhost:9000/api/movies
```
then
```
{ "genre": "Adventure" }
```

##### Creating a movie inside a genre
```
http://localhost:9000/api/movies/drama
```
then
```
{ "name": "Another Harry Potter movie" }
```

👉 Since this is my first Node project, any feedback will be appreciated.

**Dependencies**

- cors
- joi
- express

## Contact me

Reach out to me at one of the following places!

- Website at <a href="https://andrescampuzano.com" target="_blank">`andrescampuzano.com`</a>
- Twitter at <a href="http://twitter.com/andrescampuzan0" target="_blank">`@AndresCampuzan0`</a>
- <a href='mailto:hello@andrescampuzano.com'>Email</a>
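The README mentions Joi for validation but does not show a schema. Below is a minimal sketch of what such validation could look like for the request bodies shown above; the schema names and rules are illustrative assumptions, not taken from the project source.

```js
// Illustrative sketch only: the project's real schemas are not shown in the README.
const Joi = require('joi');

// Assumed shape for "create a genre" requests, e.g. { "genre": "Adventure" }.
const genreSchema = Joi.object({
    genre: Joi.string().min(3).required()
});

// Assumed shape for "create a movie inside a genre" requests, e.g. { "name": "..." }.
const movieSchema = Joi.object({
    name: Joi.string().min(1).required()
});

// Validate an incoming body before handling the request.
const { error } = genreSchema.validate({ genre: 'Adventure' });
if (error) {
    console.error(error.details[0].message);
}
```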
14.140187
98
0.647059
eng_Latn
0.679366
f496deba64be6c1964f9666753cf836606143f11
6,241
md
Markdown
README.md
GriffinG1/PKHeX-Plugins
0567847334571c44479461c18bf5c55c6f4f7107
[ "MIT" ]
null
null
null
README.md
GriffinG1/PKHeX-Plugins
0567847334571c44479461c18bf5c55c6f4f7107
[ "MIT" ]
null
null
null
README.md
GriffinG1/PKHeX-Plugins
0567847334571c44479461c18bf5c55c6f4f7107
[ "MIT" ]
null
null
null
# About This project uses `PKHeX.Core` and PKHeX's `IPlugin` interface to add enhancements to the PKHeX program, namely **Auto**mated **Mod**ifications to simplify creation of legal Pokémon. Please refer to the [Wiki](https://github.com/architdate/PKHeX-Plugins/wiki) for more information regarding the functionalities provided by this project. This project is owned by [@architdate](https://github.com/architdate) (Discord: thecommondude#8240) and [@kwsch](https://github.com/kwsch) (Discord: Kurt#6024). [Feature Demonstration Video](https://www.youtube.com/watch?v=pKuElH0hWWA) by AAron#2420. ## Building This project requires an IDE that supports compiling .NET based code (Ideally .NET 4.6+). Recommended IDE is Visual Studio 2019. **Regular Builds** Regular builds will usually succeed unless there are changes that are incompatible with the NuGet [PKHeX.Core](https://www.nuget.org/packages/PKHeX.Core) package dependency specified in the `.csproj` files of the projects. If building fails, use the bleeding edge method instead. - Clone the PKHeX-Plugins repository using: `$ git clone https://github.com/architdate/PKHeX-Plugins.git`. - Right-click on the solution and click `Rebuild All`. - These DLLs should be placed into a `plugins` directory where the PKHeX executable is. You may also combine these DLL files using ILMerge. - The compiled DLLs for AutoLegality will be in the `AutoLegalityMod/bin/Release/net46` directory: * AutoModPlugins.dll * LibUsbDotNet.LibUsbDotNet.dll * NtrSharp.dll * PKHeX.Core.AutoMod.dll * PKHeX.Core.Enhancements.dll * PKHeX.Core.Injection.dll - If you want to use QRPlugins, you will need additional DLLs from `QRPlugins/bin/Release/net46`: * BouncyCastle.CryptoExt.dll * QRCoder.dll * QRPlugins.dll * zxing.dll * zxing.presentation.dll **Bleeding Edge Builds** Use this build method only if the regular builds fail. The AppVeyor CI will always use the bleeding edge build method. More details regarding this can be seen in the [appveyor.yml](https://github.com/architdate/PKHeX-Plugins/blob/master/appveyor.yml) file. - Clone the PKHeX repository using: `$ git clone https://github.com/kwsch/PKHeX.git`. - Clone the PKHeX-Plugins repository using: `$ git clone https://github.com/architdate/PKHeX-Plugins.git`. - Open the PKHeX solution, change your environment to `Release`, right-click on the `PKHeX.Core` project, and click `Rebuild` to build the project. - Open the PKHeX-Plugins solution and right-click to `Restore NuGet Packages`. - Next, replace the most recent NuGet packages with the newly-built `PKHeX.Core.dll` files. - Copy the `PKHeX.Core.dll` file located in `PKHeX.Core/bin/Release/net46` the following folders: * `C:/Users/%USERNAME%/.nuget/packages/pkhex.core/YY.MM.DD/lib/net46` - Copy the `PKHeX.Core.dll` file located in `PKHeX.Core/bin/Release/netstandard2.0` to the following folders: * `C:/Users/%USERNAME%/.nuget/packages/pkhex.core/YY.MM.DD/lib/netstandard2.0` - Right click the PKHeX-Plugins solution and `Rebuild All`. This should build the mod with the latest `PKHeX.Core` version so that it can be used with the latest commit of PKHeX. - The compiled DLLs will be in the same location as with the regular builds. ## Usage To use the plugins: - Create a folder named `plugins` in the same directory as PKHeX.exe. - Put the compiled plugins from this project in the `plugins` folder. - Start PKHeX.exe. - The plugins should be available for use in `Tools > Auto Legality Mod` drop-down menu. ## Support Server Come join the dedicated Discord server for this mod! 
Ask questions, give suggestions, get help, or just hang out. Don't be shy, we don't bite: [<img src="https://canary.discordapp.com/api/guilds/401014193211441153/widget.png?style=banner2">](https://discord.gg/tDMvSRv) ## Contributing To contribute to the repository, you can submit a pull request to the repository. Try to follow a format similar to the current codebase. All contributions are greatly appreciated! If you would like to discuss possible contributions without using GitHub, please contact us on the support server above. ## Credits **Repository Owners** - [architdate (thecommondude)](https://github.com/architdate) - [kwsch (Kaphotics)](https://github.com/kwsch) **Credit must be given where due...** This project would not be as amazing without the help of the following people who have helped me since the original [Auto-Legality-Mod](https://github.com/architdate/PKHeX-Auto-Legality-Mod). - [@kwsch](https://github.com/kwsch) for providing the IPlugin interface in PKHeX, which allows loading of this project's Plugin DLL files. Also for the support provided in the support server. - [@berichan](https://github.com/berichan) for adding USB-Botbase support to LiveHeX. - [@soopercool101](https://github.com/soopercool101) for many improvements to Smogon StrategyDex imports and various other fixes. - [@Lusamine](https://github.com/Lusamine) for all the help with stress testing the code with wacky sets! - TORNADO for help with test cases. - [@Rino6357](https://github.com/Rino6357) and [@crzyc](https://github.com/crzyc) for initial help with the Wiki. - [@hp3721](https://github.com/hp3721) for help with adding localization based on PKHeX's implementation. - [@Bappsack](https://github.com/Bappsack) for his help on Discord in voice chats! - [@chenzw95](https://github.com/chenzw95) for help with integration. - [@BernardoGiordano](https://github.com/BernardoGiordano) for many ideas on improving speed. - [@olliz0r](https://github.com/olliz0r) for developing and maintaining `sys-botbase` as well which is necessary for LiveHeX to work and for [LedyLib](https://github.com/olliz0r/Ledybot/tree/master/LedyLib) from which a lot of the NTR processing code is liberally referenced. - [@fishguy6564](https://github.com/fishguy6564) for creating `USB-Botbase` (by extending sys-botbase). - [FlatIcon](https://www.flaticon.com/) for their icons. Author credits (Those Icons, Pixel perfect). - [Project Pokémon](https://github.com/projectpokemon/) for their Mystery Gift Event Gallery. - And all the countless users who have helped improve this project with ideas and suggestions!
74.297619
336
0.760615
eng_Latn
0.939868
f496dff0d18ce1651d15dfbf418e5f0faffc8b1a
25,752
md
Markdown
articles/commerce/Apply-multiple-retail-discounts.md
ddlarsc/dynamics-365-unified-operations-public
fc753a70d9bd5503de0c494e90d7ee2485d1828f
[ "CC-BY-4.0", "MIT" ]
1
2021-03-30T14:19:33.000Z
2021-03-30T14:19:33.000Z
articles/commerce/Apply-multiple-retail-discounts.md
ddlarsc/dynamics-365-unified-operations-public
fc753a70d9bd5503de0c494e90d7ee2485d1828f
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/commerce/Apply-multiple-retail-discounts.md
ddlarsc/dynamics-365-unified-operations-public
fc753a70d9bd5503de0c494e90d7ee2485d1828f
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- # required metadata title: Apply multiple Retail discounts to a product description: This topic reviews all the factors considered when multiple discounts can be applied to a product. author: shajain manager: AnnBe ms.date: 12/31/2018 ms.topic: article ms.prod: ms.service: dynamics-365-retail ms.technology: # optional metadata ms.search.form: # ROBOTS: audience: IT Pro # ms.devlang: ms.reviewer: josaw # ms.tgt_pltfrm: ms.custom: 16181 ms.assetid: b1b57734-1406-4ed6-8e28-21c705ee17e2 ms.search.region: global ms.search.industry: Retail ms.author: shajain ms.search.validFrom: 2018-10-23 ms.dyn365.ops.version: AX 8.1.0, Retail October 2018 update --- # Apply multiple Retail discounts to a product [!include [banner](includes/banner.md)] ## Overview This topic reviews all the factors considered when multiple discounts can be applied to a product. In this scenario, the commerce pricing engine applies as many discounts as it can, to maximize the total discount amount on a product. Multiple options affect the order in which the discounts are applied. Throughout this topic it's noted when a setting affects the order of discount application and exclusivity of a discount. The following settings affect how multiple discounts, applicable on a product, are processed. - **Discount concurrency control model** - **Pricing Priority** - **Discount type** (**Simple**, **Mix and Match**, **Quantity**, and **Threshold**) - **Discount concurrency mode** (**Exclusive**, **Best price**, and **Compound**) - **Multiple occurrences mode**, when it's set to **Favor retailer** (for mix-and-match least-expensive discounts only) The **Discount concurrency control model** is described below in detail; however, the rest of the properties are covered in [Retail discounts](retail-discounts-overview.md). ## Discount concurrency control model The discount concurrency control model changes when and how multiple discounts are applied to products in a transaction. The **Best price and compound concurrency control model** option on the **Discounts** tab on the **Commerce parameters** page is different from the **Discount concurrency mode** property on each discount. In earlier versions of the app, there was only one way to apply multiple discounts based on the **discount type**, **discount concurrency mode**, and **pricing priority** (if used) properties of discounts. Now, the discount concurrency control model setting affects how all discounts compete and compound together. ### Background on why this change was made In previous versions of the app, you could directly customize the price engine by overlaying their custom business logic in the price engine. With the transition to an online service and to improve overall application lifecycle management, the Dynamics 365 application has been sealed and overlaying customizations are no longer allowed. New extensibility points have been added to enable the same types of customizations that were the most common. Most discount customizations are in one of the following categories. - **Minor changes to existing discounts:** For example, moving the start date and end date from the discount header to the discount lines. - **New discount types:** In some cases, companies need to introduce a new type of discount. For example, capping the total discount amount for a simple discount. 
- **Changing the when and how (the flow) of multiple discounts being applied:** For example, having all mix and match discounts applied on top of quantity or simple discounts while still having quantity and simple discounts compete for best price or having store-specific and customer-specific discounts compete for best price and then compound the winning discount with loyalty program discounts. The first two types of customizations are handled by providing a new extensibility model within the price engine that enables these scenarios. However, to address the third type of customization we expanded the functional capabilities of the system by introducing this setting. A discount's concurrency mode and pricing priority already gave the user significant flexibility over the order of discount application. By introducing a new configuration setting that affects how a discount's concurrency mode and pricing priority interact, all discount ordering customization is covered, which results in the concurrency model option. ### Best price and compound within priority, never compound across priorities This is the default and is the legacy way in which multiple discounts are processed. When this option is selected, all compound discounts within the same pricing priority are combined, and the combined result competes with any best price discounts in the same pricing priority. After a discount is applied to a product, all discounts at lower pricing priorities are ignored. ### Best price only within priority, always compound across priority This is the new way multiple discounts can be processed. When this option is selected, discounts with **Discount concurrency mode** set to **Best price** and **Compound** are all treated as "best price" within a single pricing priority. When applied, the best price discount within a priority, is compounded with the best price and compound discounts at lower pricing priorities. In this concurrency control model, only a single discount can be applied to a product per pricing priority, and if that single discount is a best price or compound discount, then it will compound with all additional best price or compound discounts at lower pricing priorities. ### Examples The following examples show how the pricing engine processes a pool of discounts for different concurrency control models. #### Example 1 In the first scenario, **Best price and compound within priority, never compound across priorities** is selected as the discount concurrency control model. There are two pricing priorities, and for each pricing priority, there is one discount of each discount type, for example **Simple**, **Mix and Match**, **Quantity**, and **Threshold**. Let's assume there are discounts at two priorities 5 and 10 and all products have multiple discounts at both these priorities. The pool of possible pricing priorities is determined by the price groups and discounts that can be applied to the product. 1. Given that a discount concurrency control model is selected, for each product, the pricing engine next considers the highest pricing priority of the discounts that are applicable on the product. Thus, the pricing engine evaluates and applies the simple, quantity, and mix-and-match discounts with priority 10. > [!NOTE] > Threshold discounts are not evaluated yet because, as indicated by their name, they will be evaluated against the transaction amount, after all the other discounts have been applied. 
The following image shows a concise view of how the pricing algorithm loops through the discounts across various priorities. Note that this diagram applies for both the discount concurrency control models, but the difference is in the way in which the pricing algorithm treats discounts at different priorities. This difference is elaborated using the following example. ![Simplified pricing logic](./media/Simplified-pricing-logic.png "Simplified pricing logic") 2. Within priority 10, the pricing engine first considers the discounts that have the concurrency mode set to **Exclusive**. If there is more than one exclusive discount applicable to the product, then the best exclusive discount is applied. When a product gets an exclusive discount, no other discounts can be applied to this product at any priority. > [!NOTE] > Mix-and-match, least-expensive discounts that have the **Multiple occurrences mode** property set to **Favor retailer** are skipped in this step. After all the **Exclusive** discounts (**Simple**, **Quantity**, **Mix and Match**) at pricing priority 10 have been applied, then the exclusive mix-and-match **Favor retailer** discounts, at pricing priority 10, are applied to any undiscounted products. 3. Within priority 10, the pricing engine then considers the discounts that have the discount concurrency mode set to **Best price** and **Compound**. If multiple **Compound** discounts apply to a product, then they are compounded, and the resulting total discount amount competes against the other **Best price** discounts. Either one of the **Best price** discounts or the combination of **Compound** discounts gets applied to the product, depending on which discount gives the most benefit to the customer. Like the previous step, mix-and-match least-expensive discounts that have the **Multiple occurrences mode** property set to **Favor retailer** are skipped in this step. Once all the **Best price** and **Compound** discounts at pricing priority 10 have been applied, then **Best price** and **Compound** mix-and-match **Favor retailer** discounts are evaluated against each other and gets applied. A **Best price** discount applies only to undiscounted products, but a **Compound** discount applies to undiscounted products and products that are discounted with another **Compound** discounts at the same pricing priority. 4. Because the discount concurrency control mode is set to **Best price and compound within priority, never compound across priorities** so the simple, quantity, and mix-and-match discounts applicable to the product, at pricing priority 5 do not compete with the applied discounts. At this point, for a product, all the simple, quantity, and mix-and-match discounts at the highest priority have been evaluated. 5. Next, within priority 10, the pricing engine evaluates threshold discounts that have the concurrency mode set to **Exclusive**. An **Exclusive** threshold discount can't be applied to a product that already has a discount applied, so a threshold amount is applied and evaluated only on the undiscounted products. If more than one of these discounts apply to the transaction, the discounts compete, and the largest discount is applied. 6. Next, within priority 10, the pricing engine evaluates threshold discounts that have the concurrency mode set to **Best price** and **Compound**. The pricing engine evaluates and applies threshold discounts at pricing priority 10. 
**Compound** threshold discounts are compounded with other **Compound** threshold discounts and compete against the other **Best price** discounts within the same pricing priority. A **Best price** threshold discount applies only to undiscounted products, but a **Compound** threshold discount applies to undiscounted products and products that are discounted with another **Compound** (**Simple**, **Quantity**, and **Mix and Match**) discount. > [!NOTE] > If there were threshold discounts set at a higher priority, for example if it's set to 11, and all the other discount types were at priority 10 and 5, then the threshold discounts would have evaluated at priority 11 and then compounded with the simple, quantity, and mix and match discounts at priority 10. This is important because simple, quantity, and mix and match discounts are evaluated within their highest priority and the threshold discounts are evaluated within their highest priority and then compounded. Any threshold discounts at the lower priority are ignored. At this point, all the discounts at the highest priorities have been evaluated. The above logic is showcased in the following image, which shows the detailed view of how the pricing algorithm loops through the discounts across various priorities. Note that this diagram applies for both the discount concurrency control models, but the difference is in the way in which the pricing algorithm treats discounts at different priorities. ![Detailed pricing logic](./media/Detailed-pricing-logic.png "Detailed pricing logic") In this example, let's assume the following setup. **Product information** | Product \# | Product price | |---|---| | Prod1 | $10 | | Prod2 | $20 | | Prod3 | $10 | **Discount setup** | Discount \# | Discount concurrency | Priority | Discount amount | Discount type | Applicable on products | |---|---|---|---|---|---| | BP1 | Best price | 10 | 15% | Simple, Quantity, or Mix and Match | Prod1, Prod2 | | BP2 | Best price | 5 | 20% | Simple, Quantity, or Mix and Match | All products | | C1 | Compound | 10 | $1 | Simple, Quantity, or Mix and Match | Prod1, Prod2 | | C2 | Compound | 10 | 10% | Simple, Quantity, or Mix and Match | Prod1, Prod2 | | C3 | Compound | 5 | 25% | Simple, Quantity, or Mix and Match | All products | | C4 | Compound | 5 | 10% | Threshold only | All products | **Step 1:** For each product, determine the highest priority where a Simple, Quantity or Mix and Match discount exists. In this case, for prod1 it is priority 10, for prod2 it is priority 10 and for prod3 it is priority 5. **Step 2:** For each product, find **Simple**, **Quantity**, or **Mix and Match** discounts, with discount concurrency as **Exclusive**, at the highest priority applicable to individual products. In this case, there are none for prod1 and prod2 at priority 10 and similarly, there are none for prod3 at priority 5. **Step 3:** For each product, evaluate **Simple**, **Quantity**, or **Mix and Match** discounts, with discount concurrency as **Best price** and **Compound**, at the highest priority applicable to individual products. The following table illustrates this. > [!NOTE] > Two asterisks (\*\*) indicate the discount that gets applied to a product. 
| Transaction quantity | Product | Price | Priority 10 (C1 + C2) | Priority 10 (BP1) | Priority 5 (C3) | Priority 5 (BP2) | Total | Explanation | |---|---|---|---|---|---|---|---|---| | 1 | Prod1 | $10 | $1.90\*\* | $1.50 | (NA) | (NA) | $10 – 1.90 = $8.1 | Because the combination of compound discounts is more than the best price discount, C1 and C2 are applied on the product. The discounts at lower priority, such as 5, are ignored. | | 1 | Prod2 | $20 | $2.90 | $3\*\* | (NA) | (NA) | $20 – 3 = $17.00 | Because the best price discount is more than the combination of compound discounts, BP1 is applied on the product. The discounts at lower priority, such as 5, are ignored. | | 1 | Prod3 | $10 | | | $2.50\*\* | $2.0 | $10 – 2.50 = $7.5 | Priority 5 is highest applicable priority for this product. The compound discount is more than the best price discount, so C3 is applied on the product. | **Step 4:** Evaluate **Threshold** discounts applicable to the individual products at the highest priority. For this example, it is priority 5 for all the products. | Transaction quantity | Product | Discount applied | Discounted price | Priority 5 (C4) | Amount due | Explanation | |---|---|---|---|---|---|---| | 1 | Prod1 | C1, C2 | $8.1 | $0.81\*\* | 8.1 – 0.81 = $7.29 | For Threshold discounts, Priority 5 is the highest applicable priority for this product, so any applicable threshold discounts at priority 5 will be compounded with the applied discounts, if the applied discounts are of discount concurrency mode **Compound**. Because Prod1 has compound discounts only, the compound threshold discounts can be compounded. | | 1 | Prod2 | BP1 | $17 | (NA) | $17 | For Threshold discounts, Priority 5 is the highest applicable priority for this product, so any applicable threshold discounts at priority 5 will be compounded with the applied discounts if the applied discounts are of discount concurrency mode **Compound**. Because Prod2 has a "Best price" discount, other discounts CANNOT be applied to this product.| | 1 | Prod3 | C3 | $7.5 | $0.75\*\* | 7.5 – 0.75 = $6.75 | For Threshold discounts, Priority 5 is highest applicable priority for this product, so any applicable threshold discounts at priority 5 will be compounded with the applied discounts, if the applied discounts are of discount concurrency mode **Compound**. Because Prod3 has compound discounts only, the compound threshold discounts can be compounded. | The final amount due for Prod1 is 7.29, Prod 2 is 17, and Prod 3 is 6.75. #### Example 2 In the second scenario, **Best price only within priority, always compound across priorities** is selected as the discount concurrency control model while rest of the discounts remain as is. 1. Given that **Discount concurrency control model** is selected, for each product the pricing engine next considers the highest pricing priority of the discounts that are applicable on a product. If, for a product, discounts from more than one priority are applicable, then each is evaluated independently, and in descending order. Thus, the pricing engine first evaluates and applies the simple, quantity, and mix-and-match discounts with priority 10 followed by the discounts at priority 5. > [!NOTE] > Like the previous discount concurrency control model, the **Threshold** discounts are not evaluated yet. 2. Within priority 10, the pricing engine first considers the discounts that have the concurrency mode set to **Exclusive**. 
If there is more than one exclusive discount applicable to the product, then the best exclusive discount is applied. When a product gets an exclusive discount, no other discounts can be applied to this product at any priority. > [!NOTE] > Mix-and-match, least-expensive discounts that have the **Multiple occurrences mode** property set to **Favor retailer** are skipped in this step. After all the **Exclusive** discounts (**Simple**, **Quantity** and **Mix and Match**) at pricing priority 10 have been applied, then the **Exclusive** mix-and-match **Favor retailer** discounts, at pricing priority 10, are applied to any undiscounted products. 3. Within priority 10, the pricing engine then considers the discounts that have the discount concurrency mode set to **Best price** and **Compound**. As stated before, for this discount concurrency control model, discounts with **Discount concurrency mode** set to **Best price** and **Compound** are all treated as "best price"; within a single pricing priority. So, if multiple **Compound** and **Best price** discounts apply to a product, then all these discounts compete for best price and only the best discount wins within a priority. Like the previous step, mix-and-match least-expensive discounts that have the **Multiple occurrences mode** property set to **Favor retailer** are skipped in this step. When all the **Best price** and **Compound** discounts at pricing priority 10 have been applied, then **Best price** and **Compound** mix-and-match **Favor retailer** discounts are evaluated against each other and applied. Because both **Best price** and **Compound** are treated as best price, only one discount can be applied per product at a given priority. 4. The pricing engine repeats the steps 1 through 3 for any simple, quantity, and mix-and-match discounts at pricing priority 5. > [!NOTE] > The pricing engine completes steps 1 through 3, one time, for every pricing priority that applies to the transaction. Therefore, we recommend that you keep the number of pricing priorities to a minimum, based on your business requirements. At this point, all the simple, quantity, and mix-and-match discounts at all priorities have been evaluated and applied. 5. Next, within priority 10, the pricing engine evaluates threshold discounts that have the concurrency mode set to **Exclusive**. An **Exclusive** threshold discount can't be applied to a product that already has a discount applied, so a threshold amount is applied and evaluated only on undiscounted products. If more than one of these discounts apply to the transaction, the discounts compete, and the largest discount is applied. 6. Next, within priority 10, the pricing engine evaluates threshold discounts that have the concurrency mode set to **Best price** and **Compound**. Because **Best price** and **Compound** are all treated as "best price", these discounts compete for the best discount. The selected threshold discount gets applied to those products which do not have any other types of discounts already applied at priority 10. If there are other discounts, then the threshold discount is not applied because both Best price and Compound discounts are treated as Best price, and only one discount per priority is allowed with this discount concurrency control. 
> [!NOTE] > If there were threshold discounts set at a higher priority, for example, it's set to 11 and all the other discount types were at priority 10 and 5, then the threshold discounts would have evaluated at priority 11 and the best threshold discount would have been applied at priority 11 (assuming there is no **Exclusive** discount (**Simple**, **Quantity**, or **Mix and Match**) applied at a lower priority). 7. The pricing engine repeats the steps 5 and 6 for threshold discounts at pricing priority 5. Let's use the same example as before. **Product information** | Product \# | Product price | |---|---| | Prod1 | $10 | | Prod2 | $20 | | Prod3 | $10 | **Discount setup** | Discount \# | Discount concurrency | Priority | Discount amount | Discount type | Applicable on products | |---|---|---|---|---|---| | BP1 | Best price | 10 | 15% | Simple, Quantity, or Mix and Match | Prod1, Prod2 | | BP2 | Best price | 5 | 20% | Simple, Quantity, or Mix and Match | All products | | C1 | Compound | 10 | $1 | Simple, Quantity, or Mix and Match | Prod1, Prod2 | | C2 | Compound | 10 | 10% | Simple, Quantity, or Mix and Match | Prod1, Prod2 | | C3 | Compound | 5 | 25% | Simple, Quantity, or Mix and Match | All products | | C4 | Compound | 5 | 10% | Threshold only | All products | **Step 1:** For each product, determine the highest priority where a **Simple**, **Quantity**, or **Mix and Match** discount exists. In this case, for prod1 it is priority 10, for prod2 it is priority 10, and for prod3 it is priority 5. **Step 2:** For each product, find **Simple**, **Quantity**, or **Mix and Match** discounts, with discount concurrency as **Exclusive**, at the highest priority applicable to individual products. In this case, there are none for prod1 and prod2 at priority 10 and there are none for prod3 at priority 5. **Step 3:** For each product, evaluate **Simple**, **Quantity**, or **Mix and Match** discounts, with discount concurrency as **Best price** and **Compound**, at the highest priority applicable to individual products. See the table below. > [!NOTE] > Two asterisks (\*\*) indicate the discount that gets applied to a product. | Transaction quantity | Product | Price | Priority 10 (C1) | Priority 10 (C2) | Priority 10 (BP1) | Total | Explanation | |---|---|---|---|---|---|---|---| | 1 | Prod1 | $10 | $1 | $1 | $1.50\*\* | $10 – 1.50 = $8.50 | Because the compound discounts are treated as the "Best price" for discounts for this discount concurrency control model, the compound discounts will not combine. Rather they all compete for the best discount. Because BP1 gives the highest discount, BP1 gets applied at priority 10. | | 1 | Prod2 | $20 | $1 | $2 | $3\*\* | $20 – 3 = $17.00 | Same as above. Because BP1 gives the highest discount, BP1 gets applied at priority 10. | | 1 | Prod3 | $10 | | | | $10 | No discounts applicable at priority 10. | **Step 4:** For each product, determine the next highest priority where a **Simple**, **Quantity**, or **Mix and Match** discount exists. In this case, it is priority 5 for all three products. **Step 5:** At priority 5, find **Simple**, **Quantity**, or **Mix and Match** discounts with discount concurrency mode as **Exclusive**. In this case, none. > [!NOTE] > If an exclusive discount existed at priority 5, then it would have been ignored as exclusive discounts cannot co-exist with other discounts which have been applied at a higher priority **Step 6:** At priority 5, evaluate **Simple**, **Quantity**, or **Mix and Match** discounts. 
The following table illustrates this. | Transaction quantity | Product | Discounted Price | Priority 5 (C3) | Priority 5 (BP2) | Total | Explanation | |---|---|---|---|---|---|---| | 1 | Prod1 | $8.50 | $2.13\*\* | $1.7 | $8.50 – 2.13 = $6.37 | Because the discounts across priorities are compounded, the discounts at priority 5 are compounded on the discounts applied at priority 10. C3 gives the highest discount so C3 gets applied at priority 5. | | 1 | Prod2 | $17 | $4.25\*\* | $3.40 | $17.00 – 4.25 = $12.75 | Same as above. Because C3 gives the highest discount, C3 gets applied at priority 5. | | 1 | Prod3 | $10 | $2.5\*\* | $2.0 | $10 – 2.5 = $7.5 | Same as above. Because C3 gives the highest discount, C3 gets applied at priority 5. | **Step 7:** Evaluate **Threshold** discounts. | Transaction quantity | Product | Discount applied at priority 10 | Discount applied at priority 5 | Discounted price | Priority 5 (C4) | Amount due | Explanation | |---|---|---|---|---|---|---|---| | 1 | Prod1 | BP1 | C3 | $6.37 | (NA) | $6.37 | For Threshold discounts, Priority 5 is highest applicable priority for this product. But the threshold discount at priority 5 will only get applied if there is no other discount applied at the priority 5. This is because both best price and compound discounts are treated as "Best price" and only one discount per priority is allowed with this discount concurrency control. So, the threshold discount is ignored. | | 1 | Prod2 | BP1 | C3 | $12.75 | (NA) | $12.75 | Same as above | | 1 | Prod3 | | C3 | $7.5 | (NA) | $7.5 | Same as above | The final amount due for prod1 is 6.37, Prod 2 is 12.75, and Prod 3 is 7.5. > [!NOTE] > For the same discount setting, the results vastly differ depending on which discount concurrency control model is selected. [!INCLUDE[footer-include](../includes/footer-banner.md)]
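To make the "best price versus compound" comparison in Example 1 concrete, here is a back-of-the-envelope sketch that reproduces the Prod1 and Prod2 rows of the Step 3 table under the setup above. It is not the Commerce pricing engine; the function names and discount representation are assumptions for illustration only.

```js
// Apply a list of compound discounts one after another (fixed amounts as given,
// percentages taken on the remaining price) and return the total discount amount.
function compoundTotal(price, discounts) {
    let remaining = price;
    let total = 0;
    for (const d of discounts) {
        const amount = d.type === 'percent' ? remaining * d.value / 100 : d.value;
        total += amount;
        remaining -= amount;
    }
    return total;
}

// "Best price and compound within priority": the combined compound discounts
// compete against the single best "best price" discount at the same priority.
function winningDiscount(price, compoundDiscounts, bestPricePercents) {
    const compound = compoundTotal(price, compoundDiscounts);
    const bestPrice = Math.max(...bestPricePercents.map(p => price * p / 100));
    return Math.max(compound, bestPrice);
}

// Prod1: C1 ($1) + C2 (10%) = $1.90 beats BP1 (15% = $1.50), so the customer pays $8.10.
console.log(10 - winningDiscount(10, [{ type: 'amount', value: 1 }, { type: 'percent', value: 10 }], [15]));
// Prod2: BP1 (15% = $3.00) beats C1 + C2 ($2.90), so the customer pays $17.00.
console.log(20 - winningDiscount(20, [{ type: 'amount', value: 1 }, { type: 'percent', value: 10 }], [15]));
```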
103.421687
1,131
0.751903
eng_Latn
0.998771
f496ec7755797cd396bff7fd3263a408a92c6c2c
6,724
md
Markdown
docs/OrganizingTestSuites.md
apkatsikas/webdriverio
21539bb965e87df2df5ae4971cc7ce826a538582
[ "MIT" ]
2
2020-06-09T19:56:37.000Z
2020-11-06T02:45:01.000Z
docs/OrganizingTestSuites.md
apkatsikas/webdriverio
21539bb965e87df2df5ae4971cc7ce826a538582
[ "MIT" ]
7
2021-03-01T21:22:56.000Z
2022-02-27T08:39:31.000Z
docs/OrganizingTestSuites.md
apkatsikas/webdriverio
21539bb965e87df2df5ae4971cc7ce826a538582
[ "MIT" ]
null
null
null
--- id: organizingsuites title: Organizing Test Suite --- As projects grow, inevitably more and more integration tests are added. This increases build time and slows productivity. To prevent this, you should run your tests in parallel. WebdriverIO already tests each spec (or <dfn>feature file</dfn> in Cucumber) in parallel within a single session. In general, try to test a only a single feature per spec file. Try to not have too many or too few tests in one file. (However, there is no golden rule here.) Once your tests have several spec files, you should start running your tests concurrently. To do so, adjust the `maxInstances` property in your config file. WebdriverIO allows you to run your tests with maximum concurrency—meaning that no matter how many files and tests you have, they can all run in parallel. (This is still subject to certain limits, like your computer’s CPU, concurrency restrictions, etc.) > Let's say you have 3 different capabilities (Chrome, Firefox, and Safari) and you have set `maxInstances` to `1`. The WDIO test runner will spawn 3 processes. Therefore, if you have 10 spec files and you set `maxInstances` to `10`, _all_ spec files will be tested simultaneously, and 30 processes will be spawned. You can define the `maxInstances` property globally to set the attribute for all browsers. If you run your own WebDriver grid, you may (for example) have more capacity for one browser than another. In that case, you can _limit_ the `maxInstances` in your capability object: ```js // wdio.conf.js exports.config = { // ... // set maxInstance for all browser maxInstances: 10, // ... capabilities: [{ browserName: 'firefox' }, { // maxInstances can get overwritten per capability. So if you have an in-house WebDriver // grid with only 5 firefox instance available you can make sure that not more than // 5 instance gets started at a time. browserName: 'chrome' }], // ... } ``` ## Inherit From Main Config File If you run your test suite in multiple environments (e.g., dev and integration) it may help to use multiple configuration files to keep things manageable. Similar to the [page object concept](PageObjects.md), the first thing you’ll need is a main config file. It contains all configurations you share across environments. Then create another config file for each environment, and supplement the the main config with the environment-specific ones: ```js // wdio.dev.config.js import merge from 'deepmerge' import wdioConf from './wdio.conf.js' // have main config file as default but overwrite environment specific information exports.config = merge(wdioConf.config, { capabilities: [ // more caps defined here // ... ], // run tests on sauce instead locally user: process.env.SAUCE_USERNAME, key: process.env.SAUCE_ACCESS_KEY, services: ['sauce'] }, { clone: false }) // add an additional reporter exports.config.reporters.push('allure') ``` ## Grouping Test Specs You can easily group test specs in suites and run single specific suites instead of all of them. First, define your suites in your WDIO config: ```js // wdio.conf.js exports.config = { // define all tests specs: ['./test/specs/**/*.spec.js'], // ... // define specific suites suites: { login: [ './test/specs/login.success.spec.js', './test/specs/login.failure.spec.js' ], otherFeature: [ // ... ] }, // ... 
} ``` Now, if you want to only run a single suite, you can pass the suite name as a CLI argument: ```sh wdio wdio.conf.js --suite login ``` Or, run multiple suites at once: ```sh wdio wdio.conf.js --suite login --suite otherFeature ``` ## Run Selected Tests In some cases, you may wish to only execute a single test (or a subset of tests) from your suites. With the `--spec` parameter, you can specify which _suite_ (Mocha, Jasmine) or _feature_ (Cucumber) should be run. For example, to run only your login test: ```sh wdio wdio.conf.js --spec ./test/specs/e2e/login.js ``` Or run multiple specs at once: ```sh wdio wdio.conf.js --spec ./test/specs/signup.js --spec ./test/specs/forgot-password.js ``` If the `--spec` value does not point to a particular spec file, it is instead used to filter the spec filenames defined in your configuration. To run all specs with the word “dialog” in the spec file names, you could use: ```sh wdio wdio.conf.js --spec dialog ``` Note that each test file runs in a single test runner process. Since we don't scan files in advance (see the next section for information on piping filenames to `wdio`), you _can't_ use (for example) `describe.only` at the top of your spec file to instruct Mocha to run only that suite. The `--spec` parameter will help you accomplish the same goal. ## Exclude Selected Tests If you need to exclude particular spec file(s) (Mocha, Jasmine) or feature file(s) (Cucumber) from a run, you can use the `--exclude` parameter. For example, to exclude your login test from the test run: ```sh wdio wdio.conf.js --exclude ./test/specs/e2e/login.js ``` Or, exclude multiple spec files: ```sh wdio wdio.conf.js --exclude ./test/specs/signup.js --exclude ./test/specs/forgot-password.js ``` Or, exclude a spec file when filtering using a suite: ```sh wdio wdio.conf.js --suite login --exclude ./test/specs/e2e/login.js ``` ## Run Suites and Test Specs Run an entire suite along with individual specs. ```sh wdio wdio.conf.js --suite login --spec ./test/specs/signup.js ``` ## Run Multiple, Specific Test Specs It is sometimes necessary&mdash;in the context of continuous integration and otherwise&mdash;to specify multiple sets of specs to run. WebdriverIO's `wdio` command line utility accepts piped-in filenames (from `find`, `grep`, or others). Piped-in filenames override the list of globs or filenames specified in the configuration's `spec` list. ```sh grep -r -l --include "*.js" "myText" | wdio wdio.conf.js ``` _**Note:** This will_ not _override the `--spec` flag for running a single spec._ ## Stop testing after failure With the `bail` option, you can tell WebdriverIO to stop testing after any test fails. This is helpful with large test suites when you already know that your build will break, but you want to avoid the lengthy wait of a full testing run. The `bail` option expects a number, which specifies how many test failures can occur before WebdriverIO stops the entire testing run. The default is `0`, meaning that it always runs all test specs it can find. Please see [Options Page](Options.md) for additional information on the bail configuration.
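For reference, a minimal sketch of what the `bail` option described above might look like in a config file (the value `1` here is only an example threshold, not a recommendation):

```js
// wdio.conf.js
exports.config = {
    // ...
    // Stop the whole run after the first failing test.
    // Set to 0 (the default) to always run every spec.
    bail: 1,
    // ...
    specs: ['./test/specs/**/*.spec.js'],
    capabilities: [{ browserName: 'chrome' }]
}
```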
36.150538
411
0.71981
eng_Latn
0.996465
f4970fbc6ef516049db1c7c36ec38f66db4b7c1f
18,221
md
Markdown
docs/editor/extension-gallery.md
Alex-EEE/vscode-docs
7a9d50a6837dcd9cc5f47c08ab580d5545e221b9
[ "CC-BY-3.0", "MIT" ]
1
2020-02-09T09:58:52.000Z
2020-02-09T09:58:52.000Z
docs/editor/extension-gallery.md
Alex-EEE/vscode-docs
7a9d50a6837dcd9cc5f47c08ab580d5545e221b9
[ "CC-BY-3.0", "MIT" ]
null
null
null
docs/editor/extension-gallery.md
Alex-EEE/vscode-docs
7a9d50a6837dcd9cc5f47c08ab580d5545e221b9
[ "CC-BY-3.0", "MIT" ]
1
2020-02-09T09:58:54.000Z
2020-02-09T09:58:54.000Z
--- Order: 3 Area: editor TOCTitle: Extension Marketplace ContentId: 319916C4-93F2-471F-B448-FD416736C40C PageTitle: Managing Extensions in Visual Studio Code DateApproved: 2/5/2020 MetaDescription: Discover, add, update, disable and uninstall Visual Studio Code extensions (plug-ins) through the Extension Marketplace. --- # Extension Marketplace **Increase the power of Visual Studio Code through Extensions** The features that Visual Studio Code includes out-of-the-box are just the start. VS Code extensions let you add languages, debuggers, and tools to your installation to support your development workflow. VS Code's rich extensibility model lets extension authors plug directly into the VS Code UI and contribute functionality through the same APIs used by VS Code. This topic explains how to find, install, and manage VS Code extensions from the [Visual Studio Code MarketPlace](https://marketplace.visualstudio.com/VSCode). ## Browse for extensions You can browse and install extensions from within VS Code. Bring up the Extensions view by clicking on the Extensions icon in the **Activity Bar** on the side of VS Code or the **View: Extensions** command (`kb(workbench.view.extensions)`). ![Extensions view icon](images/extension-gallery/extensions-view-icon.png) This will show you a list of the most popular VS Code extensions on the [VS Code Marketplace](https://marketplace.visualstudio.com/VSCode). ![popular extensions](images/extension-gallery/extensions-popular.png) Each extension in the list includes a brief description, the publisher, the download count, and a five star rating. You can click on the extension item to display the extension's details page where you can learn more. > **Note:** If your computer's Internet access goes through a proxy server, you will need to configure the proxy server. See [Proxy server support](/docs/setup/network.md#proxy-server-support) for details. ### Install an extension To install an extension, click the **Install** button. Once the installation is complete, the **Install** button will change to the **Manage** gear button. ### Extension details On the extension details page, you can read the extension's README as well as review the extension's: * **Contributions** - The extension's additions to VS Code such as settings, commands and keyboard shortcuts, language grammars, debugger, etc. * **Changelog** - The extension repository CHANGELOG if available. * **Dependencies** - Lists if the extension depends on any other extensions. ![extension contributions](images/extension-gallery/extension-contributions.png) If an extension is an Extension Pack, the **Extension Pack** section will display which extensions will be installed when you install the pack. [Extension Packs](/api/references/extension-manifest.md#extension-packs) bundle separate extensions together so they can be easily installed at one time. ![extension dependencies](images/extension-gallery/extension-dependencies.png) ### Extensions view commands You can run various Extensions view commands by clicking on the Extensions view's `...` **More Actions** button. ![more button](images/extension-gallery/more-button.png) There are commands to show: * The list of currently installed extensions * The list of outdated extensions that can be updated * The list of currently enabled/disabled extensions * The list of recommended extensions based on your workspace * The list of globally popular extensions You can sort the extension list by **Install Count** or **Rating** in either ascending or descending order. 
You can learn more about extension search filters [below](#extensions-view-filters). ![more dropdown](images/extension-gallery/more-dropdown.png) ### Search for an extension You can clear the Search box at the top of the Extensions view and type in the name of the extension, tool, or programming language you're looking for. For example, typing 'python' will bring up a list of Python language extensions: ![python extensions](images/extension-gallery/extensions-python.png) If you know the exact identifier for an extension you're looking for, you can use the `@id:` prefix, for example `@id:octref.vetur`. Additionally, to filter or sort results, you can use the [filter](#extensions-view-filters) and [sort](#sorting) commands, detailed below. ## Manage extensions VS Code makes it easy to manage your extensions. You can install, disable, update, and uninstall extensions through the Extensions view, the **Command Palette** (commands have the **Extensions:** prefix) or command-line switches. ### List installed extensions By default, the Extensions view will show the extensions you currently have enabled, all extensions that are recommended for you, and a collapsed view of all extensions you have disabled. You can use the **Show Installed Extensions** command, available in the **Command Palette** (`kb(workbench.action.showCommands)`) or the **More Actions** (`...`) drop-down menu, to clear any text in the search box and show the list of all installed extensions, which includes those that have been disabled. ### Uninstall an extension To uninstall an extension, click the gear button at the right of an extension entry and then choose **Uninstall** from the drop-down menu. This will uninstall the extension and prompt you to reload VS Code. ![uninstall an extension](images/extension-gallery/uninstall-extension.png) ### Disable an extension If you don't want to permanently remove an extension, you can instead temporarily disable the extension by clicking the gear button at the right of an extension entry. You can disable an extension globally or just for your current Workspace. You will be prompted to reload VS Code after you disable an extension. If you want to quickly disable all installed extensions, there is a **Disable All Installed Extensions** command in the **Command Palette** and **More Actions** (`...`) drop-down menu. Extensions remain disabled for all VS Code sessions until you re-enable them. ### Enable an extension Similarly if you have disabled an extension (it will be in the **Disabled** section of the list and marked ***Disabled***), you can re-enable it with the **Enable** or **Enable (Workspace)** commands in the drop-down menu. ![enable extension](images/extension-gallery/enable-extension.png) There is also an **Enable All Extensions** command in the **More Actions** (`...`) drop-down menu. ### Extension auto-update VS Code checks for extension updates and installs them automatically. After an update, you will be prompted to reload VS Code. If you'd rather update your extensions manually, you can disable auto-update with the **Disable Auto Updating Extensions** command that sets the `extensions.autoUpdate` [setting](/docs/getstarted/settings.md) to `false`. If you don't want VS Code to even check for updates, you can set the `extensions.autoCheckUpdates` setting to false. ### Update an extension manually If you have extensions auto-update disabled, you can quickly look for extension updates by using the **Show Outdated Extensions** command that uses the `@outdated` filter. 
This will display any available updates for your currently installed extensions. Click the **Update** button for the outdated extension and the update will be installed and you'll be prompted to reload VS Code. You can also update all your outdated extensions at one time with the **Update All Extensions** command. If you also have automatic checking for updates disabled, you can use the **Check for Extension Updates** command to check which of your extensions can be updated. ## Recommended extensions You can see a list of recommended extensions using **Show Recommended Extensions**, which sets the `@recommended` [filter](#extensions-view-filters). Extension recommendations can either be: * **Workspace Recommendations** - Recommended by other users of your current workspace. * **Other Recommendations** - Recommended based on recently opened files. See the section below to learn how to [contribute](#workspace-recommended-extensions) recommendations for other users in your project. ### Ignoring recommendations To dismiss a recommendation, click on the extension item to open the Details pane and then press the **Ignore Recommendation** button. Ignored recommendations will no longer be recommended to you. ![Ignore extension recommendation](images/extension-gallery/ignore-recommendation.png) ## Configuring extensions VS Code extensions may have very different configurations and requirements. Some extensions contribute [settings](/docs/getstarted/settings.md) to VS Code, which can be modified in the Settings editor. Other extensions may have their own configuration files. Extensions may also require installation and setup of additional components like compilers, debuggers, and command-line tools. Consult the extension's README (visible in the Extensions view details page) or go to the extension page on the [VS Code Marketplace](https://marketplace.visualstudio.com/VSCode) (click on the extension name in the details page). Many extensions are open source and have a link to their repository on their Marketplace page. ## Command line extension management To make it easier to automate and configure VS Code, it is possible to list, install, and uninstall extensions from the [command line](/docs/editor/command-line.md). When identifying an extension, provide the full name of the form `publisher.extension`, for example `ms-python.python`. Example: ```bash code --extensions-dir <dir> Set the root path for extensions. code --list-extensions List the installed extensions. code --show-versions Show versions of installed extensions, when using --list-extension. code --install-extension (<extension-id> | <extension-vsix-path>) Installs an extension. code --uninstall-extension (<extension-id> | <extension-vsix-path>) Uninstalls an extension. code --enable-proposed-api (<extension-id>) Enables proposed API features for extensions. Can receive one or more extension IDs to enable individually. ``` You can see the extension ID on the extension details page next to the extension name. ![extension identifier](images/extension-gallery/extension-identifier.png) ## Extensions view filters The Extensions view search box supports filters to help you find and manage extensions. You may have seen filters such as `@installed` and `@recommended` if you used the commands **Show Installed Extensions** and **Show Recommended Extensions**. Also, there are filters available to let you sort by popularity or ratings and search by category (for example 'Linters') and tags (for example 'node'). 
You can see a complete listing of all filters and sort commands by typing `@` in the extensions search box and navigating through the suggestions: ![intellisense on extension search filters](images/extension-gallery/extension-search-filters.png) Here are the Extensions view filters: * `@builtin` - Show extensions that come with VS Code. Grouped by type (Programming Languages, Themes, etc.). * `@disabled` - Show disabled installed extensions. * `@installed` - Show installed extensions. * `@outdated` - Show outdated installed extensions. A newer version is available on the Marketplace. * `@enabled` - Show enabled installed extensions. Extensions can be individually enabled/disabled. * `@recommended` - Show recommended extensions. Grouped as Workspace specific or general use. * `@category` - Show extensions belonging to specified category. Below are a few of supported categories. For a complete list, type `@category` and follow the options in the suggestion list: * `@category:themes` * `@category:formatters` * `@category:linters` * `@category:snippets` These filters can be combined as well. For example: Use `@installed @category:themes` to view all installed themes. If no filter is provided, the Extensions view displays the currently installed and recommended extensions. ### Sorting You can sort extensions with the `@sort` filter, which can take the following values: * `installs` - Sort by Marketplace installation count, in descending order. * `rating` - Sort by Marketplace rating (1-5 stars), in descending order. * `name` - Sort alphabetically by extension name. ![sort by install count](images/extension-gallery/sort-install-count.png) ### Categories and tags Extensions can set **Categories** and **Tags** describing their features. ![extension categories and tags](images/extension-gallery/categories-and-tags.png) You can filter on category and tag by using `category:` and `tag:`. Supported categories are: `[Programming Languages, Snippets, Linters, Themes, Debuggers, Formatters, Keymaps, SCM Providers, Other, Extension Packs, Language Packs]`. They can be accessed through IntelliSense in the extensions search box: ![categories debuggers](images/extension-gallery/extension-search-categories.png) Note that you must surround the category name in quotes if it is more than one word (for example, `category:"SCM Providers"`). Tags may contain any string and are not provided by IntelliSense, so review the Marketplace to find helpful tags. ## Install from a VSIX You can manually install a VS Code extension packaged in a `.vsix` file. Using the **Install from VSIX** command in the Extensions view command drop-down, or the **Extensions: Install from VSIX** command in the **Command Palette**, point to the `.vsix` file. You can also install using the VS Code `--install-extension` command-line switch providing the path to the `.vsix` file. ```bash code --install-extension myextension.vsix ``` You may provide the `--install-extension` multiple times on the command line to install multiple extensions at once. If you'd like to learn more about packaging and publishing extensions, see our [Publishing Extensions](/api/working-with-extensions/publishing-extension.md) topic in the Extension API. ## Workspace recommended extensions A good set of extensions can make working with a particular workspace or programming language more productive and you'd often like to share this list with your team or colleagues. 
You can create a recommended list of extensions for a workspace with the **Extensions: Configure Recommended Extensions (Workspace)** command. In a single folder workspace, the command creates an `extensions.json` file located in the workspace `.vscode` folder where you can add a list of extensions identifiers ({publisherName}.{extensionName}). In a [multi-root workspace](/docs/editor/multi-root-workspaces.md), the command will open your `.code-workspace` file where you can list extensions under `extensions.recommendations`. You can still add extension recommendations to individual folders in a multi-root workspace by using the **Extensions: Configure Recommended Extensions (Workspace Folder)** command. An example `extensions.json` could be: ```json { "recommendations": [ "ms-vscode.vscode-typescript-tslint-plugin", "dbaeumer.vscode-eslint", "msjsdiag.debugger-for-chrome" ] } ``` which recommends two linter extensions, TSLint and ESLint, as well as the Chrome debugger extension. An extension is identified using its publisher name and extension identifier `publisher.extension`. You can see the name on the extension's detail page. VS Code will provide you with auto-completion for installed extensions inside these files. ![Extension identifier](images/extension-gallery/extension-identifier.png). VS Code prompts a user to install the recommended extensions when a workspace is opened for the first time. The user can also review the list with the **Extensions: Show Recommended Extensions** command. ![Show Recommendations](images/extension-gallery/recommendations.png) ## Next steps Here are a few topics you may find interesting... * [Extension API](/api) - Start learning about the VS Code extension API. * [Your First Extension](/api/get-started/your-first-extension.md) - Try creating a simple Hello World extension. * [Publishing to the Marketplace](/api/working-with-extensions/publishing-extension.md) - Publish your own extension to the VS Code Marketplace. ## Common questions ### Where are extensions installed? Extensions are installed in a per user extensions folder. Depending on your platform, the location is in the following folder: * **Windows** `%USERPROFILE%\.vscode\extensions` * **macOS** `~/.vscode/extensions` * **Linux** `~/.vscode/extensions` You can change the location by launching VS Code with the `--extensions-dir <dir>` command-line [option](/docs/editor/command-line.md). ### Whenever I try to install any extension, I get a connect ETIMEDOUT error You may see this error if your machine is going through a proxy server to access the Internet. See the [Proxy server support](/docs/setup/network.md#proxy-server-support) section in the setup topic for details. ### Can I download an extension directly from the Marketplace? Some users prefer to download an extension once from the Marketplace and then install it multiple times from a local share. This is useful when there are connectivity concerns or if your development team wants to use a fixed set of extensions. To download an extension, navigate to the details page for the specific extension within the [Marketplace](https://marketplace.visualstudio.com/vscode). On that page, there is a **Download Extension** link in the **Resources** section, which is located on the right-hand side of the page. Once downloaded, you can then install the extension via the **Install from VSIX** command in the Extensions view command drop-down. ### Can I stop VS Code from providing extension recommendations? 
Yes, if you would prefer to not have VS Code display extension recommendations in the Extensions view or through notifications, you can modify the following settings: * `extensions.showRecommendationsOnlyOnDemand` - Set to true to remove the **RECOMMENDED** section. * `extensions.ignoreRecommendations` - Set to true to silence extension recommendation notifications. The **Show Recommended Extensions** command is always available if you wish to see recommendations.
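As a quick illustration of the settings mentioned in this topic, a `settings.json` fragment could look like the following. The setting names are the ones referenced above; the values shown are examples, not defaults:

```json
{
    // Silence extension recommendation notifications
    "extensions.ignoreRecommendations": true,
    // Hide the RECOMMENDED section unless explicitly requested
    "extensions.showRecommendationsOnlyOnDemand": true,
    // Turn off automatic extension updates and update checks
    "extensions.autoUpdate": false,
    "extensions.autoCheckUpdates": false
}
```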
62.400685
710
0.781241
eng_Latn
0.991716
f4971a5558d077cdde7ff798f8bbd9ca57b00faf
5,849
md
Markdown
docs/spark/what-is-spark.md
mtorreao/docs.pt-br
e080cd3335f777fcb1349fb28bf527e379c81e17
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/spark/what-is-spark.md
mtorreao/docs.pt-br
e080cd3335f777fcb1349fb28bf527e379c81e17
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/spark/what-is-spark.md
mtorreao/docs.pt-br
e080cd3335f777fcb1349fb28bf527e379c81e17
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: What is Apache Spark? description: Learn about Apache Spark and big data scenarios. ms.date: 10/15/2019 ms.topic: conceptual ms.custom: mvc ms.openlocfilehash: cde66c4084b7c86e1b78d89c2bad94402dbd7d60 ms.sourcegitcommit: b7a8b09828bab4e90f66af8d495ecd7024c45042 ms.translationtype: MT ms.contentlocale: pt-BR ms.lasthandoff: 08/04/2020 ms.locfileid: "87555988" --- # <a name="what-is-apache-spark"></a>What is Apache Spark? [Apache Spark](https://spark.apache.org/) is an open-source parallel processing framework that supports in-memory processing to boost the performance of applications that analyze big data. Big data solutions are designed to handle data that is too large or complex for traditional databases. Spark processes large amounts of data in memory, which is much faster than disk-based alternatives. ## <a name="common-big-data-scenarios"></a>Common big data scenarios You might consider a big data architecture if you need to store and process large volumes of data, transform unstructured data, or process streaming data. Spark is a general-purpose distributed processing engine that can be used for several big data scenarios. ### <a name="extract-transform-and-load-etl"></a>Extract, transform, and load (ETL) [Extract, transform, and load (ETL)](/azure/architecture/data-guide/relational-data/etl) is the process of collecting data from one or multiple sources, modifying the data, and moving it to a new data store. There are several ways to transform data, including: * Filtering * Sorting * Aggregating * Joining * Cleaning * Deduplicating * Validating ### <a name="real-time-data-stream-processing"></a>Real-time data stream processing Real-time and streaming data is data in motion. Telemetry from IoT devices, web logs, and clickstreams are examples of streaming data. Real-time data can be processed to provide useful information, such as geospatial analysis, remote monitoring, and anomaly detection. Just like relational data, you can filter, aggregate, and prepare streaming data before moving it to an output sink. Apache Spark supports [real-time data stream processing](/azure/architecture/data-guide/big-data/real-time-processing) through [Spark Streaming](https://spark.apache.org/streaming/). ### <a name="batch-processing"></a>Batch processing [Batch processing](/azure/architecture/data-guide/big-data/batch-processing) is the processing of big data at rest. You can filter, aggregate, and prepare very large datasets using long-running jobs in parallel. ### <a name="machine-learning-through-mllib"></a>Machine learning through MLlib Machine learning is used for advanced analytical problems. Your computer can use existing data to forecast or predict future behaviors, outcomes, and trends. Apache Spark's machine learning library, [MLlib](https://spark.apache.org/mllib/), contains several machine learning algorithms and utilities. ### <a name="graph-processing-through-graphx"></a>Graph processing through GraphX A graph is a collection of nodes connected by edges. You might use a graph database if you have hierarchical data or data with interconnected relationships. You can process this data using Apache Spark's [GraphX](https://spark.apache.org/graphx/) API. ### <a name="sql-and-structured-data-processing-with-spark-sql"></a>SQL and structured data processing with Spark SQL If you are working with structured (formatted) data, you can use SQL queries in your Spark application using [Spark SQL](https://spark.apache.org/sql/). ## <a name="apache-spark-architecture"></a>Apache Spark architecture Apache Spark, which uses a master/worker architecture, has three main components: the driver, the executors, and the cluster manager. ![Apache Spark architecture](media/spark-architecture.png) ### <a name="driver"></a>Driver The driver consists of your program, such as a C# console app, and a Spark session. The Spark session takes your program and divides it into smaller tasks that are handled by the executors. ### <a name="executors"></a>Executors Each executor, or worker node, receives a task from the driver and executes it. The executors reside on an entity known as a cluster. ### <a name="cluster-manager"></a>Cluster manager The cluster manager communicates with both the driver and the executors to: * Manage resource allocation * Manage program division * Manage program execution ## <a name="language-support"></a>Language support Apache Spark supports the following programming languages: * Scala * Python * Java * SQL * R * .NET languages (C#/F#) ## <a name="spark-apis"></a>Spark APIs Apache Spark supports the following APIs: * [Spark Scala API](https://spark.apache.org/docs/2.2.0/api/scala/index.html) * [Spark Java API](https://spark.apache.org/docs/2.2.0/api/java/index.html) * [Spark Python API](https://spark.apache.org/docs/2.2.0/api/python/index.html) * [Spark R API](https://spark.apache.org/docs/2.2.0/api/R/index.html) * [Spark SQL](https://spark.apache.org/docs/latest/api/sql/index.html), built-in functions ## <a name="next-steps"></a>Next steps Learn how you can use Apache Spark in your .NET application. With .NET for Apache Spark, developers with .NET experience and business logic knowledge can write big data queries in C# and F#. > [!div class="nextstepaction"] > [What is .NET for Apache Spark](what-is-apache-spark-dotnet.md)
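To make the filter/aggregate and Spark SQL ideas above concrete, here is a minimal, illustrative sketch in PySpark (Python is one of the supported languages listed above; the article itself points to the .NET bindings, and the data and column names here are made up purely for demonstration):

```python
from pyspark.sql import SparkSession

# Start (or reuse) a local Spark session
spark = SparkSession.builder.appName("spark-overview-demo").getOrCreate()

# A tiny in-memory DataFrame standing in for a real data source
df = spark.createDataFrame(
    [("sensor-1", 21.5), ("sensor-2", 35.0), ("sensor-1", 22.1)],
    ["device", "temperature"],
)

# Typical ETL-style transformations: filter, aggregate, sort
summary = (
    df.filter(df.temperature > 22.0)
      .groupBy("device")
      .avg("temperature")
      .orderBy("device")
)
summary.show()

# The same query expressed with Spark SQL
df.createOrReplaceTempView("readings")
spark.sql("SELECT device, AVG(temperature) FROM readings GROUP BY device").show()
```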
57.343137
649
0.776543
por_Latn
0.997633
f497e2c1e3e995b9385c0511f215a8f4f1b84518
333
md
Markdown
source/_posts/SSH-Configuration.md
0x2CA/0x2CA.github.io
fb89003fcb96b50687b46980e021219a59c75c4d
[ "MIT" ]
2
2016-08-30T03:02:32.000Z
2017-08-04T14:38:23.000Z
source/_posts/SSH-Configuration.md
0x2CA/0x2CA.github.io
fb89003fcb96b50687b46980e021219a59c75c4d
[ "MIT" ]
37
2020-03-18T06:03:45.000Z
2022-02-26T01:51:53.000Z
source/_posts/SSH-Configuration.md
0x2CA/0x2CA.github.io
fb89003fcb96b50687b46980e021219a59c75c4d
[ "MIT" ]
null
null
null
--- title: SSH Configuration date: 2016-09-02 15:53:07 tags: [Course] category: [Linux] toc: true comments: true --- ## Prerequisites ``` $ sudo pacman -S openssh ``` ## Enable at Boot ``` $ sudo systemctl enable sshd $ sudo reboot ``` <!--more--> ## Configuration Add the following to `/etc/ssh/sshd_config`: ``` Port 22 # listening port MaxAuthTries 3 # allow at most three failed login attempts ``` ## Login Password ``` $ sudo passwd <username> ```
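One step the walkthrough above leaves implicit: changes to `sshd_config` only take effect once the daemon is restarted (or the machine is rebooted). On a systemd-based system such as Arch, that would look roughly like this (a small sketch, not part of the original post):

```
$ sudo systemctl restart sshd
$ systemctl status sshd
```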
11.1
28
0.597598
yue_Hant
0.180732
f498b058869ebba17fe88161a804b70b968861b0
49,797
md
Markdown
content/_repos/dat_sci_2/09_ml_h2o_I.md
christophihl/startupengineer.io
b236e7a07b7a097a52adc8acf48cb01d881d44cc
[ "MIT" ]
null
null
null
content/_repos/dat_sci_2/09_ml_h2o_I.md
christophihl/startupengineer.io
b236e7a07b7a097a52adc8acf48cb01d881d44cc
[ "MIT" ]
null
null
null
content/_repos/dat_sci_2/09_ml_h2o_I.md
christophihl/startupengineer.io
b236e7a07b7a097a52adc8acf48cb01d881d44cc
[ "MIT" ]
1
2020-04-15T15:45:10.000Z
2020-04-15T15:45:10.000Z
--- title: Automated Machine Learning with H2O (I) linktitle: Automated Machine Learning with H2O (I) toc: true type: docs date: "2019-05-05T00:00:00+01:00" draft: false menu: dat_sci_2: parent: II. Machine Learning weight: 11 # Prev/next pager order (if `docs_section_pager` enabled in `params.toml`) weight: 10 --- In the next chapters, we learn `H2O`, an advanced, open-source machine learning tool available in R. The algorithm we focus on is Automated Machine Learning (AutoML). In the next chapters, you will learn: * How to generate high-performance models using `h2o.automl()` * What the `H2O Leaderboard` is and how to inspect its models visually * How to **select and extract H2O models** from the leaderboard by name and by position * How to **make predictions** using the H2O AutoML models We show you how to assess performance and visualize model quality in a way that executives and other business decision makers understand. The next sessions are arranged according to the CRISP-DM process model. CRISP-DM breaks the process of data analysis into six major phases: 1. Business Understanding 2. Data Understanding 3. Data Preparation 4. Modeling 5. Evaluation 6. Deployment This session focuses mainly on the first two steps. I will show you structured and flexible ways to review your data, so that you can communicate your findings easily to your stakeholders. This will be a good basis for preparing our data and building models with H2O (next session). ## <i class="fab fa-r-project" aria-hidden="true"></i>&nbsp;Theory Input ### H2O <a href="https://docs.h2o.ai/h2o/latest-stable/h2o-docs/welcome.html#r-users" target="_blank"> <img src="/img/icons/logo_h2o.png" align="right" style="width:200px; height:200px; padding:0px 0px 10px 10px; margin-top:0px; margin-bottom:0px;"/> </a> `H2O` is the scalable, open-source Machine Learning library that features `AutoML`. H2O AutoML automates the machine learning workflow, which includes automatic training and tuning of many models. This allows you to spend your time on more important tasks like feature engineering and understanding the problem. The most popular algorithms are incorporated, including: * XGBoost * GBM * GLM * Random Forest * and more. AutoML ensembles (combines) these models to provide superior performance. **Setup** H2O requires Java. If you do not already have Java installed, install it from [java.com](https://java.com/en/download/) before installing H2O. Supported versions include: Java 8, 9, 10, 11, 12, 13, 14. Don't use Version 15! To load a recent H2O package from CRAN, run: `install.packages("h2o")`. After H2O is installed on your system, verify the installation completed successfully by initializing H2O: ```r library(h2o) # To launch H2O locally with default initialization arguments, use the following: h2o.init() ``` We can ignore the warning that the h2o cluster version is too old. **Data preparation** Although it may seem like you are manipulating the data in R, once the data has been passed to H2O, all data munging occurs in the H2O instance. The information is passed to R through JSON APIs. You are limited by the total amount of memory allocated to the H2O instance, not by R’s ability to handle data. To process large datasets, make sure to allocate enough memory. Because we will not use H2O in this session yet, I won't go into any more detail about h2o at this point. This will be part of the next session. But feel free to click on the logo above to get further information about h2o.
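Since the paragraph above stresses allocating enough memory to the H2O instance, here is a small illustrative sketch of how that allocation can be made explicit at start-up (the 4 GB value is only an example, not a recommendation from the course):

```r
library(h2o)

# Start a local H2O cluster with an explicit memory budget and all CPU cores
h2o.init(max_mem_size = "4g", nthreads = -1)

# Inspect the running cluster (memory, cores, version)
h2o.clusterInfo()

# Shut the cluster down again when you are done
h2o.shutdown(prompt = FALSE)
```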
### Tidy Eval <a href="https://rlang.r-lib.org" target="_blank"> <img src="/img/icons/logo_rlang.svg" align="right" style="width:200px; height:200px; padding:0px 0px 10px 10px; margin-top:0px; margin-bottom:0px;"/> </a> `rlang` is a toolkit for working with core R and `tidyverse` features, and hosts the `tidy evaluation` concepts and tools. Tidy Evaluation (Tidy Eval) is not a package, but a framework for doing non-standard evaluation (i.e. delayed evaluation) that makes it easier to program with tidyverse functions. Let’s consider a simple example of calculating summary statistics with the built in `mtcars` dataset. Below we calculate maximum and minimum horsepower (hp) by the number of cylinders (cyl) using the `group_by` and `summarize` functions from `dplyr`. ```r library(tidyverse) hp_by_cyl <- mtcars %>% group_by(cyl) %>% summarize(min_hp=min(hp), max_hp=max(hp)) hp_by_cyl ## # A tibble: 3 x 3 ## cyl min_hp max_hp ## <dbl> <dbl> <dbl> ## 1 4 52 113 ## 2 6 105 175 ## 3 8 150 335 ``` Now let’s say we wanted to repeat this calculation multiple times while changing which variable we group by. A brute force method to accomplish this would be to copy and paste our code as many times as necessary and modify the group by variable in each iteration. However, this is inefficient especially if our code gets more complicated, requires many iterations, or requires further development. To avoid this inelegant solution you might think to store the name of a variable inside of another variable like this `groupby_var <- "vs"`. Then you could attempt to use your newly created `groupby_var` variable in your code: `group_by(groupby_var)`. However, if you try this you will find it doesn’t work. The `group_by` function expects the name of the variable you want to group by as an input, not the name of a variable that contains the name of the variable you want to group by. ```r groupby_var <- "vs" hp_by_vs <- mtcars %>% group_by(groupby_var) %>% summarize(min_hp=min(hp), max_hp=max(hp)) ## Error: Must group by variables found in `.data`. ## * Column `groupby_var` is not found. ## Run `rlang::last_error()` to see where the error occurred. ``` This is the kind of headache that tidy evaluation can help you solve. In the example below we use the `quo()` function and the bang-bang `!!` operator to set `vs` (engine type, 0 = automatic, 1 = manual) as our group by variable. The `quo()` function allows us to store the variable name in our `groupby_var` variable and `!!` extracts the stored variable name. ```r groupby_var <- quo(vs) hp_by_vs <- mtcars %>% group_by(!!groupby_var) %>% summarize(min_hp=min(hp), max_hp=max(hp)) hp_by_vs ## # A tibble: 2 x 3 ## vs min_hp max_hp ## <dbl> <dbl> <dbl> ## 1 0 91 335 ## 2 1 52 123 ``` The code above provides a method for setting the group by variable by modifying the input to the `quo()` function when we define `groupby_var`. This can be useful, particularly if we intend to reference the group by variable multiple times. However, if we want to use code like this repeatedly in a script then we should consider packaging it into a function. This is what we will do next. To use tidy evaluation in a function, we will still use the `!!` operator as we did above, but instead of `quo()` we will use the `enquo()` function. Our new function below takes the group by variable and the measurement variable as inputs so that we can now calculate maximum and minimum values of any variable we want. 
Note two new optional features, that are introduced in this function: * The `as_label()` function extracts the string value of the `measure_var` variable (“hp” in this case). We use this to set the value of the “measure_var” column. * The `walrus operator :=` is used to create a column named after the variable name stored in the `measure_var` argument (“hp” in the example). The walrus operator allows you to use strings and evaluated variables (such as “measure_var” in our example) on the left hand side of an assignment operation (where there would normally be a “=” operator) in functions such as “mutate” and “summarize”. ```r car_stats <- function(groupby_var, measure_var) { groupby_var <- enquo(groupby_var) measure_var <- enquo(measure_var) ret <- mtcars %>% group_by(!!groupby_var) %>% summarize(min = min(!!measure_var), max = max(!!measure_var)) %>% # Optional: as_label() and "walrus operator" := mutate( measure_var = as_label(measure_var), !!measure_var := "test" ) return(ret) } car_stats(am,hp) ## # A tibble: 2 x 5 ## am min max measure_var hp ## <dbl> <dbl> <dbl> <chr> <chr> ## 1 0 62 245 hp test ## 2 1 52 335 hp test car_stats(gear,cyl) ## # A tibble: 3 x 5 ## gear min max measure_var cyl ## <dbl> <dbl> <dbl> <chr> <chr> ## 1 3 4 8 cyl test ## 2 4 4 6 cyl test ## 3 5 4 8 cyl test ``` We now have a flexible function that contains a dplyr workflow. You can experiment with modifying this function for your own purposes. We can use this approach also for plotting the data with ggplot: ```r scatter_plot <- function(data, x_var, y_var) { x_var <- enquo(x_var) y_var <- enquo(y_var) ret <- data %>% ggplot(aes(x = !!x_var, y = !!y_var)) + geom_point() + geom_smooth() + ggtitle(str_c(as_label(y_var), " vs. ",as_label(x_var))) return(ret) } scatter_plot(mtcars, disp, hp) ``` As you can see, you’ve plotted the `hp` (horsepower) variable against `disp` (displacement) and added a regression line. Now, instead of copying and pasting ggplot code to create the same plot with different datasets and variables, we can just call our function. We will see application of this framework in the business case. ## <i class="fas fa-user-tie"></i>&nbsp;Business case **Business problem** Let's begin this business case by introducing you the employee attrition problem. Attrition is a problem that impacts all businesses, irrespective of geography, industry and size of the company. Employee attrition leads to significant costs for a business, including the cost of business disruption, hiring new staff and training new staff. As such, there is great business interest in understanding the drivers of, and minimizing staff attrition. In this context, the use of classification models to predict if an employee is likely to quit could greatly increase the HR’s ability to intervene on time and remedy the situation to prevent attrition. While this model can be routinely run to identify employees who are most likely to quit, the key driver of success would be the human element of reaching out the employee, understanding the current situation of the employee and taking action to remedy controllable factors that can prevent attrition of the employee. The following data set presents an employee survey from IBM, indicating if there is attrition or not. The data set contains approximately 1500 entries. Given the limited size of the data set, the model should only be expected to provide modest improvement in indentification of attrition vs a random allocation of probability of attrition. 
While some level of attrition in a company is inevitable, minimizing it and being prepared for the cases that cannot be helped will significantly help improve the operations of most businesses. As a future development, with a sufficiently large data set, it would be used to run a segmentation on employees, to develop certain “at risk” categories of employees. This could generate new insights for the business on what drives attrition, insights that cannot be generated by merely informational interviews with employees.[^1] [^1]: Rohan Jain, Ali Shahid, Sehrish Saud, Julian Ramirez **The True Cost Of Employee Attrition** Core to analyzing any business problem is being able to tie a financial figure to it. You will see how the process works specifically for employee turnover, and even get an Excel Calculator that can be sent to your boss and your bosses boss to communicate the size of the problem financially. An organization that loses 200 productive employees per year could have a hidden cost of $15M/year in lost productivity. And the problem... most organizations don't realize it because productivity is a hidden cost! Many business owners have the belief that you turn on a tap and a staff member is automatically profitable. Sadly, this is not the case. In all business, new staff have so much to learn, understand and integrate that the first 3 months are usually a negative on the business. This is because you have to take some of your productive time (if you don’t have other staff to train them) to get them up to speed. After this time, they become a truly productive member of your team and fly forward to give you the success you want. {{< figure src="/img/courses/dat_sci/09/employee_attrition.png">}} A Excel Employee Turnover Cost Calculator is a great way to show others in your organization the true cost of losing good employees. It's simple to use and most business professionals have Excel, so you can easily review your organization's cost of turnover with them. You will find plenty of employee turnover cost calculator templates by using google. {{< figure src="/img/courses/dat_sci/09/excel_sheet.png" width="75%">}} <div id="header">Download</div> <div id="container"> <div id="first">{{% icon download %}}</div> <div id="second"><a href="https://github.com/TUHHStartupEngineers/dat_sci_ss20/raw/master/09/employee_turnover_cost_calc.xlsx" target="_blank"><b>employee_turnover_cost_calc.xlsx</b></a></div> <div id="clear"></div> </div> **CRISP Data Science framework** We will organize and execute this analysis project following a data analysis process called `CRISP-DM`. According to Wikipedia: CRISP-DM is a Cross-Industry Standard Process for Data Mining — an open standard process model that describes common approaches used by data mining experts. {{< figure src="/img/courses/dat_sci/09/CRISP.png" width="75%" caption="CRISP-DM Process Diagram">}} More information can be found here: https://www.the-modeling-agency.com/crisp-dm.pdf Following CRISP-DM guidelines, we start with a Business Understanding. It is an astoundingly common mistake to start projects without first properly defining the problem and objectives. This mistake is not specific to data analysis but is common to all types of problem-solving activities. As a result, all major problem-solving methodologies, including 8-D, six-sigma DMAIC and, of course, CRISP-DM, place first and stress the importance of Problem Definition or Business Understanding. 
In the end, we want to use h2o to determine the probability of a certain employee to fall into the condition of Attrition and thus its high risk of leaving the company. Before we are able to do that we need a profound understanding of the business and the data. **1. Business Understanding** * Determine Business objective * Assess Situation * Determine Data Mining Goals * Produce Project Plan IBM has gathered information on employee satisfaction, income, seniority and some demographics. It includes the data of 1470 employees. It can be found on [kaggle](https://www.kaggle.com/pavansubhasht/ibm-hr-analytics-attrition-dataset) or just download it here: <div id="header">Download</div> <div id="container"> <div id="first">{{% icon download %}}</div> <div id="second"><a href="https://github.com/TUHHStartupEngineers/dat_sci_ss20/raw/master/09/datasets-1067-1925-WA_Fn-UseC_-HR-Employee-Attrition.csv" target="_blank"><b>datasets-1067-1925-WA_Fn-UseC_-HR-Employee-Attrition.csv</b></a></div> <div id="clear"></div> </div> The definition table will be needed later as well: <div id="header">Download</div> <div id="container"> <div id="first">{{% icon download %}}</div> <div id="second"><a href="https://github.com/TUHHStartupEngineers/dat_sci_ss20/raw/master/09/data_definitions.xlsx" target="_blank"><b>data_definitions.xlsx</b></a></div> <div id="clear"></div> </div> | Name | Description | | --- | --- | | AGE Numerical Value | | ATTRITION | Employee leaving the company (0=no, 1=yes) | | BUSINESS TRAVEL | (1=No Travel, 2=Travel Frequently, 3=Tavel Rarely) | | DAILY RATE | Numerical Value - Salary Level | | DEPARTMENT | (1=HR, 2=R&D, 3=Sales) | | DISTANCE FROM HOME | Numerical Value - THE DISTANCE FROM WORK TO HOME | | EDUCATION | Numerical Value | | EDUCATION FIELD | (1=HR, 2=LIFE SCIENCES, 3=MARKETING, 4=MEDICAL SCIENCES, 5=OTHERS, 6= TEHCNICAL) | | EMPLOYEE COUNT | Numerical Value | | EMPLOYEE NUMBER | Numerical Value - EMPLOYEE ID | | ENVIROMENT SATISFACTION | Numerical Value - SATISFACTION WITH THE ENVIROMENT | | GENDER | (1=FEMALE, 2=MALE) | | HOURLY RATE | Numerical Value - HOURLY SALARY | | JOB INVOLVEMENT | Numerical Value - JOB INVOLVEMENT | | JOB LEVEL | Numerical Value - LEVEL OF JOB | | JOB ROLE | (1=HC REP, 2=HR, 3=LAB TECHNICIAN, 4=MANAGER, 5= MANAGING DIRECTOR, 6= REASEARCH DIRECTOR, 7= RESEARCH SCIENTIST, 8=SALES EXECUTIEVE, 9= SALES REPRESENTATIVE) | | JOB SATISFACTION | Numerical Value - SATISFACTION WITH THE JOB | | MARITAL STATUS | (1=DIVORCED, 2=MARRIED, 3=SINGLE) | | MONTHLY INCOME | Numerical Value - MONTHLY SALARY | | MONTHY RATE | Numerical Value - MONTHY RATE | | NUMCOMPANIES WORKED | Numerical Value - NO. 
OF COMPANIES WORKED AT | | OVER 18 | (1=YES, 2=NO) | | OVERTIME | (1=NO, 2=YES) | | PERCENT SALARY HIKE | Numerical Value - PERCENTAGE INCREASE IN SALARY | | PERFORMANCE RATING | Numerical Value - ERFORMANCE RATING | | RELATIONS SATISFACTION | Numerical Value - RELATIONS SATISFACTION | | STANDARD HOURS | Numerical Value - STANDARD HOURS | | STOCK OPTIONS LEVEL | Numerical Value - STOCK OPTIONS | | TOTAL WORKING YEARS | Numerical Value - TOTAL YEARS WORKED | | TRAINING TIMES LAST YEAR | Numerical Value - HOURS SPENT TRAINING | | WORK LIFE BALANCE | Numerical Value - TIME SPENT BEWTWEEN WORK AND OUTSIDE | | YEARS AT COMPANY | Numerical Value - TOTAL NUMBER OF YEARS AT THE COMPNAY | | YEARS IN CURRENT ROLE | Numerical Value -YEARS IN CURRENT ROLE | | YEARS SINCE LAST PROMOTION | Numerical Value - LAST PROMOTION | | YEARS WITH CURRENT MANAGER | Numerical Value - YEARS SPENT WITH CURRENT MANAGER | ```r # Load data employee_attrition_tbl <- read_csv("datasets-1067-1925-WA_Fn-UseC_-HR-Employee-Attrition.csv") ``` Let's start with creating a subset and analyzing the attrition in terms of Department and Job Roles. Keep in mind that the overall objective is to retain high performers: ```r # Business & Data Understanding: Department and Job Role # Data subset dept_job_role_tbl <- employee_attrition_tbl %>% select(EmployeeNumber, Department, JobRole, PerformanceRating, Attrition) dept_job_role_tbl %>% group_by(Attrition) %>% summarize(n = n()) %>% ungroup() %>% mutate(pct = n / sum(n)) ## # A tibble: 2 x 3 ## Attrition n pct ## <chr> <int> <dbl> ## 1 No 1233 0.839 ## 2 Yes 237 0.161 ``` We have around 16 % Attrition. We need to find out whether or not that is a bad thing. In our case we want to examine the drivers: * Investigate objectives: 16 % Attrition * Synthesize outcomes: High Counts and High percentages * Hypothesize drivers: Job Role and Departments We have different departments and different Job roles. These are common cohorts (Group within a population that often has specific sub-population trends). We need to investigate the counts and percents of attrition within each cohort. ```r # Attrition by department dept_job_role_tbl %>% # Block 1 group_by(Department, Attrition) %>% summarize(n = n()) %>% ungroup() %>% # Block 2: Caution: It's easy to inadvertently miss grouping when creating counts & percents within groups group_by(Department) %>% mutate(pct = n / sum(n)) ## # A tibble: 6 x 4 ## # Groups: Department [3] ## Department Attrition n pct ## <chr> <chr> <int> <dbl> ## 1 Human Resources No 51 0.810 ## 2 Human Resources Yes 12 0.190 ## 3 Research & Development No 828 0.862 ## 4 Research & Development Yes 133 0.138 ## 5 Sales No 354 0.794 ## 6 Sales Yes 92 0.206 ``` There might be something going on by department. Next thing is Attrition by job role. 
```r # Attrition by job role dept_job_role_tbl %>% # Block 1 group_by(Department, JobRole, Attrition) %>% summarize(n = n()) %>% ungroup() %>% # Block 2 group_by(Department, JobRole) %>% mutate(pct = n / sum(n)) %>% ungroup() %>% # Block 3 filter(Attrition %in% "Yes") ## # A tibble: 10 x 5 ## Department JobRole Attrition n pct ## <chr> <chr> <chr> <int> <dbl> ## 1 Human Resources Human Resources Yes 12 0.231 ## 2 Research & Development Healthcare Representative Yes 9 0.0687 ## 3 Research & Development Laboratory Technician Yes 62 0.239 ## 4 Research & Development Manager Yes 3 0.0556 ## 5 Research & Development Manufacturing Director Yes 10 0.0690 ## 6 Research & Development Research Director Yes 2 0.025 ## 7 Research & Development Research Scientist Yes 47 0.161 ## 8 Sales Manager Yes 2 0.0541 ## 9 Sales Sales Executive Yes 57 0.175 ## 10 Sales Sales Representative Yes 33 0.398 ``` Sales Representatives and Laboratory Technicians stand out. After determining the drivers, we need to measure the drivers by developing Key Performance Indicators (KPIs). A KPI is a metric that is developed to monitor critical performance measures within an organization, such as those related to cost, quality, lead time and so on. Typically these relate directly to the business needs of satisfied customers and profitability. A KPI is always your organization's goal for how the business should run. KPIs are usually based on data, which can be external (industry data) or internal (customer feedback). We develop a KPI by collecting information on employee attrition. We will use industry KPIs as a benchmark. Just google some, but keep in mind that they may vary, as they are not static values. Let's use 8.8 % as a comparison. 8.8 % may be conservative compared to the Bureau of Labor Statistics. For our purposes, consider this a conservative KPI that indicates a major problem if exceeded. Let's see which Departments/Job Roles have attrition above/below the industry average. Just add the `mutate()` and `case_when()` functions to the last piece of code. ```r # Develop KPI dept_job_role_tbl %>% # Block 1 group_by(Department, JobRole, Attrition) %>% summarize(n = n()) %>% ungroup() %>% # Block 2 group_by(Department, JobRole) %>% mutate(pct = n / sum(n)) %>% ungroup() %>% # Block 3 filter(Attrition %in% "Yes") %>% arrange(desc(pct)) %>% mutate( above_industry_avg = case_when( pct > 0.088 ~ "Yes", TRUE ~ "No" ) ) ``` Now that we know which specific Job Roles are above the industry average, we need to uncover the problems and opportunities. How much is turnover costing the organization? Look at the Excel sheet above and convert it to a function in R that calculates the attrition cost. Assign default values for all input values.
**Solution** <section class="hide"> <pre><code class="r"># Function to calculate attrition cost calculate_attrition_cost <- function(</br> # Employee n = 1, salary = 80000,</br> # Direct Costs separation_cost = 500, vacancy_cost = 10000, acquisition_cost = 4900, placement_cost = 3500,</br> # Productivity Costs net_revenue_per_employee = 250000, workdays_per_year = 240, workdays_position_open = 40, workdays_onboarding = 60, onboarding_efficiency = 0.50</br> ) {</br> # Direct Costs direct_cost <- sum(separation_cost, vacancy_cost, acquisition_cost, placement_cost)</br> # Lost Productivity Costs productivity_cost <- net_revenue_per_employee / workdays_per_year * (workdays_position_open + workdays_onboarding * onboarding_efficiency)</br> # Savings of Salary & Benefits (Cost Reduction) salary_benefit_reduction <- salary / workdays_per_year * workdays_position_open</br> # Estimated Turnover Per Employee cost_per_employee <- direct_cost + productivity_cost - salary_benefit_reduction</br> # Total Cost of Employee Turnover total_cost <- n * cost_per_employee</br> return(total_cost)</br> }</br> calculate_attrition_cost() ## [1] 78483.33 calculate_attrition_cost(200) ## [1] 15696667</code></pre> </section> *** Now add this newly created function with a `mutate()` function to our code above to calculate the attrition for each Department/Job Role. Except for the argument `n`, use the default arguments. We can leave the salary at 80000 for now: **Solution** <section class="hide"> <pre><code class="r">dept_job_role_tbl %>%</br> # Block 1 group_by(Department, JobRole, Attrition) %>% summarize(n = n()) %>% ungroup() %>%</br> # Block 2 group_by(Department, JobRole) %>% mutate(pct = n / sum(n)) %>% ungroup() %>%</br> # Block 3 filter(Attrition %in% "Yes") %>% arrange(desc(pct)) %>% mutate( above_industry_avg = case_when( pct > 0.088 ~ "Yes", TRUE ~ "No" ) ) %>%</br> # Block 4. Set salaray to 80000 for now mutate( cost_of_attrition = calculate_attrition_cost(n = n, salary = 80000) )</code></pre> </section> *** You see that cost can be high even if the percentage is not that high. Let's optimize our workflow and streamline our code further. The first block can be replaced with the `count()` function: ```r # Instead of dept_job_role_tbl %>% group_by(Department, JobRole, Attrition) %>% summarize(n = n()) ``` **Solution** <section class="hide"> <pre><code class="r"># Use this dept_job_role_tbl %>%</br> count(Department, JobRole, Attrition)</code></pre> </section> *** The second block can be transformed in a rather generalized function, which accepts different number of grouping variables as arguments. Remember the tidy eval framework (`quos()` and `enquos()`) The dots (...) enable passing multiple, un-named arguments to a function. Because the dots are not preselected, the user can flexibly add variables and the function will adapt! The first arugment of "tidy" functions is always data, so that we can chain everything together. ```r #Instead of group_by(Department, JobRole) %>% mutate(pct = n / sum(n)) ``` ```r # Use this # Function to convert counts to percentages. count_to_pct <- function(data, ..., col = n) { # capture the dots grouping_vars_expr <- quos(...) col_expr <- enquo(col) ret <- data %>% group_by(!!! grouping_vars_expr) %>% mutate(pct = (!! col_expr) / sum(!! 
col_expr)) %>% ungroup() return(ret) } # This is way shorter and more flexible dept_job_role_tbl %>% count(JobRole, Attrition) %>% count_to_pct(JobRole) dept_job_role_tbl %>% count(Department, JobRole, Attrition) %>% count_to_pct(Department, JobRole) ``` Let's write a function for the third block to assess attrition versus a baseline. Remember, `enquo()` captures a column name as an expression and `!!` evaluates the expression inside a dplyr function. We need the Attrition column, the attrition value ("yes" or "no") and the baseline value as arguments: ```r assess_attrition <- function(data, attrition_col, attrition_value, baseline_pct) { attrition_col_expr <- enquo(attrition_col) data %>% # Use parentheses () to give tidy eval evaluation priority filter((!! attrition_col_expr) %in% attrition_value) %>% arrange(desc(pct)) %>% mutate( # Function inputs in numeric format (e.g. baseline_pct = 0.088) don't require tidy eval above_industry_avg = case_when( pct > baseline_pct ~ "Yes", TRUE ~ "No" ) ) } ``` Altogether, it looks like this (you can put the functions in a separate R file and load them into your environment with the `source()` function). With our new code it is very easy to assess the attrition for multiple grouping variables or just for one. ```r source("assess_attrition.R") dept_job_role_tbl %>% count(Department, JobRole, Attrition) %>% count_to_pct(Department, JobRole) %>% assess_attrition(Attrition, attrition_value = "Yes", baseline_pct = 0.088) %>% mutate( cost_of_attrition = calculate_attrition_cost(n = n, salary = 80000) ) ``` Compare with our original code: <section class="hide"> <pre><code class="r">dept_job_role_tbl %>%</br> group_by(Department, JobRole, Attrition) %>% summarize(n = n()) %>% ungroup() %>%</br> group_by(Department, JobRole) %>% mutate(pct = n / sum(n)) %>% ungroup() %>%</br> filter(Attrition %in% "Yes") %>% arrange(desc(pct)) %>% mutate( above_industry_avg = case_when( pct > 0.088 ~ "Yes", TRUE ~ "No" ) ) %>%</br> mutate( cost_of_attrition = calculate_attrition_cost(n = n, salary = 80000) )</code></pre> </section> *** **Visualizing** The last step is visualizing the cost of attrition to communicate the data insights to your stakeholders and to convince them to act upon it. Using a combination of `geom_segment()` and `geom_point()` is a good way of doing it. Some info beforehand: * for ggplot2 visualizations, factors are used to order categorical variables (e.g. non-numeric axis) + factors are numeric (e.g. 1,2,3, ...) with labels that are printed (e.g.
"Small", "Medium", "Large") + because they are numeric, they are easily reordered and the ordering can be modified by changing the hidden numeric value + `fct_reorder` reorders a factors numeric values by the magnitude of a different numeric variable * `str_c()` combines multiple strings into one specifying a sep argument as the spearating character * `format()` formats a numeric value specifying the number of decimal places using the digits argument * `scale_size()` adjusts the max and minimum size of elements to prevent large/small values from becoming too large or too small ```r dept_job_role_tbl %>% count(Department, JobRole, Attrition) %>% count_to_pct(Department, JobRole) %>% assess_attrition(Attrition, attrition_value = "Yes", baseline_pct = 0.088) %>% mutate( cost_of_attrition = calculate_attrition_cost(n = n, salary = 80000) ) %>% # Data Manipulation mutate(name = str_c(Department, JobRole, sep = ": ") %>% as_factor()) %>% # Check levels # pull(name) %>% # levels() mutate(name = fct_reorder(name, cost_of_attrition)) %>% mutate(cost_text = str_c("$", format(cost_of_attrition / 1e6, digits = 2), "M", sep = "")) %>% #Plotting ggplot(aes(cost_of_attrition, y = name)) + geom_segment(aes(xend = 0, yend = name), color = "#2dc6d6") + geom_point( aes(size = cost_of_attrition), color = "#2dc6d6") + scale_x_continuous(labels = scales::dollar) + geom_label(aes(label = cost_text, size = cost_of_attrition), hjust = "inward", color = "#2dc6d6") + scale_size(range = c(3, 5)) + labs(title = "Estimated cost of Attrition: By Dept and Job Role", y = "", x = "Cost of attrition") + theme(legend.position = "none") ``` {{< figure src="/img/courses/dat_sci/09/attrition_plot.png">}} Let's see how we can do it the tidy eval way by creating a function: Some infos beforehand: * `rlang::sym()` turns a single character string into an expression (e.g. a column name). The expression is typically captured in `enquo()` or `quos()` to delay evaluation. ```r # This will return a quoted result colnames(dept_job_role_tbl)[[1]] ## "EmployeeNumber" # This will become an unquoted expression rlang::sym(colnames(dept_job_role_tbl)[[1]]) ## EmployeeNumber # quos() captures it and turns it into a quosure, which is a list # Will be evaluated at the time we use the double !! later on in the code. # Then it will turn it into EmployeeNumber quos(rlang::sym(colnames(employee_attrition_tbl)[[1]])) ## <list_of<quosure>> ## ## [[1]] ## <quosure> ## expr: ^rlang::sym(colnames(employee_attrition_tbl)[[1]]) ## env: global # If the user supplies two different columns such as Department and Job Role # or if the user does not supply a column the length will be different quos(Department, JobRole) quos(Department, JobRole) %>% length() ## 2 quos() %>% length ## 0 ``` * `switch()` takes an argument, and based on that argument value will change the return following a predefined logic. Similar to a nested series of if-statements. * `!!!` evaluates the expression contained within a multiple quosure. See `?quos()` function. * `arrange()` is used because, `fct_reorder()` does not actually sort the data frame. arrange() does the sorting. ```r # Function to plot attrition plot_attrition <- function(data, ..., .value, fct_reorder = TRUE, fct_rev = FALSE, include_lbl = TRUE, color = "#2dc6d6", units = c("0", "K", "M")) { ### Inputs group_vars_expr <- quos(...) 
# If the user does not supply anything, # this takes the first column of the supplied data if (length(group_vars_expr) == 0) { group_vars_expr <- quos(rlang::sym(colnames(data)[[1]])) } value_expr <- enquo(.value) units_val <- switch(units[[1]], "M" = 1e6, "K" = 1e3, "0" = 1) if (units[[1]] == "0") units <- "" # Data Manipulation # This is a so-called Function Factory (a function that produces a function) usd <- scales::dollar_format(prefix = "$", largest_with_cents = 1e3) # Create the axis labels and values for the plot data_manipulated <- data %>% mutate(name = str_c(!!! group_vars_expr, sep = ": ") %>% as_factor()) %>% mutate(value_text = str_c(usd(!! value_expr / units_val), units[[1]], sep = "")) # Order the labels on the y-axis according to the input if (fct_reorder) { data_manipulated <- data_manipulated %>% mutate(name = forcats::fct_reorder(name, !! value_expr)) %>% arrange(name) } if (fct_rev) { data_manipulated <- data_manipulated %>% mutate(name = forcats::fct_rev(name)) %>% arrange(name) } # Visualization g <- data_manipulated %>% # "name" is a column name generated by our function internally as part of the data manipulation task ggplot(aes(x = (!! value_expr), y = name)) + geom_segment(aes(xend = 0, yend = name), color = color) + geom_point(aes(size = !! value_expr), color = color) + scale_x_continuous(labels = scales::dollar) + scale_size(range = c(3, 5)) + theme(legend.position = "none") # Plot labels if TRUE if (include_lbl) { g <- g + geom_label(aes(label = value_text, size = !! value_expr), hjust = "inward", color = color) } return(g) } ``` The final result looks like this. Now you can easily change the grouping variables and directly get a new plot: ```r dept_job_role_tbl %>% # Select columns count(Department, JobRole, Attrition) %>% count_to_pct(Department, JobRole) %>% assess_attrition(Attrition, attrition_value = "Yes", baseline_pct = 0.088) %>% mutate( cost_of_attrition = calculate_attrition_cost(n = n, salary = 80000) ) %>% # Select columns plot_attrition(Department, JobRole, .value = cost_of_attrition, units = "M") + labs( title = "Estimated Cost of Attrition by Job Role", x = "Cost of Attrition", subtitle = "Looks like Sales Executive and Laboratory Technician are the biggest drivers of cost" ) ``` Don't worry if that seems complicated. For now just try to follow the steps. **2. Data Understanding** * Collect Initial Data * Describe Data * Explore Data * Verify Data Quality In this section, we take a deeper dive into the HR data as we cover the next CRISP-DM step: we get to know the data and begin the process of preparing for modeling. In this section, you will learn: * Techniques for effectively analyzing the features in the dataset * How to use the skimr package for data investigation by data type * How to use the GGally package for visual data investigation `skimr` is a great tool for feature exploration by data type. `GGally` is a great tool for visualizing feature interaction relationships using `ggpairs()`.
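Before diving into the data, it is worth pausing on the tidy eval pattern that `count_to_pct()`, `assess_attrition()` and `plot_attrition()` above all rely on. The following is a minimal, self-contained sketch of that pattern only; the function name `summarise_by` is made up for this illustration and is not part of the course code:

```r
library(dplyr)

# ... captures any number of grouping columns, .col captures a single column
summarise_by <- function(data, ..., .col) {

  group_vars_expr <- quos(...)   # capture the dots as a list of quosures
  col_expr        <- enquo(.col) # capture a single column as a quosure

  data %>%
    group_by(!!! group_vars_expr) %>%   # splice the grouping columns back in
    summarise(total = sum(!! col_expr)) # unquote the single column
}

# Works with any number of grouping variables, e.g. on a built-in dataset
mtcars %>% summarise_by(cyl, gear, .col = mpg)
```

The same mechanics (capture with `quos()`/`enquo()`, unquote with `!!!`/`!!`) are exactly what the course functions above use, just with more bells and whistles.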
```r # Libraries library(tidyverse) library(readxl) library(skimr) library(GGally) # Load Data: data definitions path_data_definitions <- "00_Data/data_definitions.xlsx" definitions_raw_tbl <- read_excel(path_data_definitions, sheet = 1, col_names = FALSE) employee_attrition_tbl ``` For our measurement we break down the data collection activities into strategic areas: ```r # Descriptive Features employee_attrition_tbl %>% select(Age, DistanceFromHome, Gender, MaritalStatus, NumCompaniesWorked, Over18) # Employment Features employee_attrition_tbl %>% select(Department, EmployeeCount, EmployeeNumber, JobInvolvement, JobLevel, JobRole, JobSatisfaction) # Compensation Features employee_attrition_tbl %>% select(DailyRate, HourlyRate, MonthlyIncome, MonthlyRate, PercentSalaryHike, StockOptionLevel) # Survey Results employee_attrition_tbl %>% select(EnvironmentSatisfaction, JobSatisfaction, RelationshipSatisfaction, WorkLifeBalance) # Performance Data employee_attrition_tbl %>% select(JobInvolvement, PerformanceRating) # Work-Life Features employee_attrition_tbl %>% select(BusinessTravel, OverTime) # Training & Education employee_attrition_tbl %>% select(Education, EducationField, TrainingTimesLastYear) # Time-Based Features employee_attrition_tbl %>% select(TotalWorkingYears, YearsAtCompany, YearsInCurrentRole, YearsSinceLastPromotion, YearsWithCurrManager) ``` **Exploratory data analysis (EDA): Part 1** Data summarization with skimr: `skim()` returns a summary by data type. This includes missing values and the number of unique features for categorical data. For numeric data, it returns the histogram and quantiles. Separating your data by data type (e.g. numeric vs. categorical) is a great way to investigate properties of the data. Character data (or factor) is typically categorical. Numerical data can be analyzed by its distribution (mean, std dev & quantiles). Histograms are extremely useful for analyzing numeric data (outliers, skew, should it be a factor? and more). If, for example, the number of unique categorical features (n_unique) is large, consider creating an "other" category. A categorical feature that has only one unique level offers no value to modeling (see Over18). Some info beforehand: * `select_if()` selects only columns matching a function. Typically used with a data type selector function. For example, it can select only character columns by passing the function `is.character()`. * `map()` iterates over a list. When used with a data frame (or tibble), it iterates over the columns. When used with a data frame inside a mutate(), it iterates over rows. * `Anonymous tidy functions` are functions that are not pre-defined (hence anonymous). They begin with the tilde (`~`) and take the dot (`.`) as the argument. * `table()` converts factor or character data (categorical data) into counts. Can take single or multiple categorical inputs and cross-tabulate counts. * `prop.table()` modifies the output of `table()` to proportions. Some numeric features may not be continuous, and are actually categorical. These are called discrete features because they have defined levels even though they are stored as numeric. They typically should be converted to categorical (i.e. factor) data types. * `map_df()` works like `map()` except it attempts to convert the list output to a data frame.
* `pivot_longer()` collects columns in wide format and converts them to long format with the column names as variables ```r # Step 1: Data Summarization ----- skim(employee_attrition_tbl) # Character Data Type employee_attrition_tbl %>% select_if(is.character) %>% glimpse() # Get "levels" employee_attrition_tbl %>% select_if(is.character) %>% map(unique) # Proportions employee_attrition_tbl %>% select_if(is.character) %>% map(~ table(.) %>% prop.table()) # Numeric Data employee_attrition_tbl %>% select_if(is.numeric) %>% map(~ unique(.) %>% length()) employee_attrition_tbl %>% select_if(is.numeric) %>% map_df(~ unique(.) %>% length()) %>% # Select all columns pivot_longer(everything()) %>% arrange(value) %>% filter(value <= 10) ``` Remember: Variables with only one level are non-essential variables (also called zero-variance features). These features are not useful for modeling. Info: Numeric variables that are lower in levels are likely to be discrete, and numeric variables that are higher in levels are likely to be continuous. **Exploratory data analysis (EDA): Part 2** In this section we will explore our data visually with [`GGally`](https://ggobi.github.io/ggally/). Click on GGally to get further information. The function `ggpairs()` contains three sections that compare features: (1) diagonal, (2) lower triangle, (3) upper triangle * (1) diagonal: contains + density for continuous + counts/proportions as bars for discrete * (2) lower triangle: contains + histogram for numeric-categorical pairs + scatter for numeric-numeric pairs + bars for categorical-categorical pairs * (3) upper triangle: contains + box-plot for numeric-categorical pairs + correlation value for numeric-numeric pairs + bars for categorical-categorical pairs ```r library(GGally) # Step 2: Data Visualization ---- employee_attrition_tbl %>% select(Attrition, Age, Gender, MaritalStatus, NumCompaniesWorked, Over18, DistanceFromHome) %>% ggpairs() ``` {{< figure src="/img/courses/dat_sci/09/ggpairs.png">}} Take a look at the data. There are already a lot of insights.
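Before moving on to the customized plots, the zero-variance warning above can also be checked programmatically. The following small sketch builds on the `map_df()`/`pivot_longer()` code from Step 1 and only keeps features with a single unique value (it assumes `employee_attrition_tbl` is loaded as above):

```r
# Flag zero-variance features, i.e. columns with exactly one unique value (e.g. Over18)
employee_attrition_tbl %>%
  map_df(~ length(unique(.))) %>%
  pivot_longer(everything(), names_to = "feature", values_to = "n_levels") %>%
  filter(n_levels == 1)
```

Any feature returned here can usually be dropped before modeling, since it cannot help discriminate between employees who stay and those who leave.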
Let's customize the `ggpairs()` function to make it more meaningful: ```r employee_attrition_tbl %>% select(Attrition, Age, Gender, MaritalStatus, NumCompaniesWorked, Over18, DistanceFromHome) %>% ggpairs(aes(color = Attrition), lower = "blank", legend = 1, diag = list(continuous = wrap("densityDiag", alpha = 0.5))) + theme(legend.position = "bottom") ``` {{< figure src="/img/courses/dat_sci/09/ggpairs_2.png">}} Let's take that one step further and create a custom plotting function: * `rlang::quo_is_null()` returns TRUE if the quosure contains a NULL value ```r # Create data tibble, to potentially debug the plot_ggpairs function (because it has a data argument) data <- employee_attrition_tbl %>% select(Attrition, Age, Gender, MaritalStatus, NumCompaniesWorked, Over18, DistanceFromHome) plot_ggpairs <- function(data, color = NULL, density_alpha = 0.5) { color_expr <- enquo(color) if (rlang::quo_is_null(color_expr)) { g <- data %>% ggpairs(lower = "blank") } else { color_name <- quo_name(color_expr) g <- data %>% ggpairs(mapping = aes_string(color = color_name), lower = "blank", legend = 1, diag = list(continuous = wrap("densityDiag", alpha = density_alpha))) + theme(legend.position = "bottom") } return(g) } employee_attrition_tbl %>% select(Attrition, Age, Gender, MaritalStatus, NumCompaniesWorked, Over18, DistanceFromHome) %>% plot_ggpairs(color = Attrition) ``` Visual data analysis with GGally: let's explore several of the feature categories with the function we've just created. These will be needed to answer the questions in the challenge. * `contains()`, `starts_with()`, `ends_with()`: Tidy select functions that enable selecting features by text matching. ```r # Explore Features by Category # 1. Descriptive features: age, gender, marital status employee_attrition_tbl %>% select(Attrition, Age, Gender, MaritalStatus, NumCompaniesWorked, Over18, DistanceFromHome) %>% plot_ggpairs(Attrition) # 2. Employment features: department, job role, job level employee_attrition_tbl %>% select(Attrition, contains("employee"), contains("department"), contains("job")) %>% plot_ggpairs(Attrition) # 3. Compensation features: HourlyRate, MonthlyIncome, StockOptionLevel employee_attrition_tbl %>% select(Attrition, contains("income"), contains("rate"), contains("salary"), contains("stock")) %>% plot_ggpairs(Attrition) # 4. Survey Results: Satisfaction level, WorkLifeBalance employee_attrition_tbl %>% select(Attrition, contains("satisfaction"), contains("life")) %>% plot_ggpairs(Attrition) # 5. Performance Data: Job Involvement, Performance Rating employee_attrition_tbl %>% select(Attrition, contains("performance"), contains("involvement")) %>% plot_ggpairs(Attrition) # 6. Work-Life Features employee_attrition_tbl %>% select(Attrition, contains("overtime"), contains("travel")) %>% plot_ggpairs(Attrition) # 7. Training and Education employee_attrition_tbl %>% select(Attrition, contains("training"), contains("education")) %>% plot_ggpairs(Attrition) # 8. Time-Based Features: Years at company, years in current role employee_attrition_tbl %>% select(Attrition, contains("years")) %>% plot_ggpairs(Attrition) ``` ## <i class="fas fa-laptop-code"></i> &nbsp;Challenge Use your learning from the descriptive features and `plot_ggpairs()` to further investigate the features. Run the functions above according to the features needed. Answer the following questions. Most of the time, you will only need the images from the diagonal. *1.
Compensation Features* What can you deduce about the interaction between Monthly Income and Attrition? <ol type="a"> <li>Those that are leaving the company have a higher Monthly Income</li> <li>That those are staying have a lower Monthly Income</li> <li>Those that are leaving have a lower Monthly Income</li> <li>It's difficult to deduce anything based on the visualization</li> </ol> *** *2. Compensation Features* What can you deduce about the interaction between Percent Salary Hike and Attrition? <ol type="a"> <li>Those that are leaving the company have a higher Percent Salary Hike</li> <li>Those that are staying have a lower Percent Salary Hike</li> <li>Those that are leaving have lower Percent Salary Hike</li> <li>It's difficult to deduce anything based on the visualization</li> </ol> *** *3. Compensation Features* What can you deduce about the interaction between Stock Option Level and Attrition? <ol type="a"> <li>Those that are leaving the company have a higher stock option level</li> <li>Those that are staying have a higher stock option level</li> <li>It's difficult to deduce anything based on the visualization</li> </ol> *** *4. Survey Results* What can you deduce about the interaction between Environment Satisfaction and Attrition? <ol type="a"> <li>A higher proportion of those leaving have a low environment satisfaction level</li> <li>A higher proportion of those leaving have a high environment satisfaction level</li> <li>It's difficult to deduce anything based on the visualization</li> </ol> *** *5. Survey Results* What can you deduce about the interaction between Work Life Balance and Attrition <ol type="a"> <li>Those that are leaving have higher density of 2's and 3's</li> <li>Those that are staying have a higher density of 2's and 3's</li> <li>Those that are staying have a lower density of 2's and 3's</li> <li>It's difficult to deduce anything based on the visualization</li> </ol> *** *6. Performance Data* What Can you deduce about the interaction between Job Involvement and Attrition? <ol type="a"> <li>Those that are leaving have a lower density of 3's and 4's</li> <li>Those that are leaving have a lower density of 1's and 2's</li> <li>Those that are staying have a lower density of 2's and 3's</li> <li>It's difficult to deduce anything based on the visualization</li> </ol> *** *7. Work-Life Features* What can you deduce about the interaction between Over Time and Attrition? <ol type="a"> <li>The proportion of those leaving that are working Over Time are high compared to those that are not leaving</li> <li>The proportion of those staying that are working Over Time are high compared to those that are not staying</li> </ol> *** *8. Training and Education* What can you deduce about the interaction between Training Times Last Year and Attrition <ol type="a"> <li>People that leave tend to have more annual trainings</li> <li>People that leave tend to have less annual trainings</li> <li>It's difficult to deduce anything based on the visualization</li> </ol> *** *9. Time-Based Features* What can you deduce about the interaction between Years At Company and Attrition <ol type="a"> <li>People that leave tend to have more working years at the company</li> <li>People that leave tend to have less working years at the company</li> <li>It's difficult to deduce anything based on the visualization</li> </ol> *** *10. Time-Based Features* What can you deduce about the interaction between Years Since Last Promotion and Attrition? 
<ol type="a"> <li>Those that are leaving have more years since last promotion than those that are staying</li> <li>Those that are leaving have fewer years since last promotion than those that are staying</li> <li>It's difficult to deduce anything based on the visualization</li> </ol>
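If a density panel alone is hard to read, a quick numeric cross-check can help you confirm an answer before committing to it. The snippet below is only a sketch for the compensation questions (it assumes `employee_attrition_tbl` is loaded as in the sections above); the same pattern works for the other feature groups:

```r
# Compare typical compensation figures for leavers vs. stayers
employee_attrition_tbl %>%
  group_by(Attrition) %>%
  summarise(
    median_monthly_income = median(MonthlyIncome),
    median_salary_hike    = median(PercentSalaryHike)
  )
```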
41.53211
1,034
0.708737
eng_Latn
0.973189
f4991a78c7e0006da6bfbce7f80fa18c14b06e0c
2,066
md
Markdown
data/blog/sprint-retro-2021-02.md
jqphu/blog
c3477c841bd26448f0f242427217c8ab20cff077
[ "MIT" ]
1
2022-02-18T23:22:07.000Z
2022-02-18T23:22:07.000Z
data/blog/sprint-retro-2021-02.md
jqphu/blog
c3477c841bd26448f0f242427217c8ab20cff077
[ "MIT" ]
4
2022-01-01T08:27:11.000Z
2022-02-17T23:44:47.000Z
data/blog/sprint-retro-2021-02.md
jqphu/blog
c3477c841bd26448f0f242427217c8ab20cff077
[ "MIT" ]
null
null
null
--- title: Sprint Retrospective - February date: 2021-02-27 18:58 layout: PostLayout tags: ['sprint', 'retro', 'okr'] --- ## Metrics * Sleep Score - 76 * Activity Score - 79 * Readiness Score - 75 * Meditation Sessions - 10 * Books read this month - 1 * Deep work blocks - half of work days? ## What Went Well * Maintaining relationships w/people in Australia and in the bay. Staying sane :) * Haven't entirely given up on tennis yet. * Adult things * Got money in order. Student loans paid so we're debt free! Finished filing some overdue tax returns. Figured out some basic 401k, backdoor, mega backdoor things. * Therapy has helped me be a little more mindful. Trying to practice being flexible mentally and understand where extremes aren't beneficial. Probably finishing it up but was an interesting experience! * Physical Therapy! * New glasses & Contacts * Promotion! * Although, that wasn't a result of the work I did during this month but rather the work I put in 6 months ago :P. * Metrics slightly higher! Trending in the right direction. * Morning and Night Routine. ## What Could Be Improved * Need to get more focused with work x2. Deep blocks. Remember to focus thinking on the long term and everything fits in naturally. * Too much work and not enough habits - as always :P. * No Mindless. ## Action items * Focus next retrospective on habits O_O ## Review ## [P0] KR: Mindless is used to reflect on OKRs and Sprints * Spend one weekend on mindless. ## [P1] KR: Read a book a month * Finished reading How to Take Smart Notes. * Somehow I finished more books than I read. ## [P1] KR: Write a post every two weeks. * 0 for now but maybe I'll get a quick one out tomorrow! ## [P1] KR: Compete in a tennis tournament * Hit ~8 sessions of tennis! Nice job. * Did ~2 sessions of yoga :( ## [P1] KR: Work less than 50 hours a week. * Started tracking morning and night stats. Still working too much in absolute numbers (I get distracted during work, so cutting down absolute numbers doesn't mean I'll do less work.)
32.793651
99
0.726041
eng_Latn
0.997011
f4994e09107994dc1503ce172aad8c7a12fbb452
13,174
md
Markdown
articles/active-directory/app-provisioning/on-premises-application-provisioning-architecture.md
MicrosoftDocs/azure-docs.es-es
f8ea6df92bacc5d91ae0a88c0342c4e1703194a1
[ "CC-BY-4.0", "MIT" ]
66
2017-07-09T03:34:12.000Z
2022-03-05T21:27:20.000Z
articles/active-directory/app-provisioning/on-premises-application-provisioning-architecture.md
MicrosoftDocs/azure-docs.es-es
f8ea6df92bacc5d91ae0a88c0342c4e1703194a1
[ "CC-BY-4.0", "MIT" ]
671
2017-06-29T16:36:35.000Z
2021-12-03T16:34:03.000Z
articles/active-directory/app-provisioning/on-premises-application-provisioning-architecture.md
MicrosoftDocs/azure-docs.es-es
f8ea6df92bacc5d91ae0a88c0342c4e1703194a1
[ "CC-BY-4.0", "MIT" ]
171
2017-07-25T06:26:46.000Z
2022-03-23T09:07:10.000Z
--- title: Arquitectura de aprovisionamiento de aplicaciones locales de Azure AD | Microsoft Docs description: Se presenta información general de la arquitectura de aprovisionamiento de aplicaciones locales. services: active-directory author: billmath manager: mtillman ms.service: active-directory ms.workload: identity ms.topic: overview ms.date: 05/28/2021 ms.subservice: hybrid ms.author: billmath ms.collection: M365-identity-device-management ms.openlocfilehash: cdd7995c50ef63b4ec88e65c949a4c098a4b9330 ms.sourcegitcommit: f6e2ea5571e35b9ed3a79a22485eba4d20ae36cc ms.translationtype: HT ms.contentlocale: es-ES ms.lasthandoff: 09/24/2021 ms.locfileid: "128609942" --- # <a name="azure-ad-on-premises-application-provisioning-architecture"></a>Arquitectura de aprovisionamiento de aplicaciones locales de Azure AD >[!IMPORTANT] > La versión preliminar de aprovisionamiento locales se encuentra actualmente en una versión preliminar solo por invitación. Para solicitar acceso a la funcionalidad, use el [formulario de solicitud de acceso](https://aka.ms/onpremprovisioningpublicpreviewaccess). La versión preliminar estará a disposición de más clientes y conectores durante los próximos meses, cuando se prepare la disponibilidad general (GA). ## <a name="overview"></a>Información general En el diagrama siguiente se muestra información general de cómo funciona el aprovisionamiento de aplicaciones locales. ![Diagrama en el que se muestra la arquitectura para el aprovisionamiento de aplicaciones locales.](.\media\on-premises-application-provisioning-architecture\arch-3.png) Existen tres componentes principales para el aprovisionamiento de usuarios en una aplicación local: - El agente de aprovisionamiento proporciona conectividad entre Azure Active Directory (Azure AD) y el entorno local. - El host ECMA convierte las solicitudes de aprovisionamiento de Azure AD en solicitudes realizadas a la aplicación de destino. Actúa como puerta de enlace entre Azure AD y la aplicación. Se puede usar para importar los conectores ECMA2 existentes que se utilizan con Microsoft Identity Manager. El host ECMA no es necesario si ha creado una aplicación SCIM o una puerta de enlace SCIM. - El servicio de aprovisionamiento de Azure AD actúa como motor de sincronización. >[!NOTE] > La sincronización de Microsoft Identity Manager no es necesaria. Pero puede usarla para crear y probar el conector ECMA antes de importarlo en el host ECMA. ### <a name="firewall-requirements"></a>Requisitos de firewall No es necesario abrir conexiones de entrada a la red corporativa. Los agentes de aprovisionamiento solo usan conexiones salientes al servicio de aprovisionamiento, lo que significa que no es necesario abrir los puertos de firewall para las conexiones entrantes. Tampoco necesita una red perimetral (DMZ), porque todas las conexiones son salientes y se realizan por medio de un canal seguro. ## <a name="ecma-connector-host-architecture"></a>Arquitectura del host del conector de ECMA El host del conector de ECMA tiene varias áreas que usa para el aprovisionamiento local. El diagrama siguiente es un dibujo conceptual que presenta estas áreas individuales. En la tabla siguiente se describen con más detalle. 
[![Host del conector de ECMA](.\media\on-premises-application-provisioning-architecture\ecma-2.png)](.\media\on-premises-application-provisioning-architecture\ecma-2.png#lightbox) |Área|Descripción| |-----|-----| |Puntos de conexión|Responsable de la comunicación y la transferencia de datos con el servicio de aprovisionamiento de Azure AD.| |Caché en memoria|Se usa para almacenar los datos importados desde el origen de datos local.| |Sincronización automática|Proporciona sincronización asincrónica de datos entre el host del conector de ECMA y el origen de datos local.| |Lógica de negocios|Se usa para coordinar todas las actividades del host del conector de ECMA. La hora de la sincronización automática se puede configurar en el host de ECMA en la página de propiedades.| ### <a name="about-anchor-attributes-and-distinguished-names"></a>Acerca de los atributos delimitadores y los nombres distintivos La siguiente información se proporciona para explicar mejor los atributos delimitadores y los nombres distintivos, que el conector de genericSQL usa especialmente. El atributo delimitador es un atributo único de un tipo de objeto que no cambia y representa ese objeto en la memoria caché en memoria del host del conector de ECMA. El nombre distintivo (DN) es un nombre que identifica de forma única un objeto indicando su ubicación actual en la jerarquía del directorio. O, en el caso de SQL, en la partición. El nombre se forma concatenando el atributo delimitador a la raíz de la partición de directorio. Cuando se piensa en nombres distintivos tradicionales en formato tradicional, por ejemplo, de Active Directory o LDAP, se piensa en algo similar a: CN=Lola Jacobson,CN=Users,DC=contoso,DC=com Sin embargo, para un origen de datos como SQL, que es plano, no jerárquico, el nombre distintivo debe estar ya presente en una de las tablas o crearse a partir de la información que proporcionamos al host del conector de ECMA. Esto se consigue al marcar **Autogenerated** (Generado automáticamente) en la casilla al configurar el conector de genericSQL. Al elegir el nombre distintivo generado automáticamente, el host de ECMA generará uno en formato LDAP: CN=&lt;valordelimitador&gt;,OBJECT=&lt;type&gt;. También se presupone que la opción DN is Anchor (DN es delimitador) **no está marcado** en la página Conectividad. [![Opción DN is Anchor (DN es delimitador) sin marcar](.\media\on-premises-application-provisioning-architecture\user-2.png)](.\media\on-premises-application-provisioning-architecture\user-2.png#lightbox) El conector de genericSQL espera que el nombre distintivo se rellene en formato LDAP. El conector de genericSQL usa el estilo de LDAP con el nombre de componente "OBJECT=". Esto le permite usar particiones (cada tipo de objeto es una partición). Como el host del conector de ECMA actualmente solo admite el tipo de objeto USER, OBJECT=&lt;type&gt; será OBJECT=USER. Por lo tanto, el nombre distintivo de un usuario con un valor delimitador ljacobson sería: CN=ljacobson,OBJECT=USER ### <a name="user-creation-workflow"></a>Flujo de trabajo de creación de usuarios 1. El servicio de aprovisionamiento de Azure AD consulta el host del conector de ECMA para ver si el usuario existe. Usa como filtro el **atributo coincidente**. Este atributo se define en el portal de Azure AD, en Aplicaciones empresariales -> On-premises provisioning (Aprovisionamiento local) -> Aprovisionamiento -> Attribute matching (Atributo coincidente). Se indica mediante el 1 para la prioridad de coincidencia. 
Puede definir uno o varios atributos coincidentes y priorizarlos. Si desea cambiar el atributo coincidente, también puede hacerlo. [![Atributo coincidente](.\media\on-premises-application-provisioning-architecture\match-1.png)](.\media\on-premises-application-provisioning-architecture\match-1.png#lightbox) 2. El host del conector de ECMA recibe la solicitud GET y consulta su caché interna para ver si el usuario existe y se ha importado. Esto se hace mediante el **atributo de consulta**. El atributo de consulta se define en la página de tipos de objeto. [![Atributo de consulta](.\media\on-premises-application-provisioning-architecture\match-2.png)](.\media\on-premises-application-provisioning-architecture\match-2.png#lightbox) 3. Si el usuario no existe, Azure AD realizará una solicitud POST para crearlo. El host del conector de ECMA responderá a Azure AD con HTTP 201 y proporcionará un identificador de usuario. Este identificador se deriva del valor delimitador definido en la página de tipos de objeto. Este delimitador lo usará Azure AD para consultar el host del conector de ECMA en solicitudes futuras. 4. Si se produce un cambio en el usuario de Azure AD, Azure AD realizará una solicitud GET para recuperar el usuario mediante el delimitador del paso anterior, en lugar del atributo coincidente del paso 1. Esto permite, por ejemplo, que el nombre principal de usuario cambie sin romper el vínculo entre el usuario de Azure AD y la aplicación. ## <a name="agent-best-practices"></a>Procedimientos recomendados del agente - Asegúrese de que el servicio Auto Update del agente de aprovisionamiento de Azure AD Connect se esté ejecutando. Se habilita de forma predeterminada al instalar el agente. La actualización automática es necesaria para que Microsoft admita la implementación. - Evite todo tipo de inspección insertada en las comunicaciones TLS salientes entre los agentes y Azure. Este tipo de inspección insertada provoca la degradación en el flujo de la comunicación. - El agente tiene que comunicarse con Azure y la aplicación; por eso la ubicación del agente afecta a la latencia de esas dos conexiones. Puede probar a minimizar la latencia del tráfico de extremo a extremo optimizando cada conexión de red. Cada conexión se puede optimizar mediante uno de estos modos: - Reducir la distancia entre los dos extremos del salto. - Elegir la red adecuada por la que pasar. Por ejemplo, puede ser más rápido pasar por una red privada que por la red pública de Internet debido a los vínculos dedicados. ## <a name="provisioning-agent-questions"></a>Preguntas sobre el agente de aprovisionamiento Aquí se responden algunas preguntas comunes. ### <a name="what-is-the-ga-version-of-the-provisioning-agent"></a>¿Cuál es la versión de disponibilidad general del agente de aprovisionamiento? Para obtener la versión de disponibilidad general más reciente del agente de aprovisionamiento, vea [Agente de aprovisionamiento de Azure Active Directory Connect: historial de lanzamiento de versiones](provisioning-agent-release-version-history.md). ### <a name="how-do-i-know-the-version-of-my-provisioning-agent"></a>¿Cómo puedo saber la versión del agente de aprovisionamiento? 1. Inicie sesión en el servidor de Windows en el que está instalado el agente de aprovisionamiento. 2. Vaya a **Panel de control** > **Desinstalar o cambiar un programa**. 3. Busque la versión correspondiente a la entrada **Microsoft Azure AD Connect Provisioning Agent**. 
### <a name="does-microsoft-automatically-push-provisioning-agent-updates"></a>¿Microsoft envía automáticamente las actualizaciones del agente de aprovisionamiento? Sí. Microsoft actualiza automáticamente el agente de aprovisionamiento si el servicio de Windows Agent Updater de Microsoft Azure AD Connect está en ejecución. Es necesario asegurarse de que el agente está actualizado para obtener soporte técnico para la solución de problemas. ### <a name="can-i-install-the-provisioning-agent-on-the-same-server-running-azure-ad-connect-or-microsoft-identity-manager"></a>¿Puedo instalar el agente de aprovisionamiento en el mismo servidor que ejecuta Azure AD Connect o Microsoft Identity Manager? Sí. Puede instalar el agente de aprovisionamiento en el mismo servidor que ejecuta Azure AD Connect o Microsoft Identity Manager, pero no son necesarios. ### <a name="how-do-i-configure-the-provisioning-agent-to-use-a-proxy-server-for-outbound-http-communication"></a>¿Cómo se puede configurar el agente de aprovisionamiento para usar un servidor proxy para la comunicación HTTP saliente? El agente de aprovisionamiento admite el uso de un proxy de salida. Puede configurarlo si modifica el archivo de configuración del agente **C:\Archivos de programa\Microsoft Azure AD Connect Provisioning Agent\AADConnectProvisioningAgent.exe.config**. Agregue las líneas siguientes hacia el final del archivo, justo antes de la etiqueta `</configuration>` de cierre. Reemplace las variables `[proxy-server]` y `[proxy-port]` con los valores del puerto y el nombre del servidor proxy. ``` <system.net> <defaultProxy enabled="true" useDefaultCredentials="true"> <proxy usesystemdefault="true" proxyaddress="http://[proxy-server]:[proxy-port]" bypassonlocal="true" /> </defaultProxy> </system.net> ``` ### <a name="how-do-i-ensure-the-provisioning-agent-can-communicate-with-the-azure-ad-tenant-and-no-firewalls-are-blocking-ports-required-by-the-agent"></a>¿Cómo se garantiza que el agente de aprovisionamiento se pueda comunicar con el inquilino de Azure AD y que ningún firewall bloquee los puertos que necesita el agente? También puede comprobar si todos los puertos necesarios están abiertos. ### <a name="how-do-i-uninstall-the-provisioning-agent"></a>¿Cómo puedo desinstalar el agente de aprovisionamiento? 1. Inicie sesión en el servidor de Windows en el que está instalado el agente de aprovisionamiento. 2. Vaya a **Panel de control** > **Desinstalar o cambiar un programa**. 3. Desinstale los programas siguientes: - Microsoft Azure AD Connect Provisioning Agent - Agent Updater de Microsoft Azure AD Connect - Microsoft Azure AD Connect Provisioning Agent Package ## <a name="next-steps"></a>Pasos siguientes - [Aprovisionamiento de aplicaciones](user-provisioning.md)
81.320988
483
0.794064
spa_Latn
0.985755
f4995a16aabff55cd91a45d655e591feaf4d3180
647
md
Markdown
RESULTS/B2/README.md
mrxkollo/stanfordnlp
bd31fcc4d4ab545539fe2d0672272a345beb5108
[ "Apache-2.0" ]
null
null
null
RESULTS/B2/README.md
mrxkollo/stanfordnlp
bd31fcc4d4ab545539fe2d0672272a345beb5108
[ "Apache-2.0" ]
null
null
null
RESULTS/B2/README.md
mrxkollo/stanfordnlp
bd31fcc4d4ab545539fe2d0672272a345beb5108
[ "Apache-2.0" ]
null
null
null
* _et_edt.dev.in.conllu -> B2_Kais_1.txt + Tartu training dataset -> test file * _et_edt.train.in.conllu -> B2_Kaisa_1.txt -> training file * _et_et.dev.gold.conllu -> a copy of _et_edt.dev.in.conllu, made for technical reasons. ``` "Training ended with 1 epochs. Best dev F1 = 89.74, at epoch = 1 Running lemmatizer in predict mode Building an attentional Seq2Seq model... Using a Bi-LSTM encoder Using soft attention for LSTM. Finetune all embeddings. [Running seq2seq lemmatizer with edit classifier] Loading data with batch size 500... Running the seq2seq model... [Ensembling dict with seq2seq lemmatizer...] Lemma score: _et_edt 89.74 ```
34.052632
71
0.752705
eng_Latn
0.714146
f499a14778f59e20e556c32e1661b9ef4269e5cd
9,262
md
Markdown
docs/standard/threading/eventwaithandle.md
judenethub/docs.ru-ru
2691f852cdf819d218b9eb62f52eb56a7f6658d9
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/standard/threading/eventwaithandle.md
judenethub/docs.ru-ru
2691f852cdf819d218b9eb62f52eb56a7f6658d9
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/standard/threading/eventwaithandle.md
judenethub/docs.ru-ru
2691f852cdf819d218b9eb62f52eb56a7f6658d9
[ "CC-BY-4.0", "MIT" ]
1
2021-10-31T15:06:56.000Z
2021-10-31T15:06:56.000Z
--- title: EventWaitHandle ms.date: 03/30/2017 helpviewer_keywords: - threading [.NET], EventWaitHandle class - EventWaitHandle class - event wait handles [.NET] - threading [.NET], cross-process synchronization ms.assetid: 11ee0b38-d663-4617-b793-35eb6c64e9fc ms.openlocfilehash: 078bda2354a6f0aec2215b0c5da2a021f53ff922 ms.sourcegitcommit: d8020797a6657d0fbbdff362b80300815f682f94 ms.translationtype: HT ms.contentlocale: ru-RU ms.lasthandoff: 11/24/2020 ms.locfileid: "95723788" --- # <a name="eventwaithandle"></a>EventWaitHandle Класс <xref:System.Threading.EventWaitHandle> позволяет потокам взаимодействовать друг с другом, передавая и ожидая передачи сигналов. Дескрипторы ожидания событий (часто их называют просто "события") — это дескрипторы ожидания, которые можно создавать для освобождения одного или нескольких потоков в состоянии ожидания. Созданное событие (дескриптор ожидания) затем сбрасывается вручную или автоматически. Класс <xref:System.Threading.EventWaitHandle> может представлять любой локальный дескриптор ожидания событий (локальное событие) или именованный системный дескриптор ожидания событий (именованное событие), который доступен для всех процессов. > [!NOTE] > Дескрипторы ожидания событий не являются [событиями](../events/index.md) .NET. Для них не существует делегатов или обработчиков. Слово "событие" здесь используется лишь потому, что такие дескрипторы традиционно именовались событиями операционной системы, а при создании дескриптора ожидания и потоков в состоянии ожидания передаются сведения о том, что произошло событие. Как локальные, так и именованные дескрипторы ожидания событий используют системные объекты синхронизации, защищенные оболочками <xref:Microsoft.Win32.SafeHandles.SafeWaitHandle> для правильного освобождения ресурсов. Вы можете использовать метод <xref:System.Threading.WaitHandle.Dispose%2A>, чтобы освободить ресурсы, как только закончите работу с объектом. ## <a name="event-wait-handles-that-reset-automatically"></a>Дескрипторы ожидания событий, которые сбрасываются автоматически Чтобы создать событие с автоматическим сбросом, укажите <xref:System.Threading.EventResetMode.AutoReset?displayProperty=nameWithType> при создании объекта <xref:System.Threading.EventWaitHandle>. Как можно понять по имени, это событие синхронизации после создания освобождает один поток в состоянии ожидания и автоматически сбрасывается. Чтобы создать событие, вызовите его метод <xref:System.Threading.EventWaitHandle.Set%2A>. События с автоматическим сбросом обычно используются, чтобы поочередно предоставлять монопольный доступ к ресурсу для одного потока из нескольких. В потоке подается запрос на ресурс. Для этого вызывается метод <xref:System.Threading.WaitHandle.WaitOne%2A>. Если в этот момент ни один поток не удерживает дескриптор ожидания, метод возвращает `true` и предоставляет вызывающему потоку управление ресурсом. > [!IMPORTANT] > Как и для всех механизмов синхронизации, необходимо гарантировать во всех ветвях кода правильное ожидание дескриптора перед осуществлением доступа к защищенному ресурсу. Синхронизация потоков выполняется совместно. Если событие с автоматическим сбросом создается при отсутствии потоков в состоянии ожидания, оно сохраняет свой статус, пока не получит обращение от потока. Тогда событие освобождает поток и немедленно сбрасывается, блокируя следующие потоки. 
## <a name="event-wait-handles-that-reset-manually"></a>Дескрипторы ожидания событий, которые сбрасываются вручную Чтобы создать событие со сбросом вручную, укажите <xref:System.Threading.EventResetMode.ManualReset?displayProperty=nameWithType> при создании объекта <xref:System.Threading.EventWaitHandle>. Как можно понять по имени, это событие синхронизации после создания сбрасывается вручную. Пока не будет вызван метод <xref:System.Threading.EventWaitHandle.Reset%2A> для сброса события, все потоки, ожидающие этот дескриптор события, продолжают работу немедленно и без блокировки. Событие со сбросом вручную действует как ворота загона. Пока событие не создано, потоки ожидают его, как стадо лошадей в загоне. Сразу после создания события путем вызова метода <xref:System.Threading.EventWaitHandle.Set%2A> все потоки в состоянии ожидания освобождаются и могут продолжать работу. Событие сохраняет статус созданного, пока не будет вызван его метод <xref:System.Threading.EventWaitHandle.Reset%2A>. Благодаря этому свойству событие со сбросом вручную идеально подходит для ситуации, когда нужно удерживать несколько потоков в ожидании завершения конкретной задачи. Как и лошадям, выходящим из загона, освобожденным потокам потребуется некоторое время, пока операционная система сможет возобновить их выполнение. Если метод <xref:System.Threading.EventWaitHandle.Reset%2A> будет вызван раньше, чем все эти потоки возобновят выполнение, оставшиеся в ожидании потоки снова будут заблокированы. Какие конкретно потоки начнут работу, а какие снова останутся ожидать, зависит от многих случайных факторов, таких как загрузка системы, количество ожидающих выполнения потоков и т. д. Не возникнет никаких проблем, если поток, в котором создается событие, завершится сразу после его создания. Это самый распространенный вариант использования этого подхода. Если нужно, чтобы создающий событие поток начал выполнение новой задачи только после того, как все потоки в состоянии ожидания возобновят работу, заблокируйте его. Иначе возникнет состояние гонки и поведение кода будет непредсказуемым. ## <a name="features-common-to-automatic-and-manual-events"></a>Общие свойства событий с автоматическим сбросом и сбросом вручную Как правило, <xref:System.Threading.EventWaitHandle> блокирует один или несколько потоков, пока незаблокированный поток не вызовет метод <xref:System.Threading.EventWaitHandle.Set%2A>, который освобождает один из потоков в состоянии ожидания (если это событие с автоматическим сбросом) или все потоки сразу (если это событие со сбросом вручную). Поток может создать событие <xref:System.Threading.EventWaitHandle> и заблокироваться в ожидании этого же события в рамках одной атомарной операции, вызвав статический метод <xref:System.Threading.WaitHandle.SignalAndWait%2A?displayProperty=nameWithType>. В статических методах <xref:System.Threading.WaitHandle.WaitAll%2A?displayProperty=nameWithType> и <xref:System.Threading.WaitHandle.WaitAny%2A?displayProperty=nameWithType> можно использовать объекты <xref:System.Threading.EventWaitHandle>. Так как классы <xref:System.Threading.EventWaitHandle> и <xref:System.Threading.Mutex> являются производными от <xref:System.Threading.WaitHandle>, вы можете использовать оба этих класса с этими методами. ### <a name="named-events"></a>Именованные события Операционная система Windows позволяет присваивать имена дескрипторам ожидания. Именованное событие применяется во всей системе. Это означает, что после создания именованное событие будет видимым для всех потоков во всех процессах. 
Таким образом, именованное событие можно использовать для синхронизации действий в разных процессах и потоках. Вы можете создать объект <xref:System.Threading.EventWaitHandle>, который представляет именованное системное событие, с помощью любого из конструкторов, использующих имя события. > [!NOTE] > Так как именованные события доступны во всей системе, может существовать несколько объектов <xref:System.Threading.EventWaitHandle>, представляющих одно и то же именованное событие. При каждом вызове конструктора или метода <xref:System.Threading.EventWaitHandle.OpenExisting%2A> создается новый объект <xref:System.Threading.EventWaitHandle>. Если указать одно и то же имя несколько раз, создается несколько объектов, представляющих одно и то же именованное событие. При использовании именованных событий следует соблюдать осторожность. Поскольку они доступны во всей системе, другой процесс может использовать это же имя события и случайно заблокировать все ваши потоки. Вредоносный код, выполняемый на одном компьютере может использовать это как основу для атак типа "отказ в обслуживании". Чтобы защитить объект <xref:System.Threading.EventWaitHandle>, представляющий именованное событие, примените механизм безопасности управления доступом. Лучше всего использовать конструктор, который определяет объект <xref:System.Security.AccessControl.EventWaitHandleSecurity>. Метод <xref:System.Threading.EventWaitHandle.SetAccessControl%2A> тоже обеспечит безопасность управления доступом, но такой подход оставит систему уязвимой в период между созданием и защитой дескриптора ожидания. Защита событий с помощью безопасности управления доступом предотвращает атаки злоумышленников, но не решает проблемы непреднамеренного конфликта имен. > [!NOTE] > В отличие от класса <xref:System.Threading.EventWaitHandle>, производные классы <xref:System.Threading.AutoResetEvent> и <xref:System.Threading.ManualResetEvent> могут представлять только локальные дескрипторы ожидания. Они не могут представлять именованные системные события. ## <a name="see-also"></a>См. также раздел - <xref:System.Threading.EventWaitHandle> - <xref:System.Threading.WaitHandle> - <xref:System.Threading.AutoResetEvent> - <xref:System.Threading.ManualResetEvent>
126.876712
921
0.826927
rus_Cyrl
0.971916
f499d997fe3579bd351b76fdbf1057a348014ce9
2,834
md
Markdown
articles/vpn-gateway/openvpn-azure-ad-mfa.md
jos431/azure-docs.es-es
4356d4d92c8f616ab4bb3effae4e24b34b121ba8
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/vpn-gateway/openvpn-azure-ad-mfa.md
jos431/azure-docs.es-es
4356d4d92c8f616ab4bb3effae4e24b34b121ba8
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/vpn-gateway/openvpn-azure-ad-mfa.md
jos431/azure-docs.es-es
4356d4d92c8f616ab4bb3effae4e24b34b121ba8
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: 'Habilitar MFA para usuarios de VPN: Autenticación de Azure AD' description: Habilitar la autenticación multifactor para los usuarios de VPN services: vpn-gateway author: anzaman ms.service: vpn-gateway ms.topic: conceptual ms.date: 11/21/2019 ms.author: alzam ms.openlocfilehash: 7f05b850a0d886ac0df5c542de647f91fe62eb05 ms.sourcegitcommit: f523c8a8557ade6c4db6be12d7a01e535ff32f32 ms.translationtype: HT ms.contentlocale: es-ES ms.lasthandoff: 11/22/2019 ms.locfileid: "74382209" --- # <a name="enable-azure-multi-factor-authentication-mfa-for-vpn-users"></a>Habilitar Azure Multi-Factor Authentication (MFA) para los usuarios de VPN Si quiere que se solicite a los usuarios un segundo factor de autenticación antes de conceder acceso, puede configurar Azure Multi-Factor Authentication (MFA) para su inquilino de Azure AD. Los pasos de este artículo le ayudan a habilitar un requisito para la verificación en dos pasos. ## <a name="prereq"></a>Requisito previo El requisito previo para esta configuración es un inquilino de Azure AD configurado mediante los pasos descritos en [Configurar un inquilino](openvpn-azure-ad-tenant.md). ## <a name="mfa"></a>Abrir la página de MFA 1. Inicie sesión en el Portal de Azure. 2. Vaya a **Azure Active Directory -> Todos los usuarios**. 3. Seleccione **Multi-Factor Authentication** para abrir la página de la autenticación multifactor. ![Iniciar sesión](./media/openvpn-azure-ad-mfa/mfa1.jpg) ## <a name="users"></a> Seleccionar usuarios 1. En la página de la **autenticación multifactor**, seleccione los usuarios para los que quiera habilitar MFA. 2. Seleccione **Habilitar**. ![Seleccionar](./media/openvpn-azure-ad-mfa/mfa2.jpg) ## <a name="enableauth"></a>Habilitar la autenticación 1. Vaya a **Azure Active Directory -> Aplicaciones empresariales -> Todas las aplicaciones**. 2. En la página **Aplicaciones empresariales - Todas las aplicaciones**, seleccione **VPN de Azure**. ![Identificador de directorio](./media/openvpn-azure-ad-mfa/user1.jpg) ## <a name="enablesign"></a> Configurar las opciones de inicio de sesión En la página **VPN de Azure - Propiedades**, configure las opciones de inicio de sesión. 1. En **¿Habilitado para que los usuarios inicien sesión?** , seleccione **Sí**. Esto permite que todos los usuarios del inquilino de AD se conecten correctamente a la VPN. 2. En **Asignación de usuarios necesaria**, seleccione **Sí** si quiere limitar el inicio de sesión solo a los usuarios que tienen permisos para la VPN de Azure. 3. Guarde los cambios. ![Permisos](./media/openvpn-azure-ad-mfa/user2.jpg) ## <a name="next-steps"></a>Pasos siguientes Para conectarse a la red virtual, debe crear y configurar un perfil de cliente de VPN. Consulte [Configurar un cliente VPN para conexiones P2S VPN](openvpn-azure-ad-client.md).
47.233333
286
0.766408
spa_Latn
0.921111
f49a91b487480b4b29d3abcb52cc67215d8c2976
54
md
Markdown
README.md
bartczernicki/BaseballData
6571361f0fd7b4f5eab33d50040ce2cbc33f33b0
[ "MIT" ]
null
null
null
README.md
bartczernicki/BaseballData
6571361f0fd7b4f5eab33d50040ce2cbc33f33b0
[ "MIT" ]
null
null
null
README.md
bartczernicki/BaseballData
6571361f0fd7b4f5eab33d50040ce2cbc33f33b0
[ "MIT" ]
null
null
null
# BaseballData Baseball Data Repo of Data and scripts
18
38
0.814815
kor_Hang
0.955762
f49ab85735c5c1c55a4a241ff6c19479de5190a6
94
md
Markdown
README.md
geekysrm/pet-friendly
95ce32a0176d3e7c71c52e40daa1e73b51ce0c39
[ "MIT" ]
1
2020-01-20T18:00:03.000Z
2020-01-20T18:00:03.000Z
README.md
geekysrm/pet-friendly
95ce32a0176d3e7c71c52e40daa1e73b51ce0c39
[ "MIT" ]
null
null
null
README.md
geekysrm/pet-friendly
95ce32a0176d3e7c71c52e40daa1e73b51ce0c39
[ "MIT" ]
null
null
null
<h1 align="center"> Pet Friendly Restaurants </h1> Note: Using **yarn** as package manager
15.666667
39
0.702128
eng_Latn
0.820512
f49b28be82561e60a65de8278f6b184b5bdf4285
6,467
md
Markdown
_posts/2019-04-15-Download-dogging-voc-questions-and-answers-australia.md
Kirsten-Krick/Kirsten-Krick
58994392de08fb245c4163dd2e5566de8dd45a7a
[ "MIT" ]
null
null
null
_posts/2019-04-15-Download-dogging-voc-questions-and-answers-australia.md
Kirsten-Krick/Kirsten-Krick
58994392de08fb245c4163dd2e5566de8dd45a7a
[ "MIT" ]
null
null
null
_posts/2019-04-15-Download-dogging-voc-questions-and-answers-australia.md
Kirsten-Krick/Kirsten-Krick
58994392de08fb245c4163dd2e5566de8dd45a7a
[ "MIT" ]
null
null
null
--- layout: post comments: true categories: Other --- ## Download Dogging voc questions and answers australia book She clenches her Coincidentally, dropping matches on them in a jar. charming company? He gazed out at the rain, ii, and Pernak was reluctant to visit there since as a "deserter" he was uncertain of what kind of reception to expect from the authorities. As a cop, do so as a dogging voc questions and answers australia of course. 020LeGuin20-20Tales20From20Earthsea. "There's nobody in the village could change that," she said. ' So he wrought yonder portrait and went away and I know him not neither have I ever set eyes on him save that day. In spite of an embarrassing moment of fact, as the big Windchaser begins to move, with a clatter and thud. " but it weirded me into some snake hole instead. I have, looked The twins have assured him that if he is patient and watchful, and said he was buried deep under there, standard unit allied with a nationwide chain, she tried to start over. It was a horrible sensation, a human monster-even worse. [Illustration: BURYING PLACE AT KIOTO. Even in storytime, "it's pointless," but dogging voc questions and answers australia made no further objection, resinous trunk of the huge tree was beyond me, C. " Around and under more prep tables, which contained her radio, arguing and debating in frustration and it almost halfway so that it was opened toward the mirror! " authorities. 330 about this, he was entirely future focused. Even trying hard to be quiet, 'Thy star is unpropitious? 020LeGuin20-20Tales20From20Earthsea. As suddenly as the ewe had dogging voc questions and answers australia off, alert! excessive self-effacement might seem to be argumentative. From the direction of the table, and this was grievous to the king and to the people of his realm and to the lady Shah Katoun. The whole account, but the press would still "There's people all over these parts, but I sincerely believe there's no good reason for her to be of the weather Burrough determined to go into the bay at haven't gotten around to this end of it, the second-worst of the unknown lands in which her roaming spirit other species of the whale which still in pretty large numbers "Why don't you sit down?" As Preston rose from the chair, whereat she rejoiced, however, but joy, but we have sinned against Abou Temam and done him to death unjustly. brothel atmosphere; in view of recent events in this room, which Polly calls "a warmly after partaking of an abundant supper of fowl and eggs. Maybe he'd leave a copper or two with her when he went on. can be done, in One door remained. the palisade. Then he went on and presently there met him a third woodcutter and he said to him, they are endlessly devious. The others from D Company who had gone to the Kuan-yin and were in the Bowery with him seemed to feel the same way. Then the king shut himself up with his brother and acquainted him with that which had betided him with the vizier's daughter [Shehrzad] in those three years [which were past] and told him what he had heard from her of saws and parables and chronicles and pleasant traits and jests and stories and anecdotes and dialogues and histories and odes and verses; whereat King Shahzeman marvelled with the utterest of marvel and said, his face beaded with jewels of rain, and he knew that Edom transferred two more pies from table to counter, good-looking in a rough sort of way, and then let nature take its course, a patch of lichen, daervan brengende goet ende geloofflijck simple. 
[Footnote 281: If the runners are not shod with ice in this way the story. There's an evolutionary advantage to sexual reproduction that more than makes up for all the inconveniences? " easily imagine he is looking at ten mystical entry points to the sky of another world. " "The girl's baby," said Nolly, and then what if the local cop who'd read the case file connected one Bartholomew to the other and started asking questions. "Not scary, but to dogging voc questions and answers australia it had been mere groundwork. It was the first time he'd used an obscenity dogging voc questions and answers australia, toads. "Thanks. file:D|Documents20and20SettingsharryDesktopUrsula20K? "I need neither. There would be the purely theoretical advantage of the backseat with Curtis, '"Love is the answer. No knowledge. No Cupping Angel entirely in his big hands, very quietly, all over his spell, and it had been only a matter of minutes before lift-off when one of the flight-crew noticed that suddenly they weren't there-any of them, has not the platter, munitions-- Detweiler's breathing grew slower dogging voc questions and answers australia quieter, "I limned it not, "but I think I know what it must taste like, mistress, but a master. nearly always at night, file:D|Documents20and20SettingsharryDesktopUrsula20K, and "Well, had felt his bladder ready to burst from the needle prick of terror but bad with heroic effort managed to refrain from wetting his pants, some people would quarrel with that. " approach the planetoid, the pale young woman's face hardens into an She gave a small shrug, of London, till, and to the campaign of the Macedonians As Micky struck a match to light the three candles in the center of the table. Retired. But taking their clue from the They both came to her! Of course, where a strong head wind blew. One gardens. " my eyes closed, perhaps dogging voc questions and answers australia feet away, a plaintive whistle high above us rent the unseen sky, at he called it to himself! was hockey, hooked thorns. "I don't care. It dogging voc questions and answers australia rapid. Tonight dogging voc questions and answers australia is his curse. I sat there for a while. Lawrence Bay all the dogs were "Leilani Klonk. Some "I certainly shall," said Hidalga, for she was not conscious of formulating protected from his traitorous sensitive side. walrus is very correctly described in the well-known Norse and slights that she had suffered. " to be watching. King Maharion sought peace and never found it? Chiron didn't want to let her be. I've seen enough of that and it wasn't the same. Geneva accompany them to the mouth of the Lena. "I've lost weight, he sees that During the past twenty-four hours, for instance. But he let Losen act the master. The terror he hid from her vanished with the recital of their vows. Then between long swallows, there's nobody who'd notice or think to ask.
718.555556
6,350
0.792021
eng_Latn
0.999918
f49b373696628feb3e36e5b7385108cfb4bf7373
2,328
md
Markdown
README.md
Ahlkanvorez/cellular-automata
3886687a6ac5f7fff0c3940ff122cb4b745d1e41
[ "BSD-3-Clause" ]
null
null
null
README.md
Ahlkanvorez/cellular-automata
3886687a6ac5f7fff0c3940ff122cb4b745d1e41
[ "BSD-3-Clause" ]
null
null
null
README.md
Ahlkanvorez/cellular-automata
3886687a6ac5f7fff0c3940ff122cb4b745d1e41
[ "BSD-3-Clause" ]
null
null
null
# Cellular Automata

A configurable webapp simulating Conway's Game of Life, and the Day & Night variation.

This started as a weekend project to explore cellular automata. There are multiple implementations of the internal data structures, with the option to switch between them at runtime, since I found it interesting to see how a few simple backends performed.

The webapp is written in ClojureScript, using Quil for graphics, and Re-Frame, Reagent, & ReactJS for the UI. Deployments are to AWS S3, using CloudFront.

## Usage

The project uses the clojure cli tools, and requires npm be installed:

- Run `npm install && clj -M:dev watch app` in your terminal.
- Wait for a while until you see `[:app] Build completed. ...`
- Open [localhost:8080](http://localhost:8080) in your browser.

## License

BSD 3-Clause License

Copyright (c) 2020, Robert Mitchell
All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
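As a side note to the README above (not part of the original project, which is written in ClojureScript): the two rule sets it names are easy to state concretely. Conway's Game of Life is the B3/S23 rule and Day & Night is B3678/S34678. The Python sketch below, with invented names and a simple set-of-live-cells grid, shows one way a single update step can be written for either rule.

```python
# Illustrative only: one next-generation step for the two rule sets the
# README mentions (Life = B3/S23, Day & Night = B3678/S34678). The actual
# project uses its own ClojureScript data structures; names here are made up.
from collections import Counter

LIFE = {"birth": {3}, "survive": {2, 3}}
DAY_AND_NIGHT = {"birth": {3, 6, 7, 8}, "survive": {3, 4, 6, 7, 8}}

def step(alive, rule):
    """Advance one generation; `alive` is a set of (x, y) live-cell coordinates."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if (cell in alive and n in rule["survive"])
        or (cell not in alive and n in rule["birth"])
    }

if __name__ == "__main__":
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    print(step(glider, LIFE))
```

Swapping `LIFE` for `DAY_AND_NIGHT` in the call is the only change needed to switch between the two variations.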
43.924528
78
0.793814
eng_Latn
0.674472
f49c2269a94357602952ce7948d907a03fbffe05
1,076
md
Markdown
README.md
VDIGPKU/SReN_MM
aeb95305d489d0c4ea18bbf15b3f39387b73738d
[ "MIT" ]
1
2021-11-05T03:29:28.000Z
2021-11-05T03:29:28.000Z
README.md
VDIGPKU/SReN_MM
aeb95305d489d0c4ea18bbf15b3f39387b73738d
[ "MIT" ]
null
null
null
README.md
VDIGPKU/SReN_MM
aeb95305d489d0c4ea18bbf15b3f39387b73738d
[ "MIT" ]
null
null
null
# An End-to-End Quadrilateral Regression Network for Comic Panel Extraction

## Introduction

This project is forked from [jwyang/faster-rcnn.pytorch](https://github.com/jwyang/faster-rcnn.pytorch/tree/2f7b53c9313cdb73325cf0ff35cc6a3efb4601a5). Please refer to the original repo for the installation guide.

### Data

anno.tar.gz contains our annotation files and dataset splits. Our dataset includes Manga109 and other copyrighted volumes. For Manga109, please refer to [Manga109](http://www.manga109.org/en/) for data. The remaining images cannot be shared due to copyright issues.

## Citation

    @inproceedings{he2018,
      author = {He, Zheqi and Wang, Siwei and Zhou, Yafeng and Lu, Xiaoqing and Cai, Ling and Wang, Yongtao and Tang, Zhi},
      booktitle = {ACM Multimedia Conference},
      doi = {10.1145/3240508.3240555},
      isbn = {9781450356657},
      keywords = {Comics Processing,Panel Extraction,Quadrilateral Object Detection},
      pages = {887--895},
      title = {{An end-to-end quadrilateral regression network for comic panel extraction}},
      year = {2018}
    }
46.782609
265
0.749071
eng_Latn
0.827994
f49cb9dad8a984c0a7f68f3810b24471e3945bde
54
md
Markdown
README.md
jdcastro/nginx-configs
295c0a895f010af59173cfa9e2fb9b50bf61700f
[ "MIT" ]
null
null
null
README.md
jdcastro/nginx-configs
295c0a895f010af59173cfa9e2fb9b50bf61700f
[ "MIT" ]
null
null
null
README.md
jdcastro/nginx-configs
295c0a895f010af59173cfa9e2fb9b50bf61700f
[ "MIT" ]
null
null
null
# nginx-configs

Basic configuration for nginx and php
18
37
0.814815
eng_Latn
0.981665
f49d402ab5764a794058a24af47d72a4fe7b321e
1,205
md
Markdown
_posts/2018-05-18-disabling-and-enabling-hyper-v-on-windows.md
jyeary/jyeary.github.io
e96414fad7ccd2191299225ad71c9614074e42dd
[ "MIT" ]
null
null
null
_posts/2018-05-18-disabling-and-enabling-hyper-v-on-windows.md
jyeary/jyeary.github.io
e96414fad7ccd2191299225ad71c9614074e42dd
[ "MIT" ]
5
2020-07-01T12:05:44.000Z
2022-02-26T05:54:27.000Z
_posts/2018-05-18-disabling-and-enabling-hyper-v-on-windows.md
jyeary/jyeary.github.io
e96414fad7ccd2191299225ad71c9614074e42dd
[ "MIT" ]
null
null
null
---
layout: post
cover: 'assets/images/brown-landscape-under-grey-sky-3244513.jpg'
logo: 'assets/images/logo.jpg'
navigation: true
author: jyeary
disqus: true
date: 2018-05-18 03:20:22+00:00
title: Disabling and Enabling Hyper-V on Windows
categories: jyeary
tags: life
subclass: 'post tag-life'
---

I needed to be able to run VMWare and VirtualBox from my Windows 10 Pro machine. This machine also has Docker on it. The issue is that you can only have one hypervisor working at a time. So Docker **OR** VirtualBox and VMWare. A temporary solution is to disable Hyper-V using the following commands. Once it is disabled, you can run VirtualBox and VMWare.

### Disable

```
bcdedit /set hypervisorlaunchtype off
```

Reboot machine.

### Enable

```
bcdedit /set hypervisorlaunchtype auto
```

Reboot machine.

### Long Term Solution

A long-term solution using alternate boot commands is what I ended up using. The details can be found in this article: [Docker Tip #13: Get Docker for Windows and VirtualBox Working Together](https://nickjanetakis.com/blog/docker-tip-13-get-docker-for-windows-and-virtualbox-working-together). This is for someone like me who switches more frequently than a casual user.
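If you switch back and forth often, the two commands above are easy to wrap in a small script. The following Python sketch is my own addition, not from the original post; it simply shells out to the same `bcdedit` command, must be run from an elevated prompt, and a reboot is still required afterwards.

```python
# Thin wrapper around the bcdedit commands shown above (illustrative only).
# Run from an elevated (Administrator) prompt on Windows; reboot to apply.
import subprocess
import sys

def set_hypervisor_launch(enabled: bool) -> None:
    value = "auto" if enabled else "off"
    subprocess.run(
        ["bcdedit", "/set", "hypervisorlaunchtype", value],
        check=True,
    )
    print(f"hypervisorlaunchtype set to {value}; reboot for it to take effect")

if __name__ == "__main__":
    # e.g. `python toggle_hyperv.py on` enables Hyper-V; anything else disables it.
    set_hypervisor_launch(len(sys.argv) > 1 and sys.argv[1] == "on")
```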
37.65625
358
0.771784
eng_Latn
0.981957
f49d58304640f9d7b02842e1938e1c1239da2d6e
4,645
md
Markdown
materials.md
agarwalrounak/qmt
6fb8ee55fb9d544b72f6dc0c275000914e03af06
[ "MIT" ]
31
2018-03-07T18:30:41.000Z
2019-05-06T18:35:41.000Z
materials.md
imagineagents/qmt
5e8a7001cc020979636e492448abcfd894396038
[ "MIT" ]
64
2018-03-23T19:10:48.000Z
2019-05-06T18:14:13.000Z
materials.md
imagineagents/qmt
5e8a7001cc020979636e492448abcfd894396038
[ "MIT" ]
15
2019-05-08T10:55:04.000Z
2022-01-05T16:49:29.000Z
# Materials database

## Metals

| metal      |work function [eV]|
|------------|-----------------:|
|Al          |             4.280|
|Au          |             5.285|
|NbTiN       |             4.280|
|degenDopedSi|             4.050|

Sources:

* Wikipedia
* Ioffe Institute, http://www.ioffe.ru/SVA/NSM/Semicond/Si/basic.html

## Dielectrics

|dielectric|relative permittivity|
|----------|--------------------:|
|Al2O3     |                  9.0|
|HfO2      |                 25.0|
|Si3N4     |                  7.0|
|SiO2      |                  3.9|
|ZrO2      |                 25.0|
|air       |                  1.0|

Sources:

* Robertson, EPJAP 28, 265 (2004): High dielectric constant oxides, https://doi.org/10.1051/epjap:2004206
* Biercuk et al., APL 83, 2405 (2003), Low-temperature atomic-layer-deposition lift-off method for microelectronic and nanoelectronic applications, https://doi.org/10.1063/1.1612904
* Yota et al., JVSTA 31, 01A134 (2013), Characterization of atomic layer deposition HfO2, Al2O3, and plasma-enhanced chemical vapor deposition Si3N4 as metal-insulator-metal capacitor dielectric for GaAs HBT technology, https://doi.org/10.1116/1.4769207

## Semiconductors

|                                                          | AlAs | AlSb  | GaAs | GaSb | InAs | InP   | InSb | Si   |
|----------------------------------------------------------|-----:|------:|-----:|-----:|-----:|------:|-----:|-----:|
|relative permittivity                                     |10.060|11.0000|13.100|15.700|15.150|12.5000|16.800|11.700|
|electron mass [m_e]                                       | 0.150| 0.1400| 0.067| 0.039| 0.026| 0.0795| 0.013| 1.108|
|electron affinity $\chi$ [eV]                             | 2.970|       | 4.070| 4.060| 4.900| 4.3800| 4.590| 4.050|
|direct band gap $E_g(\Gamma)$ [eV]                        | 3.099| 2.3860| 1.519| 0.812| 0.417| 1.4236| 0.235| 3.480|
|valence band offset w.r.t. InSb [eV]                      |-1.330|-0.4100|-0.800|-0.030|-0.590|-0.9400| 0.000|      |
|spin-orbit splitting $\Delta_{so}$ [eV]                   | 0.280| 0.6760| 0.341| 0.760| 0.390| 0.1080| 0.810| 0.044|
|interband matrix element $E_P$ [eV]                       |21.100|18.7000|28.800|27.000|21.500|20.7000|23.300|      |
|Luttinger parameter $\gamma_1$                            | 3.760| 5.1800| 6.980|13.400|20.000| 5.0800|34.800| 4.280|
|Luttinger parameter $\gamma_2$                            | 0.820| 1.1900| 2.060| 4.700| 8.500| 1.6000|15.500| 0.339|
|Luttinger parameter $\gamma_3$                            | 1.420| 1.9700| 2.930| 6.000| 9.200| 2.1000|16.500| 1.446|
|charge neutrality level [from VB edge, in eV]             |      |       |      |      | 0.577|       | 0.118|      |
|density of surface states [10$^{12}$ cm$^{-2}$ eV$^{-1}$] |      |       |      |      | 3.000|       | 3.000|      |

Sources:

* [Vurgaftman] Vurgaftman et al., APR 89, 5815 (2001): Band parameters for III-V compound semiconductors and their alloys, https://doi.org/10.1063/1.1368156
* [Heedt] Heedt, et al. Resolving ambiguities in nanowire field-effect transistor characterization. Nanoscale 7, 18188-18197, 2015. https://doi.org/10.1039/c5nr03608a
* [Monch] Monch, Semiconductor Surfaces and Interfaces, 3rd Edition, Springer (2001).
* [ioffe.ru] http://www.ioffe.ru/SVA/NSM/Semicond

### Bowing parameters

Properties of an alloy $A_{1-x} B_x$ are computed by quadratic interpolation between the endpoints if there is a corresponding bowing parameter for this property and alloy. Otherwise linear interpolation is employed. The quadratic interpolation formula uses the convention $O(A_{1-x} B_x) = (1-x) O(A) + x O(B) - x(1-x) O_{AB}$, with the bowing parameter $O_{AB}$.

|                                       |(AlAs, GaAs)|(AlAs, InAs)|(GaAs, InAs)|(GaSb, InSb)|(InAs, InSb)|
|---------------------------------------|-----------:|-----------:|-----------:|-----------:|-----------:|
|electron mass [m_e]                    |           0|      0.0490|      0.0091|      0.0092|       0.035|
|direct band gap $E_g(\Gamma)$ [eV]     |            |      0.7000|      0.4770|      0.4250|       0.670|
|valence band offset w.r.t. InSb [eV]   |            |     -0.6400|     -0.3800|            |            |
|spin-orbit splitting $\Delta_{so}$ [eV]|            |      0.1500|      0.1500|      0.1000|       1.200|
|interband matrix element $E_P$ [eV]    |            |            |     -1.4800|            |            |

Sources:

* [Vurgaftman] Vurgaftman et al., APR 89, 5815 (2001): Band parameters for III-V compound semiconductors and their alloys, https://doi.org/10.1063/1.1368156
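To make the bowing-parameter convention above concrete, here is a small Python sketch. It is not part of the qmt codebase; the function name is invented for this example, and the example numbers are taken from the tables above.

```python
# Interpolation convention described above:
# O(A_{1-x}B_x) = (1-x)*O(A) + x*O(B) - x*(1-x)*O_AB,
# falling back to linear interpolation when no bowing parameter is tabulated.
from typing import Optional

def alloy_property(o_a: float, o_b: float, x: float,
                   o_ab: Optional[float] = None) -> float:
    """Property O of the alloy A_{1-x}B_x at composition x in [0, 1]."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("composition x must lie in [0, 1]")
    linear = (1.0 - x) * o_a + x * o_b
    return linear if o_ab is None else linear - x * (1.0 - x) * o_ab

# Direct band gap of Ga_{0.47}In_{0.53}As from the GaAs and InAs gaps and the
# (GaAs, InAs) bowing parameter in the tables above: roughly 0.82 eV
# (these are low-temperature band parameters).
print(alloy_property(1.519, 0.417, 0.53, 0.477))
```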
58.797468
117
0.512164
yue_Hant
0.212909
f49d7c956e028a0420ce1d066a77a03b714829ea
1,340
md
Markdown
biztalk/adapters-and-accelerators/accelerator-rosettanet/defending-against-denial-of-service-attacks.md
changeworld/biztalk-docs.zh-CN
0ee8ca09b377aa26a13e0f200c75fca467cd519c
[ "CC-BY-4.0", "MIT" ]
null
null
null
biztalk/adapters-and-accelerators/accelerator-rosettanet/defending-against-denial-of-service-attacks.md
changeworld/biztalk-docs.zh-CN
0ee8ca09b377aa26a13e0f200c75fca467cd519c
[ "CC-BY-4.0", "MIT" ]
null
null
null
biztalk/adapters-and-accelerators/accelerator-rosettanet/defending-against-denial-of-service-attacks.md
changeworld/biztalk-docs.zh-CN
0ee8ca09b377aa26a13e0f200c75fca467cd519c
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Defending against denial-of-service attacks | Microsoft Docs
ms.custom: ''
ms.date: 06/08/2017
ms.prod: biztalk-server
ms.reviewer: ''
ms.suite: ''
ms.tgt_pltfrm: ''
ms.topic: article
helpviewer_keywords:
- security, denial-of-service attacks
- denial-of-service attacks
ms.assetid: 63342d7a-a5df-4e11-9037-93175d8f7ea7
caps.latest.revision: 4
author: MandiOhlinger
ms.author: mandia
manager: anneta
ms.openlocfilehash: ab2aa48e126aafc7b2202547fd72806b5d471ef4
ms.sourcegitcommit: 266308ec5c6a9d8d80ff298ee6051b4843c5d626
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 06/27/2018
ms.locfileid: "36976918"
---
# <a name="defending-against-denial-of-service-attacks"></a>Defending against denial-of-service attacks

An attacker could mount a denial-of-service attack against a Microsoft® [!INCLUDE[BTARN_CurrentVersion_FirstRef](../../includes/btarn-currentversion-firstref-md.md)] installation by overloading the RNIFReceive.aspx receive page. This can be done by sending a large number of empty messages to that page. If left unchecked, such an attack can fill the event log with events posted by the ASPX receive page.

## <a name="defending-against-an-attack"></a>Defending against an attack

To defend the server against denial-of-service attacks, we recommend that you keep the event log at a reasonable size and take steps to handle an excessive number of events. You can do this by setting a maximum log size, choosing how events are overwritten, or managing the log size with [!INCLUDE[btsWinNoVersion](../../includes/btswinnoversion-md.md)]® Management Instrumentation (WMI). For more information, see the Help for Microsoft [!INCLUDE[btsWinSvrNoVersion](../../includes/btswinsvrnoversion-md.md)]™.

## <a name="see-also"></a>See also
[Manage configuration, certificates, databases, and security](manage-configuration-certificates-databases-security.md)
41.875
265
0.774627
yue_Hant
0.523193
f49dad100171fa0581dc8fc6cff282691c361559
1,843
md
Markdown
README.md
horaciocome1/horaciocome1.github.io
433c1bb4846f89353c788bf2bec51ac3d83e4214
[ "MIT" ]
1
2019-09-27T16:35:58.000Z
2019-09-27T16:35:58.000Z
README.md
horaciocome1/horaciocome1.github.io
433c1bb4846f89353c788bf2bec51ac3d83e4214
[ "MIT" ]
null
null
null
README.md
horaciocome1/horaciocome1.github.io
433c1bb4846f89353c788bf2bec51ac3d83e4214
[ "MIT" ]
null
null
null
# Dev Landing Page

Minimal landing page for developers.

Developers don't talk much. Their code does all the talking. So here's a minimal landing page for developers.

## Why?

[![start with why](https://img.shields.io/badge/start%20with-why%3F-brightgreen.svg?style=flat)](http://www.ted.com/talks/simon_sinek_how_great_leaders_inspire_action)

_I wanted a dev landing page to showcase everything I do online and I wanted it to be minimal and right to the point rather than beautiful and hefty. And I think most of the devs out there would want the same._ ~ Dinesh Pandiyan

_So I sat down one night and created this **Dev Landing Page**. Feel free to fork, clone, play around and make this your own._ ~ Dinesh Pandiyan

## Themes

Dev Landing Page comes in 9 **material themes**.

![9 Material Themes](https://image.ibb.co/jJVKCn/dev_landing_page_themes.jpg)

If none of these themes fit within your taste, it's quite easy to customize and create your own too.

## GitHub Pages

GitHub makes it easy to create personal websites. Follow this link - [GitHub Pages](https://pages.github.com/) to know how, or follow the steps below.

If you already have a GitHub profile (obviously):

* Create a new repo with the name `{username}.github.io`
* Clone/Fork this repo and copy the files to your newly created repo
* Customize your name, links and everything else for your landing page
* `git push`

Voila! Your site should be live at `https://{username}.github.io`

Here's Dinesh's **Dev Landing Page** - [Dinesh Pandiyan - portfolio v1](https://portfoliov1.dineshpandiyan.com)

### Custom Domain

If you want to make your new landing page available under a domain like `{username}.com`, you can get started here - [Setting up a custom domain](https://help.github.com/articles/quick-start-setting-up-a-custom-domain/).

## License

MIT © Dinesh Pandiyan
42.860465
223
0.759631
eng_Latn
0.981235
f49dfda57c9cefebf47d248b0078a4812c224ea8
44,196
md
Markdown
repos/hylang/remote/python3.7-alpine.md
kamarules74/repo-info
968c5f6a064fe9e0e24b4e3265c98aef204a41d5
[ "Apache-2.0" ]
1
2021-12-28T13:40:31.000Z
2021-12-28T13:40:31.000Z
repos/hylang/remote/python3.7-alpine.md
guruantree/repo-info
c158f66a8e5c5ac17da885e25f91fb53703b144c
[ "Apache-2.0" ]
null
null
null
repos/hylang/remote/python3.7-alpine.md
guruantree/repo-info
c158f66a8e5c5ac17da885e25f91fb53703b144c
[ "Apache-2.0" ]
null
null
null
## `hylang:python3.7-alpine` ```console $ docker pull hylang@sha256:d347e182fb77112e900fe653c145288edbf7b22368fc6d7a4aaee2c0916598ed ``` - Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json` - Platforms: - linux; amd64 - linux; arm variant v6 - linux; arm variant v7 - linux; arm64 variant v8 - linux; 386 - linux; ppc64le - linux; s390x ### `hylang:python3.7-alpine` - linux; amd64 ```console $ docker pull hylang@sha256:8e3f11ebef94c6a4b1a0ff2dd03e5c1c01cbe27953febd9e5f0a47f358a76be1 ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **18.7 MB (18655303 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:95c5b4bb61309a93647bece52f460ba6553c8dadc52e58c9347ccbeaf537d125` - Default Command: `["hy"]` ```dockerfile # Wed, 31 Mar 2021 20:10:06 GMT ADD file:7119167b56ff1228b2fb639c768955ce9db7a999cd947179240b216dfa5ccbb9 in / # Wed, 31 Mar 2021 20:10:06 GMT CMD ["/bin/sh"] # Wed, 31 Mar 2021 20:33:00 GMT ENV PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin # Thu, 01 Apr 2021 08:52:12 GMT ENV LANG=C.UTF-8 # Thu, 01 Apr 2021 09:21:18 GMT RUN set -eux; apk add --no-cache ca-certificates ; # Thu, 01 Apr 2021 09:34:25 GMT ENV GPG_KEY=0D96DF4D4110E5C43FBFB17F2D347EA6AA65421D # Thu, 01 Apr 2021 09:34:25 GMT ENV PYTHON_VERSION=3.7.10 # Fri, 02 Apr 2021 22:53:52 GMT RUN set -ex && apk add --no-cache --virtual .fetch-deps gnupg tar xz && wget -O python.tar.xz "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz" && wget -O python.tar.xz.asc "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz.asc" && export GNUPGHOME="$(mktemp -d)" && gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$GPG_KEY" && gpg --batch --verify python.tar.xz.asc python.tar.xz && { command -v gpgconf > /dev/null && gpgconf --kill all || :; } && rm -rf "$GNUPGHOME" python.tar.xz.asc && mkdir -p /usr/src/python && tar -xJC /usr/src/python --strip-components=1 -f python.tar.xz && rm python.tar.xz && apk add --no-cache --virtual .build-deps bluez-dev bzip2-dev coreutils dpkg-dev dpkg expat-dev findutils gcc gdbm-dev libc-dev libffi-dev libnsl-dev libtirpc-dev linux-headers make ncurses-dev openssl-dev pax-utils readline-dev sqlite-dev tcl-dev tk tk-dev util-linux-dev xz-dev zlib-dev && apk del --no-network .fetch-deps && cd /usr/src/python && gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)" && ./configure --build="$gnuArch" --enable-loadable-sqlite-extensions --enable-optimizations --enable-option-checking=fatal --enable-shared --with-system-expat --with-system-ffi --without-ensurepip && make -j "$(nproc)" EXTRA_CFLAGS="-DTHREAD_STACK_SIZE=0x100000" LDFLAGS="-Wl,--strip-all" PROFILE_TASK='-m test.regrtest --pgo test_array test_base64 test_binascii test_binhex test_binop test_bytes test_c_locale_coercion test_class test_cmath test_codecs test_compile test_complex test_csv test_decimal test_dict test_float test_fstring test_hashlib test_io test_iter test_json test_long test_math test_memoryview test_pickle test_re test_set test_slice test_struct test_threading test_time test_traceback test_unicode ' && make install && rm -rf /usr/src/python && find /usr/local -depth \( \( -type d -a \( -name test -o -name tests -o -name idle_test \) \) -o \( -type f -a \( -name '*.pyc' -o -name '*.pyo' -o -name '*.a' \) \) -o \( -type f -a -name 'wininst-*.exe' \) \) -exec rm -rf '{}' + && find /usr/local -type f -executable 
-not \( -name '*tkinter*' \) -exec scanelf --needed --nobanner --format '%n#p' '{}' ';' | tr ',' '\n' | sort -u | awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' | xargs -rt apk add --no-cache --virtual .python-rundeps && apk del --no-network .build-deps && python3 --version # Fri, 02 Apr 2021 22:53:53 GMT RUN cd /usr/local/bin && ln -s idle3 idle && ln -s pydoc3 pydoc && ln -s python3 python && ln -s python3-config python-config # Fri, 02 Apr 2021 22:53:53 GMT ENV PYTHON_PIP_VERSION=21.0.1 # Fri, 02 Apr 2021 22:53:53 GMT ENV PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/29f37dbe6b3842ccd52d61816a3044173962ebeb/public/get-pip.py # Fri, 02 Apr 2021 22:53:54 GMT ENV PYTHON_GET_PIP_SHA256=e03eb8a33d3b441ff484c56a436ff10680479d4bd14e59268e67977ed40904de # Fri, 02 Apr 2021 22:54:00 GMT RUN set -ex; wget -O get-pip.py "$PYTHON_GET_PIP_URL"; echo "$PYTHON_GET_PIP_SHA256 *get-pip.py" | sha256sum -c -; python get-pip.py --disable-pip-version-check --no-cache-dir "pip==$PYTHON_PIP_VERSION" ; pip --version; find /usr/local -depth \( \( -type d -a \( -name test -o -name tests -o -name idle_test \) \) -o \( -type f -a \( -name '*.pyc' -o -name '*.pyo' \) \) \) -exec rm -rf '{}' +; rm -f get-pip.py # Fri, 02 Apr 2021 22:54:00 GMT CMD ["python3"] # Sat, 03 Apr 2021 00:43:59 GMT ENV HY_VERSION=0.20.0 # Sat, 03 Apr 2021 00:44:08 GMT RUN pip install --no-cache-dir "hy == $HY_VERSION" # Sat, 03 Apr 2021 00:44:08 GMT CMD ["hy"] ``` - Layers: - `sha256:ca3cd42a7c9525f6ce3d64c1a70982613a8235f0cc057ec9244052921853ef15` Last Modified: Wed, 31 Mar 2021 20:10:46 GMT Size: 2.8 MB (2811947 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:bc9e52c7d1775ed8403a37980c82a67eb4cbf3b69a36b63aee4deb1933b6772b` Last Modified: Thu, 01 Apr 2021 10:06:36 GMT Size: 281.3 KB (281277 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:9142ed5c3e0f662fd5cc69d22c56d80522032b5caaebeb24724798f480bb2659` Last Modified: Fri, 02 Apr 2021 23:49:59 GMT Size: 10.6 MB (10568809 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:22093cf31f759de4033781aaa493101f5884a3ad10a4caaee4aae413302748ca` Last Modified: Fri, 02 Apr 2021 23:49:57 GMT Size: 234.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:4cdb1fe21e97b3a6e3bc365639744232cb34d519c68f30f8326cc533f5aece1c` Last Modified: Fri, 02 Apr 2021 23:49:58 GMT Size: 2.2 MB (2164651 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:2f4127ae72dd85ffde4f9c982c1a15d91df8b8909deb7878387a1e0c7145905a` Last Modified: Sat, 03 Apr 2021 00:49:26 GMT Size: 2.8 MB (2828385 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `hylang:python3.7-alpine` - linux; arm variant v6 ```console $ docker pull hylang@sha256:5503d27df5c805db399f82300f1ee6b36060e896cd0f173018b6b86e27672b9b ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **18.1 MB (18084580 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:06c238fb6d9ae2791cd67aaa3b0aa80a9ba7394ee4d00c96ad032807c965901c` - Default Command: `["hy"]` ```dockerfile # Wed, 31 Mar 2021 17:18:49 GMT ADD file:988879d74f643b89539e5a0b6d74221621f19f4f87f722614addadc46ce47200 in / # Wed, 31 Mar 2021 17:18:50 GMT CMD ["/bin/sh"] # Thu, 01 Apr 2021 00:45:32 GMT ENV PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin # Thu, 01 Apr 2021 00:45:33 GMT ENV LANG=C.UTF-8 # Thu, 01 Apr 2021 01:36:42 GMT RUN set 
-eux; apk add --no-cache ca-certificates ; # Thu, 01 Apr 2021 01:58:24 GMT ENV GPG_KEY=0D96DF4D4110E5C43FBFB17F2D347EA6AA65421D # Thu, 01 Apr 2021 01:58:25 GMT ENV PYTHON_VERSION=3.7.10 # Fri, 02 Apr 2021 21:21:27 GMT RUN set -ex && apk add --no-cache --virtual .fetch-deps gnupg tar xz && wget -O python.tar.xz "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz" && wget -O python.tar.xz.asc "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz.asc" && export GNUPGHOME="$(mktemp -d)" && gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$GPG_KEY" && gpg --batch --verify python.tar.xz.asc python.tar.xz && { command -v gpgconf > /dev/null && gpgconf --kill all || :; } && rm -rf "$GNUPGHOME" python.tar.xz.asc && mkdir -p /usr/src/python && tar -xJC /usr/src/python --strip-components=1 -f python.tar.xz && rm python.tar.xz && apk add --no-cache --virtual .build-deps bluez-dev bzip2-dev coreutils dpkg-dev dpkg expat-dev findutils gcc gdbm-dev libc-dev libffi-dev libnsl-dev libtirpc-dev linux-headers make ncurses-dev openssl-dev pax-utils readline-dev sqlite-dev tcl-dev tk tk-dev util-linux-dev xz-dev zlib-dev && apk del --no-network .fetch-deps && cd /usr/src/python && gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)" && ./configure --build="$gnuArch" --enable-loadable-sqlite-extensions --enable-optimizations --enable-option-checking=fatal --enable-shared --with-system-expat --with-system-ffi --without-ensurepip && make -j "$(nproc)" EXTRA_CFLAGS="-DTHREAD_STACK_SIZE=0x100000" LDFLAGS="-Wl,--strip-all" PROFILE_TASK='-m test.regrtest --pgo test_array test_base64 test_binascii test_binhex test_binop test_bytes test_c_locale_coercion test_class test_cmath test_codecs test_compile test_complex test_csv test_decimal test_dict test_float test_fstring test_hashlib test_io test_iter test_json test_long test_math test_memoryview test_pickle test_re test_set test_slice test_struct test_threading test_time test_traceback test_unicode ' && make install && rm -rf /usr/src/python && find /usr/local -depth \( \( -type d -a \( -name test -o -name tests -o -name idle_test \) \) -o \( -type f -a \( -name '*.pyc' -o -name '*.pyo' -o -name '*.a' \) \) -o \( -type f -a -name 'wininst-*.exe' \) \) -exec rm -rf '{}' + && find /usr/local -type f -executable -not \( -name '*tkinter*' \) -exec scanelf --needed --nobanner --format '%n#p' '{}' ';' | tr ',' '\n' | sort -u | awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' | xargs -rt apk add --no-cache --virtual .python-rundeps && apk del --no-network .build-deps && python3 --version # Fri, 02 Apr 2021 21:21:30 GMT RUN cd /usr/local/bin && ln -s idle3 idle && ln -s pydoc3 pydoc && ln -s python3 python && ln -s python3-config python-config # Fri, 02 Apr 2021 21:21:32 GMT ENV PYTHON_PIP_VERSION=21.0.1 # Fri, 02 Apr 2021 21:21:33 GMT ENV PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/29f37dbe6b3842ccd52d61816a3044173962ebeb/public/get-pip.py # Fri, 02 Apr 2021 21:21:34 GMT ENV PYTHON_GET_PIP_SHA256=e03eb8a33d3b441ff484c56a436ff10680479d4bd14e59268e67977ed40904de # Fri, 02 Apr 2021 21:21:51 GMT RUN set -ex; wget -O get-pip.py "$PYTHON_GET_PIP_URL"; echo "$PYTHON_GET_PIP_SHA256 *get-pip.py" | sha256sum -c -; python get-pip.py --disable-pip-version-check --no-cache-dir "pip==$PYTHON_PIP_VERSION" ; pip --version; find /usr/local -depth \( \( -type d -a \( -name test -o -name tests -o -name idle_test \) \) -o \( -type f -a \( -name '*.pyc' -o -name 
'*.pyo' \) \) \) -exec rm -rf '{}' +; rm -f get-pip.py # Fri, 02 Apr 2021 21:21:52 GMT CMD ["python3"] # Fri, 02 Apr 2021 22:31:05 GMT ENV HY_VERSION=0.20.0 # Fri, 02 Apr 2021 22:31:14 GMT RUN pip install --no-cache-dir "hy == $HY_VERSION" # Fri, 02 Apr 2021 22:31:16 GMT CMD ["hy"] ``` - Layers: - `sha256:bb87125c6ee1ce30c6b33d2c96a9fbe46da4a290f7cb1dd73fd62d4e06503699` Last Modified: Wed, 31 Mar 2021 17:19:55 GMT Size: 2.6 MB (2622116 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:900f11ca502dbd6a2339481720d4cebbc177442450c694cce2861d4b39dc8866` Last Modified: Thu, 01 Apr 2021 02:55:13 GMT Size: 281.4 KB (281445 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:ca0217f4f62ac5a928539acb27ca592a0c90349d65a4464af6f4c9451bbbde7e` Last Modified: Fri, 02 Apr 2021 21:59:45 GMT Size: 10.2 MB (10187677 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:0c7417e9d859036dfa6af52eeaf68fac4e94065bf00af59a457bf1a8bdc78b9e` Last Modified: Fri, 02 Apr 2021 21:59:42 GMT Size: 231.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:5f3fbfe69752b6859ec2009d131351f03bb97f20ff6afb3d93e29e38f84d3654` Last Modified: Fri, 02 Apr 2021 21:59:43 GMT Size: 2.2 MB (2164672 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:741ce157868bb16fc0d585e1fa39a2033d140c773a59c5ef1616fa0473d59020` Last Modified: Fri, 02 Apr 2021 22:33:37 GMT Size: 2.8 MB (2828439 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `hylang:python3.7-alpine` - linux; arm variant v7 ```console $ docker pull hylang@sha256:304f8b50ca1e18b2ac9038f295f43a79c4cbcb25bd34ea269e9f0c4810472afc ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **17.4 MB (17423118 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:8e22cc4084a572ef4c1e03489a8aab4172131804d72fb7d5e2184459e887ab31` - Default Command: `["hy"]` ```dockerfile # Wed, 31 Mar 2021 18:13:27 GMT ADD file:56e92c06393237a87e0a1ff475e9c9dc80e897d69ec20f45359b587906da345b in / # Wed, 31 Mar 2021 18:13:31 GMT CMD ["/bin/sh"] # Thu, 01 Apr 2021 04:37:55 GMT ENV PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin # Thu, 01 Apr 2021 04:37:57 GMT ENV LANG=C.UTF-8 # Thu, 01 Apr 2021 05:11:01 GMT RUN set -eux; apk add --no-cache ca-certificates ; # Thu, 01 Apr 2021 05:25:54 GMT ENV GPG_KEY=0D96DF4D4110E5C43FBFB17F2D347EA6AA65421D # Thu, 01 Apr 2021 05:25:55 GMT ENV PYTHON_VERSION=3.7.10 # Fri, 02 Apr 2021 23:52:51 GMT RUN set -ex && apk add --no-cache --virtual .fetch-deps gnupg tar xz && wget -O python.tar.xz "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz" && wget -O python.tar.xz.asc "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz.asc" && export GNUPGHOME="$(mktemp -d)" && gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$GPG_KEY" && gpg --batch --verify python.tar.xz.asc python.tar.xz && { command -v gpgconf > /dev/null && gpgconf --kill all || :; } && rm -rf "$GNUPGHOME" python.tar.xz.asc && mkdir -p /usr/src/python && tar -xJC /usr/src/python --strip-components=1 -f python.tar.xz && rm python.tar.xz && apk add --no-cache --virtual .build-deps bluez-dev bzip2-dev coreutils dpkg-dev dpkg expat-dev findutils gcc gdbm-dev libc-dev libffi-dev libnsl-dev libtirpc-dev linux-headers make ncurses-dev openssl-dev pax-utils readline-dev sqlite-dev tcl-dev tk tk-dev util-linux-dev 
xz-dev zlib-dev && apk del --no-network .fetch-deps && cd /usr/src/python && gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)" && ./configure --build="$gnuArch" --enable-loadable-sqlite-extensions --enable-optimizations --enable-option-checking=fatal --enable-shared --with-system-expat --with-system-ffi --without-ensurepip && make -j "$(nproc)" EXTRA_CFLAGS="-DTHREAD_STACK_SIZE=0x100000" LDFLAGS="-Wl,--strip-all" PROFILE_TASK='-m test.regrtest --pgo test_array test_base64 test_binascii test_binhex test_binop test_bytes test_c_locale_coercion test_class test_cmath test_codecs test_compile test_complex test_csv test_decimal test_dict test_float test_fstring test_hashlib test_io test_iter test_json test_long test_math test_memoryview test_pickle test_re test_set test_slice test_struct test_threading test_time test_traceback test_unicode ' && make install && rm -rf /usr/src/python && find /usr/local -depth \( \( -type d -a \( -name test -o -name tests -o -name idle_test \) \) -o \( -type f -a \( -name '*.pyc' -o -name '*.pyo' -o -name '*.a' \) \) -o \( -type f -a -name 'wininst-*.exe' \) \) -exec rm -rf '{}' + && find /usr/local -type f -executable -not \( -name '*tkinter*' \) -exec scanelf --needed --nobanner --format '%n#p' '{}' ';' | tr ',' '\n' | sort -u | awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' | xargs -rt apk add --no-cache --virtual .python-rundeps && apk del --no-network .build-deps && python3 --version # Fri, 02 Apr 2021 23:52:54 GMT RUN cd /usr/local/bin && ln -s idle3 idle && ln -s pydoc3 pydoc && ln -s python3 python && ln -s python3-config python-config # Fri, 02 Apr 2021 23:52:55 GMT ENV PYTHON_PIP_VERSION=21.0.1 # Fri, 02 Apr 2021 23:52:56 GMT ENV PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/29f37dbe6b3842ccd52d61816a3044173962ebeb/public/get-pip.py # Fri, 02 Apr 2021 23:52:58 GMT ENV PYTHON_GET_PIP_SHA256=e03eb8a33d3b441ff484c56a436ff10680479d4bd14e59268e67977ed40904de # Fri, 02 Apr 2021 23:53:10 GMT RUN set -ex; wget -O get-pip.py "$PYTHON_GET_PIP_URL"; echo "$PYTHON_GET_PIP_SHA256 *get-pip.py" | sha256sum -c -; python get-pip.py --disable-pip-version-check --no-cache-dir "pip==$PYTHON_PIP_VERSION" ; pip --version; find /usr/local -depth \( \( -type d -a \( -name test -o -name tests -o -name idle_test \) \) -o \( -type f -a \( -name '*.pyc' -o -name '*.pyo' \) \) \) -exec rm -rf '{}' +; rm -f get-pip.py # Fri, 02 Apr 2021 23:53:11 GMT CMD ["python3"] # Sat, 03 Apr 2021 02:15:00 GMT ENV HY_VERSION=0.20.0 # Sat, 03 Apr 2021 02:15:12 GMT RUN pip install --no-cache-dir "hy == $HY_VERSION" # Sat, 03 Apr 2021 02:15:13 GMT CMD ["hy"] ``` - Layers: - `sha256:07389e51ea05e1c9a3cb0ef92d31181f2afa1e445207ad99ffd8a94d6d6af295` Last Modified: Wed, 31 Mar 2021 18:14:57 GMT Size: 2.4 MB (2424108 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:72e5f05db4b6271cb33de27fbe4ce9c95827e13e7bfb063310864ec17c8fe9ad` Last Modified: Thu, 01 Apr 2021 06:10:23 GMT Size: 280.5 KB (280523 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:2d6e075f4308921469b26aa1f3fd72ebc75e8127157262c9b8a82b3b118d00d2` Last Modified: Sat, 03 Apr 2021 01:00:38 GMT Size: 9.7 MB (9725148 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:432829893fdc75151bbfa14dd66eb1838056d13109a7c325027220346a32dad8` Last Modified: Sat, 03 Apr 2021 01:00:34 GMT Size: 233.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:d3a5b264ed4155e7ea465602bee43a2021f1c746d12b40f3fcb5d97b533e871a` Last 
Modified: Sat, 03 Apr 2021 01:00:35 GMT Size: 2.2 MB (2164625 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:cb9df427708df6c940770050972d0ffc1273816e0eef1ea0a6fecabb0db2b2f8` Last Modified: Sat, 03 Apr 2021 02:19:35 GMT Size: 2.8 MB (2828481 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `hylang:python3.7-alpine` - linux; arm64 variant v8 ```console $ docker pull hylang@sha256:30633ce13532ffddef5e86bf4d8554603cff59f91fbd74263b18b643a427f2f0 ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **18.7 MB (18657082 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:949145225cc0a7eca083131238188fca29f0a1100b2ebeb7bbd0420b00c6f9a5` - Default Command: `["hy"]` ```dockerfile # Wed, 31 Mar 2021 17:21:21 GMT ADD file:3b16ffee2b26d8af5db152fcc582aaccd9e1ec9e3343874e9969a205550fe07d in / # Wed, 31 Mar 2021 17:21:23 GMT CMD ["/bin/sh"] # Wed, 31 Mar 2021 17:37:00 GMT ENV PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin # Thu, 01 Apr 2021 08:34:05 GMT ENV LANG=C.UTF-8 # Thu, 01 Apr 2021 09:09:04 GMT RUN set -eux; apk add --no-cache ca-certificates ; # Thu, 01 Apr 2021 09:25:14 GMT ENV GPG_KEY=0D96DF4D4110E5C43FBFB17F2D347EA6AA65421D # Thu, 01 Apr 2021 09:25:15 GMT ENV PYTHON_VERSION=3.7.10 # Sat, 03 Apr 2021 02:31:49 GMT RUN set -ex && apk add --no-cache --virtual .fetch-deps gnupg tar xz && wget -O python.tar.xz "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz" && wget -O python.tar.xz.asc "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz.asc" && export GNUPGHOME="$(mktemp -d)" && gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$GPG_KEY" && gpg --batch --verify python.tar.xz.asc python.tar.xz && { command -v gpgconf > /dev/null && gpgconf --kill all || :; } && rm -rf "$GNUPGHOME" python.tar.xz.asc && mkdir -p /usr/src/python && tar -xJC /usr/src/python --strip-components=1 -f python.tar.xz && rm python.tar.xz && apk add --no-cache --virtual .build-deps bluez-dev bzip2-dev coreutils dpkg-dev dpkg expat-dev findutils gcc gdbm-dev libc-dev libffi-dev libnsl-dev libtirpc-dev linux-headers make ncurses-dev openssl-dev pax-utils readline-dev sqlite-dev tcl-dev tk tk-dev util-linux-dev xz-dev zlib-dev && apk del --no-network .fetch-deps && cd /usr/src/python && gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)" && ./configure --build="$gnuArch" --enable-loadable-sqlite-extensions --enable-optimizations --enable-option-checking=fatal --enable-shared --with-system-expat --with-system-ffi --without-ensurepip && make -j "$(nproc)" EXTRA_CFLAGS="-DTHREAD_STACK_SIZE=0x100000" LDFLAGS="-Wl,--strip-all" PROFILE_TASK='-m test.regrtest --pgo test_array test_base64 test_binascii test_binhex test_binop test_bytes test_c_locale_coercion test_class test_cmath test_codecs test_compile test_complex test_csv test_decimal test_dict test_float test_fstring test_hashlib test_io test_iter test_json test_long test_math test_memoryview test_pickle test_re test_set test_slice test_struct test_threading test_time test_traceback test_unicode ' && make install && rm -rf /usr/src/python && find /usr/local -depth \( \( -type d -a \( -name test -o -name tests -o -name idle_test \) \) -o \( -type f -a \( -name '*.pyc' -o -name '*.pyo' -o -name '*.a' \) \) -o \( -type f -a -name 'wininst-*.exe' \) \) -exec rm -rf '{}' + && find /usr/local -type f -executable -not \( -name 
'*tkinter*' \) -exec scanelf --needed --nobanner --format '%n#p' '{}' ';' | tr ',' '\n' | sort -u | awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' | xargs -rt apk add --no-cache --virtual .python-rundeps && apk del --no-network .build-deps && python3 --version # Sat, 03 Apr 2021 02:31:52 GMT RUN cd /usr/local/bin && ln -s idle3 idle && ln -s pydoc3 pydoc && ln -s python3 python && ln -s python3-config python-config # Sat, 03 Apr 2021 02:31:52 GMT ENV PYTHON_PIP_VERSION=21.0.1 # Sat, 03 Apr 2021 02:31:53 GMT ENV PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/29f37dbe6b3842ccd52d61816a3044173962ebeb/public/get-pip.py # Sat, 03 Apr 2021 02:31:54 GMT ENV PYTHON_GET_PIP_SHA256=e03eb8a33d3b441ff484c56a436ff10680479d4bd14e59268e67977ed40904de # Sat, 03 Apr 2021 02:32:06 GMT RUN set -ex; wget -O get-pip.py "$PYTHON_GET_PIP_URL"; echo "$PYTHON_GET_PIP_SHA256 *get-pip.py" | sha256sum -c -; python get-pip.py --disable-pip-version-check --no-cache-dir "pip==$PYTHON_PIP_VERSION" ; pip --version; find /usr/local -depth \( \( -type d -a \( -name test -o -name tests -o -name idle_test \) \) -o \( -type f -a \( -name '*.pyc' -o -name '*.pyo' \) \) \) -exec rm -rf '{}' +; rm -f get-pip.py # Sat, 03 Apr 2021 02:32:07 GMT CMD ["python3"] # Sat, 03 Apr 2021 04:30:02 GMT ENV HY_VERSION=0.20.0 # Sat, 03 Apr 2021 04:30:11 GMT RUN pip install --no-cache-dir "hy == $HY_VERSION" # Sat, 03 Apr 2021 04:30:12 GMT CMD ["hy"] ``` - Layers: - `sha256:912815139b61c8926da31f7701f0a924e7964e3713052bf1a53193a4562157f6` Last Modified: Wed, 31 Mar 2021 17:22:41 GMT Size: 2.7 MB (2711920 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:1604f359df07c467169b97da4d8c327003813a2a85e2387ab5f355e86c73cf29` Last Modified: Thu, 01 Apr 2021 10:14:44 GMT Size: 281.5 KB (281487 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:c34a57edf86c31f5baea24aa99cde8f06b4db32ed4a60a763f715757914bd126` Last Modified: Sat, 03 Apr 2021 03:42:37 GMT Size: 10.7 MB (10670395 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:0c9dce4a7d9d909ba01f0c31ea989417be6d570ea82eb6f727fd43ad52e1bf86` Last Modified: Sat, 03 Apr 2021 03:42:34 GMT Size: 229.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:a93d14df8e17645ed353d207f6e4a42b0aecefa0d8c9efdaf03569202c7d9c48` Last Modified: Sat, 03 Apr 2021 03:42:35 GMT Size: 2.2 MB (2164642 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:c22525b4cfcb5fa3b1eb6162b41afd0f54ca5bd0f1f86d8b77c8be8c9b5c4373` Last Modified: Sat, 03 Apr 2021 04:34:35 GMT Size: 2.8 MB (2828409 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `hylang:python3.7-alpine` - linux; 386 ```console $ docker pull hylang@sha256:2b96b64b35d674ae5fe950ffbefe58f65819e0da3d7a199d8c5024e71996015a ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **18.8 MB (18848739 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:7d28287618b5c81d2d79589d3f512e71caab72b7bb2bc70608dd0d5a25d1fd55` - Default Command: `["hy"]` ```dockerfile # Wed, 31 Mar 2021 17:43:00 GMT ADD file:245767f958e2b5e6fad41d45d3361849e7c6b2255303e3c785f0f2c86019553a in / # Wed, 31 Mar 2021 17:43:00 GMT CMD ["/bin/sh"] # Wed, 31 Mar 2021 18:13:58 GMT ENV PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin # Wed, 31 Mar 2021 18:13:59 GMT ENV LANG=C.UTF-8 # Wed, 31 Mar 2021 19:05:45 GMT RUN set -eux; apk add --no-cache 
ca-certificates ; # Wed, 31 Mar 2021 19:30:00 GMT ENV GPG_KEY=0D96DF4D4110E5C43FBFB17F2D347EA6AA65421D # Wed, 31 Mar 2021 19:30:00 GMT ENV PYTHON_VERSION=3.7.10 # Fri, 02 Apr 2021 23:52:41 GMT RUN set -ex && apk add --no-cache --virtual .fetch-deps gnupg tar xz && wget -O python.tar.xz "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz" && wget -O python.tar.xz.asc "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz.asc" && export GNUPGHOME="$(mktemp -d)" && gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$GPG_KEY" && gpg --batch --verify python.tar.xz.asc python.tar.xz && { command -v gpgconf > /dev/null && gpgconf --kill all || :; } && rm -rf "$GNUPGHOME" python.tar.xz.asc && mkdir -p /usr/src/python && tar -xJC /usr/src/python --strip-components=1 -f python.tar.xz && rm python.tar.xz && apk add --no-cache --virtual .build-deps bluez-dev bzip2-dev coreutils dpkg-dev dpkg expat-dev findutils gcc gdbm-dev libc-dev libffi-dev libnsl-dev libtirpc-dev linux-headers make ncurses-dev openssl-dev pax-utils readline-dev sqlite-dev tcl-dev tk tk-dev util-linux-dev xz-dev zlib-dev && apk del --no-network .fetch-deps && cd /usr/src/python && gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)" && ./configure --build="$gnuArch" --enable-loadable-sqlite-extensions --enable-optimizations --enable-option-checking=fatal --enable-shared --with-system-expat --with-system-ffi --without-ensurepip && make -j "$(nproc)" EXTRA_CFLAGS="-DTHREAD_STACK_SIZE=0x100000" LDFLAGS="-Wl,--strip-all" PROFILE_TASK='-m test.regrtest --pgo test_array test_base64 test_binascii test_binhex test_binop test_bytes test_c_locale_coercion test_class test_cmath test_codecs test_compile test_complex test_csv test_decimal test_dict test_float test_fstring test_hashlib test_io test_iter test_json test_long test_math test_memoryview test_pickle test_re test_set test_slice test_struct test_threading test_time test_traceback test_unicode ' && make install && rm -rf /usr/src/python && find /usr/local -depth \( \( -type d -a \( -name test -o -name tests -o -name idle_test \) \) -o \( -type f -a \( -name '*.pyc' -o -name '*.pyo' -o -name '*.a' \) \) -o \( -type f -a -name 'wininst-*.exe' \) \) -exec rm -rf '{}' + && find /usr/local -type f -executable -not \( -name '*tkinter*' \) -exec scanelf --needed --nobanner --format '%n#p' '{}' ';' | tr ',' '\n' | sort -u | awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' | xargs -rt apk add --no-cache --virtual .python-rundeps && apk del --no-network .build-deps && python3 --version # Fri, 02 Apr 2021 23:52:42 GMT RUN cd /usr/local/bin && ln -s idle3 idle && ln -s pydoc3 pydoc && ln -s python3 python && ln -s python3-config python-config # Fri, 02 Apr 2021 23:52:43 GMT ENV PYTHON_PIP_VERSION=21.0.1 # Fri, 02 Apr 2021 23:52:43 GMT ENV PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/29f37dbe6b3842ccd52d61816a3044173962ebeb/public/get-pip.py # Fri, 02 Apr 2021 23:52:43 GMT ENV PYTHON_GET_PIP_SHA256=e03eb8a33d3b441ff484c56a436ff10680479d4bd14e59268e67977ed40904de # Fri, 02 Apr 2021 23:52:50 GMT RUN set -ex; wget -O get-pip.py "$PYTHON_GET_PIP_URL"; echo "$PYTHON_GET_PIP_SHA256 *get-pip.py" | sha256sum -c -; python get-pip.py --disable-pip-version-check --no-cache-dir "pip==$PYTHON_PIP_VERSION" ; pip --version; find /usr/local -depth \( \( -type d -a \( -name test -o -name tests -o -name idle_test \) \) -o \( -type f -a \( -name '*.pyc' -o -name '*.pyo' \) \) \) -exec rm -rf 
'{}' +; rm -f get-pip.py # Fri, 02 Apr 2021 23:52:51 GMT CMD ["python3"] # Sat, 03 Apr 2021 01:21:36 GMT ENV HY_VERSION=0.20.0 # Sat, 03 Apr 2021 01:21:41 GMT RUN pip install --no-cache-dir "hy == $HY_VERSION" # Sat, 03 Apr 2021 01:21:41 GMT CMD ["hy"] ``` - Layers: - `sha256:b22e590ebf70a9a5901c380b07232ef3c07cb13440402934dfdffb8f8721a949` Last Modified: Wed, 31 Mar 2021 17:44:05 GMT Size: 2.8 MB (2818802 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:1d68ce160b157cf435b47089adf11f7e42d535694161a2d5e5a7bb26277c1ddf` Last Modified: Wed, 31 Mar 2021 23:09:13 GMT Size: 281.8 KB (281814 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:f00906d9c94319014bde192ef10eba0d42c6346c36903cf6fae2403584b88f77` Last Modified: Sat, 03 Apr 2021 00:57:58 GMT Size: 10.8 MB (10755016 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:f44ddd4f5f5ce06cb528a34693e986cd07c21c5a41e3d2d0f40c32e4186cbda3` Last Modified: Sat, 03 Apr 2021 00:57:55 GMT Size: 230.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:9849a44729e2ec4863b600e2c7cafe192897de25cf704224a5f4ed52b42b4bd3` Last Modified: Sat, 03 Apr 2021 00:57:56 GMT Size: 2.2 MB (2164660 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:fa4eaf4e31e28eecaffc8ed1821668b69a99190ec8c13e9f202f185dc0eeb66b` Last Modified: Sat, 03 Apr 2021 01:30:08 GMT Size: 2.8 MB (2828217 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `hylang:python3.7-alpine` - linux; ppc64le ```console $ docker pull hylang@sha256:69ffc47cd28f0062a9e18d0cb011e538b4fb2f800f8aa20a50a5d18c485b007e ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **18.9 MB (18946238 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:cec76289fa0e96446c2fbaae8a0cb1c4cd07f3bc02029075f90b4389baa3a099` - Default Command: `["hy"]` ```dockerfile # Wed, 31 Mar 2021 18:55:41 GMT ADD file:1dd3315eb685a1b6729efb6f5b634e414f3da0f065078952bc6c0339dc09512d in / # Wed, 31 Mar 2021 18:55:49 GMT CMD ["/bin/sh"] # Thu, 01 Apr 2021 13:35:27 GMT ENV PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin # Thu, 01 Apr 2021 13:35:32 GMT ENV LANG=C.UTF-8 # Thu, 01 Apr 2021 14:17:17 GMT RUN set -eux; apk add --no-cache ca-certificates ; # Thu, 01 Apr 2021 14:37:20 GMT ENV GPG_KEY=0D96DF4D4110E5C43FBFB17F2D347EA6AA65421D # Thu, 01 Apr 2021 14:37:25 GMT ENV PYTHON_VERSION=3.7.10 # Sat, 03 Apr 2021 00:48:27 GMT RUN set -ex && apk add --no-cache --virtual .fetch-deps gnupg tar xz && wget -O python.tar.xz "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz" && wget -O python.tar.xz.asc "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz.asc" && export GNUPGHOME="$(mktemp -d)" && gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$GPG_KEY" && gpg --batch --verify python.tar.xz.asc python.tar.xz && { command -v gpgconf > /dev/null && gpgconf --kill all || :; } && rm -rf "$GNUPGHOME" python.tar.xz.asc && mkdir -p /usr/src/python && tar -xJC /usr/src/python --strip-components=1 -f python.tar.xz && rm python.tar.xz && apk add --no-cache --virtual .build-deps bluez-dev bzip2-dev coreutils dpkg-dev dpkg expat-dev findutils gcc gdbm-dev libc-dev libffi-dev libnsl-dev libtirpc-dev linux-headers make ncurses-dev openssl-dev pax-utils readline-dev sqlite-dev tcl-dev tk tk-dev util-linux-dev xz-dev zlib-dev && apk del 
--no-network .fetch-deps && cd /usr/src/python && gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)" && ./configure --build="$gnuArch" --enable-loadable-sqlite-extensions --enable-optimizations --enable-option-checking=fatal --enable-shared --with-system-expat --with-system-ffi --without-ensurepip && make -j "$(nproc)" EXTRA_CFLAGS="-DTHREAD_STACK_SIZE=0x100000" LDFLAGS="-Wl,--strip-all" PROFILE_TASK='-m test.regrtest --pgo test_array test_base64 test_binascii test_binhex test_binop test_bytes test_c_locale_coercion test_class test_cmath test_codecs test_compile test_complex test_csv test_decimal test_dict test_float test_fstring test_hashlib test_io test_iter test_json test_long test_math test_memoryview test_pickle test_re test_set test_slice test_struct test_threading test_time test_traceback test_unicode ' && make install && rm -rf /usr/src/python && find /usr/local -depth \( \( -type d -a \( -name test -o -name tests -o -name idle_test \) \) -o \( -type f -a \( -name '*.pyc' -o -name '*.pyo' -o -name '*.a' \) \) -o \( -type f -a -name 'wininst-*.exe' \) \) -exec rm -rf '{}' + && find /usr/local -type f -executable -not \( -name '*tkinter*' \) -exec scanelf --needed --nobanner --format '%n#p' '{}' ';' | tr ',' '\n' | sort -u | awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' | xargs -rt apk add --no-cache --virtual .python-rundeps && apk del --no-network .build-deps && python3 --version # Sat, 03 Apr 2021 00:48:41 GMT RUN cd /usr/local/bin && ln -s idle3 idle && ln -s pydoc3 pydoc && ln -s python3 python && ln -s python3-config python-config # Sat, 03 Apr 2021 00:48:47 GMT ENV PYTHON_PIP_VERSION=21.0.1 # Sat, 03 Apr 2021 00:48:54 GMT ENV PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/29f37dbe6b3842ccd52d61816a3044173962ebeb/public/get-pip.py # Sat, 03 Apr 2021 00:49:06 GMT ENV PYTHON_GET_PIP_SHA256=e03eb8a33d3b441ff484c56a436ff10680479d4bd14e59268e67977ed40904de # Sat, 03 Apr 2021 00:49:37 GMT RUN set -ex; wget -O get-pip.py "$PYTHON_GET_PIP_URL"; echo "$PYTHON_GET_PIP_SHA256 *get-pip.py" | sha256sum -c -; python get-pip.py --disable-pip-version-check --no-cache-dir "pip==$PYTHON_PIP_VERSION" ; pip --version; find /usr/local -depth \( \( -type d -a \( -name test -o -name tests -o -name idle_test \) \) -o \( -type f -a \( -name '*.pyc' -o -name '*.pyo' \) \) \) -exec rm -rf '{}' +; rm -f get-pip.py # Sat, 03 Apr 2021 00:49:45 GMT CMD ["python3"] # Sat, 03 Apr 2021 05:30:56 GMT ENV HY_VERSION=0.20.0 # Sat, 03 Apr 2021 05:31:25 GMT RUN pip install --no-cache-dir "hy == $HY_VERSION" # Sat, 03 Apr 2021 05:31:30 GMT CMD ["hy"] ``` - Layers: - `sha256:dc4792b25345295bf964e1db1bceedb2338bfad8f0fb64f0cc07b152df9aef84` Last Modified: Wed, 31 Mar 2021 18:57:19 GMT Size: 2.8 MB (2813219 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:f9a43c9db83e70a975e04eef4f62a5e3d45e2dc6cd0c22ebaedf12bd1dae007a` Last Modified: Thu, 01 Apr 2021 15:27:33 GMT Size: 283.4 KB (283412 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:da33d0a7ac4338c009878c0243acee0c1a530d7081e572fe6b7fb44d9e632d81` Last Modified: Sat, 03 Apr 2021 01:58:19 GMT Size: 10.9 MB (10855946 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:6b64717ac8458f07e40145d809a0f24a9fa4d881e199feef9532002c9c33b6a4` Last Modified: Sat, 03 Apr 2021 01:58:16 GMT Size: 230.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:657f7ba8fe91a6f4e57adbd7d993afbcaa8be0d773dbcfb9a34f85edf54b5119` Last Modified: Sat, 03 Apr 2021 
01:58:16 GMT Size: 2.2 MB (2164690 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:75286b0ab27f60282bb2a66b065a86f5b9c424794e4c94983bd971bfdee6b0f4` Last Modified: Sat, 03 Apr 2021 05:38:30 GMT Size: 2.8 MB (2828741 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `hylang:python3.7-alpine` - linux; s390x ```console $ docker pull hylang@sha256:db004113a2ba34c67ae716b71d86bd0ca66cf88c2ba801bf6b259e5271ee075e ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **18.5 MB (18471046 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:32d6dd9d0ccb3b9fbd955876180c9270bd41776721a442199901f283cfa0886f` - Default Command: `["hy"]` ```dockerfile # Wed, 31 Mar 2021 17:14:58 GMT ADD file:3f5fe04867af3c9f2cfc5b315d97097145ae11343399985386321a8db21d7786 in / # Wed, 31 Mar 2021 17:14:58 GMT CMD ["/bin/sh"] # Thu, 01 Apr 2021 02:52:47 GMT ENV PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin # Thu, 01 Apr 2021 02:52:47 GMT ENV LANG=C.UTF-8 # Thu, 01 Apr 2021 03:17:46 GMT RUN set -eux; apk add --no-cache ca-certificates ; # Thu, 01 Apr 2021 03:29:08 GMT ENV GPG_KEY=0D96DF4D4110E5C43FBFB17F2D347EA6AA65421D # Thu, 01 Apr 2021 03:29:09 GMT ENV PYTHON_VERSION=3.7.10 # Fri, 02 Apr 2021 22:17:59 GMT RUN set -ex && apk add --no-cache --virtual .fetch-deps gnupg tar xz && wget -O python.tar.xz "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz" && wget -O python.tar.xz.asc "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz.asc" && export GNUPGHOME="$(mktemp -d)" && gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$GPG_KEY" && gpg --batch --verify python.tar.xz.asc python.tar.xz && { command -v gpgconf > /dev/null && gpgconf --kill all || :; } && rm -rf "$GNUPGHOME" python.tar.xz.asc && mkdir -p /usr/src/python && tar -xJC /usr/src/python --strip-components=1 -f python.tar.xz && rm python.tar.xz && apk add --no-cache --virtual .build-deps bluez-dev bzip2-dev coreutils dpkg-dev dpkg expat-dev findutils gcc gdbm-dev libc-dev libffi-dev libnsl-dev libtirpc-dev linux-headers make ncurses-dev openssl-dev pax-utils readline-dev sqlite-dev tcl-dev tk tk-dev util-linux-dev xz-dev zlib-dev && apk del --no-network .fetch-deps && cd /usr/src/python && gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)" && ./configure --build="$gnuArch" --enable-loadable-sqlite-extensions --enable-optimizations --enable-option-checking=fatal --enable-shared --with-system-expat --with-system-ffi --without-ensurepip && make -j "$(nproc)" EXTRA_CFLAGS="-DTHREAD_STACK_SIZE=0x100000" LDFLAGS="-Wl,--strip-all" PROFILE_TASK='-m test.regrtest --pgo test_array test_base64 test_binascii test_binhex test_binop test_bytes test_c_locale_coercion test_class test_cmath test_codecs test_compile test_complex test_csv test_decimal test_dict test_float test_fstring test_hashlib test_io test_iter test_json test_long test_math test_memoryview test_pickle test_re test_set test_slice test_struct test_threading test_time test_traceback test_unicode ' && make install && rm -rf /usr/src/python && find /usr/local -depth \( \( -type d -a \( -name test -o -name tests -o -name idle_test \) \) -o \( -type f -a \( -name '*.pyc' -o -name '*.pyo' -o -name '*.a' \) \) -o \( -type f -a -name 'wininst-*.exe' \) \) -exec rm -rf '{}' + && find /usr/local -type f -executable -not \( -name '*tkinter*' \) -exec scanelf --needed 
--nobanner --format '%n#p' '{}' ';' | tr ',' '\n' | sort -u | awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' | xargs -rt apk add --no-cache --virtual .python-rundeps && apk del --no-network .build-deps && python3 --version # Fri, 02 Apr 2021 22:18:00 GMT RUN cd /usr/local/bin && ln -s idle3 idle && ln -s pydoc3 pydoc && ln -s python3 python && ln -s python3-config python-config # Fri, 02 Apr 2021 22:18:00 GMT ENV PYTHON_PIP_VERSION=21.0.1 # Fri, 02 Apr 2021 22:18:00 GMT ENV PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/29f37dbe6b3842ccd52d61816a3044173962ebeb/public/get-pip.py # Fri, 02 Apr 2021 22:18:01 GMT ENV PYTHON_GET_PIP_SHA256=e03eb8a33d3b441ff484c56a436ff10680479d4bd14e59268e67977ed40904de # Fri, 02 Apr 2021 22:18:06 GMT RUN set -ex; wget -O get-pip.py "$PYTHON_GET_PIP_URL"; echo "$PYTHON_GET_PIP_SHA256 *get-pip.py" | sha256sum -c -; python get-pip.py --disable-pip-version-check --no-cache-dir "pip==$PYTHON_PIP_VERSION" ; pip --version; find /usr/local -depth \( \( -type d -a \( -name test -o -name tests -o -name idle_test \) \) -o \( -type f -a \( -name '*.pyc' -o -name '*.pyo' \) \) \) -exec rm -rf '{}' +; rm -f get-pip.py # Fri, 02 Apr 2021 22:18:06 GMT CMD ["python3"] # Fri, 02 Apr 2021 23:05:38 GMT ENV HY_VERSION=0.20.0 # Fri, 02 Apr 2021 23:05:42 GMT RUN pip install --no-cache-dir "hy == $HY_VERSION" # Fri, 02 Apr 2021 23:05:43 GMT CMD ["hy"] ``` - Layers: - `sha256:1d4058bbeedf5296bcaf5ae8f37c8cd58152acad3ec45a536e08b83f5d3abe83` Last Modified: Wed, 31 Mar 2021 17:15:36 GMT Size: 2.6 MB (2602591 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:13fffd1159bc5cc1fd5bd07c481c7b17f9ed5edeee2ef0f68ced35a80e9c7faa` Last Modified: Thu, 01 Apr 2021 03:54:30 GMT Size: 281.7 KB (281695 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:70eb951fded328c67bb7fc83f5e00e858805886686ee30e752b012eed5dbab95` Last Modified: Fri, 02 Apr 2021 22:48:17 GMT Size: 10.6 MB (10594159 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:d6541525199f67ab28022aabcd7c7f13031f74401b2659987601f3782fedcd80` Last Modified: Fri, 02 Apr 2021 22:48:16 GMT Size: 232.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:96b27d18e7d5bf0ecb89ca67ffc1b5f812b02dde3ae8c5b2c5c1102be15f0d4d` Last Modified: Fri, 02 Apr 2021 22:48:16 GMT Size: 2.2 MB (2164338 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:e716222067ab089170893aa5f753e3472fc6e04281e8dd038ac5fe69097b2ae7` Last Modified: Fri, 02 Apr 2021 23:08:39 GMT Size: 2.8 MB (2828031 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
80.649635
2,678
0.693818
yue_Hant
0.182316
f49e7071f45a1230d3f267cd589b6d97af4b4fd9
2,245
md
Markdown
src/mn/2020-02/12/02.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
68
2016-10-30T23:17:56.000Z
2022-03-27T11:58:16.000Z
src/mn/2020-02/12/02.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
367
2016-10-21T03:50:22.000Z
2022-03-28T23:35:25.000Z
src/mn/2020-02/12/02.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
109
2016-08-02T14:32:13.000Z
2022-03-31T10:18:41.000Z
--- title: Зөрчилдөж буй эшлэлүүдийн шалтгаанууд date: 13/06/2020 --- `2 Tимот 2:10–15-г уншаарай. Паул Тимотыг “үгийг зөв заагч” Бурханы үгэнд тууштай хандахыг зөвлөсөн. Түүний үгнээс бид бүхэн ямар чухал сургамжийг авч болох вэ?` Бичээсийг судалж байгаа анхааралтай, үнэнч сурагчид Библид ойлгоход хэцүү эшлэлүүд байдаг гэдгийг үгүйсгэхгүй. Гэвч ингэлээ гээд бид сэтгэлээр унах шаардлага байхгүй. Ийм асуудал бэрхшээлүүд гарна гэдгийг бид ойлгож байх ёстой. Сурч мэдэх тал дээр бүхнийг бүрэн ойлгож мэдэх, тэр тусмаа тэнгэрлэг зүйлсийг бүрэн таньж мэдэх боломж төгс бус, мөхөс хүнд байхгүй. Мөхөс хүмүүс Бичээсийг өгсөн хэмжээлшгүй агуу Бурханы мэргэн ухааныг ойлгох гэж хичээх үедээ гарцаагүй бэрхшээлтэй тулгарна. Библийн сургаалыг ойлгоход бэрхшээл учирч байна гэдэг нь Библи худал гэсэн баталгаа болж чадахгүй. Бурханы онгодоор илчлэгдсэн Библийн сургаалыг үл тоомсорлож байгаа хүмүүс Бичээсийг хоорондоо үл нийцсэн, алдаа дутагдалтай ном гэж тунхаглаж байдаг. Тэд Библийг хүний ухаанаар бичсэн ном учраас төгс бус, алдаа дутагдалтай ном гэж гэж итгэдэг. Ийм оюун ухаантайгаар Бичээсийг Бурханы онгодоор дамжуулан нэгдмэл, итгэж найдаж болохуйцаар бичигдсэн гэж судлах боломж гарахгүй. Бичээсийн эхний түүх болох Бүтээлийн түүхэнд (жишээ нь) үл итгэж, эргэлзэх юм бол Бичээсийн үлдсэн бүх хэсгийг хүлээн зөвшөөрөхгүй болох болно. Библийн зарим үл ойлгогдох маш бага хэсгүүд нь хуулан бичиж байгаа хүмүүс, орчуулагчдын алдаанаас шалтгаалсан байх боломжтой. Энэ талаар Элен Уайт, “Зарим хүмүүс бидэнд хандан, “Бичээсийг хувилахад, орчуулахад асуудал гарсан байж болох уу? гэж асуудаг. Тийм байх боломжтой, гэвч ийм алдаа байж болно гэдэг явцгүй, өчүүхэн үзлээс болж Бурханы агуу төлөвлөгөөг ойлгож, хүлээн зөвшөөрч чадахгүй хүн Бурханы Онгодоор бичигдсэн үгсэд эргэлзэж тээнэгэлзэх болно. Тэд жирийн хүний ойлгож чадах энгийн асуудлуудыг, Бурханы үг энгийн болон гайхамшигтай, өөх, чөмөгтэй үгс гэдгийг ойлгох чадваргүй болдог. Бичээсийг хувилах, орчуулахад гарсан алдаа хүмүүсийн итгэлийг алдуулж, эргэлзэж, энгийн үгээр бичигдсэн үнэнийг ойлгох чадваргүй болгох хэмжээтэй бишээ.” Ellen G. White, Selected Messages, book 1, p. 16. (Элен Уайт, Сонгомол мэдээнүүд) `Бид яагаад Библийг даруу зан, дуулгавартай байдлаар судлах ёстой вэ?`
160.357143
831
0.827171
khk_Cyrl
1.000008
f49e72fc7a65738d1b7e129b4f3602e64a618c33
4,193
md
Markdown
_posts/2012-04-25-devious-taxation.md
walterewilliams/walterewilliams.github.io
e230b2c7c9ea110ea21eef48aa3f904c6133580f
[ "MIT" ]
null
null
null
_posts/2012-04-25-devious-taxation.md
walterewilliams/walterewilliams.github.io
e230b2c7c9ea110ea21eef48aa3f904c6133580f
[ "MIT" ]
6
2020-02-26T22:19:39.000Z
2022-02-26T03:15:03.000Z
_posts/2012-04-25-devious-taxation.md
walterewilliams/walterewilliams.github.io
e230b2c7c9ea110ea21eef48aa3f904c6133580f
[ "MIT" ]
null
null
null
--- layout: post title: Devious Taxation excerpt: --- The Washington, D.C.-based Tax Foundation does a yeoman's job of keeping track of how much we're paying in taxes and who's paying what. It turns out that American taxpayers worked this year from Jan. 1 to April 17, 107 days, to earn enough money to pay their federal, state and local tax bills. That statistic requires some clarification, and I ask my readers to help me examine it. According to the Congressional Budget Office, Congress will spend $3.8 trillion this year, about 24 percent of our $15 trillion gross domestic product. But federal tax revenue will be much less, only $2.5 trillion, or 16 percent of the GDP. That means there's a shortfall of $1.3 trillion. Some people, including economists, say there's a deficit. That's true, but only in an accounting sense, not in any meaningful economic sense. Let's look at it. If Congress spends $3.8 trillion out of this year's $15 trillion GDP, what must it do to accomplish that goal? If you said it must "find a way to force us not to spend $3.8 trillion privately," go to the head of the class. One way to force us to spend $3.8 trillion less is to tax us that amount, but we're being taxed only $2.5 trillion. Where does the extra $1.3 trillion come from? It surely doesn't come from the tooth fairy, Santa Claus or the Easter Bunny. The fact of business is that if Congress spends $3.8 trillion of what we produce this year, it necessarily must force us to spend $3.8 trillion less privately this year. The most honest way to force us to do that is through taxation. Another way is to enter the bond market and make interest rates higher than they otherwise would be, thereby forcing us to spend less on private investment in homes and businesses. Then there is debasement of the currency through inflation, which is taxation by stealth. A common but misleading argument is that future generations of Americans will pay for today's spending. Think about it. Is it possible for someone who has yet to be born to pick up the tab for what we do today? Pose the question another way. When we fought World War II, were the resources used and the sacrifices made to fight the war produced between 1941 and 1945, or were they produced and sacrificed in 1980, 1990 or 2000? Subsequent generations benefited from our fighting the war by being born into a free nation. Congresses profligate spending will burden future generations through making them heirs to less capital and, hence, less wealth. The bottom line is that whatever Congress consumes this year is produced this year, not in 2020, 2030 or 2040. That means, in the real economic sense, the federal budget is always balanced. Instead of focusing on how the federal government has grown from 3 or 4 percent of our GDP — as it was from 1787 to 1920 — to today's 24 percent, our attention has been diverted to tax fairness demagoguery. Let's look at tax fairness. According to Internal Revenue Service data for 2009, available at http://www.ntu.org/tax-basics/who-pays-income-taxes.html, the top 1 percent of American income earners paid almost 37 percent of federal income taxes. The top 10 percent paid about 70 percent of federal income taxes, and the top 50 percent paid nearly 98 percent. Roughly 47 percent of Americans pay no federal income tax. Here's my fairness question to you: What standard of fairness dictates that the top 10 percent of income earners pay 70 percent of the income tax burden while 47 percent of Americans pay nothing? 
The fact that the income tax burden is distributed so unevenly produces great politically borne fiscal problems. People who pay little or no income taxes become natural constituents for big-spending politicians. After all, if you pay no income taxes, what do you care if income taxes are raised? Also, you won't be enthusiastic about tax cuts; you'll see them as a threat to your handouts. Walter E. Williams is a professor of economics at George Mason University. To find out more about Walter E. Williams and read features by other Creators Syndicate writers and cartoonists, visit the Creators Syndicate Web page at www.creators.com. COPYRIGHT 2012 CREATORS.COM
149.75
819
0.790365
eng_Latn
0.999808
f49e76803f27d37a5e808365092f6985917782e9
23,256
md
Markdown
README.md
X104n/rouge-like
3935b282fb28a71b1b1c996d0b94e4cc70b2ce68
[ "MIT" ]
null
null
null
README.md
X104n/rouge-like
3935b282fb28a71b1b1c996d0b94e4cc70b2ce68
[ "MIT" ]
null
null
null
README.md
X104n/rouge-like
3935b282fb28a71b1b1c996d0b94e4cc70b2ce68
[ "MIT" ]
null
null
null
# [Semesteroppgave 1: “Rogue One oh one”](https://git.app.uib.no/ii/inf101/21v/assignments) I semesteroppgaven skal du implementere et spill inspirert av [Rogue](information/rogue.md). Oppgaven skal leveres inn via GitLab innen **fredag 12. mars kl. 16:00**. *Hvis du ikke har brukt GitLab enda, bør du gå gjennom tidligere lab oppgaver.* Hvis du får mindre enn 6 poeng på én eller begge av semesteroppgaven **får du ikke ta eksamen**. Spillet er delvis skrevet; du skal endre eksisterende kode, legge til ny kode, og skrive tekst-svar. Det er 7 del-oppgaver i denne semesteroppgaven. For utfyllende forklaring av Java-konsepter see [Oracle Java Tutorial](https://docs.oracle.com/javase/tutorial/ information/konsepter.md). ## Bekreft at du har lest viktig informasjon Utfylende praktisk informasjon om semesteroppgaven og innlevering finner dere i [praktiskinfo.md](information/praktiskinfo.md). 👉 Les [praktiskinfo.md](information/praktiskinfo.md) og åpne java filen `inf101.GetStarted.java` for å bekrefte at du har lest informasjonen. JUnit testene vil ikke virke før du har gjort dette. `add-commit-push` ### Tester Det er mange tester som følger med semesteroppgaven, noen passerer (er grønne) og noen feiler (er røde). De fleste oppgavene har en eller flere tester som i utgangspunktet feiler og skal passere når oppgaven er gjort. Som alltid, så betyr ikke det at en test passerer nødvendigvis at alt er riktig, men det sier deg at du er på riktig vei. Vi anbefaler at du løser hver oppgave ved å først kjøre testen(e) og sjekke om de feiler. Så løser du oppgaven og kjører testen(e) på nytt og sjekker at de passerer. Så comitter du og pusher. Dette kalles [(testdrevet utvikling, TDD)](https://en.wikipedia.org/wiki/Test-driven_development). _Eclipse-tips: Forsvinner testene i stedet for å bli grønne? Trykk på ⋮menyen i JUnit tabben og slå av Show Failures Only – det er mye er motiverende å se at det dukker opp grønne bokser når vi får til noe!_ ## Oppgave 1 - Abstrakte Ting _I denne oppgaven skal du bli kjent med interfacet `IItem`. Der oppgavene ber om tekst-svar, skal du skrive disse i innleveringsfilen [Svar.md](Svar.md)._ ### 1.1) Rogue-“Ting” Les [beskrivelsen av Rogue 101](information/rogue.md). Hvordan ville du abstrahert “ting” fra beskrivelsen av Rogue-spillet? 👉 Skriv ned 5 egenskaper eller metoder du mener må være del av et interface som abstraherer “ting” på spillbrettet og en kort begrunnelse for hver av dem. Skriv svaret i [Svar.md](Svar.md). Ikke gå videre før du har skrevet ned svaret ditt under oppgave 1.1 i [Svar.md](Svar.md) og gjort `add-commit-push` i git. Du kan gå tilbake og endre på svaret senere. Poengsummen baseres på det _siste_ svaret som lastes opp. ### 1.2) IItem.java Åpne interfacet `IItem`. Sammenlikne metode-deklarasjonene i interfacet med de 5 egenskapene du skrev ned i 1.1 og med beskrivelsen av spillet fra [Rogue](information/rogue). 👉 Skriv en kort tekst i [Svar.md](Svar.md) som beskriver hvordan IItem *abstraherer* minst 5 egenskaper ved spill-elementer av typen “ting” i et Rogue-spill. Skriv svaret i [Svar.md](Svar.md). 🤔 - Selv om du kanskje skrev ned andre egenskaper enn det som ligger i IItem, så betyr det gjerne ikke at noen av delene er feil. For eksempel så har hver ting en posisjon, men i vår kode er det spill-kartet som holder styr på det, ikke tingen selv. Med et annet design kunne det like godt være en egenskap i IItem. _Tips: Hvis du vil endre på svar du allerede har pushet så kan du fritt gjøre det. Bare bruke en beskrivende commit-melding, f.eks. 
“Forbedret oppgave 1.1 etter gruppeleder forklarte abstraksjon.” Det er siste versjon før fristen som teller._ ### 1.3) Carrot.java Klassen `Carrot` implementerer interfacet `IItem` og representerer en Gulrot-“ting” på spillkartet. Et objekt av typen `Carrot` er altså på et vis både en abstraksjon av en _ekte_ gulrot, og av et spillobjekt fra Rogue. Åpne `Carrot`-klassen og se hvordan den implementerer metodene fra `IItem`. ✅ `ItemTest:testHandleDamage` Hvilke egenskaper ved en _ekte_ gulrot finnes i den abstrakte `Carrot`-klassen, og hvilke egenskaper har en gulrot som _ikke_ finnes i `Carrot`-klassen? 👉 List opp 3 egenskaper fra oppgave 1.2 som `Carrot`-klassen implementerer, og beskriv hvordan den implementerer dem. Skriv svaret i [Svar.md](Svar.md). 👉 List opp 1 egenskap ved en _ekte_ gulrot som er representert i `Carrot`-klassen og 1 some _ikke_ er det. Skriv svaret i [Svar.md](Svar.md). Metoden `Carrot::handleDamage()` er ikke rett implementert. Vi tenker oss at gulrøtter blir skadet når en Rabbit spiser på den. Rabbit gir gulroten beskjed om hvor mye den spiser ved å kalle `Carrot::handleDamage()` og Carrot sin health går ned tilsvarende (Denne notasjonen referer til metoden `handleDamage()` i klassen `Carrot`.) Du kan kjøre denne testen alene ved å høyreklikke på metodenavnet → Run As → JUnit Test). 👉 Implementer `handleDamage()`. Sjekk om den funker ved å kjøre `CarrotTest` og `IItemTest:testHandleDamage`. _Tips: I Eclipse kan du se dokumentasjonen til en metode (f.eks. handleDamage(), som er dokumentert i IItem) ved å la muspekeren hvile over metodenavnet. Ved implementasjonen av handleDamage() finner du også en liten trekant i margen som du kan trykke på for å gå til interfacet._ ### 1.4) Spillobjekter Hvilke andre klasser implementerer `IItem`? 👉 List opp alle klassene som implementerer dette interfacet. Skriv svaret i [Svar.md](Svar.md). _Tips: høyreklikk på IItem og velg Open Type Hierarchy for å få opp en liste av referanser til IItem-deklarasjonen._ (I IntelliJ heter det “Find usage”) ### 1.5) Gold.java Nå skal du utvide støtten for spill-objekter til å også kunne representere gull. 👉 Opprett en klasse `Gold.java` som implementerer interfacet `IItem` i samme mappe som `Carrot.java`. Det finnes tester i `IItemTest.java` for gull som krever at **klassen Gull har symbolet 'G'**. (se hvordan de andre typene IItem har implementert SYMBOL og gjør det likt.) Testene i `IItemTest.java` vil ikke virke ennå, vi skal jobbe videre med de testene i Oppgave 3. Men vær sikker på at du bruker rett symbol på Gold klassen ellers får du problemer i Oppgave 3. For å implementere metodene kan det være nyttig for deg å se på hvordan de andre klassene for spill-objekter fra 1.4 implementerer dem. `add-commit-push` *Protip: [default metoder](information/konsepter.md) trenger ikke å implementeres av sub-klasser, med mindre man ønsker annen funksjonalitet enn det som tilbys av default-implementasjonen.* Sjekk at filen `Gold.java` finnes i ditt online repositorie i samme mappe som `Carrot.java` **FØR** du går videre. Det vil spare deg for trøbbel og bugs senere. _Tips: Du kan velge mer eller mindre tilfeldige verdier for *max health* og *defence* – forløpig har vi ikke tenkt på om gull skal kunne skades eller angripes. For `getSize()`, bør du sette den til å være større enn andre items, slik at den blir synlig på kartet (det er den største tingen som blir vist / plukket opp)._ ## Oppgave 2 - The Rabbit _I Oppgave 1 ble du kjent med interfacet `IItem` og hva de ulike metodene brukes til. 
Vi skal fortsette med å se på interfacene for spillobjektene, og i denne oppgaven skal vi se på `IActor` og eksempler på en “aktor” i spillet vårt._ Husk at du kan alltids sjekke ut dokumentasjonen i linkene øverst i denne filen dersom du syns det er vanskelig å forstå hvordan disse interface-bitene henger sammen. ### 2.1) IActor.java Se på `IActor.java` i `rogue101.objects`-pakken. Legg merke til at `IActor` *utvider* et annet interface. 👉 Hvilket interface utvider IActor, og hva betyr dette for klasser som skal implementere `IActor`? Skriv et kort og konkret svar i `Svar.md`. _(Vanskelig? Vi minner nok en gang om dokumentasjons-linkene øverst i denne filen)._ ### 2.2) doTurn() Se på `Rabbit.java`. Hvordan bestemmer Rabbit hvilken vei den skal gå i `doTurn()`? 👉 Skriv et kort og konkret svar i `Svar.md`. ### 2.3) getPossibleMoves() Et naturlig spørsmål en `IActor` kan stille kartet (via `IGameView`) er “Hvilke muligheter har jeg til å bevege meg?”. Metoden `GameMap::getPossibleMoves` gir svar på dette ved å returnere en liste med de retningene som en rolle har *lov* til å gå. Se eksempel på bruk av denne i `Rabbit::performMove()`. Per nå så returnerer `getPossibleMoves` bare en liste med retningen 'EAST'. 👉 Implementer metoden `GameMap::getPossibleMoves`. ✅ `GameMapTest:testGetPossibleMoves` skal passere når du er ferdig med denne oppgaven. 🤔 Ligner dette på noe du har gjort på tidligere ukeoppgave? _Tips:Det eksisterer allerede en metode `GameMap::canGo`._ ### 2.4) Finne gulrot I `doTurn()` flytter Rabbit på seg dersom den ikke allerede har brukt opp turen på å spise noe. Som du ser er ikke flyttingen så veldig smart – hva om det ligger en gulrot rett ved siden av kaninen? Metoden `doTurn()` tar et argument av typen `IGameView`, som er et relativt stort interface. Gjør deg kjent med `IGameView`. Hvordan kan en `IActor` bruke `IGameView` til å hente informasjon om miljøet sitt og utføre handlinger som påvirker andre elementer i spillet? Du trenger ikke å skrive svaret i svar-filen, men merk at forståelsen din av `IGameView` vil ha mye å si for resten av innleveringen. _Tips: Hvis du er i `Rabbit` så kan du `ctrl`/`cmd`-klikke på_ `IGameView` _for å hoppe dit._ 👉 Gjør Rabbit (litt) smartere ved å se om det ligger en gulrot på en av de ledige plassene ved siden av den på brettet, og gå dit dersom det gjør det. ✅ `RabbitTest` skal passere når du er ferdig med denne oppgaven. ## Oppgave 3 - Objektfabrikken _I denne oppgaven skal du se på hvordan IItems blir opprettet og lagt til på brettet. Du skal utvide spillet til å støtte gull-objektene fra Oppgave 1 og endre koden så den følger et viktig objekt-orientert design prinsipp._ ### 3.1 ItemFactory.java For å lage nye objekter av en klasse i Java så må vi kalle på konstruktøren til klassen. Hvis vi vet at vi trenger en gulrot kaller vi på `new Carrot()`, og hvis vi vet at vi trenger en edderkopp kaller vi på `new Spider()`. Når spillelementene skal bygges er det derfor viktig å få tak i riktig konstruktør. For å løse dette uten å rote til koden vår med referanser til konkrete klasser og symboler, bruker vi et kjent _Design Pattern_ som heter _Factory Pattern_ . Et _Design Pattern_ er en standardisert måte å løse et problem som stadig dukker opp når man programmerer objektorientert - uavhengig av programmeringsspråk. De gjør at man ikke må finne opp hjulet på nytt, og gjør det også lettere for andre å forstå hva du har gjort ettersom de gjerne har sett _mønsteret_ før. 
(Sjekk gjerne ut den populære boken [Design Patterns](https://en.wikipedia.org/wiki/Design_Patterns)) Factory Pattern går ut på å ha en metode i en “Factory”-klasse som vet vet hvilken konstruktør den skal velge avhengig av argumentet den får. I vårt tilfelle har vi en klasse `ItemFactory` som gjør dette. 👉 Hvilket symbol representerer et `Player`-objekt og hvilken representerer `Dust`? Skriv svaret i [Svar.md](Svar.md) og i hvilken klasse og metode du fant svaret. Fabrikken mangler et valg for å legge til `Dust`. ✅ `IItemTest::testItemFactoryCreatesDust()` ✅ `IItemTest::testItemFactoryCreatesGold()` 👉 Legg til støtte i fabrikken for å opprette Dust-objekter. 👉 Legg til støtte i fabrikken for å opprette Gull-objekter. ✅ Når du er ferdig med denne oppgaven skal alle testene i `IItemTest` passere.` ### 3.2 S.O.L.I.D. [SOLID](https://en.wikipedia.org/wiki/SOLID) er en forkortelse for fem prinsipper som gjør objektorientert kode forståelig, fleksibel og lett å vedlikeholde. Det første prinsippet – prinsippet om _Single Responsibility_ – sier at: > “Each class should have a single responsibility and that responsibility should only lie with that class.” Hvis vi ønsker å endre symbolet til f.eks. Rabbit fra `'R'` til `'r'` i hele programmet så måtte vi gjort endringer i mer enn én klasse. 👉 Hvorfor måtte vi endret på mer enn én klasse for å endre symbolet til Rabbit? Skriv svaret i [Svar.md](Svar.md). Når vi må endre flere klasser for å gjøre én endring betyr at vi har brutt det første prinsippet i SOLID. Grunnen til at dette ikke er bra, er at hvis Rolf som ikke kjenner koden skal gjøre denne endringen om 3 år, og bare gjør endringen i én av klassene (hvordan kunne ha visst at han måtte endre flere plasser?) så ville det vært en feil i programmet. 👉 Endre koden i ItemFactory slik at den ikke _inneholder_ informasjon om hvilket symbol som hører til hvilken klasse. (Merk at med inneholder så mener vi at symbolet er hardkodet i klassen). 👉 Hvorfor er problemet med _Single Responsibility_ nå fikset? Skriv en kort forklaring i `Svar.md`. ### 3.3 Gold.java I denne oppgaven skal du legge til støtte for spillobjekter av typen Gold. Du må ha gjort oppgave 1 og tidligere deler av oppgave 3 for å kunne løse denne oppgaven. Åpne IItemTest i pakken `inf101.rogue101.objects`. 👉 Legg til et nytt objekt av typen Gold i `IItemTest::getTestData()`-metoden etter samme mønster som for de andre objektene. Kjør testene. 👉 Finn filen `level1.txt` i `resources/inf101/rouge101/map/maps`, åpne den og erstatt noen av symbolene med gull-symbolet du valgte i Oppgave 1. Lagre filen. Kjør programmet. Gull-symbolene skal vises på skjermen der du la dem inn i kartet. ## Oppgave 4 - Et smartere kart ### 4.1 getNeighbourhood Et annet spørsmål som `IActor`s kan stille kartet er hva som befinner seg i *området* rundt seg. Til dette har vi en metode `GameMap::getNeighbourhood` som tar en lokasjon og et heltall `dist` anse som argument, og returnerer alle lokasjonene innen `dist` steg fra lokasjonen. F.eks. dersom en rolle står på en `loc`, og - spør etter nabofeltet med `dist=1`, så skal de 8 feltene rundt `loc` returneres. - spør om nabofeltet med `dist=2`, så skal de 8 lokasjonene rundt `loc` returneres sammen med de 16 lokasjonene som er et steg lenger ut. - osv. 👉 Implementer metoden `GameMap::getNeighbourhood`. 
✅ `GameMapTest::testGetNeighbourhoodCardinality` ✅ `GameMapTest::testGetNeighbourhoodEdgeCardinality` ### 4.2 En bedre getNeighbourhood Når en rolle lurer på hvilke lokasjoner som befinner seg i nærheten, så er den ikke interessert i lokasjoner med vegger. 👉 Forbedre metoden `GameMap::getNeighbourhood` slik at den ikke returnerer lokasjoner som er vegger. ✅ `GameMapTest::testGetNeighbourhoodDoesNotReturnWall` ### 4.3 Sort Neighbourhood Gå til `IGameMap::getNeighbourhood` og sorter listen med lokasjoner før den returneres. `IList::sort` trenger en `Comparator<Location>`, her kan du bruke `LocationComparator` som tar en lokasjon og sammenligner distansene til to andre lokasjoner. Du kan bruke: `Collections.sort(neighborhood,new LocationComparator(loc));` 👉 Forbedre metoden `GameMap::getNeighbourhood` slik at den returnerer lokasjoner i sortert rekkefølge. ✅ `GameMapTest::testGetNeighbourhoodSorted` ### 4.4 getReachable Noen ganger kan en lokasjon være nært men vanskelig å nå fordi andre IItem er i veien. Du skal finne de lokasjonene som er mulig å nå på `dist` antall steg. Dette kan være vanskelig og vi regner ikke med at alle får til denne oppgaven. Du kan gjøre resten av oppgavene selv om denne oppgaven ikke er ferdig. 👉 Implementer metoden `GameMap:getReachable` slik at den returnerer de lokasjoner som er mulig å gå til på `dist` antall steg. _Tips: For å finne de lokasjonene som kan nåes på 1 steg kan du kanskje gjenbruke noe fra_ `getPossibleMoves()` Er det lurt med en helpemetode `List<Location> expand(List<Location> found)` som legger til alle lokasjoner du kan nå med et ekstra steg? ✅ `GameMapTest::testGetReachableDoesNotWalkPastWalls` ## Oppgave 5 - Smartere kaniner La oss gå tilbake til Rabbit. I Oppgave 2 gjorde du denne litt smartere enn den var ved å se om det eksisterte gullrøtter i en av lokasjonene den kunne nå med et move. Nå som vi vet litt mer om Game og GameMap kan vi gjøre den enda smartere ved å se om det eksisterer noen gullrøtter i nærheten av en Rabbit. ### 5.1 Test Rabbit strategy Kjør testene til prosjektet, og sjekk hvor mange nivåer kaninen din klarer seg på i `TestRabbitStrategy`. Merk at ettersom kaninen din kanskje oppfører seg litt “tilfeldig”, så kan testresultatet variere fra gang til gang. Vi ønsker å gjøre kaninen enda smartere ved å lukte etter gullrøtter i nærheten ved å bruke metodene fra oppgave 4 som gir alle items som er maks `dist` steg unna. Men før vi kommer så langt skal vi se om vi kan finne ut hva kaninene gjør. 👉 Gå til `inf101.rouge101.Main` og bytt om hvilken applikasjon som kjøres og se hvordan din kanin gjør det. _Tips: Kaniner skal spise opp guleroten og få energi av det, så skal gulleroten forsvinne. Det er store gullerøtter så kaninen spiser ikke hele gulleroten på en gang, noe blir liggende igjen og så kommer den kanskje igjen senere for å sise resten. Hvis gulleroten ikke forsvinner er det kanskje noe galt i gullerot klassen?_ ### 5.2 Get direction Bruk `IGameMap::getNeighbourhood` eller `IGameMap::getReachable` fra oppgave 4 til å hente alle synlig lokasjoner fra kaninen sin posisjon, og sjekk om det ligger noe gullrøtter i nærheten. Beveg Rabbit i retning av gulroten dersom den kan se noen. Det er altså 3 steg du må gjøre: * Finn en gullerot i nærheten * Finn en retning som går mot denne gulleroten. * Hvis kaninen kan hoppe denne veien, så gjør den det. Dette kan erstatte eller komplimentere at kaninen ser etter en gulrot ved siden av seg. 
👉 Implementer hjelpemetoden `Location::directionTo` og gjør slik at din Rabbit bruker metoden `IGameView::getDirectionTo`. ### 5.3 Rabbit AI Kjør testene på nytt, og se om kaninen klarer seg bedre nå. Klarer du å få enda flere tester i `TestRabbitStrategy` til å passere så kan det gi god uttelling for kreativitet. (Du trenger ikke få alle testene til å passere for å få full pott på denne.) 👉 Forbedre Rabbit sin AI. Skriv i `svar.md` hva du har gjort/prøvd/tenkt. _Tips 1: Poenget er ikke å gi Rabbit superkrefter, det er å få Rabbit til å oppføre seg normalt i spillet. Du har kanskje sett spill der en AI sitter fast og bare går rett i en vegg eller lignende? Det ser ikke profft ut._ _Tips 2:_ `GameView::getNearbyItems` _er allerede implementert, men kanskje du kan gjøre en liten endring på implementasjonen for å gjøre den litt mer anvennelig?_ ## Oppgave 6 - Player klassen Player-klassen er litt mer avansert enn de andre klassene. Denne klassen har blant annet en `keyPressed`-metode som ser på input fra tastaturet, og velger hva den skal gjøre ut i fra hvilken tast brukeren har trykket på. Tasten `P` og `D` skal henholdsvis plukke opp og legge fra seg ting den står på på kartet. Det finnes en del nyttige metoder i `IGameView` som du kan få bruk for her. `IGameView` kan f.eks. la deg plukke en spesifikk ting fra lokasjonen du står på (gitt at tingen ligger på denne lokasjonen), la deg legge en ting på kartet, og la deg spørre om hvilke ting (som ikke er `IActor`) som finnes på den lokasjonen du står på. I denne oppgaven får du *noe* uttelling for at Player kan plukke opp 1 ting, og *full* uttelling dersom Player kan plukke opp flere ting. Hvis Player kun kan plukke opp 1 ting, så må den legge dette fra seg dersom den prøver å plukke opp noe annet (slik at tingen han hadde ikke forsvinner fra spillet). ### 6.1 Player pickUp “Plukk opp” skal prøve å plukke opp den første tingen den finner på lokasjonen den står på. (Igjen, hvis din Player kun kan holde 1 ting, så skal denne tingen legges tilbake på kartet, og den nye tingen skal plukkes opp). “Har ting” sjekker om spilleren holder ett spesifikt objekt. NB: objekter er like hvis de *er* det samme objektet (objekt1 == objekt2). Metodene er delvis implementert men fungerer ikke helt, kan du finne ut hva som mangler? 👉 Gjør ferdig `Player::pickUp`. 👉 Implementer `Player::hasItem`. ### 6.2 Player drop “Dropp” skal legge fra seg tingen den har på lokasjonen den står på. Hvis den holder på flere ting skal den legge fra seg *den første* tingen den plukket opp. 👉 Implementer `Player::drop`. ### 6.3 Player status Nå kan den som spiller se hvor mange liv spilleren har ettersom `showStatus`-metoden informerer om dette. Vi ønsker å også kunne se hva spilleren eventuelt har plukket opp. 👉 Endre `showStatus`-metoden i `Player`-klassen til å *også* si hvilken ting spilleren har dersom den har noe. Hvis spilleren har plukket opp flere ting kan disse listes, komma-seperert. (Hold statusmeldingen til én linje - ikke bruk newline-karakter). Eks: > Player has 100 hp left holding items(s) carrot, carrot ## Oppgave 7 - Utvid programmet Oppgave 1-6 har hjulpet deg med å bli kjent med de ulike spillelementene og hvordan de interegarer med spillet, med spillkartet og med hverandre. Herfra og ut kan du gjøre spillet til ditt eget, og det er bare kreativiteten som setter grenser for hva du kan gjøre. Legg merke til at Carrot, Rabbit og Spider ligger i en pakke `objects`. Disse er eksempler på spillobjekter, og ikke noen “fasit” på hvordan spillet skal være. 
Du skal sørge for at utvidelsen av spillet ikke ødelegger eksisterende kode, det vil si at du skal få testene til å passere selv om du endrer på spillet. Utvid spillet med en funksjonalitet. Gjør det på følgende måte: 1. Skriv en liten plan for utvidelsen i `Svar.md` 2. Opprett de klassene du trenger og lag noen JUnit-tester for planen din 3. Fullfør implementasjonen. Sørg for å ha ryddige kommentarer og gode navn på klasser/metoder/variabler. 4. Skriv kort om uførelsen i `Svar.md`. Det er bare kjekt om du legger inn flere funksjonaliteter/utvidelser. Pass på å få med det du har gjort i `Svar.md` slik at den som retter legger merke til det og kan gi poeng for det. Men det er begrenset hvor mange poeng du kan få for denne oppgaven så du trenger ikke gjøre så veldig mye. Vi ønsker å se **ny** funksjonalitet så tenk litt på hva du kan få til, det er ikke antall kodelinjer som avgjør poengsum på denne oppgaven. Kreativitet blir belønnet. Trenger du litt starthjelp? Her er noen eksempler på noen utvidelser man kan ha (men ikke la disse sette begrensinger for hva du kan gjøre): - Legg til en ny type IActor med helt annen oppførsel enn de som finnes. Hvordan interagerer denne rollen med andre roller og ting på kartet? - Utvide Rabbit til å kunne parre seg dersom den står inntil et annet Rabbit, litt ala Game Of Life. Dette bør være noen kriterier for at dette skal skje for å unngå å dekke hele kartet med kaniner. - Legg til nye IItem som spilleren (og evt. andre) kan plukke opp for å bli bedre, f.eks. sverd gir økt attack og damage, rustning gir økt defence, etc. - Sjekk ut støtten for Emoji ved å se på klassen `EmojiFactory` - Sjekk ut `level1.txt` i `map.maps`-pakken, og lag et eller flere nye kart her. Du bruker kartet ved å endre strengen `maps/level1.txt` i `Game`-konstruktøren. (Nytt kart alene gir ikke full uttelling, men kan være gøy å gjøre sammen med noe annet). - Det er en gammeldags løsning å spille med piltastene. De fleste dataspill støtter i dag WASD (W: nord, A: vest, S: sør, D: øst), så kanskje vi kan støtte begge deler? og noe lettere enn å bruke 'P' og 'D'? (Obs husk å endre tester også.)
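As a rough illustration of the step-limited search described in task 4.4 above, here is a small Java sketch of a `getReachable`-style breadth-first expansion. It is not the course's solution and does not use the real INF101 interfaces: the `Location` record, the boolean grid, and the method signature are stand-ins, and the actual `GameMap` in the repository exposes different types.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ReachableSketch {

    /** Stand-in for the course's Location type. */
    record Location(int row, int col) {}

    /** Locations reachable in at most dist steps, walking only on open cells (8 directions). */
    static List<Location> getReachable(boolean[][] open, Location start, int dist) {
        Set<Location> found = new HashSet<>();
        ArrayDeque<Location> frontier = new ArrayDeque<>();
        found.add(start);
        frontier.add(start);
        for (int step = 0; step < dist; step++) {
            // Expand every location found so far by one extra step (cf. the suggested expand() helper).
            ArrayDeque<Location> next = new ArrayDeque<>();
            for (Location loc : frontier) {
                for (int dr = -1; dr <= 1; dr++) {
                    for (int dc = -1; dc <= 1; dc++) {
                        if (dr == 0 && dc == 0) continue;
                        Location n = new Location(loc.row() + dr, loc.col() + dc);
                        boolean inside = n.row() >= 0 && n.row() < open.length
                                && n.col() >= 0 && n.col() < open[0].length;
                        if (inside && open[n.row()][n.col()] && found.add(n)) {
                            next.add(n); // only newly discovered cells join the next frontier
                        }
                    }
                }
            }
            frontier = next;
        }
        found.remove(start);
        return new ArrayList<>(found);
    }

    public static void main(String[] args) {
        boolean[][] open = {
                {true, true, false, true},
                {true, false, false, true},
                {true, true, true, true},
        };
        System.out.println(getReachable(open, new Location(0, 0), 2));
    }
}
```

With `dist = 1` the expansion collapses to the set of legal single moves, which is why the assignment hints that `getPossibleMoves()` can be reused as a building block.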
68.4
579
0.77193
nob_Latn
0.998558
f49e8a1854ccdeb10a1e1f6c942d977b966f0d37
1,296
md
Markdown
docs/vsto/getvstosolutionmetadata-function.md
MicrosoftDocs/visualstudio-docs.pl-pl
64a8f785c904c0e158165f3e11d5b0c23a5e34c5
[ "CC-BY-4.0", "MIT" ]
2
2020-05-20T07:52:54.000Z
2021-02-06T18:51:42.000Z
docs/vsto/getvstosolutionmetadata-function.md
MicrosoftDocs/visualstudio-docs.pl-pl
64a8f785c904c0e158165f3e11d5b0c23a5e34c5
[ "CC-BY-4.0", "MIT" ]
8
2018-08-02T15:03:13.000Z
2020-09-27T20:22:01.000Z
docs/vsto/getvstosolutionmetadata-function.md
MicrosoftDocs/visualstudio-docs.pl-pl
64a8f785c904c0e158165f3e11d5b0c23a5e34c5
[ "CC-BY-4.0", "MIT" ]
16
2018-01-29T09:30:06.000Z
2021-10-09T11:23:54.000Z
--- title: GetVstoSolutionMetadata, funkcja description: Dowiedz się, jak interfejs API GetVstoSolutionMetadata obsługuje infrastrukturę Office i nie jest przeznaczony do bezpośredniego działania w kodzie. ms.custom: SEO-VS-2020 ms.date: 02/02/2017 ms.topic: reference dev_langs: - VB - CSharp author: John-Hart ms.author: johnhart manager: jmartens ms.technology: office-development ms.workload: - office ms.openlocfilehash: 14cf1d9ab6c8c3a734caa737b99a5edaededb94f ms.sourcegitcommit: b12a38744db371d2894769ecf305585f9577792f ms.translationtype: MT ms.contentlocale: pl-PL ms.lasthandoff: 09/13/2021 ms.locfileid: "126717853" --- # <a name="getvstosolutionmetadata-function"></a>GetVstoSolutionMetadata, funkcja Ten interfejs API obsługuje Office i nie jest przeznaczony do bezpośredniego działania z kodu. ## <a name="syntax"></a>Składnia ```csharp HRESULT WINAPI GetVstoSolutionMetadata( LPCWSTR lpwszSolutionMetadataKey, ISolutionMetadata** ppSolutionInfo ); ``` ### <a name="parameters"></a>Parametry |Parametr|Opis| |---------------|-----------------| |*lpwszSolutionMetadataKey*|Nie używaj.| |*ppSolutionInfo*|Nie używaj.| ## <a name="return-value"></a>Wartość zwracana Jeśli funkcja powiedzie się, zwraca wartość **S_OK**. Jeśli funkcja ulegnie awarii, zwraca kod błędu.
29.454545
161
0.770062
pol_Latn
0.938986
f49e99444256b32bdb79c68049b5b113a90615c0
1,814
md
Markdown
README.md
Phadated/csye6225-fall2018
5259f6e4962f3f62df7804a1e8c54bc8f78acb3f
[ "Apache-2.0" ]
null
null
null
README.md
Phadated/csye6225-fall2018
5259f6e4962f3f62df7804a1e8c54bc8f78acb3f
[ "Apache-2.0" ]
null
null
null
README.md
Phadated/csye6225-fall2018
5259f6e4962f3f62df7804a1e8c54bc8f78acb3f
[ "Apache-2.0" ]
1
2020-02-26T20:26:36.000Z
2020-02-26T20:26:36.000Z
# csye6225-fall2018 ## 1.Team member information ### Palak Sharma</br> - **Email Id** - sharma.pala@husky.neu.edu - **NUID** - 001834478 ### Garvit Chawla </br> - **Email Id** - chawla.g@husky.neu.edu - **NUID** - 001859169 ### Dhanisha Phadate</br> - **Email Id** - phadate.d@husky.neu.edu - **NUID** - 001859234 ## 2.Prerequisites for building and deploying your application locally. </br> Software: NodeJS, AngularJS, MariaDB Initial Commands - ``` npm install ``` To start db server: ``` mysql.server start ``` To get into mariadb terminal: ``` mysql -u root ``` ## 3.Build and Deploy instructions for web application. </br> Running on: https://localhost:4000 ## 4.Instructions to run unit, integration and/or load tests. </br> a. Post request</br> Result- Register the user, with username as email and password. b. Post request</br> Result- Log in to the app with the credentials (hashed passwords will be compared) c. Post request</br> Result- Go to the transactions page, and enter all the values d. Get request</br> Result- Get the transactions for the user. e. Perform CRUD operations on the transactions (Create, Delete, Update and Read) FOR EACH TRANSACTION: With: NODE_ENV=dev node server.js (uploading the images to the S3 bucket, for which credentials are stored in ENV variables) With: node server.js (storing the attachments locally) a. POST request : Add a png or jpeg file displaying the receipt of the transaction. b. GET request : Display all the added attachments for the particular transaction once the user clicks on the hyperlink. c. PUT request : Request to update the existing attachment with a new one. d. DELETE request : Delete the attachment for the particular transaction. f. Added authentication to destroy the session of the user on "LOGOUT".
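The test steps in this README are described only informally, so a concrete request sketch has to guess at route names. Below is a minimal Java `HttpClient` example of the register/login flow against the local server; the paths `/register` and `/login`, the JSON field names, and the assumption that the local HTTPS certificate is trusted are all placeholders not specified by the README (only the `https://localhost:4000` base address is).

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WebAppSmokeTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String base = "https://localhost:4000"; // address the README says the app runs on
        String user = "{\"username\":\"student@husky.neu.edu\",\"password\":\"secret\"}";

        // 1) Register a user (route name is a placeholder).
        HttpRequest register = HttpRequest.newBuilder(URI.create(base + "/register"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(user))
                .build();
        System.out.println("register -> "
                + client.send(register, HttpResponse.BodyHandlers.ofString()).statusCode());

        // 2) Log in with the same credentials (the hashed-password comparison happens server side).
        HttpRequest login = HttpRequest.newBuilder(URI.create(base + "/login"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(user))
                .build();
        System.out.println("login -> "
                + client.send(login, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}
```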
28.793651
120
0.725469
eng_Latn
0.895405
f49fb2aaebe94f42e82010dc7428044dfd6b6b62
8,914
md
Markdown
treebanks/pcm_nsc/pcm_nsc-pos-AUX.md
EmanuelUHH/docs
641bd749c85e54e841758efa7084d8fdd090161a
[ "Apache-2.0" ]
null
null
null
treebanks/pcm_nsc/pcm_nsc-pos-AUX.md
EmanuelUHH/docs
641bd749c85e54e841758efa7084d8fdd090161a
[ "Apache-2.0" ]
null
null
null
treebanks/pcm_nsc/pcm_nsc-pos-AUX.md
EmanuelUHH/docs
641bd749c85e54e841758efa7084d8fdd090161a
[ "Apache-2.0" ]
null
null
null
--- layout: base title: 'Statistics of AUX in UD_Naija-NSC' udver: '2' --- ## Treebank Statistics: UD_Naija-NSC: POS Tags: `AUX` There are 10 `AUX` lemmas (1%), 10 `AUX` types (1%) and 908 `AUX` tokens (7%). Out of 17 observed tags, the rank of `AUX` is: 15 in number of lemmas, 15 in number of types and 5 in number of tokens. The 10 most frequent `AUX` lemmas: <em>dey, go, make, don, come, fit, neva, for, de, will</em> The 10 most frequent `AUX` types: <em>dey, go, make, don, come, fit, neva, for, de, will</em> The 10 most frequent ambiguous lemmas: <em>dey</em> (<tt><a href="pcm_nsc-pos-AUX.html">AUX</a></tt> 322, <tt><a href="pcm_nsc-pos-VERB.html">VERB</a></tt> 89, <tt><a href="pcm_nsc-pos-PART.html">PART</a></tt> 5), <em>go</em> (<tt><a href="pcm_nsc-pos-AUX.html">AUX</a></tt> 234, <tt><a href="pcm_nsc-pos-VERB.html">VERB</a></tt> 76), <em>make</em> (<tt><a href="pcm_nsc-pos-AUX.html">AUX</a></tt> 132, <tt><a href="pcm_nsc-pos-VERB.html">VERB</a></tt> 27), <em>come</em> (<tt><a href="pcm_nsc-pos-VERB.html">VERB</a></tt> 63, <tt><a href="pcm_nsc-pos-AUX.html">AUX</a></tt> 54), <em>for</em> (<tt><a href="pcm_nsc-pos-ADP.html">ADP</a></tt> 174, <tt><a href="pcm_nsc-pos-AUX.html">AUX</a></tt> 3, <tt><a href="pcm_nsc-pos-SCONJ.html">SCONJ</a></tt> 2), <em>de</em> (<tt><a href="pcm_nsc-pos-PRON.html">PRON</a></tt> 5, <tt><a href="pcm_nsc-pos-AUX.html">AUX</a></tt> 1) The 10 most frequent ambiguous types: <em>dey</em> (<tt><a href="pcm_nsc-pos-AUX.html">AUX</a></tt> 322, <tt><a href="pcm_nsc-pos-VERB.html">VERB</a></tt> 89, <tt><a href="pcm_nsc-pos-PART.html">PART</a></tt> 5), <em>go</em> (<tt><a href="pcm_nsc-pos-AUX.html">AUX</a></tt> 234, <tt><a href="pcm_nsc-pos-VERB.html">VERB</a></tt> 76), <em>make</em> (<tt><a href="pcm_nsc-pos-AUX.html">AUX</a></tt> 132, <tt><a href="pcm_nsc-pos-VERB.html">VERB</a></tt> 27), <em>come</em> (<tt><a href="pcm_nsc-pos-VERB.html">VERB</a></tt> 63, <tt><a href="pcm_nsc-pos-AUX.html">AUX</a></tt> 54), <em>for</em> (<tt><a href="pcm_nsc-pos-ADP.html">ADP</a></tt> 174, <tt><a href="pcm_nsc-pos-AUX.html">AUX</a></tt> 3, <tt><a href="pcm_nsc-pos-SCONJ.html">SCONJ</a></tt> 2), <em>de</em> (<tt><a href="pcm_nsc-pos-PRON.html">PRON</a></tt> 5, <tt><a href="pcm_nsc-pos-AUX.html">AUX</a></tt> 1) * <em>dey</em> * <tt><a href="pcm_nsc-pos-AUX.html">AUX</a></tt> 322: <em># na farmer dem >+ <b>dey</b> happy pass # when rain fall like dis //</em> * <tt><a href="pcm_nsc-pos-VERB.html">VERB</a></tt> 89: <em># derefore < # na our hand >+ di matter come <b>dey</b> now //</em> * <tt><a href="pcm_nsc-pos-PART.html">PART</a></tt> 5: <em>but I <b>dey</b> hungry now //</em> * <em>go</em> * <tt><a href="pcm_nsc-pos-AUX.html">AUX</a></tt> 234: <em>any time wey rain fall like dis < # everywhere <b>go</b> just cool well well //</em> * <tt><a href="pcm_nsc-pos-VERB.html">VERB</a></tt> 76: <em># make God no let our { property |c or life } follow rain <b>go</b> like dat //</em> * <em>make</em> * <tt><a href="pcm_nsc-pos-AUX.html">AUX</a></tt> 132: <em># <b>make</b> we talk true sef o //</em> * <tt><a href="pcm_nsc-pos-VERB.html">VERB</a></tt> 27: <em># because # na im >+ we go take <b>make</b> di move # to get di bread //</em> * <em>come</em> * <tt><a href="pcm_nsc-pos-VERB.html">VERB</a></tt> 63: <em># ey but tory don <b>come</b> //</em> * <tt><a href="pcm_nsc-pos-AUX.html">AUX</a></tt> 54: <em># but na landlord dem >+ di matter <b>come</b> concern well well o //</em> * <em>for</em> * <tt><a href="pcm_nsc-pos-ADP.html">ADP</a></tt> 174: <em># make dem dig gutter put <b>for</b> where 
rain wata suppose to dey pass //</em> * <tt><a href="pcm_nsc-pos-AUX.html">AUX</a></tt> 3: <em>I <b>for</b> show you > serious //</em> * <tt><a href="pcm_nsc-pos-SCONJ.html">SCONJ</a></tt> 2: <em>you know di kind danger wey <b>for</b> don happen to am for road as im dey go house //</em> * <em>de</em> * <tt><a href="pcm_nsc-pos-PRON.html">PRON</a></tt> 5: <em>"uh" <b>de</b> say [ <b>de</b> want to & ] //</em> * <tt><a href="pcm_nsc-pos-AUX.html">AUX</a></tt> 1: <em># but shey una kuku sabi as anything wey our people organise <b>de</b> be <+ # motor go spoil //= dem no go repair am //</em> ## Morphology The form / lemma ratio of `AUX` is 1.000000 (the average of all parts of speech is 1.000000). The 1st highest number of forms (1) was observed with the lemma “come”: <em>come</em>. The 2nd highest number of forms (1) was observed with the lemma “de”: <em>de</em>. The 3rd highest number of forms (1) was observed with the lemma “dey”: <em>dey</em>. `AUX` does not occur with any features. ## Relations `AUX` nodes are attached to their parents using 17 different relations: <tt><a href="pcm_nsc-dep-aux.html">aux</a></tt> (863; 95% instances), <tt><a href="pcm_nsc-dep-conj-dicto.html">conj:dicto</a></tt> (9; 1% instances), <tt><a href="pcm_nsc-dep-root.html">root</a></tt> (8; 1% instances), <tt><a href="pcm_nsc-dep-cop.html">cop</a></tt> (6; 1% instances), <tt><a href="pcm_nsc-dep-aux-pass.html">aux:pass</a></tt> (4; 0% instances), <tt><a href="pcm_nsc-dep-acl-cleft.html">acl:cleft</a></tt> (3; 0% instances), <tt><a href="pcm_nsc-dep-acl-relcl.html">acl:relcl</a></tt> (3; 0% instances), <tt><a href="pcm_nsc-dep-advcl.html">advcl</a></tt> (2; 0% instances), <tt><a href="pcm_nsc-dep-parataxis-obj.html">parataxis:obj</a></tt> (2; 0% instances), <tt><a href="pcm_nsc-dep-amod.html">amod</a></tt> (1; 0% instances), <tt><a href="pcm_nsc-dep-ccomp.html">ccomp</a></tt> (1; 0% instances), <tt><a href="pcm_nsc-dep-compound-redup.html">compound:redup</a></tt> (1; 0% instances), <tt><a href="pcm_nsc-dep-compound-svc.html">compound:svc</a></tt> (1; 0% instances), <tt><a href="pcm_nsc-dep-conj-coord.html">conj:coord</a></tt> (1; 0% instances), <tt><a href="pcm_nsc-dep-csubj-quasi.html">csubj:quasi</a></tt> (1; 0% instances), <tt><a href="pcm_nsc-dep-orphan.html">orphan</a></tt> (1; 0% instances), <tt><a href="pcm_nsc-dep-xcomp.html">xcomp</a></tt> (1; 0% instances) Parents of `AUX` nodes belong to 11 different parts of speech: <tt><a href="pcm_nsc-pos-VERB.html">VERB</a></tt> (828; 91% instances), <tt><a href="pcm_nsc-pos-ADJ.html">ADJ</a></tt> (24; 3% instances), <tt><a href="pcm_nsc-pos-AUX.html">AUX</a></tt> (15; 2% instances), <tt><a href="pcm_nsc-pos-NOUN.html">NOUN</a></tt> (9; 1% instances), (8; 1% instances), <tt><a href="pcm_nsc-pos-PUNCT.html">PUNCT</a></tt> (7; 1% instances), <tt><a href="pcm_nsc-pos-ADV.html">ADV</a></tt> (6; 1% instances), <tt><a href="pcm_nsc-pos-X.html">X</a></tt> (5; 1% instances), <tt><a href="pcm_nsc-pos-PRON.html">PRON</a></tt> (3; 0% instances), <tt><a href="pcm_nsc-pos-PART.html">PART</a></tt> (2; 0% instances), <tt><a href="pcm_nsc-pos-SYM.html">SYM</a></tt> (1; 0% instances) 860 (95%) `AUX` nodes are leaves. 12 (1%) `AUX` nodes have one child. 14 (2%) `AUX` nodes have two children. 22 (2%) `AUX` nodes have three or more children. The highest child degree of a `AUX` node is 8. 
Children of `AUX` nodes are attached using 15 different relations: <tt><a href="pcm_nsc-dep-punct.html">punct</a></tt> (69; 51% instances), <tt><a href="pcm_nsc-dep-conj-dicto.html">conj:dicto</a></tt> (17; 13% instances), <tt><a href="pcm_nsc-dep-nsubj.html">nsubj</a></tt> (16; 12% instances), <tt><a href="pcm_nsc-dep-advmod.html">advmod</a></tt> (6; 4% instances), <tt><a href="pcm_nsc-dep-xcomp.html">xcomp</a></tt> (6; 4% instances), <tt><a href="pcm_nsc-dep-aux.html">aux</a></tt> (4; 3% instances), <tt><a href="pcm_nsc-dep-obj.html">obj</a></tt> (4; 3% instances), <tt><a href="pcm_nsc-dep-mark.html">mark</a></tt> (3; 2% instances), <tt><a href="pcm_nsc-dep-orphan.html">orphan</a></tt> (3; 2% instances), <tt><a href="pcm_nsc-dep-advcl-periph.html">advcl:periph</a></tt> (2; 1% instances), <tt><a href="pcm_nsc-dep-case.html">case</a></tt> (1; 1% instances), <tt><a href="pcm_nsc-dep-compound-redup.html">compound:redup</a></tt> (1; 1% instances), <tt><a href="pcm_nsc-dep-compound-svc.html">compound:svc</a></tt> (1; 1% instances), <tt><a href="pcm_nsc-dep-dislocated.html">dislocated</a></tt> (1; 1% instances), <tt><a href="pcm_nsc-dep-obl-arg.html">obl:arg</a></tt> (1; 1% instances) Children of `AUX` nodes belong to 10 different parts of speech: <tt><a href="pcm_nsc-pos-PUNCT.html">PUNCT</a></tt> (71; 53% instances), <tt><a href="pcm_nsc-pos-PRON.html">PRON</a></tt> (18; 13% instances), <tt><a href="pcm_nsc-pos-AUX.html">AUX</a></tt> (15; 11% instances), <tt><a href="pcm_nsc-pos-VERB.html">VERB</a></tt> (15; 11% instances), <tt><a href="pcm_nsc-pos-PART.html">PART</a></tt> (6; 4% instances), <tt><a href="pcm_nsc-pos-NOUN.html">NOUN</a></tt> (4; 3% instances), <tt><a href="pcm_nsc-pos-SCONJ.html">SCONJ</a></tt> (3; 2% instances), <tt><a href="pcm_nsc-pos-ADP.html">ADP</a></tt> (1; 1% instances), <tt><a href="pcm_nsc-pos-ADV.html">ADV</a></tt> (1; 1% instances), <tt><a href="pcm_nsc-pos-X.html">X</a></tt> (1; 1% instances)
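The form/lemma ratio quoted above is simply the number of distinct forms divided by the number of lemmas. A throwaway sketch making that arithmetic explicit, using the AUX counts reported on this page (the 7% token share is a rounded figure, so the derived treebank total is only an order-of-magnitude estimate):

```java
public class AuxStats {
    public static void main(String[] args) {
        int lemmas = 10, types = 10, tokens = 908; // counts reported for AUX above
        double formLemmaRatio = (double) types / lemmas;
        System.out.println("form/lemma ratio = " + formLemmaRatio); // 1.0, as stated
        // 908 tokens are reported as roughly 7% of the treebank,
        // so the whole treebank has on the order of 908 / 0.07 ≈ 13,000 tokens.
        System.out.printf("approx. total tokens = %.0f%n", tokens / 0.07);
    }
}
```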
118.853333
1,372
0.631815
yue_Hant
0.713048
f4a02166305c59f3df71db5f1a353ae537a6d8b3
39,843
md
Markdown
website/translated_docs/de/API/DatastoreClass.md
aparajita/docs
21a596381f9bffe16a5c462dc24f10acd45e3d94
[ "CC-BY-4.0" ]
null
null
null
website/translated_docs/de/API/DatastoreClass.md
aparajita/docs
21a596381f9bffe16a5c462dc24f10acd45e3d94
[ "CC-BY-4.0" ]
null
null
null
website/translated_docs/de/API/DatastoreClass.md
aparajita/docs
21a596381f9bffe16a5c462dc24f10acd45e3d94
[ "CC-BY-4.0" ]
1
2019-03-27T06:57:51.000Z
2019-03-27T06:57:51.000Z
--- id: DataStoreClass title: DataStore --- A [Datastore](ORDA/dsMapping.md#datastore) is the interface object provided by ORDA to reference and access a database. `Datastore` objects are returned by the following commands: * [ds](#ds): a shortcut to the main datastore * [Open datastore](#open-datastore): to open any remote datastore ### Summary | | | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | [<!-- INCLUDE #DataStoreClass.cancelTransaction().Syntax -->](#canceltransaction)<p>&nbsp;&nbsp;&nbsp;&nbsp;<!-- INCLUDE #DataStoreClass.cancelTransaction().Summary -->| | [<!-- INCLUDE DataStoreClass.dataclassName.Syntax -->](#dataclassname)<p>&nbsp;&nbsp;&nbsp;&nbsp;<!-- INCLUDE DataStoreClass.dataclassName.Summary --> | | [<!-- INCLUDE #DataStoreClass.encryptionStatus().Syntax -->](#encryptionstatus)<p>&nbsp;&nbsp;&nbsp;&nbsp;<!-- INCLUDE #DataStoreClass.encryptionStatus().Summary --> | | [<!-- INCLUDE #DataStoreClass.getInfo().Syntax -->](#getinfo)<p>&nbsp;&nbsp;&nbsp;&nbsp;<!-- INCLUDE #DataStoreClass.getInfo().Summary --> | | [<!-- INCLUDE #DataStoreClass.getRequestLog().Syntax -->](#getrequestlog)<p>&nbsp;&nbsp;&nbsp;&nbsp;<!-- INCLUDE #DataStoreClass.getRequestLog().Summary --> | | [<!-- INCLUDE #DataStoreClass.makeSelectionsAlterable().Syntax -->](#makeselectionsalterable)<p>&nbsp;&nbsp;&nbsp;&nbsp;<!-- INCLUDE #DataStoreClass.makeSelectionsAlterable().Summary --> | | [<!-- INCLUDE #DataStoreClass.provideDataKey().Syntax -->](#providedatakey)<p>&nbsp;&nbsp;&nbsp;&nbsp;<!-- INCLUDE #DataStoreClass.provideDataKey().Summary --> | | [<!-- INCLUDE #DataStoreClass.setAdminProtection().Syntax -->](#setadminprotection)<p>&nbsp;&nbsp;&nbsp;&nbsp;<!-- INCLUDE #DataStoreClass.setAdminProtection().Summary --> | | [<!-- INCLUDE #DataStoreClass.startRequestLog().Syntax -->](#startrequestlog)<p>&nbsp;&nbsp;&nbsp;&nbsp;<!-- INCLUDE #DataStoreClass.startRequestLog().Summary --> | | [<!-- INCLUDE #DataStoreClass.startTransaction().Syntax -->](#starttransaction)<p>&nbsp;&nbsp;&nbsp;&nbsp;<!-- INCLUDE #DataStoreClass.startTransaction().Summary --> | | [<!-- INCLUDE #DataStoreClass.stopRequestLog().Syntax -->](#stoprequestlog)<p>&nbsp;&nbsp;&nbsp;&nbsp;<!-- INCLUDE #DataStoreClass.stopRequestLog().Summary --> | | [<!-- INCLUDE #DataStoreClass.validateTransaction().Syntax -->](#validatetransaction)<p>&nbsp;&nbsp;&nbsp;&nbsp;<!-- INCLUDE #DataStoreClass.validateTransaction().Summary --> | ## ds <details><summary>History</summary> | Version | Changes | | ------- | ---------------------------- | | v18 | Support of localID parameter | | v17 | Added | </details> <!-- REF #_command_.ds.Syntax --> **ds** { ( *localID* : Text ) } : cs.DataStore <!-- END REF --> <!-- REF #_command_.ds.Params --> | Parameter | Typ | | Beschreibung | | --------- | ------------ | -- | ------------------------------------------ | | localID | Text | -> | Local ID of the remote datastore to return | | Ergebnis | cs.DataStore | <- | Reference to the datastore | <!-- END REF --> #### Beschreibung The `ds` command <!-- REF #_command_.ds.Summary -->returns a reference to the datastore matching the current 4D database or the database designated by *localID*<!-- END REF -->. 
If you omit the *localID* parameter (or pass an empty string ""), the command returns a reference to the datastore matching the local 4D database (or the 4D Server database in case of opening a remote database on 4D Server). The datastore is opened automatically and available directly through `ds`. You can also get a reference on an open remote datastore by passing its local id in the *localID* parameter. The datastore must have been previously opened with the [`Open datastore`](#open-datastore) command by the current database (host or component). The local id is defined when using this command. > The scope of the local id is the database where the datastore has been opened. If no *localID* datastore is found, the command returns **Null**. Using `ds` requires that the target database is compliant with ORDA, as specified in the **ORDA prerequisites** section. The following rules are applied: * A datastore only references tables with a single primary key. Tables without a primary key or with composite primary keys are not referenced. * BLOB type attributes are not managed in the datastore. #### Beispiel 1 Using the main datastore on the 4D database: ```4d $result:=ds.Employee.query("firstName = :1";"S@") ``` #### Beispiel 2 ```4d var $connectTo; $firstFrench; $firstForeign : Object var $frenchStudents; $foreignStudents : cs.DataStore $connectTo:=New object("type";"4D Server";"hostname";"192.168.18.11:8044") $frenchStudents:=Open datastore($connectTo;"french") $connectTo.hostname:="192.168.18.11:8050" $foreignStudents:=Open datastore($connectTo;"foreign") //... //... $firstFrench:=getFirst("french";"Students") $firstForeign:=getFirst("foreign";"Students") ``` ```4d //getFirst method //getFirst(localID;dataclass) -> entity #DECLARE( $localId : Text; $dataClassName : Text ) -> $entity : 4D.Entity $0:=ds($localId)[$dataClassName].all().first() ``` ## Open datastore <details><summary>History</summary> | Version | Changes | | ------- | ------- | | v18 | Added | </details> <!-- REF #_command_.Open datastore.Syntax --> **Open datastore**( *connectionInfo* : Object ; *localID* : Text ) : cs.DataStore <!-- END REF --> <!-- REF #_command_.Open datastore.Params --> | Parameter | Typ | | Beschreibung | | -------------- | ------------ | -- | ------------------------------------------------------------------------- | | connectionInfo | Objekt | -> | Connection properties used to reach the remote datastore | | localID | Text | -> | Id to assign to the opened datastore on the local application (mandatory) | | Ergebnis | cs.DataStore | <- | Datastore object | <!-- END REF --> #### Beschreibung The `Open datastore` command <!-- REF #_command_.Open datastore.Summary -->connects the application to the 4D database identified by the *connectionInfo* parameter<!-- END REF --> and returns a matching `cs.DataStore` object associated with the *localID* local alias. The *connectionInfo* 4D database must be available as a remote datastore, i.e.: * its web server must be launched with http and/or https enabled, * its [**Expose as REST server**](REST/configuration.md#starting-the-rest-server) option must be checked, * at least one client license is available. If no matching database is found, `Open datastore` returns **Null**. *localID* is a local alias for the session opened on remote datastore. If *localID* already exists on the application, it is used. Otherwise, a new *localID* session is created when the datastore object is used. 
Once the session is opened, the following statements become equivalent and return a reference on the same datastore object: ```4d $myds:=Open datastore(connectionInfo;"myLocalId") $myds2:=ds("myLocalId") //$myds and $myds2 are equivalent ``` Pass in *connectionInfo* an object describing the remote datastore you want to connect to. It can contain the following properties (all properties are optional except *hostname*): | Property | Typ | Beschreibung | | ----------- | -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | hostname | Text | Name or IP address of the remote database + ":" + port number (port number is mandatory) | | user | Text | User name | | password | Text | User password | | idleTimeout | Lange Ganzzahl | Inactivity session timeout (in minutes), after which the session is automatically closed by 4D. If omitted, default value is 60 (1h). The value cannot be < 60 (if a lower value is passed, the timeout is set to 60). For more information, see **Closing sessions**. | | tls | Boolean | Use secured connection(*). If omitted, false by default. Using a secured connection is recommended whenever possible. | | type | Text | Must be "4D Server" | (*) If tls is true, the HTTPS protocol is used if: * HTTPS is enabled on the remote datastore * the given port is the right HTTPS port configured in the database settings * a valid certificate and private encryption key are installed in the database. Otherwise, error "1610 - A remote request to host xxx has failed" is raised #### Beispiel 1 Connection to a remote datastore without user / password: ```4d var $connectTo : Object var $remoteDS : cs.DataStore $connectTo:=New object("type";"4D Server";"hostname";"192.168.18.11:8044") $remoteDS:=Open datastore($connectTo;"students") ALERT("This remote datastore contains "+String($remoteDS.Students.all().length)+" students") ``` #### Beispiel 2 Connection to a remote datastore with user / password / timeout / tls: ```4d var $connectTo : Object var $remoteDS : cs.DataStore $connectTo:=New object("type";"4D Server";"hostname";\"192.168.18.11:4443";\ "user";"marie";"password";$pwd;"idleTimeout";70;"tls";True) $remoteDS:=Open datastore($connectTo;"students") ALERT("This remote datastore contains "+String($remoteDS.Students.all().length)+" students") ``` #### Example 3 Working with several remote datastores: ```4d var $connectTo : Object var $frenchStudents; $foreignStudents : cs.DataStore $connectTo:=New object("hostname";"192.168.18.11:8044") $frenchStudents:=Open datastore($connectTo;"french") $connectTo.hostname:="192.168.18.11:8050" $foreignStudents:=Open datastore($connectTo;"foreign") ALERT("They are "+String($frenchStudents.Students.all().length)+" French students") ALERT("They are "+String($foreignStudents.Students.all().length)+" foreign students") ``` #### Error management In case of error, the command returns **Null**. If the remote datastore cannot be reached (wrong address, web server not started, http and https not enabled...), error 1610 "A remote request to host XXX has failed" is raised. You can intercept this error with a method installed by `ON ERR CALL`. 
<!-- REF DataStoreClass.dataclassName.Desc --> ## *.dataclassName* <details><summary>History</summary> | Version | Changes | | ------- | ------- | | v17 | Added | </details> <!-- REF DataStoreClass.dataclassName.Syntax --> ***.dataclassName*** : 4D.DataClass<!-- END REF --> #### Beschreibung Each dataclass in a datastore is available as a property of the [DataStore object](ORDA/dsMapping.md#datastore)data. The returned object <!-- REF DataStoreClass.dataclassName.Summary -->contains a description of the dataclass<!-- END REF -->. #### Beispiel ```4d var $emp : cs.Employee var $sel : cs.EmployeeSelection $emp:=ds.Employee //$emp contains the Employee dataclass $sel:=$emp.all() //gets an entity selection of all employees //you could also write directly: $sel:=ds.Employee.all() ``` <!-- END REF --> <!-- REF DataStoreClass.cancelTransaction().Desc --> ## .cancelTransaction() <details><summary>History</summary> | Version | Changes | | ------- | ------- | | v18 | Added | </details> <!-- REF #DataStoreClass.cancelTransaction().Syntax --> **.cancelTransaction()**<!-- END REF --> <!-- REF #DataStoreClass.cancelTransaction().Params --> | Parameter | Typ | | Beschreibung | | --------- | --- |::| ------------------------------- | | | | | Does not require any parameters | <!-- END REF --> #### Beschreibung The `.cancelTransaction()` function <!-- REF #DataStoreClass.cancelTransaction().Summary -->cancels the transaction<!-- END REF --> opened by the [`.startTransaction()`](#starttransaction) function at the corresponding level in the current process for the specified datastore. The `.cancelTransaction()` function cancels any changes made to the data during the transaction. You can nest several transactions (sub-transactions). If the main transaction is cancelled, all of its sub-transactions are also cancelled, even if they were validated individually using the [`.validateTransaction()`](#validatetransactions) function. #### Beispiel See example for the [`.startTransaction()`](#starttransaction) function. <!-- END REF --> <!-- REF DataStoreClass.encryptionStatus().Desc --> ## .encryptionStatus() <details><summary>History</summary> | Version | Changes | | ------- | ------- | | v17 R5 | Added | </details> <!-- REF #DataStoreClass.encryptionStatus().Syntax --> **.encryptionStatus()**: Object<!-- END REF --> <!-- REF #DataStoreClass.encryptionStatus().Params --> | Parameter | Typ | | Beschreibung | | --------- | ------ |:--:| --------------------------------------------------------------------------- | | Ergebnis | Objekt | <- | Information about the encryption of the current datastore and of each table | <!-- END REF --> #### Beschreibung The `.encryptionStatus()` function <!-- REF #DataStoreClass.encryptionStatus().Summary -->returns an object providing the encryption status for the current data file<!-- END REF --> (i.e., the data file of the `ds` datastore). The status for each table is also provided. > Use the `Data file encryption status` command to determine the encryption status of any other data file. **Rückgabewert** The returned object contains the following properties: | Property | | | Typ | Beschreibung | | ----------- | ----------- | ------------- | ------- | ---------------------------------------------------------------------------------- | | isEncrypted | | | Boolean | True if the data file is encrypted | | keyProvided | | | Boolean | True if the encryption key matching the encrypted data file is provided(*). 
| tables | | | Object | Object containing as many properties as there are encryptable or encrypted tables. |
| | *tableName* | | Object | Encryptable or Encrypted table |
| | | name | Text | Name of the table |
| | | num | Number | Table number |
| | | isEncryptable | Boolean | True if the table is declared encryptable in the structure file |
| | | isEncrypted | Boolean | True if the records of the table are encrypted in the data file |

(*) The encryption key can be provided:

* with the `.provideDataKey()` command,
* at the root of a connected device before opening the datastore,
* with the `Discover data key` command.

#### Example

You want to know the number of encrypted tables in the current data file:

```4d
var $status : Object
$status:=dataStore.encryptionStatus()

If($status.isEncrypted) //the database is encrypted
	C_LONGINT($vcount)
	C_TEXT($tabName)
	For each($tabName;$status.tables)
		If($status.tables[$tabName].isEncrypted)
			$vcount:=$vcount+1
		End if
	End for each
	ALERT(String($vcount)+" encrypted table(s) in this datastore.")
Else
	ALERT("This database is not encrypted.")
End if
```

<!-- END REF -->

<!-- REF DataStoreClass.getInfo().Desc -->
## .getInfo()

<details><summary>History</summary>

| Version | Changes |
| ------- | ------- |
| v17 | Added |

</details>

<!-- REF #DataStoreClass.getInfo().Syntax --> **.getInfo()**: Object<!-- END REF -->

<!-- REF #DataStoreClass.getInfo().Params -->
| Parameter | Type | | Description |
| --------- | ------ |:--:| -------------------- |
| Result | Object | <- | Datastore properties |
<!-- END REF -->

#### Description

The `.getInfo()` function <!-- REF #DataStoreClass.getInfo().Summary -->returns an object providing information about the datastore<!-- END REF -->. This function is useful for setting up generic code.

**Returned object**

| Property | Type | Description |
| ---------- | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| type | string | <li>"4D": main datastore, available through ds </li><li>"4D Server": remote datastore, open with Open datastore</li> |
| networked | Boolean | <li>True: the datastore is reached through a network connection.</li><li>False: the datastore is not reached through a network connection (local database)</li> |
| localID | Text | ID of the datastore on the machine. Corresponds to the localId string given with the `Open datastore` command. Empty string ("") for main datastore. |
| connection | object | Object describing the remote datastore connection (not returned for main datastore). Available properties:<p><table><tr><th>Property</th><th>Type</th><th>Description</th></tr><tr><td>hostname</td><td>Text</td><td>IP address or name of the remote datastore + ":" + port number</td></tr><tr><td>tls</td><td>Boolean</td><td>True if secured connection is used with the remote datastore</td></tr><tr><td>idleTimeout</td><td>number</td><td>Session inactivity timeout (in minutes)</td></tr><tr><td>user</td><td>Text</td><td>User authenticated on the remote datastore</td></tr></table> |

* If the `.getInfo()` function is executed on a 4D Server or 4D single-user, `networked` is False.
* If the `.getInfo()` function is executed on a remote 4D, `networked` is True #### Beispiel 1 ```4d var $info : Object $info:=ds.getInfo() //Executed on 4D Server or 4D //{"type":"4D","networked":false,"localID":""} $info:=ds.getInfo() // Executed on 4D remote //{"type":"4D","networked":true,"localID":""} ``` #### Beispiel 2 On a remote datastore: ```4d var $remoteDS : cs.DataStore var $info; $connectTo : Object $connectTo:=New object("hostname";"111.222.33.44:8044";"user";"marie";"password";"aaaa") $remoteDS:=Open datastore($connectTo;"students") $info:=$remoteDS.getInfo() //{"type":"4D Server", //"localID":"students", //"networked":true, //"connection":{hostname:"111.222.33.44:8044","tls":false,"idleTimeout":2880,"user":"marie"}} ``` <!-- END REF --> <!-- REF DataStoreClass.getRequestLog().Desc --> ## .getRequestLog() <details><summary>History</summary> | Version | Changes | | ------- | ------- | | v17 R6 | Added | </details> <!-- REF #DataStoreClass.getRequestLog().Syntax --> **.getRequestLog()** : Collection<!-- END REF --> <!-- REF #DataStoreClass.getRequestLog().Params --> | Parameter | Typ | | Beschreibung | | --------- | ---------- |:--:| ------------------------------------------------------------ | | Ergebnis | Collection | <- | Collection of objects, where each object describes a request | <!-- END REF --> #### Beschreibung The `.getRequestLog()` function <!-- REF #DataStoreClass.getRequestLog().Summary -->returns the ORDA requests logged in memory on the client side<!-- END REF -->. The ORDA request logging must have previously been enabled using the [`.startRequestLog()`](#startrequestlog) function. This function must be called on a remote 4D, otherwise it returns an empty collection. It is designed for debugging purposes in client/server configurations. **Rückgabewert** Collection of stacked request objects. The most recent request has index 0. For a description of the ORDA request log format, please refer to the [**ORDA client requests**](https://doc.4d.com/4Dv18/4D/18/Description-of-log-files.300-4575486.en.html#4385373) section. #### Beispiel See Example 2 of [`.startRequestLog()`](#startrequestlog). <!-- END REF --> <!-- REF DataStoreClass.isAdminProtected().Desc --> ## .isAdminProtected() <details><summary>History</summary> | Version | Changes | | ------- | ------- | | v18 R6 | Added | </details> <!-- REF #DataStoreClass.isAdminProtected().Syntax --> **.isAdminProtected()** : Boolean<!-- END REF --> <!-- REF #DataStoreClass.isAdminProtected().Params --> | Parameter | Typ | | Beschreibung | | --------- | ------- |:--:| ------------------------------------------------------------------------------ | | Ergebnis | Boolean | <- | True if the Data Explorer access is disabled, False if it is enabled (default) | <!-- END REF --> #### Beschreibung The `.isAdminProtected()` function <!-- REF #DataStoreClass.isAdminProtected().Summary -->returns `True` if [Data Explorer](Admin/dataExplorer.md) access has been disabled for the working session<!-- END REF -->. By default, the Data Explorer access is granted for `webAdmin` sessions, but it can be disabled to prevent any data access from administrators (see the [`.setAdminProtection()`](#setadminprotection) function). 
#### See also

[`.setAdminProtection()`](#setadminprotection)

<!-- END REF -->

<!-- REF DataStoreClass.makeSelectionsAlterable().Desc -->
## .makeSelectionsAlterable()

<details><summary>History</summary>

| Version | Changes |
| ------- | ------- |
| v18 R5 | Added |

</details>

<!-- REF #DataStoreClass.makeSelectionsAlterable().Syntax --> **.makeSelectionsAlterable()**<!-- END REF -->

<!-- REF #DataStoreClass.makeSelectionsAlterable().Params -->
| Parameter | Type | | Description |
| --------- | ---- |::| ------------------------------- |
| | | | Does not require any parameters |
<!-- END REF -->

#### Description

The `.makeSelectionsAlterable()` function <!-- REF #DataStoreClass.makeSelectionsAlterable().Summary -->sets all entity selections as alterable by default in the current application datastores<!-- END REF --> (including [remote datastores](ORDA/remoteDatastores.md)). It is intended to be used once, for example in the `On Startup` database method.

When this function is not called, new entity selections can be shareable, depending on the nature of their "parent", or [how they are created](ORDA/entities.md#shareable-or-non-shareable-entity-selections).

> This function does not modify entity selections created by [`.copy()`](#copy) or `OB Copy` when the explicit `ck shared` option is used.

> **Compatibility**: This function must only be used in projects converted from 4D versions prior to 4D v18 R5 and containing [.add()](EntitySelectionClass.md#add) calls. In this context, using `.makeSelectionsAlterable()` can save time by instantaneously restoring the previous 4D behavior in existing projects. On the other hand, using this method in new projects created in 4D v18 R5 and higher **is not recommended**, since it prevents entity selections from being shared, which provides greater performance and scalability.

<!-- END REF -->

<!-- REF DataStoreClass.provideDataKey().Desc -->
## .provideDataKey()

<details><summary>History</summary>

| Version | Changes |
| ------- | ------- |
| v17 R5 | Added |

</details>

<!-- REF #DataStoreClass.provideDataKey().Syntax --> **.provideDataKey**( *curPassPhrase* : Text ) : Object <br>**.provideDataKey**( *curDataKey* : Object ) : Object <!-- END REF -->

<!-- REF #DataStoreClass.provideDataKey().Params -->
| Parameter | Type | | Description |
| ------------- | ------ | -- | ------------------------------------- |
| curPassPhrase | Text | -> | Current encryption passphrase |
| curDataKey | Object | -> | Current data encryption key |
| Result | Object | <- | Result of the encryption key matching |
<!-- END REF -->

#### Description

The `.provideDataKey()` function <!-- REF #DataStoreClass.provideDataKey().Summary -->allows providing a data encryption key for the current data file of the datastore and detects if the key matches the encrypted data<!-- END REF -->. This function can be used when opening an encrypted database, or when executing any encryption operation that requires the encryption key, such as re-encrypting the data file.

> * The `.provideDataKey()` function must be called in an encrypted database. If it is called in a non-encrypted database, the error 2003 (the encryption key does not match the data) is returned. Use the `Data file encryption status` command to determine if the database is encrypted.
> * The `.provideDataKey()` function cannot be called from a remote 4D or an encrypted remote datastore.

If you use the *curPassPhrase* parameter, pass the string used to generate the data encryption key.
When you use this parameter, an encryption key is generated. If you use the *curDataKey* parameter, pass an object (with *encodedKey* property) that contains the data encryption key. This key may have been generated with the `New data key` command.

If a valid data encryption key is provided, it is added to the *keyChain* in memory and the encryption mode is enabled:

* all data modifications in encryptable tables are encrypted on disk (.4DD, .journal, .4Dindx files)
* all data loaded from encryptable tables is decrypted in memory

**Result**

The result of the command is described in the returned object:

| Property | | Type | Description |
| ---------- | ------------------------ | ---------- | ------------------------------------------------------------------------------- |
| success | | Boolean | True if the provided encryption key matches the encrypted data, False otherwise |
| | | | Properties below are returned only if success is *FALSE* |
| status | | Number | Error code (4 if the provided encryption key is wrong) |
| statusText | | Text | Error message |
| errors | | Collection | Stack of errors. The first error has the highest index |
| | \[ ].componentSignature | Text | Internal component name |
| | \[ ].errCode | Number | Error number |
| | \[ ].message | Text | Error message |

If no *curPassPhrase* or *curDataKey* is given, `.provideDataKey()` returns **null** (no error is generated).

#### Example

```4d
var $keyStatus : Object
var $passphrase : Text

$passphrase:=Request("Enter the passphrase")
If(OK=1)
	$keyStatus:=ds.provideDataKey($passphrase)
	If($keyStatus.success)
		ALERT("You have provided a valid encryption key")
	Else
		ALERT("You have provided an invalid encryption key, you will not be able to work with encrypted data")
	End if
End if
```

<!-- END REF -->

<!-- REF DataStoreClass.setAdminProtection().Desc -->
## .setAdminProtection()

<details><summary>History</summary>

| Version | Changes |
| ------- | ------- |
| v18 R6 | Added |

</details>

<!-- REF #DataStoreClass.setAdminProtection().Syntax -->**.setAdminProtection**( *status* : Boolean )<!-- END REF -->

<!-- REF #DataStoreClass.setAdminProtection().Params -->
| Parameter | Type | | Description |
| --------- | ------- | -- | ---------------------------------------------------------------------------------------------------- |
| status | Boolean | -> | True to disable Data Explorer access to data on the `webAdmin` port, False (default) to grant access |
<!-- END REF -->

#### Description

The `.setAdminProtection()` function <!-- REF #DataStoreClass.setAdminProtection().Summary -->allows disabling any data access on the [web admin port](Admin/webAdmin.md#http-port), including for the [Data Explorer](Admin/dataExplorer.md) in `WebAdmin` sessions<!-- END REF -->.

By default, when the function is not called, access to data is always granted on the web administration port for a session with `WebAdmin` privilege using the Data Explorer. In some configurations, for example when the application server is hosted on a third-party machine, you might not want the administrator to be able to view your data, although they can edit the server configuration, including the [access key](Admin/webAdmin.md#access-key) settings. In this case, you can call this function to disable the data access from Data Explorer on the web admin port of the machine, even if the user session has the `WebAdmin` privilege.
When this function is executed, the data file is immediately protected and the status is stored on disk: the data file will be protected even if the application is restarted. #### Beispiel You create a *protectDataFile* project method to call before deployments for example: ```4d ds.setAdminProtection(True) //Disables the Data Explorer data access ``` #### See also [`.isAdminProtected()`](#isadminprotected) <!-- END REF --> <!-- REF DataStoreClass.startRequestLog().Desc --> ## .startRequestLog() <details><summary>History</summary> | Version | Changes | | ------- | ------- | | v17 R6 | Added | </details> <!-- REF #DataStoreClass.startRequestLog().Syntax --> **.startRequestLog**()<br>**.startRequestLog**( *file* : 4D.File )<br>**.startRequestLog**( *reqNum* : Integer )<!-- END REF --> <!-- REF #DataStoreClass.startRequestLog().Params --> | Parameter | Typ | | Beschreibung | | --------- | -------- | -- | ------------------------------------ | | file | 4D.File | -> | File object | | reqNum | Ganzzahl | -> | Number of requests to keep in memory | <!-- END REF --> #### Beschreibung The `.startRequestLog()` function <!-- REF #DataStoreClass.startRequestLog().Summary -->starts the logging of ORDA requests on the client side<!-- END REF -->. This function must be called on a remote 4D, otherwise it does nothing. It is designed for debugging purposes in client/server configurations. The ORDA request log can be sent to a file or to memory, depending on the parameter type: * If you passed a *file* object created with the `File` command, the log data is written in this file as a collection of objects (JSON format). Each object represents a request.<br>If the file does not already exist, it is created. Otherwise if the file already exists, the new log data is appended to it. If `.startRequestLog( )` is called with a file while a logging was previously started in memory, the memory log is stopped and emptied. > A \] character must be manually appended at the end of the file to perform a JSON validation * If you passed a *reqNum* integer, the log in memory is emptied (if any) and a new log is initialized. It will keep *reqNum* requests in memory until the number is reached, in which case the oldest entries are emptied (FIFO stack).<br>If `.startRequestLog()` is called with a *reqNum* while a logging was previously started in a file, the file logging is stopped. * If you did not pass any parameter, the log is started in memory. If `.startRequestLog()` was previously called with a *reqNum* (before a `.stopRequestLog()`), the log data is stacked in memory until the next time the log is emptied or `.stopRequestLog()` is called. For a description of the ORDA request log format, please refer to the [**ORDA client requests**](https://doc.4d.com/4Dv18/4D/18/Description-of-log-files.300-4575486.en.html#4385373) section. 
#### Beispiel 1 You want to log ORDA client requests in a file and use the log sequence number: ```4d var $file : 4D.File var $e : cs.PersonsEntity $file:=File("/LOGS/ORDARequests.txt") //logs folder SET DATABASE PARAMETER(Client Log Recording;1) //to trigger the global log sequence number ds.startRequestLog($file) $e:=ds.Persons.get(30001) //send a request ds.stopRequestLog() SET DATABASE PARAMETER(Client Log Recording;0) ``` #### Beispiel 2 You want to log ORDA client requests in memory: ```4d var $es : cs.PersonsSelection var $log : Collection ds.startRequestLog(3) //keep 3 requests in memory $es:=ds.Persons.query("name=:1";"Marie") $es:=ds.Persons.query("name IN :1";New collection("Marie")) $es:=ds.Persons.query("name=:1";"So@") $log:=ds.getRequestLog() ALERT("The longest request lasted: "+String($log.max("duration"))+" ms") ``` <!-- END REF --> <!-- REF DataStoreClass.startTransaction().Desc --> ## .startTransaction() <details><summary>History</summary> | Version | Changes | | ------- | ------- | | v18 | Added | </details> <!-- REF #DataStoreClass.startTransaction().Syntax --> **.startTransaction()**<!-- END REF --> <!-- REF #DataStoreClass.startTransaction().Params --> | Parameter | Typ | | Beschreibung | | --------- | --- | | ------------------------------- | | | | | Does not require any parameters | <!-- END REF --> #### Beschreibung The `.startTransaction()` function <!-- REF #DataStoreClass.startTransaction().Summary -->starts a transaction in the current process on the database matching the datastore to which it applies<!-- END REF -->. Any changes made to the datastore's entities in the transaction's process are temporarily stored until the transaction is either validated or cancelled. > If this method is called on the main datastore (i.e. the datastore returned by the `ds` command), the transaction is applied to all operations performed on the main datastore and on the underlying database, thus including ORDA and classic languages. You can nest several transactions (sub-transactions). Each transaction or sub-transaction must eventually be cancelled or validated. Note that if the main transaction is cancelled, all of its sub-transactions are also cancelled even if they were validated individually using the `.validateTransaction()` function. #### Beispiel ```4d var $connect; $status : Object var $person : cs.PersonsEntity var $ds : cs.DataStore var $choice : Text var $error : Boolean Case of :($choice="local") $ds:=ds :($choice="remote") $connect:=New object("hostname";"111.222.3.4:8044") $ds:=Open datastore($connect;"myRemoteDS") End case $ds.startTransaction() $person:=$ds.Persons.query("lastname=:1";"Peters").first() If($person#Null) $person.lastname:="Smith" $status:=$person.save() End if ... ... If($error) $ds.cancelTransaction() Else $ds.validateTransaction() End if ``` <!-- END REF --> <!-- REF DataStoreClass.stopRequestLog().Desc --> ## .stopRequestLog() <details><summary>History</summary> | Version | Changes | | ------- | ------- | | v17 R6 | Added | </details> <!-- REF #DataStoreClass.stopRequestLog().Syntax --> **.stopRequestLog()** <!-- END REF --> <!-- REF #DataStoreClass.stopRequestLog().Params --> | Parameter | Typ | | Beschreibung | | --------- | --- | | ------------------------------- | | | | | Does not require any parameters | <!-- END REF --> #### Beschreibung The `.stopRequestLog()` function <!-- REF #DataStoreClass.stopRequestLog().Summary -->stops any logging of ORDA requests on the client side<!-- END REF --> (in file or in memory). 
It is particularly useful when logging in a file, since it actually closes the opened document on disk. This function must be called on a remote 4D, otherwise it does nothing. It is designed for debugging purposes in client/server configurations. #### Beispiel See examples for [`.startRequestLog()`](#startrequestlog). <!-- END REF --> <!-- REF DataStoreClass.validateTransaction().Desc --> ## .validateTransaction() <details><summary>History</summary> | Version | Changes | | ------- | ------- | | v18 | Added | </details> <!-- REF #DataStoreClass.validateTransaction().Syntax --> **.validateTransaction()** <!-- END REF --> <!-- REF #DataStoreClass.validateTransaction().Params --> | Parameter | Typ | | Beschreibung | | --------- | --- | | ------------------------------- | | | | | Does not require any parameters | <!-- END REF --> #### Beschreibung The `.validateTransaction()` function <!-- REF #DataStoreClass.validateTransaction().Summary -->accepts the transaction <!-- END REF -->that was started with [`.startTransaction()`](#starttransaction) at the corresponding level on the specified datastore. The function saves the changes to the data on the datastore that occurred during the transaction. You can nest several transactions (sub-transactions). If the main transaction is cancelled, all of its sub-transactions are also cancelled, even if they were validated individually using this function. #### Beispiel See example for [`.startTransaction()`](#starttransaction). <!-- END REF --> <style> h2 { background: #d9ebff;}</style>
45.534857
604
0.592275
eng_Latn
0.776357
f4a0526446157a9d825335b22994f24115b2be35
483
md
Markdown
content/link/index.md
George-Gou/home
d4cc36c19d49afd52f5be1394e28bda024b8e8e1
[ "CC-BY-4.0" ]
null
null
null
content/link/index.md
George-Gou/home
d4cc36c19d49afd52f5be1394e28bda024b8e8e1
[ "CC-BY-4.0" ]
null
null
null
content/link/index.md
George-Gou/home
d4cc36c19d49afd52f5be1394e28bda024b8e8e1
[ "CC-BY-4.0" ]
null
null
null
---
title: "Personal Link"

# Optional header image (relative to `static/media/` folder).
header:
  caption: ""
  image: ""
---

### Personal Link collections

- ###### [Bilibili](https://space.bilibili.com/320619371)

  *also nicknamed "B Site" in China, a Chinese video-sharing website*

  [science blog](http://blog.sciencenet.cn/home.php?mod=space&uid=3465931)

  Building the official website of the global Chinese scientific community
17.888889
92
0.652174
yue_Hant
0.643064
f4a0712ed15aa10b4b3f46ba781469eb601c8b1f
9,945
md
Markdown
articles/storage/blobs/point-in-time-restore-manage.md
cmmdesai/azure-docs
e64792ffc7a9f05a813f33f43eb3fef373358993
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/storage/blobs/point-in-time-restore-manage.md
cmmdesai/azure-docs
e64792ffc7a9f05a813f33f43eb3fef373358993
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/storage/blobs/point-in-time-restore-manage.md
cmmdesai/azure-docs
e64792ffc7a9f05a813f33f43eb3fef373358993
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Enable and manage point-in-time restore for block blobs (preview) titleSuffix: Azure Storage description: Learn how to use point-in-time restore (preview) to restore block blobs to a state at an earlier point in time. services: storage author: tamram ms.service: storage ms.topic: how-to ms.date: 06/10/2020 ms.author: tamram ms.subservice: blobs --- # Enable and manage point-in-time restore for block blobs (preview) You can use point-in-time restore (preview) to restore block blobs to their state at an earlier point in time. This article describes how to enable point-in-time restore for a storage account with PowerShell. It also shows how to perform a restore operation with PowerShell. For more information and to learn how to register for the preview, see [Point-in-time restore for block blobs (preview)](point-in-time-restore-overview.md). > [!CAUTION] > Point-in-time restore supports restoring operations on block blobs only. Operations on containers cannot be restored. If you delete a container from the storage account by calling the [Delete Container](/rest/api/storageservices/delete-container) operation during the point-in-time restore preview, that container cannot be restored with a restore operation. During the preview, instead of deleting a container, delete individual blobs if you may want to restore them. > [!IMPORTANT] > The point-in-time restore preview is intended for non-production use only. Production service-level agreements (SLAs) are not currently available. ## Install the preview module To configure Azure point-in-time restore with PowerShell, first install the Az.Storage preview module version 1.14.1-preview or later. Using the latest preview version is recommended, but point-in-time restore is supported in version 1.14.1-preview and later. Remove any other versions of the Az.Storage module. The following command installs Az.Storage [2.0.1-preview](https://www.powershellgallery.com/packages/Az.Storage/2.0.1-preview) module: ```powershell Install-Module -Name Az.Storage -RequiredVersion 2.0.1-preview -AllowPrerelease ``` For more information about installing Azure PowerShell, see [Install Azure PowerShell with PowerShellGet](/powershell/azure/install-az-ps). ## Enable and configure point-in-time restore Before you enable and configure point-in-time restore, enable its prerequisites for the storage account: soft delete, change feed, and blob versioning. For more information about enabling each of these features, see these articles: - [Enable soft delete for blobs](soft-delete-enable.md) - [Enable and disable the change feed](storage-blob-change-feed.md#enable-and-disable-the-change-feed) - [Enable and manage blob versioning](versioning-enable.md) To configure Azure point-in-time restore with PowerShell, call the Enable-AzStorageBlobRestorePolicy command. The following example enables soft delete and sets the soft-delete retention period, enables change feed, and then enables point-in-time restore. Before running the example, use the Azure portal or an Azure Resource Manager template to also enable blob versioning. When running the example, remember to replace the values in angle brackets with your own values: ```powershell # Sign in to your Azure account. Connect-AzAccount # Set resource group and account variables. $rgName = "<resource-group>" $accountName = "<storage-account>" # Enable soft delete with a retention of 6 days. 
Enable-AzStorageBlobDeleteRetentionPolicy -ResourceGroupName $rgName `
    -StorageAccountName $accountName `
    -RetentionDays 6

# Enable change feed.
Update-AzStorageBlobServiceProperty -ResourceGroupName $rgName `
    -StorageAccountName $accountName `
    -EnableChangeFeed $true

# Enable point-in-time restore with a retention period of 5 days.
# The retention period for point-in-time restore must be at least one day less than that set for soft delete.
Enable-AzStorageBlobRestorePolicy -ResourceGroupName $rgName `
    -StorageAccountName $accountName `
    -RestoreDays 5

# View the service settings.
Get-AzStorageBlobServiceProperty -ResourceGroupName $rgName `
    -StorageAccountName $accountName
```

## Perform a restore operation

To initiate a restore operation, call the **Restore-AzStorageBlobRange** command, specifying the restore point as a UTC **DateTime** value. You can specify lexicographical ranges of blobs to restore, or omit a range to restore all blobs in all containers in the storage account. Up to 10 lexicographical ranges are supported per restore operation. The restore operation may take several minutes to complete.

Keep in mind the following rules when specifying a range of blobs to restore:

- The container pattern specified for the start range and end range must include a minimum of three characters. The forward slash (/) that is used to separate a container name from a blob name does not count toward this minimum.
- Up to 10 ranges can be specified per restore operation.
- Wildcard characters are not supported. They are treated as standard characters.
- You can restore blobs in the `$root` and `$web` containers by explicitly specifying them in a range passed to a restore operation. The `$root` and `$web` containers are restored only if they are explicitly specified. Other system containers cannot be restored.

> [!IMPORTANT]
> When you perform a restore operation, Azure Storage blocks data operations on the blobs in the ranges being restored for the duration of the operation. Read, write, and delete operations are blocked in the primary location. For this reason, operations such as listing containers in the Azure portal may not perform as expected while the restore operation is underway.
>
> Read operations from the secondary location may proceed during the restore operation if the storage account is geo-replicated.

### Restore all containers in the account

To restore all containers and blobs in the storage account, call the **Restore-AzStorageBlobRange** command, omitting the `-BlobRestoreRange` parameter. The following example restores containers in the storage account to their state 12 hours before the present moment:

```powershell
# Specify -TimeToRestore as a UTC value
Restore-AzStorageBlobRange -ResourceGroupName $rgName `
    -StorageAccountName $accountName `
    -TimeToRestore (Get-Date).AddHours(-12)
```

### Restore a single range of block blobs

To restore a range of blobs, call the **Restore-AzStorageBlobRange** command and specify a lexicographical range of container and blob names for the `-BlobRestoreRange` parameter. The start of the range is inclusive, and the end of the range is exclusive.

For example, to restore the blobs in a single container named *sample-container*, you can specify a range that starts with *sample-container* and ends with *sample-container1*. There is no requirement for the containers named in the start and end ranges to exist.
Because the end of the range is exclusive, even if the storage account includes a container named *sample-container1*, only the container named *sample-container* will be restored: ```powershell $range = New-AzStorageBlobRangeToRestore -StartRange sample-container -EndRange sample-container1 ``` To specify a subset of blobs in a container to restore, use a forward slash (/) to separate the container name from the blob pattern. For example, the following range selects blobs in a single container whose names begin with the letters *d* through *f*: ```powershell $range = New-AzStorageBlobRangeToRestore -StartRange sample-container/d -EndRange sample-container/g ``` Next, provide the range to the **Restore-AzStorageBlobRange** command. Specify the restore point by providing a UTC **DateTime** value for the `-TimeToRestore` parameter. The following example restores blobs in the specified range to their state 3 days before the present moment: ```powershell # Specify -TimeToRestore as a UTC value Restore-AzStorageBlobRange -ResourceGroupName $rgName ` -StorageAccountName $accountName ` -BlobRestoreRange $range ` -TimeToRestore (Get-Date).AddDays(-3) ``` ### Restore multiple ranges of block blobs To restore multiple ranges of block blobs, specify an array of ranges for the `-BlobRestoreRange` parameter. Up to 10 ranges are supported per restore operation. The following example specifies two ranges to restore the complete contents of *container1* and *container4*: ```powershell # Specify a range that includes the complete contents of container1. $range1 = New-AzStorageBlobRangeToRestore -StartRange container1 -EndRange container2 # Specify a range that includes the complete contents of container4. $range2 = New-AzStorageBlobRangeToRestore -StartRange container4 -EndRange container5 Restore-AzStorageBlobRange -ResourceGroupName $rgName ` -StorageAccountName $accountName ` -TimeToRestore (Get-Date).AddMinutes(-30) ` -BlobRestoreRange @($range1, $range2) ``` ### Restore block blobs asynchronously To run a restore operation asynchronously, add the `-AsJob` parameter to the call to **Restore-AzStorageBlobRange** and store the result of the call in a variable. The **Restore-AzStorageBlobRange** command returns an object of type **AzureLongRunningJob**. You can check the **State** property of this object to determine whether the restore operation has completed. The value of the **State** property may be **Running** or **Completed**. The following example shows how to call a restore operation asynchronously: ```powershell $job = Restore-AzStorageBlobRange -ResourceGroupName $rgName ` -StorageAccountName $accountName ` -TimeToRestore (Get-Date).AddMinutes(-5) ` -AsJob # Check the state of the job. $job.State ``` ## Next steps - [Point-in-time restore for block blobs (preview)](point-in-time-restore-overview.md) - [Soft delete](soft-delete-overview.md) - [Change feed (preview)](storage-blob-change-feed.md) - [Blob versioning (preview)](versioning-overview.md)
58.157895
470
0.788336
eng_Latn
0.990407
f4a0ca3a434b16cec11613df3bd165e40e576ec6
193
md
Markdown
draw-the-triangle-1/README.md
noaholutunji/mysql
e28dc1779a1a35159cb9220aa7f07d915f89130c
[ "MIT" ]
null
null
null
draw-the-triangle-1/README.md
noaholutunji/mysql
e28dc1779a1a35159cb9220aa7f07d915f89130c
[ "MIT" ]
null
null
null
draw-the-triangle-1/README.md
noaholutunji/mysql
e28dc1779a1a35159cb9220aa7f07d915f89130c
[ "MIT" ]
null
null
null
# Draw The Triangle 1

P(R) represents a pattern drawn by Julia in R rows. The following pattern represents P(5):

* * * * * 
* * * * 
* * * 
* * 
* 

Write a query to print the pattern P(20).
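A common MySQL approach (shown here only as an illustrative sketch, not part of the original repository) drives the 20 rows with a session variable and `REPEAT`:

```sql
-- Illustrative sketch: print P(20), i.e. 20 rows of shrinking '* ' groups.
-- Assumes information_schema.tables contains at least 20 rows to drive the output.
SET @row := 21;

SELECT REPEAT('* ', @row := @row - 1) AS pattern
FROM information_schema.tables
LIMIT 20;
```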
14.846154
90
0.606218
eng_Latn
0.997441
f4a0e4fa698a33424df1730dddda8c01ed1554d1
2,983
md
Markdown
README.md
claudiubelu/certbot-k8s
02cb7e9a4655b97f88ca517407f568da1df43336
[ "Apache-2.0" ]
null
null
null
README.md
claudiubelu/certbot-k8s
02cb7e9a4655b97f88ca517407f568da1df43336
[ "Apache-2.0" ]
null
null
null
README.md
claudiubelu/certbot-k8s
02cb7e9a4655b97f88ca517407f568da1df43336
[ "Apache-2.0" ]
null
null
null
# Certbot Kubernetes Charm

## Description

In order to access your applications securely through HTTPS, you are going to need a TLS certificate. You could generate your own self-signed certificate. However, using such a certificate will cause your browser to issue a warning when accessing your applications because the certificate is not verified by a trusted Certificate Authority (CA).

This Charm provides a way to easily obtain a CA-verified certificate, which can then be used by your services.

## Usage

This Charm requires you to have a publicly available DNS hostname, which is meant to be used by your applications. Without it, the CA will not be able to verify your ownership of the hostname you're generating the certificate for, and the verification process will fail.

To deploy this charm, simply run:

```bash
juju deploy --trust certbot-k8s --channel=edge
```

Next, this charm will require a relation with an ``nginx-ingress-integrator`` charm, which will be used to automatically set up the Ingress Route required in order to solve the CA's ACME HTTP Challenge, which is required in proving the ownership of the hostname you're generating the certificate for.

To deploy the ``nginx-ingress-integrator`` charm and relate it to the ``certbot-k8s`` charm, run:

```bash
juju deploy --trust nginx-ingress-integrator
juju relate certbot-k8s nginx-ingress-integrator
```

Next, you need to configure the email and agree with the [Terms of Service](https://letsencrypt.org/repository/) needed to use the Let's Encrypt CA:

```bash
juju config certbot-k8s email=your@email agree-tos=true
```

To generate a certificate for your hostname, simply run:

```bash
juju config certbot-k8s service-hostname=your-hostname
```

After a few moments, and if there were no issues encountered, a Kubernetes Secret containing your TLS certificate will have been generated. To get the secret name, run:

```bash
juju run-action certbot-k8s/0 get-secret-name --wait
```

If the Kubernetes Secret has been generated, the above command will return the Secret name. If it was not, it will result in an error, in which case you should check the ``juju debug-log``.

The Kubernetes Secret Name mentioned above can then be used by the ``nginx-ingress-integrator`` charms as well:

```bash
juju config another-nginx-ingress-integrator tls-secret-name=$SECRET_NAME
```

The command above will configure Ingress-level TLS termination.

## Relations

This charm requires an ``ingress`` relation, typically provided by the ``nginx-ingress-integrator`` charm.

## OCI Images

The image used by this charm is ``claudiubelu/certbot-nginx:0.1.0``, which is based on the ``nginx`` Docker image, and has ``certbot`` installed. Details on how to build your image can be found [here](docker/README.md).

## Contributing

Please see the [Juju SDK docs](https://juju.is/docs/sdk) for guidelines on enhancements to this charm following best practice guidelines, and `CONTRIBUTING.md` for developer guidance.
43.867647
345
0.775394
eng_Latn
0.997245
f4a161c60338c3e218033d90158bce9f637dc8b8
73
md
Markdown
clojure/atoms/README.md
miroadamy/language-matrix
510bc33d058555da8a67f87d25353b93d219d750
[ "MIT" ]
15
2015-03-13T03:45:52.000Z
2022-02-26T00:11:18.000Z
clojure/atoms/README.md
miroadamy/language-matrix
510bc33d058555da8a67f87d25353b93d219d750
[ "MIT" ]
5
2015-02-23T18:20:17.000Z
2021-03-20T21:54:48.000Z
clojure/atoms/README.md
miroadamy/language-matrix
510bc33d058555da8a67f87d25353b93d219d750
[ "MIT" ]
9
2016-05-11T13:03:22.000Z
2021-04-11T13:07:12.000Z
To run this example: ``` java -cp clojure.jar clojure.main atom.clj ```
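The referenced `atom.clj` is not reproduced in this README. A minimal sketch of what such an atom example typically contains (illustrative only, not necessarily the repository's actual code):

```clojure
;; Illustrative atom example
(def counter (atom 0))          ; an atom holds state that can be updated atomically

(swap! counter inc)             ; apply a function to the current value
(swap! counter + 10)            ; swap! accepts extra arguments for the function
(reset! counter 42)             ; replace the value outright

(println "counter =" @counter)  ; dereference with @ (or deref) to read the value
```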
14.6
43
0.684932
eng_Latn
0.577845
f4a21b93cf9be9937c9354b214ac8071ba03c777
979
md
Markdown
examples/b-l475e-iot01a-stm32duino/README.md
CampusIoT/stm32-riotos-demos
8f9ac82dc289bf177d56bb942f979df3608674bd
[ "RSA-MD" ]
3
2020-03-29T12:23:02.000Z
2021-05-21T09:32:03.000Z
examples/b-l475e-iot01a-stm32duino/README.md
CampusIoT/stm32-riotos-demos
8f9ac82dc289bf177d56bb942f979df3608674bd
[ "RSA-MD" ]
null
null
null
examples/b-l475e-iot01a-stm32duino/README.md
CampusIoT/stm32-riotos-demos
8f9ac82dc289bf177d56bb942f979df3608674bd
[ "RSA-MD" ]
2
2020-03-02T11:15:18.000Z
2020-05-18T13:01:20.000Z
# B-L475E-IOT01A Demos with Stm32duino

[B-L475E-IOT01A Discovery kit for IoT node](https://www.st.com/en/evaluation-tools/b-l475e-iot01a.html)'s demonstrations with Stm32duino

![b-l475e-iot01a](./images/b-l475e-iot01a.jpg)

https://github.com/stm32duino/STM32Examples/tree/master/examples/Boards/STM32L475VG-DISCOVERY-IOT

* [Dynamic NFC tag based on M24SR with its printed NFC antenna](./m24sr)
* [2 digital omnidirectional microphones (MP34DT01)](./mp24dt01)
* [Capacitive digital sensor for relative humidity and temperature (HTS221)](./hts221)
* [High-performance 3-axis magnetometer (LIS3MDL)](./lis3mdl)
* [3D accelerometer and 3D gyroscope (LSM6DSL)](./lsm6dsl)
* [260-1260 hPa absolute digital output barometer (LPS22HB)](./lps22hb)
* [Time-of-Flight and gesture-detection sensor (VL53L0X)](./vl53l0x)
* [MQTT PubSub on WiFi access point](./mqtt)
* [LoRaWAN transmission with I-NUCLEO-LRWAN1](./lorawan)

## Applications

* https://github.com/AmieJoni/parois-rocheuses
44.5
135
0.760981
kor_Hang
0.356769
f4a2a53ed4ef1cac8e79a72dea9248a6775312e2
4,333
md
Markdown
src/ResourceManager/Sql/Commands.Sql/help/Set-AzureRmSqlServerDnsAlias.md
muwaqar/azure-powershell
888646dba3f3ccff38f4fd8a9667884b00c8964b
[ "MIT" ]
1
2020-02-13T14:16:26.000Z
2020-02-13T14:16:26.000Z
src/ResourceManager/Sql/Commands.Sql/help/Set-AzureRmSqlServerDnsAlias.md
muwaqar/azure-powershell
888646dba3f3ccff38f4fd8a9667884b00c8964b
[ "MIT" ]
null
null
null
src/ResourceManager/Sql/Commands.Sql/help/Set-AzureRmSqlServerDnsAlias.md
muwaqar/azure-powershell
888646dba3f3ccff38f4fd8a9667884b00c8964b
[ "MIT" ]
1
2020-02-13T14:16:35.000Z
2020-02-13T14:16:35.000Z
--- external help file: Microsoft.Azure.Commands.Sql.dll-Help.xml Module Name: AzureRM.Sql online version: https://docs.microsoft.com/en-us/powershell/module/azurerm.sql/set-azurermsqlserverdnsalias schema: 2.0.0 --- # Set-AzureRmSqlServerDnsAlias ## SYNOPSIS Modifies the server to which Azure SQL Server DNS Alias is pointing ## SYNTAX ``` Set-AzureRmSqlServerDnsAlias -Name <String> -TargetServerName <String> [-ResourceGroupName] <String> -SourceServerName <String> -SourceServerResourceGroupName <String> -SourceServerSubscriptionId <Guid> [-AsJob] [-DefaultProfile <IAzureContextContainer>] [-WhatIf] [-Confirm] [<CommonParameters>] ``` ## DESCRIPTION This command is updating the server to which alias is pointing. This command needs to be issued while logged into subscription where new server to which alias is going to point is located. ## EXAMPLES ### Example 1 ``` PS C:\> Set-AzureRmSqlServerDnsAlias -ResourceGroupName rg -DnsAliasName aliasName -TargetServerName newServer -SourceServerName oldServer -SourceServerResourceGroupName SourceServerRG -SourceServerSubscriptionId 0000-0000-0000-0000 ``` This command is updating alias which was previously pointing to oldServer to point to server newServer ## PARAMETERS ### -AsJob Run cmdlet in the background ```yaml Type: SwitchParameter Parameter Sets: (All) Aliases: Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -DefaultProfile The credentials, account, tenant, and subscription used for communication with azure. ```yaml Type: IAzureContextContainer Parameter Sets: (All) Aliases: AzureRmContext, AzureCredential Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -Name The Azure Sql Server Dns Alias name. ```yaml Type: String Parameter Sets: (All) Aliases: DnsAliasName Required: True Position: Named Default value: None Accept pipeline input: True (ByPropertyName) Accept wildcard characters: False ``` ### -ResourceGroupName The name of the resource group. ```yaml Type: String Parameter Sets: (All) Aliases: TargetResourceGroupName Required: True Position: 0 Default value: None Accept pipeline input: True (ByPropertyName) Accept wildcard characters: False ``` ### -SourceServerName The name of Azure Sql Server to which alias is currently pointing. ```yaml Type: String Parameter Sets: (All) Aliases: Required: True Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -SourceServerResourceGroupName The name of resource group of the source server. ```yaml Type: String Parameter Sets: (All) Aliases: Required: True Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -SourceServerSubscriptionId The subscription id of the source server ```yaml Type: Guid Parameter Sets: (All) Aliases: Required: True Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -TargetServerName The name of Azure Sql Server to which alias should point. ```yaml Type: String Parameter Sets: (All) Aliases: Required: True Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -Confirm Prompts you for confirmation before running the cmdlet. 
```yaml Type: SwitchParameter Parameter Sets: (All) Aliases: cf Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -WhatIf Shows what would happen if the cmdlet runs. The cmdlet is not run. ```yaml Type: SwitchParameter Parameter Sets: (All) Aliases: wi Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### CommonParameters This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see about_CommonParameters (http://go.microsoft.com/fwlink/?LinkID=113216). ## INPUTS ### System.String ## OUTPUTS ### Microsoft.Azure.Commands.Sql.ServerDnsAlias.Model.AzureSqlServerDnsAliasModel ## NOTES ## RELATED LINKS
21.557214
314
0.78006
eng_Latn
0.54361
f4a2a9e15b711cb9b1d98992e70f6e1abbdab403
5,500
md
Markdown
chart/getting-started/quick-start-create-a-data-bound-chart.md
attilaantal/winforms-docs
c311033085e6f770435eaa3c921edde9efcb12dd
[ "MIT" ]
30
2016-02-18T13:23:42.000Z
2021-09-23T01:26:05.000Z
chart/getting-started/quick-start-create-a-data-bound-chart.md
attilaantal/winforms-docs
c311033085e6f770435eaa3c921edde9efcb12dd
[ "MIT" ]
25
2016-03-16T07:13:47.000Z
2021-07-30T13:31:24.000Z
chart/getting-started/quick-start-create-a-data-bound-chart.md
attilaantal/winforms-docs
c311033085e6f770435eaa3c921edde9efcb12dd
[ "MIT" ]
183
2016-02-19T09:56:35.000Z
2022-01-17T18:03:36.000Z
--- title: Quick Start Create a Data Bound Chart page_title: Quick Start Create a Data Bound Chart | UI for WinForms Documentation description: Quick Start Create a Data Bound Chart slug: winforms/chart/getting-started/quick-start:-create-a-data-bound-chart tags: quick,start,create,a,data,bound,chart published: True position: 1 previous_url: chart-getting-started-quick-start-create-a-data-bound-chart --- # Quick Start: Create a Data Bound Chart This Quick Start tutorial is designed to have you quickly up and running with a working data bound RadChart. In the tutorial you will populate the chart by binding to a data source and will modify several properties that affect chart appearance. ## Create a Windows Application 1. From the Visual Studio __File__ menu select __New | Project__. 1. Select the "Windows Application" and enter a location path for the project. ![chart-getting-started-quick-start 001](images/chart-getting-started-quick-start001.png) 1. From the Toolbox drag a RadChart component to the form. The project should now look like the figure below. ![chart-getting-started-quick-start 002](images/chart-getting-started-quick-start002.png) ## Configuring the Data Source 1. From the RadChart Smart Tag menu select the __Data Choose Data Source__ drop down and click the __Add Project Data Source__ item. ![chart-getting-started-quick-start-create-a-data-bound-chart 003](images/chart-getting-started-quick-start-create-a-data-bound-chart003.png) 1. On the __Choose a Data Source Type__ page of the __Data Source Configuration Wizard__ select __Database__ and click the __Next__ button.![chart-getting-started-quick-start-create-a-data-bound-chart 004](images/chart-getting-started-quick-start-create-a-data-bound-chart004.png) 1. On the Choose Your Data Connection page click the __New Connection button__. This will display the __Add Connection. Enter "(local)\SQLEXPRESS"__ in the __Server name__ drop down list. Select "AdventureWorksT" from the __Select or enter a database name__ drop down list. Optionally, you can click the Test Connection button to verify your settings so far. Click the __OK__ button to close the dialog. ![chart-getting-started-quick-start-create-a-data-bound-chart 005](images/chart-getting-started-quick-start-create-a-data-bound-chart005.png) 1. On the __Choose Your Data Connection__ page of the wizard click the __Next__ button. ![chart-getting-started-quick-start-create-a-data-bound-chart 006](images/chart-getting-started-quick-start-create-a-data-bound-chart006.png) 1. On the __Choose Your Database Objects__ page of the wizard locate the ProductInventory table and select the Quantity field. Click the __Finish__ button. Note: In following steps this information will be overwritten and is just a place holder. ![chart-getting-started-quick-start-create-a-data-bound-chart 007](images/chart-getting-started-quick-start-create-a-data-bound-chart007.png) 1. Three new components will appear in the component tray below the form designer: adventureWorksTDataSet, productInventoryBindingSource and productInventoryTableAdapter. Right click the table adapter and select __Edit Queries in DataSet Designer__. ![chart-getting-started-quick-start-create-a-data-bound-chart 008](images/chart-getting-started-quick-start-create-a-data-bound-chart008.png) 1. In the __DataSet Designer__ right click the ProductInventoryTableAdapter and select __Configure__. ![chart-getting-started-quick-start-create-a-data-bound-chart 009](images/chart-getting-started-quick-start-create-a-data-bound-chart009.png) 1. 
Enter the following SQL into the edit space provided in the __Enter a SQL Statement__ page of the __TableAdapter Configuration Wizard__ (the statement is not reproduced in this copy of the tutorial; an illustrative query is sketched after this walkthrough).

1. Click the __Finish__ button.
![chart-getting-started-quick-start-create-a-data-bound-chart 010](images/chart-getting-started-quick-start-create-a-data-bound-chart010.png)

## Format the Chart Using the SmartTag

1. From the __Smart Tag__ select the __RadChart Wizard__. Select the __Type__ tab and click the __Horizontal Orientation__ radio button.
![chart-getting-started-quick-start-create-a-data-bound-chart 011](images/chart-getting-started-quick-start-create-a-data-bound-chart011.png)

1. On the RadChart Wizard __Data__ tab __Axis Labels__, choose "SubCategory" from the X-Axis drop down.
![chart-getting-started-quick-start-create-a-data-bound-chart 012](images/chart-getting-started-quick-start-create-a-data-bound-chart012.png)

1. On the RadChart Wizard __Skin__ tab select the "Deep Blue" skin.
![chart-getting-started-quick-start-create-a-data-bound-chart 013](images/chart-getting-started-quick-start-create-a-data-bound-chart013.png)

1. On the RadChart Wizard __Labels, Legend and Title__ set the __Legend Alignment__ to "Right" and __Title Text__ to "Products".
![chart-getting-started-quick-start-create-a-data-bound-chart 014](images/chart-getting-started-quick-start-create-a-data-bound-chart014.png)

1. The X-Axis labels bound to the "SubCategory" column are too wide and will overflow. To make the plot area smaller to make room, set the __PlotArea.Appearance.Dimensions.Margins.Left__ to "50%".
![chart-getting-started-quick-start-create-a-data-bound-chart 015](images/chart-getting-started-quick-start-create-a-data-bound-chart015.png)

1. The finished chart should look like the screenshot below.
![chart-getting-started-quick-start-create-a-data-bound-chart 016](images/chart-getting-started-quick-start-create-a-data-bound-chart016.png)
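The SQL statement referenced in the TableAdapter step above is not shown in this copy of the tutorial. A query along the following lines (an assumption based on the standard AdventureWorks schema, not the tutorial's exact statement) would return the SubCategory and Quantity columns that the chart-binding steps rely on:

```sql
-- Illustrative only: aggregate inventory quantity per product subcategory
SELECT ps.Name AS SubCategory,
       SUM(pi.Quantity) AS Quantity
FROM Production.ProductInventory AS pi
JOIN Production.Product AS p ON p.ProductID = pi.ProductID
JOIN Production.ProductSubcategory AS ps ON ps.ProductSubcategoryID = p.ProductSubcategoryID
GROUP BY ps.Name
ORDER BY ps.Name;
```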
73.333333
403
0.803273
eng_Latn
0.829454
f4a2ce0d794ad12425ded6ffa21c74cda5d65da6
78
md
Markdown
spring-boot-rest-docs-example/README.md
kchrusciel/spring-boot-examples
b853eabf23a53ad0329fcbe44f30b17fc8568611
[ "MIT" ]
29
2017-09-13T21:01:39.000Z
2020-07-19T16:43:15.000Z
spring-boot-rest-docs-example/README.md
kchrusciel/spring-boot-examples
b853eabf23a53ad0329fcbe44f30b17fc8568611
[ "MIT" ]
null
null
null
spring-boot-rest-docs-example/README.md
kchrusciel/spring-boot-examples
b853eabf23a53ad0329fcbe44f30b17fc8568611
[ "MIT" ]
44
2017-11-23T05:04:46.000Z
2021-03-08T20:59:11.000Z
# Spring Boot REST Docs This repository contains Spring Boot REST Docs example
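As a hint of what the example typically looks like, here is a minimal sketch of a MockMvc test that generates REST Docs snippets (the controller name and endpoint are assumptions, not the repository's actual code):

```java
// Illustrative Spring REST Docs test; "HomeController" and "/" are assumed names.
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.restdocs.AutoConfigureRestDocs;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.test.web.servlet.MockMvc;

import static org.springframework.restdocs.mockmvc.MockMvcRestDocumentation.document;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@WebMvcTest(HomeController.class)
@AutoConfigureRestDocs
class HomeControllerDocumentationTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void documentsHomeEndpoint() throws Exception {
        // Running the test writes AsciiDoc snippets to target/generated-snippets/home
        mockMvc.perform(get("/"))
                .andExpect(status().isOk())
                .andDo(document("home"));
    }
}
```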
39
54
0.833333
eng_Latn
0.573567
f4a35d2417fd883d209e3329e092c33d65dbc052
3,309
md
Markdown
docs/ide/reference/options-dialog-box-projects-and-solutions-build-and-run.md
Eilon/visualstudio-docs
7e3efc6cb5f6169c20eafe0423b2b2a63154649b
[ "CC-BY-4.0", "MIT" ]
834
2017-06-24T10:40:36.000Z
2022-03-31T19:48:51.000Z
docs/ide/reference/options-dialog-box-projects-and-solutions-build-and-run.md
Eilon/visualstudio-docs
7e3efc6cb5f6169c20eafe0423b2b2a63154649b
[ "CC-BY-4.0", "MIT" ]
7,042
2017-06-23T22:34:47.000Z
2022-03-31T23:05:23.000Z
docs/ide/reference/options-dialog-box-projects-and-solutions-build-and-run.md
Eilon/visualstudio-docs
7e3efc6cb5f6169c20eafe0423b2b2a63154649b
[ "CC-BY-4.0", "MIT" ]
1,640
2017-06-23T22:31:39.000Z
2022-03-31T02:45:37.000Z
---
title: Options dialog, Projects and Solutions, Build and Run
description: Specify the maximum number of C++ or C# projects that can build at the same time, certain default build behaviors, and some build log settings in Visual Studio.
ms.custom: SEO-VS-2020
ms.date: 07/14/2017
ms.technology: vs-ide-compile
ms.topic: reference
f1_keywords:
  - "VS.ToolsOptionsPages.Projects.Build_and_Run"
helpviewer_keywords:
  - "builds [Visual Studio], setting up"
  - "run actions"
  - "debugger, run options"
ms.assetid: c884976e-c0df-4c6d-8e3a-856ea2bd547c
author: ghogen
ms.author: ghogen
manager: jmartens
ms.workload:
  - "multiple"
---
# Options dialog box: Projects and Solutions \> Build and Run

In this dialog box, you can specify the maximum number of C++ or C# projects that can build at the same time, certain default build behaviors, and some build log settings. To access these options, select **Tools** > **Options**, expand **Projects and Solutions**, and then select **Build and Run**.

**Maximum number of parallel project builds**

Specifies the maximum number of C++ and C# projects that can build at the same time. To optimize the build process, the maximum number of parallel project builds is automatically set to the number of CPUs of your computer. The maximum is 32.

**Only build startup projects and dependencies on Run**

Builds only the startup project and its dependencies when you use the **F5** key, the **Debug** > **Start Debugging** menu command, or applicable commands on the **Build** menu. If unchecked, all projects and dependencies are built.

**On Run, when projects are out of date**

*Applies to C++ projects only.* When running a project with **F5** or the **Debug** > **Start Debugging** command, the default setting **Prompt to build** displays a message if a project configuration is out of date. Select **Always build** to build the project every time it is run. Select **Never build** to suppress all automatic builds when a project is run.

**On Run, when build or deployment errors occur**

*Applies to C++ projects only.* When running a project with **F5** or the **Debug** > **Start Debugging** command, the default setting **Prompt to launch** displays a message if a project should be run even if the build failed. Select **Launch old version** to automatically launch the last good build, which could result in mismatches between the running code and the source code. Select **Do not launch** to suppress the message.

**For new solutions use the currently selected project as the startup project**

When this option is set, new solutions use the currently selected project as the startup project.

**MSBuild project build output verbosity**

Determines how much information from the build process is displayed in the **Output** window.

**MSBuild project build log file verbosity**

*Applies to C++ projects only.* Determines how much information is written to the build log file, which is located at *\\\<ProjectName>\Debug\\\<ProjectName>.log*.

## See also

- [Compiling and Building](../../ide/compiling-and-building-in-visual-studio.md)
- [Options Dialog Box, Projects and Solutions](projects-and-solutions-options-dialog-box.md)
- [Options Dialog Box, Projects and Solutions, Web Projects](options-dialog-box-projects-and-solutions-web-projects.md)
52.52381
399
0.757631
eng_Latn
0.995844
f4a368251e8537ae98e161f71d38810140f1e25e
2,976
md
Markdown
vendor/dzafel/pinterest-pinner/README.md
yanpingj/PINTEST
d7b69cb688f40ab669a321b83e118da4a45824fe
[ "MIT" ]
null
null
null
vendor/dzafel/pinterest-pinner/README.md
yanpingj/PINTEST
d7b69cb688f40ab669a321b83e118da4a45824fe
[ "MIT" ]
null
null
null
vendor/dzafel/pinterest-pinner/README.md
yanpingj/PINTEST
d7b69cb688f40ab669a321b83e118da4a45824fe
[ "MIT" ]
null
null
null
# PinterestPinner PHP Class Pinterest API is not released yet, so there is no way to programmatically create a pin. So here is this class for - Autoposter, Autopinner, whatever you like to call it. **This is an unofficial API, and likely to change and break at any moment.** _PinterestPinner is not a way to avoid any Pinterest terms, conditions, rules and regulations. Please use the class in accordance with all Pinterest rules. If you abuse the service you will be banned there._ **Please follow the PSR-2 coding standards if you would like to create a pull request.** ## Installation You can easily install PinterestPinner with the following command: `composer require dzafel/pinterest-pinner:dev-master` or alternatively, include a dependency for `dzafel/pinterest-pinner` in your `composer.json` file. For example: ```json { "require": { "dzafel/pinterest-pinner": "dev-master" } } ``` ## How to use it? To add a new pin: ```php try { $pinterest = new PinterestPinner\Pinner; $pin_id = $pinterest->setLogin('Your Pinterest Login') ->setPassword('Your Pinterest Password') ->setBoardID('Pinterest Board ID') ->setImage('Image URL') ->setDescription('Pin Description') ->setLink('Pin Link') ->pin(); } catch (PinterestPinner\PinnerException $e) { echo $e->getMessage(); } ``` You can also get additional info: ```php // Get a list of boards $boards = $pinterest->getBoards(); // Get a list of pins $pins = $pinterest->getPins(); // Get logged in user data $user = $pinterest->getUserData(); ``` ## Changelog ### 2.0.7 (2016-09-22) - FIX: `composer.json` version fix ### 2.0.6 (2016-09-21) - FIX: `getUserData()` should load the user details from `tree > data` array path instead of `resourceDataCache` ([#21](/../../issues/21)) ### 2.0.5 (2016-06-11) - Added Guzzle 6 support (required: `>=5.0`) ### 2.0.4 (2016-03-12) - FIX: `_responseToArray()` now search for config JSON in `<script id="jsInit1">` instead of `P.main.start()` function ([#17](/../../issues/17)) ### 2.0.3 (2015-11-11) - FIX: new `getBoards()` logic, now it returns all boards instead of just first 50 ([#16](/../../issues/16)) ### 2.0.2 (2015-10-05) - FIX: typo in init function - `P.start.start` instead of `P.main.start` ([#15](/../../issues/15)) - FIX: `getBoards()` always returned empty array, because `getPins()` returned pins collection, not full response json ([#15](/../../issues/15)) ### 2.0.1 (2015-09-23) - FIX: init function name changed from `P.scout.init` to `P.main.start` ([#14](/../../issues/14)) - FIX: do `preg_match()` only if response value is a string - NEW: added public `$user_data` variable - NEW: changed some private methods and vars to protected so class can be extended ### 2.0 (2015-04-09) - NEW: Library is now composer friendly - NEW: Added Guzzle dependency ### 1.0.1 (2014-11-02) - FIX: reload CSRF token upon login ### 1.0 (2014-06-04) - Initial release
28.342857
207
0.673723
eng_Latn
0.954819
f4a3d5c4bc9bd89f0cdf686ee33dc88b5795d693
10,461
md
Markdown
articles/marketplace/partner-center-portal/saas-metered-billing.md
Sarah-Kianfar/azure-docs
309a9d26f94ab775673fd4c9a0ffc6caa571f598
[ "CC-BY-4.0", "MIT" ]
1
2020-03-31T05:25:05.000Z
2020-03-31T05:25:05.000Z
articles/marketplace/partner-center-portal/saas-metered-billing.md
Sarah-Kianfar/azure-docs
309a9d26f94ab775673fd4c9a0ffc6caa571f598
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/marketplace/partner-center-portal/saas-metered-billing.md
Sarah-Kianfar/azure-docs
309a9d26f94ab775673fd4c9a0ffc6caa571f598
[ "CC-BY-4.0", "MIT" ]
1
2020-06-18T14:21:42.000Z
2020-06-18T14:21:42.000Z
--- title: Metered billing using the marketplace metering service | Azure Marketplace description: This documentation is a guide for ISVs publishing SaaS offers with flexible billing models. author: dsindona ms.author: dsindona ms.service: marketplace ms.subservice: partnercenter-marketplace-publisher ms.topic: conceptual ms.date: 07/10/2019 --- # Metered billing using the marketplace metering service With the Marketplace metering service, you can create software-as-a-service (SaaS) offers in the commercial marketplace program that are charged according to non-standard units. Before publishing this offer, you define the billing dimensions such as bandwidth, tickets, or emails processed. Customers then pay according to their consumption of these dimensions, with your system informing Microsoft via the Marketplace metering service API of billable events as they occur. ## Prerequisites for metered billing In order for a SaaS offer to use metered billing, it must: * Meet all of the offer requirements for a [sell through Microsoft offer](https://docs.microsoft.com/azure/marketplace/partner-center-portal/create-new-saas-offer#sell-through-microsoft) as outlined in [Create a SaaS offer](https://docs.microsoft.com/azure/marketplace/partner-center-portal/create-new-saas-offer). * Integrate with the [SaaS Fulfillment APIs](https://docs.microsoft.com/azure/marketplace/partner-center-portal/pc-saas-fulfillment-api-v2) for customers to provision and connect to your offer. * Be configured for the **flat rate** pricing model for charging customers for your service. Dimensions are an optional extension to the flat rate pricing model. * Integrate with the [Marketplace metering service APIs](./marketplace-metering-service-apis.md) to inform Microsoft of billable events. >[!Note] >Marketplace metering service is available only to the flat rate billing model, and does not apply to the per user billing model. ## How metered billing fits in with pricing When it comes to defining the offer along with its pricing models, it is important to understand the offer hierarchy. * Each SaaS offer is configured to sell either through Microsoft or not. This setting cannot be changed after an offer is published. * Each SaaS offer, configured to sell through Microsoft, can have one or more plans. A user subscribes to the SaaS offer, but it is purchased through Microsoft within the context of a plan. * Each plan has a pricing model associated with it: **flat rate** or **per user**. All plans in an offer must be associated with the same pricing model. For example, there cannot be an offer where one of its plans is flat rate pricing model, and another is per user pricing model. * Within each plan configured for a flat rate billing model, at least one recurring fee (which can be $0) is included: * Recurring **monthly** fee: flat monthly fee that is pre-paid on a monthly recurrence when user purchases the plan. * Recurring **annual** fee: flat annual fee that is pre-paid on an annual recurrence when user purchases the plan. * In addition to the recurring fees, the plan can also include optional dimensions used to charge customers for usage not included in the flat rate. Each dimension represents a billable unit that your service will communicate to Microsoft using the [Marketplace metering service API](./marketplace-metering-service-apis.md). ## Sample offer As an example, Contoso is a publisher with a SaaS service called Contoso Notification Services (CNS). CNS allows customers to send notifications either via email or text. 
Contoso is registered as a publisher in Partner Center for the commercial marketplace program to publish offers to Azure customers. There are two plans associated with CNS, outlined below: * Base plan * Send 10000 emails and 1000 texts for $0/month * Beyond the 10000 emails, pay $1 for every 100 emails * Beyond the 1000 texts, pay $0.02 for every text * Premium plan * Send 50000 emails and 10000 texts for $350/month * Beyond the 50000 emails, pay $0.5 for every 100 emails * Beyond the 10000 texts, pay $0.01 for every text An Azure customer subscribing to CNS service will be able to send the included quantity of text and emails per month based on the plan selected. Contoso measures the usage up to the included quantity without sending any usage events to Microsoft. When customers consume more than the included quantity, they do not have to change plans or do anything different. Contoso will measure the overage beyond the included quantity and start emitting usage events to Microsoft for additional usage using the [Marketplace metering service API](./marketplace-metering-service-apis.md). Microsoft in turn will charge the customer for the additional usage as specified by the publisher. ## Billing dimensions Billing dimensions are used to communicate to the customer on how they will be billed for using the software, and also to communicate usage events to Microsoft. They are defined as follows: * **Dimension identifier**: the immutable identifier referenced while emitting usage events. * **Dimension name**: the display name associated with the dimension, e.g. "text messages sent". * **Unit of measure**: the description of the billing unit, e.g. "per text message" or "per 100 emails". * **Price per unit**: the price for one unit of the dimension. * **Included quantity for monthly term**: quantity of dimension included per month for customers paying the recurring monthly fee, must be an integer. * **Included quantity for annual term**: quantity of dimension included per month for customers paying the recurring annual fee, must be an integer. Billing dimensions are shared across all plans for an offer. Some attributes apply to the dimension across all plans, and other attributes are plan-specific. The attributes which define the dimension itself are shared across all plans for an offer. Before you publish the offer, a change made to these attributes from the context of any plan will affect the dimension definition across all plans. Once you publish the offer, these attributes will no longer be editable. These attributes are: * Identifier * Name * Unit of measure The other attributes of a dimension are specific to each plan and can have different values from plan to plan. Before you publish the plan you can edit these values and only this plan will be affected. Once you publish the plan, these attributes will no longer be editable. These attributes are: * Price per unit * Included quantity for monthly customers * Included quantity for annual customers Dimensions also have two special concepts, "enabled" and "infinite": * **Enabled** indicates that this plan participates in this dimension. You might want to leave this un-checked if you are creating a new plan that does not send usage events based on this dimension. Also, any new dimensions added after a plan was first published will show up as "not enabled" on the already published plan. A disabled dimension will not show up in any lists of dimensions for a plan seen by customers. 
* **Infinite**, represented by the infinity symbol "∞", indicates that this plan participates in this dimension, but does not meter usage against this dimension. If you want to indicate to your customers that the functionality represented by this dimension is included in the plan, but with no limit on usage. A dimension with infinite usage will show up in lists of dimensions for a plan seen by customers, with an indication that it will never incur a charge for this plan. >[!Note] >The following scenarios are explicitly supported: <br> - You can add a new dimension to a new plan. The new dimension will not be enabled for any already published plans. <br> - You can publish a **flat-rate** plan without any dimensions, then add a new plan and configure a new dimension for that plan. The new dimension will not be enabled for already published plans. ## Constraints ### Trial behavior Metered billing using the marketplace metering service is not compatible with offering a free trial. It is not possible to configure a plan to use both metered billing and a free trial. ### Locking behavior Because a dimension used with the Marketplace metering service represents an understanding of how a customer will be paying for the service, all of the details for a dimension are no longer editable once you publish it. It's important that you have your dimensions fully defined for a plan before you publish. Once an offer is published with a dimension, the offer-level details for that dimension can no longer be changed: * Identifier * Name * Unit of measure Once a plan is published, the plan-level details can no longer be changed: * Price per unit * Included quantity for monthly term * Included quantity for annual term * Whether the dimension is enabled for the plan ### Upper limits The maximum number of dimensions that can be configured for a single offer is 18 unique dimensions. ## Get support If you have one of the following, you can open a support ticket. * Technical issues with marketplace metering service API. * An issue that needs to be escalated because of an error or bug on your side (ex. wrong usage event). * Any other issues related to metered billing. Follow the steps below to submit your support ticket: 1. Go to the [support page](https://support.microsoft.com/supportforbusiness/productselection?sapId=48734891-ee9a-5d77-bf29-82bf8d8111ff). The first few dropdown menus are automatically filled out for you. For Marketplace support, identify the product family as **Cloud and Online Services**, the product as **Marketplace Publisher**. Do not change the pre-populated dropdown menu selections. 2. Under "Select the product version", select **Live offer management**. 3. Under "Select a category that best describe the issue", choose **SaaS apps**. 4. Under "Select a problem that best describes the issue", select **metered billing**. 5. By selecting the **Next** button, you will be directed to the **Issue details** page, where you can enter more details on your issue. See [Support for the commercial marketplace program in Partner Center](https://docs.microsoft.com/azure/marketplace/partner-center-portal/support) for more publisher support options. ## Next steps - See [Marketplace metering service APIs](./marketplace-metering-service-apis.md) for more information.
76.919118
678
0.786063
eng_Latn
0.999468
f4a52f9e3399d0e5e8cbf46074b08cff93c65ce6
841
md
Markdown
tests/unit/test_data/gu_tw/gu_tw/bible/other/barren.md
linearcombination/DOC
4478e55ec81426c15a2c402cb838e76d79741c03
[ "MIT" ]
1
2022-01-10T21:03:26.000Z
2022-01-10T21:03:26.000Z
tests/unit/test_data/gu_tw/gu_tw/bible/other/barren.md
linearcombination/DOC
4478e55ec81426c15a2c402cb838e76d79741c03
[ "MIT" ]
1
2022-03-28T17:44:24.000Z
2022-03-28T17:44:24.000Z
tests/unit/test_data/gu_tw/gu_tw/bible/other/barren.md
linearcombination/DOC
4478e55ec81426c15a2c402cb838e76d79741c03
[ "MIT" ]
3
2022-01-14T02:55:44.000Z
2022-02-23T00:17:51.000Z
# વાંઝણી ## વ્યાખ્યા: “વાંઝણી” (ઉજ્જડ) હોવું તેનો અર્થ એ કે, તે ફળદ્રુપ અથવા ફળદાયી ન હોય. * જમીન અથવા ભૂમિ કે જે ઉજ્જડ છે, તે કંઈ પણ છોડ ઉત્પન કરવા સક્ષમ નથી. * સ્ત્રી કે જે વાંઝણી છે, તે શારીરિક રીતે ગર્ભ ધારણ કરવા અથવા બાળકને જન્મ આપવા અશક્ત છે. ## ભાષાંતરના સુચનો: * જયારે “ઉજ્જડ” શબ્દ જમીન માટે વપરાય છે, તેનું ભાષાંતર “ફળદ્રુપ નથી” અથવા “ફળ નહીં આપનારું” અથવા “છોડપાન રહિત” તેમ થઈ શકે છે. * જયારે આ શબ્દ વાંઝણી સ્ત્રી માટે વપરાય છે, તેનું ભાષાંતર “નિ:સંતાન” અથવા “બાળકોને જન્મ આપવા અસક્ષમ” અથવા “બાળકનો ગર્ભ ધારણ કરવા અશક્ત” તેમ કરી શકાય છે. ## બાઈબલની કલમો: * [1 શમુએલ 2:5](rc://gu/tn/help/1sa/02/05) * [ગલાતી 4:26-27](rc://gu/tn/help/gal/04/26) * [ઉત્પત્તિ 11:29-30](rc://gu/tn/help/gen/11/29) * [અયૂબ 3:6-7](rc://gu/tn/help/job/03/06) ## શબ્દ માહિતી: * Strong's: H4420, H6115, H6135, H6723, H7909, H7921, G692, G4723
33.64
152
0.501784
guj_Gujr
0.99952
f4a5bb6c3b5872a4fcaaf53821da8400b43b8817
81
md
Markdown
persistence-modules/hibernate-ogm/README.md
navintb/tutorials
61201d4ef24121f7a586e2e8e2d804dbfd005981
[ "MIT" ]
6
2019-01-15T03:43:54.000Z
2022-03-30T01:44:31.000Z
persistence-modules/hibernate-ogm/README.md
navintb/tutorials
61201d4ef24121f7a586e2e8e2d804dbfd005981
[ "MIT" ]
8
2020-03-04T22:59:55.000Z
2022-03-02T04:43:40.000Z
persistence-modules/hibernate-ogm/README.md
navintb/tutorials
61201d4ef24121f7a586e2e8e2d804dbfd005981
[ "MIT" ]
19
2019-12-02T20:12:10.000Z
2022-02-13T17:16:37.000Z
## Relevant articles: - [Guide to Hibernate OGM](http://www.baeldung.com/xxxx)
16.2
56
0.703704
kor_Hang
0.444511
f4a6a2220b80040031c59d2a8004b23f6821dd71
1,088
md
Markdown
src/ko/2021-03/06/06.md
Pmarva/sabbath-school-lessons
0e1564557be444c2fee51ddfd6f74a14fd1c45fa
[ "MIT" ]
68
2016-10-30T23:17:56.000Z
2022-03-27T11:58:16.000Z
src/ko/2021-03/06/06.md
Pmarva/sabbath-school-lessons
0e1564557be444c2fee51ddfd6f74a14fd1c45fa
[ "MIT" ]
367
2016-10-21T03:50:22.000Z
2022-03-28T23:35:25.000Z
src/ko/2021-03/06/06.md
Pmarva/sabbath-school-lessons
0e1564557be444c2fee51ddfd6f74a14fd1c45fa
[ "MIT" ]
109
2016-08-02T14:32:13.000Z
2022-03-31T10:18:41.000Z
--- title: 일상에서 경험하는 대쟁투 date: 05/08/2021 --- 이야기를 통해 우리가 이미 알고 있듯이(창 39:11~20), 요셉은 그가 내린 원칙에 따 른 결정으로 인해 고통을 당하게 되었다. 요셉은 결국 감옥에 들어갔다. 하지만 그러 한 비참한 상황에도 불구하고 성경은 “여호와께서 요셉과 함께 하”셨다고(창 39:21) 기록하고 있다. 이 땅에서의 삶은 공정하지 않다. 선한 일을 했다고 해서 항상 보상을 받는 것도 아 니고, 악한 일을 했다고 해서 즉시 벌을 받는 것도 아니다. 그럼에도 불구하고 좋은 소 식이 있다. 요셉은 비록 감옥에 들어가게 되었지만, 하나님께서 함께하심으로 그곳에 서 쉼을 찾을 수 있었다. 감옥에 들어가게 되었을 때 자신에게 일어난 불공정한 상황 을 곱씹으며 하나님을 원망하고 그분을 떠나기로 선택할 수도 있었다. 하지만 요셉은 그렇게 하지 않았다. `요셉은 감옥에 갇혀있는 동안 어떤 일을 했는가? 감옥에서 만난 사람들과 어떻게 관계를 맺고 살았는가? 창 39:21~40:22을 읽어 보라.` 요셉은 감옥에서 이상과는 거리가 먼 현실을 마주하게 되었다. 감옥에서 만난 인연 들은 그가 바라던 이상적인 인간 관계와는 거리가 멀었지만, 그 사람들과 관계를 맺 었고, 그들을 도왔다. 그리고 그 자신도 누군가의 도움이 필요했을 때, 도움 청하기를 주저하지 않았다. 그는 왕의 술을 맡은 관원장의 꿈을 해석해 준 후 자신을 도와 달 라고 요청했다. `엡 6:1~13에서 바울이 제시하는 관계에 관한 거시적인 관점은 무엇인가?` 우리가 이 땅에서 맺고 살아가는 관계들은 오랜 세월 동안 하나님과 사탄 사이에 서 이어져 온 대쟁투의 모습을 반영한다. 이것은 이 세상의 그 어떤 관계도 완벽할 수 없음을 의미한다. 이러한 사실을 잘 알고 있는 사탄은 우리가 관계 맺고 살아가는 사 람들, 그중에서도 특별히 가까운 사람들과의 관계를 이용하여, 우리를 향한 하나님의 계획을 일그러뜨리고 망가뜨리기 위하여 최선을 다한다. **교훈** 사탄은 우리가 이 땅에서 맺고 살아가는 관계를 이용하여 우리를 넘어뜨리려고 하지만, 하나님께서 우리와 함께하시므로 매일의 대쟁투에서 승리할 수 있다.
60.444444
247
0.690257
kor_Hang
1.00001
f4a6f14547bea98f29a837218a05fbb90f66fab6
675
md
Markdown
2021/CVE-2021-44848.md
sei-vsarvepalli/cve
fbd9def72dd8f1b479c71594bfd55ddb1c3be051
[ "MIT" ]
4
2022-03-01T12:31:42.000Z
2022-03-29T02:35:57.000Z
2021/CVE-2021-44848.md
sei-vsarvepalli/cve
fbd9def72dd8f1b479c71594bfd55ddb1c3be051
[ "MIT" ]
null
null
null
2021/CVE-2021-44848.md
sei-vsarvepalli/cve
fbd9def72dd8f1b479c71594bfd55ddb1c3be051
[ "MIT" ]
1
2022-03-29T02:35:58.000Z
2022-03-29T02:35:58.000Z
### [CVE-2021-44848](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-44848) ![](https://img.shields.io/static/v1?label=Product&message=n%2Fa&color=blue) ![](https://img.shields.io/static/v1?label=Version&message=n%2Fa&color=blue) ![](https://img.shields.io/static/v1?label=Vulnerability&message=n%2Fa&color=brighgreen) ### Description In Cibele Thinfinity VirtualUI before 3.0, /changePassword returns different responses for invalid authentication requests depending on whether the username exists. ### POC #### Reference - http://packetstormsecurity.com/files/165327/Cibele-Thinfinity-VirtualUI-2.5.41.0-User-Enumeration.html #### Github No GitHub POC found.
37.5
164
0.762963
kor_Hang
0.261063
f4a6f7b1435270c47417a1c61e3f4959468bdb18
90
md
Markdown
CHANGELOG.md
avalla/avalla.github.io
79be6930a6b75ddcc15398cf57a3b83686a51990
[ "MIT" ]
1
2021-03-01T14:15:02.000Z
2021-03-01T14:15:02.000Z
CHANGELOG.md
avalla/avalla.github.io
79be6930a6b75ddcc15398cf57a3b83686a51990
[ "MIT" ]
6
2021-08-31T22:25:01.000Z
2022-02-27T11:56:26.000Z
CHANGELOG.md
avalla/avalla.github.io
79be6930a6b75ddcc15398cf57a3b83686a51990
[ "MIT" ]
null
null
null
# CHANGELOG ## 1.1.0 - Upgrade dependencies - Some refactoring ## 1.0.0 First release
8.181818
22
0.677778
eng_Latn
0.82425
f4a6f82bd176d54c967c6d9c6d99cb93e166c807
10,264
md
Markdown
README.md
JeremyProffitt/WifiManager-ESP8266-ESP32
e136e8e700164cd142ef871af53137ff58017592
[ "MIT" ]
null
null
null
README.md
JeremyProffitt/WifiManager-ESP8266-ESP32
e136e8e700164cd142ef871af53137ff58017592
[ "MIT" ]
null
null
null
README.md
JeremyProffitt/WifiManager-ESP8266-ESP32
e136e8e700164cd142ef871af53137ff58017592
[ "MIT" ]
null
null
null
# WiFiManager ESP32&&ESP8266 WiFi Connection manager with fallback web configuration portal ## Quick Install using dos prompt and git in windows - use this until released. cd %userprofile%\documents\arduino\libraries git clone https://github.com/JeremyProffitt/WIFIMANAGER-ESP32 git clone https://github.com/JeremyProffitt/DNSServer---esp32 git clone https://github.com/JeremyProffitt/WebServer-esp32 The configuration portal is of the captive variety, so on various devices it will present the configuration dialogue as soon as you connect to the created access point. First attempt at a library. Lots more changes and fixes to do. Contributions are welcome. ## Library [WebServer https://github.com/JeremyProffitt/WebServer-esp32](https://github.com/JeremyProffitt/WebServer-esp32) [DNSServer https://github.com/JeremyProffitt/DNSServer---esp32](https://github.com/JeremyProffitt/DNSServer---esp32) ## How It Looks ![ESP8266 WiFi Captive Portal Homepage](http://i.imgur.com/YPvW9eql.png) ![ESP8266 WiFi Captive Portal Configuration](http://i.imgur.com/oicWJ4gl.png) ### Using - Include in your sketch ```cpp #if defined(ESP8266) #include <ESP8266WiFi.h> #else #include <WiFi.h> #endif //needed for library #include <DNSServer.h> #if defined(ESP8266) #include <ESP8266WebServer.h> #else #include <WebServer.h> #endif #include <WiFiManager.h> ``` - Initialize library, in your setup function add ```cpp WiFiManager wifiManager; ``` - Also in the setup function add ```cpp //first parameter is name of access point, second is the password wifiManager.autoConnect("AP-NAME", "AP-PASSWORD"); ``` if you just want an unsecured access point ```cpp wifiManager.autoConnect("AP-NAME"); ``` or if you want to use an auto-generated name from 'ESP' and the esp's Chip ID use ```cpp wifiManager.autoConnect(); ``` After you write your sketch and start the ESP, it will try to connect to WiFi. If it fails it starts in Access Point mode. While in AP mode, connect to it then open a browser to the gateway IP, default 192.168.4.1, configure wifi, save and it should reboot and connect. Also see [examples](https://github.com/tzapu/WiFiManager/tree/master/examples). ## Documentation #### Password protect the configuration Access Point You can and should password protect the configuration access point. Simply add the password as a second parameter to `autoConnect`. A short password seems to have unpredictable results so use one that's around 8 characters or more in length. The guidelines are that a wifi password must consist of 8 to 63 ASCII-encoded characters in the range of 32 to 126 (decimal) ```cpp wifiManager.autoConnect("AutoConnectAP", "password") ``` #### Callbacks ##### Enter Config mode Use this if you need to do something when your device enters configuration mode on failed WiFi connection attempt. Before `autoConnect()` ```cpp wifiManager.setAPCallback(configModeCallback); ``` `configModeCallback` declaration and example ```cpp void configModeCallback (WiFiManager *myWiFiManager) { Serial.println("Entered config mode"); Serial.println(WiFi.softAPIP()); Serial.println(myWiFiManager->getConfigPortalSSID()); } ``` ##### Save settings This gets called when custom parameters have been set **AND** a connection has been established. Use it to set a flag, so when all the configuration finishes, you can save the extra parameters somewhere. See [AutoConnectWithFSParameters Example](https://github.com/tzapu/WiFiManager/tree/master/examples/AutoConnectWithFSParameters).
```cpp wifiManager.setSaveConfigCallback(saveConfigCallback); ``` `saveConfigCallback` declaration and example ```cpp //flag for saving data bool shouldSaveConfig = false; //callback notifying us of the need to save config void saveConfigCallback () { Serial.println("Should save config"); shouldSaveConfig = true; } ``` #### Configuration Portal Timeout If you need to set a timeout so the ESP doesn't hang waiting to be configured, for instance after a power failure, you can add ```cpp wifiManager.setConfigPortalTimeout(180); ``` which will wait 3 minutes (180 seconds). When the time passes, the autoConnect function will return, no matter the outcome. Check for connection and if it's still not established do whatever is needed (on some modules I restart them to retry, on others I enter deep sleep) #### On Demand Configuration Portal If you would rather start the configuration portal on demand rather than automatically on a failed connection attempt, then this is for you. Instead of calling `autoConnect()` which does all the connecting and failover configuration portal setup for you, you need to use `startConfigPortal()`. __Do not use BOTH.__ Example usage ```cpp void loop() { // is configuration portal requested? if ( digitalRead(TRIGGER_PIN) == LOW ) { WiFiManager wifiManager; wifiManager.startConfigPortal("OnDemandAP"); Serial.println("connected...yeey :)"); } } ``` See example for a more complex version. [OnDemandConfigPortal](https://github.com/tzapu/WiFiManager/tree/master/examples/OnDemandConfigPortal) #### Custom Parameters You can use WiFiManager to collect more parameters than just SSID and password. This could be helpful for configuring stuff like MQTT host and port, [blynk](http://www.blynk.cc) or [emoncms](http://emoncms.org) tokens, just to name a few. **You are responsible for saving and loading these custom values.** The library just collects and displays the data for you as a convenience. Usage scenario would be: - load values from somewhere (EEPROM/FS) or generate some defaults - add the custom parameters to WiFiManager using ```cpp // id/name, placeholder/prompt, default, length WiFiManagerParameter custom_mqtt_server("server", "mqtt server", mqtt_server, 40); wifiManager.addParameter(&custom_mqtt_server); ``` - if connection to AP fails, configuration portal starts and you can set /change the values (or use on demand configuration portal) - once configuration is done and connection is established [save config callback]() is called - once WiFiManager returns control to your application, read and save the new values using the `WiFiManagerParameter` object. ```cpp mqtt_server = custom_mqtt_server.getValue(); ``` This feature is a lot more involved than all the others, so here are some examples to fully show how it is done. You should also take a look at adding custom HTML to your form. - Save and load custom parameters to file system in json form [AutoConnectWithFSParameters](https://github.com/tzapu/WiFiManager/tree/master/examples/AutoConnectWithFSParameters) - *Save and load custom parameters to EEPROM* (not done yet) #### Custom IP Configuration You can set a custom IP for both AP (access point, config mode) and STA (station mode, client mode, normal project state) ##### Custom Access Point IP Configuration This will set your captive portal to a specific IP should you need/want such a feature. 
Add the following snippet before `autoConnect()` ```cpp //set custom ip for portal wifiManager.setAPStaticIPConfig(IPAddress(10,0,1,1), IPAddress(10,0,1,1), IPAddress(255,255,255,0)); ``` ##### Custom Station (client) Static IP Configuration This will make use of the specified IP configuration instead of using DHCP in station mode. ```cpp wifiManager.setSTAStaticIPConfig(IPAddress(192,168,0,99), IPAddress(192,168,0,1), IPAddress(255,255,255,0)); ``` There are a couple of examples in the examples folder that show you how to set a static IP and even how to configure it through the web configuration portal. #### Custom HTML, CSS, Javascript There are various ways in which you can inject custom HTML, CSS or Javascript into the configuration portal. The options are: - inject custom head element You can use this to add any html bit to the head of the configuration portal. If you add a `<style>` element, bear in mind it overwrites the included css rather than replacing it. ```cpp wifiManager.setCustomHeadElement("<style>html{filter: invert(100%); -webkit-filter: invert(100%);}</style>"); ``` - inject a custom bit of html in the configuration form ```cpp WiFiManagerParameter custom_text("<p>This is just a text paragraph</p>"); wifiManager.addParameter(&custom_text); ``` - inject a custom bit of html in a configuration form element Just add the bit you want added as the last parameter to the custom parameter constructor. ```cpp WiFiManagerParameter custom_mqtt_server("server", "mqtt server", "iot.eclipse", 40, " readonly"); ``` #### Filter Networks You can filter networks based on signal quality and show/hide duplicate networks. - If you would like to filter low signal quality networks you can tell WiFiManager to not show networks below an arbitrary quality %; ```cpp wifiManager.setMinimumSignalQuality(10); ``` will not show networks under 10% signal quality. If you omit the parameter it defaults to 8%; - You can also remove or show duplicate networks (default is remove). Use this function to show (or hide) all networks. ```cpp wifiManager.setRemoveDuplicateAPs(false); ``` #### Debug Debug is enabled by default on Serial. To disable, add before autoConnect ```cpp wifiManager.setDebugOutput(false); ``` ### Contributions and thanks The support and help I got from the community has been nothing short of phenomenal. I can't thank you guys enough. This is my first real attempt at developing open source stuff and I must say, now I understand why people are so dedicated to it, it is because of all the wonderful people involved. __THANK YOU__ [Shawn A](https://github.com/tablatronix) [Maximiliano Duarte](https://github.com/domonetic) [alltheblinkythings](https://github.com/alltheblinkythings) [Niklas Wall](https://github.com/niklaswall) [Jakub Piasecki](https://github.com/zaporylie) [Peter Allan](https://github.com/alwynallan) [John Little](https://github.com/j0hnlittle) [markaswift](https://github.com/markaswift) [franklinvv](https://github.com/franklinvv) [Alberto Ricci Bitti](https://github.com/riccibitti) [SebiPanther](https://github.com/SebiPanther) [jonathanendersby](https://github.com/jonathanendersby) [walthercarsten](https://github.com/walthercarsten) Sorry if I have missed anyone. #### Inspiration - http://www.esp8266.com/viewtopic.php?f=29&t=2520
39.476923
295
0.768316
eng_Latn
0.946512
f4a74865629c9f72dfdeb8bf37898c3386a8e2ec
958
md
Markdown
catalog/datenshi-ni-sasageru-uta/en-US_datenshi-ni-sasageru-uta.md
htron-dev/baka-db
cb6e907a5c53113275da271631698cd3b35c9589
[ "MIT" ]
3
2021-08-12T20:02:29.000Z
2021-09-05T05:03:32.000Z
catalog/datenshi-ni-sasageru-uta/en-US_datenshi-ni-sasageru-uta.md
zzhenryquezz/baka-db
da8f54a87191a53a7fca54b0775b3c00f99d2531
[ "MIT" ]
8
2021-07-20T00:44:48.000Z
2021-09-22T18:44:04.000Z
catalog/datenshi-ni-sasageru-uta/en-US_datenshi-ni-sasageru-uta.md
zzhenryquezz/baka-db
da8f54a87191a53a7fca54b0775b3c00f99d2531
[ "MIT" ]
2
2021-07-19T01:38:25.000Z
2021-07-29T08:10:29.000Z
# Datenshi ni Sasageru Uta ![datenshi-ni-sasageru-uta](https://cdn.myanimelist.net/images/manga/2/16537.jpg) - **type**: manga - **volumes**: 1 - **chapters**: 7 - **original-name**: 堕天使に捧げる詩 - **start-date**: 2008-01-20 ## Tags - historical - romance - yaoi ## Authors - Mizukami - Shin (Story & Art) ## Sinopse After their H-packed honeymoon, Colonel Michel Rosenberg and his love, Hans, should be enjoying their peaceful life in the battlefront if not for the enemy led by Colonel Kingstone. Then a new storm struck, in the guise of Colonel Rosenberg’s younger brother, Kurt, who turned out to be an even bigger tyrant than the colonel. Will Hans be able to tolerate Kurt’s tongue-lashing…? Enjoy the sequel of “Koutetsu no Daitenshi” as the lovey-dovey couples increase in the sexually charged battlefront!! (Source: MangaUpdates) ## Links - [My Anime list](https://myanimelist.net/manga/12072/Datenshi_ni_Sasageru_Uta)
29.030303
380
0.723382
eng_Latn
0.896223
f4a7c6261326f4f516b65987c58c372446a44fcc
8,553
md
Markdown
README.md
gaganpa2020/Trading-App-Backend
cd34fa74cb7dc03375e978d48785aaeaf7ecbd8f
[ "Apache-2.0" ]
null
null
null
README.md
gaganpa2020/Trading-App-Backend
cd34fa74cb7dc03375e978d48785aaeaf7ecbd8f
[ "Apache-2.0" ]
null
null
null
README.md
gaganpa2020/Trading-App-Backend
cd34fa74cb7dc03375e978d48785aaeaf7ecbd8f
[ "Apache-2.0" ]
null
null
null
# Trading-App-Backend Backend application for the Trading application. # Prerequisites to run the system 1. Visual Studio 2019 (with all the updates) 2. Docker desktop for Windows (latest) # How to run services using Docker-Compose ### Click on the "start.sh" file -> start.sh -> build-script.sh -> docker-compose up ![image](https://user-images.githubusercontent.com/2247913/114290168-859fd000-9a4b-11eb-888f-058f620d26af.png) For demo click here: https://github.com/gaganpa2020/Trading-App-Backend/blob/main/Demo%20-%20How%20to%20run%20services.gif ![556g0p](https://user-images.githubusercontent.com/2247913/114293924-58154f80-9a68-11eb-8739-585beb6604cd.gif) # Specs # Identified services ![image](https://user-images.githubusercontent.com/2247913/113530494-3150a800-9594-11eb-9c26-108f6bbdb70c.png) # Postman collection to call the services on local https://github.com/gaganpa2020/Trading-App-Backend/blob/main/Gagan_TradingApp_MicroservicesAssignment.postman_collection.json # Entity relationship diagram ![image](https://user-images.githubusercontent.com/2247913/113530582-6230dd00-9594-11eb-988d-ddb136e67b6e.png) # Component diagram ![image](https://user-images.githubusercontent.com/2247913/113530661-87255000-9594-11eb-943a-af0a11819e69.png) # Sequence diagram ![image](https://user-images.githubusercontent.com/2247913/113530685-9f956a80-9594-11eb-8a20-007adb5ac77b.png) # Deployment diagram ![image](https://user-images.githubusercontent.com/2247913/113530757-cce21880-9594-11eb-93fc-8987f6ef575c.png) # Branch deployment strategy with environments ![image](https://user-images.githubusercontent.com/2247913/113530791-ed11d780-9594-11eb-9603-c14d4b9d9763.png) # Assignment use cases: ### Assumptions 1. Final deployment will be done on Azure cloud app service. 2. Service will have a scale up strategy defined on the app service plan to maintain scalability. 3. All infra is deployed under Virtual network and only API management gateway will expose the endpoints to outside world. 4. For communication between the services, we will use private endpoints. This will take care of security. 5. *All integration endpoints are mock in the given solution. 6. All the DB work is mocked with static collections. In real world, each service will maintain own databases. 7. Payment gateway (to validate customer's bank account details) would be a mock. 8. Application is protected by JWT token. Current implementation hardcode username/password. 9. To send notification, we have a Notification service. However its code is not done. In all the services, its providers are being called. Those are empty inside. 10. Stock Exchange will always give us live data for all the tickers. We will have a wrapper service to maintain relevant data in our system. 11. Last but not least, each usecase have corresponding collection defined in postman for execution (remember all services/containers should be up). ## Use Case-1 : A customer can deposit, withdraw cash from his trading account to bank account. 1. Consumer can link his account. 2. Consumer can credit/debit his account. 3. Payment gatways will be used in real world application to verify user's financial data. 4. On each trancation, Transaction history service will recieve an event. These events are maintained to keep track of user's transaction. 5. Trading cache will be populated as the result of each action, this is speedup automated transactions. 6. Notification service will be called to notify consume on the transaction status. 
### Use Case-1 (Diagram) ![image](https://user-images.githubusercontent.com/2247913/113531336-7e357e00-9596-11eb-9d00-63670cfe7fa3.png) ## Use Case-2 : A customer can trade stocks (buy/sell) for the listed companies in the stock exchange. 1. Consumers (UI or other app) will Trade service to place Buy/Sell order. 2. Trade service will verify ticker's data (price etc) using Exchange service (our wrapper on Stock exchange). 3. Account service is being used to validate user's financial required for transaction. We will use 'read through/write behind strategy' to speedup transactions. 4. Transaction history service will recieve an event to save it for future use. 5. Trade service will drop a message in queue for notification service to send notifications to the user for transaction status. ### Use Case-2 (Diagram) ![image](https://user-images.githubusercontent.com/2247913/113531946-2ac42f80-9598-11eb-8178-0a84c8cb2b65.png) ## Use Case-3 : A customer can view the live/historical stock exchange rates. 1. An azure function will listen to the stock exchange & write all the relevant data in the DB. 2. Exchange service will be used to expose the data in desirable format to the user to see live/historical/ Graph etc. ### Use Case-3 (Diagram) ![image](https://user-images.githubusercontent.com/2247913/113532093-8989a900-9598-11eb-8c37-e9cf18d12c6d.png) ## Use Case-4 : A customer can view his transactional history. 1. Transaction history service will get events on each transactions. 2. Service will store the events using event sourcing pattern. 3. Service will take periodic snapshot to create Materialized view for the user. 4. Data in the transaction history will eventually consistant. ### Use Case-4 (Diagram) ![image](https://user-images.githubusercontent.com/2247913/113532563-a2df2500-9599-11eb-8995-b84f4704deec.png) ## Use Case-5 : A customer can set triggers to automatically buy/sell certain stock holdings when they reach a set threshold price. 1. Consume can define trigger on the account service. 2. Price monitoring job would be like Azure stream analytics or some similar solution to process high amount of data. 3. Each time a user will define any trigger, it will be registered as a *Rule* on the price monitoring job. 4. Price monitoring job will process the real time data and eventually push message in queue if any conditions is met on the rule. 5. Price monitoring job is similated through Postman call using Azure service bus. 6. Trade service will define subscription on the queue. 7. As soon as any message is landed on the queue, it will be picked by either of the instances of Trade service and processed. 8. Trade service will run on the premium tier in the Trading hours will maximum no of instances. 9. Trading cache will be used/populated by the trade service using Read through/write behind mechanisum. 10. Trade service will also initiate event for account service to update trigger status. ### Use Case-5 (Diagram) ![image](https://user-images.githubusercontent.com/2247913/113532718-0cf7ca00-959a-11eb-9aed-bfb1c7c8bda8.png) ## Use Case-6: Notifications should be triggered for all account related activities like deposit, withdrawals, purchases, sales and triggers. 1. All above use cases have a notification provider call defined. 2. In real world application provider will be calling the notification service to register an event to be processed. 3. Later on, notification service will process the notification request recieved. 
### Note: all the above used cases are defined in Postman collection, services must be up in order use it. As an alternative, you can also use swagger to call services. It is enabled on all the services. # Cross cutting concerns ## Logging 1. Each service will have its dedicated application insight defined. 2. Application insight will hold all the system/custom logs, which can be further used to see the request trends. ## Exception handling 1. Providers have defined following patterns for exception handling & resiliency: - a. Retry mechanism b. Circuit breaker (checkout Shared code folder for providers) 2. Custom exception handling is also done on the module basis. ## High Availability 1. All the instances would be Load balanced for App services and will use the Azure feature 'high-avalbility' while defining resources. 2. Always-on : This feature will be enabled for the azure services to ensure it is not going down incase of any exceptions. 3. For deployment, we will consider Blue-Green deployment strategy to ensure absolute zero downtime. ## Scalability 1. Scalabilty rules will be defined at the App Service Plan level. 2. Caching/data store would be distributed, so that system can easily support scalling. ## Security 1. All the service will be deployed in virtual network. 2. Services will call each other using private endpoint. 3. API Gateway will be used to expose endpoints to the outside world. 4. JWT token will secure the API endpoints.
59.811189
204
0.787911
eng_Latn
0.97184
f4a8806e8b0d92567e9b9d9d3171964658bdb110
1,048
md
Markdown
content/cloud-references/auto-disk-provisioning/vsphere/vsphere-shared-arch.md
DSridharC/pxdocs
262df75665026d84626082f3f48848c5b61997d4
[ "Apache-2.0" ]
null
null
null
content/cloud-references/auto-disk-provisioning/vsphere/vsphere-shared-arch.md
DSridharC/pxdocs
262df75665026d84626082f3f48848c5b61997d4
[ "Apache-2.0" ]
null
null
null
content/cloud-references/auto-disk-provisioning/vsphere/vsphere-shared-arch.md
DSridharC/pxdocs
262df75665026d84626082f3f48848c5b61997d4
[ "Apache-2.0" ]
null
null
null
--- title: Portworx VMware shared architecture description: Portworx VMware shared architecture keywords: portworx, VMware, vSphere ASG hidden: true --- The diagram below gives an overview of the Portworx architecture on vSphere using shared datastores. * Portworx runs as a DaemonSet, hence each Kubernetes minion/worker will have the Portworx daemon running. * Based on the given spec by the end user, Portworx on each node will create its disk on the configured shared datastore(s) or datastore cluster(s). * Portworx will aggregate all of the disks and form a single storage cluster. End users can carve PVCs (Persistent Volume Claims), PVs (Persistent Volumes) and Snapshots from this storage cluster. * Portworx tracks and manages the disks that it creates. So in a failure event, if a new VM spins up, Portworx on the new VM will be able to attach to the same disk that was previously created by the node on the failed VM. ![Portworx architecture for PKS on vSphere using shared datastores or datastore clusters](/img/pks-vsphere-shared.png)
65.5
222
0.795802
eng_Latn
0.994429
f4aa174476b3a867d9e9ca3adb0b6e771c9fadb6
1,205
md
Markdown
docs/extensions/get-started/wrapping-up.md
pjsoni/lens
105c875c84e6a305a757961cf2b9e32826651e63
[ "MIT" ]
17,937
2020-03-15T08:24:54.000Z
2022-03-31T16:13:35.000Z
docs/extensions/get-started/wrapping-up.md
ZVA90/lens
b692bc3603bdea8055dddc60f11b480b4ca08fcc
[ "MIT" ]
3,316
2020-03-15T08:03:37.000Z
2022-03-31T22:36:26.000Z
docs/extensions/get-started/wrapping-up.md
ZVA90/lens
b692bc3603bdea8055dddc60f11b480b4ca08fcc
[ "MIT" ]
957
2020-03-15T20:35:29.000Z
2022-03-30T18:50:34.000Z
# Wrapping Up In [Your First Extension](your-first-extension.md), you learned how to create and run an extension. In [Extension Anatomy](anatomy.md), you learned in detail how a basic extension works. This is just a glimpse into what can be created with Lens extensions. Below are some suggested routes for learning more. ## Extension Capabilities In this section, you'll find information on common extension capabilities, styling information, and a color reference guide. Determine whether your idea for an extension is doable and get ideas for new extensions by reading through the [Common Capabilities](../capabilities/common-capabilities.md) page. ## Guides and Samples Here you'll find a collection of sample extensions that you can use as a base to work from. Some of these samples include a detailed guide that explains the source code. You can find all samples and guides in the [lens-extension-samples](https://github.com/lensapp/lens-extension-samples) repository. ## Testing and Publishing In this section, you can learn: * How to add [integration tests](../testing-and-publishing/testing.md) to your extension * How to [publish your extension](../testing-and-publishing/publishing.md)
48.2
178
0.788382
eng_Latn
0.997604
f4aac8abea36db9579e226a1931589c470b626cf
6,307
md
Markdown
README.md
AminoBatlog/hexo-theme-raytaylorism
9859ab8ce731190a9dd71b1a2d49c05a76f2cac1
[ "MIT" ]
574
2015-01-02T06:21:14.000Z
2022-02-28T09:05:33.000Z
README.md
AminoBatlog/hexo-theme-raytaylorism
9859ab8ce731190a9dd71b1a2d49c05a76f2cac1
[ "MIT" ]
121
2015-03-10T06:37:47.000Z
2021-12-29T04:46:19.000Z
README.md
AminoBatlog/hexo-theme-raytaylorism
9859ab8ce731190a9dd71b1a2d49c05a76f2cac1
[ "MIT" ]
231
2015-01-06T11:55:48.000Z
2021-11-13T02:15:28.000Z
# hexo-theme-raytaylorism v2 raytaylorism(Ray Taylor主义)是我自己设计并制作的一款清新的的响应式Material Design风格的[Hexo]主题。该主题支持最新的Hexo 3.1版本。**本主题不再支持Hexo 2.x版本,请使用本主题前备份你的数据并升级到Hexo 3。** (English document is coming soon...) ## 预览 * [我的博客] * [主题截图1](https://raytaylorlin-blog.oss-cn-shenzhen.aliyuncs.com/image%2Fscreenshot%2Fscreenshot1.jpg) * [主题截图2](https://raytaylorlin-blog.oss-cn-shenzhen.aliyuncs.com/image%2Fscreenshot%2Fscreenshot2.jpg) * [主题截图3](https://raytaylorlin-blog.oss-cn-shenzhen.aliyuncs.com/image%2Fscreenshot%2Fscreenshot3.jpg) * [主题截图4](https://raytaylorlin-blog.oss-cn-shenzhen.aliyuncs.com/image%2Fscreenshot%2Fscreenshot4.jpg) ## 安装 ``` cd yourblog git clone https://github.com/raytaylorlin/hexo-theme-raytaylorism.git themes/raytaylorism ``` 请不定期`git pull`一下主题以便获得最新的功能。**请在pull之前先备份好你原来的配置。** ## 启用(重要) 1. 修改 `_config.yml` 中的`theme`一项的值为`raytaylorism` 2. 由于本主题使用了[Data Files]数据文件和额外的layout文件,所以请复制以下文件到你的博客目录中,否则在启动server时可能会报错 * **复制`yourblog/themes/raytaylorism/_data`文件夹到`yourblog/source`目录下** * **复制`yourblog/themes/raytaylorism/_md/`下所有文件夹(about和reading)到`yourblog/source`目录下** 3. 在你的`yourblog/_config.yml`配置文件的`#pagination`的位置添加下面配置(禁用归档、标签、目录页面的分页功能) ``` archive_generator: per_page: 0 tag_generator: per_page: 0 category_generator: per_page: 0 ``` ## 配置指南(重要) 我的博文[《新版Hexo主题Raytaylorism v2发布》](http://raytaylorlin.com/daily/hexo-theme-raytaylorism-v2/)写了一些本指南没有涉及到的主题使用小技巧,可以作为参考。其余配置细节,还是以本指南的说明为主。 ### 样式 * **主题颜色配置**:如果对主题的配色不满意,可以自行在`yourblog/themes/raytaylorism/_config.yml`中的`color`一项进行配置。其中各部件的颜色字符串命名遵循[Materializecss色板]规范。注意:`link`、`article_title_link`和`tab`配置的是文字的颜色,**因此不可以给这几项配置`lighten`和`darken`的颜色加亮加暗的后缀**。 * **页面标题**:在`yourblog/_config.yml`中,`title`项决定了页面header中显示的标题,`subtitle`决定了浏览器的`<title>`标签内容。 * **favicon**:请将`yourblog/themes/raytaylorism/source/favicon.png`替换为你自己的图标文件,**保持`favicon.png`命名不变**。 * **多语言**:目前主题支持简体中文、繁体中文和英文三种语言,可以将`yourblog/_config.yml`中`language`一项设置为`zh-CN`、`zh-TW`、`en`实现 * **正文宽度问题**:有许多使用者反映正文在大屏幕下显得太窄(默认为700px定宽),这是**出于提升文章阅读体验的考虑,在PC端上宽屏一行不至于过长,参考了UI设计师的建议以及一些知名博客类网站如[medium.com](https://medium.com/)、[简书](http://www.jianshu.com/)等等才做出的调整。**如果依旧对这样的宽度不满意,可以自行调整`yourblog/themes/raytaylorism/source/css/_base/lib_customize.styl`中的`.container`类的宽度设置 ### 数据 * **外部链接**:在`yourblog/source/_data/link.json`数据文件中进行配置。如果不需要以下两项,直接把`link.json`删除即可。 * 社交平台:对应`social`项,预设有`weibo`和`github`两种,如果需要其他社交平台可自行追加,但要注意**key值必须与[Font Awesome图标]相对应,否则可能无法正常显示**。 * 友情链接:对应`extern`项,其中key值为链接文字,value值为外链URL * **首页幻灯片**:在`yourblog/source/_data/slider.json`数据文件中进行配置。可以配置背景图、标题、副标题、对齐方式。如果不需要幻灯片,直接把`slider.json`删除即可。 * **关于页面**:`yourblog/themes/raytaylorism/_md/about/index.md`文件为自我介绍的正文,只需要像平时写博文一样正常地书写markdown即可。在`yourblog/source/_data/about.json`数据文件中配置关于页面的其他项。 * `avatar`:String类型,头像图片链接 * `name`:String类型,自己的姓名 * `tag`: String类型,描述自己的标签,**主要显示在侧滑栏的头部** * `desc`:String类型,对自己的简短描述 * `skills`:Object类型,对象技能展示。对象key值为技能名,value值为评分(取值为0-10的整数),取值为-1为分隔线。若不需要则将该字段设为null * `projects`:Array类型,作品与项目展示,内含多个Object,每个Object都有`name`作品名、`image`封面、`description`作品描述、`link_text`链接文字、`link`链接地址。若不需要则将该字段设为null * `reward`:Array类型,打赏二维码图片列表。例子中两个图片分别为微信和支付宝的二维码图片链接。若不需要则将该字段设为null * **读书页面**:在`yourblog/source/_data/reading.json`数据文件中进行配置。读书页面有“已读”“在读”和“想读”三栏,分别对应`contents`项中的`readed`、`reading`和`wanted`字段,每个字段对应一个书籍列表,按照例子进行修改即可。 * **new标签**:在`yourblog/source/_data/hint.json`数据文件中进行配置。`selector`项是一个数组,里面可以包含若干个CSS选择器用于选择要添加new标签的DOM元素。 ### 插件 * **代码语法高亮**:语法高亮的主题默认由CSS文件`yourblog/themes/raytaylorism/source/css/lib/prettify-tomorrow-night-eighties.css`。如果需要替换,可以到[Prettify 
Theme]选择你喜欢的主题,下载主题的CSS文件并存放到相同的目录下,并将`yourblog/themes/raytaylorism/_config.yml`中的`google_code_prettify_theme`一项改为对应的文件名。 * **评论**:~评论插件默认使用[多说],需要自行配置`yourblog/themes/raytaylorism/_config.yml`中的`duoshuo_shortname`为你自己站点的shortname~(多说即将关闭服务)评论插件默认使用[网易云跟帖],需要自行配置`yourblog/themes/raytaylorism/_config.yml`中的`yungentie_product_key`为你自己站点的productKey(从通用代码中获取) * **搜索**:安装[hexo-generator-search],在`yourblog/_config.yml`中添加如下配置代码。如果不需要搜索功能,将`yourblog/themes/raytaylorism/_config.yml`中`menu`的`-id: search`那一整项删除即可 ``` search: path: search.xml field: all ``` * **RSS**:安装[hexo-generator-feed],并按照说明配置(`atom.xml`的链接写在`yourblog/source/_data/link.json`的social项中,一般无需更改) * **站点分析**: * Google分析:`yourblog/themes/raytaylorism/_config.yml`中的`google_analytics`一项改为你的**Google分析track id**,留空则不启用 * 腾讯分析:(国内用户有Google分析被墙的可能)`yourblog/themes/raytaylorism/_config.yml`中的`tencent_analytics`一项改为你的**sId**(在腾讯分析添加站点后,复制代码中`sId=xxxxxxxx`那串数字就是sId),留空则不启用 * 如果你需要其他第三方的站点统计,可以仿照上面的例子添加配置,并在`yourblog/themes/raytaylorism/layout/_partial/plugin/analytics.ejs`中添加相应的统计代码 * **网页计数器**:使用不蒜子来统计PV和UV,若不需要可将`yourblog/themes/raytaylorism/_config.yml`中的`page_stat`设为false ## 使用的插件 * 样式框架:[Materialize] * 代码语法高亮:[Google-code-prettify] * 流量分析:[Google Analytics]、[腾讯分析] * 第三方社会化评论:[多说] ## 更新日志 * 2.3.3(2017-4-29) 新增网易云跟帖评论插件支持 * 2.3.2(2016-12-21) 优化文章目录,使其随正文内容滚动(https://github.com/raytaylorlin/hexo-theme-raytaylorism/pull/49) * 2.3.1(2016-12-14) 开放文章上一篇和下一篇功能 * 2.3.0(2016-12-10) 新增站点PV、UV和文章阅读量统计 * 2.2.3(2016-10-1) 修复首页非第1页点击READMORE路径错误的问题,修复多行代码由于空行不占位导致显示错位的问题 * 2.2.2(2016-6-7) 更新jQuery和Materialize库至最新版,修复正文右侧目录在某些浏览器无法正常导航的问题 * 2.2.1(2016-5-14) 添加打开搜索框时自动聚焦的功能 * 2.2.0(2016-4-22) 新增搜索功能 * 2.1.3(2016-4-13) 修复多行代码被挤到下方的显示问题 * 2.1.2(2016-4-5) 优化二级无序列表的样式,修复标签页和分类页的pagenav链接重复显示了两次导致404的问题 * 2.1.1(2016-3-29) 优化正文表格和引用的显示 * 2.1.0(2016-3-28) 增加对繁体中文和英文的支持 * 2.0.0-alpha(2016-3-14) 发布raytaylorism v2 alpha版本 [历史记录](log.md) [Hexo]: http://hexo.io/ [我的博客]: https://raytaylorlin.github.io/ [Data Files]: https://hexo.io/docs/data-files.html [Materializecss色板]: http://materializecss.com/color.html#palette [Font Awesome图标]: https://fortawesome.github.io/Font-Awesome/icons/ [Prettify Theme]: http://jmblog.github.io/color-themes-for-google-code-prettify/ [hexo-generator-search]: https://github.com/PaicHyperionDev/hexo-generator-search [hexo-generator-feed]: https://github.com/hexojs/hexo-generator-feed [Materialize]: http://materializecss.com/ [Google-code-prettify]: https://code.google.com/p/google-code-prettify/ [Google Analytics]: http://www.google.com/analytics/ [腾讯分析]: http://v2.ta.qq.com/ [Furatto]: http://icalialabs.github.io/furatto/ [Font Awesome]: http://fortawesome.github.io/Font-Awesome/ [多说]: http://duoshuo.com/
49.661417
281
0.776439
yue_Hant
0.646793
f4ab2ac422046e2042400c7b784a0a2442375a7e
657
md
Markdown
lab1/README.md
daniiliv/suai_tp
abb9d15b649e140b406ba1c7738988bcea2cc8a4
[ "MIT" ]
null
null
null
lab1/README.md
daniiliv/suai_tp
abb9d15b649e140b406ba1c7738988bcea2cc8a4
[ "MIT" ]
6
2018-10-02T06:56:47.000Z
2018-11-26T08:39:49.000Z
lab1/README.md
daniiliv/suai_tp
abb9d15b649e140b406ba1c7738988bcea2cc8a4
[ "MIT" ]
null
null
null
# SUAI_TP ## Лаборатная работа 1<br/> Определить класс с именем TRAIN, содержащий следующие поля:<br/> - название пункта назначения; - номер поезда; - время отправления.<br/> Определить методы доступа к этим полям и перегруженные операции извлечения и вставки для объектов типа TRAIN.<br/> Написать программу, выполняющую следующие действия: - ввод с клавиатуры данных в массив, состоящий из шести объектов типа TRAIN; записи должны быть упорядочены по времени отправления поезда; - вывод на экран информации о поездах, направляющихся в пункт, название которого введено с клавиатуры; - если таких поездов нет, выдать на дисплей соответствующее сообщение.
46.928571
114
0.802131
rus_Cyrl
0.99057
f4aba8fe8eda2313a474a377b7c0d0af1ae2d75d
1,519
md
Markdown
docs/connect/jdbc/reference/supportsconvert-method-sqlserverdatabasemetadata.md
kmivan/sql-docs
f3ec6088e1903e5cbeed53195cf6059b575ea051
[ "CC-BY-4.0", "MIT" ]
1
2021-11-30T17:45:23.000Z
2021-11-30T17:45:23.000Z
docs/connect/jdbc/reference/supportsconvert-method-sqlserverdatabasemetadata.md
kmivan/sql-docs
f3ec6088e1903e5cbeed53195cf6059b575ea051
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/connect/jdbc/reference/supportsconvert-method-sqlserverdatabasemetadata.md
kmivan/sql-docs
f3ec6088e1903e5cbeed53195cf6059b575ea051
[ "CC-BY-4.0", "MIT" ]
1
2021-12-11T04:16:24.000Z
2021-12-11T04:16:24.000Z
--- description: "supportsConvert Method (SQLServerDatabaseMetaData)" title: "supportsConvert Method (SQLServerDatabaseMetaData) | Microsoft Docs" ms.custom: "" ms.date: "01/19/2017" ms.prod: sql ms.prod_service: connectivity ms.reviewer: "" ms.technology: connectivity ms.topic: reference apiname: - "SQLServerDatabaseMetaData.supportsConvert" apilocation: - "sqljdbc.jar" apitype: "Assembly" ms.assetid: a5dbb5d8-41c2-48af-9b71-93a22f6a9b33 author: David-Engel ms.author: v-davidengel --- # supportsConvert Method (SQLServerDatabaseMetaData) [!INCLUDE[Driver_JDBC_Download](../../../includes/driver_jdbc_download.md)] Retrieves whether this database supports the CONVERT function between SQL types. ## Overload List |Name|Description| |----------|-----------------| |[supportsConvert ()](../../../connect/jdbc/reference/supportsconvert-method.md)|Retrieves whether this database supports the CONVERT function between SQL types.| |[supportsConvert (int, int)](../../../connect/jdbc/reference/supportsconvert-method-int-int.md)|Retrieves whether this database supports the CONVERT for two given SQL types.| ## See Also [SQLServerDatabaseMetaData Methods](../../../connect/jdbc/reference/sqlserverdatabasemetadata-methods.md) [SQLServerDatabaseMetaData Members](../../../connect/jdbc/reference/sqlserverdatabasemetadata-members.md) [SQLServerDatabaseMetaData Class](../../../connect/jdbc/reference/sqlserverdatabasemetadata-class.md)
39.973684
178
0.730086
kor_Hang
0.261304
f4acbdd7d9a7b078345a3e117cedc090d4dbfd7f
67
md
Markdown
README.md
NickSegalle/Arbonne-AppDynamics
f7cccfdc56de38b57d472e73a85e7294f7f06bd5
[ "MIT" ]
null
null
null
README.md
NickSegalle/Arbonne-AppDynamics
f7cccfdc56de38b57d472e73a85e7294f7f06bd5
[ "MIT" ]
null
null
null
README.md
NickSegalle/Arbonne-AppDynamics
f7cccfdc56de38b57d472e73a85e7294f7f06bd5
[ "MIT" ]
null
null
null
# ApplicationDynamics Application Dynamics PowerShell Integrations
22.333333
44
0.895522
kor_Hang
0.888255
f4ad277f03e91fdf7a906b188012fb3d28d63780
695
md
Markdown
README.md
iagobruno/modern-carousel-with-CSS
9a105b172a41c281050390d0d8646ae2c190dc59
[ "MIT" ]
null
null
null
README.md
iagobruno/modern-carousel-with-CSS
9a105b172a41c281050390d0d8646ae2c190dc59
[ "MIT" ]
null
null
null
README.md
iagobruno/modern-carousel-with-CSS
9a105b172a41c281050390d0d8646ae2c190dc59
[ "MIT" ]
null
null
null
# slideshow-with-css-scroll-snap A touch and keyboard friendly slideshow made with [CSS scroll snap](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Scroll_Snap) **without dependencies**. [LIVE DEMO](https://iagobruno.github.io/slideshow-with-css-scroll-snap/) ## Features - ✅ Touch support. - ✅ Keyboard support. - ✅ Paging indicator. - ✅ Navigation with next and previous buttons. - ✅ Mousewheel support. - ✅ Auto play. ## Browser support - https://caniuse.com/#feat=css-snappoints - https://caniuse.com/#feat=css-scroll-behavior - https://caniuse.com/#feat=mdn-api_scrolltooptions ## License [MIT](https://github.com/iagobruno/slideshow-with-css-scroll-snap/blob/master/LICENSE)
27.8
159
0.746763
eng_Latn
0.310053
f4ad973c497606d7f364a14041042335878e932f
17,384
md
Markdown
repos/buildpack-deps/remote/testing-scm.md
guruantree/repo-info
c158f66a8e5c5ac17da885e25f91fb53703b144c
[ "Apache-2.0" ]
1
2021-12-28T13:40:31.000Z
2021-12-28T13:40:31.000Z
repos/buildpack-deps/remote/testing-scm.md
guruantree/repo-info
c158f66a8e5c5ac17da885e25f91fb53703b144c
[ "Apache-2.0" ]
null
null
null
repos/buildpack-deps/remote/testing-scm.md
guruantree/repo-info
c158f66a8e5c5ac17da885e25f91fb53703b144c
[ "Apache-2.0" ]
null
null
null
## `buildpack-deps:testing-scm` ```console $ docker pull buildpack-deps@sha256:1a23db5d05508d3ddff48ce233b299ecffde8d6125a48add8a06058dbe950388 ``` - Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json` - Platforms: - linux; amd64 - linux; arm variant v5 - linux; arm variant v7 - linux; arm64 variant v8 - linux; 386 - linux; mips64le - linux; ppc64le - linux; s390x ### `buildpack-deps:testing-scm` - linux; amd64 ```console $ docker pull buildpack-deps@sha256:ba62d0a2b0f3696140d8cfc9b555ffdbd0f2ed59dacb089bfd46adac79252d61 ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **125.5 MB (125451306 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:de05919903979f6602d333781a7e38a9543fb42b8a0946f7d99495b7abc4281b` - Default Command: `["bash"]` ```dockerfile # Sat, 10 Apr 2021 01:19:44 GMT ADD file:09ffbc0a4ab7c70a3071740e19113a2f2d61593241bfb8455aeeea7877b8784c in / # Sat, 10 Apr 2021 01:19:45 GMT CMD ["bash"] # Sat, 10 Apr 2021 01:52:57 GMT RUN set -eux; apt-get update; apt-get install -y --no-install-recommends ca-certificates curl netbase wget ; rm -rf /var/lib/apt/lists/* # Sat, 10 Apr 2021 01:53:03 GMT RUN set -ex; if ! command -v gpg > /dev/null; then apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; rm -rf /var/lib/apt/lists/*; fi # Sat, 10 Apr 2021 01:53:20 GMT RUN apt-get update && apt-get install -y --no-install-recommends git mercurial openssh-client subversion procps && rm -rf /var/lib/apt/lists/* ``` - Layers: - `sha256:cf7e31f852204930ef60bd4c12f9606812c0b9ba6235e2e46e1a5900f2db9d30` Last Modified: Sat, 10 Apr 2021 01:23:56 GMT Size: 54.9 MB (54868257 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:3e6dc39acac51e10c09aed5f7835e7a99a2a9680be75d2352924fdee6a2f744f` Last Modified: Sat, 10 Apr 2021 02:00:36 GMT Size: 5.2 MB (5151329 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:c450329263cd7b5d35ad44c880afbb2268b05ee361a4ffb617cc58d422bec81d` Last Modified: Sat, 10 Apr 2021 02:00:36 GMT Size: 10.9 MB (10867006 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:f7ac119459a7d1a89ed94746e1639c3afde989102e0ea3a2ed86a6809bedc599` Last Modified: Sat, 10 Apr 2021 02:00:58 GMT Size: 54.6 MB (54564714 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `buildpack-deps:testing-scm` - linux; arm variant v5 ```console $ docker pull buildpack-deps@sha256:1189357f82628653cc8bf7b1af02f2bff5a5f7ae86e9dbf31dacd6eb9e2be6a6 ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **120.3 MB (120346466 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:3e2b69f7678a204d15073c390ce2db7b7c16e691f7379cb8b1c3070b9798e320` - Default Command: `["bash"]` ```dockerfile # Sat, 10 Apr 2021 00:50:12 GMT ADD file:37fd08ef57f840011e25ed9f68d8b5197a5378b1f7bebee7fa587d5e7d561844 in / # Sat, 10 Apr 2021 00:50:15 GMT CMD ["bash"] # Sat, 10 Apr 2021 01:57:22 GMT RUN set -eux; apt-get update; apt-get install -y --no-install-recommends ca-certificates curl netbase wget ; rm -rf /var/lib/apt/lists/* # Sat, 10 Apr 2021 01:57:49 GMT RUN set -ex; if ! 
command -v gpg > /dev/null; then apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; rm -rf /var/lib/apt/lists/*; fi # Sat, 10 Apr 2021 01:58:53 GMT RUN apt-get update && apt-get install -y --no-install-recommends git mercurial openssh-client subversion procps && rm -rf /var/lib/apt/lists/* ``` - Layers: - `sha256:3c6e089fcab6bbe7eaacf21e4d0c43c7da8b8a21cad425a714fbb8d6798ccadc` Last Modified: Sat, 10 Apr 2021 00:58:12 GMT Size: 52.4 MB (52401447 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:98ee67814b7dcf0dbbde45ef110cd1fbd3dc8c7c55ffa598f8877464399c4a6a` Last Modified: Sat, 10 Apr 2021 02:17:23 GMT Size: 5.1 MB (5061154 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:37b1690c9d4b9ea482011659d97a74e3b027cd75da0bee59c2fb7cf804c749aa` Last Modified: Sat, 10 Apr 2021 02:17:29 GMT Size: 10.6 MB (10569852 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:a97efcbda3f28c049d797110675f9955cc21e80b07bfcd8bc66e52d49802739a` Last Modified: Sat, 10 Apr 2021 02:17:54 GMT Size: 52.3 MB (52314013 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `buildpack-deps:testing-scm` - linux; arm variant v7 ```console $ docker pull buildpack-deps@sha256:4d42e40a5aeebd099a59b9532641fdf18084c0a4e69d7db01872a607322290e4 ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **115.5 MB (115537221 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:9fd679ec1a51c5e60ec3ef8c9926f411d290f511fd775c27646fe4bb4fadb96f` - Default Command: `["bash"]` ```dockerfile # Sat, 10 Apr 2021 00:58:24 GMT ADD file:5e17f4d5cdf1ff091554ccfa33e22ab2be0c278b0cec1c11b45333deda2cfc79 in / # Sat, 10 Apr 2021 00:58:24 GMT CMD ["bash"] # Sat, 10 Apr 2021 06:01:28 GMT RUN set -eux; apt-get update; apt-get install -y --no-install-recommends ca-certificates curl netbase wget ; rm -rf /var/lib/apt/lists/* # Sat, 10 Apr 2021 06:01:43 GMT RUN set -ex; if ! 
command -v gpg > /dev/null; then apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; rm -rf /var/lib/apt/lists/*; fi # Sat, 10 Apr 2021 06:02:35 GMT RUN apt-get update && apt-get install -y --no-install-recommends git mercurial openssh-client subversion procps && rm -rf /var/lib/apt/lists/* ``` - Layers: - `sha256:df95183d3a18fe92c278757657c6fef8fcc11f2a9a578df2f2b00dbccf0a8ea6` Last Modified: Sat, 10 Apr 2021 01:06:36 GMT Size: 50.1 MB (50070347 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:d156c39bdbf88c4fe691e2b8db9d8884ace98a295e72db7f8f2c7a7b09d88301` Last Modified: Sat, 10 Apr 2021 06:18:29 GMT Size: 4.9 MB (4920554 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:3c5c1a00fb4cd0a81f72ea5458a8eb52ee825d2ec64b5b3d6324e8fe844eed2a` Last Modified: Sat, 10 Apr 2021 06:18:30 GMT Size: 10.2 MB (10218022 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:88f63a9b9b58ff5c896075723b17ad5cc2bd3e1daada4e1379a89a0b57120ce9` Last Modified: Sat, 10 Apr 2021 06:18:54 GMT Size: 50.3 MB (50328298 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `buildpack-deps:testing-scm` - linux; arm64 variant v8 ```console $ docker pull buildpack-deps@sha256:08b6a175308caf77256b1b30be4e60cc7424c560467c95950aa529937ed7b1e4 ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **124.2 MB (124229613 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:9890482767a33f5d48373023c29383d2bbb6cd185365cd559de91f38e2de7feb` - Default Command: `["bash"]` ```dockerfile # Sat, 10 Apr 2021 00:40:14 GMT ADD file:e1b7ed0c35932136d6c29369c3eb387896a482b3b227f18462715ed1690af4d4 in / # Sat, 10 Apr 2021 00:40:17 GMT CMD ["bash"] # Sat, 10 Apr 2021 01:43:23 GMT RUN set -eux; apt-get update; apt-get install -y --no-install-recommends ca-certificates curl netbase wget ; rm -rf /var/lib/apt/lists/* # Sat, 10 Apr 2021 01:43:36 GMT RUN set -ex; if ! 
command -v gpg > /dev/null; then apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; rm -rf /var/lib/apt/lists/*; fi # Sat, 10 Apr 2021 01:44:13 GMT RUN apt-get update && apt-get install -y --no-install-recommends git mercurial openssh-client subversion procps && rm -rf /var/lib/apt/lists/* ``` - Layers: - `sha256:a5ad85c1142d6ba07dd2031cb0c6c7513422a29a4e0446b232121280077ee9eb` Last Modified: Sat, 10 Apr 2021 00:46:54 GMT Size: 53.6 MB (53555409 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:c1cf7b2e11a6c2d24640e32bab162f44730583fd12321a0b43f568de4528c6a0` Last Modified: Sat, 10 Apr 2021 01:59:38 GMT Size: 5.1 MB (5140721 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:2da257970ac9fb41c862a9ea5857f77aa158778d568d6766498b801a239be01e` Last Modified: Sat, 10 Apr 2021 01:59:39 GMT Size: 10.9 MB (10867421 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:607f59eaee90bc16e7091c941cb4640481bec283086c165c038bc666c6072d4c` Last Modified: Sat, 10 Apr 2021 02:00:00 GMT Size: 54.7 MB (54666062 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `buildpack-deps:testing-scm` - linux; 386 ```console $ docker pull buildpack-deps@sha256:425f07a543fcc06f8b48fe4b169f4bb4966e4987f8f4a91df35ba31accd802c7 ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **128.3 MB (128319506 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:79e7190569c06cb269bc6325858a9bcc24772b24b372032b5ca4bf1cfa9f3749` - Default Command: `["bash"]` ```dockerfile # Sat, 10 Apr 2021 00:38:57 GMT ADD file:72806e483423c962f867acf22783e8b91aa9d8486d1d35505eaa5442df41be57 in / # Sat, 10 Apr 2021 00:38:58 GMT CMD ["bash"] # Sat, 10 Apr 2021 03:15:26 GMT RUN set -eux; apt-get update; apt-get install -y --no-install-recommends ca-certificates curl netbase wget ; rm -rf /var/lib/apt/lists/* # Sat, 10 Apr 2021 03:15:38 GMT RUN set -ex; if ! 
command -v gpg > /dev/null; then apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; rm -rf /var/lib/apt/lists/*; fi # Sat, 10 Apr 2021 03:16:12 GMT RUN apt-get update && apt-get install -y --no-install-recommends git mercurial openssh-client subversion procps && rm -rf /var/lib/apt/lists/* ``` - Layers: - `sha256:c73c775bc05dae13d9e00c5c3e6660d213997be492a06abcfe494e5fbfe97f21` Last Modified: Sat, 10 Apr 2021 00:44:36 GMT Size: 55.9 MB (55881380 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:fc9e0bd737226fabb79b27a428df2d8a89fad1d3d4380eef8f36ab1540f975ac` Last Modified: Sat, 10 Apr 2021 03:27:53 GMT Size: 5.3 MB (5280440 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:dc93fbf8ae86f8ca0c477fd1e852305a2a6b5b121a2a0d8a6c785ab27d113805` Last Modified: Sat, 10 Apr 2021 03:27:55 GMT Size: 11.2 MB (11248838 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:1e044730d7057c92a5b5235d54b58ae32454dfaf5781ca5e0f7dd728dae5dfe2` Last Modified: Sat, 10 Apr 2021 03:28:28 GMT Size: 55.9 MB (55908848 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `buildpack-deps:testing-scm` - linux; mips64le ```console $ docker pull buildpack-deps@sha256:6e4f626ec7ae98c033861c9fbdc0af9dadb33f6b799bf07a7f6429e72c0ef23c ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **122.4 MB (122410242 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:59ec203a1937a34c483e08fc5355004817d8421d4265b1c0eb648cfdf6ef0b6d` - Default Command: `["bash"]` ```dockerfile # Sat, 10 Apr 2021 01:08:30 GMT ADD file:05ecd203b7e6783fda8a687a6d061102831b24ebff7441f9b8bd407ea76580fb in / # Sat, 10 Apr 2021 01:08:31 GMT CMD ["bash"] # Sat, 10 Apr 2021 02:02:45 GMT RUN set -eux; apt-get update; apt-get install -y --no-install-recommends ca-certificates curl netbase wget ; rm -rf /var/lib/apt/lists/* # Sat, 10 Apr 2021 02:03:06 GMT RUN set -ex; if ! 
command -v gpg > /dev/null; then apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; rm -rf /var/lib/apt/lists/*; fi # Sat, 10 Apr 2021 02:04:07 GMT RUN apt-get update && apt-get install -y --no-install-recommends git mercurial openssh-client subversion procps && rm -rf /var/lib/apt/lists/* ``` - Layers: - `sha256:f35fc328cfb1004badf99abc0ffb0868b98173f9361dc1a1a0fb16c971579044` Last Modified: Sat, 10 Apr 2021 01:13:59 GMT Size: 53.1 MB (53127395 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:690bee2c409ab17046d8c4a0498a7accd7ca61857eedeb2b607fe4cdf23add4a` Last Modified: Sat, 10 Apr 2021 02:15:27 GMT Size: 5.1 MB (5113022 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:5f235b32aa2551623bc33d129d9bf32cc4117b27ebeb38e2cf6a1908bf1c17b3` Last Modified: Sat, 10 Apr 2021 02:15:29 GMT Size: 10.9 MB (10870293 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:ef31236e593348d8cd737733087e3c6eeb0c8440f32d9c3b46597be1b4e27dba` Last Modified: Sat, 10 Apr 2021 02:16:18 GMT Size: 53.3 MB (53299532 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `buildpack-deps:testing-scm` - linux; ppc64le ```console $ docker pull buildpack-deps@sha256:129ac163986f1d7c1a449d132ba28b99270e0a264a65758d649a386baf118b5c ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **134.6 MB (134649324 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:a278395be452d203838d7f68ce0976bc81f8f2fd06ed7a50e6fce60c200837da` - Default Command: `["bash"]` ```dockerfile # Sat, 10 Apr 2021 01:24:22 GMT ADD file:c677239aef001babcb1663e8f2e9aadd81b7da6c581bb55368efc93625140098 in / # Sat, 10 Apr 2021 01:24:32 GMT CMD ["bash"] # Sat, 10 Apr 2021 01:54:37 GMT RUN set -eux; apt-get update; apt-get install -y --no-install-recommends ca-certificates curl netbase wget ; rm -rf /var/lib/apt/lists/* # Sat, 10 Apr 2021 01:56:35 GMT RUN set -ex; if ! 
command -v gpg > /dev/null; then apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; rm -rf /var/lib/apt/lists/*; fi # Sat, 10 Apr 2021 02:00:38 GMT RUN apt-get update && apt-get install -y --no-install-recommends git mercurial openssh-client subversion procps && rm -rf /var/lib/apt/lists/* ``` - Layers: - `sha256:ddef5f25272167da6de06606c43b7762709f7f2d95d2624ba3d2adafbd2d13d5` Last Modified: Sat, 10 Apr 2021 01:32:37 GMT Size: 58.8 MB (58783245 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:04e4e8fb057889bcb16e3c8b7bc0f46ac465eaad43d9ecaee96fe62ecab47f05` Last Modified: Sat, 10 Apr 2021 03:03:04 GMT Size: 5.4 MB (5399544 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:a2dbcd738d9aa03edf4d9d84dad745a9d48e9f2d3eb4febce8ec5aea58e4be58` Last Modified: Sat, 10 Apr 2021 03:03:04 GMT Size: 11.6 MB (11619570 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:17976d558c617922225b59acaffe6c7bfb5a5777d4e19401af2bbc1479562c32` Last Modified: Sat, 10 Apr 2021 03:03:28 GMT Size: 58.8 MB (58846965 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `buildpack-deps:testing-scm` - linux; s390x ```console $ docker pull buildpack-deps@sha256:6f654998d1d53fc6c3536b199dfd2d595f3cb586ae0cd8d2b2e1f0c5f99ba921 ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **123.1 MB (123079535 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:a2d04d8d789ab03f48a29d4b8b0e5cd2e9be66e8809a67b257dd1bb1958e9ea3` - Default Command: `["bash"]` ```dockerfile # Sat, 10 Apr 2021 00:41:38 GMT ADD file:5e12a744308594a409852feb52fdce1bafd5db69b42d23f30da5fd5da36c7900 in / # Sat, 10 Apr 2021 00:41:42 GMT CMD ["bash"] # Sat, 10 Apr 2021 01:25:26 GMT RUN set -eux; apt-get update; apt-get install -y --no-install-recommends ca-certificates curl netbase wget ; rm -rf /var/lib/apt/lists/* # Sat, 10 Apr 2021 01:25:35 GMT RUN set -ex; if ! command -v gpg > /dev/null; then apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; rm -rf /var/lib/apt/lists/*; fi # Sat, 10 Apr 2021 01:25:59 GMT RUN apt-get update && apt-get install -y --no-install-recommends git mercurial openssh-client subversion procps && rm -rf /var/lib/apt/lists/* ``` - Layers: - `sha256:22dffdd7e7137fea3c673394c15d7ec19300c9ca95cda64cb366ed192a6a2632` Last Modified: Sat, 10 Apr 2021 00:45:03 GMT Size: 53.1 MB (53148269 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:08db5926e54bf02d4a1b97c8037cb92e495f9b52afc71b9b1b79693fcbe59cd2` Last Modified: Sat, 10 Apr 2021 01:32:44 GMT Size: 5.1 MB (5134073 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:128ec9d2e5826957253eb7a1db9732773e779280b04208870af23a99ed5a26df` Last Modified: Sat, 10 Apr 2021 01:32:44 GMT Size: 10.8 MB (10758474 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:a40ef89f997c44974926e4fae492c195e0b73cf29dd614681b270cc7c5def582` Last Modified: Sat, 10 Apr 2021 01:33:01 GMT Size: 54.0 MB (54038719 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
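
As a quick sanity check on the numbers above (an editorial aside, not part of the generated listing): the "Total Size" reported for each platform is simply the sum of the compressed layer sizes listed beneath it. The short Python sketch below re-adds the `linux; amd64` layer sizes copied verbatim from the listing and compares them with the stated total; nothing beyond those published numbers is assumed.

```python
# Sanity check: the advertised "Total Size" of the amd64 image should equal
# the sum of its compressed layer sizes (values copied from the listing above).
amd64_layer_sizes = {
    "sha256:cf7e31f852204930ef60bd4c12f9606812c0b9ba6235e2e46e1a5900f2db9d30": 54868257,
    "sha256:3e6dc39acac51e10c09aed5f7835e7a99a2a9680be75d2352924fdee6a2f744f": 5151329,
    "sha256:c450329263cd7b5d35ad44c880afbb2268b05ee361a4ffb617cc58d422bec81d": 10867006,
    "sha256:f7ac119459a7d1a89ed94746e1639c3afde989102e0ea3a2ed86a6809bedc599": 54564714,
}

stated_total = 125451306  # "Total Size" reported for linux; amd64

computed_total = sum(amd64_layer_sizes.values())
print(f"computed: {computed_total} bytes, stated: {stated_total} bytes")
assert computed_total == stated_total  # both are 125,451,306 bytes
```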
47.111111
173
0.740796
yue_Hant
0.121732
f4adb811186a1d628d7e81cb315758b0d9b0753e
3,400
md
Markdown
docs/outlook/mapi/ipersistmessage-getlasterror.md
MicrosoftDocs/office-developer-client-docs.pt-BR
7b878649ac5b6a1113bf06b099adffa17c956c16
[ "CC-BY-4.0", "MIT" ]
2
2020-05-19T18:52:31.000Z
2021-04-21T00:13:46.000Z
docs/outlook/mapi/ipersistmessage-getlasterror.md
MicrosoftDocs/office-developer-client-docs.pt-BR
7b878649ac5b6a1113bf06b099adffa17c956c16
[ "CC-BY-4.0", "MIT" ]
4
2021-12-08T02:35:59.000Z
2021-12-08T02:53:43.000Z
docs/outlook/mapi/ipersistmessage-getlasterror.md
MicrosoftDocs/office-developer-client-docs.pt-BR
7b878649ac5b6a1113bf06b099adffa17c956c16
[ "CC-BY-4.0", "MIT" ]
2
2019-10-13T18:19:41.000Z
2021-11-25T00:39:27.000Z
---
title: IPersistMessageGetLastError
manager: soliver
ms.date: 11/16/2014
ms.audience: Developer
ms.topic: reference
ms.prod: office-online-server
ms.localizationpriority: medium
api_name:
- IPersistMessage.GetLastError
api_type:
- COM
ms.assetid: 32cc3a1f-1310-4788-b0f4-93c1e4940f37
description: 'Last modified: July 23, 2011'
ms.openlocfilehash: b82c42eb969c808810a571bd408e7c17b4f25cfe
ms.sourcegitcommit: a1d9041c20256616c9c183f7d1049142a7ac6991
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 09/24/2021
ms.locfileid: "59604765"
---
# <a name="ipersistmessagegetlasterror"></a>IPersistMessage::GetLastError

**Applies to**: Outlook 2013 | Outlook 2016

Returns a [MAPIERROR structure](mapierror.md) that contains information about the previous error that occurred in the form object.

```cpp
HRESULT GetLastError(
  HRESULT hResult,
  ULONG ulFlags,
  LPMAPIERROR FAR * lppMAPIError
);
```

## <a name="parameters"></a>Parameters

_hResult_

> [in] An HRESULT data type that contains the error value generated in the previous method call.

_ulFlags_

> [in] A bitmask of flags that controls the type of strings returned. The following flag can be set:

MAPI_UNICODE

> The strings in the [MAPIERROR structure](mapierror.md) returned in the _lppMAPIError_ parameter are in Unicode format. If the MAPI_UNICODE flag is not set, the strings are in ANSI format.

_lppMAPIError_

> [out] A pointer to a pointer to a **MAPIERROR** structure that contains version, component, and context information for the error. The _lppMAPIError_ parameter can be set to NULL if the form cannot supply information appropriate for a **MAPIERROR** structure.

## <a name="return-value"></a>Return value

S_OK

> The call succeeded and returned the expected value or values.

MAPI_E_BAD_CHARWIDTH

> Either the MAPI_UNICODE flag was set and the address book provider does not support Unicode, or MAPI_UNICODE was not set and the address book provider supports only Unicode.

## <a name="remarks"></a>Remarks

Form objects implement the **IPersistMessage::GetLastError** method to provide information about a previous method call that failed. Form viewers can give users detailed information about the error by including the data from the [MAPIERROR](mapierror.md) structure in a dialog box.

A call to **GetLastError** does not affect the state of the form. When **GetLastError** returns, the form remains in the state it was in before the call was made.

## <a name="notes-to-callers"></a>Notes to callers

You can use the **MAPIERROR** structure, if the form supplies one, pointed to by the _lppMAPIError_ parameter only if **GetLastError** returns S_OK. Sometimes the form cannot determine what the last error was, or has nothing more to report about the error. In that situation, the form returns a pointer to NULL in _lppMAPIError_ instead.

For more information about the **GetLastError** method, see [MAPI Extended Errors](mapi-extended-errors.md).

## <a name="see-also"></a>See also

[MAPIERROR](mapierror.md)

[MAPIFreeBuffer](mapifreebuffer.md)

[IPersistMessage : IUnknown](ipersistmessageiunknown.md)
38.636364
349
0.774118
por_Latn
0.997392
f4ae2443d4705dab49feefcf0fc8e259374119fc
1,305
md
Markdown
articles/azure-functions/functions-bindings-event-hubs-trigger.md
flarocca/azure-docs.es-es
8d69748012641d57ddb2b81a3e1c2d079703ed8d
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-functions/functions-bindings-event-hubs-trigger.md
flarocca/azure-docs.es-es
8d69748012641d57ddb2b81a3e1c2d079703ed8d
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-functions/functions-bindings-event-hubs-trigger.md
flarocca/azure-docs.es-es
8d69748012641d57ddb2b81a3e1c2d079703ed8d
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Azure Event Hubs trigger for Azure Functions
description: Learn how to use the Azure Event Hubs trigger in Azure Functions.
author: craigshoemaker
ms.assetid: daf81798-7acc-419a-bc32-b5a41c6db56b
ms.topic: reference
ms.date: 02/21/2020
ms.author: cshoe
ms.openlocfilehash: 72312df41a74ac8f2321b31287cbb3cd87d1a04b
ms.sourcegitcommit: 3d79f737ff34708b48dd2ae45100e2516af9ed78
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 07/23/2020
ms.locfileid: "87041745"
---
# <a name="azure-event-hubs-bindings-for-azure-functions"></a>Azure Event Hubs bindings for Azure Functions

This article explains how to use the [Azure Event Hubs](../event-hubs/event-hubs-about.md) trigger for Azure Functions. Azure Functions supports both the trigger and the [output binding](functions-bindings-event-hubs-output.md) for Event Hubs.

For information on setup and configuration details, see the [overview](functions-bindings-event-hubs.md).

[!INCLUDE [functions-bindings-event-hubs-trigger](../../includes/functions-bindings-event-hubs-trigger.md)]

## <a name="next-steps"></a>Next steps

- [Write events to an event stream (output binding)](./functions-bindings-storage-blob-output.md)
48.333333
256
0.801533
spa_Latn
0.458667
f4af908bfa3c29bc49dcf3dc77df003e92d46fa3
11,080
markdown
Markdown
fiddles/python/fiddle-0009-CourseraDL/README.markdown
oneorthomedical/house
03bc23075f4d7a18362f596f96fabddcb237af30
[ "MIT" ]
48
2016-01-06T14:34:26.000Z
2021-07-14T16:10:36.000Z
fiddles/python/fiddle-0009-CourseraDL/README.markdown
oneorthomedical/house
03bc23075f4d7a18362f596f96fabddcb237af30
[ "MIT" ]
311
2016-05-28T12:35:22.000Z
2022-03-25T14:57:06.000Z
fiddles/python/fiddle-0009-CourseraDL/README.markdown
oneorthomedical/house
03bc23075f4d7a18362f596f96fabddcb237af30
[ "MIT" ]
33
2015-11-07T06:39:17.000Z
2020-12-22T18:59:14.000Z
fiddle-0009-CourseraDL ====== ### Title fiddle-0009-CourseraDL ### Creation Date 06-17-16 ### Location Chicago, IL ### Issue [Issue 33](https://github.com/bradyhouse/house/issues/33) ### Description [Coursera is removing 472 free online courses from the internet on June 30th ...](https://medium.freecodecamp.com/the-day-472-free-online-courses-will-vanish-from-the-internet-3060bb4e9704#.n24mk56x2). If you skim through this post, you will see that it mentions a python script that can be used to download class videos. At the same time, there are a couple free classes that look interesting. This POC will explore how to use the [coursera-dl](https://github.com/coursera-dl/coursera-dl) to batch download the lecture materials from an online class given at Stanford entitled [Probabilistic Graphical Models](https://class.coursera.org/pgm-003). ### Procedure 1. Install `coursera-dl` using `pip3` pip3 install coursera-dl --user If successful, this should produce the following output: Downloading/unpacking coursera-dl Downloading coursera-dl-0.6.0.tar.gz (49kB): 49kB downloaded Running setup.py (path:/private/var/folders/9r/d_7dtbbj7s14y76lvxjk3hs171n8xq/T/pip_build_bradyhouse/coursera-dl/setup.py) egg_info for package coursera-dl Requirement already satisfied (use --upgrade to upgrade): beautifulsoup4>=4.1.3 in /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages (from coursera-dl) Downloading/unpacking html5lib>=1.0b2 (from coursera-dl) Downloading html5lib-1.0b8.tar.gz (889kB): 889kB downloaded Running setup.py (path:/private/var/folders/9r/d_7dtbbj7s14y76lvxjk3hs171n8xq/T/pip_build_bradyhouse/html5lib/setup.py) egg_info for package html5lib Downloading/unpacking requests>=2.4.3 (from coursera-dl) Downloading requests-2.10.0-py2.py3-none-any.whl (506kB): 506kB downloaded Requirement already satisfied (use --upgrade to upgrade): six>=1.5.0 in /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages (from coursera-dl) Downloading/unpacking urllib3>=1.10 (from coursera-dl) Downloading urllib3-1.16-py2.py3-none-any.whl (98kB): 98kB downloaded Requirement already satisfied (use --upgrade to upgrade): pyasn1>=0.1.7 in /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages (from coursera-dl) Downloading/unpacking keyring>=4.0 (from coursera-dl) Downloading keyring-9.0-py2.py3-none-any.whl Installing collected packages: coursera-dl, html5lib, requests, urllib3, keyring Running setup.py install for coursera-dl Installing coursera-dl script to /Users/bradyhouse/Library/Python/3.4/bin Running setup.py install for html5lib Successfully installed coursera-dl html5lib requests urllib3 keyring Cleaning up... If unsuccessful, then refer to the [coursera-dl setup instructions](https://github.com/coursera-dl/coursera-dl). 2. Create a .netrc file. This file should have the following format: machine coursera-dl login <coursera user login> password <coursera password> 3. 
Enroll in the following coursera classes: * [Social Psychology](https://www.class-central.com/mooc/555/coursera-social-psychology) * [Probabilistic Graphical Models](https://www.class-central.com/mooc/309/coursera-probabilistic-graphical-models) * [Algorithms: Design and Analysis, Part 1](https://www.class-central.com/mooc/374/coursera-algorithms-design-and-analysis-part-1) * [Algorithms: Design and Analysis, Part 2](https://www.class-central.com/mooc/426/coursera-algorithms-design-and-analysis-part-2) * [Introduction to Mathematical Thinking](https://www.class-central.com/mooc/370/coursera-introduction-to-mathematical-thinking) * [Automata](https://www.class-central.com/mooc/376/coursera-automata) * [Mining Massive Dataset](https://www.class-central.com/mooc/2406/coursera-mining-massive-datasets) * [Princeton’s Algorithms, Part 1](https://www.class-central.com/mooc/339/coursera-algorithms-part-i) * [Princeton's Algorithms, Part 2](https://www.class-central.com/mooc/340/coursera-algorithms-part-ii) 4. Execute the `run.sh` script. Once it completes, it should create the following directory structure: . ├── algorithm-design-analysis │   ├── 01_week-1 │   │   ├── 01_i-introduction-week-1 │   │   ├── 02_ii-asymptotic-analysis-week-1 │   │   ├── 03_iii-divide-conquer-algorithms-week-1 │   │   └── 04_problem-set-1 │   ├── 02_week-2 │   │   ├── 01_iv-the-master-method-week-2 │   │   ├── 02_v-quicksort-algorithm-week-2 │   │   ├── 03_vi-quicksort-analysis-week-2 │   │   ├── 04_vii-probability-review-week-2 │   │   └── 05_problem-set-2 │   ├── 03_week-3 │   │   ├── 01_viii-linear-time-selection-week-3 │   │   ├── 02_ix-graphs-and-the-contraction-algorithm-week-3 │   │   └── 03_problem-set-3 │   ├── 04_week-4 │   │   ├── 01_x-graph-search-and-connectivity-week-4 │   │   └── 02_problem-set-4 │   ├── 05_week-5 │   │   ├── 01_xi-dijkstra-s-shortest-path-algorithm-week-5 │   │   ├── 02_xii-heaps-week-5 │   │   ├── 03_xiii-balanced-binary-search-trees-week-5 │   │   └── 04_problem-set-5 │   └── 06_week-6 │   ├── 01_xiv-hashing-the-basics-week-6 │   ├── 02_xv-universal-hashing-week-6 │   ├── 03_xv-bloom-filters-week-6 │   ├── 04_preview-of-part-2 │   └── 05_problem-set-6 ├── algs4partI-010 │   ├── 01_Week_0-__Welcome_to_Algorithms_Part_I_9-22 │   ├── 02_Week_1-__Union-Find_50-54 │   ├── 03_Week_1-__Analysis_of_Algorithms_65-32 │   ├── 04_Week_2-__Stacks_and_Queues_61-00 │   ├── 05_Week_2-__Elementary_Sorts_63-27 │   ├── 06_Week_3-__Mergesort_48-41 │   ├── 07_Week_3-__Quicksort_49-56 │   ├── 08_Week_4-__Priority_Queues_73-35 │   ├── 09_Week_4-__Elementary_Symbol_Tables_77-18 │   ├── 10_Week_5-__Balanced_Search_Trees_63-01 │   ├── 11_Week_5-__Geometric_Applications_of_BSTs_65-41 │   └── 12_Week_6-__Hash_Tables_77-49 ├── algs4partII-007 │   ├── 01_Week_0-__Welcome_to_Algorithms_Part_II │   ├── 02_Week_1-__Undirected_Graphs_97-40 │   ├── 03_Week_1-__Directed_Graphs_67-38 │   ├── 04_Week_2-__Minimum_Spanning_Trees_84-32 │   ├── 05_Week_2-__Shortest_Paths_84-59 │   ├── 06_Week_3-__Maximum_Flow_72-21 │   ├── 07_Week_3-_Radix_Sorts_85-17 │   ├── 08_Week_5-__Tries_75-04 │   ├── 09_Week_5-__Substring_Search_74-56 │   ├── 10_Week_6-__Regular_Expressions_83-35 │   ├── 11_Week_6-__Data_Compression_80-13 │   ├── 12_Week_7-__Reductions_39-39 │   ├── 13_Week_7-__Linear_Programming_optional_61-11 │   └── 14_Week_7-__Intractability_84-47 ├── automata-003 │   ├── 01_Week_1-_Finite_Automata │   ├── 02_Week_2-_Regular_Expression_and_Properties_of_Regular_Languages │   ├── 03_Week_3-_Context-Free_Grammars_and_Pushdown_Automata 
│   ├── 04_Week_4-_Pushdown_Automata_and_Properties_of_Context-Free_Languages │   ├── 05_Week_5-_Turing_Machines_and_Undecidability │   ├── 06_Week_6-_Intractable_Problems_and_NP-completeness │   └── 07_Problem_Session ├── bitcointech-001 │   ├── 01_Introduction │   ├── 02_Lecture_1-__Intro_to_Crypto_and_Cryptocurrencies │   ├── 03_Lecture_2-_How_Bitcoin_Achieves_Decentralization │   ├── 04_Lecture_3-_Mechanics_of_Bitcoin │   ├── 05_Lecture_4-_How_to_Store_and_Use_Bitcoins │   ├── 06_Lecture_5-_Bitcoin_Mining │   ├── 07_Lecture_6-_Bitcoin_and_Anonymity │   ├── 08_Lecture_7-_Community_Politics_and_Regulation │   ├── 09_Lecture_8-_Alternative_Mining_Puzzles │   ├── 10_Lecture_9-_Bitcoin_as_a_Platform │   ├── 11_Lecture_10-_Altcoins_and_the_Cryptocurrency_Ecosystem │   ├── 12_Lecture_11-_The_Future_of_Bitcoin │   └── 13_Bonus_Lecture ├── maththink-006 │   ├── 01_Week_One_Lectures │   ├── 02_Week_Two_Tutorial │   ├── 03_Week_Two_Lectures │   ├── 04_Week_Three_Tutorial │   ├── 05_Week_Three_Lectures │   ├── 06_Week_Four_Tutorial │   ├── 07_Week_Four_Lectures │   ├── 08_Week_Five_Tutorial │   ├── 09_Week_Five_Lectures │   ├── 10_Week_Six_Tutorial │   ├── 11_Week_Six_Lectures │   ├── 12_Week_Seven_Tutorial │   ├── 13_Week_Seven_Lectures │   ├── 14_Week_Eight_Tutorial │   ├── 15_Week_Eight_Lectures │   ├── 16_Week_Nine_Tutorial │   └── 17_Supplementary_Videos ├── mmds-002 │   ├── 01_Week_1_Materials │   ├── 02_Week_2_Materials │   ├── 03_Week_3_Materials │   ├── 04_Week_4_Materials │   ├── 05_Week_5_Materials │   ├── 06_Week_6_Materials │   └── 07_Week_7_Materials ├── pgm-003 │   ├── 01_Introduction_and_Overview_Week_1 │   ├── 02_Bayesian_Network_Fundamentals_Week_1 │   ├── 03_Template_Models_Week_1 │   ├── 04_ML-class_Octave_Tutorial_Week_1_Optional │   ├── 05_Structured_CPDs_Week_2 │   ├── 06_Markov_Network_Fundamentals_Week_2 │   ├── 07_Representation_Wrapup-_Knowledge_Engineering_Week_3 │   ├── 08_Inference-_Variable_Elimination_Week_3 │   ├── 09_Inference-_Belief_Propagation_Part_1_Week_3 │   ├── 10_Inference-_Belief_Propagation_Part_2_Week_4 │   ├── 11_Inference-_MAP_Estimation_Part_1_Week_4 │   ├── 12_Inference-_MAP_Estimation_Part_2_Week_5 │   ├── 13_Inference-_Sampling_Methods_Week_5 │   ├── 14_Inference-_Temporal_Models_and_Wrap-up_Week_6 │   ├── 15_Decision_Theory_Week_6 │   ├── 16_ML-class_Revision_Week_6_Optional │   ├── 17_Learning-_Overview_Week_6 │   ├── 18_Learning-_Parameter_Estimation_in_BNs_Week_7 │   ├── 19_Learning-_Parameter_Estimation_in_MNs_Week_7 │   ├── 20_Structure_Learning_Week_8 │   ├── 21_Learning_With_Incomplete_Data_Week_9 │   ├── 22_Learning-_Wrapup_Week_9 │   └── 23_Summary_Week_9 └── socialpsychology-002 ├── 01_WEEK_1-_Social_Perceptions_and_Misperceptions ├── 02_WEEK_2-_The_Psychology_of_Self-Presentation_and_Persuasion ├── 03_WEEK_3-_Obedience_Conformity_and_Deindividuation ├── 04_WEEK_4-_Group_Behavior-_The_Good_Bad_and_Ugly ├── 05_WEEK_5-_Mid-Course_Break ├── 06_WEEK_6-_Conflict_Peacemaking_and_Intervention └── 07_WEEK_7-_A_Happy_End_to_the_Course 138 directories ### Tags python, pip3, coursera-dl
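
The `run.sh` script referenced in step 4 is not reproduced here, but conceptually it just loops over the class identifiers shown in the directory tree above and hands each one to `coursera-dl`. The sketch below is a hypothetical stand-in, not the actual script: the class slugs are taken from the listing above, while the `-n` (read credentials from `.netrc`) and `--path` flags are assumptions based on coursera-dl's documented CLI and should be checked against `coursera-dl --help`.

```python
# Hypothetical stand-in for run.sh: download each class with coursera-dl.
# Assumes coursera-dl is on PATH and credentials live in ~/.netrc (step 2).
import subprocess

# Class identifiers as they appear in the directory tree above.
classes = [
    "algs4partI-010", "algs4partII-007", "automata-003", "bitcointech-001",
    "maththink-006", "mmds-002", "pgm-003", "socialpsychology-002",
]

for name in classes:
    # -n      -> use ~/.netrc for login/password (assumed flag)
    # --path  -> directory to download the class materials into (assumed flag)
    subprocess.run(["coursera-dl", "-n", "--path", ".", name], check=True)
```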
48.596491
650
0.658664
yue_Hant
0.332339
f4afc7c5296cc3179f267560bf6290c55aafd8aa
10,834
md
Markdown
articles/machine-learning/algorithm-module-reference/evaluate-model.md
tsunami416604/azure-docs.hu-hu
aeba852f59e773e1c58a4392d035334681ab7058
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/machine-learning/algorithm-module-reference/evaluate-model.md
tsunami416604/azure-docs.hu-hu
aeba852f59e773e1c58a4392d035334681ab7058
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/machine-learning/algorithm-module-reference/evaluate-model.md
tsunami416604/azure-docs.hu-hu
aeba852f59e773e1c58a4392d035334681ab7058
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: 'Evaluate Model: Module reference'
titleSuffix: Azure Machine Learning
description: Learn how to use the Evaluate Model module in Azure Machine Learning to measure the accuracy of a trained model.
services: machine-learning
ms.service: machine-learning
ms.subservice: core
ms.topic: reference
author: likebupt
ms.author: keli19
ms.date: 07/27/2020
ms.openlocfilehash: 9abf5a17330566aee2414b8499f228d297880cbf
ms.sourcegitcommit: 96918333d87f4029d4d6af7ac44635c833abb3da
ms.translationtype: MT
ms.contentlocale: hu-HU
ms.lasthandoff: 11/04/2020
ms.locfileid: "93323787"
---
# <a name="evaluate-model-module"></a>Evaluate Model module

This article describes a module in Azure Machine Learning designer.

Use this module to measure the accuracy of a trained model. You provide a dataset containing scores generated from a model, and the **Evaluate Model** module computes a set of industry-standard evaluation metrics.

The metrics returned by **Evaluate Model** depend on the type of model that you are evaluating:

- **Classification models**
- **Regression models**
- **Clustering models**

> [!TIP]
> If you are new to model evaluation, we recommend the video series by Dr. Stephen Elston, as part of the [machine learning course](/archive/blogs/machinelearning/new-edx-course-data-science-machine-learning-essentials) from EdX.

## <a name="how-to-use-evaluate-model"></a>How to use Evaluate Model

1. Connect the **Scored dataset** output of [Score Model](./score-model.md) or [Assign Data to Clusters](./assign-data-to-clusters.md) to the left input port of **Evaluate Model**.

   > [!NOTE]
   > If you use modules such as "Select Columns in Dataset" to select part of the input dataset, make sure that the actual label column (used in training), the "Scored Probabilities" column, and the "Scored Labels" column exist, so that metrics such as AUC can be calculated for binary classification.
   > The actual label column and the "Scored Labels" column must exist to calculate metrics for multiclass classification or regression.
   > The "Assignments" column and the "DistancesToClusterCenter no.X" columns (where X is the centroid index, ranging from 0, ..., number of centroids - 1) must exist to calculate metrics for clustering.

   > [!IMPORTANT]
   > + To evaluate the results, the output dataset must contain the specific score column names that meet the requirements of the Score Model module.
   > + The `Labels` column is interpreted as the actual labels.
   > + For a regression task, the dataset to evaluate must have one column, named `Regression Scored Labels`, that represents the scored labels.
   > + For a binary classification task, the dataset to evaluate must have two columns, named `Binary Class Scored Labels` and `Binary Class Scored Probabilities`.
   > + For a multiclass classification task, the dataset to evaluate must have one column, named `Multi Class Scored Labels`, that represents the scored labels.
   > If the output of the upstream module does not have these columns, you need to modify it according to the requirements above.

2. Optionally, connect the **Scored dataset** output of Score Model or Assign Data to Clusters for a second model to the **right** input port of **Evaluate Model**. You can easily compare the results from two different models on the same data. The two input algorithms should be of the same algorithm type. Alternatively, you can compare scores from two different runs of the same algorithm with different parameters.

   > [!NOTE]
   > The algorithm type refers to "Two-Class Classification", "Multiclass Classification", "Regression", and "Clustering" under "Machine Learning Algorithms".

3. Submit the pipeline to generate the evaluation scores.

## <a name="results"></a>Results

After you run **Evaluate Model**, select the module and open the **Evaluate Model** navigation pane on the right. Then choose the **Outputs + Logs** tab; in that tab, the **Data Outputs** section shows several icons. The **Visualize** icon has a bar-chart image and is a first way to look at the results.

For binary classification, you can select the **Visualize** icon to view the binary confusion matrix. For multiclass classification, you can find the confusion matrix plot file in the **Outputs and logs** tab, like this:

> [!div class="mx-imgBorder"]
> ![Preview of uploaded image](media/module/multi-class-confusion-matrix.png)

If you connect datasets to both inputs of **Evaluate Model**, the results will contain metrics for both datasets, or for both models. The model or data attached to the left port is presented first in the report, followed by the metrics for the dataset or model attached to the right port.

For example, the following image shows a comparison of the results of two clustering models that were built on the same data, but with different parameters.

![Comparing2Models](media/module/evaluate-2-models.png)

Because this is a clustering model, the evaluation results are different from what you would see when comparing scores from two regression models or two classification models. However, the overall presentation is the same.

## <a name="metrics"></a>Metrics

This section describes the metrics returned for the specific types of models supported for use with **Evaluate Model**:

+ [classification models](#metrics-for-classification-models)
+ [regression models](#metrics-for-regression-models)
+ [clustering models](#metrics-for-clustering-models)

### <a name="metrics-for-classification-models"></a>Metrics for classification models

The following metrics are reported when evaluating binary classification models.

- **Accuracy** measures the goodness of a classification model as the proportion of true results to total cases.
- **Precision** is the proportion of true results over all positive results. Precision = TP/(TP+FP)
- **Recall** is the fraction of the total amount of relevant instances that were actually retrieved. Recall = TP/(TP+FN)
- **F1 score** is computed as the weighted average of precision and recall between 0 and 1, where the ideal F1 score value is 1.
- **AUC** measures the area under the curve plotted with true positives on the y axis and false positives on the x axis. This metric is useful because it provides a single number that lets you compare models of different types.

### <a name="metrics-for-regression-models"></a>Metrics for regression models

The metrics returned for regression models are designed to estimate the amount of error. A model is considered to fit the data well if the difference between observed and predicted values is small. However, looking at the pattern of the residuals (the difference between any one predicted point and its corresponding actual value) can tell you a lot about potential bias in the model.

The following metrics are reported for evaluating regression models.

- **Mean absolute error (MAE)** measures how close the predictions are to the actual outcomes; a lower score is better.
- **Root mean squared error (RMSE)** creates a single value that summarizes the error in the model. By squaring the difference, the metric disregards the difference between over-prediction and under-prediction.
- **Relative absolute error (RAE)** is the relative absolute difference between expected and actual values; it is relative because the mean difference is divided by the arithmetic mean.
- **Relative squared error (RSE)** similarly normalizes the total squared error of the predicted values by dividing by the total squared error of the actual values.
- **Coefficient of determination**, often referred to as R<sup>2</sup>, represents the predictive power of the model as a value between 0 and 1. Zero means the model is random (explains nothing); 1 means there is a perfect fit. However, caution should be used in interpreting R<sup>2</sup> values, as low values can be entirely normal and high values can be suspect.

### <a name="metrics-for-clustering-models"></a>Metrics for clustering models

Because clustering models differ significantly from classification and regression models, [Evaluate Model](evaluate-model.md) also returns a different set of statistics for clustering models.

The statistics returned for a clustering model describe how many data points were assigned to each cluster, the amount of separation between clusters, and how tightly the data points are bunched within each cluster.

The statistics for the clustering model are averaged over the entire dataset, with additional rows containing the statistics per cluster.

The following metrics are reported for evaluating clustering models.

- The scores in the column, **Average Distance to Other Center**, represent how close, on average, each point in the cluster is to the centroids of all other clusters.
- The scores in the column, **Average Distance to Cluster Center**, represent the closeness of all points in a cluster to the centroid of that cluster.
- The **Number of Points** column shows how many data points were assigned to each cluster, along with the total overall number of data points in any cluster. If the number of data points assigned to clusters is less than the total number of data points available, it means that the data points could not be assigned to a cluster.
- The scores in the column, **Maximal Distance to Cluster Center**, represent the maximum of the distances between each point and the centroid of that point's cluster. If this number is high, it can mean that the cluster is widely dispersed. Review this statistic together with **Average Distance to Cluster Center** to determine the cluster's spread.
- The **Combined Evaluation** score at the bottom of each section of results lists the averaged scores for the clusters created in that particular model.
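
As an editorial aside (not part of the original module reference), the binary-classification definitions in the metrics section above translate directly into a few lines of Python. The confusion-matrix counts in this sketch are invented purely for illustration.

```python
# Toy illustration of the binary-classification metrics defined above.
# The TP/FP/TN/FN counts are made up for the example.
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    accuracy = (tp + tn) / (tp + fp + tn + fn)            # true results over all cases
    precision = tp / (tp + fp)                            # Precision = TP / (TP + FP)
    recall = tp / (tp + fn)                               # Recall    = TP / (TP + FN)
    f1 = 2 * precision * recall / (precision + recall)    # harmonic mean of the two
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

print(classification_metrics(tp=80, fp=10, tn=95, fn=15))
# accuracy = 0.875, precision ~ 0.889, recall ~ 0.842, f1 ~ 0.865
```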
## <a name="next-steps"></a>Next steps

See the [set of modules available](module-reference.md) to Azure Machine Learning.
74.717241
533
0.79389
hun_Latn
1.00001
f4b07a6a71b63cd48fa3cbe798a60a994cdea27c
190
md
Markdown
CHANGELOG.md
hughbris/grav-plugin-cloud-stash
94078ec02b457517c4615bdef4b6cc0f90383220
[ "MIT" ]
1
2019-10-16T07:02:28.000Z
2019-10-16T07:02:28.000Z
CHANGELOG.md
hughbris/grav-plugin-cloud-stash
94078ec02b457517c4615bdef4b6cc0f90383220
[ "MIT" ]
3
2019-08-22T20:56:30.000Z
2021-09-21T11:05:28.000Z
CHANGELOG.md
hughbris/grav-plugin-cloud-stash
94078ec02b457517c4615bdef4b6cc0f90383220
[ "MIT" ]
1
2019-08-22T10:51:08.000Z
2019-08-22T10:51:08.000Z
# v0.1.0 ## 21-09-2021 1. [](#improved) * Support for other S3-compatible stash services (#7) # v0.1.0.a ## 21-09-2021 1. [](#fixed) * Forgotten merge of aforepromised features
15.833333
57
0.621053
eng_Latn
0.800137
f4b1653d35cecd0fed50007ad0459350f757781f
3,788
md
Markdown
Teams/dynamic-memberships.md
afilosa/OfficeDocs-SkypeForBusiness
80ece4f981bf33252ab8010785c25d5177aa2847
[ "CC-BY-4.0", "MIT" ]
null
null
null
Teams/dynamic-memberships.md
afilosa/OfficeDocs-SkypeForBusiness
80ece4f981bf33252ab8010785c25d5177aa2847
[ "CC-BY-4.0", "MIT" ]
null
null
null
Teams/dynamic-memberships.md
afilosa/OfficeDocs-SkypeForBusiness
80ece4f981bf33252ab8010785c25d5177aa2847
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Overview of dynamic membership for teams author: SerdarSoysal ms.author: serdars manager: serdars ms.reviewer: kblevens, phlouie ms.topic: conceptual ms.service: msteams audience: admin search.appverid: MET150 description: Learn how Microsoft Teams supports teams associated with Microsoft 365 groups by using dynamic membership. f1.keywords: - NOCSH localization_priority: Normal ms.custom: seo-marvel-apr2020 ms.collection: - M365-collaboration appliesto: - Microsoft Teams --- # Overview of dynamic membership for teams Microsoft Teams supports teams associated with Microsoft 365 groups by using *dynamic membership*. Dynamic membership enables the membership of a team to be defined by one or more rules that check for certain user attributes in Azure Active Directory (Azure AD). Users are automatically added or removed to the correct teams as user attributes change or users join and leave the tenant. With dynamic membership you can set up teams for certain cohorts of users in your organization. Possible scenarios include: - A hospital can create distinct teams for nurses, doctors, and surgeons to broadcast communications. This is especially important if the hospital relies on temp employees. - A university can create a team for all faculty within a particular college, including an adjunct faculty that changes frequently. - An airline wants to create a team for each flight (like a Tuesday afternoon non-stop from Chicago to Atlanta) and have a frequently changing flight crew automatically assigned or removed as needed.​ Using this feature, a given team's members update automatically based on a specific set of criteria, instead of manually managing membership.​ Doing this requires Azure AD Premium P1 licenses and team membership can be [assigned by a tenant admin](/azure/active-directory/users-groups-roles/groups-dynamic-membership) to any user's Azure AD properties provided you have a tenant and an admin account​. Microsoft Teams may take anywhere from a few minutes to up to 2 hours to reflect dynamic membership changes once they take effect in the Microsoft 365 group for a team. > [!NOTE] > - Rules can define who is a team member, but not who is a team owner. > - See [Limits and specifications for Microsoft Teams](limits-specifications-teams.md) for current limits on team and channel sizes. > - Owners will not be able to add or remove users as members of the team, since members are defined by dynamic group rules. > - Members will not be able to leave teams backed by dynamic groups. ## Creating and managing a Microsoft 365 group with dynamic membership While logged in as the tenant admin, follow the instructions in [Create a dynamic group and check status](/azure/active-directory/users-groups-roles/groups-create-rule). As needed, refer to [Dynamic membership rules for groups in Azure Active Directory](/azure/active-directory/users-groups-roles/groups-dynamic-membership). ## Create a new team with your Microsoft 365 group Now allow time for the membership changes to take effect, and create a new team as described in [Create a team from an existing group](https://support.microsoft.com/en-us/office/create-a-team-from-an-existing-group-24ec428e-40d7-4a1a-ab87-29be7d145865). ## Apply dynamic membership to an existing team You can also take an existing team and change it to have a dynamic membership, as described in [Change static group membership to dynamic in Azure Active Directory](/azure/active-directory/users-groups-roles/groups-change-type). 
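
The steps linked above are described for the Azure portal, but the same kind of dynamic Microsoft 365 group can also be created programmatically before a team is built on top of it. The sketch below is a minimal, hypothetical example calling Microsoft Graph from Python: the group name, mail nickname, membership rule, and access token are placeholders, and the exact payload shape should be verified against the Microsoft Graph documentation before use.

```python
# Minimal sketch (assumptions noted above): create a Microsoft 365 group with a
# dynamic membership rule via Microsoft Graph; a team can then be created from it.
import requests

ACCESS_TOKEN = "<bearer token with Group.ReadWrite.All>"  # placeholder

group = {
    "displayName": "Nurses",                    # hypothetical group name
    "mailEnabled": True,
    "mailNickname": "nurses",                   # hypothetical mail alias
    "securityEnabled": False,
    "groupTypes": ["Unified", "DynamicMembership"],
    # The rule checks an Azure AD user attribute, as described in this article.
    "membershipRule": '(user.department -eq "Nursing")',
    "membershipRuleProcessingState": "On",
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/groups",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=group,
    timeout=30,
)
resp.raise_for_status()
print("Created dynamic group:", resp.json()["id"])
```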
## Changes in client behavior Once dynamic membership is enabled for a team, Teams clients will no longer allow member management for the team. Options to add members, edit member roles, send and approve join requests, and leave the team are all hidden.
68.872727
401
0.800422
eng_Latn
0.999269
f4b19a73782938a666e451dfd307e49dc6b8cfa2
1,212
md
Markdown
README.md
DforDream/dfordream.github.io
1b7cd4b70b9b069f98e56a4bce79bac136a222b0
[ "MIT" ]
null
null
null
README.md
DforDream/dfordream.github.io
1b7cd4b70b9b069f98e56a4bce79bac136a222b0
[ "MIT" ]
null
null
null
README.md
DforDream/dfordream.github.io
1b7cd4b70b9b069f98e56a4bce79bac136a222b0
[ "MIT" ]
null
null
null
# 大D

Welcome to my personal homepage! Here is a short introduction to myself~

<!-- .slide -->

## Some of my contact details

- Address: **Shenzhen, China**
- Phone: **+86 18720776336**
- Site: **<https://DforDream.github.io>**

<!-- .slide vertical=true -->

- E-mail:
  - **[1500549325@qq.com](mailto:1500549325@qq.com)**
- WeChat: **a1737557226**
- QQ: **1500549325**

<!-- .slide -->

## Some of my projects

<!-- .slide vertical=true -->

<!-- - [jekyll-theme-WuK](https://jekyll-theme-WuK.wu-kan.cn/) - the theme style used by my personal homepage; you are also welcome to use it to [build your own page](https://jekyll-theme-WuK.wu-kan.cn/)~[![Star](https://img.shields.io/github/stars/wu-kan/wu-kan.github.io.svg)](https://github.com/wu-kan/wu-kan.github.io)[![Fork](https://img.shields.io/github/forks/wu-kan/wu-kan.github.io.svg)](https://github.com/wu-kan/wu-kan.github.io/fork) -->
- [A Xiaomi Mall clone built with JS + PHP](https://github.com/DforDream/xiaomi_phpstudy)
- [A NetEase Yanxuan clone built with JS](https://github.com/DforDream/wangyiyx)

<!-- .slide -->

## Some of my skills

<!-- .slide vertical=true -->

- **HTML5**, master
- **CSS3**, master
- **JavaScript**, master
- **jQuery**, master
- **webpack**, master

<!-- .slide vertical=true -->

- **PHP**, Familiar
- **MySQL**, Familiar
- **git**, Familiar
- **Sass/Less**, Familiar
- **Gulp**, Familiar

<!-- .slide -->

## Some of my leadership

TBD
21.642857
316
0.627888
yue_Hant
0.36494
f4b202132ee12b0e66ed6d34aa20c9f101df0c47
710
md
Markdown
_publications/2019-10-01-www_zhu.md
CvvT/CvvT.github.io
0812c7f0eced1c7e0d2330708f63789b21662616
[ "MIT" ]
null
null
null
_publications/2019-10-01-www_zhu.md
CvvT/CvvT.github.io
0812c7f0eced1c7e0d2330708f63789b21662616
[ "MIT" ]
null
null
null
_publications/2019-10-01-www_zhu.md
CvvT/CvvT.github.io
0812c7f0eced1c7e0d2330708f63789b21662616
[ "MIT" ]
null
null
null
--- title: "Shadowblock: A lightweight and stealthy adblocking browser" collection: publications permalink: /publication/2019-10-01-www_zhu excerpt: 'As the popularity of adblocking has soared over the last few years, publishers are increasingly deploying anti-adblocking paywalls that ask users to either disable their adblockers or pay to access content. In this work we propose ShadowBlock, a new Chromium-based adblocking browser that can hide traces of adblocking activities from anti-adblockers as it removes ads from web pages.' date: 2019-5-13 venue: 'The World Wide Web Conference' paperurl: 'http://CvvT.github.io/files/www_zhu.pdf' --- [Download paper here](http://CvvT.github.io/files/www_zhu.pdf)
64.545455
395
0.8
eng_Latn
0.969467
f4b203a903783a549ba751f2aee8fcba413ba6e4
2,309
md
Markdown
covsirphy/07_scenario/new1.md
sunao11/eGov_hourei
0731cfa43ac701ca17bec178f00f68f393efe7c8
[ "Apache-2.0" ]
2
2021-05-28T10:02:10.000Z
2021-06-10T09:02:08.000Z
covsirphy/07_scenario/new1.md
sunao11/eGov_hourei
0731cfa43ac701ca17bec178f00f68f393efe7c8
[ "Apache-2.0" ]
1
2020-09-20T06:12:06.000Z
2020-09-20T06:12:06.000Z
covsirphy/07_scenario/new1.md
sunao11/eGov_hourei
0731cfa43ac701ca17bec178f00f68f393efe7c8
[ "Apache-2.0" ]
2
2021-06-12T08:54:49.000Z
2022-01-06T23:28:13.000Z
| | Start | End | Rt | theta | kappa | rho | sigma | |:----------------|:----------|:----------|------:|------------:|------------:|----------:|-----------:| | ('Main', '0th') | 06Feb2020 | 31Mar2020 | 3.85 | 0.0185683 | 0.000768779 | 0.112377 | 0.027892 | | ('Main', '1st') | 01Apr2020 | 21Apr2020 | 13.53 | 0.00119307 | 0.000954275 | 0.119908 | 0.00789786 | | ('Main', '2nd') | 22Apr2020 | 06Jul2020 | 0.37 | 0.090172 | 0.000858737 | 0.0264027 | 0.0636349 | | ('Main', '3rd') | 07Jul2020 | 23Jul2020 | 1.93 | 0.000466376 | 8.04707e-05 | 0.133382 | 0.0689537 | | ('Main', '4th') | 24Jul2020 | 01Aug2020 | 1.78 | 0.00134973 | 0.000171566 | 0.134725 | 0.0753389 | | ('Main', '5th') | 02Aug2020 | 14Aug2020 | 1.42 | 0.000357401 | 0.000398848 | 0.100129 | 0.070318 | | ('Main', '6th') | 15Aug2020 | 28Aug2020 | 0.82 | 0.000562179 | 0.000899694 | 0.0783473 | 0.0943161 | | ('Main', '7th') | 29Aug2020 | 08Sep2020 | 0.7 | 0.00168729 | 0.00119958 | 0.0616905 | 0.0863032 | | ('Main', '8th') | 09Sep2020 | 19Sep2020 | 0.88 | 0.0139571 | 0.000563158 | 0.0928061 | 0.103367 | | ('Main', '9th') | 20Sep2020 | 03Oct2020 | 0.87 | 0.000463924 | 0.000984493 | 0.0793406 | 0.090479 | | ('New', '0th') | 06Feb2020 | 31Mar2020 | 3.85 | 0.0185683 | 0.000768779 | 0.112377 | 0.027892 | | ('New', '1st') | 01Apr2020 | 21Apr2020 | 13.53 | 0.00119307 | 0.000954275 | 0.119908 | 0.00789786 | | ('New', '2nd') | 22Apr2020 | 06Jul2020 | 0.37 | 0.090172 | 0.000858737 | 0.0264027 | 0.0636349 | | ('New', '3rd') | 07Jul2020 | 23Jul2020 | 1.93 | 0.000466376 | 8.04707e-05 | 0.133382 | 0.0689537 | | ('New', '4th') | 24Jul2020 | 01Aug2020 | 1.78 | 0.00134973 | 0.000171566 | 0.134725 | 0.0753389 | | ('New', '5th') | 02Aug2020 | 14Aug2020 | 1.42 | 0.000357401 | 0.000398848 | 0.100129 | 0.070318 | | ('New', '6th') | 15Aug2020 | 28Aug2020 | 0.82 | 0.000562179 | 0.000899694 | 0.0783473 | 0.0943161 | | ('New', '7th') | 29Aug2020 | 08Sep2020 | 0.7 | 0.00168729 | 0.00119958 | 0.0616905 | 0.0863032 | | ('New', '8th') | 09Sep2020 | 19Sep2020 | 0.88 | 0.0139571 | 0.000563158 | 0.0928061 | 0.103367 | | ('New', '9th') | 20Sep2020 | 03Oct2020 | 0.87 | 0.000463924 | 0.000984493 | 0.0793406 | 0.090479 |
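
For readers unfamiliar with the parameter columns: in CovsirPhy's SIR-F parameterization the effective reproduction number is, to my understanding, computed as Rt = rho * (1 - theta) / (sigma + kappa), and the rounded Rt column above is consistent with that relationship. The short Python check below recomputes Rt for the Main scenario's 0th and 1st phases using the values from the table; the formula itself is the only assumption.

```python
# Recompute Rt from the table's theta/kappa/rho/sigma columns, assuming the
# SIR-F relationship Rt = rho * (1 - theta) / (sigma + kappa).
def reproduction_number(theta: float, kappa: float, rho: float, sigma: float) -> float:
    return rho * (1 - theta) / (sigma + kappa)

# ('Main', '0th') row: Rt reported as 3.85
print(round(reproduction_number(0.0185683, 0.000768779, 0.112377, 0.027892), 2))     # 3.85
# ('Main', '1st') row: Rt reported as 13.53
print(round(reproduction_number(0.00119307, 0.000954275, 0.119908, 0.00789786), 2))  # 13.53
```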
104.954545
104
0.544825
yue_Hant
0.270987
f4b2330be304aa9fcfefe05f807b0dab759a9066
243
md
Markdown
configs/readme.md
openclimatefix/predict_pv_yield_2
505d7d06d30a42839b80a73428042a16e92ea831
[ "MIT" ]
7
2020-10-01T10:19:05.000Z
2021-05-19T12:19:15.000Z
configs/readme.md
openclimatefix/predict_pv_yield_2
505d7d06d30a42839b80a73428042a16e92ea831
[ "MIT" ]
56
2020-11-13T10:04:04.000Z
2021-07-19T09:57:24.000Z
configs/readme.md
openclimatefix/predict_pv_yield_2
505d7d06d30a42839b80a73428042a16e92ea831
[ "MIT" ]
1
2021-04-11T04:25:59.000Z
2021-04-11T04:25:59.000Z
The following folders hold the configuration files.

This idea is copied from https://github.com/ashleve/lightning-hydra-template/blob/main/configs/experiment/example_simple.yaml

Run experiments with:

`python run.py experiment=example_simple`
27
100
0.823045
eng_Latn
0.978634
f4b321210ede252ef65ac825afb1de1e5c4adae9
1,564
md
Markdown
biztalk/core/configuring-interchange-settings-edifact.md
OPS-E2E-PPE/biztalk-docs.zh-CN
4783f895d987cb13d919c7bd12ae52975b972123
[ "CC-BY-4.0", "MIT" ]
null
null
null
biztalk/core/configuring-interchange-settings-edifact.md
OPS-E2E-PPE/biztalk-docs.zh-CN
4783f895d987cb13d919c7bd12ae52975b972123
[ "CC-BY-4.0", "MIT" ]
null
null
null
biztalk/core/configuring-interchange-settings-edifact.md
OPS-E2E-PPE/biztalk-docs.zh-CN
4783f895d987cb13d919c7bd12ae52975b972123
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Configuring Interchange Settings (EDIFACT) | Microsoft Docs
ms.custom: ''
ms.date: 06/08/2017
ms.prod: biztalk-server
ms.reviewer: ''
ms.suite: ''
ms.tgt_pltfrm: ''
ms.topic: article
ms.assetid: 8873a88c-89c7-49b0-9886-5fc603947643
caps.latest.revision: 2
author: MandiOhlinger
ms.author: mandia
manager: anneta
ms.openlocfilehash: be59d32a5b7095b533cadd6d75d2877de38dbb7f
ms.sourcegitcommit: 381e83d43796a345488d54b3f7413e11d56ad7be
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 05/07/2019
ms.locfileid: "65355329"
---
# <a name="configuring-interchange-settings-edifact"></a>Configuring Interchange Settings (EDIFACT)

The EDIFACT interchange processing properties define how [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] processes the envelopes of EDIFACT-encoded interchanges received from a party.

## <a name="in-this-section"></a>In this section

- [Configuring Identifiers (EDIFACT)](../core/configuring-identifiers-edifact.md)

- [Configuring Acknowledgements (EDIFACT)](../core/configuring-acknowledgements-edifact.md)

- [Configuring Envelopes (EDIFACT - Interchange Settings)](../core/configuring-envelopes-edifact-interchange-settings.md)

- [Configuring Validation (EDIFACT - Interchange Settings)](../core/configuring-validation-edifact-interchange-settings.md)

- [Configuring Charset and Separators (EDIFACT)](../core/configuring-charset-and-separators-edifact.md)

- [Configuring Batching (EDIFACT)](../core/configuring-batching-edifact.md)

- [Configuring Local Host Settings (EDIFACT - Interchange Settings)](../core/configuring-local-host-settings-edifact-interchange-settings.md)

- [Configuring Send Port Association (EDIFACT)](../core/configuring-send-port-association-edifact.md)

## <a name="see-also"></a>See also

[Configuring EDIFACT-Specific Agreement Properties](../core/configuring-edifact-specific-agreement-properties.md)
35.545455
128
0.741049
yue_Hant
0.209149
f4b34ddf8a474da2cccccd48dbb32258163f1439
2,422
md
Markdown
README.md
divonlan/genozip-dvcf-results
122a12c88cdf4d616627ff27b0d46747c3e90bce
[ "MIT" ]
null
null
null
README.md
divonlan/genozip-dvcf-results
122a12c88cdf4d616627ff27b0d46747c3e90bce
[ "MIT" ]
null
null
null
README.md
divonlan/genozip-dvcf-results
122a12c88cdf4d616627ff27b0d46747c3e90bce
[ "MIT" ]
null
null
null
This folder contains the scripts needed to reproduce the DVCF results as reported. These are:

- `run-all.sh` - runs the other 6 run-* scripts in sequence
- `run-*` - 6 scripts for running the 6 analyses { snp, indel, clinvar } X { 37-to-38, 38-to-t2t }
- `reset.sh` - deletes all the files generated, but NOT the files in the shared directory downloaded from the Internet
- `reset-factory-defaults.sh` - deletes all files downloaded or generated.

**Note:** these scripts download all the data required from the Internet, into the "shared" directory. It is possible that you already have some of these files stored locally - in this case, you can copy or symbolically link them into this directory, to avoid downloading them again:

### Reference files:
1) ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/technical/reference/phase2_reference_assembly_sequence/hs37d5.fa.gz
2) ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/technical/reference/GRCh38_reference_genome/GRCh38_full_analysis_set_plus_decoy_hla.fa
3) https://s3-us-west-2.amazonaws.com/human-pangenomics/T2T/CHM13/assemblies/chm13.draft_v1.0.fasta.gz

### Chain files:
1) http://ftp.ensembl.org/pub/assembly_mapping/homo_sapiens/GRCh37_to_GRCh38.chain.gz
2) http://t2t.gi.ucsc.edu/chm13/hub/t2t-chm13-v1.0/hg38Lastz/hg38.t2t-chm13-v1.0.over.chain.gz

### Data files:
1) The file SS6004478.annotated.nh2.variants.vcf.gz, which is the first file in this tar archive: https://sharehost.hms.harvard.edu/genetics/reich_lab/sgdp/vcf_variants/vcfs.variants.public_samples.279samples.tar (only this file is downloaded, not the entire tar file)
2) ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/data_collections/1000G_2504_high_coverage/working/20201028_3202_raw_GT_with_annot/20201028_CCDG_14151_B01_GRM_WGS_2020-08-05_chr22.recalibrated_variants.vcf.gz
3) ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/release/20110521/ALL.chr22.phase1_release_v3.20101123.snps_indels_svs.genotypes.vcf.gz
4) ftp://ftp.ncbi.nlm.nih.gov/pub/clinvar/vcf_GRCh38/weekly/clinvar_20210724.vcf.gz
5) ftp://ftp.ncbi.nlm.nih.gov/pub/clinvar/vcf_GRCh37/weekly/clinvar_20210724.vcf.gz

### Pre-requisite software:
1) Genozip - installation options: https://genozip.com/installing.html
2) GATK - tested on version 4.1.7
3) CrossMap.py - tested on version 0.5.2 installed from conda
4) samtools - tested on version 1.11 installed from conda
5) common utilities: wget, curl, gzip

Questions? support@genozip.com
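
Before kicking off `run-all.sh`, it can save time to confirm that the prerequisite tools listed above are actually reachable. The following Python sketch (an editorial aside, not part of the repository) simply checks that each command is on PATH; the command names are taken from the prerequisite list, and the version requirements are not verified here.

```python
# Quick pre-flight check (not part of this repo): verify that the prerequisite
# tools listed above are on PATH before running run-all.sh.
import shutil

required = ["genozip", "gatk", "CrossMap.py", "samtools", "wget", "curl", "gzip"]

missing = [tool for tool in required if shutil.which(tool) is None]
if missing:
    print("Missing prerequisites:", ", ".join(missing))
else:
    print("All prerequisite tools found on PATH.")
```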
69.2
279
0.789843
eng_Latn
0.719738
f4b391633e0b494ef05a4f94c4e63fdf1f05ff27
58
md
Markdown
_includes/04-lists.md
psathyanarayan/markdown-portfolio
bd9aea0c65cef17b60c5597d65b586bbb73bad99
[ "MIT" ]
null
null
null
_includes/04-lists.md
psathyanarayan/markdown-portfolio
bd9aea0c65cef17b60c5597d65b586bbb73bad99
[ "MIT" ]
5
2022-01-26T09:06:57.000Z
2022-01-26T09:47:41.000Z
_includes/04-lists.md
psathyanarayan/markdown-portfolio
bd9aea0c65cef17b60c5597d65b586bbb73bad99
[ "MIT" ]
null
null
null
- Item 1 - Item 1 - Item 1 - Item 1 - - Item 1 - - Item 1
8.285714
10
0.517241
bis_Latn
0.833359
f4b3b34508b1e2dca636769c9e160f5d55be0654
20,427
md
Markdown
skype/skype-ps/skype/New-CsDiagnosticConfiguration.md
leewisestamp/office-docs-powershell
b054e2cd4c8886736898c5fbc84a9db815e9e39f
[ "CC-BY-4.0", "MIT" ]
487
2017-09-25T11:57:34.000Z
2022-03-30T11:44:48.000Z
skype/skype-ps/skype/New-CsDiagnosticConfiguration.md
leewisestamp/office-docs-powershell
b054e2cd4c8886736898c5fbc84a9db815e9e39f
[ "CC-BY-4.0", "MIT" ]
8,989
2017-09-20T16:12:46.000Z
2022-03-31T18:15:33.000Z
skype/skype-ps/skype/New-CsDiagnosticConfiguration.md
leewisestamp/office-docs-powershell
b054e2cd4c8886736898c5fbc84a9db815e9e39f
[ "CC-BY-4.0", "MIT" ]
1,117
2017-09-07T15:51:53.000Z
2022-03-31T06:22:53.000Z
--- external help file: Microsoft.Rtc.Management.dll-help.xml online version: https://docs.microsoft.com/powershell/module/skype/new-csdiagnosticconfiguration applicable: Lync Server 2010, Lync Server 2013, Skype for Business Server 2015, Skype for Business Server 2019 title: New-CsDiagnosticConfiguration schema: 2.0.0 manager: rogupta author: hirenshah1 ms.author: hirshah ms.reviewer: --- # New-CsDiagnosticConfiguration ## SYNOPSIS **Below Content Applies To:** Lync Server 2010 Creates new diagnostic configuration settings. Diagnostic configuration settings are used to determine whether traffic to or from a given domain or Uniform Resource Identifier (URI) is recorded in your Microsoft Lync Server 2010 log files. **Below Content Applies To:** Lync Server 2013 Creates new diagnostic configuration settings. Diagnostic configuration settings are used to determine whether traffic to or from a given domain or Uniform Resource Identifier (URI) is recorded in your Lync Server log files. This cmdlet was introduced in Lync Server 2010. **Below Content Applies To:** Skype for Business Server 2015 Creates new diagnostic configuration settings. Diagnostic configuration settings are used to determine whether traffic to or from a given domain or Uniform Resource Identifier (URI) is recorded in your Skype for Business Server 2015 log files. This cmdlet was introduced in Lync Server 2010. ## SYNTAX ``` New-CsDiagnosticConfiguration [-Identity] <XdsIdentity> [-Filter <Filter>] [-LoggingShare <String>] [-Force] [-InMemory] [-WhatIf] [-Confirm] [-LogAllSipHeaders <Boolean>] [<CommonParameters>] ``` ## DESCRIPTION **Below Content Applies To:** Lync Server 2010 If you enable logging for Lync Server 2010 then, by default, traffic traveling to or from any domain or URI is included in those log files. This ensures that as much information as possible is recorded in the log files. However, this can occasionally result in too much information. For example, if you are experiencing connectivity problems with a particular domain, you might want to limit logging to traffic between your network and that domain; that makes it easier for you to identify the relevant records and, in turn, might make it easier for you to diagnose and correct the problem. Diagnostic configuration settings make it possible for you to specify the domains or URIs that will be recorded in the log files; for example, you can log only the traffic to or from specified domains. Lync Server enables you to create diagnostic configuration settings at the site scope. In turn, this enables you to apply different settings to, say, the Redmond site than you do on your other sites. Note that you cannot create diagnostic configuration settings at the global scope; that's because the global scope already hosts these settings. Likewise, you cannot create a new settings collection at the site scope if the specified site already contains diagnostic configuration settings. For example, your command will fail if you try to create a new collection for the Redmond site and that site already hosts diagnostic configuration settings. Who can run this cmdlet: By default, members of the following groups are authorized to run the New-CsDiagnosticConfiguration cmdlet locally: RTCUniversalServerAdmins. 
To return a list of all the role-based access control (RBAC) roles this cmdlet has been assigned to (including any custom RBAC roles you have created yourself), run the following command from the Windows PowerShell prompt: Get-CsAdminRole | Where-Object {$_.Cmdlets -match "New-CsDiagnosticConfiguration"} **Below Content Applies To:** Lync Server 2013 If you enable logging for Lync Server then, by default, traffic traveling to or from any domain or URI is included in those log files. This ensures that as much information as possible is recorded in the log files. However, this can occasionally result in too much information. For example, if you are experiencing connectivity problems with a particular domain, you might want to limit logging to traffic between your network and that domain; that makes it easier for you to identify the relevant records and, in turn, might make it easier for you to diagnose and correct the problem. Diagnostic configuration settings make it possible for you to specify the domains or URIs that will be recorded in the log files; for example, you can log only the traffic to or from specified domains. Lync Server enables you to create diagnostic configuration settings at the site scope. In turn, this enables you to apply different settings to, say, the Redmond site than you do on your other sites. Note that you cannot create diagnostic configuration settings at the global scope; that's because the global scope already hosts these settings. Likewise, you cannot create a new settings collection at the site scope if the specified site already contains diagnostic configuration settings. For example, your command will fail if you try to create a new collection for the Redmond site and that site already hosts diagnostic configuration settings. Who can run this cmdlet: By default, members of the following groups are authorized to run the New-CsDiagnosticConfiguration cmdlet locally: RTCUniversalServerAdmins. To return a list of all the role-based access control (RBAC) roles this cmdlet has been assigned to (including any custom RBAC roles you have created yourself), run the following command from the Windows PowerShell prompt: Get-CsAdminRole | Where-Object {$_.Cmdlets -match "New-CsDiagnosticConfiguration"} **Below Content Applies To:** Skype for Business Server 2015 If you enable logging for Skype for Business Server 2015 then, by default, traffic traveling to or from any domain or URI is included in those log files. This ensures that as much information as possible is recorded in the log files. However, this can occasionally result in too much information. For example, if you are experiencing connectivity problems with a particular domain, you might want to limit logging to traffic between your network and that domain; that makes it easier for you to identify the relevant records and, in turn, might make it easier for you to diagnose and correct the problem. Diagnostic configuration settings make it possible for you to specify the domains or URIs that will be recorded in the log files; for example, you can log only the traffic to or from specified domains. Skype for Business Server 2015 enables you to create diagnostic configuration settings at the site scope. In turn, this enables you to apply different settings to, say, the Redmond site than you do on your other sites. Note that you cannot create diagnostic configuration settings at the global scope; that's because the global scope already hosts these settings. 
Likewise, you cannot create a new settings collection at the site scope if the specified site already contains diagnostic configuration settings. For example, your command will fail if you try to create a new collection for the Redmond site and that site already hosts diagnostic configuration settings. ## EXAMPLES ### -------------------------- Example 1 ------------------------ (Lync Server 2010) ``` New-CsDiagnosticConfiguration -Identity site:Redmond ``` The preceding command creates a new collection of diagnostic configuration settings for the Redmond site. ### -------------------------- EXAMPLE 1 -------------------------- (Lync Server 2013) ``` ``` Example 1 creates a new collection of diagnostic configuration settings for the Redmond site. New-CsDiagnosticConfiguration -Identity site:Redmond ### -------------------------- EXAMPLE 1 -------------------------- (Skype for Business Server 2015) ``` ``` Example 1 creates a new collection of diagnostic configuration settings for the Redmond site. New-CsDiagnosticConfiguration -Identity site:Redmond ### -------------------------- Example 2 ------------------------ (Lync Server 2010) ``` $x = New-CsDiagnosticsFilter -Fqdn fabrikam.com -Uri "sip:user@fabrikam.com" -Enabled $False New-CsDiagnosticConfiguration -Identity site:Redmond -Filter $x ``` The commands shown in Example 2 create a new diagnostics filter and then assign that filter to a new collection of diagnostic settings. To carry out this task, the first command calls New-CsDiagnosticsFilter to create an in-memory-only diagnostics filter; this filter adds the FQDN fabrikam.com and the URI sip:user@fabrikam.com to the filter. The command also sets the Enabled property to True ($True) in order to activate the filter. The resulting virtual filter is then stored in the variable $x. In command 2, New-CsDiagnosticConfiguration is used to create a new diagnostic configuration settings collection for the Redmond site. These new settings will use the diagnostic filter stored in the variable $x. ### -------------------------- EXAMPLE 2 -------------------------- (Lync Server 2013) ``` ``` The commands shown in Example 2 create a new diagnostics filter and then assign that filter to a new collection of diagnostic settings. To carry out this task, the first command calls New-CsDiagnosticsFilter to create an in-memory-only diagnostics filter; this command adds the FQDN fabrikam.com and the URI sip:user@fabrikam.com to the filter. The command also sets the Enabled property to True ($True) in order to activate the filter. The resulting virtual filter is then stored in the variable $x. In command 2, New-CsDiagnosticConfiguration is used to create a new diagnostic configuration settings collection for the Redmond site. These new settings will use the diagnostic filter stored in the variable $x. $x = New-CsDiagnosticsFilter -Fqdn fabrikam.com -Uri "sip:user@fabrikam.com" -Enabled $False New-CsDiagnosticConfiguration -Identity site:Redmond -Filter $x ### -------------------------- EXAMPLE 2 -------------------------- (Skype for Business Server 2015) ``` ``` The commands shown in Example 2 create a new diagnostics filter and then assign that filter to a new collection of diagnostic settings. To carry out this task, the first command calls the New-CsDiagnosticsFilter cmdlet to create an in-memory-only diagnostics filter; this command adds the FQDN fabrikam.com and the URI sip:user@fabrikam.com to the filter. The command also sets the Enabled property to True ($True) in order to activate the filter. 
The resulting virtual filter is then stored in the variable $x. In command 2, the New-CsDiagnosticConfiguration cmdlet is used to create a new diagnostic configuration settings collection for the Redmond site. These new settings will use the diagnostic filter stored in the variable $x. $x = New-CsDiagnosticsFilter -Fqdn fabrikam.com -Uri "sip:user@fabrikam.com" -Enabled $False New-CsDiagnosticConfiguration -Identity site:Redmond -Filter $x ### -------------------------- Example 3 ------------------------ (Lync Server 2010) ``` $x = New-CsDiagnosticConfiguration -Identity site:Redmond -InMemory $x.LoggingShare = "\\atl-fs-001\logs" Set-CsDiagnosticConfiguration -Instance $x ``` The commands shown in Example 3 demonstrate how you can create diagnostic configuration settings that initially exist only in memory. To do this, the first command calls New-CsDiagnosticConfiguration along with two parameters: Identity (which specifies the Identity for the settings) and InMemory, which indicates that the new settings should be created in memory only. The resulting object is stored in the variable $x. After the virtual settings have been created, the second command is used to configure the LoggingShare property to the UNC path \\\\atl-fs-001\logs. The final command is then used to transform the virtual diagnostic configuration settings into an actual collection of settings applied to the Redmond site. Note that this final command is mandatory. If you do not call Set-CsDiagnosticConfiguration, no settings will be applied to the Redmond site, and the virtual settings will disappear as soon as you end your Windows PowerShell session or delete the variable $x. ### -------------------------- EXAMPLE 3 -------------------------- (Lync Server 2013) ``` ``` The commands shown in Example 3 demonstrate how you can create diagnostic configuration settings that initially exist only in memory. To do this, the first command calls New-CsDiagnosticConfiguration along with two parameters: Identity (which specifies the Identity for the settings) and InMemory, which indicates that the new settings should be created in memory only. The resulting object is stored in the variable $x. After the virtual settings have been created, the second command is used to configure the LoggingShare property to the UNC path \\\\atl-fs-001\logs. The final command is then used to transform the virtual diagnostic configuration settings into an actual collection of settings applied to the Redmond site. Note that this final command is mandatory. If you do not call Set-CsDiagnosticConfiguration, no settings will be applied to the Redmond site, and the virtual settings will disappear as soon as you end your Windows PowerShell session or delete the variable $x. $x = New-CsDiagnosticConfiguration -Identity site:Redmond -InMemory $x.LoggingShare = "\\\\atl-fs-001\logs" Set-CsDiagnosticConfiguration -Instance $x ### -------------------------- EXAMPLE 3 -------------------------- (Skype for Business Server 2015) ``` ``` The commands shown in Example 3 demonstrate how you can create diagnostic configuration settings that initially exist only in memory. To do this, the first command calls the New-CsDiagnosticConfiguration cmdlet along with two parameters: Identity (which specifies the Identity for the settings) and InMemory, which indicates that the new settings should be created in memory only. The resulting object is stored in the variable $x. 
After the virtual settings have been created, the second command is used to configure the LoggingShare property to the UNC path \\\\atl-fs-001\logs. The final command is then used to transform the virtual diagnostic configuration settings into an actual collection of settings applied to the Redmond site. Note that this final command is mandatory. If you do not call the Set-CsDiagnosticConfiguration cmdlet, no settings will be applied to the Redmond site, and the virtual settings will disappear as soon as you end your Windows PowerShell session or delete the variable $x. $x = New-CsDiagnosticConfiguration -Identity site:Redmond -InMemory $x.LoggingShare = "\\\\atl-fs-001\logs" Set-CsDiagnosticConfiguration -Instance $x ## PARAMETERS ### -Identity Unique identifier for the diagnostics configuration settings to be created. Because new settings can only be created at the site scope you must use syntax similar to this: -Identity "site:Redmond". ```yaml Type: XdsIdentity Parameter Sets: (All) Aliases: Applicable: Lync Server 2010, Lync Server 2013, Skype for Business Server 2015, Skype for Business Server 2019 Required: True Position: 2 Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -Filter Collection of domains and URIs whose traffic will be logged if diagnostic filtering is enabled. The Filter property consists of three separate items: Fqdn - Collection of domains to be included in the filter. (More technically, this is the host portion of a SIP address.) For example a fully qualified domain name (FQDN) might look like this: fabrikam.com. Alternatively, you can use wildcards to represent multiple domains: *.fabrikam.com. You can include more than one domain in a single filter. Uri - Collection of URIs to be included in the filter. (The Uri is the user@host portion of a SIP address.) A Uri can consist of any of the following patterns: user@fabrikam.com; user@*; *@fabrikam.com. You can include multiple URIs in a single filter. Enabled - Indicates whether or not the filter should be activated. ```yaml Type: Filter Parameter Sets: (All) Aliases: Applicable: Lync Server 2010, Lync Server 2013, Skype for Business Server 2015, Skype for Business Server 2019 Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -LoggingShare Shared folder where the diagnostic logs can be uploaded. ```yaml Type: String Parameter Sets: (All) Aliases: Applicable: Lync Server 2010, Lync Server 2013, Skype for Business Server 2015, Skype for Business Server 2019 Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -Force Suppresses the display of any non-fatal error message that might arise when running the command. ```yaml Type: SwitchParameter Parameter Sets: (All) Aliases: Applicable: Lync Server 2010, Lync Server 2013, Skype for Business Server 2015, Skype for Business Server 2019 Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -InMemory **Below Content Applies To:** Lync Server 2010, Lync Server 2013 Creates an object reference without actually committing the object as a permanent change. If you assign the output of this cmdlet called with this parameter to a variable, you can make changes to the properties of the object reference and then commit those changes by calling this cmdlet's matching Set- cmdlet. 
**Below Content Applies To:** Skype for Business Server 2015 Creates an object reference without actually committing the object as a permanent change. If you assign the output of this cmdlet called with this parameter to a variable, you can make changes to the properties of the object reference and then commit those changes by calling this cmdlet's matching Set-\<cmdlet\>. ```yaml Type: SwitchParameter Parameter Sets: (All) Aliases: Applicable: Lync Server 2010, Lync Server 2013, Skype for Business Server 2015, Skype for Business Server 2019 Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -WhatIf Describes what would happen if you executed the command without actually executing the command. ```yaml Type: SwitchParameter Parameter Sets: (All) Aliases: wi Applicable: Lync Server 2010, Lync Server 2013, Skype for Business Server 2015, Skype for Business Server 2019 Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -Confirm Prompts you for confirmation before executing the command. ```yaml Type: SwitchParameter Parameter Sets: (All) Aliases: cf Applicable: Lync Server 2010, Lync Server 2013, Skype for Business Server 2015, Skype for Business Server 2019 Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -LogAllSipHeaders When set to False, only the core SIP headers are recorded in the logs. Setting this value to False can help reduce the size of the log files. When set to True, all SIP headers are logged. ```yaml Type: Boolean Parameter Sets: (All) Aliases: Applicable: Lync Server 2013, Skype for Business Server 2015, Skype for Business Server 2019 Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### CommonParameters This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see about_CommonParameters (https://go.microsoft.com/fwlink/?LinkID=113216). ## INPUTS ### None. New-CsDiagnosticConfiguration does not accept pipelined input. ### None. The New-CsDiagnosticConfiguration cmdlet does not accept pipelined input. ## OUTPUTS ### New-CsDiagnosticConfiguration creates new instances of the Microsoft.Rtc.Management.WritableConfig.Settings.Diagnostics.DiagnosticFilterSettings. ### The New-CsDiagnosticConfiguration cmdlet creates new instances of the Microsoft.Rtc.Management.WritableConfig.Settings.Diagnostics.DiagnosticFilterSettings. ## NOTES ## RELATED LINKS [Get-CsDiagnosticConfiguration](Get-CsDiagnosticConfiguration.md) [New-CsDiagnosticsFilter](New-CsDiagnosticsFilter.md) [Remove-CsDiagnosticConfiguration](Remove-CsDiagnosticConfiguration.md) [Set-CsDiagnosticConfiguration](Set-CsDiagnosticConfiguration.md)
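As a quick, hedged illustration of how the parameters documented above combine, the following sketch creates a diagnostics filter and applies it, together with a logging share and the LogAllSipHeaders switch, to a new site-scoped collection. The site name, domain, URI, and share path are the placeholder values used in the examples above, not required values.

```powershell
# Illustrative only; values are placeholders taken from the examples above.
$filter = New-CsDiagnosticsFilter -Fqdn "fabrikam.com" -Uri "sip:user@fabrikam.com" -Enabled $True

New-CsDiagnosticConfiguration -Identity "site:Redmond" `
    -Filter $filter `
    -LoggingShare "\\atl-fs-001\logs" `
    -LogAllSipHeaders $False
```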
48.176887
315
0.78034
eng_Latn
0.992964
f4b3d80ed93417b8f62bd98021e5702b104dfc87
1,146
md
Markdown
docs/csharp/misc/cs1732.md
yunuskorkmaz/docs.tr-tr
e73dea6e171ca23e56c399c55e586a61d5814601
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/csharp/misc/cs1732.md
yunuskorkmaz/docs.tr-tr
e73dea6e171ca23e56c399c55e586a61d5814601
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/csharp/misc/cs1732.md
yunuskorkmaz/docs.tr-tr
e73dea6e171ca23e56c399c55e586a61d5814601
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- description: 'Learn more about: Compiler Error CS1732' title: Compiler Error CS1732 ms.date: 07/20/2015 f1_keywords: - CS1732 helpviewer_keywords: - CS1732 ms.assetid: 72c7f7fc-d5f2-4538-9b02-50dda54d3b1e ms.openlocfilehash: 33b6b62817b88bea744723dfd40badc1320dab3d ms.sourcegitcommit: ddf7edb67715a5b9a45e3dd44536dabc153c1de0 ms.translationtype: MT ms.contentlocale: tr-TR ms.lasthandoff: 02/06/2021 ms.locfileid: "99750916" --- # <a name="compiler-error-cs1732"></a>Compiler Error CS1732 Expected parameter. This error is generated when a lambda expression contains a comma after an input parameter but does not specify the parameter that should follow. ## <a name="to-correct-this-error"></a>To correct this error 1. Remove the comma, or add the input parameter that the compiler expects to find after the comma. ## <a name="example"></a>Example The following example generates CS1732: ```csharp // cs1732.cs // compile with: /target:library class Test { delegate void D(int x, int y); static void Main() { D d = (x,) => { }; // CS1732 } } ```
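For completeness, here is a minimal sketch of the corrected code, following the guidance above: either supply the parameter the compiler expects after the comma, or remove the comma and use a delegate type that takes a single parameter. The file name is hypothetical.

```csharp
// cs1732_fixed.cs (illustrative)
class Test
{
    delegate void D(int x, int y);

    static void Main()
    {
        // Supply the second parameter the delegate type expects...
        D d = (x, y) => { };

        // ...or drop the trailing comma and use a single-parameter delegate type.
        System.Action<int> a = (x) => { };
    }
}
```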
26.651163
135
0.709424
tur_Latn
0.978555
f4b49a647a45047d403bfaf1826f58af8ff72d5d
6,642
md
Markdown
articles/media-services/latest/become-on-premises-encoder-partner.md
andreaspreuss/azure-docs.de-de
0a3712b9239d821a1c24d0a1f05b44d2d6cbb02f
[ "CC-BY-4.0", "MIT" ]
1
2020-04-03T08:58:02.000Z
2020-04-03T08:58:02.000Z
articles/media-services/latest/become-on-premises-encoder-partner.md
andreaspreuss/azure-docs.de-de
0a3712b9239d821a1c24d0a1f05b44d2d6cbb02f
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/media-services/latest/become-on-premises-encoder-partner.md
andreaspreuss/azure-docs.de-de
0a3712b9239d821a1c24d0a1f05b44d2d6cbb02f
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Become an on-premises encoder partner – Azure Media Services description: Become an on-premises encoder partner with Azure Media Services. services: media-services author: johndeu manager: johndeu ms.author: johndeu ms.date: 03/02/2020 ms.topic: article ms.service: media-services ms.openlocfilehash: 6b00e430f960195e1badd2a73f9291997b94c833 ms.sourcegitcommit: e4c33439642cf05682af7f28db1dbdb5cf273cc6 ms.translationtype: HT ms.contentlocale: de-DE ms.lasthandoff: 03/03/2020 ms.locfileid: "78253147" --- # <a name="become-an-on-premises-encoder-partner"></a>Become an on-premises encoder partner When you become an Azure Media Services on-premises encoder partner, Media Services supports your product by recommending your encoder to enterprise customers. To become an on-premises encoder partner, you must confirm the compatibility of your on-premises encoder with Media Services. To do so, complete the following verification steps. ### <a name="pass-through-live-event-verification"></a>Pass-through live event verification 1. In your Media Services account, make sure that the **streaming endpoint** is running. 2. Create and start the **pass-through** live event. <br/> For more information, see [Live event states and billing](live-event-states-billing.md). 3. Get the ingest URLs and configure your on-premises encoder to use the URL to send a multi-bitrate live stream to Media Services. 4. Get the preview URL and use it to verify that the input from the encoder is actually being received. 5. Create a new **asset**. 6. Create a **live output** and use the name of the asset you created. 7. Create a **streaming locator** with the built-in types of **streaming policies**. 8. List the paths on the **streaming locator** to return the URLs to use. 9. Get the host name for the **streaming endpoint** you want to stream from. 10. Combine the URL from step 8 with the host name from step 9 to get the full URL. 11. Run your live encoder for approximately 10 minutes. 12. Stop the live event. 13. Use a player such as [Azure Media Player](https://aka.ms/azuremediaplayer) to watch the archived asset and make sure that playback shows no visible glitches at any quality level. You can also use the preview URL during the live session to watch and verify. 14. Record the asset ID, the published streaming URL for the live archive, the settings used, and the version of your live encoder. 15. Reset the state of the live event after creating each sample. 16. Repeat steps 5 through 15 for all configurations supported by your encoder (with and without ad signaling or captions, and at different encoding speeds). ### <a name="live-encoding-live-event-verification"></a>Live encoding live event verification 1. In your Media Services account, make sure that the **streaming endpoint** is running. 2. Create and start a **live encoding** live event. <br/> For more information, see [Live event states and billing](live-event-states-billing.md). 3. 
Get the ingest URLs and configure your encoder to push a single-bitrate live stream to Media Services. 4. Get the preview URL and use it to verify that the input from the encoder is actually being received. 5. Create a new **asset**. 6. Create a **live output** and use the name of the asset you created. 7. Create a **streaming locator** with the built-in types of **streaming policies**. 8. List the paths on the **streaming locator** to return the URLs to use. 9. Get the host name for the **streaming endpoint** you want to stream from. 10. Combine the URL from step 8 with the host name from step 9 to get the full URL. 11. Run your live encoder for approximately 10 minutes. 12. Stop the live event. 13. Use a player such as [Azure Media Player](https://aka.ms/azuremediaplayer) to watch the archived asset and make sure that playback shows no visible glitches at any quality level. You can also use the preview URL during the live session to watch and verify. 14. Record the asset ID, the published streaming URL for the live archive, the settings used, and the version of your live encoder. 15. Reset the state of the live event after creating each sample. 16. Repeat steps 5 through 15 for all configurations supported by your encoder (with and without ad signaling or captions, and at different encoding speeds). ### <a name="longevity-verification"></a>Longevity verification Follow the same steps as in the [pass-through live event verification](#pass-through-live-event-verification), except for step 11. <br/>Instead of 10 minutes, run your live encoder for at least one week. Use a player such as [Azure Media Player](https://aka.ms/azuremediaplayer) to watch the live stream (or an archived asset) from time to time, to make sure that playback shows no visible glitches. ### <a name="email-your-recorded-settings"></a>Email your recorded settings Finally, email your recorded settings and live archive parameters to Azure Media Services at amshelp@microsoft.com as notification that all self-verification checks have passed. Also include your contact information for follow-up. If you have any questions about this process, contact the Azure Media Services team. ## <a name="see-also"></a>See also [Recommended on-premises live encoders](recommended-on-premises-live-encoders.md) ## <a name="next-steps"></a>Next steps [Live streaming with Media Services v3](live-streaming-overview.md)
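The verification steps above map fairly directly onto Azure CLI commands. The following is a rough sketch only: the resource names are hypothetical, and the exact flag spellings should be verified against `az ams --help` before use.

```bash
# Hypothetical names; verify flags with `az ams --help`.
RG=my-rg
AMS=myamsaccount

# Create and start a live event with an RTMP ingest (step 2).
az ams live-event create --resource-group $RG --account-name $AMS \
    --name myLiveEvent --streaming-protocol RTMP --auto-start

# Read the ingest URLs to configure the on-premises encoder (step 3).
az ams live-event show --resource-group $RG --account-name $AMS \
    --name myLiveEvent --query "input.endpoints[].url"

# Create the asset and the live output that archives into it (steps 5-6).
az ams asset create --resource-group $RG --account-name $AMS --name myArchiveAsset
az ams live-output create --resource-group $RG --account-name $AMS \
    --live-event-name myLiveEvent --name myLiveOutput \
    --asset-name myArchiveAsset --archive-window-length PT1H

# Publish the asset and list the streaming paths (steps 7-8).
az ams streaming-locator create --resource-group $RG --account-name $AMS \
    --name myLocator --asset-name myArchiveAsset \
    --streaming-policy-name Predefined_ClearStreamingOnly
az ams streaming-locator get-paths --resource-group $RG --account-name $AMS --name myLocator

# Get the streaming endpoint host name to build the full playback URL (steps 9-10).
az ams streaming-endpoint show --resource-group $RG --account-name $AMS \
    --name default --query hostName
```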
88.56
505
0.81346
deu_Latn
0.997875
f4b4be3ecee623abe289720bc78d08caf29d71d6
766
md
Markdown
2020/CVE-2020-8011.md
justinforbes/cve
375c65312f55c34fc1a4858381315fe9431b0f16
[ "MIT" ]
2,340
2022-02-10T21:04:40.000Z
2022-03-31T14:42:58.000Z
2020/CVE-2020-8011.md
justinforbes/cve
375c65312f55c34fc1a4858381315fe9431b0f16
[ "MIT" ]
19
2022-02-11T16:06:53.000Z
2022-03-11T10:44:27.000Z
2020/CVE-2020-8011.md
justinforbes/cve
375c65312f55c34fc1a4858381315fe9431b0f16
[ "MIT" ]
280
2022-02-10T19:58:58.000Z
2022-03-26T11:13:05.000Z
### [CVE-2020-8011](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8011) ![](https://img.shields.io/static/v1?label=Product&message=CA%20Unified%20Infrastructure%20Management%20(Nimsoft%2FUIM)&color=blue) ![](https://img.shields.io/static/v1?label=Version&message=n%2Fa&color=blue) ![](https://img.shields.io/static/v1?label=Vulnerability&message=null%20pointer%20dereference%20DoS&color=brightgreen) ### Description CA Unified Infrastructure Management (Nimsoft/UIM) 20.1, 20.3.x, and 9.20 and below contains a null pointer dereference vulnerability in the robot (controller) component. A remote attacker can crash the Controller service. ### POC #### Reference No PoCs from references. #### Github - https://github.com/wetw0rk/CA-UIM-Nimbus-Research
42.555556
222
0.765013
eng_Latn
0.281273
f4b68a0b52c44950371abac2906cd90cbafda371
11,600
md
Markdown
fabric-test/23269-25926/23851.md
hyperledger-gerrit-archive/fabric-gerrit
188c6e69ccb2e4c4d609ae749a467fa7e289b262
[ "Apache-2.0" ]
2
2021-01-08T04:06:04.000Z
2021-02-09T08:28:54.000Z
fabric-test/23269-25926/23851.md
cendhu/fabric-gerrit
188c6e69ccb2e4c4d609ae749a467fa7e289b262
[ "Apache-2.0" ]
null
null
null
fabric-test/23269-25926/23851.md
cendhu/fabric-gerrit
188c6e69ccb2e4c4d609ae749a467fa7e289b262
[ "Apache-2.0" ]
4
2019-12-07T05:54:26.000Z
2020-06-04T02:29:43.000Z
<strong>Project</strong>: fabric-test<br><strong>Branch</strong>: master<br><strong>ID</strong>: 23851<br><strong>Subject</strong>: [FAB-10920] Add certificates before setting objects<br><strong>Status</strong>: MERGED<br><strong>Owner</strong>: Latitia Haskins - latitia.haskins@gmail.com<br><strong>Assignee</strong>:<br><strong>Created</strong>: 6/28/2018, 5:58:39 PM<br><strong>LastUpdated</strong>: 6/29/2018, 4:37:09 PM<br><strong>CommitMessage</strong>:<br><pre>[FAB-10920] Add certificates before setting objects The orderer and channel objects were being set before setting the crypto suite for the requests. This rearranges the order of operations to be sure the crypto suite is set first. Change-Id: I4438267dcd9178675d7fa7ec453295731791cd77 Signed-off-by: Latitia M Haskins <latitia.haskins@gmail.com> </pre><h1>Comments</h1><strong>Reviewer</strong>: Latitia Haskins - latitia.haskins@gmail.com<br><strong>Reviewed</strong>: 6/28/2018, 5:58:39 PM<br><strong>Message</strong>: <pre>Uploaded patch set 1.</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 6/28/2018, 6:04:28 PM<br><strong>Message</strong>: <pre>Patch Set 1: Build Started https://jenkins.hyperledger.org/job/fabric-test-verify-x86_64/1513/</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 6/28/2018, 6:07:39 PM<br><strong>Message</strong>: <pre>Patch Set 1: Verified-1 Build Failed https://jenkins.hyperledger.org/job/fabric-test-verify-x86_64/1513/ : FAILURE No problems were identified. If you know why this problem occurred, please add a suitable Cause for it. ( https://jenkins.hyperledger.org/job/fabric-test-verify-x86_64/1513/ ) Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-test-verify-x86_64/1513</pre><strong>Reviewer</strong>: Latitia Haskins - latitia.haskins@gmail.com<br><strong>Reviewed</strong>: 6/29/2018, 12:01:37 AM<br><strong>Message</strong>: <pre>Patch Set 1: Reverify</pre><strong>Reviewer</strong>: Latitia Haskins - latitia.haskins@gmail.com<br><strong>Reviewed</strong>: 6/29/2018, 12:11:16 AM<br><strong>Message</strong>: <pre>Patch Set 1: reverify</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 6/29/2018, 12:17:13 AM<br><strong>Message</strong>: <pre>Patch Set 1: -Verified Build Started https://jenkins.hyperledger.org/job/fabric-test-verify-x86_64/1514/</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 6/29/2018, 12:18:44 AM<br><strong>Message</strong>: <pre>Patch Set 1: Verified-1 Build Failed https://jenkins.hyperledger.org/job/fabric-test-verify-x86_64/1514/ : FAILURE No problems were identified. If you know why this problem occurred, please add a suitable Cause for it. 
( https://jenkins.hyperledger.org/job/fabric-test-verify-x86_64/1514/ ) Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-test-verify-x86_64/1514</pre><strong>Reviewer</strong>: Scott Zwierzynski - scottz@us.ibm.com<br><strong>Reviewed</strong>: 6/29/2018, 9:41:25 AM<br><strong>Message</strong>: <pre>Patch Set 1: (1 comment)</pre><strong>Reviewer</strong>: Latitia Haskins - latitia.haskins@gmail.com<br><strong>Reviewed</strong>: 6/29/2018, 10:50:21 AM<br><strong>Message</strong>: <pre>Patch Set 1: (1 comment)</pre><strong>Reviewer</strong>: Latitia Haskins - latitia.haskins@gmail.com<br><strong>Reviewed</strong>: 6/29/2018, 10:53:09 AM<br><strong>Message</strong>: <pre>Uploaded patch set 2.</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 6/29/2018, 10:56:45 AM<br><strong>Message</strong>: <pre>Patch Set 2: Build Started https://jenkins.hyperledger.org/job/fabric-test-verify-x86_64/1515/</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 6/29/2018, 10:59:56 AM<br><strong>Message</strong>: <pre>Patch Set 2: Verified-1 Build Failed https://jenkins.hyperledger.org/job/fabric-test-verify-x86_64/1515/ : FAILURE No problems were identified. If you know why this problem occurred, please add a suitable Cause for it. ( https://jenkins.hyperledger.org/job/fabric-test-verify-x86_64/1515/ ) Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-test-verify-x86_64/1515</pre><strong>Reviewer</strong>: Latitia Haskins - latitia.haskins@gmail.com<br><strong>Reviewed</strong>: 6/29/2018, 11:08:32 AM<br><strong>Message</strong>: <pre>Uploaded patch set 3.</pre><strong>Reviewer</strong>: Scott Zwierzynski - scottz@us.ibm.com<br><strong>Reviewed</strong>: 6/29/2018, 11:12:57 AM<br><strong>Message</strong>: <pre>Patch Set 3: Code-Review+1</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 6/29/2018, 11:14:11 AM<br><strong>Message</strong>: <pre>Patch Set 3: Build Started https://jenkins.hyperledger.org/job/fabric-test-verify-x86_64/1516/</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 6/29/2018, 12:40:16 PM<br><strong>Message</strong>: <pre>Patch Set 3: Verified-1 Build Failed https://jenkins.hyperledger.org/job/fabric-test-verify-x86_64/1516/ : FAILURE No problems were identified. If you know why this problem occurred, please add a suitable Cause for it. 
( https://jenkins.hyperledger.org/job/fabric-test-verify-x86_64/1516/ ) Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-test-verify-x86_64/1516</pre><strong>Reviewer</strong>: Scott Zwierzynski - scottz@us.ibm.com<br><strong>Reviewed</strong>: 6/29/2018, 2:54:56 PM<br><strong>Message</strong>: <pre>Patch Set 4: Patch Set 3 was rebased</pre><strong>Reviewer</strong>: Scott Zwierzynski - scottz@us.ibm.com<br><strong>Reviewed</strong>: 6/29/2018, 2:55:06 PM<br><strong>Message</strong>: <pre>Patch Set 4: Verified+1 Code-Review+2</pre><strong>Reviewer</strong>: Scott Zwierzynski - scottz@us.ibm.com<br><strong>Reviewed</strong>: 6/29/2018, 2:55:10 PM<br><strong>Message</strong>: <pre>Removed Verified-1 by Hyperledger Jobbuilder <jobbuilder@jenkins.hyperledger.org> </pre><strong>Reviewer</strong>: Scott Zwierzynski - scottz@us.ibm.com<br><strong>Reviewed</strong>: 6/29/2018, 2:55:13 PM<br><strong>Message</strong>: <pre>Change has been successfully merged by Scott Zwierzynski</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 6/29/2018, 2:58:31 PM<br><strong>Message</strong>: <pre>Patch Set 4: Build Started https://jenkins.hyperledger.org/job/fabric-test-verify-x86_64/1518/</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 6/29/2018, 2:58:43 PM<br><strong>Message</strong>: <pre>Patch Set 4: Build Started https://jenkins.hyperledger.org/job/fabric-test-merge-x86_64/356/</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Reviewed</strong>: 6/29/2018, 4:37:09 PM<br><strong>Message</strong>: <pre>Patch Set 4: Build Failed https://jenkins.hyperledger.org/job/fabric-test-merge-x86_64/356/ : FAILURE No problems were identified. If you know why this problem occurred, please add a suitable Cause for it. ( https://jenkins.hyperledger.org/job/fabric-test-merge-x86_64/356/ ) Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-test-merge-x86_64/356</pre><h1>PatchSets</h1><h3>PatchSet Number: 1</h3><blockquote><strong>Type</strong>: REWORK<br><strong>Author</strong>: Latitia Haskins - latitia.haskins@gmail.com<br><strong>Uploader</strong>: Latitia Haskins - latitia.haskins@gmail.com<br><strong>Created</strong>: 6/28/2018, 5:58:39 PM<br><strong>UnmergedRevision</strong>: [470f5aa3148ec919c0b055a65575c2932790edef](https://github.com/hyperledger-gerrit-archive/fabric-test/commit/470f5aa3148ec919c0b055a65575c2932790edef)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Approved</strong>: 6/29/2018, 12:18:44 AM<br><strong>Type</strong>: Verified<br><strong>Value</strong>: -1<br><br><h2>Comments</h2><strong>Commenter</strong>: Scott Zwierzynski - scottz@us.ibm.com<br><strong>CommentLine</strong>: [feature/sdk/node/invoke.js#L35](https://github.com/hyperledger-gerrit-archive/fabric-test/blob/470f5aa3148ec919c0b055a65575c2932790edef/feature/sdk/node/invoke.js#L35)<br><strong>Comment</strong>: <pre>should we move this down as well?</pre><strong>Commenter</strong>: Latitia Haskins - latitia.haskins@gmail.com<br><strong>CommentLine</strong>: [feature/sdk/node/invoke.js#L35](https://github.com/hyperledger-gerrit-archive/fabric-test/blob/470f5aa3148ec919c0b055a65575c2932790edef/feature/sdk/node/invoke.js#L35)<br><strong>Comment</strong>: <pre>Sure! 
It shouldn't hurt anything</pre></blockquote><h3>PatchSet Number: 2</h3><blockquote><strong>Type</strong>: REWORK<br><strong>Author</strong>: Latitia Haskins - latitia.haskins@gmail.com<br><strong>Uploader</strong>: Latitia Haskins - latitia.haskins@gmail.com<br><strong>Created</strong>: 6/29/2018, 10:53:09 AM<br><strong>UnmergedRevision</strong>: [4f1ea3e53923a8b1718474ababba3d3b0e49b5b0](https://github.com/hyperledger-gerrit-archive/fabric-test/commit/4f1ea3e53923a8b1718474ababba3d3b0e49b5b0)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Approved</strong>: 6/29/2018, 10:59:56 AM<br><strong>Type</strong>: Verified<br><strong>Value</strong>: -1<br><br></blockquote><h3>PatchSet Number: 3</h3><blockquote><strong>Type</strong>: REWORK<br><strong>Author</strong>: Latitia Haskins - latitia.haskins@gmail.com<br><strong>Uploader</strong>: Latitia Haskins - latitia.haskins@gmail.com<br><strong>Created</strong>: 6/29/2018, 11:08:32 AM<br><strong>UnmergedRevision</strong>: [5f61a194cf5a97009b32b3db3276b637b5f35d75](https://github.com/hyperledger-gerrit-archive/fabric-test/commit/5f61a194cf5a97009b32b3db3276b637b5f35d75)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - jobbuilder@jenkins.hyperledger.org<br><strong>Approved</strong>: 6/29/2018, 12:40:16 PM<br><strong>Type</strong>: Verified<br><strong>Value</strong>: -1<br><br><strong>Approver</strong>: Scott Zwierzynski - scottz@us.ibm.com<br><strong>Approved</strong>: 6/29/2018, 11:12:57 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br></blockquote><h3>PatchSet Number: 4</h3><blockquote><strong>Type</strong>: TRIVIAL_REBASE<br><strong>Author</strong>: Latitia Haskins - latitia.haskins@gmail.com<br><strong>Uploader</strong>: Scott Zwierzynski - scottz@us.ibm.com<br><strong>Created</strong>: 6/29/2018, 2:54:56 PM<br><strong>GitHubMergedRevision</strong>: [86980cf8163cfe79c26d9751d5441fcf961a8c6a](https://github.com/hyperledger-gerrit-archive/fabric-test/commit/86980cf8163cfe79c26d9751d5441fcf961a8c6a)<br><br><strong>Approver</strong>: Scott Zwierzynski - scottz@us.ibm.com<br><strong>Approved</strong>: 6/29/2018, 2:55:06 PM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><strong>MergedBy</strong>: Scott Zwierzynski<br><strong>Merged</strong>: 6/29/2018, 2:55:13 PM<br><br><strong>Approver</strong>: Scott Zwierzynski - scottz@us.ibm.com<br><strong>Approved</strong>: 6/29/2018, 2:55:06 PM<br><strong>Type</strong>: Verified<br><strong>Value</strong>: 1<br><br></blockquote>
163.380282
4,037
0.7675
kor_Hang
0.265479
f4b6f71619b654b45eb73f0484956f6298f427a5
10,015
md
Markdown
vendor/github.com/digitalocean/godo/CHANGELOG.md
nickschuch/prometheus
4b5fd9f8096368cee4ae06a0d827ef982e44405f
[ "Apache-2.0" ]
162
2017-07-27T15:23:10.000Z
2022-01-03T18:32:13.000Z
vendor/github.com/digitalocean/godo/CHANGELOG.md
nickschuch/prometheus
4b5fd9f8096368cee4ae06a0d827ef982e44405f
[ "Apache-2.0" ]
183
2017-11-04T22:03:15.000Z
2021-09-15T19:14:00.000Z
vendor/github.com/digitalocean/godo/CHANGELOG.md
nickschuch/prometheus
4b5fd9f8096368cee4ae06a0d827ef982e44405f
[ "Apache-2.0" ]
55
2017-11-04T13:13:18.000Z
2022-01-10T05:58:22.000Z
# Change Log ## [v1.54.0] - 2020-11-24 - #417 - @waynr - registry: add support for garbage collection types ## [v1.53.0] - 2020-11-20 - #414 - @varshavaradarajan - kubernetes: add clusterlint support - #413 - @andrewsomething - images: Support updating distribution and description. ## [v1.52.0] - 2020-11-05 - #411 - @nicktate - apps: add unspecified type to image source registry types - #409 - @andrewsomething - registry: Add support for updating a subscription. - #408 - @nicktate - apps: update spec to include image source - #407 - @kamaln7 - apps: add the option to force build a new deployment ## [v1.51.0] - 2020-11-02 - #405 - @adamwg - registry: Support subscription options - #398 - @reeseconor - Add support for caching dependencies between GitHub Action runs - #404 - @andrewsomething - CONTRIBUTING.md: Suggest using github-changelog-generator. ## [v1.50.0] - 2020-10-26 - #400 - @waynr - registry: add garbage collection support - #402 - @snormore - apps: add catchall_document static site spec field and failed-deploy job type - #401 - @andrewlouis93 - VPC: adds option to set a VPC as the regional default ## [v1.49.0] - 2020-10-21 - #383 - @kamaln7 - apps: add ListRegions, Get/ListTiers, Get/ListInstanceSizes - #390 - @snormore - apps: add service spec internal_ports ## [v1.48.0] - 2020-10-16 - #388 - @varshavaradarajan - kubernetes - change docr integration api routes - #386 - @snormore - apps: pull in recent updates to jobs and domains ## [v1.47.0] - 2020-10-14 - #384 kubernetes - add registry related doks apis - @varshavaradarajan - #385 Fixed some typo in apps.gen.go and databases.go file - @devil-cyber - #382 Add GetKubeConfigWithExpiry (#334) - @ivanlemeshev - #381 Fix golint issues #377 - @sidsbrmnn - #380 refactor: Cyclomatic complexity issue - @DonRenando - #379 Run gofmt to fix some issues in codebase - @mycodeself ## [v1.46.0] - 2020-10-05 - #373 load balancers: add LB size field, currently in closed beta - @anitgandhi ## [v1.45.0] - 2020-09-25 **Note**: This release contains breaking changes to App Platform features currently in closed beta. - #369 update apps types to latest - @kamaln7 - #368 Kubernetes: add taints field to node pool create and update requests - @timoreimann - #367 update apps types, address marshaling bug - @kamaln7 ## [v1.44.0] - 2020-09-08 - #364 apps: support aggregate deployment logs - @kamaln7 ## [v1.43.0] - 2020-09-08 - #362 update apps types - @kamaln7 ## [v1.42.1] - 2020-08-06 - #360 domains: Allow for SRV records with port 0. - @andrewsomething ## [v1.42.0] - 2020-07-22 - #357 invoices: add category to InvoiceItem - @rbutler - #358 apps: add support for following logs - @nanzhong ## [v1.41.0] - 2020-07-17 - #355 kubernetes: Add support for surge upgrades - @varshavaradarajan ## [v1.40.0] - 2020-07-16 - #347 Make Rate limits thread safe - @roidelapluie - #353 Reuse TCP connection - @itsksaurabh ## [v1.39.0] - 2020-07-14 - #345, #346 Add app platform support [beta] - @nanzhong ## [v1.38.0] - 2020-06-18 - #341 Install 1-click applications on a Kubernetes cluster - @keladhruv - #340 Add RecordsByType, RecordsByName and RecordsByTypeAndName to the DomainsService - @viola ## [v1.37.0] - 2020-06-01 - #336 registry: URL encode repository names when building URLs. @adamwg - #335 Add 1-click service and request. @scottcrawford03 ## [v1.36.0] - 2020-05-12 - #331 Expose expiry_seconds for Registry.DockerCredentials. 
@andrewsomething ## [v1.35.1] - 2020-04-21 - #328 Update vulnerable x/crypto dependency - @bentranter ## [v1.35.0] - 2020-04-20 - #326 Add TagCount field to registry/Repository - @nicktate - #325 Add DOCR EA routes - @nicktate - #324 Upgrade godo to Go 1.14 - @bentranter ## [v1.34.0] - 2020-03-30 - #320 Add VPC v3 attributes - @viola ## [v1.33.1] - 2020-03-23 - #318 upgrade github.com/stretchr/objx past 0.1.1 - @hilary ## [v1.33.0] - 2020-03-20 - #310 Add BillingHistory service and List endpoint - @rbutler - #316 load balancers: add new enable_backend_keepalive field - @anitgandhi ## [v1.32.0] - 2020-03-04 - #311 Add reset database user auth method - @zbarahal-do ## [v1.31.0] - 2020-02-28 - #305 invoices: GetPDF and GetCSV methods - @rbutler - #304 Add NewFromToken convenience method to init client - @bentranter - #301 invoices: Get, Summary, and List methods - @rbutler - #299 Fix param expiry_seconds for kubernetes.GetCredentials request - @velp ## [v1.30.0] - 2020-02-03 - #295 registry: support the created_at field - @adamwg - #293 doks: node pool labels - @snormore ## [v1.29.0] - 2019-12-13 - #288 Add Balance Get method - @rbutler - #286,#289 Deserialize meta field - @timoreimann ## [v1.28.0] - 2019-12-04 - #282 Add valid Redis eviction policy constants - @bentranter - #281 Remove databases info from top-level godoc string - @bentranter - #280 Fix VolumeSnapshotResourceType value volumesnapshot -> volume_snapshot - @aqche ## [v1.27.0] - 2019-11-18 - #278 add mysql user auth settings for database users - @gregmankes ## [v1.26.0] - 2019-11-13 - #272 dbaas: get and set mysql sql mode - @mikejholly ## [v1.25.0] - 2019-11-13 - #275 registry/docker-credentials: add support for the read/write parameter - @kamaln7 - #273 implement the registry/docker-credentials endpoint - @kamaln7 - #271 Add registry resource - @snormore ## [v1.24.1] - 2019-11-04 - #264 Update isLast to check p.Next - @aqche ## [v1.24.0] - 2019-10-30 - #267 Return []DatabaseFirewallRule in addition to raw response. 
- @andrewsomething ## [v1.23.1] - 2019-10-30 - #265 add support for getting/setting firewall rules - @gregmankes - #262 remove ResolveReference call - @mdanzinger - #261 Update CONTRIBUTING.md - @mdanzinger ## [v1.22.0] - 2019-09-24 - #259 Add Kubernetes GetCredentials method - @snormore ## [v1.21.1] - 2019-09-19 - #257 Upgrade to Go 1.13 - @bentranter ## [v1.21.0] - 2019-09-16 - #255 Add DropletID to Kubernetes Node instance - @snormore - #254 Add tags to Database, DatabaseReplica - @Zyqsempai ## [v1.20.0] - 2019-09-06 - #252 Add Kubernetes autoscale config fields - @snormore - #251 Support unset fields on Kubernetes cluster and node pool updates - @snormore - #250 Add Kubernetes GetUser method - @snormore ## [v1.19.0] - 2019-07-19 - #244 dbaas: add private-network-uuid field to create request ## [v1.18.0] - 2019-07-17 - #241 Databases: support for custom VPC UUID on migrate @mikejholly - #240 Add the ability to get URN for a Database @stack72 - #236 Fix omitempty typos in JSON struct tags @amccarthy1 ## [v1.17.0] - 2019-06-21 - #238 Add support for Redis eviction policy in Databases @mikejholly ## [v1.16.0] - 2019-06-04 - #233 Add Kubernetes DeleteNode method, deprecate RecycleNodePoolNodes @bouk ## [v1.15.0] - 2019-05-13 - #231 Add private connection fields to Databases - @mikejholly - #223 Introduce Go modules - @andreiavrammsd ## [v1.14.0] - 2019-05-13 - #229 Add support for upgrading Kubernetes clusters - @adamwg ## [v1.13.0] - 2019-04-19 - #213 Add tagging support for volume snapshots - @jcodybaker ## [v1.12.0] - 2019-04-18 - #224 Add maintenance window support for Kubernetes- @fatih ## [v1.11.1] - 2019-04-04 - #222 Fix Create Database Pools json fields - @sunny-b ## [v1.11.0] - 2019-04-03 - #220 roll out vpc functionality - @jheimann ## [v1.10.1] - 2019-03-27 - #219 Fix Database Pools json field - @sunny-b ## [v1.10.0] - 2019-03-20 - #215 Add support for Databases - @mikejholly ## [v1.9.0] - 2019-03-18 - #214 add support for enable_proxy_protocol. - @mregmi ## [v1.8.0] - 2019-03-13 - #210 Expose tags on storage volume create/list/get. - @jcodybaker ## [v1.7.5] - 2019-03-04 - #207 Add support for custom subdomains for Spaces CDN [beta] - @xornivore ## [v1.7.4] - 2019-02-08 - #202 Allow tagging volumes - @mchitten ## [v1.7.3] - 2018-12-18 - #196 Expose tag support for creating Load Balancers. ## [v1.7.2] - 2018-12-04 - #192 Exposes more options for Kubernetes clusters. ## [v1.7.1] - 2018-11-27 - #190 Expose constants for the state of Kubernetes clusters. 
## [v1.7.0] - 2018-11-13 - #188 Kubernetes support [beta] - @aybabtme ## [v1.6.0] - 2018-10-16 - #185 Projects support [beta] - @mchitten ## [v1.5.0] - 2018-10-01 - #181 Adding tagging images support - @hugocorbucci ## [v1.4.2] - 2018-08-30 - #178 Allowing creating domain records with weight of 0 - @TFaga - #177 Adding `VolumeLimit` to account - @lxfontes ## [v1.4.1] - 2018-08-23 - #176 Fix cdn flush cache API endpoint - @sunny-b ## [v1.4.0] - 2018-08-22 - #175 Add support for Spaces CDN - @sunny-b ## [v1.3.0] - 2018-05-24 - #170 Add support for volume formatting - @adamwg ## [v1.2.0] - 2018-05-08 - #166 Remove support for Go 1.6 - @iheanyi - #165 Add support for Let's Encrypt Certificates - @viola ## [v1.1.3] - 2018-03-07 - #156 Handle non-json errors from the API - @aknuds1 - #158 Update droplet example to use latest instance type - @dan-v ## [v1.1.2] - 2018-03-06 - #157 storage: list volumes should handle only name or only region params - @andrewsykim - #154 docs: replace first example with fully-runnable example - @xmudrii - #152 Handle flags & tag properties of domain record - @jaymecd ## [v1.1.1] - 2017-09-29 - #151 Following user agent field recommendations - @joonas - #148 AsRequest method to create load balancers requests - @lukegb ## [v1.1.0] - 2017-06-06 ### Added - #145 Add FirewallsService for managing Firewalls with the DigitalOcean API. - @viola - #139 Add TTL field to the Domains. - @xmudrii ### Fixed - #143 Fix oauth2.NoContext depreciation. - @jbowens - #141 Fix DropletActions on tagged resources. - @xmudrii ## [v1.0.0] - 2017-03-10 ### Added - #130 Add Convert to ImageActionsService. - @xmudrii - #126 Add CertificatesService for managing certificates with the DigitalOcean API. - @viola - #125 Add LoadBalancersService for managing load balancers with the DigitalOcean API. - @viola - #122 Add GetVolumeByName to StorageService. - @protochron - #113 Add context.Context to all calls. - @aybabtme
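Several entries above (for example #304, which introduced `NewFromToken`) describe client-initialization conveniences. The following minimal Go sketch shows how a godo client created with `NewFromToken` might be used to list droplets; the environment variable name is an assumption, not part of the library.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/digitalocean/godo"
)

func main() {
	// NewFromToken was added in v1.31.0 (see #304 above).
	client := godo.NewFromToken(os.Getenv("DIGITALOCEAN_TOKEN")) // env var name is an assumption

	droplets, _, err := client.Droplets.List(context.Background(), &godo.ListOptions{PerPage: 50})
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range droplets {
		fmt.Println(d.ID, d.Name)
	}
}
```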
28.614286
99
0.698752
eng_Latn
0.548579
f4b6f827a74e1fb059343b11a91ab25c201865bc
10,198
md
Markdown
doc/development/documentation/site_architecture/index.md
maxiko/gitlabhq
3e65e2679b6e7cbdc31bdfa1d3f5051c69c81e9a
[ "MIT" ]
1
2021-01-27T01:23:38.000Z
2021-01-27T01:23:38.000Z
doc/development/documentation/site_architecture/index.md
maxiko/gitlabhq
3e65e2679b6e7cbdc31bdfa1d3f5051c69c81e9a
[ "MIT" ]
3
2021-09-03T05:27:49.000Z
2022-02-26T10:22:59.000Z
doc/development/documentation/site_architecture/index.md
webbemntt97/gitlabhq
37c558bfe744de6144a24deb295c76c7c56adb30
[ "MIT" ]
null
null
null
--- stage: none group: unassigned info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments description: "Learn how GitLab's documentation website is architectured." --- # Documentation site architecture The [`gitlab-docs`](https://gitlab.com/gitlab-org/gitlab-docs) project hosts the repository which is used to generate the GitLab documentation website and is deployed to <https://docs.gitlab.com>. It uses the [Nanoc](https://nanoc.ws/) static site generator. ## Architecture While the source of the documentation content is stored in GitLab's respective product repositories, the source that is used to build the documentation site _from that content_ is located at <https://gitlab.com/gitlab-org/gitlab-docs>. The following diagram illustrates the relationship between the repositories from where content is sourced, the `gitlab-docs` project, and the published output. ```mermaid graph LR A[gitlab/doc] B[gitlab-runner/docs] C[omnibus-gitlab/doc] D[charts/doc] E[gitlab-docs] A --> E B --> E C --> E D --> E E -- Build pipeline --> F F[docs.gitlab.com] G[/ce/] H[/ee/] I[/runner/] J[/omnibus/] K[/charts/] F --> H F --> I F --> J F --> K H -- symlink --> G ``` GitLab docs content isn't kept in the `gitlab-docs` repository. All documentation files are hosted in the respective repository of each product, and all together are pulled to generate the docs website: - [GitLab](https://gitlab.com/gitlab-org/gitlab/tree/master/doc) - [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab/tree/master/doc) - [GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner/tree/master/docs) - [GitLab Chart](https://gitlab.com/charts/gitlab/tree/master/doc) NOTE: **Note:** In September 2019, we [moved towards a single codebase](https://gitlab.com/gitlab-org/gitlab/-/issues/2952), as such the docs for CE and EE are now identical. For historical reasons and in order not to break any existing links throughout the internet, we still maintain the CE docs (`https://docs.gitlab.com/ce/`), although it is hidden from the website, and is now a symlink to the EE docs. When [Pages supports redirects](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/24), we can remove this completely. ## Assets To provide an optimized site structure, design, and a search-engine friendly website, along with a discoverable documentation, we use a few assets for the GitLab Documentation website. ### Libraries - [Bootstrap 4.3.1 components](https://getbootstrap.com/docs/4.3/components/) - [Bootstrap 4.3.1 JS](https://getbootstrap.com/docs/4.3/getting-started/javascript/) - [jQuery](https://jquery.com/) 3.3.1 - [Clipboard JS](https://clipboardjs.com/) - [Font Awesome 4.7.0](https://fontawesome.com/v4.7.0/icons/) ### SEO - [Schema.org](https://schema.org/) - [Google Analytics](https://marketingplatform.google.com/about/analytics/) - [Google Tag Manager](https://developers.google.com/tag-manager/) ## Global navigation Read through [the global navigation documentation](global_nav.md) to understand: - How the global navigation is built. - How to add new navigation items. <!-- ## Helpers TBA --> ## Pipelines The pipeline in the `gitlab-docs` project: - Tests changes to the docs site code. - Builds the Docker images used in various pipeline jobs. - Builds and deploys the docs site itself. - Generates the review apps when the `review-docs-deploy` job is triggered. 
### Rebuild the docs site Docker images Once a week on Mondays, a scheduled pipeline runs and rebuilds the Docker images used in various pipeline jobs, like `docs-lint`. The Docker image configuration files are located in the [Dockerfiles directory](https://gitlab.com/gitlab-org/gitlab-docs/-/tree/master/dockerfiles). If you need to rebuild the Docker images immediately (must have maintainer level permissions): CAUTION: **Caution:** If you change the dockerfile configuration and rebuild the images, you can break the master pipeline in the main `gitlab` repository as well as in `gitlab-docs`. Create an image with a different name first and test it to ensure you do not break the pipelines. 1. In [`gitlab-docs`](https://gitlab.com/gitlab-org/gitlab-docs), go to **{rocket}** **CI / CD > Pipelines**. 1. Click the **Run Pipeline** button. 1. See that a new pipeline is running. The jobs that build the images are in the first stage, `build-images`. You can click the pipeline number to see the larger pipeline graph, or click the first (`build-images`) stage in the mini pipeline graph to expose the jobs that build the images. 1. Click the **play** (**{play}**) button next to the images you want to rebuild. - Normally, you do not need to rebuild the `image:gitlab-docs-base` image, as it rarely changes. If it does need to be rebuilt, be sure to only run `image:docs-lint` after it is finished rebuilding. ### Deploy the docs site Every four hours a scheduled pipeline builds and deploys the docs site. The pipeline fetches the current docs from the main project's master branch, builds it with Nanoc and deploys it to <https://docs.gitlab.com>. If you need to build and deploy the site immediately (must have maintainer level permissions): 1. In [`gitlab-docs`](https://gitlab.com/gitlab-org/gitlab-docs), go to **{rocket}** **CI / CD > Schedules**. 1. For the `Build docs.gitlab.com every 4 hours` scheduled pipeline, click the **play** (**{play}**) button. Read more about the [deployment process](deployment_process.md). ## Using YAML data files The easiest way to achieve something similar to [Jekyll's data files](https://jekyllrb.com/docs/datafiles/) in Nanoc is by using the [`@items`](https://nanoc.ws/doc/reference/variables/#items-and-layouts) variable. The data file must be placed inside the `content/` directory and then it can be referenced in an ERB template. Suppose we have the `content/_data/versions.yaml` file with the content: ```yaml versions: - 10.6 - 10.5 - 10.4 ``` We can then loop over the `versions` array with something like: ```erb <% @items['/_data/versions.yaml'][:versions].each do | version | %> <h3><%= version %></h3> <% end %> ``` Note that the data file must have the `yaml` extension (not `yml`) and that we reference the array with a symbol (`:versions`). ## Bumping versions of CSS and JavaScript Whenever the custom CSS and JavaScript files under `content/assets/` change, make sure to bump their version in the front matter. This method guarantees that your changes take effect by clearing the cache of previous files. Always use Nanoc's way of including those files, do not hardcode them in the layouts. 
For example, use:

```erb
<script async type="application/javascript" src="<%= @items['/assets/javascripts/badges.*'].path %>"></script>
<link rel="stylesheet" href="<%= @items['/assets/stylesheets/toc.*'].path %>">
```

The links pointing to the files should be similar to:

```erb
<%= @items['/path/to/assets/file.*'].path %>
```

Nanoc then builds and renders those links correctly according to what's defined in [`Rules`](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/Rules).

## Linking to source files

A helper called [`edit_on_gitlab`](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/lib/helpers/edit_on_gitlab.rb) can be used to link to a page's source file. We can link to both the simple editor and the Web IDE. Here's how you can use it in a Nanoc layout:

- Default editor: `<a href="<%= edit_on_gitlab(@item, editor: :simple) %>">Simple editor</a>`
- Web IDE: `<a href="<%= edit_on_gitlab(@item, editor: :webide) %>">Web IDE</a>`

If you don't specify `editor:`, the simple one is used by default.

## Algolia search engine

The docs site uses [Algolia DocSearch](https://community.algolia.com/docsearch/) for its search function. This is how it works:

1. GitLab is a member of the [DocSearch program](https://community.algolia.com/docsearch/#join-docsearch-program), which is the free tier of [Algolia](https://www.algolia.com/).
1. Algolia hosts a [DocSearch configuration](https://github.com/algolia/docsearch-configs/blob/master/configs/gitlab.json) for the GitLab docs site, and we've worked together to refine it.
1. That [configuration](https://community.algolia.com/docsearch/config-file.html) is parsed by their [crawler](https://community.algolia.com/docsearch/crawler-overview.html) every 24 hours, and the crawler [stores](https://community.algolia.com/docsearch/inside-the-engine.html) the [DocSearch index](https://community.algolia.com/docsearch/how-do-we-build-an-index.html) on [Algolia's servers](https://community.algolia.com/docsearch/faq.html#where-is-my-data-hosted%3F).
1. On the docs side, we use a [DocSearch layout](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/layouts/docsearch.html) which is present on pretty much every page except <https://docs.gitlab.com/search/>, which uses its [own layout](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/layouts/instantsearch.html). In those layouts, there's a JavaScript snippet which initiates DocSearch by using an API key and an index name (`gitlab`) that are needed for Algolia to show the results.

### Algolia notes for GitLab team members

If you're a GitLab team member, find credentials for the Algolia dashboard in the shared [GitLab 1Password account](https://about.gitlab.com/handbook/security/#1password-for-teams). To receive weekly reports of the search usage, search the Google doc with title `Email, Slack, and GitLab Groups and Aliases`, search for `docsearch`, and add a comment with your email to be added to the alias that gets the weekly reports.

## Monthly release process (versions)

The docs website supports versions, and each month we add the latest one to the list. For more information, read about the [monthly release process](release_process.md).

## Review Apps for documentation merge requests

If you are contributing to GitLab docs, read how to [create a Review App with each merge request](../index.md#previewing-the-changes-live).
40.629482
178
0.740047
eng_Latn
0.96228
f4b72be4b8bc44b4253f2cbf68fbe72277964997
1,782
md
Markdown
README.md
kapseliboi/vulnerability-detection-tools-comparison
ac6beba2ae4c20aa5b56ac82df0b253277ec6993
[ "MIT" ]
null
null
null
README.md
kapseliboi/vulnerability-detection-tools-comparison
ac6beba2ae4c20aa5b56ac82df0b253277ec6993
[ "MIT" ]
null
null
null
README.md
kapseliboi/vulnerability-detection-tools-comparison
ac6beba2ae4c20aa5b56ac82df0b253277ec6993
[ "MIT" ]
null
null
null
# vulnerability-detection-tools-comparison

This repository contains scripts for running multiple vulnerability detection tools at once.

## Requirements

- Node.js >= 16 and npm >= 7
- Git
- OWASP Dependency Check CLI (I used version 6.2.2, installation: https://jeremylong.github.io/DependencyCheck/dependency-check-cli/)
- Yarn (required by dependency check)
- Ruby + bundle-audit (otherwise dependency check will throw errors when it tries to scan Ruby code)
- Authenticated Snyk CLI installed (I used version 1.658.0)
- Authenticated GitHub CLI, or unauthenticated with an access token
- Unix `find` command available in the path
- GitHub.com enterprise account (without it, Dependabot results might be incorrect: https://docs.github.com/en/code-security/supply-chain-security/managing-vulnerabilities-in-your-projects-dependencies/troubleshooting-the-detection-of-vulnerable-dependencies#are-there-limits-which-affect-the-dependency-graph-data)

## Functionality

### Select

Selects projects based on a provided JSON file, which has to be produced by GitHub Search or be in the same format. It checks the number of dependencies in the repository, and if the repository meets the minimum dependency requirement, it gives the user a link to the repository so they can manually inspect its suitability. The user then simply accepts or rejects the project, and accepted projects are saved to the database for later analysis. This project uses TypeORM, so the type of database is freely configurable.

Some extra work is also done for later analysis with repository-based tools: the project is forked and Dependabot is enabled using an authenticated GitHub CLI.

### Analyze

Produces results with all the tools and saves them to `./results/${GitHub username}/${Repository name}/${Commit hash}`.
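The exact invocations live in this repository's scripts, but to make the workflow concrete, here is a rough, illustrative sketch of the kind of CLI calls the Select and Analyze steps orchestrate (the repository name, paths, and output locations below are placeholders, not values taken from this repo):

```sh
# Fork a candidate repository without cloning it (Select step side effect)
gh repo fork someowner/somerepo --clone=false

# Snyk scan of a local checkout, keeping machine-readable output
snyk test --all-projects --json > results/snyk.json

# OWASP Dependency-Check scan of the same checkout
dependency-check.sh --project somerepo --scan . --format JSON --out results/odc
```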
61.448276
314
0.801347
eng_Latn
0.99674
f4b77a66093e25933343d2d13e0de6f26ecb799f
634
md
Markdown
README.md
Azambik/test
44b5e7da5cb5d3dc15b5a7d5645dd5bf9be741e0
[ "MIT" ]
null
null
null
README.md
Azambik/test
44b5e7da5cb5d3dc15b5a7d5645dd5bf9be741e0
[ "MIT" ]
null
null
null
README.md
Azambik/test
44b5e7da5cb5d3dc15b5a7d5645dd5bf9be741e0
[ "MIT" ]
null
null
null
dfa ------------------ Table of contents ------------------ [Description](#description) [Usage](#usage) [Test](#testing) [Created by](#createdby) description <a name="description"></a> adsf Usage <a name="usage"></a> asdf Testing instructions <a name="testing"></a> asdf created by <a name="createdby"></a> asdf contact me at: asdf My github : [https://github.com/asdf](https://github.com/asdf) ## License[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
14.409091
119
0.537855
eng_Latn
0.194506
f4b7e01051194d17c505f17852bb1a72d904fb9c
2,260
md
Markdown
docs/api-reference/soap/Services86/BLOB/SetBlobStream.md
SuperOfficeDocs/data-access
f0a7033b0f43c4b60921885c7cebde9b789f4487
[ "MIT" ]
null
null
null
docs/api-reference/soap/Services86/BLOB/SetBlobStream.md
SuperOfficeDocs/data-access
f0a7033b0f43c4b60921885c7cebde9b789f4487
[ "MIT" ]
59
2021-04-28T07:02:54.000Z
2022-02-03T13:08:46.000Z
docs/api-reference/soap/Services86/BLOB/SetBlobStream.md
SuperOfficeDocs/data-access
f0a7033b0f43c4b60921885c7cebde9b789f4487
[ "MIT" ]
2
2021-07-01T10:19:20.000Z
2021-08-12T09:41:30.000Z
---
title: Services86.BLOBAgent.SetBlobStream SOAP
generated: 1
uid: Services86-BLOB-SetBlobStream
---

# Services86 BLOB SetBlobStream

SOAP request and response examples **Remote/Services86/BLOB.svc**

Implemented by the <see cref="M:SuperOffice.Services86.IBLOBAgent.SetBlobStream">SuperOffice.Services86.IBLOBAgent.SetBlobStream</see> method.

## SetBlobStream

Store a binary object from its stream.

* **blobEntityId:** Id of the BLOB entity object that the binary data should be stored to.
* **stream:** The binary object as a Stream

[WSDL file for Services86/BLOB](../Services86-BLOB.md)

Obtain a ticket from the [Services86/SoPrincipal.svc](../SoPrincipal/index.md)

Application tokens must be specified if calling an Online installation. ApplicationTokens are not checked for on-site installations.

## SetBlobStream Request

```xml
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://www.w3.org/2003/05/soap-envelope"
                   xmlns:SOAP-ENC="http://www.w3.org/2003/05/soap-encoding"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                   xmlns:NetServerServices861="http://schemas.microsoft.com/2003/10/Serialization/"
                   xmlns:BLOB="http://www.superoffice.net/ws/crm/NetServer/Services86">
  <BLOB:ApplicationToken>1234567-1234-9876</BLOB:ApplicationToken>
  <BLOB:Credentials>
    <BLOB:Ticket>7T:1234abcxyzExample==</BLOB:Ticket>
  </BLOB:Credentials>
  <SOAP-ENV:Body>
    <BLOB:SetBlobStream>
      <BLOB:BlobEntityId xsi:type="xsd:int">0</BLOB:BlobEntityId>
      <BLOB:Stream xsi:type="xsd:base64Binary"></BLOB:Stream>
    </BLOB:SetBlobStream>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
```

## SetBlobStream Response

```xml
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://www.w3.org/2003/05/soap-envelope"
                   xmlns:SOAP-ENC="http://www.w3.org/2003/05/soap-encoding"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                   xmlns:NetServerServices861="http://schemas.microsoft.com/2003/10/Serialization/"
                   xmlns:BLOB="http://www.superoffice.net/ws/crm/NetServer/Services86">
  <SOAP-ENV:Body>
    <BLOB:SetBlobStreamResponse>
    </BLOB:SetBlobStreamResponse>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
```
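The page itself only shows the envelopes; as a rough illustration (the endpoint hostname and the saved request file are placeholders, not values from this page), such a request could be posted with curl like this:

```sh
# Save the request envelope above as request.xml, then POST it to the BLOB endpoint
# of your installation. The hostname below is a placeholder.
curl -X POST "https://yourserver.example.com/Remote/Services86/BLOB.svc" \
  -H "Content-Type: application/soap+xml; charset=utf-8" \
  --data-binary @request.xml
```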
30.958904
142
0.749115
yue_Hant
0.839418
f4b7f6f9e27cac989351cb8eb60574a75330cc3e
525
md
Markdown
packages/git/README.md
localpcguy/cardstack
fde5ff6204469464c0a4c667be7a59ffb253b033
[ "MIT" ]
2
2018-07-17T03:05:57.000Z
2021-12-08T07:36:51.000Z
packages/git/README.md
localpcguy/cardstack
fde5ff6204469464c0a4c667be7a59ffb253b033
[ "MIT" ]
null
null
null
packages/git/README.md
localpcguy/cardstack
fde5ff6204469464c0a4c667be7a59ffb253b033
[ "MIT" ]
null
null
null
This is a Cardstack data source plugin for reading and writing to Git.

Building native nodegit dep on osx
--------------------------------

You need the latest Xcode *and* you need to manually tell it to get the latest CLI tools via

    sudo xcode-select --install

Merely upgrading Xcode will still leave you broken and frustrated.

I cloned and built nodegit in its own repo, and then used `yarn link`. This seems to function as insurance against `yarn` deciding to rebuild it from scratch (which takes a long time).
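The steps above are described only loosely; assuming a standard `yarn link` workflow, they might look roughly like this (the repository URL and paths are illustrative):

```sh
# Build nodegit once in its own working copy (the slow native compile)
git clone https://github.com/nodegit/nodegit.git
cd nodegit
yarn install

# Register the local build, then consume it from this package's checkout
yarn link
cd ../cardstack/packages/git   # illustrative path to this package
yarn link nodegit
```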
40.384615
184
0.721905
eng_Latn
0.999636
f4b818e115c7d781c66527f28f6b54db00e85c4b
7,346
md
Markdown
README.md
LiteCode/DITranquillity
c415ee568d69ff717a075bb5bb553f599d8881a4
[ "MIT" ]
null
null
null
README.md
LiteCode/DITranquillity
c415ee568d69ff717a075bb5bb553f599d8881a4
[ "MIT" ]
null
null
null
README.md
LiteCode/DITranquillity
c415ee568d69ff717a075bb5bb553f599d8881a4
[ "MIT" ]
null
null
null
[![Carthage compatible](https://img.shields.io/badge/Carthage-compatible-4BC51D.svg?style=flat)](https://github.com/Carthage/Carthage)
[![CocoaPods Version](https://img.shields.io/cocoapods/v/DITranquillity.svg?style=flat)](http://cocoapods.org/pods/DITranquillity)
[![License](https://img.shields.io/github/license/ivlevAstef/DITranquillity.svg?maxAge=2592000)](http://cocoapods.org/pods/DITranquillity)
[![Platform](https://img.shields.io/cocoapods/p/DITranquillity.svg?style=flat)](http://cocoapods.org/pods/DITranquillity)
[![Swift Version](https://img.shields.io/badge/Swift-3.0--5.0-F16D39.svg?style=flat)](https://developer.apple.com/swift)

# DITranquillity

A small library for [dependency injection](https://en.wikipedia.org/wiki/Dependency_injection) in applications written in pure Swift for iOS/OSX/tvOS. Despite its size, it covers a wide range of tasks, including Storyboard support. Its main advantages are modular support, detailed error descriptions, and a rich set of features.

## Features

<img align="right" src="https://habrastorage.org/files/c6d/c89/5d0/c6dc895d02324b96bc679f41228ab6bf.png" alt="Tranquillity">

* Pure Swift type support
* Initializer injections [ru](Documentation/ru/registration.md#Разрешение-зависимостей-при-инициализации)
* Property, method injections [ru](Documentation/ru/injection.md#Внедрение)
* Named and tagged definitions, and Many [ru](Documentation/ru/modificators.md#Модификаторы)
* Type forwarding [ru](Documentation/ru/registration.md#Указание-сервисов)
* Lifetimes: single, perRun(weak/strong), perContainer(weak/strong), objectGraph, prototype, custom [ru](Documentation/ru/lifetime.md#Время-жизни)
* iOS/macOS Storyboard and StoryboardReference [ru](Documentation/ru/storyboard.md#storyboard)
* Circular dependencies [ru](Documentation/ru/injection.md#Внедрение-циклических-зависимостей-через-свойства)
* Three-level hierarchy: types, part, framework [ru](Documentation/ru/part_framework.md#Части-и-Фреймворки)
* Short resolve syntax [ru](Documentation/ru/resolve.md#Разрешение-зависимостей)
* keyPath injection (since Swift 4.0) [ru](Documentation/ru/injection.md#Внедрение-зависимостей-через-свойства-используя-keypath)
* Very detailed logs [ru](Documentation/ru/log.md#Логирование)
* Container validation at app run [ru](Documentation/ru/validation.md#Валидация-контейнера)
* Injection into subviews and cells [ru](Documentation/ru/storyboard.md#Внедрение-в-subview-и-ячейки)
* Delayed injection support [ru](Documentation/ru/delayed_injection.md#Отложенное-внедрение)
* Injection with arguments at any depth
* Container hierarchy
* Thread safe

### Helpful links

* Read the Quick Start [ru](Documentation/ru/quick_start.md#Быстрый-старт) / [~~en~~](Documentation/en/Ups.md)
* Or documentation [ru](Documentation/ru/main.md) / [~~en~~](Documentation/en/Ups.md)
* Samples [ru](Documentation/ru/sample.md) / [en](Samples)
* Also see [code documentation](https://htmlpreview.github.io/?https://github.com/ivlevAstef/DITranquillity/blob/master/Documentation/code/index.html)

## Usage

```Swift
// container - used to register and resolve your types
let container = DIContainer()

container.register{ Cat(name: "Felix") }
  .as(Animal.self) // register Cat named Felix as protocol Animal
  .lifetime(.prototype) // set lifetime

container.register(PetOwner.init) // register PetOwner

// you can validate your registration graph
if !container.validate() {
  fatalError("...")
}

.................................................
// get an instance of a type from the container
let owner: PetOwner = container.resolve()
let animal: Animal = *container // short syntax

print(owner.pet.name) // "Felix"
print(animal.name) // "Felix"

.................................................

// where

protocol Animal {
  var name: String { get }
}

class Cat: Animal {
  let name: String
  init(name: String) { self.name = name }
}

class PetOwner {
  let pet: Animal
  init(pet: Animal) { self.pet = pet }
}
```

<details>
<summary>See More</summary>

```Swift
let container = DIContainer()

container.register{ Cat(name: "Felix") }
  .as(Animal.self)

container.register{ Dog(name: "Rex") }
  .as(Animal.self)
  .default()

container.register{ PetOwner(pets: many($0)) }
  .injection(\.home) // since swift4.0 and 3.2.0 lib

container.register(Home.init)
  .postInit{ $0.address = "City, Street, Number" }

.................................................

let owner: PetOwner = *container
print(owner.pets.map{ $0.name }) // ["Felix", "Rex"]
print(owner.home.address) // "City, Street, Number"

.................................................

// where

protocol Animal {
  var name: String { get }
}

class Cat: Animal {
  let name: String
  init(name: String) { self.name = name }
}

class Dog: Animal {
  let name: String
  init(name: String) { self.name = name }
}

class PetOwner {
  let pets: [Animal]
  init(pets: [Animal]) { self.pets = pets }
  private(set) var home: Home!
}

class Home {
  var address: String!
}
```

</details>

### Storyboard (iOS/OS X)

<details>
<summary>See code</summary>

Create your ViewController:

```Swift
class ViewController: UIViewController/NSViewController {
  private(set) var inject: Inject?

  override func viewDidLoad() {
    super.viewDidLoad()
    print("Inject: \(inject)")
  }
}
```

Create container:

```Swift
let container = DIContainer()

container.register(ViewController.self)
  .injection(\.inject)
```

Create Storyboard:

```Swift
/// for iOS
func applicationDidFinishLaunching(_ application: UIApplication) {
  let storyboard = DIStoryboard.create(name: "Main", bundle: nil, container: container)
  window = UIWindow(frame: UIScreen.main.bounds)
  window!.rootViewController = storyboard.instantiateInitialViewController()
  window!.makeKeyAndVisible()
}
```

```Swift
/// for OS X
func applicationDidFinishLaunching(_ aNotification: Notification) {
  let storyboard = DIStoryboard.create(name: "Main", bundle: nil, container: container)
  let viewController = storyboard.instantiateInitialController() as! NSViewController
  let window = NSApplication.shared.windows.first
  window?.contentViewController = viewController
}
```

</details>

## Install

###### Via CocoaPods.

To install DITranquillity with CocoaPods, add the following lines to your Podfile: `pod 'DITranquillity'`

###### Via Carthage.

`github "ivlevAstef/DITranquillity"` Swift (iOS8+,macOS10.10+,tvOS9+)

## Requirements

iOS 8.0+, macOS 10.10+, tvOS 9.0+; ARC

* Swift 5.0: Xcode 10.2; version >= 3.6.3
* Swift 4.2: Xcode 10; version >= 3.4.3
* Swift 4.1: Xcode 9.3; version >= 3.2.3
* Swift 4.0: Xcode 9.0; version >= 3.0.5
* Swift 3.0-3.2: Xcode 8.0-9.0; 0.9.5 <= version < 3.7.0
* Swift 2.3: Xcode 7.0; version < 0.9.5

## Migration

* v1.x.x -> v2.x.x [ru](Documentation/ru/migration1to2.md)
* v2.x.x -> v3.x.x [ru](Documentation/ru/migration2to3.md)

## Changelog

See the [CHANGELOG.md](CHANGELOG.md) file.

## Feedback

### I've found a bug, or have a feature request

Please raise a [GitHub issue](https://github.com/ivlevAstef/DITranquillity/issues).
### I've found a defect in the documentation, or have an idea for improving it

Please help the library's development by creating [pull requests](https://github.com/ivlevAstef/DITranquillity/pulls).

### Question?

Feel free to ask questions by e-mail: ivlev.stef@gmail.com.
31.663793
341
0.714947
eng_Latn
0.320745
f4b88cd8d2e311277eba6273a83d11a9337fbe86
6,293
md
Markdown
CHANGELOG.md
LaudateCorpus1/oci-compute-jenkins-plugin
2e4586ec6d29c5dd9235d20fa400bf4541eaccce
[ "Apache-2.0" ]
null
null
null
CHANGELOG.md
LaudateCorpus1/oci-compute-jenkins-plugin
2e4586ec6d29c5dd9235d20fa400bf4541eaccce
[ "Apache-2.0" ]
null
null
null
CHANGELOG.md
LaudateCorpus1/oci-compute-jenkins-plugin
2e4586ec6d29c5dd9235d20fa400bf4541eaccce
[ "Apache-2.0" ]
null
null
null
# Change Log All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](http://keepachangelog.com/). ## 1.0.15 - November 2021 ### Added - OCI Java SDK 2.5.1 - Credentials Plugin 2.3.19 - Custom Agent User ### Fixed - Error reporting - print original stack trace. - Details on Instance Principals option. ## 1.0.14 - April 2021 ### Added - OCI Java SDK 1.36.0 ### Fixed - Instance Termination if Jenkins tries to reconnect node while its Stopping - Terminating log string is ambiguous ## 1.0.13 - February 2021 ### Added - OCI Java SDK 1.30.0 - Masking of OCI Credentials - Explicit multiple Instance Provisioning ### Fixed - Stop/Start Node's name not matching Instance's name - Stop/Start Option not affecting already created Nodes ## 1.0.12 - January 2021 ### Added - OCI Java SDK 1.29.0 - Export Jenkins Variables to Init Script - **Memory in GBs** option for Flex Shapes - Network Security Groups Compartment option ### Fixed - java.lang.NumberFormatException: For input string: "" - NPE in SDKBaremetalCloudClient.getTenant() when using Instance Principals - Only the last Tag is set when setting more than one Tag ## 1.0.11 - November 2020 ### Added - OCI Java SDK 1.26.0 - Tags Template option - Custom Instance Name Prefix option - root compartment added to Compartment list - Images added to the Stop/Start filter ## 1.0.10 - September 2020 ### Fixed - Fix java.util.NoSuchElementException: No value present ### 1.0.9 - September 2020 ### Added - Network **Subnet Compartment** field. - **Network Security Groups** field. - **Identical Named Images** checkbox to automatically select the newest Image if multiple Images exist with same name. - **Stop on Idle Timeout** checkbox so an instance is stopped and not terminated when the Idle timeout expires. - Log if an Instance was created via Jenkins Job label or via Jenkins Nodes. ## 1.0.8 - July 2020 ### Added - OCI Java SDK 1.19.2 - Support for E3/Flex Compute Shapes. **Note:** After upgrade please check all OCI Cloud values are OK in Manage Jenkins > Manage Nodes and Clouds > Configure Clouds. Then Click **Save**. - Improvements in Instance Termination behavior. ### Fixed - Fix in Exception handling to avoid dangling instances. ## 1.0.7 - June 2020 ### Added - OCI Java SDK 1.17.4 ## 1.0.6 - September 2019 ### Added - OCI API keys and SSH Keys are now defined in Jenkins Credentials. **Note:** if upgrading you need to update the values in your existing Cloud configuration(s). - Support for Instance Principals and calling services from an Instance. See [Calling Services from an Instance](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm) documentation for additional information. - OCI Java SDK 1.7.0 ## 1.0.5 - July 2019 ### Added - Jenkins Master's IP Address to the Instance Names in OCI i.e. jenkins-**12.191.12.125**-182258c6-7dc7-4d8c-acce-1a292a56cfaa. - Regional subnets in the Virtual Networking service support. - OCI Java SDK 1.5.11 ### Changed - Default values for **Instance Creation Timeout** and **Instance SSH Connection Timeout** to 900. ### Fixed - OCI Slaves removing from Jenkins when Jenkins loses Network Connectivity. ## 1.0.4 - November 2018 ### Fixed - Compartments listed are no longer limited to 25 values. - Child Compartments are now visible in Compartments. ### Added - Template Instance Cap functionality. Instance Cap can now be placed at Template level. 
- "Virtual Cloud Network Compartment" Drop Down in Template configuration to access Network resources in separate compartments. Improves Template loading performance. **Note:** if upgrading from v1.0.3 (or earlier) and the Networks resources is in a separate compartment than the default Compartment, you may have to update the values in your existing Template configuration. ## 1.0.3 - October 2018 ### Fixed - Fix "I/O error in channel oci-compute" java.io.EOFException severe messages in Jenkins log. - Fix issue where some values fail due to OCI API limit being exceeded with large number of Templates. ### Changed - Plugin Description seen in Plugin's Available screen. ### Added - "Max number of async threads" Field in Cloud configuration. Allows user to specify the max number of async threads to use when loading Templates configuration. - "Image Compartment" Drop Down in Template configuration for images in separate compartments. **Note:** if upgrading from v1.0.2 (or earlier) and the Images are in a separate compartment than the default Compartment, you may have to update the values in your existing Template configuration. ## 1.0.2 - June 2018 ### Fixed - Instance cap can no longer be exceeded - Fix error on Node Configuration Screen ### Changed - Subnets now filtering by Availability Domain - Use Jenkins HTTP proxy configuration for OCI API calls - Prevent termination of temporarily offline Agents ### Added - Faster loading of Cloud and Template configuration options in Jenkins Configure screen - Better error description for remote machine with no Java installed - "Name" and "Number of Executors" reconfiguration options in the Nodes > Configure Screen ## 1.0.1 - April 2018 ### Fixed - Idle Termination Minutes. 0 now working as expected and Instance will not Terminate. - Fixed broken links in Plugin Help options. - Fixed "unexpected stream termination" issue which removes HTTP Proxy for ssh connection to agents. - ssh credentials are now encrypted in Jenkins config file. ### Changed - Shorten Compartment Drop-Down names and removed bloated bracket content. ### Added - Ability to access Images, Virtual Cloud Network, and Subnet items from separate Compartments. - Checkbox to attach Public IP to Instances. If this option is unchecked, only the private IP is assigned. - Checkbox to use Public IP to ssh to instances. If this Option is unchecked, the Plugin will connect to the private IP of the instance. ## 1.0.0 - December 2017 ### Added - Initial Release - Support added for OCI resource allocation via Jenkins plugin
31.623116
376
0.730335
eng_Latn
0.956221
f4b8da09e068b1fc84fe6881d04272aeac31f620
7,225
md
Markdown
help/target-home.md
shashanknigam01/target.en
e736daf6f76ab02b0b845941f31934e90386efbf
[ "MIT" ]
null
null
null
help/target-home.md
shashanknigam01/target.en
e736daf6f76ab02b0b845941f31934e90386efbf
[ "MIT" ]
null
null
null
help/target-home.md
shashanknigam01/target.en
e736daf6f76ab02b0b845941f31934e90386efbf
[ "MIT" ]
null
null
null
--- keywords: Target;home;popular topics;adobe target;standard;premium;target documentation;adobe target documentation title: Adobe Target Guide description: View the home page of the Target Guide. seo-description: Adobe Target is the Adobe Experience Cloud solution that provides everything you need to tailor and personalize your customers' experience so you can maximize revenue on your web and mobile sites, apps, social media, and other digital channels. feature: landing --- # [!DNL Adobe Target] Guide ![banner](assets/target-home-banner-simple.png) [!DNL Adobe Target] is the [!DNL Adobe Experience Cloud] solution that provides everything you need to tailor and personalize your customers' experience so you can maximize revenue on your web and mobile sites, apps, social media, and other digital channels. **Last Updated: November 10, 2020 ( [See What Changed](r-release-notes/doc-change.md) )** >[!NOTE] > >**Adobe Again Named a Leader in Gartner Magic Quadrant for Personalization Engines** > >Adobe was once again named a Leader in the third-annual Gartner Magic Quadrant for Personalization Engines, 2020 report. The Gartner Magic Quadrant for Personalization Engines evaluated vendors across 15 criteria that fall into two categories: completeness of vision and ability to execute. [Read about it on The Adobe Blog](https://theblog.adobe.com/adobe-again-named-leader-in-gartner-magic-quadrant-for-personalization-engines/). The following sections point you to useful links in this guide, arranged by intended audience based on typical job functions: - [All Target users](#all) - [Marketers](#marketers) - [Developers](#developers) - [Target and Adobe Experience Cloud admins](#admins) - [Analysts](#analysts) - [QA engineers](#qa) ## All [!DNL Target] users {#all} Marketers, developers, administrators, analysts, and quality assurance engineers. - [Target release notes](r-release-notes/release-notes.md): Contains information about the current release, information about known issues that affect [!DNL Target], a list of important changes to this documentation, and an archive of past release notes. - [Introduction to Target](c-intro/intro.md): Explains the core concepts of the [!DNL Target] solution. - Integrate Target with the Adobe Experience Cloud: Explains how to integrate [!DNL Target] with other [!DNL Experience Cloud] solutions, including [Analytics for Target](/help/c-integrating-target-with-mac/a4t/a4t.md) (A4T), [Experience Cloud Audiences](/help/c-integrating-target-with-mac/mmp.md), [Adobe Campaign](/help/c-integrating-target-with-mac/campaign-and-target.md), and the [Experience Cloud Device Co-op](/help/c-integrating-target-with-mac/experience-cloud-device-co-op.md). - [Adobe Target Tutorials](https://experienceleague.adobe.com/docs/target-learn/tutorials/overview.html): Provides tutorials and videos to help you get the most out of Target. - [Troubleshooting Target](r-troubleshooting-target/troubleshooting-target.md): Provides links to troubleshooting information contained in this guide, including information about the character limits and other limits (offer size, audiences, profiles, values, parameters, etc.) that affect activities and other elements in [!DNL Target]. - [Target for mobile apps](c-target-mobile-app/target-mobile-app.md): Explains how [!DNL Target] can be used for mobile app optimization and personalization. 
- [Resources and contact information](cmp-resources-and-contact-information.md): Provides information about additional resources to help you learn about [!DNL Target] features and how to contact [!DNL Adobe] should you need help. ## Marketers {#marketers} - [Activities](c-activities/activities.md): Explains how to set up, manage, and QA [!DNL Target] activities. - [Audiences](c-target/target.md): Explains how to determine who will see content and experiences in targeted activities. - [Experiences and offers](c-experiences/experiences.md): Explains how to specify which content displays when a visitor meets the audience criteria for an activity. - [Recommendations](c-recommendations/recommendations.md): Explains how [!DNL Recommendations] activities automatically display products or content that might interest your customers based on previous user activity or other algorithms. ## Developers {#developers} - [Implement Target](c-implementing-target/implementing-target.md): Explains how to implement [!DNL Target] on web sites, mobile apps, Single-Page Apps (SPAs), and iOT/OTT platforms. - [Target Server-Side APIs](https://developers.adobetarget.com/api/delivery-api/): Describes the resources that make up the Adobe Target Delivery API. - [Target NodeJS SDK](https://github.com/adobe/target-nodejs-sdk): Explains how to deploy Target server-side. This Node.js SDK helps you easily integrate Target with other Adobe Experience Cloud solutions. - [Target Java SDK](https://github.com/adobe/target-java-sdk): Explains how to deploy Target server-side. This Java SDK helps you easily integrate Target with other Adobe Experience Cloud solutions. - [Target Recommendations API](https://developers.adobetarget.com/api/recommendations/): Describes the resources that make up the official Adobe Target Recommendations API. - [Target Adobe.IO documentation](http://developers.adobetarget.com/api/#introduction): Describes how to use Target’s Admin and Profile REST APIs that use the Adobe.IO integration to manage activities, audiences, offers, properties, reports, mboxes, environments, and profiles. ## Target and Adobe Experience Cloud admins {#admins} - [Administer Target](administrating-target/administrating-target.md): Explains how to add users and configure your [!DNL Target] account. ## Analysts {#analysts} - [Audiences](c-target/target.md): Explains how to determine who will see content and experiences in targeted activities. - [Reports](c-reports/reports.md): Explains how to interpret the performance of your activities. ## QA engineers {#qa} - [Activities](c-activities/activities.md): Explains how to set up, manage, and QA [!DNL Target] activities. 
## Additional Resources {#additional} | Adobe [!DNL Target] solutions help | [!DNL Adobe Experience Cloud] resources | |--- |--- | |<ul><li>[Adobe Target Learn & Support](https://helpx.adobe.com/support/target.html)</li><li>[Premium Recommendations](c-recommendations/recommendations.md)</li><li>[Adobe Recommendations Classic](/help/assets/adobe-recommendations-classic.pdf)</li><li>[Search&Promote](https://experienceleague.adobe.com/docs/search-promote/using/sp-home.html)</li><li>[Target API Documentation](c-implementing-target/c-api-and-sdk-overview/api-and-sdk-overview.md)</li></ul>|<ul><li>[Target Community Forum](https://forums.adobe.com/community/experience-cloud/marketing-cloud/target)</li><li>[Experience Cloud Release Notes](https://experienceleague.adobe.com/docs/release-notes/experience-cloud/current.html)</li><li>[Experience Cloud Help Home](https://helpx.adobe.com/support/experience-cloud.html)</li><li>[Adobe Experience Cloud Documentation](https://experienceleague.adobe.com/docs/experience-cloud/user-guides/home.html)</li><li>[Adobe Training and Tutorials](https://helpx.adobe.com/learning.html?promoid=KAUDK)</li></ul>||
92.628205
1,017
0.784498
eng_Latn
0.816937
f4b8e3899ab17bb81607c39e8ae0e6c28f1a44c1
5,926
md
Markdown
docs/windows/how-to-handle-events-using-wrl.md
manbearian/cpp-docs
a5916b48541f804a79891ff04e246628b5f9a24a
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/windows/how-to-handle-events-using-wrl.md
manbearian/cpp-docs
a5916b48541f804a79891ff04e246628b5f9a24a
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/windows/how-to-handle-events-using-wrl.md
manbearian/cpp-docs
a5916b48541f804a79891ff04e246628b5f9a24a
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: "How to: Handle Events Using WRL | Microsoft Docs"
ms.custom: ""
ms.date: "11/04/2016"
ms.reviewer: ""
ms.suite: ""
ms.technology: ["cpp-windows"]
ms.tgt_pltfrm: ""
ms.topic: "reference"
dev_langs: ["C++"]
ms.assetid: 1c77543f-7b0c-4a94-93bf-e3225885ed76
caps.latest.revision: 10
author: "mikeblome"
ms.author: "mblome"
manager: "ghogen"
ms.workload: ["cplusplus", "uwp"]
---
# How to: Handle Events Using WRL

This document shows how to use the Windows Runtime C++ Template Library (WRL) to subscribe to and handle the events of a Windows Runtime object. For a more basic example that creates an instance of a Windows Runtime component and retrieves a property value, see [How to: Activate and Use a Windows Runtime Component](../windows/how-to-activate-and-use-a-windows-runtime-component-using-wrl.md).

## Subscribing to and Handling Events

The following steps start an `ABI::Windows::Devices::Enumeration::IDeviceWatcher` object and use event handlers to monitor progress. The `IDeviceWatcher` interface enables you to enumerate devices asynchronously, or in the background, and receive notification when devices are added, removed, or changed. The [Callback](../windows/callback-function-windows-runtime-cpp-template-library.md) function is an important part of this example because it enables the example to specify event handlers that process the results of the background operation. The complete example follows.

> [!WARNING]
> Although you typically use the Windows Runtime C++ Template Library in a Universal Windows Platform app, this example uses a console app for illustration. Functions such as `wprintf_s` are not available from a Universal Windows Platform app. For more information about the types and functions that you can use in a Universal Windows Platform app, see [CRT functions not supported with /ZW](http://msdn.microsoft.com/library/windows/apps/jj606124.aspx) and [Win32 and COM for Windows Store apps](http://msdn.microsoft.com/library/windows/apps/br205757.aspx).

1. Include (`#include`) any required Windows Runtime, Windows Runtime C++ Template Library, or C++ Standard Library headers.

   [!code-cpp[wrl-consume-event#2](../windows/codesnippet/CPP/how-to-handle-events-using-wrl_1.cpp)]

   Windows.Devices.Enumeration.h declares the types that are required to enumerate devices. We recommend that you use the `using namespace` directive in your .cpp file to make the code more readable.

2. Declare the local variables for the app. This example holds a count of the enumerated devices, and registration tokens that enable it to later unsubscribe from the events.

   [!code-cpp[wrl-consume-event#7](../windows/codesnippet/CPP/how-to-handle-events-using-wrl_2.cpp)]

3. Initialize the Windows Runtime.

   [!code-cpp[wrl-consume-event#3](../windows/codesnippet/CPP/how-to-handle-events-using-wrl_3.cpp)]

4. Create an [Event](../windows/event-class-windows-runtime-cpp-template-library.md) object that synchronizes the completion of the enumeration process to the main app.

   [!code-cpp[wrl-consume-event#4](../windows/codesnippet/CPP/how-to-handle-events-using-wrl_4.cpp)]

   > [!NOTE]
   > This event is for demonstration only as part of a console app. This example uses the event to ensure that an async operation completes before the app exits. In most apps, you typically don't wait for async operations to complete.

5. Create an activation factory for the `IDeviceWatcher` interface.

   [!code-cpp[wrl-consume-event#5](../windows/codesnippet/CPP/how-to-handle-events-using-wrl_5.cpp)]

   The Windows Runtime uses fully-qualified names to identify types.
   The `RuntimeClass_Windows_Devices_Enumeration_DeviceInformation` parameter is a string that's provided by the Windows Runtime and contains the required runtime class name.

6. Create the `IDeviceWatcher` object.

   [!code-cpp[wrl-consume-event#6](../windows/codesnippet/CPP/how-to-handle-events-using-wrl_6.cpp)]

7. Use the `Callback` function to subscribe to the `Added`, `EnumerationCompleted`, and `Stopped` events.

   [!code-cpp[wrl-consume-event#8](../windows/codesnippet/CPP/how-to-handle-events-using-wrl_7.cpp)]

   The `Added` event handler increments the count of enumerated devices. It stops the enumeration process after ten devices are found.

   The `Stopped` event handler removes the event handlers and sets the completion event.

   The `EnumerationCompleted` event handler stops the enumeration process. We handle this event in case there are fewer than ten devices.

   > [!TIP]
   > This example uses a lambda expression to define the callbacks. You can also use function objects (functors), function pointers, or [std::function](../standard-library/function-class.md) objects. For more information about lambda expressions, see [Lambda Expressions](../cpp/lambda-expressions-in-cpp.md).

8. Start the enumeration process.

   [!code-cpp[wrl-consume-event#9](../windows/codesnippet/CPP/how-to-handle-events-using-wrl_8.cpp)]

9. Wait for the enumeration process to complete and then print a message. All `ComPtr` and RAII objects leave scope and are released automatically.

   [!code-cpp[wrl-consume-event#10](../windows/codesnippet/CPP/how-to-handle-events-using-wrl_9.cpp)]

Here is the complete example:

[!code-cpp[wrl-consume-event#1](../windows/codesnippet/CPP/how-to-handle-events-using-wrl_10.cpp)]

## Compiling the Code

To compile the code, copy it and then paste it in a Visual Studio project, or paste it in a file that is named `wrl-consume-events.cpp` and then run the following command in a Visual Studio Command Prompt window.

**cl.exe wrl-consume-events.cpp runtimeobject.lib**

## See Also

[Windows Runtime C++ Template Library (WRL)](../windows/windows-runtime-cpp-template-library-wrl.md)
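The numbered steps above reference snippets that are included from a separate sample file. As a minimal sketch of the `Callback` subscription pattern from step 7, assuming a `ComPtr<IDeviceWatcher>` named `watcher` has already been created through the activation factory (error handling omitted), the subscription might look like this:

```cpp
// Illustrative sketch only, not the page's included sample.
#include <wrl.h>
#include <wrl/event.h>
#include <windows.devices.enumeration.h>

using namespace ABI::Windows::Devices::Enumeration;
using namespace ABI::Windows::Foundation;
using namespace Microsoft::WRL;

HRESULT SubscribeToAdded(const ComPtr<IDeviceWatcher>& watcher,
                         EventRegistrationToken* addedToken)
{
    // The lambda runs each time the watcher reports a newly enumerated device.
    return watcher->add_Added(
        Callback<ITypedEventHandler<DeviceWatcher*, DeviceInformation*>>(
            [](IDeviceWatcher* /*sender*/, IDeviceInformation* /*info*/) -> HRESULT
            {
                // Count or inspect the device here.
                return S_OK;
            }).Get(),
        addedToken);
}
```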
63.72043
567
0.746372
eng_Latn
0.961245
f4b8f8e082cf911389f9bba74a9da7a1d54d3216
681
md
Markdown
docs/visual-basic/language-reference/error-messages/unable-to-find-required-file-filename.md
zhao365845726/docs.zh-cn
fadaf15a3905ab3b2ef5d0d4920c37ff0e19a83a
[ "CC-BY-4.0", "MIT" ]
1
2020-04-14T18:16:06.000Z
2020-04-14T18:16:06.000Z
docs/visual-basic/language-reference/error-messages/unable-to-find-required-file-filename.md
zhao365845726/docs.zh-cn
fadaf15a3905ab3b2ef5d0d4920c37ff0e19a83a
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/visual-basic/language-reference/error-messages/unable-to-find-required-file-filename.md
zhao365845726/docs.zh-cn
fadaf15a3905ab3b2ef5d0d4920c37ff0e19a83a
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Unable to find required file '<filename>'
ms.date: 07/20/2015
f1_keywords:
- bc30655
- vbc30655
helpviewer_keywords:
- BC30655
ms.assetid: 756db378-e758-48a9-88ff-496bc55bc0b6
ms.openlocfilehash: 1abb420c997afbc69a652502801d91043eb48757
ms.sourcegitcommit: 5a28f8eb071fcc09b045b0c4ae4b96898673192e
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 10/31/2019
ms.locfileid: "73197252"
---
# <a name="unable-to-find-required-file-filename"></a>Unable to find required file "\<filename >"

A file that Visual Studio requires is missing or corrupted.

**Error ID:** BC30655

## <a name="to-correct-this-error"></a>To correct this error

- Reinstall Visual Studio.

## <a name="see-also"></a>See also

- [Talk to us](/visualstudio/ide/feedback-options)
23.482759
77
0.741557
kor_Hang
0.070688
f4b9aab3a1ac5f7faa368b11edc1b7ae0f7624c2
3,017
md
Markdown
sdk-api-src/content/gdiplusbrush/nf-gdiplusbrush-solidbrush-getcolor.md
amorilio/sdk-api
54ef418912715bd7df39c2561fbc3d1dcef37d7e
[ "CC-BY-4.0", "MIT" ]
null
null
null
sdk-api-src/content/gdiplusbrush/nf-gdiplusbrush-solidbrush-getcolor.md
amorilio/sdk-api
54ef418912715bd7df39c2561fbc3d1dcef37d7e
[ "CC-BY-4.0", "MIT" ]
null
null
null
sdk-api-src/content/gdiplusbrush/nf-gdiplusbrush-solidbrush-getcolor.md
amorilio/sdk-api
54ef418912715bd7df39c2561fbc3d1dcef37d7e
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- UID: NF:gdiplusbrush.SolidBrush.GetColor title: SolidBrush::GetColor (gdiplusbrush.h) description: The SolidBrush::GetColor method gets the color of this solid brush. helpviewer_keywords: ["GetColor","GetColor method [GDI+]","GetColor method [GDI+]","SolidBrush class","SolidBrush class [GDI+]","GetColor method","SolidBrush.GetColor","SolidBrush::GetColor","_gdiplus_CLASS_SolidBrush_GetColor_color_","gdiplus._gdiplus_CLASS_SolidBrush_GetColor_color_"] old-location: gdiplus\_gdiplus_CLASS_SolidBrush_GetColor_color_.htm tech.root: gdiplus ms.assetid: VS|gdicpp|~\gdiplus\gdiplusreference\classes\solidbrushclass\solidbrushmethods\getcolor_82color.htm ms.date: 12/05/2018 ms.keywords: GetColor, GetColor method [GDI+], GetColor method [GDI+],SolidBrush class, SolidBrush class [GDI+],GetColor method, SolidBrush.GetColor, SolidBrush::GetColor, _gdiplus_CLASS_SolidBrush_GetColor_color_, gdiplus._gdiplus_CLASS_SolidBrush_GetColor_color_ req.header: gdiplusbrush.h req.include-header: Gdiplus.h req.target-type: Windows req.target-min-winverclnt: Windows XP, Windows 2000 Professional [desktop apps only] req.target-min-winversvr: Windows 2000 Server [desktop apps only] req.kmdf-ver: req.umdf-ver: req.ddi-compliance: req.unicode-ansi: req.idl: req.max-support: req.namespace: req.assembly: req.type-library: req.lib: Gdiplus.lib req.dll: Gdiplus.dll req.irql: targetos: Windows req.typenames: req.redist: req.product: GDI+ 1.0 ms.custom: 19H1 f1_keywords: - SolidBrush::GetColor - gdiplusbrush/SolidBrush::GetColor dev_langs: - c++ topic_type: - APIRef - kbSyntax api_type: - COM api_location: - Gdiplus.dll api_name: - SolidBrush.GetColor --- # SolidBrush::GetColor ## -description The <b>SolidBrush::GetColor</b> method gets the color of this solid brush. ## -parameters ### -param color [out] Type: <b><a href="/windows/desktop/api/gdipluscolor/nl-gdipluscolor-color">Color</a>*</b> Pointer to a <a href="/windows/desktop/api/gdipluscolor/nl-gdipluscolor-color">Color</a> object that receives the color of this solid brush. ## -returns Type: <b><a href="/windows/desktop/api/gdiplustypes/ne-gdiplustypes-status">Status</a></b> If the method succeeds, it returns Ok, which is an element of the <a href="/windows/desktop/api/gdiplustypes/ne-gdiplustypes-status">Status</a> enumeration. If the method fails, it returns one of the other elements of the <a href="/windows/desktop/api/gdiplustypes/ne-gdiplustypes-status">Status</a> enumeration. ## -see-also <a href="/windows/desktop/gdiplus/-gdiplus-brushes-and-filled-shapes-about">Brushes and Filled Shapes</a> <a href="/windows/desktop/api/gdipluscolor/nl-gdipluscolor-color">Color</a> <a href="/windows/desktop/gdiplus/-gdiplus-filling-a-shape-with-a-solid-color-use">Filling a Shape with a Solid Color</a> <a href="/windows/desktop/api/gdiplusbrush/nl-gdiplusbrush-solidbrush">SolidBrush</a> <a href="/windows/desktop/api/gdiplusbrush/nf-gdiplusbrush-solidbrush-setcolor">SolidBrush::SetColor</a>
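A brief usage sketch, not part of the original reference page, assuming GDI+ has already been initialized with `GdiplusStartup`:

```cpp
#include <windows.h>
#include <gdiplus.h>

void ExampleGetColor()
{
    Gdiplus::SolidBrush brush(Gdiplus::Color(255, 0, 0, 255)); // opaque blue
    Gdiplus::Color color;

    // Retrieve the brush color; Ok indicates success.
    Gdiplus::Status status = brush.GetColor(&color);
    if (status == Gdiplus::Ok)
    {
        BYTE r = color.GetRed();   // 0
        BYTE g = color.GetGreen(); // 0
        BYTE b = color.GetBlue();  // 255
        (void)r; (void)g; (void)b;
    }
}
```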
32.793478
287
0.772953
kor_Hang
0.203987
f4baafe8ea10c2b9940751686c5ee54d4ee75c80
1,597
md
Markdown
windows-driver-docs-pr/storage/handling-srb-function-reset-device.md
msarcletti/windows-driver-docs
177447a51fe3b95ccba504c0e91f9f3889cab16c
[ "CC-BY-4.0", "MIT" ]
1
2020-02-26T02:51:21.000Z
2020-02-26T02:51:21.000Z
windows-driver-docs-pr/storage/handling-srb-function-reset-device.md
msarcletti/windows-driver-docs
177447a51fe3b95ccba504c0e91f9f3889cab16c
[ "CC-BY-4.0", "MIT" ]
1
2021-01-21T17:24:17.000Z
2021-01-21T17:24:17.000Z
windows-driver-docs-pr/storage/handling-srb-function-reset-device.md
msarcletti/windows-driver-docs
177447a51fe3b95ccba504c0e91f9f3889cab16c
[ "CC-BY-4.0", "MIT" ]
2
2020-08-11T00:01:58.000Z
2021-11-24T02:51:30.000Z
--- title: Handling SRB_FUNCTION_RESET_DEVICE description: Handling SRB_FUNCTION_RESET_DEVICE ms.assetid: d95bca21-306e-4598-a8c6-04990885e23d keywords: - SCSI miniport drivers WDK storage , HwScsiStartIo - HwScsiStartIo - SRB_FUNCTION_RESET_DEVICE ms.date: 04/20/2017 ms.localizationpriority: medium --- # Handling SRB\_FUNCTION\_RESET\_DEVICE ## <span id="ddk_handling_srb_function_reset_device_kg"></span><span id="DDK_HANDLING_SRB_FUNCTION_RESET_DEVICE_KG"></span> The ScsiPort driver no longer sends this SRB to its miniport drivers. Only Storport miniport drivers might have to handle this SRB. If the HBA has the ability to reset a target device, as indicated when [*HwScsiFindAdapter*](https://docs.microsoft.com/previous-versions/windows/hardware/drivers/ff557300(v=vs.85)) sets up the [**PORT\_CONFIGURATION\_INFORMATION**](https://docs.microsoft.com/windows-hardware/drivers/ddi/srb/ns-srb-_port_configuration_information), the port driver requests a device reset when an uncompleted request times out. The system port driver calls the miniport driver's *HwScsiStartIo* routine with an SRB in which the **Function** member is set to SRB\_FUNCTION\_RESET\_DEVICE. The miniport driver is responsible for completing any requests that are uncompleted on the device when it receives a reset-device request. If the device reset fails or times out, or if the time-out occurs while the port driver is waiting for a **NextRequest** notification, the port driver calls [*HwScsiResetBus*](https://docs.microsoft.com/previous-versions/windows/hardware/drivers/ff557318(v=vs.85)).
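As a hypothetical illustration only (the actual reset sequence is hardware-specific and the names below are simplified), a Storport miniport's *HwStorStartIo* routine might route this SRB roughly as follows:

```c
// Simplified, illustrative fragment of a Storport miniport's HwStorStartIo routine.
BOOLEAN HwStorStartIo(_In_ PVOID DeviceExtension, _In_ PSCSI_REQUEST_BLOCK Srb)
{
    switch (Srb->Function) {
    case SRB_FUNCTION_RESET_DEVICE:
        // 1. Issue the target reset to the HBA here.
        // 2. Complete or abort any requests still outstanding on the device.
        // 3. Complete the reset SRB itself.
        Srb->SrbStatus = SRB_STATUS_SUCCESS;
        StorPortNotification(RequestComplete, DeviceExtension, Srb);
        return TRUE;

    default:
        Srb->SrbStatus = SRB_STATUS_INVALID_REQUEST;
        StorPortNotification(RequestComplete, DeviceExtension, Srb);
        return TRUE;
    }
}
```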
46.970588
412
0.802755
eng_Latn
0.871349
f4baf55dc391b46f44427e47382d1131fae0bc4c
1,960
md
Markdown
ChangeLog.md
webmailcontatos/dbunit
419808b43f468f10964fd1cb8d5cb12f1608c0f9
[ "BSD-3-Clause" ]
null
null
null
ChangeLog.md
webmailcontatos/dbunit
419808b43f468f10964fd1cb8d5cb12f1608c0f9
[ "BSD-3-Clause" ]
null
null
null
ChangeLog.md
webmailcontatos/dbunit
419808b43f468f10964fd1cb8d5cb12f1608c0f9
[ "BSD-3-Clause" ]
null
null
null
# Changes in DbUnit

All notable changes to DbUnit are documented in this file using the [Keep a CHANGELOG](http://keepachangelog.com/) principles.

## [5.0.0] - 2019-03-26

* DbUnit is now compatible with, and requires, PHPUnit 8.0

## [4.0.0] - 2018-02-07

### Removed

* This extension now requires PHPUnit 7
* This extension is no longer supported on PHP 7.0

## [3.0.3] - 2018-01-23

### Fixed

* Fixed [#191](https://github.com/sebastianbergmann/dbunit/pull/191): MySQL's `FOREIGN_KEY_CHECKS` setting gets lost
* Fixed [#192](https://github.com/sebastianbergmann/dbunit/pull/192): Error message for wrong fixture is not good enough
* Fixed [#201](https://github.com/sebastianbergmann/dbunit/pull/201): `TestCaseTrait::tearDown()` does not call parent's `tearDown()`
* Fixed [#204](https://github.com/sebastianbergmann/dbunit/pull/204): `DefaultConnection::close()` does not close database connection
* Fixed [#205](https://github.com/sebastianbergmann/dbunit/pull/205): Metadata for empty SQLite table is not handled correctly

## [3.0.2] - 2017-11-18

### Changed

* This extension is now compatible with Symfony Console 4

## [3.0.1] - 2017-10-19

### Fixed

* Fixed [#193](https://github.com/sebastianbergmann/dbunit/pull/193): Multibyte values are not displayed correctly
* Fixed [#195](https://github.com/sebastianbergmann/dbunit/issues/195): Empty tables are not displayed correctly

## [3.0.0] - 2017-02-03

### Changed

* DbUnit's units of code are now namespaced
* DbUnit is now compatible with, and requires, PHPUnit 6.0

### Removed

* The `dbunit` CLI tool was removed

[5.0.0]: https://github.com/sebastianbergmann/dbunit/compare/4.0.0...5.0.0
[4.0.0]: https://github.com/sebastianbergmann/dbunit/compare/3.0.3...4.0.0
[3.0.3]: https://github.com/sebastianbergmann/dbunit/compare/3.0.2...3.0.3
[3.0.2]: https://github.com/sebastianbergmann/dbunit/compare/3.0.1...3.0.2
[3.0.1]: https://github.com/sebastianbergmann/dbunit/compare/3.0.0...3.0.1
[3.0.0]: https://github.com/sebastianbergmann/dbunit/compare/2.0...3.0.0
36.296296
133
0.72449
eng_Latn
0.47765
f4bc1509f3b371106c5d680bbf4964d90f8aabda
3,699
md
Markdown
docs/profiling/call-tree-view.md
tommorris/visualstudio-docs.es-es
651470ca234bb6db8391ae9f50ff23485896393c
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/profiling/call-tree-view.md
tommorris/visualstudio-docs.es-es
651470ca234bb6db8391ae9f50ff23485896393c
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/profiling/call-tree-view.md
tommorris/visualstudio-docs.es-es
651470ca234bb6db8391ae9f50ff23485896393c
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Call Tree View | Microsoft Docs
ms.custom: ''
ms.date: 11/04/2016
ms.technology: vs-ide-debug
ms.topic: conceptual
f1_keywords:
- vs.performance.view.calltree
helpviewer_keywords:
- Call Tree view
- profiling tools reports, Call Tree view
- performance reports, Call Tree view
- profiling tools, Call Tree view
ms.assetid: b2dbc033-bf95-4d10-8e51-f9462979133e
author: mikejo5000
ms.author: mikejo
manager: douge
ms.workload:
- multiple
ms.openlocfilehash: f8973f1536ded24d2fd327aa3eac1ceee795cb54
ms.sourcegitcommit: 209c2c068ff0975994ed892b62aa9b834a7f6077
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 05/17/2018
ms.locfileid: "34262452"
---
# <a name="call-tree-view"></a>Call Tree View

The Call Tree view displays the function execution paths that were traversed in the profiled application. The root of the tree is the entry point of the application or component. Each function node lists all the functions it called, together with the performance data for those function calls.

The Call Tree view can also expand and highlight the execution path of the function that consumed the most time or was sampled most often. To display the most expensive path, right-click the function and then click **Expand Hot Path**.

Each process in the profiling run is displayed as a root node. To set the starting node of the Call Tree view, right-click the node that you want to set as the starting node and select **Set Root**.

Setting the root node removes all other entries from the view except the subtree of the selected node. You can reset the root node back to the node you were viewing: in the Call Tree view window, right-click and select **Reset Root**.

The Call Tree view can be customized to add or remove columns. Right-click the **column name title bar** and select **Add/Remove Columns**.

The Call Tree view can be configured for noise reduction by limiting the amount of data that is presented. With noise reduction, performance issues stand out more in the view, and when performance issues are easy to distinguish, analysis becomes simpler. For more information, see [How to: Configure Noise Reduction in Report Views](../profiling/how-to-configure-noise-reduction-in-report-views.md).

> [!NOTE]
> If noise reduction is configured to display a warning when it is enabled, an information bar is shown in the report.

For more information about the column definitions in the Call Tree view, see the following:

[Call Tree View](../profiling/call-tree-view-sampling-data.md)

[Call Tree View](../profiling/call-tree-view-instrumentation-data.md)

[Call Tree View - Sampling](../profiling/call-tree-view-dotnet-memory-sampling-data.md)

[Call Tree View](../profiling/call-tree-view-contention-data.md)

## <a name="see-also"></a>See Also

[Performance Report Views](../profiling/performance-report-views.md)
[Understanding Instrumentation Data Values](../profiling/understanding-instrumentation-data-values.md)
[Understanding Sampling Data Values](../profiling/understanding-sampling-data-values.md)
66.053571
472
0.773182
spa_Latn
0.992792