Windows User Guide
Connect dcrd to the Decred network
Step One
Note that all dcrd, dcrwallet, and dcrctl commands must be executed in the directory where your Decred files are!

Start dcrd:

```
C:\Decred> dcrd -u <username> -P <password>
```

Start dcrwallet:

```
C:\Decred> dcrwallet -u <username> -P <password>
```
Step Two
Generate a new wallet address:

```
C:\Decred> dcrctl -u <username> -P <password> --wallet getnewaddress
```
Copy the new address (output from the last command). Close/stop dcrd and dcrwallet by pressing ctrl+c in each window.
Step Three
Restart dcrd using the command:

```
C:\Decred> dcrd --miningaddr <new address from step two or your web client wallet address>
```
Step One

Right-click on start_local.bat and click Edit. Change the username and password to match the credentials used in Step One of the previous section. Save and close start_local.bat. For reference, here is the command in start_local.bat:

```
C:\Decred> cgminer --blake256 -o <dcrd RPC address, e.g. http://127.0.0.1:9109> -u <username> -p <password> --cert "%LOCALAPPDATA%\Dcrd\rpc.cert"
```
Step Two
If dcrd is not finished syncing to the blockchain, wait for it to finish, then proceed to the next step. When it is finished, it will show:

```
[INF] BMGR: Processed 1 block in the last 5m34.49s
```
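If you want to script the wait instead of watching the window, one illustrative way to spot that progress message in dcrd's output (the pattern below is an assumption based only on the example line above):

```python
import re

# Matches lines like: [INF] BMGR: Processed 1 block in the last 5m34.49s
SYNC_MSG = re.compile(r"\[INF\] BMGR: Processed (\d+) blocks? in the last (\S+)")

def blocks_processed(line):
    """Return the block count from a BMGR progress line, or None if no match."""
    m = SYNC_MSG.search(line)
    return int(m.group(1)) if m else None

print(blocks_processed("[INF] BMGR: Processed 1 block in the last 5m34.49s"))
# → 1
```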
Step Three
Double-click on start_local.bat; cgminer should open. (Alternatively, you can start the miner directly by double-clicking cgminer.exe.) This concludes the basic solo cgminer setup guide. For more information on cgminer usage and detailed explanations of program functions, refer to the official cgminer README.
Overview
Thank you for choosing Telerik RadBreadcrumb!

- Powerful DataBinding to Objects, Collections, XML and WCF services - Binding Telerik RadBreadcrumb is as simple as setting a single property. The binding sources which the breadcrumb supports include Objects, XML, and WCF services.
- Auto-complete - The Telerik RadBreadcrumb control features an auto-complete searching mechanism.
- TextMode and UI Navigation - The Telerik RadBreadcrumb control further enhances your web application’s capabilities through its rich navigation functionality. Your users can navigate in the control by entering a destination in a text mode or by choosing an item from the RadBreadcrumbItem's children collection.
- Styling and Appearance - The RadBreadcrumb control ships with several pre-defined themes that can be used to style it. Furthermore, Telerik's unique style building mechanism allows you to change the skin’s color scheme with just a few clicks.
- Keyboard Support - Navigate through the nodes of the breadcrumb without using the mouse. The keyboard can replace the mouse by allowing you to perform navigation, expanding and selecting the breadcrumb items.
- Item Images - RadBreadcrumb gives you the ability to define images for each item.
- Enhanced Routed Events Framework - To help your code become even more elegant and concise, the control exposes its events as routed events.
Smart Tags
In R1 2017 we introduced the Smart Tags feature, which provides easy access to the most important documentation articles related to each control. Though the feature is especially useful for users who have recently started using Telerik controls, it can also come in handy for experienced users, as it provides easy navigation to the control's documentation and demos.

As shown in Figure 1, when you focus a certain Telerik control in the Visual Studio Designer, the Smart Tag arrow appears in the top-right corner of the control.
Figure 1: Smart Tag icon on RadGridView:
Unfolding the Smart Tag presents you with a list of different useful resources as well as the possibility to search the forum for certain topics. Figure 2 shows the default menu for the RadGridView control. The appearance and resources vary depending on the control.
Figure 2: Available resources for the RadGridView control:
1. On the top navigation bar, click the Scans button. The My Scans page appears.
2. In the left pane, click Target Groups. The Target Groups page appears.
3. In the list of target groups, click the target group that you want to modify. The Edit Target Group page appears, where you can manage the target group settings.
Submitting Data
To submit your test data to Treeherder, you have two options:
1. Using Pulse - This is the newer process, used by Task Cluster.
2. Using the Python client - This is historically how projects and users have submitted data to Treeherder. It requires getting Hawk credentials approved by a Treeherder admin. There is a client library to help make this easier; however, there is no schema to validate the payload against, so using the client to build your payload will help you get it into the accepted form. Your data only goes to the host you send it to, and dev instances cannot subscribe to this data.

For Pulse ingestion, part of the configuration specifies which destinations and projects Treeherder binds routing keys for, for example:

```
"destinations": [
    'treeherder'
],
"projects": [
    'mozilla-inbound._'
],
```
Treeherder will bind to the exchange looking for all combinations of routing keys from the `destinations` and `projects` listed above. For example, with the above config, we will only load jobs with routing keys of `treeherder.mozilla-inbound._`.
If you want all jobs from your exchange to be loaded, you could simplify the config by having values:
```
"destinations": [
    '#'
],
"projects": [
    '#'
],
```
If you want one config to go to Treeherder Staging and a different one to go to Production, please specify that in the bug. You could use the same exchange with different routing keys.
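As a quick illustration of the binding behavior described above, the set of routing keys Treeherder binds is just the cross product of `destinations` and `projects` (a sketch, not Treeherder code; the function name is made up):

```python
from itertools import product

def routing_key_bindings(destinations, projects):
    """Every routing key produced by combining destinations with projects."""
    return [f"{d}.{p}" for d, p in product(destinations, projects)]

# With the example config above, only one binding is created:
print(routing_key_bindings(["treeherder"], ["mozilla-inbound._"]))
# → ['treeherder.mozilla-inbound._']
```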
Using the Python Client
There are two types of data structures you can submit with the Python client: job and resultset collections. The client provides methods for building a data structure that Treeherder will accept. Data structures can be extended with new properties as needed; a minimal validation protocol is applied that confirms the bare minimum parts of the structures are defined.
See the Python client section for how to control which Treeherder instance will be accessed by the client.
Authentication is covered here.
Job Collections
Job collections can contain test results from any kind of test. The revision provided should match the associated revision in the resultset structure. The revision is the top-most revision in the push. The job_guid provided can be any unique string of 50 characters at most. A job collection has the following data structure.
```python
[
    {
        'project': 'mozilla-inbound',
        'revision': '...',
        'product_name': 'spidermonkey',
        'reason': 'scheduler',
        'who': 'spidermonkey_info__mozilla-inbound-warnaserr',
        'desc': 'Linux x86-64 mozilla-inbound spidermonkey_info-warnaserr build',
        'name': 'SpiderMonkey --enable-sm-fail-on-warnings Build',

        # The symbol representing the job displayed in
        # treeherder.allizom.org
        'job_symbol': 'e',

        # The symbol representing the job group in
        # treeherder.allizom.org
        'group_symbol': 'SM',
        'group_name': 'SpiderMonkey',

        'submit_timestamp': 1387221298,
        'start_timestamp': 1387221345,
        'end_timestamp': 1387222817,

        'state': 'completed',
        'result': 'success',

        'machine': 'bld-linux64-ec2-104',
        'build_platform': {
            'platform': 'linux64',
            'os_name': 'linux',
            'architecture': 'x86_64'
        },
        'machine_platform': {
            'platform': 'linux64',
            'os_name': 'linux',
            'architecture': 'x86_64'
        },

        'option_collection': {'opt': True},

        # Jobs can belong to different tiers. Setting the tier here will
        # determine which tier the job belongs to. However, if a job is set
        # as Tier 1 but belongs to the Tier 2 profile on the server, it will
        # still be saved as Tier 2.
        'tier': 2,

        # The ``name`` of the log can be the default of "buildbot_text",
        # however, you can use a custom name. See below.
        'log_references': [
            {
                'url': '...',
                'name': 'buildbot_text'
            }
        ],

        # The artifact can contain any kind of structured data
        # associated with a test.
        'artifacts': [{
            'type': 'json',
            'name': '',
            'blob': { my json content here }
        }],

        # List of job guids that were coalesced to this job
        'coalesced': []
    },
    ...
]
```
See Specifying Custom Log Names for more info.
Usage
If you want to use TreeherderJobCollection to build up the job data structures to send, do something like this:
```python
from thclient import (TreeherderClient, TreeherderClientError,
                      TreeherderJobCollection)

tjc = TreeherderJobCollection()

for data in dataset:

    tj = tjc.get_job()

    tj.add_revision( data['revision'] )
    tj.add_project( data['project'] )
    tj.add_coalesced_guid( data['coalesced'] )
    tj.add_job_guid( data['job_guid'] )
    tj.add_job_name( data['name'] )
    tj.add_job_symbol( data['job_symbol'] )
    tj.add_group_name( data['group_name'] )
    tj.add_group_symbol( data['group_symbol'] )
    tj.add_description( data['desc'] )
    tj.add_product_name( data['product_name'] )
    tj.add_state( data['state'] )
    tj.add_result( data['result'] )
    tj.add_reason( data['reason'] )
    tj.add_who( data['who'] )
    tj.add_tier( 1 )
    tj.add_submit_timestamp( data['submit_timestamp'] )
    tj.add_start_timestamp( data['start_timestamp'] )
    tj.add_end_timestamp( data['end_timestamp'] )
    tj.add_machine( data['machine'] )

    tj.add_build_info(
        data['build']['os_name'],
        data['build']['platform'],
        data['build']['architecture']
    )

    tj.add_machine_info(
        data['machine']['os_name'],
        data['machine']['platform'],
        data['machine']['architecture']
    )

    tj.add_option_collection( data['option_collection'] )

    tj.add_log_reference( 'buildbot_text', data['log_reference'] )

    # data['artifact'] is a list of artifacts
    for artifact_data in data['artifact']:
        tj.add_artifact(
            artifact_data['name'],
            artifact_data['type'],
            artifact_data['blob']
        )

    tjc.add(tj)

client = TreeherderClient(client_id='hawk_id', secret='hawk_secret')
client.post_collection('mozilla-central', tjc)
```
If you don’t want to use TreeherderJobCollection to build up the data structure to send, build the data structures directly and add them to the collection.
```python
from thclient import TreeherderClient, TreeherderJobCollection

tjc = TreeherderJobCollection()

for job in job_data:

    tj = tjc.get_job(job)

    # Add any additional data to tj.data here

    # add job to collection
    tjc.add(tj)

client = TreeherderClient(client_id='hawk_id', secret='hawk_secret')
client.post_collection('mozilla-central', tjc)
```
Job artifacts format
Artifacts can have a name, a type, and a blob. The blob property can contain any valid data structure according to the type attribute. For example, if you use the json type, your blob must be JSON-serializable to be valid. The name attribute can be any arbitrary string identifying the artifact. Here is an example of what a job artifact looks like in the context of a job object:
```python
[
    {
        'project': 'mozilla-inbound',
        'revision_hash': '...',

        # ...
        # other job properties here
        # ...

        'artifacts': [
            {
                "type": "json",
                "name": "my first artifact",
                'blob': {
                    k1: v1,
                    k2: v2,
                    ...
                }
            },
            {
                'type': 'json',
                'name': 'my second artifact',
                'blob': {
                    k1: v1,
                    k2: v2,
                    ...
                }
            }
        ]
    },
    ...
]
```
A special case of job artifact is a “Job Info” artifact. This kind of artifact will be retrieved by the UI and rendered in the job detail panel. This is what a Job Info artifact looks like:
```python
{
    "blob": {
        "job_details": [
            {
                "url": "",
                "value": "website",
                "content_type": "link",
                "title": "Mozilla home page"
            },
            {
                "value": "bar",
                "content_type": "text",
                "title": "Foo"
            },
            {
                "value": "This is <strong>cool</strong>",
                "content_type": "raw_html",
                "title": "Cool title"
            }
        ]
    },
    "type": "json",
    "name": "Job Info"
}
```
All the elements in the job_details attribute of this artifact have a mandatory title attribute and a set of optional attributes depending on content_type. The content_type drives the way this kind of artifact will be rendered. Here are the possible values:

- Text - This is the simplest content type you can render and is the one used by default if the content type specified is not recognised or is missing. This content type renders as:

```
<label>{{title}}</label><span>{{value}}</span>
```

- Link - This content type renders as an anchor html tag with the following format:

```
{{title}}: <a title="{{value}}" href="{{url}}" target="_blank">{{value}}</a>
```

- Raw Html - The last resort for when you need to show some formatted content.
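To make the three content types concrete, here is a small sketch of a renderer that follows the templates above (illustrative only — this is not the actual Treeherder UI code):

```python
from html import escape

def render_detail(detail):
    """Render one job_details entry according to its content_type."""
    title = escape(detail.get("title", ""))
    value = detail.get("value", "")
    ctype = detail.get("content_type", "text")
    if ctype == "link":
        url = detail.get("url", "")
        return (f'{title}: <a title="{escape(value)}" href="{url}" '
                f'target="_blank">{escape(value)}</a>')
    if ctype == "raw_html":
        # Last resort: the value is trusted, pre-formatted HTML and is not escaped.
        return f"<label>{title}</label><span>{value}</span>"
    # "text" is also the fallback for unrecognised or missing content types.
    return f"<label>{title}</label><span>{escape(value)}</span>"

print(render_detail({"title": "Foo", "value": "bar", "content_type": "text"}))
# → <label>Foo</label><span>bar</span>
```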
Some Specific Collection POSTing Rules
Treeherder will detect what data is submitted in the `TreeherderCollection` and generate the necessary artifacts accordingly. The outline below describes what artifacts Treeherder will generate depending on what has been submitted.

See Schema Validation for more info on validating some specialized JSON data.
JobCollections
Via the `/jobs` endpoint:

1. Submit a log URL with no `parse_status`, or with `parse_status` set to "pending"
   - This will generate `text_log_summary` and `Bug suggestions` artifacts
   - Current Buildbot workflow
2. Submit a log URL with `parse_status` set to "parsed" and a `text_log_summary` artifact
   - Will generate a `Bug suggestions` artifact only
   - Desired future state of Task Cluster
3. Submit a log URL with `parse_status` of "parsed", with `text_log_summary` and `Bug suggestions` artifacts
   - Will generate nothing
4. Submit a `text_log_summary` artifact
   - Will generate a `Bug suggestions` artifact if it does not already exist for that job
5. Submit `text_log_summary` and `Bug suggestions` artifacts
   - Will generate nothing
   - This is Treeherder's current internal log parser workflow
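The posting rules above can be summarized as a small decision function (a sketch of the rules as written, not Treeherder's actual implementation):

```python
def artifacts_to_generate(parse_status, submitted):
    """Which artifacts Treeherder generates for a job submission.

    parse_status: None, "pending", or "parsed" (for the submitted log URL).
    submitted: set of artifact names posted along with the job.
    """
    if parse_status in (None, "pending"):
        # Current Buildbot workflow: Treeherder parses the log itself.
        return {"text_log_summary", "Bug suggestions"}
    generated = set()
    if "text_log_summary" in submitted and "Bug suggestions" not in submitted:
        # Treeherder derives bug suggestions from the submitted summary.
        generated.add("Bug suggestions")
    return generated

print(artifacts_to_generate("parsed", {"text_log_summary"}))
# → {'Bug suggestions'}
```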
Specifying Custom Log Names
By default, the Log Viewer expects logs to have the name of `buildbot_text` at this time. However, if you are supplying the `text_log_summary` artifact yourself (rather than having it generated for you) you can specify a custom log name. You must specify the name in two places for this to work.
- When you add the log reference to the job:

```python
tj.add_log_reference( 'my_custom_log', data['log_reference'] )
```
- In the `text_log_summary` artifact blob, specify the `logname` param. This artifact is what the Log Viewer uses to find the associated log lines for viewing.

```python
{
    "blob": {
        "step_data": {
            "steps": [
                {
                    "errors": [],
                    "name": "step",
                    "started_linenumber": 1,
                    "finished_linenumber": 1,
                    "finished": "2015-07-08 06:13:46",
                    "result": "success",
                }
            ],
            "errors_truncated": false
        },
        "logurl": "",
        "logname": "my_custom_log"
    },
    "type": "json",
    "id": 10577808,
    "name": "text_log_summary",
    "job_id": 1774360
}
```
Select a value from the Layers and Properties list to control how to export object styles from Revit Architecture to AutoCAD (or other CAD applications).
When you export a Revit view to DWG or DXF, each Revit category is mapped to an AutoCAD layer, as specified in the Export Layer dialog. In AutoCAD, the layer controls the display of the entities (Revit elements), including their colors, line weights, and line styles. In Revit Architecture, you define object styles in the Object Styles dialog. (See Object Styles.) The Layers and Properties setting determines what happens to a Revit element if it has attributes (object styles) that differ from those defined for its category. In AutoCAD and in Revit Architecture, view-specific element graphics are referred to as overrides.
Select one of the following values:
For example, suppose that, in a Revit Architecture project, most walls display with solid black lines, with a line weight of 5. In a floor plan, however, you have changed the view-specific element graphics for one wall to use dashed blue lines, with a line weight of 7.
When you export this view to DWG or DXF and, for Layers and Properties, select: | http://docs.autodesk.com/REVIT/2011/ENU/filesUsersGuide/WS1a9193826455f5ff104d7f510f19418261-6e66.htm | 2014-12-18T12:27:45 | CC-MAIN-2014-52 | 1418802766292.96 | [] | docs.autodesk.com |
Use a canvas strategy
The large area of a typical computer interface allows you to present an application with a mix of content and UI components. When the same application is created for the BlackBerry PlayBook tablet, one way to adapt is to categorize your canvas as either continuous or discrete. A continuous canvas contains content that can be arbitrarily subdivided (for example, a map or a building blueprint). A discrete canvas contains content that has obvious, defined subcomponents (for example, a deck of cards, a contact list, or an eBook).
On a continuous canvas, consider allowing users to move (pan) slowly through the content, move quickly, zoom in and out, and perhaps rotate. On a discrete canvas, you should also consider allowing users to move (shift) slowly or quickly through the content, but there are likely some other actions that you might want to enable using gestures. For example, you can flip over a card, navigate within a contact list, or jump to the next chapter in a document.
(Figure: an example of a canvas strategy.)
Upgrade Guide
Configure permissions for the Windows account
On each computer that you want to install the BlackBerry® Enterprise Server Express components on, you must configure permissions for the Windows® account that you want to use to install the BlackBerry Enterprise Server Express components and run the services for the BlackBerry Enterprise Server Express.
Without the correct permissions, the BlackBerry Enterprise Server Express cannot run.
- Right-click My Computer. Click Manage.
- In the left pane, expand Local Users and Groups.
- Navigate to the Groups folder.
- In the right pane, double-click Administrators.
- Click Add.
- In the Enter the object names to select field, type the Windows account name that you want the services for the BlackBerry Enterprise Server Express to use (for example, BESAdmin).
- Click OK.
- Click Apply.
- Click OK.
- On the taskbar, click Start > Programs > Administrative Tools > Local Security Policy.
- Configure the following permissions for the Windows account:
- On the taskbar, click Start > Programs > Administrative Tools > Computer Management.
- Add the Windows account to the local administrators group.
User Guide
Use a shortcut for switching typing input languages when you're typing
- On the Home screen or in a folder, click the Options icon.
- Click Typing and Language > Language and Method.
- Press the Menu key > Save.
The Intelligent Document Processing (IDP) application is primarily a mechanism to extract information from documents in order to digitize that data.
This process requires users to:
This document teaches users how to use IDP to upload, classify, and reconcile documents, as well as how to view information about processed documents, such as document status and the extracted information. It also provides an overview of the document processing metrics.
Not all users will have access to all actions and views. See the groups reference page for more information on what type of access each security group provides.
IDP transforms unstructured data from PDF documents into structured data. The application only accepts documents in PDF format, so before you get started, you may need to convert your documents. Appian Community offers multiple plug-ins to convert documents from other file formats to PDF:
We don't recommend converting Excel files to PDF for use in IDP. Instead, Appian can parse information from Excel files using rules. Use the Excel Tools plug-in to extract information from this file format.
If you need to upload documents manually, IDP features an easy-to-use upload form for document processing. However, you can also use it as a subprocess in a larger workflow. Furthermore, you can upload documents from external systems automatically, using a web API.
To upload documents manually:
After you upload the documents, the classification and extraction process will start.
The status of each document displays on the DOCUMENTS tab, along with other information and metrics about each document.
On the top-right corner of the page, click the refresh icon to view the updated status. You can also use the filters at the top of the page to find certain documents.
The possible statuses are:
The Reconciled By column shows who completed the reconciliation task for a document. If the document type has automatic validations set up, the reconciliation task is skipped and Straight Through Processed appears in this column. If automatic validation fails, a user will need to complete the reconciliation task and their name appears in this column.
After documents are uploaded, tasks are assigned to users so that they can classify documents and confirm or correct the extracted information. See Tasks for more information on using tasks in Appian.
If there are any documents that are in the Pending Classification or Pending Reconciliation status, you can classify and reconcile them in the TASKS tab.
In this tab, you can search for tasks by Task Name and filter by Task Type (Classification or Reconciliation), Document Channel (if configured), or who the task is Assigned To.
Classification tasks are represented by icons.
For documents that didn't meet the minimum confidence threshold during auto-classification, a task will automatically be created for a user to manually classify the document.
These documents will be in the Pending Classification status.
To complete a manual classification task:
If the document is invalid, click INVALIDATE. For example, if an unsigned process order is uploaded instead of a signed one, you can classify it as invalid. Invalid documents won't go through data extraction and reconciliation.
All documents need to be reviewed by a user for accuracy and to fill in any missing fields. This is called reconciliation. After a document is uploaded, the Appian Document Extraction runs, extracting data from the document. Extraction usually takes about 2 - 5 minutes. When it is finished, a reconciliation task is automatically generated.
While the data is extracting, these documents will be in the Auto-Extracting status. After extraction is complete, they will be in the Pending Reconciliation status.
To complete the reconciliation task:
The status for reconciled documents will change to Completed and the data extracted will be written to the database.
To view the information that was input for a document, go to the DOCUMENTS tab and click the document name.
The Summary tab lists Overview information about the document at the top of the page. It also displays the data that was extracted from the document, along with a document viewer.
To edit the information that was extracted, click the edit button. Then update the information in the fields.
The METRICS tab is used for reporting and governance so that users can see how well the application is performing.
You can filter the information by Document Channel (if configured) and Document Type, as well as only show information for documents processed in the Past 3 Months or the Past 6 Months.
The first section on this page shows some key performance indicators for documents that have completed processing, including:
The next section displays charts that show:
At the bottom of the page is a grid that shows the metadata for processed documents.
To ensure traceability in processes, output needs to be versioned and tracked as it is handed from one process stage to another. With CloudBees CD/RO Artifact Management, implementing your processes is auditable and secure.
This tutorial shows how to produce and consume artifacts as part of a CloudBees CD/RO process.
To view an example procedure for this tutorial, go to the automation platform UI > Projects > EC-Tutorials-<version>, and click the procedure name.
An explanation of what you see: the following concepts are illustrated in this procedure.
The first five steps are simple ec-perl scripts that create files and directories as use as an artifact.
The first step, "Create myartifact.txt file", creates a text file in a workspace that can be added as an artifact. Click this step to go to the Edit Step page to see the full "dummy" script used to set up the artifact in the Command(s) field. This script creates a file named myartifact.txt.

The next four steps create a directory, a subdirectory, and the second text file in it.

The "Publish artifact" step calls the EC-Artifact:Publish procedure. This step is configured to place the file myartifact.txt, created in the Setup step, into an artifact.
To do this, the EC-Artifact:Publish procedure requires the following information:

* The artifact where you want to publish. This is in the format `Artifact Group:Artifact Key`. In this case, the name of the artifact is EC-Tutorials:MyArtifact.
* The artifact version being published. The version is in the form of `major.minor.patch-qualifier-buildnumber` or `major.minor.patch.buildnumber-qualifier`. For this tutorial, the version is set to `1.0.$[jobId]-tutorial`. Using the `$[jobId]` property expansion results in a new artifact version being created every time the procedure is run.
* The repository server to use. This tutorial uses a repository named `default` that was created when CloudBees CD/RO was installed.

To see how this information is specified in this step, click the "Publish artifact" step name to go to the Edit Step page.
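For reference, the `Artifact Group:Artifact Key` coordinate and the version scheme behave as you would expect; a trivial sketch of both (illustrative only, not CloudBees code):

```python
def artifact_coordinates(spec):
    """Split an 'Artifact Group:Artifact Key' spec into its two parts."""
    group, key = spec.split(":", 1)
    return group, key

def tutorial_version(job_id, qualifier="tutorial"):
    """Mimic the '1.0.$[jobId]-tutorial' property expansion: each run's
    unique job ID yields a distinct artifact version."""
    return f"1.0.{job_id}-{qualifier}"

print(artifact_coordinates("EC-Tutorials:MyArtifact"))
# → ('EC-Tutorials', 'MyArtifact')
print(tutorial_version(42))
# → 1.0.42-tutorial
```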
The "Retrieve artifact" step calls the EC-Artifact:Retrieve procedure. This step takes two required parameters and several optional parameters (see the Parameter section on the Edit Step page for this step). The required parameters are:

* Artifact – EC-Tutorials:MyArtifact (the name of the artifact to retrieve).
* Version – `1.0.$[jobId]-tutorial` (the artifact version to be retrieved, either an exact version or a version range). This tutorial uses an exact version.

The optional parameters are:

* Retrieve to directory – the location where the artifact files are downloaded when they are retrieved. This is where the retrieved artifact versions are stored.
* Overwrite – the default is update, where the artifact files are updated when they are retrieved. The other values are true (the files for the retrieved artifact versions are overwritten) or false (the files for the retrieved artifact versions are not overwritten).
The Retrieved Artifact Location Property parameter is the name or property sheet path that the step uses to create a property sheet. The default value is `/myJob/retrievedArtifactVersions/$[assignedResourceName]`. This property sheet has information about the retrieved artifact versions, including their location in the file system. It displays the location from which the artifact is to be retrieved and retrieves the property value in `/myJob/retrievedArtifactVersions/$[assignedResourceName]`.
For the Filter(s) parameter, enter search filters, one per line, applicable to querying the CloudBees CD/RO database for the artifact version to retrieve.
For more information about these parameters, see Retrieve Artifact Version Step.
The "Output location that artifact was retrieved to" step prints the message "Artifact extracted to: <directory where the retrieved artifact will be located>".
Click Run to run this sample procedure and see the resulting job on the Job Details page.
Implementation
To publish and retrieve artifacts, apply these concepts to your artifact projects, creating the steps as required in your procedure.
Related information
Artifact Management —Help topic
Artifact Details —Help topic
Job Details —Help topic
Publish Artifact Version step —Help topic
Retrieve Artifact Version step —Help topic | https://docs.cloudbees.com/docs/cloudbees-cd/10.1/automation-platform/help-tutorial-pubretrieveartifact | 2021-11-27T11:17:47 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.cloudbees.com |
Corelight Suricata Alerts
Overview
This article explains how to ingest your Corelight Suricata alerts into Hunters. Corelight Suricata alerts are a different data type than regular open-source Suricata alerts (described here), since they're passed through the Zeek processing engine and are output in Zeek format, as explained here.
For Hunters to integrate with your Corelight Suricata logs, the logs should be collected to a Storage Service (e.g. to an S3 bucket or Azure Blob Storage) shared with Hunters.
Expected Log Format
The expected log format is JSON, which is configurable as part of the Corelight Suricata solution.
Below is an example of a currently supported log line:
```json
{"_path":"suricata_corelight","_system_name":"sys-01","_write_ts":"2021-10-01T00:00:00.853803Z","ts":"2021-10-01T00:00:00.
```
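Once the JSON lines land in your storage bucket, each record parses with any standard JSON library. A minimal sketch (the fields pulled out below are an illustrative choice):

```python
import json

def parse_corelight_line(line):
    """Pull a few identifying fields out of one suricata_corelight record."""
    rec = json.loads(line)
    return {
        "path": rec.get("_path"),          # Zeek stream name, e.g. suricata_corelight
        "sensor": rec.get("_system_name"),
        "written_at": rec.get("_write_ts"),
    }

sample = ('{"_path": "suricata_corelight", "_system_name": "sys-01", '
          '"_write_ts": "2021-10-01T00:00:00.853803Z"}')
print(parse_corelight_line(sample))
# → {'path': 'suricata_corelight', 'sensor': 'sys-01', 'written_at': '2021-10-01T00:00:00.853803Z'}
```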
Jamf
Overview
Jamf is the most prominent way to manage MacOS devices in an enterprise organization. As such, logs pulled from the Jamf API provide important information regarding the organizational MacOS devices being used, which is all the more important as these MacOS endpoints are usually not a part of a managed Active Directory network (as opposed to Windows enterprise fleets).
For example, the Jamf Computers API allows establishing a contextual list of all endpoints belonging to the organization, which enables detection of access to organizational resources or SaaS applications done from an unmanaged device.
Additional important contextual information pulled from the Jamf API includes user lists, policies, managed scripts, network segments and more.
Supported data types
Computers
Mac Applications
Network Segments
Packages
Policies
Scripts
Users
Sending data to Hunters
Prerequisites
In order to integrate your Jamf instance with Hunters, follow these steps to create an appropriate user and an API key.
Login to jamf and go to the Settings section.
Go to Accounts. It can be found in the All Settings or System Settings tabs, under Jamf Pro User Accounts & Groups.
Add a new user.
Choose create account. Select Create Standard Account, and then click Next.
Fill out the new user account form. Please make sure that:
Access level is Full Access
Privilege Set is Auditor
Access Status is Enabled
Copy the Username and Password for the next stage and click save.
Generate Authentication Key - generate a basic authentication token by encoding `<username>:<password>` in Base64, using the username and password from the last step.

One way to do it is to open a (macOS or Linux) terminal and run this command:

```
echo -n "username:password" | base64
```
Get API domain copy the api host address from your browser address bar when in the jamf console | https://docs.hunters.ai/wiki/Jamf.6291554.html | 2021-11-27T11:41:03 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.hunters.ai |
NAV 2017 CU1 on Azure
Last.
Building the image
A number of pre-requisites are installed before installing NAV on the image. All of these have changed in NAV 2017 CU1. Here's a list of what a NAV 2017 CU1 image consists of:
- Windows Server 2012 R2 Datacenter (might try 2016 for CU2)
- All updates from Windows Update
- PowerShellGet (PackageManagement_x64.msi) for PowerShell Gallery (from here:)
- NuGet Package Provider 2.8.5.201 or higher (Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force)
- Search Service (Install-WindowsFeature Search-Service)
- Download all DVD's to C:\NAVDVD\<countrycode>)
- .net 4.5.2 (\Prerequisite Components\Microsoft .NET Framework 4.5.2\dotNetFx452_Full_x86_x64.exe on the DVD)
- SQL Express 2016 (instance name NAVDEMO)
- SharePoint Online Management Shell ()
- Install full NAV W1 on the server to standard folders and standard ports, using localhost\NAVDEMO as the database server using Windows Authentication.
- Excluding the following installation options: ADCS, Outlook Addin, Excel Addin, Outlook Integration
- Copy the content of the DEMO folder to C:\DEMO
Basically that's it. When people select a different localization of NAV to use in the C:\DEMO\Initialize script, the script will remove the W1 database, restore the new database (from the DVD in C:\NAVDVD) and run all the local installers from the DVD (C:\NAVDVD\<countrycode>\Installers\)
Now of course there are some tweeks because creating an image, because I have to do SYSPREP, which removes users and settings, so I have to make all administrators sysadmins on the SQL Server (as the normal administrator who installs SQL Express will be gone).
Earlier versions of the image used WebPI (Web Platform Installer) which caused new users created on the server after spinning up a VM to be unable to logon if they were not administrators. Earlier versions also had a lot of other pre-requisites - primarily because we didn't have the PowerShell Gallery.
At this time, you might be wondering why I am explaining how to build the image? - hold that thought for one second...
Unfortunately no CZ (Czech) localization on NAV 2017 CU1
Unfortunately the Czech localization is NOT on NAV 2017 CU1.
NAV 2017CU1 is based on W1 build 14199.
Due to a late bug in the Czech app, we had to take a new build for the Czech Localization. This build was created two days later on W1 build 14300. Unfortunately during these two days, the database version number was bumped.
This means that it is not possible to restore the CZ database and run the local installers to change the country code on the image. We build 14199 cannot use/mount a database from W1 14300. This is unfortunately a hard stop and I was forced to remove the CZ build from the image.
If you want to create a CZ demo environment with NAV 2017 CU1 - you will have to create the NAV VM on azure manually following the steps in the section about building the image.
Aka.ms/...
When deploying a new image to Azure, it is available almost immediately in the Classic Portal. It is also available to templated deployment (like).
At this time, the following two Short URLs are working with NAV 2017CU1. - gives you a VM on Azure without running any scripts. You can run the demo scripts manually if you like. - gives you an initialized VM on Azure and runs a number of demo scripts to setup a demo environment.
Note The deployments done under these short Urls can potentially patch the files in the DEMO folder if there are any pending bug fixes to these files. This is the recommanded way of deploying NAV on Azure, you will always get the latest.
What else?
All DEMO scripts have been re-visited and updated. Earlier you had warnings and verbose messages flooding the screen and you couldn't really determine whether or not there were problems or everything was OK. As an example, the initialize output could look like this:
Every installation script in the DEMO folder used to have its own HelperFunctions.ps1. A lot of these had the same HelperFunctions. These have now been moved to a Common HelperFunctions.ps1 script. This supports things like logging to the DEMO status file and ensuring that a lot of the warnings and verbose messages that rolled over the screen whenever running a script, now is at a minimum and the necessary things are logged to ensure that it is easier to locate any problems on your VM.
Using the PowerShell Gallery
Some scripts, that needed to download and install PowerShell modules (like AzureRM) are now using the PowerShell Gallery to do this:
if (Get-Module -ListAvailable -Name AzureRM) { Log "AzureRM Powershell module already installed" } else { Log "Install AzureRM PowerShell Module" Install-Module -Name "AzureRM"-Repository "PSGallery" -Force } Import-Module -Name "AzureRM"
Other scripts, like the AzureSQL installer requires the DAC Framework and uses NuGet package provider to download this:
function Install-DACFx { $packageName = "Microsoft.SqlServer.DacFx.x64" if (Get-Package -Name $packageName) { Log "$PackageName Powershell module already installed" } else { Log "Install $PackageName PowerShell Module" Register-PackageSource -Name NuGet -Location -Provider NuGet -Trusted -Verbose | Out-Null Install-Package -Name $packageName -MinimumVersion 130.3485.1 -ProviderName NuGet -Force | Out-Null } (Get-Package -name $packageName).Version }
Office 365 Single Sign On
A number of people reported, that the way the AAD (Azure Active Directory) App for Single Sign On was created, it was impossible for non-admin O365 users to access NAV. The reason for this was, that in the NAV 2017RTM image, I changed from using the Set-NAVSingleSignOn Cmdlet, to do things in PowerShell. The PowerShell code did set the ClientServicesFederationMetadataLocation and the WSFederationLoginEndpoint to use a specific AAD Tenant (instead of Common) and as a result of this, the AAD App had to request permissions, that only an O365 Admin could give.
In CU1 - these keys are:
<add key="ClientServicesFederationMetadataLocation" value="" /> <add key="WSFederationLoginEndpoint" value="<Web Client URL>" />
and the permissions required from the user accessing the app is just: Sign you in and read your profile:
The PowerShell Code for creating an AAD App, assigning permissions and adding a Logo can be found in c:\demo\O365 Integration\HelperFunctions.ps1 in the method called Setup-AadApps.
Extensions
The Azure Gallery Image contains three sample extensions (also in NAV 2016)
- BingMaps Integration
- O365 Integration
- MSBand Integration
For NAV 2017 CU1, these extensions have gone through a major overhaul:
- Objects, controls, variables and all have been renumbered to be in the 50000 range
- Exposing Web Services is now part of the extension (earlier this was done in PowerShell)
- The Control Add-ins are now part of the extension (earlier this was done in PowerShell)
- Local translations are now part of the extension (ealier this was .txt files copied to the Service Translations folder in PowerShell)
Looking at the BingMaps Sources folder gives you the source of the BingMaps Extension:
Definitions
- All .DELTA files are source code DELTAs.
- The .zip files are Control Add-ins.
- The .xml files are Web Services definitions
- The .txt files are translations
and here I need to express my apology to my icelandic friends that I do not have any icelandic translations. The primary reason for this is, that the Cognitive Translation Service on Azure doesn't support Icelandic and I found out of this too late. I am sure that some of my icelandic friends will help me translate the texts for the extensions to icelandic and I will include this in future CU's.
The BingMaps Extension has also been made more fail safe. This means that a simple geocoding error won't give a hard error, but will only cause that customer not to be geocoded.
Extension Development Shell (now with translation support)
The Extension Development Shell on the Azure Gallery Image is a lightweight set of scripts, that helps me develop the extensions on the Gallery Image.
After installing the Extension Development Shell you will see a list of available commands:
Nothing looks different since the NAV 2017RTM image here - but a lot have changed behind the scenes.
An AppFolder still refers to a folder under C:\DEMO, meaning that BingMaps is the App Folder name of the BingMaps extension. The CmdLets are still called the same, but a lot of the internals have changes.
The CmdLets now have fewer parameters and instead they assume, that in the App Folder, there is an AppSettings.ps1 available. If AppSettings is not available it will default values for you.
Lets try to setup a development environment for the BingMaps extension. Write the following commands:
New-DevInstance -DevInstance dev -Language w1 Import-AppFolderDeltas -DevInstance dev -AppFolder BingMaps
(Yes, I know that language should have been countrycode)
The New-DevInstance will:
- Create a new Service Tier for development (using W1)
- Create a Web Client for this dev environment
- Export all object files (we need them later)
- Export all language files (we need them later)
- Create shortcuts on the desktop for accessing the development environment, the Web Client and the Windows Client.
The Import-AppFolderDeltas will:
- Merge the .DELTA files in the AppFolder\Sources with the exported object files
- Import and Compile the merged objects
If you open the DEV Development Environment now, you will see:
(sorry for the Version List mismatch)
Now you can modify objects as you like and test things. In BingMaps Events you can see how to subscribe to the RegisterServiceConnection and have your Extension appear in the Service Connections list.
You will also find a sample on how to subscribe to the OnAfterActionEvent on the Extension Details Page in order to show a dialog when users install your extension and ask whether they want to setup the service connection right away.
When you have done your modifications you can open the extensions development shell again and use one of these CmdLets
Update-AppFolderDeltas -DevInstance dev -AppFolder BingMaps Update-AppFolderNavX -DevInstance dev -AppFolder BingMaps
The Update-AppFolderDeltas Cmdlet will:
- Export object files from modified objects
- Compare your modified objects to original objects and create .DELTA files
- Export language files from modified objects
- Compare your modified texts to original and create <OBJ>-strings.txt files for each object
- Compare translations for all languages mentioned AppLanguages (in AppSettings.ps1) and
- Remove removed strings or strings that have been translated in the object
- Add new strings (and make a translation suggestion using the Azure Translation Service if you have provided a free key. See c:\demo\Common\HelperFunctions.ps1::TranslateText for implementation details)
- Update files in the Sources folder.
Note: If you modify the translation files, and fix translation errors, your translations will NOT be overridden by the Azure Translated strings:-)
The Update-AppFolderNavX Cmdlet will:
- Do everything the Update-AppFolderDeltas does
- Create a new .NavX file in the AppFolder.
So far so good.
If you want to test your extension, you can spin up a new test environment using
New-DevInstance -DevInstance devdk -Language DK Install-AppFolderNavX -DevInstance devdk -AppFolder BingMaps
Note: The first time spinning up a dev instance in new language will take some time.
Where the Install-AppFolderNavX Cmdlet will:
- Install the .NavX from the AppFolder to the corresponding development instance.
After this, you can test the extension on the Danish version of NAV 2017 CU1.
Important note: The Extensions Development Shell is not a full blown development environment, that should be used for all extension development. It is a light weight development environment, which can be used as inspiration for people when building up their Development Environments - there are a LOT of things that could be added to this.
2nd very Important note: The Extension Development Shell has nothing to do with the upcoming New Developer Experience in Visual Studio Code.
If you want to clean up, use:
Remove-AllDevInstances
which will do exactly as the name suggests.
You can of course edit the O365 Extension by using "O365 Integration" instead of BingMaps.
Enough for now.
Enjoy
Freddy Kristiansen
Technical Evangelist | https://docs.microsoft.com/en-us/archive/blogs/freddyk/nav-2017-cu1-on-azure | 2021-11-27T13:26:20 | CC-MAIN-2021-49 | 1637964358180.42 | [array(['https://msdnshared.blob.core.windows.net/media/2016/12/runinitialize.png',
'runinitialize'], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2016/12/permissions.png',
'permissions'], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2016/12/bingmapssources.png',
'bingmapssources'], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2016/12/extensiondevelopmentshell.png',
'extensiondevelopmentshell'], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2016/12/devenvbingmaps.png',
'devenvbingmaps'], dtype=object) ] | docs.microsoft.com |
$
Create a directory for your components:
$ mkdir my_components && cd my_components
Download the example back-end application:
$ git clone backend
Change to the back-end source directory:
$ cd backend
Check that you have the correct files in the directory:
$ --s2i frontend
Change the current directory to the front-end directory:
$ cd frontend
List the contents of the directory to see that the front end is a Node.js application.
$ ls
README.md openshift server.js views helm package.json tests
Create a component configuration of Node.js component-type named
frontend:
$ odo create --s2i to interact. OpenShift Container Platform provides linking mechanisms to publish communication bindings from a program to its clients.
List all the components that are running on the cluster:
$ odo list
OpenShift Components: APP NAME PROJECT TYPE SOURCETYPE STATE app backend testpro openjdk18 binary Pushed app frontend testpro nodejs local Pushed
Link the current front-end component to the back end:
$.
Navigate to the
frontend directory:
$ cd frontend
Create an external URL for the application:
$.
Use the
odo app delete command to delete your application.. | https://docs.openshift.com/container-platform/4.9/cli_reference/developer_cli_odo/creating_and_deploying_applications_with_odo/creating-a-multicomponent-application-with-odo.html | 2021-11-27T12:05:23 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.openshift.com |
agent tree provides access to the following functions:
Search for endpoints: Locate specific endpoints by typing search criteria in the text box.
Synchronize with OfficeScan: Synchronize the plug-in program’s agent tree with the OfficeScan server’s agent tree. For details, see Synchronizing the Agent Tree.
Administrators can also manually search the agent tree to locate endpoints or domains. Specific computer information displays in the table on the right. | https://docs.trendmicro.com/en-us/enterprise/endpoint-application-control-officescan-plug-in/client_management_ch_intro/cl_tr_tsk_sp.aspx | 2021-11-27T10:57:38 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.trendmicro.com |
Contents:
Contents:
This Trifacta Wrangler, you can use the following techniques to address some of the issues you might encounter in the standardization of units and values for numeric types.
Numeric precision
In Trifacta Wrangler,.
NOTE: When aggregation is applied, a new table of data is generated with the columns that you specifically select for inclusion.
For more information, see Pivot Data.
This page has no comments. | https://docs.trifacta.com/pages/viewpage.action?pageId=174747814&navigatingVersions=true | 2021-11-27T11:51:53 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.trifacta.com |
Local to remote¶
When developers write code they typically begin with a simple serial code and build upon it until all of the required functionality is present. The following set of examples were developed to demonstrate this iterative process of evolving a simple serial program to an efficient, fully-distributed HPX application. For this demonstration, we implemented a 1D heat distribution problem. This calculation simulates the diffusion of heat across a ring from an initialized state to some user-defined point in the future. It does this by breaking each portion of the ring into discrete segments and using the current segment’s temperature and the temperature of the surrounding segments to calculate the temperature of the current segment in the next timestep as shown by Fig. 2 below.
We parallelize this code over the following eight examples:
The first example is straight serial code. In this code we instantiate a vector
U that contains two vectors of doubles as seen in the structure
stepper.
struct stepper { // Our partition type typedef space do_work(std::size_t nx, std::size_t nt) { // U[t][i] is the state of position i at time t. std::vector<space> U(2); for (space& s : U) s.resize(nx); // Initial conditions: f(0, i) = i for (std::size_t i = 0; i != nx; ++i) U[0][i] = double(i); // Actual time step loop for (std::size_t t = 0; t != nt; ++t) { space const& current = U[t % 2]; space& next = U[(t + 1) % 2]; next[0] = heat(current[nx - 1], current[0], current[1]); for (std::size_t i = 1; i != nx - 1; ++i) next[i] = heat(current[i - 1], current[i], current[i + 1]); next[nx - 1] = heat(current[nx - 2], current[nx - 1], current[0]); } // Return the solution at time-step 'nt'. return U[nt % 2]; } };
Each element in the vector of doubles represents a single grid point. To
calculate the change in heat distribution, the temperature of each grid point,
along with its neighbors, is passed to the function
heat. In order to
improve readability, references named
current and
next are created
which, depending on the time step, point to the first and second vector of
doubles. The first vector of doubles is initialized with a simple heat ramp.
After calling the heat function with the data in the
current vector, the
results are placed into the
next vector.
In example 2 we employ a technique called futurization. Futurization is a method
by which we can easily transform a code that is serially executed into a code
that creates asynchronous threads. In the simplest case this involves replacing
a variable with a future to a variable, a function with a future to a function,
and adding a
.get() at the point where a value is actually needed. The code
below shows how this technique was applied to the
struct stepper.
struct stepper { // Our partition type typedef hpx::shared_future hpx::future<space> do_work(std::size_t nx, std::size_t nt) { using hpx::dataflow; using hpx::unwrapping; // U[t][i] is the state of position i at time t. std::vector<space> U(2); for (space& s : U) s.resize(nx); // Initial conditions: f(0, i) = i for (std::size_t i = 0; i != nx; ++i) U[0][i] = hpx::make_ready_future(double(i)); auto Op = unwrapping(&stepper::heat); // Actual time step loop for (std::size_t t = 0; t != nt; ++t) { space const& current = U[t % 2]; space& next = U[(t + 1) % 2]; // WHEN U[t][i-1], U[t][i], and U[t][i+1] have been computed, THEN we // can compute U[t+1][i] for (std::size_t i = 0; i != nx; ++i) { next[i] = dataflow(hpx::launch::async, Op, current[idx(i, -1, nx)], current[i], current[idx(i, +1, nx)]); } } // Now the asynchronous computation is running; the above for-loop does not // wait on anything. There is no implicit waiting at the end of each timestep; // the computation of each U[t][i] will begin as soon as its dependencies // are ready and hardware is available. // Return the solution at time-step 'nt'. return hpx::when_all(U[nt % 2]); } };
In example 2, we redefine our partition type as a
shared_future and, in
main, create the object
result, which is a future to a vector of
partitions. We use
result to represent the last vector in a string of
vectors created for each timestep. In order to move to the next timestep, the
values of a partition and its neighbors must be passed to
heat once the
futures that contain them are ready. In HPX, we have an LCO (Local Control
Object) named Dataflow that assists the programmer in expressing this
dependency. Dataflow allows us to pass the results of a set of futures to a
specified function when the futures are ready. Dataflow takes three types of
arguments, one which instructs the dataflow on how to perform the function call
(async or sync), the function to call (in this case
Op), and futures to the
arguments that will be passed to the function. When called, dataflow immediately
returns a future to the result of the specified function. This allows users to
string dataflows together and construct an execution tree.
After the values of the futures in dataflow are ready, the values must be pulled
out of the future container to be passed to the function
heat. In order to
do this, we use the HPX facility
unwrapping, which underneath calls
.get() on each of the futures so that the function
heat will be passed
doubles and not futures to doubles.
By setting up the algorithm this way, the program will be able to execute as quickly as the dependencies of each future are met. Unfortunately, this example runs terribly slow. This increase in execution time is caused by the overheads needed to create a future for each data point. Because the work done within each call to heat is very small, the overhead of creating and scheduling each of the three futures is greater than that of the actual useful work! In order to amortize the overheads of our synchronization techniques, we need to be able to control the amount of work that will be done with each future. We call this amount of work per overhead grain size.
In example 3, we return to our serial code to figure out how to control the
grain size of our program. The strategy that we employ is to create “partitions”
of data points. The user can define how many partitions are created and how many
data points are contained in each partition. This is accomplished by creating
the
struct partition, which contains a member object
data_, a vector of
doubles that holds the data points assigned to a particular instance of
partition.
In example 4, we take advantage of the partition setup by redefining
space
to be a vector of shared_futures with each future representing a partition. In
this manner, each future represents several data points. Because the user can
define how many data points are in each partition, and, therefore, how
many data points are represented by one future, a user can control the
grainsize of the simulation. The rest of the code is then futurized in the same
manner as example 2. It should be noted how strikingly similar
example 4 is to example 2.
Example 4 finally shows good results. This code scales equivalently to the OpenMP version. While these results are promising, there are more opportunities to improve the application’s scalability. Currently, this code only runs on one locality, but to get the full benefit of HPX, we need to be able to distribute the work to other machines in a cluster. We begin to add this functionality in example 5.
In order to run on a distributed system, a large amount of boilerplate code must
be added. Fortunately, HPX provides us with the concept of a component,
which saves us from having to write quite as much code. A component is an object
that can be remotely accessed using its global address. Components are made of
two parts: a server and a client class. While the client class is not required,
abstracting the server behind a client allows us to ensure type safety instead
of having to pass around pointers to global objects. Example 5 renames example
4’s
struct partition to
partition_data and adds serialization support.
Next, we add the server side representation of the data in the structure
partition_server.
Partition_server inherits from
hpx::components::component_base, which contains a server-side component
boilerplate. The boilerplate code allows a component’s public members to be
accessible anywhere on the machine via its Global Identifier (GID). To
encapsulate the component, we create a client side helper class. This object
allows us to create new instances of our component and access its members
without having to know its GID. In addition, we are using the client class to
assist us with managing our asynchrony. For example, our client class
partition‘s member function
get_data() returns a future to
partition_data get_data(). This struct inherits its boilerplate code from
hpx::components::client_base.
In the structure
stepper, we have also had to make some changes to
accommodate a distributed environment. In order to get the data from a
particular neighboring partition, which could be remote, we must retrieve the data from all
of the neighboring partitions. These retrievals are asynchronous and the function
heat_part_data, which, amongst other things, calls
heat, should not be
called unless the data from the neighboring partitions have arrived. Therefore,
it should come as no surprise that we synchronize this operation with another
instance of dataflow (found in
heat_part). This dataflow receives futures
to the data in the current and surrounding partitions by calling
get_data()
on each respective partition. When these futures are ready, dataflow passes them
to the
unwrapping function, which extracts the shared_array of doubles and
passes them to the lambda. The lambda calls
heat_part_data on the
locality, which the middle partition is on.
Although this example could run distributed, it only runs on one
locality, as it always uses
hpx::find_here() as the target for the
functions to run on.
In example 6, we begin to distribute the partition data on different nodes. This
is accomplished in
stepper::do_work() by passing the GID of the
locality where we wish to create the partition to the partition
constructor.
for (std::size_t i = 0; i != np; ++i) U[0][i] = partition(localities[locidx(i, np, nl)], nx, double(i));
We distribute the partitions evenly based on the number of localities used,
which is described in the function
locidx. Because some of the data needed
to update the partition in
heat_part could now be on a new locality,
we must devise a way of moving data to the locality of the middle
partition. We accomplished this by adding a switch in the function
get_data() that returns the end element of the
buffer data_ if it is
from the left partition or the first element of the buffer if the data is from
the right partition. In this way only the necessary elements, not the whole
buffer, are exchanged between nodes. The reader should be reminded that this
exchange of end elements occurs in the function
get_data() and, therefore, is
executed asynchronously.
Now that we have the code running in distributed, it is time to make some
optimizations. The function
heat_part spends most of its time on two tasks:
retrieving remote data and working on the data in the middle partition. Because
we know that the data for the middle partition is local, we can overlap the work
on the middle partition with that of the possibly remote call of
get_data().
This algorithmic change, which was implemented in example 7, can be seen below:
// The partitioned operator, it invokes the heat operator above on all elements // of a partition. static partition heat_part(partition const& left, partition const& middle, partition const& right) { using hpx::dataflow; using hpx::unwrapping; hpx::shared_future<partition_data> middle_data = middle.get_data(partition_server::middle_partition); hpx::future<partition_data> next_middle = middle_data.then( unwrapping( [middle](partition_data const& m) -> partition_data { HPX_UNUSED(middle); // All local operations are performed once the middle data of // the previous time step becomes available. std::size_t size = m.size(); partition_data next(size); for (std::size_t i = 1; i != size-1; ++i) next[i] = heat(m[i-1], m[i], m[i+1]); return next; } ) ); return dataflow( hpx::launch::async, unwrapping( [left, middle, right](partition_data next, partition_data const& l, partition_data const& m, partition_data const& r) -> partition { HPX_UNUSED(left); HPX_UNUSED(right); // Calculate the missing boundary elements once the // corresponding data has become available. std::size_t size = m.size(); next[0] = heat(l[size-1], m[0], m[1]); next[size-1] = heat(m[size-2], m[size-1], r[0]); // The new partition_data will be allocated on the same locality // as 'middle'. return partition(middle.get_id(), std::move(next)); } ), std::move(next_middle), left.get_data(partition_server::left_partition), middle_data, right.get_data(partition_server::right_partition) ); }
Example 8 completes the futurization process and utilizes the full potential of
HPX by distributing the program flow to multiple localities, usually defined as
nodes in a cluster. It accomplishes this task by running an instance of HPX main
on each locality. In order to coordinate the execution of the program,
the
struct stepper is wrapped into a component. In this way, each
locality contains an instance of stepper that executes its own instance
of the function
do_work(). This scheme does create an interesting
synchronization problem that must be solved. When the program flow was being
coordinated on the head node, the GID of each component was known. However, when
we distribute the program flow, each partition has no notion of the GID of its
neighbor if the next partition is on another locality. In order to make
the GIDs of neighboring partitions visible to each other, we created two buffers
to store the GIDs of the remote neighboring partitions on the left and right
respectively. These buffers are filled by sending the GID of newly created
edge partitions to the right and left buffers of the neighboring localities.
In order to finish the simulation, the solution vectors named
result are then
gathered together on locality 0 and added into a vector of spaces
overall_result using the HPX functions
gather_id and
gather_here.
Example 8 completes this example series, which takes the serial code of example 1 and incrementally morphs it into a fully distributed parallel code. This evolution was guided by the simple principles of futurization, the knowledge of grainsize, and utilization of components. Applying these techniques easily facilitates the scalable parallelization of most applications. | https://hpx-docs.stellar-group.org/branches/master/html/examples/1d_stencil.html | 2021-11-27T11:26:58 | CC-MAIN-2021-49 | 1637964358180.42 | [] | hpx-docs.stellar-group.org |
assertion¶
The assertion library implements the macros
HPX_ASSERT and
HPX_ASSERT_MSG. Those two macros can be used to implement assertions
which are turned of during a release build.
By default, the location and function where the assert has been called from are
displayed when the assertion fires. This behavior can be modified by using
hpx::assertion::set_assertion_handler. When HPX initializes, it uses
this function to specify a more elaborate assertion handler. If your application
needs to customize this, it needs to do so before calling
hpx::hpx_init,
hpx::hpx_main or using the C-main
wrappers.
See the API reference of the module for more details. | https://hpx-docs.stellar-group.org/branches/master/html/libs/core/assertion/docs/index.html | 2021-11-27T10:59:07 | CC-MAIN-2021-49 | 1637964358180.42 | [] | hpx-docs.stellar-group.org |
UMI Barcoded Illumina MiSeq 2x250 BCR mRNA¶
Overview of Experimental Data¶
This example uses publicly available data from:
B cells populating the multiple sclerosis brain mature in the draining cervical lymph nodes.
Stern JNH, Yaari G, Vander Heiden JA, et al.
Sci Transl Med. 2014. 6(248):248ra107. doi: 10.1126/scitranslmed.3008879.
Which may be downloaded from the NCBI Sequence Read Archive under BioProject accession ID: PRJNA248475. For this example, we will use the first 25,000 sequences of sample M12 (accession: SRR1383456), which may be downloaded using fastq-dump from the SRA Toolkit:
fastq-dump --split-files -X 25000 SRR1383456
Primers sequences are available online at the supplemental website for the publication.
Read Configuration¶
Schematic of the Illumina MiSeq 2x250 paired-end reads with a 15 nucleotide UMI barcode preceding the C-region primer sequence.¶
Example Data¶
We have hosted a small subset of the data (Accession: SRR1383456) on the pRESTO website in FASTQ format with accompanying primer files. The sample data set and workflow script may be downloaded from here:
Flowchart of processing steps for the Stern, Yaari and Vander Heiden workflow.

The workflow begins by masking the primers and annotating each read 1 sequence with the 15 nucleotide UMI that precedes the C-region primer (MaskPrimers score --barcode):
To summarize these steps, the ParseLog.py tool is used to build a tab-delimited file from the MaskPrimers.py log:
Containing the following information:
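The ParseLog.py invocation itself appears to be missing from this copy. Conceptually, what it does is convert "FIELD> value" log records into a tab-delimited table. A toy Python sketch of the idea (illustrative only — the real log format and field names may differ; use ParseLog.py in practice):

```python
def parse_log(text, fields):
    """Convert blank-line-separated 'FIELD> value' log records to a TSV table."""
    rows, record = [], {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            # Blank line ends the current record.
            if record:
                rows.append([record.get(f, "") for f in fields])
                record = {}
            continue
        if "> " in line:
            key, _, value = line.partition("> ")
            record[key] = value
    if record:  # flush the final record
        rows.append([record.get(f, "") for f in fields])
    return "\n".join(["\t".join(fields)] + ["\t".join(r) for r in rows])

log = "ID> SEQ1\nPRIMER> IGHC\n\nID> SEQ2\nPRIMER> IGHC\n"
print(parse_log(log, ["ID", "PRIMER"]))
```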
Note
For this data set the UMI is immediately upstream of the C-region primer. Another common approach for UMI barcoding involves placing the UMI immediately upstream of a 5’RACE template switch site. Modifying the workflow is simple for this case. You just need to replace the V-segment primers with a fasta file containing the TS sequences and move the --barcode argument to the appropriate read:
MaskPrimers.py score -s R1_quality-pass.fastq -p CPrimers.fasta \
    --start 0 --mode cut --outname R1 --log MP1.log
MaskPrimers.py score -s R2_quality-pass.fastq -p TSSites.fasta \
    --start 17 --barcode --mode cut --maxerror 0.5 \
    --outname R2 --log MP2.log
In the above we have moved the UMI annotation to read 2, increased the allowable error rate for matching the TS site (--maxerror 0.5), cut the TS site (--mode cut), and increased the size of the UMI from 15 to 17 nucleotides (--start 17).

Because the UMI is part of read 1, the BARCODE annotation identified by MaskPrimers.py must first be copied to the read 2 mate-pair of each read. If the reads were downloaded from SRA or ENA, the appropriate coordinate argument is --coord sra; for reads that came directly from an Illumina instrument, the appropriate argument would be --coord illumina.
Note
If you have followed the 5’RACE modification above, then you must also modify the first PairSeq.py step to copy the UMI from read 2 to read 1, instead of vice versa (--2f BARCODE):
PairSeq.py -1 R1_primers-pass.fastq -2 R2_primers-pass.fastq \
    --2f BARCODE --coord sra

In the example data used here, this step was not necessary due to the aligned primer design for the 45 V-segment primers, though this does require that the V-segment primers be masked, rather than cut, during the MaskPrimers.py step (--mode mask).
See also
If your data requires alignment, then you can create multiple aligned UMI read groups as follows:
AlignSets.py muscle -s R1_primers-pass_pair-pass.fastq --bf BARCODE \
    --exec ~/bin/muscle --outname R1 --log AS1.log
AlignSets.py muscle -s R2_primers-pass_pair-pass.fastq --bf BARCODE \
    --exec ~/bin/muscle --outname R2 --log AS2.log
Where the --bf BARCODE defines the field containing the UMI and --exec ~/bin/muscle is the location of the MUSCLE executable.
For additional details see the section on fixing UMI alignments.

As the accuracy of the primer assignment in read 1 is critical for correct isotype identification, additional filtering of read 1 is carried out during this step. Specifying the --prcons 0.6 threshold (a) removes individual sequences that do not share a common primer annotation with the majority of their UMI read group, (b) removes entire read groups with ambiguous primer assignments, and (c) constructs a consensus primer annotation (PRCONS) for each group. Mate-pairs are then assembled using the align subcommand of AssemblePairs.py:
During assembly, reads that fail to overlap can still be handled: the reference subcommand of AssemblePairs.py can use the ungapped V-segment reference sequences to properly space non-overlapping reads. Or, if all else fails, the join subcommand can be used to simply stick mate-pairs together end-to-end with some intervening gap.
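Conceptually, the consensus step described above reduces each UMI read group to a single sequence by a per-position majority vote. A toy Python sketch of that idea (illustrative only — not the actual BuildConsensus.py algorithm, which also weighs quality scores and error profiles):

```python
from collections import Counter

def consensus(reads):
    """Per-position majority vote across equal-length reads in one UMI group."""
    assert len(set(map(len, reads))) == 1, "reads must be aligned to equal length"
    result = []
    for column in zip(*reads):          # iterate over alignment columns
        base, _count = Counter(column).most_common(1)[0]
        result.append(base)
    return "".join(result)

group = ["ACGT", "ACGA", "ACGT"]        # three reads sharing one UMI
print(consensus(group))                 # -> ACGT
```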
Deduplication and filtering¶
Combining UMI read group size annotations¶
In the final stage of the workflow, the high-fidelity Ig repertoire is obtained by a series of filtering steps. First, duplicate nucleotide sequences are removed using CollapseSeq.py, with the requirement that duplicates share the same consensus isotype primer annotation (--uf PRCONS).
--uf PRCONS)., MiSeq workflow are presented below. Performance was measured on a 64-core system with 2.3GHz AMD Opteron(TM) 6276 processors and 512GB of RAM, with memory usage measured at peak utilization. The data set contained 1,723,558 x 2 raw reads, and required matching of 1 C-region primer, 45 V-segment primers, and averaged 24.3 reads per UMI. | https://presto.readthedocs.io/en/latest/workflows/Stern2014_Workflow.html | 2021-11-27T11:18:38 | CC-MAIN-2021-49 | 1637964358180.42 | [] | presto.readthedocs.io |
DoubleAnimationUsingKeyFrames Class
Definition
public : sealed class DoubleAnimationUsingKeyFrames : Timeline, IDoubleAnimationUsingKeyFrames
struct winrt::Windows::UI::Xaml::Media::Animation::DoubleAnimationUsingKeyFrames : Timeline, IDoubleAnimationUsingKeyFrames
public sealed class DoubleAnimationUsingKeyFrames : Timeline, IDoubleAnimationUsingKeyFrames
Public NotInheritable Class DoubleAnimationUsingKeyFrames Inherits Timeline Implements IDoubleAnimationUsingKeyFrames
<DoubleAnimationUsingKeyFrames> oneOrMoreDoubleKeyFrames </DoubleAnimationUsingKeyFrames>
- Inheritance
- Object → DependencyObject → Timeline → DoubleAnimationUsingKeyFrames
- Attributes
Windows 10 requirements
Examples
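The example code was lost in this copy of the page. A hypothetical XAML sketch of typical usage (an opacity animation invented for illustration; the element names are placeholders) looks like:

```xaml
<!-- Hypothetical example: animate a rectangle's Opacity down and back up
     using three key frames. -->
<Storyboard x:
  <DoubleAnimationUsingKeyFrames Storyboard.
    <LinearDoubleKeyFrame KeyTime="0:0:0" Value="1.0" />
    <LinearDoubleKeyFrame KeyTime="0:0:1" Value="0.2" />
    <SplineDoubleKeyFrame KeyTime="0:0:2" Value="1.0"
                          KeySpline="0.6,0.0 0.9,0.0" />
  </DoubleAnimationUsingKeyFrames>
</Storyboard>
```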
Constructors
Properties
Methods
Events
See also
1) Trial Balance: Shows the statement of all debit and credit entries in a double-entry accounting system. (You can set a date range to see the trial balance report for a specific period of time.)
Accounts --> Reports --> Financial Statements --> Trial Balance
Before viewing any report, the following reporting window will appear, as shown below:
View of Trial Balance Report:
2) Balance Sheet: Shows the summarized statement which includes Assets, Liabilities & Equity
Accounts --> Reports --> Financial Statements --> Balance Sheet
3) Income Statement: Shows the statement of all revenues & expenses in a reporting format. A date range can be set in order to see the report for a particular period.
Accounts --> Reports --> Financial Statements --> Income Statement
4) Cashflow Statement: Shows all the cashflow-related reports (Operating, Investing & Financing Activities).
The GetData wizard retrieves database data from an ODBC data source. Using the GetData wizard, DriveWorks allows you to connect to existing data within your company. The wizard walks you through the database connection, assists in the selection of the table or view from which to pull the data, and then helps sort and filter the data accordingly.
This is then a dynamic read only link to the data, and can again be used to create lists or lookups within DriveWorks. You can choose when this data gets updated, whether it’s as a manual process or performed automatically. This function is more suited to retrieving large amounts of data.
If the data in the database changes then updating this link in DriveWorks either manually or automatically will refresh the Table in DriveWorks with the new data. Using this method, it is important that the table structure of the external data is accessible and understood.
GETDATA( ["Database"], ["Username"], ["Password"], [Table], [Field], [Where] )
The GetData wizard produces the query language required to connect, filter and display the data. The following explains each step required.
The DSN name of the database to connect to
The username to gain access to the database
The password to gain access to the database
The table that contains the data in the database
The field in the table the data is to be retrieved from
The Where clause relies on some knowledge of the SQL Query language. Examples of typical Where clauses are listed below:
CustomerName Like ‘A%’
Where CustomerName is the field in the table of the database. Like is the comparison operator. % is a wild card symbol.
This Where clause will display all customers in the CustomerName field where the name begins with A
CustomerName Like '" & LetterReturn & "%'
This Where clause will display all customers in the CustomerName field where the name begins with a letter selected from a control on the user form called LetterReturn.
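Putting the pieces together, a complete call might look like the following (the DSN, credentials, and table/field names here are invented for illustration and follow the bracket style of the signature above):

```
GETDATA( ["SalesDSN"], ["dbuser"], ["dbpassword"], [Customers], [CustomerName], [CustomerName Like 'A%'] )
```

This would return every value of the CustomerName field from the Customers table, restricted to names beginning with A.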
Other comparison operators that can be used with the where clause are listed below:
See also
How To: Troubleshoot SQL Connection | http://docs.driveworkspro.com/Topic/GetData | 2019-02-16T06:16:13 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.driveworkspro.com |
Guides
- Fastly Status
Google Cloud Storage
Last updated October 03, 2018
Google Cloud Storage (GCS) can be used as an origin server with your Fastly services once you set up and configure your GCS account and link it to a Fastly service. It can also be configured to use private content. This speeds up your content delivery and reduces your origin's workload and response times with the dedicated links between Google and Fastly's POPs.
TIP: Google offers a Cloud Accelerator integration discount that applies to any Google Cloud Platform product. If you’re a Fastly customer and would like to take advantage of this discount, email salesgcp@fastly.com.
Using GCS as an origin server
To make your GCS data available through Fastly, follow the steps below.
Setting up and configuring your GCS account
Create a bucket to store your origin's data. The Create a bucket window appears.
- Use Google's Search Console to verify ownership of your domain name, if you have not already done so. See the instructions on Google's website.
- Fill out the Create a bucket fields as follows:
- In the Name field, type your domain name (e.g.,
example.comor
images.example.com) to create a domain-named bucket. Remember the name you type. You'll need it to connect your GCS bucket to your Fastly service.
- In the Default storage class area, select Regional.
- From the Regional location menu, select a location to store your content. Most customers select a region close to the interconnect location they specify for shielding.
- Click the Create button.
You should now add files to your bucket and make them externally accessible by selecting the Public link checkbox next to each of the files.
Adding your GCS bucket as an origin server
To add your GCS bucket as an origin server, follow the instructions for connecting to origins. You'll add specific details about your origin server when you fill out the Create a host fields:
- In the Name field, type the name of your server (for example,
Google Cloud Storage).
- In the Address field, type
storage.googleapis.com.
- In the Transport Layer Security (TLS) area, leave the Enable TLS? default set to Yes to secure the connection between Fastly and your origin.
- In the Transport Layer Security (TLS) area, type storage.googleapis.com in the Certificate hostname field.
- From the Shielding menu, select an interconnect location from the list of shielding locations.
Interconnect locations
Interconnect locations allow you to establish direct links with Google's network edge when you choose your shielding location. By selecting one of the locations listed below, you will be eligible to receive discounted pricing from Google CDN Interconnect for traffic traveling from Google Cloud Platform to Fastly's network. Most customers select the interconnect closest to their GCS bucket's region.
Interconnects exist in the following locations within North America:
- Ashburn (DCA)
- Ashburn (IAD)
- Atlanta (ATL)
- Chicago (MDW)
- Dallas (DFW)
- Los Angeles (LAX)
- New York (JFK)
- Seattle (SEA)
- San Jose (SJC)
- Toronto (YYZ)
Interconnects outside of North America exist in:
- Amsterdam (AMS)
- Frankfurt (FRA)
- Frankfurt (HHN)
- Hong Kong (HKG)
- London (LCY)
- London (LHR)
- Madrid (MAD)
- Paris (CDG)
- Singapore (SIN)
- Stockholm (BMA)
- Tokyo (NRT)
Review our caveats of shielding and select an interconnect accordingly.
Setting the default host for your service to your GCS bucket
- Log in to the Fastly web interface and click the Configure link.
- From the service menu, select the appropriate service.
- Click the Configuration button and then select Clone active. The Domains page appears.
- Click the Settings link. The Settings page appears.
Click the Override host switch. The Override host header field appears.
- In the Override host header field, type the name of the override host for this service. The name you type should match the name of the bucket you created in your GCS account and will take the format
<your bucket name>.storage.googleapis.com. For example, if your bucket name is
test123, your override hostname would be
test123.storage.googleapis.com.
- Click the Save button. The new override host header appears in the Override host section.
Creating domains for GCS
- Log in to the Fastly web interface and click the Configure link.
- From the service menu, select the appropriate service.
- Click the Configuration button and then select Clone active. The Domains page appears.
Click the Create domain button. The Create a domain page appears.
- In the Domain Name field, type the name users will type in their browsers to access your site.
- In the Comment field, optionally type a comment that describes your domain.
- Click the Create button. A new domain appears on the Domains page.
- Because GCS responds to different hostnames than your Fastly service, click the Create domain button to create a second domain.
- In the Domain Name field of the second domain you create, type the same value as the default host you created earlier (e.g.,
<your bucket name>.storage.googleapis.com) and click the Create button. A second new domain appears on the Domains page. Shielding POPs need this additional domain so they can route requests correctly. (See Caveats of shielding for more information.)
- Click the Activate button to deploy your configuration changes.
- Add a CNAME DNS record for the domain if you haven't already done so.
You can use
http://<domain>.global.prod.fastly.net/<filename> to access the files you uploaded.
Setting the Cache-Control header for your GCS bucket
GCS performs its own caching, which may complicate efforts to purge cache. To avoid potential problems, we recommend using the gsutil command line utility to set the Cache-Control header for one or more files in your GCS bucket:
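The command block itself appears to be missing from this copy of the guide. Based on the description that follows (cache for zero seconds at GCS, one day at Fastly), it was presumably of the form:

```shell
gsutil setmeta -h "Cache-Control:max-age=0, s-maxage=86400" gs://<bucket>/*
```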
Replace
<bucket> in the example above with your GCS bucket's name. Note that
max-age should instruct GCS to cache your content for zero seconds, and Fastly to cache your content for one day. See Google's setmeta docs for more information.
Changing the default TTL for your GCS bucket
If you want to change the default TTL for your GCS bucket, if at all, keep the following in mind:
- Your GCS account controls the default TTL for your GCS content. GCS currently sets the default TTL to 3600 seconds. Changing the default TTL will not override the default setting in your GCS account.
- To override the default TTL set by GCS from within the Fastly web interface, create a new cache setting and enter the TTL there.
- To override the default TTL in GCS, download the gsutil tool and then change the cache-control headers to delete the default TTL or change it to an appropriate setting.
Using GCS with private objects
To use Fastly with GCS private objects, be sure you've already made your GCS data available to Fastly by pointing to the right GCS bucket, then follow the steps below.
Setting up interoperable access
By default, GCS authenticates requests using OAuth2, which Fastly does not support. To access private objects on GCS, your project must have HMAC authentication enabled and interoperable storage access keys (an "Access Key" and "Secret" pair) created. Do this by following the steps below.
- Open the Google Cloud Platform console and select the appropriate project.
- Click Settings. The Settings appear with the Project Access controls highlighted.
- Click the Interoperability tab. The Interoperability API access controls appear.
- If you have not set up interoperability before, click Enable interoperability access.
Click Make <PROJECT-ID> your default project for interoperable access. If that project already serves as the default project, that information appears instead.
Click Create a new key. An access key and secret code appear.
- Save the access key and secret code that appear. You'll need these later when you're creating an authorization header.
Setting up Fastly to use GCS private content
To use GCS private content with Fastly, create two headers: a Date header (required for the Authorization signature) and an Authorization header.
Creating a Date header

Click the Create header button. The Create a new header page appears.
- Fill out the Create a new header fields as follows:
- In the Name field, type
Date.
- From the Type menu, select Request, and from the Action menu, select Set.
- In the Destination field, type
http.Date.
- In the Source field, type
now.
- From the Ignore if set menu, select No.
- In the Priority field, type
10.
- Click the Create button. A new Date header appears on the Content page. You will use this later within the Signature of the Authorization header.
Creating an Authorization header
Click the Create header button again to create another new header. The Create a header page appears.
- Fill out the Create a header fields as follows:
- In the Name field, type
Authorization.
- From the Type menu, select Request, and from the Action menu, select Set.
- In the Destination field, type
http.Authorization.
- From the Ignore if set menu, select No.
- In the Priority field, type
20.
In the Source field, type the header authorization information using the following format:
replacing <access key>, <GCS secret>, and <GCS bucket name> with the information you gathered before you began. For example:
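The format string and example themselves are missing from this copy. In Fastly's guide the Source expression resembled the following VCL (treat this as a sketch and verify against the current Fastly documentation; digest.hmac_sha1_base64 is Fastly's VCL HMAC helper):

```
"AWS <access key>:" digest.hmac_sha1_base64("<GCS secret>",
    "GET" LF LF LF req.http.Date LF "/<GCS bucket name>" req.url.path)
```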
- Click the Create button. A new Authorization header appears on the Content page.
- Click the Activate button to deploy your configuration changes.

The string being signed takes the form: <HTTP-Verb><\n><Content-MD5><\n><Content-Type><\n><Date><\n><CanonicalizedExtensionHeaders><\n><CanonicalizedResource>
It tells us the following: | https://docs.fastly.com/guides/integrations/google-cloud-storage | 2019-02-16T06:00:38 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.fastly.com |
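To make the signature construction concrete, here is a minimal Python sketch of the same HMAC-SHA1 computation (the access key, secret, and bucket below are made-up placeholders; the Date value must match the Date header sent with the request):

```python
import base64
import hashlib
import hmac

access_key = "GOOGEXAMPLEKEY"            # placeholder interoperable access key
secret = "examplesecret"                 # placeholder interoperable secret
bucket = "test123"                       # placeholder bucket name
date = "Thu, 17 Nov 2005 18:49:58 GMT"   # must equal the request's Date header

# StringToSign: verb, Content-MD5, Content-Type, Date, then the resource path.
string_to_sign = "GET\n\n\n{}\n/{}/".format(date, bucket)

digest = hmac.new(secret.encode(), string_to_sign.encode(), hashlib.sha1).digest()
signature = base64.b64encode(digest).decode()

authorization = "AWS {}:{}".format(access_key, signature)
print(authorization)
```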
Getting support for Puppet Enterprise

This version of Puppet Enterprise has reached end-of-life (EOL), and we encourage upgrading to the latest version.
See also the operating system support lifecycle.
Getting support
Getting support for Puppet Enterprise is easy; it is available both from Puppet, through the customer support portal, and from the community, by:
- Joining the Puppet Enterprise user group.
- Seeking help from the Puppet open source community.
See the support lifecycle page for more details.
Reporting issues to the customer support portal
Paid support
Puppet provides two levels of commercial support offerings for Puppet Enterprise: Standard and Premium. Both offerings allow you to report your support issues to our confidential customer support portal. You will receive an account and log-on for this portal when you purchase Puppet Enterprise.
Customer support portal:
The PE support script
When seeking support, you may be asked to run an information-gathering support script. Among the data it gathers is:

- output from the PuppetDB /summary-stats endpoint: non-identifying data that communicates table/data sizes for troubleshooting.
Join the Puppet Enterprise user group
- Click on “Sign in and apply for membership.”
- Click on “Enter your email address to access the document.”
- Enter your email address.
Your request to join will be sent to Puppet for authorization and you will receive an email when you’ve been added to the user group.
Getting support from the existing Puppet community
As a Puppet Enterprise customer you are more than welcome to participate in our large and helpful open source community as well as report issues against the open source project.
Puppet open source user group:
Puppet Developers group:
Report issues with the open source Puppet project: | https://docs.puppet.com/pe/2016.1/overview_getting_support.html | 2019-02-16T04:52:54 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.puppet.com |
Droplet¶
The Droplet is a service container that gives you access to many of Vapor's facilities. It is responsible for registering routes, starting the server, appending middleware, and more.
Tip
Normally applications will only have one Droplet. However, for advanced use cases, it is possible to create more than one.
Initialization¶
As you have probably already seen, the only thing required to create an instance of Droplet is to import Vapor.

import Vapor

let drop = try Droplet()

// your magic here

try drop.run()

Creation of the Droplet normally happens in the main.swift file.

Note

For the sake of simplicity, most of the documentation's sample code uses just the main.swift file. You can read more about packages and modules in the Swift Package Manager conceptual overview.
Environment¶
The environment is accessible via the config of the droplet. It contains the current environment your application is running in. Usually development, testing, or production.

if drop.config.environment == .production { ... }

The environment affects Config and Logging. The environment is development by default. To change it, pass the --env= flag as an argument.
vapor run serve --env=production
If you are in Xcode, you can pass arguments through the scheme editor.
Warning
Debug logs can reduce the number of requests your application can handle per second. Enable production mode to silence non-critical logs.
Config Directory¶
The workDir property contains a path to the current working directory of the application. Vapor uses this property to find the folders related to your project, such as Resources, Public, and Config.

print(drop.config.workDir) // /var/www/my-project/

Vapor automatically determines the working directory in most situations. However, you may need to manually set it for advanced use cases.

You can override the working directory through the Droplet's initializer, or by passing the --workdir argument.

vapor run serve --workdir="/var/www/my-project"
Modifying Properties¶
The Droplet's properties can be changed programmatically or through configuration.

Programmatic¶

Properties on the Droplet are constant and can be overridden through the init method.

let drop = try Droplet(server: MyServerType.self)

Here the type of server the Droplet uses is changed to a custom type. When the Droplet is run, this custom server type will be booted instead of the default server.
Warning
Using the init method manually can override configured properties.
Configurable¶
If you want to modify a property of the Droplet only in certain cases, you can use addConfigurable. Say for example you want to email error logs to yourself in production, but you don't want to spam your inbox while developing.

let config = try Config()
config.addConfigurable(log: MyEmailLogger.init, name: "email")
let drop = try Droplet(config)

The Droplet will continue to use the default logger until you modify the Config/droplet.json file to point to your email logger. If this is done in Config/production/droplet.json, then your logger will only be used in production.
{ "log": "email" }
Supported Properties¶
Example¶
Let's create a custom logger to demonstrate Vapor's configurable properties.
AllCapsLogger.swift
final class AllCapsLogger: LogProtocol {
    var enabled: [LogLevel] = []

    func log(_ level: LogLevel, message: String, file: String, function: String, line: Int) {
        print(message.uppercased() + "!!!")
    }
}

extension AllCapsLogger: ConfigInitializable {
    convenience init(config: Config) throws {
        self.init()
    }
}

Now add the logger to the Droplet using the addConfigurable method for logs.

main.swift

let config = try Config()
config.addConfigurable(log: AllCapsLogger.init, name: "all-caps")
let drop = try Droplet(config)

Whenever the "log" property is set to "all-caps" in the droplet.json, our new logger will be used.
Config/development/droplet.json
{ "log": "all-caps" }
Here we are setting our logger only in the development environment. All other environments will use Vapor's default logger.
Config Initializable¶
For an added layer of convenience, you can allow your custom types to be initialized from configuration files.
In our previous example, we initialized an AllCapsLogger before adding it to the Droplet.
Let's say we want to allow our project to configure how many exclamation points get added with each log message.
AllCapsLogger.swift
final class AllCapsLogger: LogProtocol {
    var enabled: [LogLevel] = []

    let exclamationCount: Int

    init(exclamationCount: Int) {
        self.exclamationCount = exclamationCount
    }

    func log(_ level: LogLevel, message: String, file: String, function: String, line: Int) {
        print(message.uppercased() + String(repeating: "!", count: exclamationCount))
    }
}

extension AllCapsLogger: ConfigInitializable {
    convenience init(config: Config) throws {
        let count = config["allCaps", "exclamationCount"]?.int ?? 3
        self.init(exclamationCount: count)
    }
}

Note

The first parameter to config is the name of the file.

Now that we have conformed our logger to ConfigInitializable, we can pass just the type name to addConfigurable.

main.swift

let config = try Config()
config.addConfigurable(log: AllCapsLogger.init, name: "all-caps")
let drop = try Droplet(config)

Now if you add a file named allCaps.json to the Config folder, you can configure the logger.
allCaps.json
{ "exclamationCount": 5 }
With this configurable abstraction, you can easily change how your application functions in different environments without needing to hard code these values into your source code. | https://docs.vapor.codes/2.0/vapor/droplet/ | 2019-02-16T05:41:36 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.vapor.codes |
Rich Content
If a Message Part contains a large amount of content (> 2KB), it will be sent as rich content instead. This has no effect on how you send a Message.
Sending Rich Content
Sending large messages works the same way as standard messages:
// Creates a message part with an image and sends it to the specified conversation
NSData *imageData = UIImagePNGRepresentation(image);
LYRMessagePart *messagePart = [LYRMessagePart messagePartWithMIMEType:@"image/png" data:imageData];
LYRMessage *message = [layerClient newMessageWithParts:@[messagePart] options:nil error:nil];

// Sends the specified message
NSError *error = nil;
BOOL success = [conversation sendMessage:message error:&error];
if (success) {
    NSLog(@"Message enqueued for delivery");
} else {
    NSLog(@"Message send failed with error: %@", error);
}
Receiving Rich Content
There are some differences in how such a Message would be received:
Once the rich content has been downloaded (whether automatically or manually with the downloadContent method), it can be filtered by its MIME type and handled appropriately.
// Filter for PNG message part
NSPredicate *predicate = [NSPredicate predicateWithFormat:@"MIMEType == %@", @"image/png"];
LYRMessagePart *messagePart = [[message.parts filteredArrayUsingPredicate:predicate] firstObject];

// If it's ready, render the image from the message part
if (!(messagePart.transferStatus == LYRContentTransferReadyForDownload ||
      messagePart.transferStatus == LYRContentTransferDownloading)) {
    self.imageView.image = [UIImage imageWithData:messagePart.data];
}
Monitoring transfer progress
As Rich Content may involve upload of large files, being able to monitor the progress of sending these Messages can be significant. You can access the progress of any message part upload (while sending) or download (while receiving) via its progress.
LYRMessagePart *messagePart = message.parts.firstObject;

// Auto-download
LYRProgress *progress1 = messagePart.progress;

// Manual download
NSError *error;
LYRProgress *progress2 = [messagePart downloadContent:&error];
The progress property on the message part will reflect upload progress when sending a message.

The fractionCompleted property of LYRProgress reflects the download/upload progress as a percent value (ranging from 0.0 to 1.0). The LYRProgress object also declares the LYRProgressDelegate protocol, which you can implement to be notified of progress changes.
@interface MyAwesomeApp () <LYRProgressDelegate>
@end

@implementation MyAwesomeApp

- (void)progressDidChange:(LYRProgress *)progress
{
    NSLog(@"Transfer progress changed %f", progress.fractionCompleted);
}

@end
Best practice
The LYRProgress doesn't contain a reference back to the message part that it's associated with. A common UI is to show a progress bar in the view that will load the contents of the message part once it's downloaded. If you are doing this, set the progress delegate to the content view itself, and update the progress bar when the progress delegate method is called.
Aggregate progress
You can get the progress of multiple operations in one progress object by combining LYRProgress objects into a LYRAggregateProgress object (a subclass of LYRProgress). As each individual LYRProgress object updates, the aggregate progress will update accordingly based on the total size of all the operations.
LYRMessagePart *part1 = self.message.parts[0];
LYRMessagePart *part2 = self.message.parts[1];
LYRMessagePart *part3 = self.message.parts[2];

NSError *error;
LYRProgress *progress1 = [part1 downloadContent:&error];
LYRProgress *progress2 = [part2 downloadContent:&error];
LYRProgress *progress3 = [part3 downloadContent:&error];

LYRAggregateProgress *aggregateProgress = [LYRAggregateProgress aggregateProgressWithProgresses:@[progress1, progress2, progress3]];
Transfer status
LYRMessagePart objects provide a transferStatus property, which can be one of five values:
LYRContentTransferAwaitingUpload: The content has been saved in the local queue but hasn’t had a chance to start uploading yet
LYRContentTransferUploading: Content is currently uploading
LYRContentTransferReadyForDownload: Content is fully uploaded and ready on Layer servers but hasn’t had a chance to start downloading to the device yet
LYRContentTransferDownloading: Content is currently downloading
LYRContentTransferComplete: Content has finished uploading or downloading
The transferStatus value is useful for certain use cases, such as only showing fully-downloaded messages. It is queryable.
Background transfers
You can continue a download or upload while the app is in the background by enabling background transfers.
layerClient.backgroundContentTransferEnabled = YES;
If you enable background content transfer, your app delegate must implement this method and call handleBackgroundContentTransfersForSession:completion: on the Layer client:
- (void)application:(UIApplication *)application handleEventsForBackgroundURLSession:(NSString *)identifier completionHandler:(void (^)())completionHandler
{
    // You'll have to get access to your `layerClient` instance.
    // This may be a property in your app delegate,
    // or accessible via a `LayerController` or similar class in your app.
    [layerClient handleBackgroundContentTransfersForSession:identifier completion:^(NSArray *changes, NSError *error) {
        NSLog(@"Background transfers finished with %lu change(s)", (unsigned long)changes.count);
        completionHandler();
    }];
}
How to Publish Configuration Manager Site Information to Active Directory Domain Services
Applies To: System Center Configuration Manager 2007, System Center Configuration Manager 2007 R2, System Center Configuration Manager 2007 R3, System Center Configuration Manager 2007 SP1, System Center Configuration Manager 2007 SP.
To enable a Configuration Manager site to publish site information to Active Directory Domain Services
In the Configuration Manager console, navigate to System Center Configuration Manager / Site Database / Site Management / <site code> - <site name>.
Right-click <site code> - <site name>, and click Properties.
On the Advanced tab of site properties, select the Publish this site in Active Directory Domain Services check box.
Click OK.
See Also
Other Resources
How to Extend the Active Directory Schema for Configuration Manager
For additional information, see Configuration Manager 2007 Information and Support.
To contact the documentation team, email SMSdocs@microsoft.com.
How to create and configure Azure Integration Runtime
The Integration Runtime (IR) is the compute infrastructure used by Azure Data Factory to provide data integration capabilities across different network environments. For more information about IR, see Integration runtime.
Azure IR provides a fully managed compute to natively perform data movement and dispatch data transformation activities to compute services like HDInsight. It is hosted in Azure environment and supports connecting to resources in public network environment with public accessible endpoints.
This document introduces how you can create and configure Azure Integration Runtime.
Default Azure IR
By default, each data factory has an Azure IR in the backend that supports operations on cloud data stores and compute services in public network. The location of that Azure IR is auto-resolve. If connectVia property is not specified in the linked service definition, the default Azure IR is used. You only need to explicitly create an Azure IR when you would like to explicitly define the location of the IR, or if you would like to virtually group the activity executions on different IRs for management purpose.
Create Azure IR
Integration Runtime can be created using the Set-AzureRmDataFactoryV2IntegrationRuntime PowerShell cmdlet. To create an Azure IR, you specify the name, location and type to the command. Here is a sample command to create an Azure IR with location set to "West Europe":
Set-AzureRmDataFactoryV2IntegrationRuntime -DataFactoryName "SampleV2DataFactory1" -Name "MySampleAzureIR" -ResourceGroupName "ADFV2SampleRG" -Type Managed -Location "West Europe"
For Azure IR, the type must be set to Managed. You do not need to specify compute details because it is fully managed elastically in cloud. Specify compute details like node size and node count when you would like to create Azure-SSIS IR. For more information, see Create and Configure Azure-SSIS IR.
You can configure an existing Azure IR to change its location using the Set-AzureRmDataFactoryV2IntegrationRuntime PowerShell cmdlet. For more information about the location of an Azure IR, see Introduction to integration runtime.
Use Azure IR
Once an Azure IR is created, you can reference it in your Linked Service definition. Below is a sample of how you can reference the Azure Integration Runtime created above from an Azure Storage Linked Service:
{ "name": "MyStorageLinkedService", "properties": { "type": "AzureStorage", "typeProperties": { "connectionString": { "value": "DefaultEndpointsProtocol=https;AccountName=myaccountname;AccountKey=...", "type": "SecureString" } }, "connectVia": { "referenceName": "MySampleAzureIR", "type": "IntegrationRuntimeReference" } } }
Next steps
See the following articles on how to create other types of integration runtimes:
Connect overlay

The Connect overlay appears over the standard user interface. It consists of the Connect sidebar and any Connect mini windows that are open.

Figure 1. Connect overlay

Note: An administrator can disable the Connect overlay so users can only use the Connect workspace, a full-screen interface with additional Connect tools.

Connect sidebar

The Connect sidebar is the primary interface for Connect Chat and Connect Support. It lists your conversations and provides access to create new conversations.

Connect mini windows

When you open a Connect Chat or Connect Support conversation in the Connect overlay, it opens in a Connect mini window. Each mini window contains a header, a conversation area, and a message field.
Please consult the Online Help File for the most up to date information and functionality.
The DriveWorks SOLIDWORKS CAM PowerPack plug-in extends a DriveWorks implementation by adding CAM related Generation Tasks.
Once downloaded, double click the DriveWorks-SolidWorksCamPowerPack-[version number].msi file to begin the installation process.
DriveWorks and SolidWorks should be closed while installing the plug-in.
Once installed the plug-in is automatically loaded in DriveWorks.
The plug-in is uninstalled from Windows Programs and Features and will be listed as DriveWorks-SolidWorksCamPowerPack [version number].
Once the SOLIDWORKS CAM PowerPack is installed, the following tasks will be available from Generation Tasks.
It is common to include these tasks in the Post Drive Tasks area of the Generation Tasks. This will allow the part to be driven by DriveWorks before any CAM data is created.
Each SOLIDWORKS CAM task is a separate task so other generation tasks can be run in between or after any CAM operation.
It is recommended that the tasks are run in the following order:
This topic describes how to modify a Microsoft SQL Server Agent proxy in SQL Server 2016 by using SQL Server Management Studio or Transact-SQL.
In This Topic
Before you begin:
Limitations and Restrictions
To modify a SQL Server Agent proxy, using:
SQL Server Management Studio
Before You Begin

Using SQL Server Management Studio
Using Transact-SQL
To modify a SQL Server Agent proxy
In Object Explorer, connect to an instance of Database Engine.
On the Standard bar, click New Query.
Copy and paste the following example into the query window and click Execute.
-- Disables the proxy named 'Catalog application proxy'.
USE msdb ;
GO
EXEC dbo.sp_update_proxy
    @proxy_name = 'Catalog application proxy',
    @enabled = 0;
GO
For more information, see sp_update_proxy (Transact-SQL).
Welcome to Vitalus’ documentation!¶
Vitalus is a rsync wrapper. Rsync is a good atomic tool, but it needs to be wrapped to have a real backup solution. Backup solutions are generally too basic or very difficult. This one fits my needs.
Contents:
Philosophy¶
- I want to centralize my backup in a unique script to achieve many tasks.
- I want to backup my desktop on external disks.
- I want to backup external disks to other external disks.
- I want to backup my /home/user on hosts accessible via ssh to a local disk.
- I want to backup my destop to hosts accessible via ssh.
- I want to keep increment if I need it.
- I want to adjust the frequency of copies for each task. The script starts much more frequently.
- I want readable log files telling me if everything goes fine.
- ...
Functionalities¶
This is just another way to express the philosophy :)
- Manage different tasks
- rsync from and to local disks
- rsync from SSH to local disk
- rsync from local to SSH almost supported
- Check disk space (local disks)
- Keep track of time evolution (increments done with hard links), not possible via SSH for the moment.
- Old increments deleted (keeping a minimal number of increments)
- Rotated logs (general + one per task)
How to install?¶
See How to install?.
How to setup?¶
In my use case, I have a cron job running every hour. IMHO, this is quite atomic. Then, the script decides which task has to be done.
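The per-task decision described above — start the wrapper frequently, run each task only when its own period has elapsed — can be sketched in Python. The function name and the hour-based period are illustrative assumptions, not Vitalus' actual internals:

```python
import time

def task_is_due(last_run_epoch, now_epoch, min_period_hours):
    """Return True if enough time has elapsed since the task last ran.

    The wrapper itself is started frequently (e.g. hourly by cron);
    each task only runs when its own minimal period has elapsed.
    """
    elapsed_hours = (now_epoch - last_run_epoch) / 3600.0
    return elapsed_hours >= min_period_hours

# A task with a 24-hour period, last run 30 hours ago, is due;
# one last run 2 hours ago is not.
now = time.time()
print(task_is_due(now - 30 * 3600, now, 24))  # True
print(task_is_due(now - 2 * 3600, now, 24))   # False
```

This is why an hourly cron entry is enough: the script itself enforces each task's real backup frequency.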
About ssh¶
Keys must be configured with an empty passphrase. Add something like the following to your ~/.ssh/config:

Host sciunto.org
    IdentityFile ~/.ssh/key-empty-passphrase
Source or destination must have the format: login@server:path
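A small helper for validating and splitting that format might look like the sketch below. It is purely illustrative; Vitalus' own parsing may differ:

```python
def split_remote(path):
    """Split 'login@server:path' into (login, server, path).

    Returns None if the string does not look like a remote location
    (i.e. it should be treated as a plain local path).
    """
    if "@" not in path or ":" not in path:
        return None
    login, rest = path.split("@", 1)
    server, _, remote_path = rest.partition(":")
    if not (login and server and remote_path):
        return None
    return login, server, remote_path

print(split_remote("alice@sciunto.org:/home/alice"))  # ('alice', 'sciunto.org', '/home/alice')
print(split_remote("/local/backup"))                  # None
```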
After you specify whether you want to run the copy operation immediately, and after you optionally save the package that the wizard created, the SQL Server Import and Export Wizard shows Complete the Wizard. On this page, you review the choices that you made in the wizard, and then click Finish to start the copy operation.
Screen shot of the Complete the Wizard page
The following screen shot shows a simple example of the Complete the Wizard page of the wizard.
Review the options you selected
Review the summary and verify the following information:
- The source and destination of the data to copy.
- The data to copy.
- Whether the package will be saved.
- Whether the package will be run immediately.
What's next?
After you review the choices that you made in the wizard and click Finish, the next page is Performing Operation. On this page, you see the progress and the result of the operation that you configured on the preceding pages. For more info, see Performing Operation.
See also
Get started with this simple example of the Import and Export Wizard
Box Connector Release Notes
Anypoint Connector for Box provides a bi-directional gateway between Box, a secure content management and collaboration platform, and Mule.
Version 3.1.0 - March 8, 2017
Features
The following operations now support pagination:
Folders
Get Folder’s Items
Get Trashed Items
Get Folder Collaborations
Groups
Get Groups for an Enterprise
Get Memberships for Group
Get User’s Memberships.
Users
Get Enterprise Users
Improvement of exception messages: in addition to the HTTP status code, error messages also return the complete description of the failure cause.
Fields are now validated before sending the request: previously only a HTTP 400 response was returned.
New operation
Search with Parameters: unlike the search provided by the Box SDK (which remains as an operation but is deprecated), it provides all the parameters supported by the API, except for mdfilters and filters.
Version 3.0.0 - August 11, 2016
Version 2.5.2 - April 23, 2015
Community
Version 2.5.2 - Fixed in this release
Retrieval of Remote User Id to enable integration with Dataloader.
Version 2.4.1 - September 25, 2013
See Also
Learn how to Install Anypoint Connectors using Anypoint Exchange.
Read more about Box Connector.
Access MuleSoft’s Forum to pose questions and get help from Mule’s broad community of users.
To access MuleSoft’s expert support team, subscribe to Mule ESB Enterprise and log in to MuleSoft’s Customer Portal.
Understanding Application SSH
Page last updated:
This document describes details about the Pivotal Web Services (PWS) SSH components for access to deployed application instances. Pivotal Web Services (PWS) supports native SSH access to applications and load balancing of SSH sessions with the load balancer for your PWS deployment.
The SSH Overview document describes procedural and configuration information about application SSH access.
SSH Components
The PWS SSH includes the following central components, which are described in more detail below:
- An implementation of an SSH proxy server.
- A lightweight SSH daemon.
If these components are deployed and configured correctly, they provide a simple and scalable way to access containerized apps and other long-running processes (LRPs).
SSH Daemon
The SSH daemon is a lightweight implementation that is built around the Go SSH library. It supports command execution, interactive shells, local port forwarding, and secure copy. The daemon is self-contained and has no dependencies on the container root file system.
The daemon is focused on delivering basic access to application instances in PWS. It is intended to run as an unprivileged process, and interactive shells and commands will run as the daemon user. The daemon only supports one authorized key, and it is not intended to support multiple users.
The daemon can be made available on a file server and Diego LRPs that want to use it can include a download action to acquire the binary and a run action to start it. PWS applications will download the daemon as part of the lifecycle bundle.
SSH Proxy Authentication
The SSH proxy hosts the user-accessible SSH endpoint and is responsible for authentication, policy enforcement, and access controls in the context of PWS.
What is machine learning?
Machine learning is a technique of data science that helps computers learn from existing data in order to forecast future behaviors, outcomes, and trends. These forecasts or predictions from machine learning can make apps and devices smarter.
For a brief overview, try the video series Data Science for Beginners. Without using jargon or math, Data Science for Beginners introduces machine learning and steps you through a simple predictive model.
What is Machine Learning in the Microsoft Azure cloud?
Azure Machine Learning not only provides tools to model predictive analytics, but also provides a fully managed service you can use to deploy your predictive models as ready-to-consume web services.
What is predictive analytics?
Predictive analytics uses math formulas called algorithms that analyze historical or current data to identify patterns or trends in order to forecast future events.
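As a toy illustration of that idea — not Azure Machine Learning itself, just the underlying notion of fitting a trend to historical data and extrapolating it — a least-squares line can forecast the next values:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b over historical points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Historical data following a clear trend: y = 2x + 1.
history_x = [0, 1, 2, 3]
history_y = [1, 3, 5, 7]
a, b = fit_line(history_x, history_y)
print(a * 10 + b)  # forecast at x=10 -> 21.0
```

Real predictive models use far richer algorithms, but the pattern-then-forecast structure is the same.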
Tools to build complete machine learning solutions in the cloud
Azure Machine Learning has everything you need to create complete predictive analytics solutions in the cloud, from a large algorithm library, to a studio for building models, to an easy way to deploy your model as a web service. Quickly create, test, operationalize, and manage predictive models.
Machine Learning Studio: Create predictive models
In Machine Learning Studio, you can quickly create predictive models by dragging, dropping, and connecting modules. You can experiment with different combinations, and try it out for free.
In Cortana Intelligence Gallery, you can try analytics solutions authored by others or contribute your own. Post questions or comments about experiments to the community, or share links to experiments via social networks such as LinkedIn and Twitter.
Use a large library of Machine Learning algorithms and modules in Machine Learning Studio to jump-start your predictive models. Choose from sample experiments, R and Python packages, and best-in-class algorithms from Microsoft businesses like Xbox and Bing. Extend Studio modules with your own custom R and Python scripts.
Operationalize predictive analytics solutions by publishing your own web services
The following tutorials show you how to operationalize your predictive analytics models:
- Deploy web services
- Retrain models through APIs
- Manage web service endpoints
- Scale a web service
- Consume web services
Changes the owner of an object in the current database.
Important: sp_changeobjectowner changes both the schema and the owner. To preserve compatibility with earlier versions of SQL Server, this stored procedure will only change object owners when both the current owner and the new owner own schemas that have the same name as their database user names.
Important
A new permission requirement has been added to this stored procedure.
Applies to: SQL Server (SQL Server 2008 through current version).
Transact-SQL Syntax Conventions
Syntax
sp_changeobjectowner [ @objname = ] 'object' , [ @newowner = ] 'owner'
Arguments
Return Code Values
0 (success) or 1 (failure)
Remarks
To change the owner of a securable, use ALTER AUTHORIZATION. To change a schema, use ALTER SCHEMA.
Permissions
Requires membership in the db_owner fixed database role, or membership in both the db_ddladmin fixed database role and the db_securityadmin fixed database role, and also CONTROL permission on the object.
Examples
The following example changes the owner of the authors table to Corporate\GeorgeW.

EXEC sp_changeobjectowner 'authors', 'Corporate\GeorgeW';
GO
See Also
ALTER SCHEMA (Transact-SQL)
ALTER DATABASE (Transact-SQL)
ALTER AUTHORIZATION (Transact-SQL)
sp_changedbowner (Transact-SQL)
System Stored Procedures (Transact-SQL)
About Custom Policies
If you want an API policy that isn’t included in the default set of policies, you can create your own custom policy. A custom policy requires the following files:
Policy Definition - YAML file that describes the policy and its configurable parameters
Policy Configuration - XML file with the backend processes that implement the policy
A custom policy must also contain a pointcut declaration.
You can create a custom policy if you use one of the following runtimes:
Mule 3.8 or later unified runtime
API Gateway runtime 1.3 or later
In Studio 6.1 and later, you can use the Studio custom policy editor (Beta).
Limitations
Custom policies must be self-contained. From a custom policy, do not reference another dependent policy, a connector on the app, or a shared connector.
You create a custom policy by leveraging the elements in Mule Runtime to evaluate and process HTTP calls and responses. The policy filters calls to an API and matches one of the query parameters in the request against a configurable regular expression. Unmatched requests are rejected. You set the HTTP status and the payload to indicate an error whenever a request does not match the conditions of the filter.
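Outside of Mule's actual XML policy format, the filtering logic just described amounts to something like the following Python sketch. The parameter name, the 400 status, and the payload shape are illustrative assumptions, not part of the policy specification:

```python
import re

def evaluate_request(query_params, param_name, pattern):
    """Accept the request if query_params[param_name] matches pattern.

    Returns (http_status, payload): (200, None) on a match, otherwise
    an error status and payload, mirroring the reject behavior of the
    custom policy described in the text.
    """
    value = query_params.get(param_name, "")
    if re.fullmatch(pattern, value):
        return 200, None
    return 400, {"error": "request rejected by policy"}

print(evaluate_request({"apiKey": "abc123"}, "apiKey", r"[a-z0-9]+"))
# (200, None)
print(evaluate_request({"apiKey": "BAD KEY"}, "apiKey", r"[a-z0-9]+"))
# (400, {'error': 'request rejected by policy'})
```

In a real custom policy, the regular expression would be one of the configurable parameters declared in the YAML definition.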
Applying a Custom Policy
To make a custom policy available to users, you add the policy to Anypoint Platform in API Manager. On the API version details page of an API, users can then choose Policies, select the custom policy from the list, and apply the policy to the API.
If you deploy the API on a private server using a .zip file that you downloaded from Anypoint Platform, the new policy is available for on-premises users to apply. Even if the proxy was already deployed before creating the policy, there's no need to re-download or re-deploy anything. The new policy automatically downloads to the /policies folder, in the location where the API Gateway or Mule 3.8 unified runtime is installed. Configure your organization's Client ID and Token in the wrapper.conf file.
Failed Policies
In Mule Runtime 3.8 and later and API Gateway Runtime 2.1 and later, when an online policy is malformed and raises a parse exception, it is stored in the failedPolicies directory inside the policies directory, waiting to be reviewed. In the next poll for policies it is not parsed. If you delete that policy, it is deleted from that folder too. If the folder has no policies, it is deleted.
The basic units of OpenShift and Kubernetes add the ability to orchestrate Docker containers across multi-host installations.
Though you do not directly interact with Docker tools when using OpenShift, understanding Docker’s capabilities and terminology is important for understanding its role in OpenShift and how your applications function inside of containers. Docker is available as part of RHEL 7, as well as CentOS and Fedora, so you can experiment with it separately from OpenShift. Refer to the article Get Started with Docker Formatted Container Images on Red Hat Systems for a guided introduction.
Docker containers are based on Docker images. A Docker image is a binary that includes all of the requirements for running a single Docker container, as well as metadata describing its needs and capabilities. You can think of it as a packaging technology. Docker containers only have access to resources defined in the image, unless you give the container additional access when creating it. By deploying the same image in multiple containers across multiple hosts and load balancing between them, OpenShift can provide redundancy and horizontal scaling for a service packaged into an image.
You can use Docker directly to build images, but OpenShift also supplies builders that assist with creating an image by adding your code or configuration to existing images.
A Docker registry is a service for storing and retrieving Docker images. A registry contains a collection of one or more Docker image repositories. Each image repository contains one or more tagged images. Docker provides its own registry, the Docker Hub, but you may also use private or third-party registries. Red Hat provides a Docker registry at registry.access.redhat.com for subscribers. OpenShift can also supply its own internal registry for managing custom Docker images.
The relationship between Docker containers, images, and registries is depicted in the following diagram:
Storage as a Service
- Overview
- Benefits
- Using Storage Service Applications
- Web-Based Storage Browser
- Storage Service Scenarios
Overview
The CloudCenter platform decouples shared storage from CloudCenter and allows you to attach your own external Artifact Repository to store and access files. To this effect, the CloudCenter platform provides a File System service to build and own storage. This feature, also referred to as storage as a service or storage service, provides read-write storage access across users and deployments.
Benefits
The benefits of using storage as a service include:
- The ability to mount storage with multiple disks and encryption.
- The ability to control the cost of storage by providing the storage service at the application level (rather than at the system level).
- The flexibility to use the storage service as part of one application or as part of the deployment service for multiple applications.
Job-based applications (not N-tier applications) that require persistent storage can use the Storage as a Service application.
Using Storage Service Applications
You can model and deploy N-tier applications for a single tier using the following storage service options:
The storage repository is automatically mounted when you reboot a running storage service application. The NFS or CephFS storage service is mounted at the /shared path, and files can be read from or written to that path.
Web-Based Storage Browser
CloudCenter launches the elFinder web-based storage browser using your SSO credentials (see SSO AD or SAML SSO or Use Case: Shibboleth SSO for additional details) so you can drag and drop files when using storage services. You do not need to explicitly login to the storage browser.
Storage Service Permissions
All users within the tenant can read (view) this storage service by default.
To allow specific user groups to have full access (read and write) to this storage service, add the group name to the User Groups (Write Permission) field.
By default, the following conditions apply to this storage service:
Users from different tenants cannot access this service and associated storage browser.
Storage space owners have read/write access.
Users in any user group that was included in the User Groups (Write Permission) field have read/write access.
Other users within this tenant have read only access.
Configuring Storage Services
To use this feature, you must verify the following requirements:
- Allow the storage browser UI to be accessed by the CloudCenter UI user. All users in a tenant can view this service by default.
- Specify user groups that can have full access (read and write) to this storage service and click Save App.
End-to-End Storage Configuration Use Case
To configure NFS storage using the storage mounting feature, follow this process.
- Login as the tenant admin and create a shared deployment environment, called StorageNFS.
- Log into the CCM UI as the tenant administrator.
- Click Deployment > Deployment Environments > Add New Deployment Environment.
- Assign the following values (see Deployment Environment for additional context):
- Name = StorageNFS
- Select the required Cloud Region and Account
- Identify the Default Cloud Region, if applicable.
- Select the Default Instance Type
- Click Save. The Deployment Environments page refreshes to display the new deployment environment StorageNFS.
- Share StorageNFS with users (see Permission Control for additional context).
- Click the Share Action icon for the StorageNFS deployment environment.
- In the Share popup, verify that you are in the Users tab (default) and select Share with all users.
- Leave the Access permissions as View (default).
- Change the User's Deployment permissions to Access.
- Change the Others' Deployment permissions to Access.
- Click Save.
- Model an application using NFS storage service and grant read/write access to the user called group GroupNFS.
- In the CCM UI, click Applications > Model > N-Tier Execution.
- In the Topology Modeler,
- Click the Basic Information tab and assign the following values (see Model an Application for additional context).
- Web App Name = NFS Demo
- Version = 1.0
- Click the Topology Modeler tab.
- Click File System in the Services pane.
- Drag and drop the NFS service into the Graphical Workflow pane.
- Click the dropped NFS service box to access the corresponding properties in the Properties pane.
- In the General Settings section, assign the following values:
- In the Hardware Specification section, assign the following value:
- Memory = 256 MB
- Click Save as App. StorageNFS now displays in the Applications page along with other applications.
- Launch a new job from the modeled NFS Storage application into StorageNFS.
- Click the StorageNFS application to deploy it and assign the following values (see Deploy the Application for additional context).
- Deployment Name = NFS-Demo1
- In the nfs_0 section click Advanced and assign the following values (see IP Allocation Mode > NIC Configuration for additional context).
- Cluster = cluster1
- Datastore = DatastoreCluster
- Resource Pool = CliQr
- Target Deployment Folder = /test2
- Verify that the Network Interface Controller is displaying the configured number of NIC cards and the Network is identified in the Attach Network Interfaces section.
- Click Submit.
- The page refreshes to display the deployed job details.
- Access the NFS storage using the administrator account and upload data.
- In the CCM UI, access the Deployment page for the NFS-Demo1 deployment and click the Access NFS-Demo1 button.
- CloudCenter launches the web browser and displays the Your connection is not private... screen.
- Click Proceed to <URL> (unsafe) if you wish to proceed.
- The storage browser displays.
- Right-click and select New folder from the available options.
- Name this folder Test1.
- Create a second new folder and name it Test2.
- Select and drill into Test1.
- Right-click and select Upload files.
- In the Upload files popup, click Select files to upload.
- Select the required file(s) from the Browser file selection window. For example, Sample.png and Sample.dll.
- Log out of the storage browser and login again.
- Click Logout in the storage browser.
- Access the CCM UI and click the Access NFS-Demo-1 button to login again.
The following image provides an example of the Access NFS button:
- Verify that your files and folders are displaying as configured.
- Log out of the administrator account.
- Click Logout in the storage browser.
- Try accessing the storage browser URL and verify that the permissions are secure.
- From another browser, log in as a standard user (User1).
- In the CCM UI click Deployments.
- Expand the + icon for NFS-Demo1 and click the NFS-Demo1_run_1 link. The "NFS-Demo1_run_1" Deployment Details page displays.
- Click the Access NFS-Demo1 button. You should see the There is a problem with this website's security certificate... screen.
- If you want to proceed, click Continue to this webpage (not recommended).
- Verify that your files and folders are displaying as configured and that this user has view access to the storage browser and the files and folders.
- Add User1 to GroupNFS.
- Access the CCM UI as the tenant administrator.
- Click Admin > Groups
- Locate GroupNFS in the list and click the Edit link for GroupNFS.
- In the Edit User Group page, locate the Associated Users section.
- In the Add user to this group field within this section, select User1 from the list of users that display when you start typing us...(be aware that only users within this tenant can be added to this group).
- Click Save.
- User1 can now access the NFS storage in read/write mode.
- Login to the CCM UI as User1.
- Click Deployments.
- Expand the + icon for NFS-Demo1 and click the NFS-Demo1_run_1 link. The "NFS-Demo1_run_1" Deployment Details page displays.
- Click the Access NFS-Demo1 button.
- Verify that your files and folders are displaying as configured and that this user has READ/WRITE access to the storage browser and the files and folders.
Storage Service Scenarios
Enterprises can use the storage service in multiple scenarios:
- To write application logs and other output files to a persistent storage.
- To write output files when benchmarking applications or to run job-based applications.
- Other scenarios identified in this section.
Scenario 1: Unique Storage Service Per Deployment
In this scenario, the storage service is unique for each deployment. The storage settings are fine tuned based on the application dependencies and requirements. The application only needs a simple node initialization script to mount the storage and the Storage IP address can be passed as an environment variable through the NFS service.
Scenario 2: Storage Service Shared across Deployments
In this scenario, the storage service is shared across permitted deployments in your enterprise. The storage settings are fine tuned based on the application dependencies and requirements. Your enterprise can have multiple storage services available and you can set the instance at deployment time. The application only needs a simple node initialization script to mount the storage and the Storage IP address can be passed as an environment variable through the he CloudCenter platform.
Scenario 3: Storage Service Shared with Users
In this scenario, the storage service is shared with all users. The storage settings are fine tuned based on the application dependencies and requirements. Your enterprise can have multiple storage services available and then you can set the NFS service IP address or DNS at deployment time. The application only needs a simple node initialization script to mount the storage and the Storage IP address can be passed as a custom parameter.
- No labels | http://docs.cliqr.com/display/CCD48/Storage+as+a+Service | 2017-05-22T17:31:46 | CC-MAIN-2017-22 | 1495463605485.49 | [array(['/download/attachments/1084674/Screen%20Shot%202015-07-20%20at%208.50.16%20AM.png?version=1&modificationDate=1437411401000&api=v2',
None], dtype=object)
array(['/download/attachments/1084674/Screen%20Shot%202015-07-20%20at%209.06.13%20AM.png?version=1&modificationDate=1437411394000&api=v2',
None], dtype=object)
array(['/download/attachments/1084674/Screen%20Shot%202015-10-28%20at%2012.48.11%20PM.png?version=1&modificationDate=1446061876000&api=v2',
None], dtype=object)
array(['/download/attachments/1084674/Screen%20Shot%202015-07-20%20at%209.30.53%20AM.png?version=1&modificationDate=1437411398000&api=v2',
None], dtype=object)
array(['/download/attachments/1084674/Screen%20Shot%202015-07-20%20at%209.36.52%20AM.png?version=1&modificationDate=1437411393000&api=v2',
None], dtype=object)
array(['/download/attachments/1084674/Screen%20Shot%202015-07-20%20at%209.53.02%20AM.png?version=1&modificationDate=1437411400000&api=v2',
None], dtype=object) ] | docs.cliqr.com |
(PECL paradox >= 1.4.0)
px_update_record — Updates record in paradox database
$pxdoc, array
$data, int
$num).
Nota:.
Retorna
TRUE em caso de sucesso ou
FALSE em caso de falha.
px_insert_record() - Inserts record into paradox database | http://docs.php.net/manual/pt_BR/function.px-update-record.php | 2017-05-22T17:29:18 | CC-MAIN-2017-22 | 1495463605485.49 | [] | docs.php.net |
When designing your target SuperSTAR database, in most cases you will want to create references between the fact table columns and the classification tables that contain the descriptions of the values in the fact table.
For example, you might have a fact table column called Gender that contains codes like M, F and U, and create a reference to a classification table containing the descriptions Male, Female, and Unknown:
However, you can also create classification tables directly from fact table columns. When you do this, SuperCHANNEL will automatically create the classifications based on the values in the fact table (it uses the "add to classification" cleansing action to do this during the build).
Although you can create a classification table from any fact table column, it is most useful in situations where the fact table column contains the actual descriptions, rather than a code.
For example, the Customer table in the sample retail banking database contains a column called Company, which contains values such as Individual or Company:
To include this column in the SXV4, you can create a classification table in SuperCHANNEL, as follows:
Right-click the fact table column in the Target View and select Create Classification.
The Create Classification dialog displays.
Enter a name for the classification table and click OK. The table name must be unique (it cannot be the same as any existing tables in the source database).
This label will appear by default in the drop-down list in SuperWEB2 (although you can override the drop-down labels if necessary):
SuperCHANNEL automatically creates the classification table with code and name columns:
SuperCHANNEL automatically sets the cleansing action on the fact table column to add to classification. Leave this setting unchanged.
This means that during the build, SuperCHANNEL will automatically populate both the name and code columns in the new classification table with the values from the fact table column: | http://docs.spacetimeresearch.com/display/SuperSTAR98/Create+Classification+Tables+-+SuperCHANNEL | 2017-05-22T17:17:57 | CC-MAIN-2017-22 | 1495463605485.49 | [] | docs.spacetimeresearch.com |
Troubleshooting: BBM
I can't add a BBM contact
-
.
I.
An app keeps prompting me:
- In the Settings app, tap Security and Privacy > Application Permissions.
- Tap the app's name.
- Change the options. For example, limit certain actions or disconnect the app from BBM.
- Tap
.
I accidentally blocked an invitation
- In BBM, swipe down from the top of the screen.
- Tap
> Blocked Contacts and Updates.
- Touch and hold a name.
- Tap
.
I can't use BBM
- Check that your device is connected to a wireless network.
- Check that other apps that require Internet access are working. For example, open the BlackBerry Browser and go to a website.
- Open BBM. If prompted, complete any setup steps.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/60523/laf1334697770228.jsp | 2014-11-21T02:46:36 | CC-MAIN-2014-49 | 1416400372542.20 | [] | docs.blackberry.com |
Writing a pyglet application¶
Getting started with a new library or framework can be daunting, especially when presented with a large amount of reference material to read. This chapter gives a very quick introduction to pyglet without going into too much detail.
Hello, World¶()
This will enter pyglet’s default event loop, and let pyglet respond to
application events such as the mouse and keyboard.
Your event handlers will now be called as required, and the
run() method will return only when all application
windows have been closed.
If you are coming from another library, you may be used to writing your own event loop. This is possible to do with pyglet as well, but it is generally discouraged; see The application event loop for details.
Image viewer¶
Most games and applications will need to load and display images on the screen. In this example we’ll load an image from the application’s directory and display it within the window:
import pyglet window = pyglet.window.Window() image = pyglet.resource.image('kitten.png') @window.event def on_draw(): window.clear() image.blit(0, 0) pyglet.app.run()
We used the
image() function of
pyglet.resource.
Handling mouse and keyboard events¶. An easy way to find the event names and parameters you need is to add the following lines to your program:
event_logger = pyglet.window.event.WindowEventLogger() window.push_handlers(event_logger)
This will cause all events received on the window to be printed to the console.
An example program using keyboard and mouse events is in examples/programming_guide/events.py
Playing sounds and music¶
pyglet makes it easy to play and mix multiple sounds together.
pyglet.media.load().
By default, audio is streamed when playing. This works well for longer music
tracks. Short sounds, such as a gunfire shot used in a game, should instead be
fully decoded in memory before they are used. This allows them to play more
immediately and incur less of a CPU performance penalty. It also allows playing
the same sound repeatedly without reloading it..
Where to next?¶
The examples above have shown you how to display something on the screen, and perform a few basic tasks. You’re probably left with a lot of questions about these examples, but don’t worry. The remainder of this programming guide goes into greater technical detail on many of pyglet’s features. If you’re an experienced developer, you can probably dive right into the sections that interest you.
For new users, it might be daunting to read through everything all at once. If you feel overwhelmed, we recommend browsing through the beginnings of each chapter, and then having a look at a more in-depth example project. You can find an example of a 2D game in the In-depth game example section.
To write advanced 3D applications or achieve optimal performance in your 2D applications, you’ll need to work with OpenGL directly. If you only want to work with OpenGL primitives, but want something slightly higher-level, have a look at the Graphics module.
There are numerous examples of pyglet applications in the
examples/
directory of the documentation and source distributions. If you get
stuck, or have any questions, join us on the mailing list or Discord! | https://pyglet.readthedocs.io/en/latest/programming_guide/quickstart.html | 2020-07-02T10:27:08 | CC-MAIN-2020-29 | 1593655878639.9 | [] | pyglet.readthedocs.io |
Installing the TFS Process Template Editor
After an extended absence, I returned to
hacking Domain-Specific Language Tools for Visual Studio 2005 Redistributable Components. No problemo... but still no Editor. I then repaired the Power Tool... but still no Editor. It was only after an uninstall/reinstall cycle of the Power Tool did the Editor appear on the Team menu of Visual Studio.
The moral of the story... RTFM and install the DSL Tools before the Power Tool. | https://docs.microsoft.com/en-us/archive/blogs/anlynes/installing-the-tfs-process-template-editor | 2020-07-02T09:50:17 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.microsoft.com |
Overview
When setting up your sending domain, you'll need to add specific DNS records to authenticate your domain and get the best deliverability possible for your emails.
Update your DNS information
To get your domain authenticated, go to Account Settings
> Domains & IPs. From there, send the information to your DNS administrator or copy the fields for use in your DNS provider. To email them to your DNS administrator, select the email the full list of DNS records to a teammate option, and enter the email address of your administrator.
Once your administrator has updated the DNS records, you can return to the Domain & IPs screen and click Verify all DNS Records at the top of the table. The authentication usually takes 10-15 seconds. Once verified, the status for all the records should become green checkboxes.
If the entries do not verify, the DNS entries may still be propagating. This propagation period typically takes no longer than 30 minutes. If this period hasn't passed, please wait and try again. If you have issues with this step, please reach out to Support with questions.
Next: Setup Your Sender Profile | https://docs.zaius.com/hc/en-us/articles/360012277794-Authenticate-your-sending-domain | 2020-07-02T08:10:41 | CC-MAIN-2020-29 | 1593655878639.9 | [array(['/hc/article_attachments/360024091133/mceclip0.png', None],
dtype=object)
array(['/hc/article_attachments/360022648193/mceclip0.png', None],
dtype=object) ] | docs.zaius.com |
API Reference
Image Height
h
The height of the output image. Primary mode is a positive integer, interpreted as pixel height. The resulting image will be
h pixels tall.
A secondary mode is a float between
0.0 and
1.0, interpreted as a ratio in relation to the source image size. For example, an
h value of
0.5 will result in an output image half the height of the source image.
If only one dimension is specified, the other dimension will be calculated according to the aspect ratio of the input image. If both width and height are omitted, then the source image’s dimensions are used.
If the
fit parameter is set to
clip or
max, then the actual output height may be equal to or less than the dimensions you specify. If you’d like to resize using only
h, please use
fit=clip to ensure that the other dimension will be accurately calculated.
Note: The maximum output image size on imgix is 8192×8192 pixels. All output images will be sized down to accomodate this limit. | https://docs.imgix.com/apis/url/size/h | 2018-09-18T18:27:31 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.imgix.com |
All content with label as5+gridfs+infinispan+listener+non-blocking.
Related Labels:
expiration, publish, datagrid, interceptor, server, transactionmanager, release,, migration, jpa, filesystem, tx, gui_demo, eventing, client_server, testng, infinispan_user_guide, standalone, snapshot, webdav, hotrod, docs, consistent_hash, batching, store, jta, faq, 2lcache, jsr-107, jgroups, lucene, locking, hot_rod
more »
( - as5, - gridfs, - infinispan, - listener, - non-blocking )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/as5+gridfs+infinispan+listener+non-blocking | 2018-09-18T18:29:10 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.jboss.org |
Shape Keys Panel¶
Reference
Shape Keys panel.
Settings¶
- Active Shape Key Index
A List Views.
- Value
- Current Value of the Shape Key (0.0 to 1.0).
- Mute (eye icon)
- This visually disables the shape key in the 3D View.
- Specials
- Transfer Shape Key
- Transfer the active Shape Key from a different object. Select two objects, the active Shape Key is copied to the active object.
- Join as Shapes
- Transfer the Current Shape from a different object. Select two objects, the Shape is copied to the active object.
- Mirror Shape Key
- If your mesh is nice and symmetrical, in Object Mode, you can mirror the shape keys on the X axis. This will not work unless the mesh vertices are perfectly symmetrical. Use thetool in Edit Mode.
- Mirror Shape Key (Topology)
- This is the same as Mirror Shape Key though it detects the mirrored vertices based on the topology of the mesh. The mesh vertices do not have to be perfectly symmetrical for this one to work.
- New Shape From Mix
- Add a new shape key with the current deformed shape of the object.
- Delete All Shapes
- Delete all shape keys.
- Relative
- Set the shape keys to Relative or Absolute.
- Show Only (pin icon)
- Show the shape of the active shape key without interpolation in the 3D View. Show Only is enabled while the object is in Edit Mode, unless the setting below is enabled.
- Edit Mode
- Modify the shape key while the object is in Edit Mode.
Relative Shape Keys¶
Relative shape keys deform from a selected shape key. By default, all relative shape keys deform from the first shape key called the Basis shape key.
Relative Shape Keys options.
- Clear Weights
X
- Set all values to zero.
- Value
- The value of the active shape key.
- Range
- Min and Max range of the active shape key value.
- Blend
- Vertex Group
- Limit the active shape key deformation to a vertex group.
- Relative
- Select the shape key to deform from.
Absolute Shape Keys¶
Absolute shape keys deform from the previous and to the next shape key. They are mainly used to deform the object into different shapes over time.
Absolute Shape Keys options.
- Reset Timing (clock icon)
- Reset the timing for absolute shape keys.
- Interpolation
This controls the interpolation between shape keys.
Linear, Cardinal, Catmull-Rom, B-Spline
Different types of interpolation.The red line represents interpolated values between keys (black dots).
- Evaluation Time
- This is used to control the shape key influence.
Examples¶
Reset Timing¶
For example, if you have the shape keys, Basis, Key_1, Key_2, in that order.
Reset Timing will loop the shape keys, and set the shape keyframes to +0.1:
- Basis 0.1
- Key_1 0.2
- Key_2 0.3
Evaluation Time will show this as frame 100:
- Basis 10.0
- Key_1 20.0
- Key_2 30.0 | https://docs.blender.org/manual/en/dev/animation/shape_keys/shape_keys_panel.html | 2018-09-18T17:54:47 | CC-MAIN-2018-39 | 1537267155634.45 | [array(['../../_images/animation_shape-keys_shape-keys-panel_basis.png',
'../../_images/animation_shape-keys_shape-keys-panel_basis.png'],
dtype=object)
array(['../../_images/animation_shape-keys_shape-keys-panel_relative.png',
'../../_images/animation_shape-keys_shape-keys-panel_relative.png'],
dtype=object)
array(['../../_images/animation_shape-keys_shape-keys-panel_absolute.png',
'../../_images/animation_shape-keys_shape-keys-panel_absolute.png'],
dtype=object) ] | docs.blender.org |
All content with label infinispan+jcache+jsr-107.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, query, deadlock, archetype, lock_striping, jbossas, nexus, guide, listener, cache,
amazon, grid, test, api, xsd, ehcache, maven, documentation, write_behind, ec2, 缓存, hibernate, aws, interface, custom_interceptor, clustering, setup, eviction, gridfs, concurrency, out_of_memory, jboss_cache, import, index, events, batch, hash_function, configuration, buddy_replication, write_through, cloud, tutorial, jbosscache3x, xml, read_committed, distribution, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, development, websocket, transaction, interactive, xaresource, searchable, demo, installation, scala, client, migration, non-blocking, jpa, filesystem, tx, gui_demo, eventing, client_server, testng, infinispan_user_guide, hotrod, snapshot, webdav, repeatable_read, docs, batching, consistent_hash, store, jta, faq, 2lcache, as5, jgroups, lucene, locking, rest, hot_rod
more »
( - infinispan, - jcache, - jsr-107 )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/infinispan+jcache+jsr-107 | 2018-09-18T18:19:59 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.jboss.org |
What's New in Exchange Server many new features for each server role. This topic discusses the new and improved features that are added when you install Exchange 2007 SP1.
To download Exchange 2007 SP1, see Exchange Server 2007 Downloads.
New Deployment Options
You can install Exchange 2007 SP1 on a computer that is running the Windows Server 2008 operating system. For more information about the installation prerequisites for installing Exchange 2007 SP1 on a Windows Server 2008 computer, see How to Install Exchange 2007 SP1 and SP2 Prerequisites on Windows Server 2008 or Windows Vista. For more information about the supported operating systems for Exchange 2007 SP1, see Exchange 2007 System Requirements.
If Exchange 2007 and SP2.
Client Access Server Role Improvements
Exchange ActiveSync in Exchange 2007 SP1 includes the following enhancements for the administrator and for the end user:
An Exchange ActiveSync default mailbox policy is created.
Enhanced Exchange ActiveSync mailbox policy settings have been added.
Remote Wipe confirmation has been added.
Direct Push performance enhancements have been added.
For more information about the new Exchange ActiveSync features in Exchange 2007 SP1 see New Client Access Features in Exchange 2007 SP1.
Outlook Web Access Office 2007 file formats. as follows:
Ability to integrate with custom message types in the Exchange store so that they are displayed correctly in Outlook Web Access
Ability to customize the Outlook Web Access user interface to seamlessly integrate custom applications together with Outlook Web Access
For more information about new Outlook Web Access features in Exchange 2007 SP1 see New Client Access Features in Exchange 2007 SP1.
POP3/IMAP4
A new administration user interface has been added to the Exchange Management Console for the POP3 and IMAP4 protocols. This administration user interface enables you to configure the following settings for POP3 and IMAP4 for your individual Client Access server:
Port settings
Authentication settings
Connection settings
Message and calendar settings
For more information about new POP3 and IMAP4 features in Exchange 2007 SP1 see New Client Access Features in Exchange 2007 SP1.
Improvements in Transport
Exchange 2007 SP1 includes the following improvements to core transport functionality:
Back pressure algorithm improvements
The addition of transport configuration options to the Exchange Management Console
Exchange 2007 SP1 includes the following enhancements to message processing and routing functionality on the Hub Transport server role:
Priority queuing
Message size limits on Active Directory site links
Message size limits on routing group connectors
The addition of Send connector configuration options to the Exchange Management Console
The addition of the Windows Rights Management Services (RMS) agent
X.400 authoritative domains
Transport rules are now able to act on Unified Messaging messages
Exchange 2007 SP1 includes the following enhancements to the Edge Transport server role:
Improvements to the following EdgeSync cmdlets:
Start-EdgeSynchronization cmdlet
Test-EdgeSynchronization cmdlet
Improvements to the cloned configuration scripts
For more information about improvements to the Transport server roles in Exchange 2007 SP1, see New Transport Features in Exchange 2007 SP1.
Mailbox Server Role Improvements
Exchange 2007 SP1 introduces several new features for the Mailbox server role including the following:
Public folder management by using the Exchange Management Console
New public folder features
Mailbox management improvements
Ability to import and export mailbox by using .pst files
Changes to Messaging Records Management (MRM)
New performance monitor counters for online database defragmentation
For more information about the Mailbox server role improvements in Exchange 2007 SP1, see New Mailbox Features in Exchange 2007 SP1.
High Availability:
Standby continuous replication
Support for Windows Server 2008
Support for multi-subnet failover clusters
Support for Dynamic Host Configuration Protocol (DHCP) IPv4
Support for IPv6
New quorum models (disk and file share witness)
Continuous replication (log shipping and seeding) over redundant cluster networks in a cluster continuous replication environment
Reporting and monitoring improvements
Performance improvements
Transport dumpster improvements
Exchange Management Console improvements
For more information about the high availability features in Exchange 2007 SP1, see New High Availability Features in Exchange 2007 SP1.
Unified Messaging Server Role Improvements.
Development Improvements
Exchange 2007 SP1 introduces several enhancements to the Exchange API set. The most significant of those changes are to the Exchange Web Services.
Exchange Web Services
Exchange 2007 SP1 introduces the following new functionality and improvements to the Exchange Web Services API. The following list identifies functionality now available in Exchange 2007 SP1:
Support for public folder access. Public folders can now be created, deleted, edited, and synchronized by using the Exchange Web Services.
Improved delegate access.
Delegate management.
Item identifier translation between identifier formats.
Folder level permissions.
Proxy to the best Client Access server.
For more information about Microsoft Exchange development and enhancements made to the Microsoft Exchange APIs, visit the Exchange Server Developer Center.
For More Information
For more information about each server role that is included in Exchange 2007, see the following topics: | https://docs.microsoft.com/en-us/previous-versions/office/exchange-server-2007/bb676323(v=exchg.80) | 2018-09-18T17:56:22 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.microsoft.com |
Contacting BigPanda From Inside The Application
When you log in to BigPanda, the Messenger button appears in the bottom corner of the screen. Use this feature to send and receive messages directly with the BigPanda team.
Sending Messages
- Click the Messenger button.The message window opens on the right side of the screen.
- Click New Conversation or select an existing conversation.
- Enter your message in the text field at the bottom of the window.
You can attach files by clicking on the paper clip in the lower right. All file types are supported and can be up to 20MB in size.
Click Send:
- The message is sent to the BigPanda team.
The BigPanda team responds within the time specified in your support tier SLA. Refer to your contract for guaranteed response times on support services.
- The response message appears in the conversation and you receive an email.
You can continue to correspond with the BigPanda team from the message window or by responding to the email. The BigPanda team may open a support ticket to get help from additional team members, if necessary. A link to the ticket will be available in the conversation.
Viewing Existing Conversations
You can see any conversations you've had with BigPanda from inside the application by clicking the Messenger button. If you're viewing an existing conversation, click the back arrow
to return to the list of conversations.
Viewing Announcements
BigPanda sometimes sends you a message to announce a new feature or service that is available to you. The announcement can appear across the full screen, as a window in the top right corner, or as a small text popup beside the Messenger button.
- Click the announcement to read it.
- (Optional) Send a message to provide feedback or ask a question.
- Click the X in the top right to close the message window.
The message is saved to your list of conversations.
Using BigPanda Docs and API Reference
BigPanda offers support documentation for common questions and API reference information for developers.
BigPanda Docs––comprehensive reference guides for how to use BigPanda features and functions.
BigPanda API Reference––REST API structures, example code, JSON objects, and parameters.
You can search the Docs and API Reference together or filter results to see only one or the other.
Opening a Ticket
You can submit a question or request directly to our technical support team.
- From BigPanda Docs and FAQs, click the Help button in the bottom right.
- Follow the direct link.
- Send an email to support@bigpanda.io.
View your open tickets by logging in with the email address you used to contact BigPanda. | https://docs.bigpanda.io/docs/getting-help | 2018-09-18T18:13:57 | CC-MAIN-2018-39 | 1537267155634.45 | [array(['https://bigpanda.io/docs/download/attachments/23003722/Messenger.png?version=1&modificationDate=1479234211536&api=v2',
None], dtype=object) ] | docs.bigpanda.io |
Managing Avatars on the Platform
This topic includes information about managing avatars on the platform, including:
- Resources that Have Avatars on the Platform
- Supported File Types and File Size
- Working With Resource Images
- Managing Caching for Avatars
- Usage Note: Difference Between getAvatar and getDefaultAvatar
- Process Flow: Example
Resources that Have Avatars on the Platform
The platform supports upload of an image for use as an avatar for the following resource types:
- APIs
- Apps
- Groups
- Users
Supported File Types and File Size
Supported image files must be no more than 4MB in size, and must be one of these supported image types:
- JPG
- GIF
- TIFF
- BMP
- PNG
Working With Resource Images
The Dropbox service provides operations for uploading images to the platform. Working with images might include such activities as uploading an image, cropping it, and saving the cropped portion as the avatar for a resource.
Below is a suggested implementation that uses several of the Dropbox operations, in sequence, to complete these tasks.
The services that support managing these various resources include operations that allow you to add, update, or delete avatars or get avatar information:
- DELETE /api/users/{UserID}/picture (Users only; deletes an avatar. For APIs and Apps services, you cannot delete an avatar, only replace it).
- GET {service}/{ID}/avatars/{AvatarVersionID}.png
- GET {service}/{ID}/avatar: Retrieves the default avatar image for the specified resource.
Managing Caching for Avatars
Because many different resources, such as APIs, apps, and users, can have a different avatar, it's possible that a platform resource, such as an API's Board page with many comments and notifications, might include many different images. For maximum efficiency, avatars can be cached.
Whenever the avatar for a resource is changed, the URL is updated. An updated image will always have a different URL. This means that you can cache avatars indefinitely without risk that you might be referencing an outdated avatar.
Usage Note: Difference Between getAvatar and getDefaultAvatar
There are two operations available for retrieving the avatar associated with a resource, with one key difference between them:
- GET /api/{service}/{ID}/avatar (getDefaultAvatar): This operation simply returns the avatar for the resource, such as app, API, or user.
- GET {service}/{ID}/avatars/{AvatarVersionID}.png (getAvatar): Each time a new avatar is set up for a resource, the platform assigns an AvatarVersionID. If the avatar is changed, the new avatar has a new AvatarVersionID. By specifying the AvatarVersionID you can retrieve the avatar for a resource even if it isn't the current avatar.
Process Flow: Example
Let's say a user, Jane Mead, adds an avatar to her user profile. She then decides to change the image; she removes the existing avatar and adds a different one. This process flow uses the following operations:
- User uploads an image from disk: POST /api/dropbox/pictures. Image is saved to database and PictureID is returned.
- Image is retrieved for user to crop: GET /api/dropbox/pictures/{pictureId}.
- User crops picture: PUT /api/dropbox/pictures/{pictureId}. Cropped image is saved to database and PictureID is returned (same PictureID).
- Cropped image is retrieved for use: GET /api/users/UserID/avatar.
- User record is updated: PUT /api/users/{UserID}.
- Updated user record is retrieved: GET /api/users/{UserID}.
- User deletes avatar: DELETE {service}/{ID}/picture.
- User record is updated: PUT /api/users/{UserID}.
- Updated user record is retrieved: GET /api/users/{UserID}.
- User uploads a second image from disk: steps 1-6 above are repeated. | http://docs.akana.com/cm/api/aaref/Ref_ManagingAvatarsOnThePortal.htm | 2018-09-18T18:36:26 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.akana.com |
Selecting an AI Navigation Type
Each AI agent needs to have a navigation type assigned, either animate (human-based) or inanimate (vehicle-based). The following AI agent properties are relevant from a navigation perspective:
AgentType - MediumSizedCharacters or VehicleMedium
voxelSize - 0.125m x 0.125m x 0.125m minimum
radius - agent radius, in voxels
climbableHeight - maximum climbable height of maximum slope, in voxels
maxWaterHeight - maximum walkable water depth, in voxels
To assign a navigation type for an AI agent
1. In Lumberyard Editor, click Tools, Other, DataBase View.
2. On the Entity Library tab, click the Load Library button and select your asset file.
3. In the Class Properties pane, for Navigation Type, make a selection. This sets the navigation type for all AI agents.
4. In the Rollup Bar, under Objects, AI, NavigationArea, NavigationArea Params, make a selection.
Cloud Canvas Built-In Roles and Policies
You can use the built-in Cloud Canvas roles and policies to manage resource and deployment permissions for your project.
Roles
You can use the AWS::IAM::Role resource to define roles in your project-template.json or deployment-access-template.json files. Cloud Canvas Resource Manager defines the following roles for you:

ProjectOwner - defined in project-template.json
ProjectAdmin - defined in project-template.json
DeploymentOwner - defined in deployment-access-template.json
DeploymentAdmin - defined in deployment-access-template.json

The configuration file in which you define a role determines the resources to which the role provides access.
You can use the lmbr_aws command line tool to manage the role definitions in the project-template.json and deployment-access-template.json files. For more information, see Using the Cloud Canvas Command Line to Manage Roles and Permissions.
Implicit Roles
Some Cloud Canvas custom resources also create roles. For example, when a Lambda function is executed, it assumes the role that the Custom::LambdaConfiguration resource creates. When API Gateway invokes a Lambda function or accesses other resources, it assumes the role that the Custom::ServiceApi resource creates. Including these custom resources in a resource-group-template.json file causes these implicit roles to be created (and deleted when the resource is deleted). For information on implicit role names, see Implicit Role Mappings.
Managed Policies
You can use AWS::IAM::ManagedPolicy resources to define permissions that are shared across any number of roles. Cloud Canvas defines the following managed policies for you:
The ProjectAdmin and DeploymentAdmin roles are granted the same permissions as the ProjectOwner and DeploymentOwner roles, minus any permissions specifically denied by the ProjectAdminRestrictions and DeploymentAdminRestrictions managed policies, respectively. In effect, an "admin" is granted all the permissions of an "owner" minus any special actions that the "admin" should not be able to perform.
Role Mapping Metadata
The AbstractRole property in the Permission metadata object does not directly specify the actual role that receives the described permission. These values must be mapped to actual IAM roles. This makes it possible to set up roles in whatever way makes sense for your project. It also removes the need to modify the permissions defined by individual resource groups.
The ability to map abstract roles to actual IAM roles is important when you use a cloud gem across multiple projects or from a third party. Cloud gems acquired from a third party might have roles that are different from the roles that you use in your organization. (A cloud gem is a Lumberyard gem that uses the AWS resources defined by a Cloud Canvas Resource Group. For more information, see Cloud Gems.)
The Custom::AccessControl resource looks for CloudCanvas RoleMappings metadata on AWS::IAM::Role resources to determine which abstract roles map to that physical role. In the following example, the CustomerSupport abstract role from all resource groups is mapped to the DevOps physical role.
...
"DevOps": {
    "Type": "AWS::IAM::Role",
    "Properties": {
        "Path": {
            "Fn::Join": [ "", [
                "/",
                { "Ref": "ProjectStack" },
                "/",
                { "Ref": "DeploymentName" },
                "/"
            ]]
        }
    },
    "Metadata": {
        "CloudCanvas": {
            "RoleMappings": [
                {
                    "AbstractRole": [ "*.CustomerSupport" ],
                    "Effect": "Allow"
                }
            ]
        }
    }
},
...
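The "*.CustomerSupport" value above is a wildcard over "resource-group.abstract-role" names. How such a mapping resolves can be sketched in a few lines; the helper is hypothetical, since Cloud Canvas performs this resolution inside the Custom::AccessControl resource rather than through any public Python API:

```python
# Hypothetical sketch of abstract-role -> physical-role resolution.
# Patterns like "*.CustomerSupport" are shell-style wildcards matched against
# "<resource-group>.<abstract-role>" names, as in the JSON example above.
from fnmatch import fnmatch

ROLE_MAPPINGS = {
    "DevOps": ["*.CustomerSupport"],   # from the RoleMappings metadata above
    "DeploymentAdmin": ["*.Admin"],    # an illustrative extra mapping
}

def physical_roles_for(resource_group, abstract_role, mappings=ROLE_MAPPINGS):
    name = f"{resource_group}.{abstract_role}"
    return sorted(
        role
        for role, patterns in mappings.items()
        if any(fnmatch(name, pattern) for pattern in patterns)
    )

if __name__ == "__main__":
    print(physical_roles_for("HelloWorld", "CustomerSupport"))  # ['DevOps']
```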
Each Cloud Canvas RoleMapping metadata object can have the following properties.
You can use the lmbr_aws command line tool to manage RoleMappings metadata on role resource definitions in the project-template.json and deployment-access-template.json files. For more information, see Using the Cloud Canvas Command Line to Manage Roles and Permissions.
Default Role Mappings
Cloud Canvas defines role mappings for the following roles:
Implicit Role Mappings
As mentioned in Implicit Roles, role mappings are automatically defined for the implicit roles created by Cloud Canvas resources like Custom::LambdaConfiguration. These mappings are only used with permission metadata in the same resource-group-template.json file as the custom resource that creates the role. The name of the abstract role used in permission metadata to reference an implicit role depends on the custom resource type. | https://docs.aws.amazon.com/lumberyard/latest/userguide/cloud-canvas-built-in-roles-and-policies.html | 2018-09-18T17:42:24 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.aws.amazon.com
Community Advertising
- Various Python conferences from PyCon to smaller regional conferences
- Many topic or framework specific conferences such as Djangocon US and Sustain
- Python Software Foundation
- Mozilla
- Beeware - tools and libraries for building native apps in Python
- Godot - an open source game engine
- Kiwi TCMS - an open source test case management system
- World Possible - an education non-profit
- Write the Docs - a series of events for documentarians
Get in touch. We can help!
If you run a conference, non-profit, or a funding strapped open source project, we would love to help you get the word out. Make sure you qualify for our community ads and send us an email to be considered for the program.
If you have any feedback about our community advertising program, we’d love to hear from you too. | http://blog.readthedocs.com/community-ads-2018/ | 2018-09-18T18:22:09 | CC-MAIN-2018-39 | 1537267155634.45 | [] | blog.readthedocs.com |
Aquarest Spas: as one of the first manufacturers of hot tubs, we design ours to maximize personal relaxation, family entertainment and hydrotherapy (Home Depot).
Related Post
Tv Stand On Wheels Boy Play Tents Toy Race Track Exerpeutic Aero Air Ellipticals Adidas Ladies Golf Shoes Large Building Blocks Trifold Sleeping Mats 3 Ring Binders With Zipper Gorilla Playset Stamina In Motion Elliptical Trainer Remote Control Toy Trucks Coleman Canopy Battery Powered Rideon Seventh Generation Ultra Power Plus Chess And Checkers Set | http://top-docs.co/aquarest-spas/aquarest-spas-as-one-of-the-first-manufacturers-hot-tubs-we-design-our-to-maximize-personal-relaxation-family-entertainment-and-hydrotherapy-home-depot/ | 2018-09-18T17:52:57 | CC-MAIN-2018-39 | 1537267155634.45 | [array(['http://top-docs.co/wp-content/uploads/2018/05/aquarest-spas-as-one-of-the-first-manufacturers-hot-tubs-we-design-our-to-maximize-personal-relaxation-family-entertainment-and-hydrotherapy-home-depot.jpg',
'aquarest spas as one of the first manufacturers hot tubs we design our to maximize personal relaxation family entertainment and hydrotherapy home depot aquarest spas as one of the first manufacturers hot tubs we design our to maximize personal relaxation family entertainment and hydrotherapy home depot'],
dtype=object) ] | top-docs.co |
h1. How do I back up Textpattern? [todo]
Currently, there is no built-in backup or export function in Textpattern. You can use several tools designed for the purpose:
phpMyAdmin is supplied by most web hosts. The phpMyAdmin FAQ has a brief “explanation”: of how to back up and restore.
mysqldump generates SQL backups that can be restored using phpMyAdmin or with the mysql command-line client. See the “MySQL manual”: for details.
rss_admin_db_manager is a Textpattern plugin that includes backup and restore functions. Read more “here”:.
If your server runs Unix, and has a cron function, here’s a sample crontab entry for an automatic weekly backup:
bq.
If you don’t know what a crontab is, or how to test one, we recommend instead using one of the other options listed above.
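A representative weekly mysqldump crontab entry, together with a small parser for cron's five schedule fields, is shown below. Every username, password, database name, path, and time here is a placeholder for illustration, not Textpattern's recommendation:

```python
# A representative weekly-backup crontab entry; user, password, database and
# path are placeholders. A crontab line is five schedule fields (minute, hour,
# day of month, month, day of week) followed by the command to run.
CRON_ENTRY = "0 3 * * 0 mysqldump -u dbuser -pSECRET textpattern > /backups/textpattern.sql"

def cron_schedule(entry):
    """Split a crontab line into its five schedule fields and the command."""
    minute, hour, dom, month, dow, command = entry.split(None, 5)
    return {"minute": minute, "hour": hour, "day_of_month": dom,
            "month": month, "day_of_week": dow, "command": command}

if __name__ == "__main__":
    sched = cron_schedule(CRON_ENTRY)
    # day_of_week 0 = Sunday, so this entry runs weekly at 03:00 on Sundays.
    print(sched["day_of_week"], sched["command"].split()[0])
```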
AutoMySQLBackup is an open source Unix shell script which automates the process of rotating daily, weekly, and monthly “MySQL backups”:.
The remark regarding crontabs applies here as well. | https://docs.textpattern.io/faqs/how-do-i-back-up-textpattern | 2017-11-17T23:02:58 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.textpattern.io |
Product: Carrara 8 Pro
Product Code: ca_ap38f
DAZ Original: Yes
Created by: DAZ 3D
Build: 8.0
Version Released: May 16, 2010
Please report any issues besides the ones listed above to the DAZ 3D Bug Database. When submitting more than one issue, please submit one report for each issue. For multiple issues, you do not need to include all of your hardware configuration information or company contact information. The DAZ 3D Bug Database requires a separate account from that of the DAZ 3D website.
Thank you for helping to make Carrara a powerful and enjoyable tool!
Visit our site for further technical support questions or concerns:
Thank you and enjoy your new products!
DAZ Productions Technical Support
12637 South 265 West #300
Draper, UT 84020
Phone:(801) 495-1777
TOLL-FREE 1-800-267-5170 | http://docs.daz3d.com/doku.php/artzone/azproduct/10608 | 2017-11-17T23:07:24 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.daz3d.com |
Advanced Features¶
Byte Pair Encoding¶
This is explained in the machine translation tutorial.
Dropout¶
Neural networks with a large number of parameters have a serious problem with overfitting. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. At test time, dropout is turned off. More information can be found in the original dropout paper by Srivastava et al.
If you want to enable dropout on an encoder or on the decoder, you can simply add dropout_keep_prob to the particular section:
[encoder]
class=encoders.recurrent.SentenceEncoder
dropout_keep_prob=0.8
...
or:
[decoder]
class=decoders.decoder.Decoder
dropout_keep_prob=0.8
...
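What dropout_keep_prob means mechanically can be shown with a tiny stdlib-only sketch. This is illustrative only; Neural Monkey implements dropout inside TensorFlow, not like this:

```python
# Inverted dropout on a plain Python list. During training each unit is kept
# with probability keep_prob and scaled by 1/keep_prob so the expected
# activation is unchanged; at test time the layer is the identity.
import random

def dropout(units, keep_prob, train, rng=random):
    if not train or keep_prob >= 1.0:
        return list(units)  # dropout is turned off at test time
    return [u / keep_prob if rng.random() < keep_prob else 0.0 for u in units]

if __name__ == "__main__":
    random.seed(0)
    print(dropout([1.0, 1.0, 1.0, 1.0], keep_prob=0.8, train=True))
    print(dropout([1.0, 1.0, 1.0, 1.0], keep_prob=0.8, train=False))
```

With keep_prob=0.8, roughly 80% of units survive and each surviving unit is scaled to 1.25, which is why the setting is a "keep" probability rather than a drop probability.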
Pervasive Dropout¶
Detailed information in
If you want to allow dropout on the recurrent layer of your encoder, you can add the use_pervasive_dropout parameter to it, and the dropout probability will then be used:
[encoder]
class=encoders.recurrent.SentenceEncoder
dropout_keep_prob=0.8
use_pervasive_dropout=True
...
sequence_annotations¶
This protocol defines annotations on GA4GH genomic sequences It includes two types of annotations: continuous and discrete hierarchical.
The discrete hierarchical annotations (called Features) are derived from the Sequence Ontology (SO) and GFF3 work
The goal is to be able to store annotations using the GFF3 and SO conceptual model, although there is not necessarily a one-to-one mapping from Protobuf messages to GFF3 records.
The minimum requirement is to be able to accurately represent the current state-of-the-art annotation data and the full SO model. Feature is the core generic record, which corresponds to a GFF3 record.
The continuous data (called Continuous) represents numerical data associated with each base position, such as is stored in the BigWig, Wiggle, or BedGraph formats. | https://ga4gh-schemas.readthedocs.io/en/latest/schemas/sequence_annotations.proto.html | 2017-11-17T22:46:45 | CC-MAIN-2017-47 | 1510934804019.50 | [] | ga4gh-schemas.readthedocs.io
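The correspondence between a GFF3 record and a generic Feature can be sketched with a few lines of parsing. This is a toy illustration; the actual GA4GH Feature message is defined in protobuf and has a richer schema, and the 0-based coordinate convention below is an assumption:

```python
# Toy parser mapping one GFF3 line onto a Feature-like dict. GFF3 columns:
# seqid, source, type, start, end, score, strand, phase, attributes.

def parse_gff3_line(line):
    seqid, source, ftype, start, end, score, strand, phase, attrs = line.rstrip("\n").split("\t")
    attributes = dict(kv.split("=", 1) for kv in attrs.split(";") if kv)
    return {
        "reference_name": seqid,
        "feature_type": ftype,          # a Sequence Ontology term name
        "start": int(start) - 1,        # GFF3 is 1-based; assume 0-based here
        "end": int(end),
        "strand": strand,
        "attributes": attributes,
        "parent_id": attributes.get("Parent"),  # hierarchy comes from Parent=
    }

if __name__ == "__main__":
    rec = parse_gff3_line("chr1\thavana\texon\t1300\t1500\t.\t+\t.\tID=exon1;Parent=tx1")
    print(rec["feature_type"], rec["start"], rec["parent_id"])  # exon 1299 tx1
```

The Parent attribute is what carries the discrete hierarchy (gene, transcript, exon, and so on) that the Feature records preserve.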
The Sixcycle coach dashboard shows a Rhythm metric for each athlete.
The volume bar shows the percentage of recently applied training volume that the athlete has executed. The arrow on the right shows the athlete's current trend, be it increasing, maintaining or decreasing.
In the example image above, the athlete has executed approximately 75% of her recent training volume but is on a downwards trend in the near term.
For those familiar with the Stress & Form chart shown on the athlete Calendar and Charting pages, you will be interested to learn that the volume bar is correlated to the 6wSB/Fitness metric, whereas the trend arrow is correlated to the 1xSB/Fatigue metric. This means that the Rhythm metric shows both the relationship between the athlete's medium- and near-term training rhythm and their overall Fitness and Fatigue levels. | http://docs.sixcycle.com/coaching-tools/the-rhythm-metric | 2017-11-17T23:12:20 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.sixcycle.com
RadChartView: Palettes
In this article, you will learn to use the predefined palettes in RadChartView for Android and also how to create custom palettes.
Default Palette
In order to provide the default styles for its series, RadChartView uses palettes. Each palette defines a set of styles for the different series and axes types. Here's a demonstration for some of the colors which are provided by the default palette:
Creating Custom Palettes
All chart objects (RadPieChartView and RadCartesianChartView) have a method setPalette(ChartPalette) which allows you to set a new palette that you have defined. Here's an example of setting a custom palette to an instance of RadCartesianChartView. We will start with the chart that we have created in the Bar Series example:
// Use a copy of the existing palette in order to avoid redefining the whole palette.
// We are only interested in changing the color of the bar series.
ChartPalette customPalette = chartView.Palette.ClonePalette();

// Get the entry for the first bar series.
PaletteEntry barEntry = customPalette.GetEntry(ChartPalette.BarFamily);
barEntry.Fill = Color.Green;

// Also if there are more than one bar series we can get the entry
// for any of them with their index in the collection.
// Edit the entry for the second bar series.
barEntry = customPalette.GetEntry(ChartPalette.BarFamily, 1);
barEntry.Fill = Color.Cyan;

chartView.Palette = customPalette;
It is important to note that the chart palette will override all settings set manually by the developer. To prevent the chart palette from overriding the manual settings, developers must call chartElement.setCanApplyPalette(false). This prevents the palette from being applied to the given chart element (axis, series, etc.) and the developer can take full control over the visual customization.
Custom Palette Family
Finally, developers can take advantage of custom palette families. Each chart element has a setPaletteFamily(String) method that can be used to set a custom family. This is useful when we want to style different elements in the same way. For example in a scenario where we have multiple axes, each series can be paired with its relevant axis by using the same color. To apply the fill or stroke to both the axis and the series developers must call series.setPaletteFamily("CustomFamily") and axis.setPaletteFamily("CustomFamily"). | https://docs.telerik.com/devtools/xamarin/nativecontrols/android/chart/chart-palettes.html | 2017-11-17T23:17:51 | CC-MAIN-2017-47 | 1510934804019.50 | [array(['images/chart-palettes-1.png',
'Demo of Palettes in the chart. TelerikUI-Chart-Palettes'],
dtype=object) ] | docs.telerik.com |
As an account Administrator, you can keep track of board engagement and identify areas of opportunity. Quickly view and download reports of your board's activity for meetings, people, polls, and goals to share with staff and board members.
Visit our Reports Overview Help Article to start using Reports today!
Our Reports Feature includes:
Activity Snapshot
People Reports
Meetings Reports
Polls Reports
Goals Reports
Each report can be viewed or downloaded organization-wide or by group. Slice reports by 30 days, 90 days, year and all time intervals.
Organizations on the Essentials plan can access reports for the past 30 or 90 days, and organizations on the Professional or Enterprise plans can access reports for any time period since joining Boardable (past 30 days, past 90 days, past year, and all time).
Related Articles
Reports Feature Overview & Resource Links
Reports Overview: Account Administrators: report on up-to-date board health, board engagement, and monitor and identify areas of opportunity for improvement. | https://docs.boardable.com/en/articles/3606966-release-notes-reporting | 2022-08-08T08:35:14 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.boardable.com |
The PCA function (Principal Component Analysis (PCA)) outputs a set of principal components, and each principal component is a linear combination of the set of original predictors.
In the PCA example output table pca_health_ev_scaled (see Output), the first-ranked principal component is:
-0.082 * age + 0.387 * bmi + (-0.0935) * bloodpressure + 0.042 * glucose …
The PCA_Plot function uses these coefficients (from the output table of the PCA function) to compute a principal component score for each observation. If the PCA function returned n principal components, the PCA_Plot function calculates n scores for each observation. That is, the n principal components replace the original, larger set of predictors for subsequent analyses.
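Scoring an observation is just the dot product of its predictor values with each component's coefficient vector. A small sketch using the four coefficients quoted above; the observation's values are invented for illustration, and real usage would standardize predictors the same way the PCA input was scaled:

```python
# Compute a principal-component score as the dot product of a coefficient
# vector with an observation's predictor values. PC1 uses the four
# coefficients quoted above; the observation itself is made up.

def pc_score(coefficients, observation):
    return sum(coefficients[name] * value for name, value in observation.items())

PC1 = {"age": -0.082, "bmi": 0.387, "bloodpressure": -0.0935, "glucose": 0.042}

if __name__ == "__main__":
    obs = {"age": 1.0, "bmi": 2.0, "bloodpressure": 1.0, "glucose": 0.5}
    print(round(pc_score(PC1, obs), 4))  # 0.6195
```

With n components, repeating this for each coefficient vector yields the n scores that replace the original predictors.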
The version of PCA_Reduce, a component of the PCA function, must be AA 6.21 or later. | https://docs.teradata.com/r/Teradata-Aster-Analytics-Foundation-User-GuideUpdate-2/September-2017/Statistical-Analysis/PCAPlot | 2022-08-08T06:52:23 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.teradata.com |
The FFT function uses a Fast Fourier Transform (FFT) algorithm to compute the discrete Fourier Transform (DFT) of each signal in one or more input table columns. A signal can be either real or complex, and can have one, two, or three dimensions. If the signal length is not a power of two, the function either pads or truncates it to the closest power of two.
For a real time sequence of length N, 0..N-1, the DFT has conjugate symmetry:

X(k) = X*(N - k)

where k ∈ 0..N-1 and X* denotes the complex conjugate of X.
Therefore:
- The FFT of a time sequence of length 1 is the one-element sequence itself.
- The FFT of a time sequence of length 2 has only real values.
- The FFT of a time sequence of length 4 or greater has conjugate symmetry.
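These symmetry properties are easy to verify with a direct O(N²) DFT in plain Python. This is a check on the math, not on Teradata's function:

```python
# Direct DFT used to check the claims above: a real input of length 2 gives
# purely real output, and longer real inputs satisfy X(k) = conj(X(N - k)).
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

if __name__ == "__main__":
    X = dft([1.0, 2.0, 3.0, 4.0])
    print(all(abs(X[k] - X[(-k) % 4].conjugate()) < 1e-9 for k in range(4)))  # True
```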
To recover the original signals, use the IFFT function. | https://docs.teradata.com/r/Teradata-Aster-Analytics-Foundation-User-GuideUpdate-2/September-2017/Time-Series-Path-and-Attribution-Analysis/Fast-Fourier-Transform-Functions/FFT | 2022-08-08T08:24:15 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.teradata.com |
Risk Matrix used in the RAID Log
A RAID Log is a great tool for managing Project Risk.
RAID stands for Risks, Assumptions, Issues, Dependencies.
BEWARE – it is hard to keep track of these aspects of your project in your head. You can keep track of them using a LOG TEMPLATE for your own safety.
Risks (R in RAID)
Your project risks are the “issues waiting to happen”.
i.e. Ask yourself “What could go wrong?”, and the list of items in answer to that are your risks.
e.g. when planning for a race, an example risk could be “My shoes fail during the race”
RISK SHEET – Excel RAID log & Dashboard Template
Assumptions (A in RAID)
Assumptions are items that you believe to be fine,… but that may not be. Assumptions are aspects of the environment, or of the surroundings to your project that you believe will be in a certain state.
The purpose of tracking assumptions is that you need to be prepared for your assumptions being wrong.
Issues (I in RAID)
Issues are the things which are actually going wrong – i.e. Risks that have been realised, and have turned into issues.
If you were lucky with your Risks identification earlier, you may already be prepared to deal with the issues 🙂
Dependencies (D in RAID)
Dependencies are items being delivered- or supplied- from elsewhere, and that may not be directly in your control.
i.e. in order for your project to deliver, your dependencies must be present / delivered / supported.
Dependencies are quite frequently what cause project failure – track these carefully!
RAID Log Template
This Excel Template is a handy format which allows you to track your RAID items, their status, and assign them to owners.
Some example templates in the “Risk” area
Further Reading about Raid Log:
RAID logs are often called Risk Registers. Read about Risk Registers here on Wikipedia. | https://business-docs.co.uk/blog/raid-log-manage-risk/ | 2021-02-24T21:07:20 | CC-MAIN-2021-10 | 1614178347321.0 | [array(['http://i17yj3r7slj2hgs3x244uey9z-wpengine.netdna-ssl.com/wp-content/uploads/edd/2016/04/BDUK-33-RAID-LOG-15-RISK-MATRIX-286x300.png',
'Risk Heatmap Score Matrix Risk Matrix used in the RAID Log'],
dtype=object)
array(['http://i17yj3r7slj2hgs3x244uey9z-wpengine.netdna-ssl.com/wp-content/uploads/edd/2016/04/BDUK-33-RAID-LOG-15-RISKS-300x142.png',
'RAID Log RISK SHEET - Excel RAID log & Dashboard Template'],
dtype=object) ] | business-docs.co.uk |
Optimizing queries using partition pruning
When predicate push-down optimization is not applicable—for example, if all stripes contain records that match the predicate condition—a query with a WHERE clause might need to read the entire data set. This becomes a bottleneck over a large table. Partition pruning is another optimization method; it exploits query semantics to avoid reading large amounts of data unnecessarily.
Partition pruning is possible when data within a table is split across multiple logical partitions. Each partition corresponds to a particular value of a partition column and is stored as a subdirectory within the table root directory on HDFS. Where applicable, only the required partitions (subdirectories) of a table are queried, thereby avoiding unnecessary I/O.
Spark supports saving data in a partitioned layout seamlessly, through the partitionBy method available during data source write operations. To partition the "people" table by the “age” column, you can use the following command:
people.write.format("orc").partitionBy("age").save("peoplePartitioned")
As a result, records are automatically partitioned by the age field and then saved into different directories: for example, peoplePartitioned/age=1/, peoplePartitioned/age=2/, and so on.

After partitioning the data, subsequent queries can omit large amounts of I/O when the partition column is referenced in predicates. For example, the following query automatically locates and loads the file under peoplePartitioned/age=20/ and omits all others:
val peoplePartitioned = spark.read.format("orc").load("peoplePartitioned")
peoplePartitioned.createOrReplaceTempView("peoplePartitioned")
spark.sql("SELECT * FROM peoplePartitioned WHERE age = 20") | https://docs.cloudera.com/runtime/7.2.7/developing-spark-applications/topics/spark-optimizing-queries-partition-pruning.html | 2021-02-24T21:07:37 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.cloudera.com
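The directory-per-value layout is what makes pruning possible. The mechanism can be mimicked in a few lines; this is a toy model of what Spark's file source does, not Spark code:

```python
# Toy model of partition pruning: data is laid out one directory per partition
# value, and a query that filters on the partition column only "reads" the
# matching directory instead of scanning everything.

def partition_layout(rows, column):
    layout = {}
    for row in rows:
        layout.setdefault(f"peoplePartitioned/{column}={row[column]}/", []).append(row)
    return layout

def pruned_scan(layout, column, value):
    path = f"peoplePartitioned/{column}={value}/"
    return path, layout.get(path, [])

if __name__ == "__main__":
    rows = [{"name": "Ann", "age": 20}, {"name": "Bo", "age": 31}, {"name": "Cy", "age": 20}]
    layout = partition_layout(rows, "age")
    path, hits = pruned_scan(layout, "age", 20)
    print(path, len(hits))  # peoplePartitioned/age=20/ 2
```

The I/O saving comes entirely from the layout: the scan touches one subdirectory out of however many partitions exist.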
Step 1: Access using a web browser (Chrome, Mozilla Firefox, Safari, etc.) and log in using your Kinderpedia credentials.
Step 2: Go to "Video Conferences" (on the left side of the page) and select the green "Activate Video Meetings" button.
Step 3: After you click on "Activate video meetings, you will receive an email to activate the account at the email address you entered into the Kinderpedia account, and the status on the "Kinderpedia"> "Video Conferences" page will be updated to "Account status: Waiting".
Step 4:.
Step 5: On the Zoom account activation page you will see 3 options: "Sign In With Google", "Sign In With Facebook" and "Sign Up With a Password".
From here, you must select "Sign Up with a Password".
Step 6: Enter a password for the Zoom account, then click on the orange "Continue" button. Now the Zoom account is successfully created and you can return to the Kinderpedia page.
On the Kinderpedia page, if you select "Video Conferences", you will notice that the status has been updated: "Account Status: Account Activated".
Also in the "Video Conferences" section, you will see 3 buttons (from left to right): the first will send you to a tutorial on "How do I add an activity?", the second to a tutorial on "How to plan an activity", and using the third button, called "Plan Now", you can schedule a video conference. If you have any questions about how to add and plan an activity, use the tutorials mentioned earlier.
Step 7: Schedule the video conference
To schedule a video conference go to the "Activities" module on the left side of the page or click on "Plan now" from the "Video Conferences" module (see the picture above). Here, you have to select the activity for which you want to schedule video conferencing by clicking on it. In the example below I will click on the activity called "Arts & Crafts".
After you click on the activity, you will see a button called "Schedule a meeting", click on that button to schedule the conference.
After you click on "Schedule a meeting", that button will change to "Start meeting". To enter the conference, click on "Start meeting". A new window will open in your browser to launch the Zoom application. If you do not have Zoom already installed, select "download & run Zoom" (if you have Zoom already installed, skip this step because the application will run automatically) and install the application. After you install the application, it will run and enter the conference automatically.
For more assistance regarding the Zoom application, you can access the following link:
These were all the necessary steps. For a better clarification, we also have a video tutorial below. | https://docs.kinderpedia.co/en/articles/3806823-i-am-a-teacher-manager-how-do-i-connect-my-kinderpedia-account-to-the-zoom-account-and-how-do-i-schedule-a-conference | 2021-02-24T20:53:23 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.kinderpedia.co |
How to display values from custom managed properties in search results - option 1
This is a blog post in the series "How to change the way search results are displayed in SharePoint Server 2013." | https://docs.microsoft.com/en-us/archive/blogs/tothesharepoint/how-to-display-values-from-custom-managed-properties-in-search-results-option-1 | 2021-02-24T20:58:49 | CC-MAIN-2021-10 | 1614178347321.0 | [array(['https://msdnshared.blob.core.windows.net/media/TNBlogsFS/prod.evol.blogs.technet.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/00/66/12/7103.SearchResultListItem.png',
'Values of custom properties displayed in search results'],
dtype=object) ] | docs.microsoft.com |
You can restart the vRealize Suite Lifecycle Manager server immediately or schedule weekly server restarts.
Procedure
- Click Settings and click the System Settings tab.
- To restart the server immediately, click RESTART SERVER.
- To schedule a weekly server restart, select Schedule a restart and select the day of the week and time for the weekly restart.
- Click SAVE. | https://docs.vmware.com/en/vRealize-Suite/2017/com.vmware.vrsuite.lcm.13.doc/GUID-9C403DA4-92CE-4D00-8055-8541D6F7CBCE.html | 2021-02-24T21:28:56 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.vmware.com |
How to display values from custom managed properties in search results – option 2
This | https://docs.microsoft.com/en-us/archive/blogs/tothesharepoint/how-to-display-values-from-custom-managed-properties-in-search-results-option-2 | 2021-02-24T20:57:53 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.microsoft.com |