The nova.conf configuration file uses the INI file format, as explained in Configuration file format.
You can use a particular configuration file by passing the --config-file (nova.conf) parameter when you run one of the nova-* services. This parameter inserts configuration option definitions from the specified configuration file, which might be useful for debugging or performance tuning.
For a list of configuration options, see the tables in this guide.
Important
Do not specify quotes around nova options.
Configuration options are grouped by section. The Compute configuration file supports a number of sections, including [DEFAULT], which holds most configuration options, and per-service sections such as [conductor], which configures the nova-conductor service.
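For illustration, a minimal nova.conf might group options like this. The section names are real, but the specific options and values are examples only; consult the configuration tables in this guide for the options that apply to your deployment:

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 10.0.0.11

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[conductor]
workers = 4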
SafeSquid for Linux SWG safesquid-2018.1204.1921.3-swg-standard released
From Secure Web Gateway
Improvements
- Optimized SSL Memory Utilization.
Some users had reported abnormal memory utilization patterns.
This was caused by the incremental accumulation of OpenSSL's error message queues.
SafeSquid now has a better ability to flush such queues.
Users in heavily loaded environments can take further advantage of this by increasing THREAD_TIMEOUT values in the start-up parameters.
- Optimized SSL Session Caching.
The SSL context and session eviction algorithms implemented in the previous release, have been further optimized.
The optimization enables SafeSquid to intelligently extend the age of more heavily used SSL contexts and sessions.
This enhancement enables further reduction of SafeSquid's memory foot-print.
BugFixes
- Fix for SSL inspection while handling requests to web-sites with FQDNs longer than 2730 characters.
A vulnerability that could lead to abnormal termination of SafeSquid was detected.
It was found that a hacker could successfully cause SafeSquid to crash by engineering a request to a web-site with an FQDN longer than 2730 characters.
The flaw was identified and fixed.
- Fix while validating user credentials with usernames longer than 512 characters.
It was found that a hacker could engineer a buffer overflow if SafeSquid was configured to use LDAP services for user validation and Kerberos Authentication was not enabled.
The attack required hackers to respond to authentication challenges with usernames longer than 512 characters.
The flaw could be exploited by making such malicious requests with tools such as curl.
This vulnerability has been fixed.
- A minor bug that could disrupt SafeSquid's automatic log rotation mechanism was also detected and fixed.
New Users? Getting_Started
Download SafeSquid ISO to create your appliance.
Download the safesquid-2018.1204.1921.3-swg-concept.tar.gz tarball for upgrading, or if you already have a Linux 14.04 machine.
Upgrade Monitor
Note: If your instance is self-hosted (not hosted by ServiceNow), this message may not necessarily indicate a problem. If you have customized or disabled the upgrade job and want to keep that customization or disabled state, do not click the button to fix the upgrade issue.
Figure 2. Upgrade Monitor with Issue Detected
To resolve the issues with the upgrade jobs, click Fix Upgrade Jobs. This action reverts both upgrade triggers ('Upgrade' and 'Check Upgrade Script') to their base versions.
Export date and time formats
Because some export formats are intended for human consumption and others are intended for database usage, different methods provide date and time field information in different formats.
Excel: Date, Date-Time, and Time fields are all exported as their display values, displayed using a custom format instead of the system date format. Duration fields, however, export as the value stored in the database, which is an integer value of seconds.
Note: If the date and time format is hh:mm:ss in the glide.sys.date_format setting in System Properties, and you export time values to Excel, they show in 24-hour military time format. To display the exported values in standard 12-hour am/pm time formats, select the 1:30 PM time format in Format Cells > Time in Excel.
XML: All Date and Time fields export as the value stored in the database.
PDF: All Date and Time fields (including Duration) export as their display value.
CSV: All Date and Time fields export as the value stored in the database.
Plugin Identifiers¶
Sponge plugins are identified using a unique plugin ID and a human-readable plugin name. It is important that you choose a proper name for your project now, because other plugins will later interact with your plugin under your chosen plugin ID (e.g. when defining plugin dependencies). The plugin ID is also used for creating the default configuration folders for your plugin.
Note
The plugin ID must be lowercase and start with an alphabetic character. It may only contain alphanumeric characters, dashes or underscores. The plugin name does not have such a limitation and can even contain spaces or special characters.
Keep in mind your plugin ID will be the main identification of your plugin, used in other plugins as a dependency, for your configuration files, as well as for other properties stored for your plugin. That's why it is important to choose a proper plugin ID right from the start, because changing it later will be difficult.
Continue at Main Plugin Class for an introduction to setting up your plugin class.
Feature: #82014 - Extension scanner¶
See Issue #82014
Description¶
A new feature in the install tool called "Extension Scanner" finds code lines in extensions that may use TYPO3 core API which has been changed.
Currently, PHP core API changes are supported, and the extension scanner does its best to find, for instance, usages of a core class which has been deprecated.
The scanner configuration is maintained by the core team to keep the system up to date if a patch with a core API change is merged.
The extension scanner is meant as a developer tool to find code places which may need adaptation in order to raise extension compatibility with newer core versions.
Be aware that the scanner is only a helper: not everything is found, and it will also show false positives.
More details about this feature can be found at. | https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/9.0/Feature-82014-ExtensionScanner.html | 2019-06-16T04:17:11 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.typo3.org |
Basic Standard Types¶
Every pandapower network comes with a default set of standard types.
Note
The pandapower standard types are compatible with 50 Hz systems, please be aware that the standard type values might not be realistic for 60 Hz (or other) power systems.
Lines¶
Note
To add the optional column "alpha" to net.line, which is used if the power flow is calculated for a line temperature other than 20 °C, use the function pp.add_temperature_coefficient()
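As a brief illustration (a minimal sketch; the standard type name comes from the shipped standard type library and availability may vary between pandapower versions), a line can be created directly from a standard type:

import pandapower as pp

net = pp.create_empty_network()  # the basic standard types are loaded automatically
b1 = pp.create_bus(net, vn_kv=0.4)
b2 = pp.create_bus(net, vn_kv=0.4)

# create a line based on one of the shipped standard types
pp.create_line(net, from_bus=b1, to_bus=b2, length_km=0.5, std_type="NAYY 4x50 SE")

# list all line standard types available in this network
print(pp.available_std_types(net, element="line"))

# add the optional "alpha" column used for temperature-dependent power flow
pp.add_temperature_coefficient(net)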
Requirements and Installation¶
Requirements¶
These are the supported, tested versions of Django-MySQL’s requirements:
- Python: 3.5, 3.6, 3.7
- Django: 1.11, 2.0, 2.1, 2.2
- MySQL: 5.6, 5.7 / MariaDB: 10.0, 10.1, 10.2, 10.3
- mysqlclient: 1.3
Installation¶
Install it with pip:
$ pip install django-mysql
Or add it to your project’s
requirements.txt.
Add
'django_mysql' to your
INSTALLED_APPS setting:
INSTALLED_APPS = ( ... 'django_mysql', )
Django-MySQL comes with some extra checks to ensure your configuration for Django + MySQL is optimal. It’s best to run these now you’ve installed to see if there is anything to fix:
$ ./manage.py check
For help fixing any warnings, see Checks.
Extending your QuerySets¶
Half the fun features are extensions to QuerySet. You can add these to your project in a number of ways, depending on what is easiest for your code - all imported from django_mysql.models.
- class Model¶
The simplest way to add the QuerySet extensions - this is a subclass of Django's Model that sets objects to use the Django-MySQL extended QuerySet (below) via QuerySet.as_manager(). Simply change your model base to get the goodness:
# from django.db.models import Model - no more! from django_mysql.models import Model class MySuperModel(Model): pass # TODO: come up with startup idea.
- class QuerySet¶
The second way to add the extensions - use this to replace your model's default manager:
from mythings import MyBaseModel from django_mysql.models import QuerySet class MySuperDuperModel(MyBaseModel): objects = QuerySet.as_manager() # TODO: what fields should this model have??
- class QuerySetMixin¶
The third way to add the extensions, and the container class for the extensions. Add this mixin to your custom QuerySet class to add in all the fun:
from django.db.models import Model from django_mysql.models import QuerySetMixin from stackoverflow import CopyPasteQuerySet class MySplendidQuerySet(QuerySetMixin, CopyPasteQuerySet): pass class MySplendidModel(Model): objects = MySplendidQuerySet.as_manager() # TODO: profit
add_QuerySetMixin(queryset)¶
A final way to add the extensions, useful when you don't control the model class - for example with built-in Django models. This function creates a subclass of the QuerySet's class that has the QuerySetMixin added in and applies it to the QuerySet:
from django.contrib.auth.models import User from django_mysql.models import add_QuerySetMixin qs = User.objects.all() qs = add_QuerySetMixin(qs) # Now qs has all the extensions!
The extensions are described in QuerySet Extensions. | https://django-mysql.readthedocs.io/en/latest/installation.html | 2019-06-16T02:36:41 | CC-MAIN-2019-26 | 1560627997533.62 | [] | django-mysql.readthedocs.io |
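As a quick, hedged illustration of what the extensions provide (the method names below come from the QuerySet Extensions documentation; treat the exact keyword arguments as assumptions and check that page for details):

from django.contrib.auth.models import User
from django_mysql.models import add_QuerySetMixin

qs = add_QuerySetMixin(User.objects.all())

# approx_count() avoids a slow exact COUNT(*) on large InnoDB tables
print(qs.approx_count(fall_back=True))

# iter_smart() iterates over a large table in dynamically sized slices
for user in qs.iter_smart():
    pass  # process each user without loading the whole table at once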
DisassociateVpcCidrBlock
Disassociates a CIDR block from a VPC. To disassociate the CIDR block, you must specify its association ID, which you can get by using DescribeVpcs. You cannot disassociate the CIDR block with which you originally created the VPC (the primary CIDR block).
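For example, with boto3 (a minimal sketch; the association ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")

# The association ID comes from the CidrBlockAssociationSet returned by DescribeVpcs
response = ec2.disassociate_vpc_cidr_block(
    AssociationId="vpc-cidr-assoc-0123456789abcdef0"
)
print(response)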
Using Service-Linked Roles for Elastic Beanstalk
AWS Elastic Beanstalk uses AWS Identity and Access Management (IAM) service-linked roles. A service-linked role is a unique type of IAM role that is linked directly to Elastic Beanstalk. Service-linked roles are predefined by Elastic Beanstalk and include all the permissions that the service requires to call other AWS services on your behalf.
Elastic Beanstalk defines two types of service-linked roles:
Monitoring service-linked role – Allows Elastic Beanstalk to monitor the health of running environments and publish health event notifications.
Maintenance service-linked role – Allows Elastic Beanstalk to perform regular maintenance activities for your running environments. | https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-service-linked-roles.html | 2019-06-16T03:37:11 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.aws.amazon.com |
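If you want to create these roles ahead of time (for example, to review them before use), you can do so through IAM. A minimal, hedged sketch with boto3 follows; the service principal names are assumptions, so verify them against the Elastic Beanstalk documentation before relying on them:

import boto3

iam = boto3.client("iam")

# Monitoring service-linked role (assumed service principal name)
iam.create_service_linked_role(AWSServiceName="elasticbeanstalk.amazonaws.com")

# Maintenance service-linked role (assumed service principal name)
iam.create_service_linked_role(AWSServiceName="maintenance.elasticbeanstalk.amazonaws.com")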
CancelHandshake
Cancels a handshake. Canceling a handshake sets the handshake state to CANCELED. This operation can be called only from the account that originated the handshake. After a handshake is canceled, the recipient can no longer respond to it; after 30 days, it's deleted.
Request Syntax
{ "HandshakeId": "
string" }
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- HandshakeId
The unique identifier (ID) of the handshake that you want to cancel. You can get the ID from the ListHandshakesForOrganization operation.
Diego, the admin of an organization, previously sent an invitation to Anaya's account to join the organization. Diego later changes his mind and decides to cancel the invitation before Anaya accepts it. The following example shows Diego cancelling the handshake (and the invitation it represents). The output includes a handshake object that shows that the state is now CANCELED.
Sample Request

POST / HTTP/1.1
Host: organizations.us-east-1.amazonaws.com
Accept-Encoding: identity
Content-Length: 120
X-Amz-Target: AWSOrganizationsV20161128.CancelHandshake
X-Amz-Date: 20161130T21370065
Date: Mon, 30 Nov 2016 21:37:00 GMT

{ "HandshakeId": "h-examplehandshakeid111" }

Sample Response

{ "Handshake": {
    "Id": "h-examplehandshakeid111",
    "State": "CANCELED",
    "Action": "INVITE",
    "Arn": "arn:aws:organizations::111111111111:handshake/o-exampleorgid/invite/h-examplehandshakeid111",
    "Parties": [
      { "Id": "o-exampleorgid", "Type": "ORGANIZATION" },
      { "Id": "anaya@example.com", "Type": "EMAIL" }
    ],
    "Resources": [
      { "Type": "ORGANIZATION",
        "Value": "o-exampleorgid",
        "Resources": [
          { "Type": "MASTER_EMAIL", "Value": "diego@example.com" },
          { "Type": "MASTER_NAME", "Value": "Master Account" },
          { "Type": "ORGANIZATION_FEATURE_SET", "Value": "CONSOLIDATED_BILLING" }
        ] },
      { "Type": "EMAIL", "Value": "anika@example.com" },
      { "Type": "NOTES", "Value": "This is a request for Anaya's account to join Diego's organization." }
    ],
    "RequestedTimestamp": 1.47008383521E9,
    "ExpirationTimestamp": 1.47137983521E9
} }
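The same call through an SDK is a one-liner; for example with boto3 (a sketch; the handshake ID is the placeholder from the sample above):

import boto3

org = boto3.client("organizations")
response = org.cancel_handshake(HandshakeId="h-examplehandshakeid111")
print(response["Handshake"]["State"])  # expected: CANCELED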
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/organizations/latest/APIReference/API_CancelHandshake.html | 2019-06-16T03:45:49 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.aws.amazon.com |
In the expressions tab you can now easily import and export your NLP model.
Clicking the export button opens a dialog where you can choose which languages and intents you want to export.
Clicking import opens a modal that guides you through the steps of importing an NLP model. In the first step you can download a template for your NLP import, and upload your filled-in version.
The second step lets you choose between adding to or replacing your existing NLP model. Be very careful with replacing your model, this cannot be undone!
Entities are not yet supported by our NLP import and export. This means that no entities will be included in the csv download of your model.
If you add your import to the existing NLP model, entities will not be overwritten, but if you do completely replace your NLP model, all entities will be lost. | https://docs.chatlayer.ai/overview/natural-language-processing-nlp/nlp-import-and-export | 2019-06-16T02:53:58 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.chatlayer.ai |
Not at all! If your store is using one of our platform partners Shopify, BigCommerce or Magento, then installing Fera should require no developer or code changes and you can get started in just a few clicks.
You really should be a developer or hire one if you want to customize your Fera usage using this app. If you need a developer or want to hire us to customize things for you, simply drop us a line via the live chat or email support on our website.
You can also use Fera with other tools your business uses through our pre-built skills via the skills marketplace here.
Please let us know if you have any questions about anything. We offer a dedicated developer support, just ask about it when you contact us via live chat or email. | https://docs.fera.ai/ | 2019-06-16T03:12:32 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.fera.ai |
New Relic Browser supports the uploading of source maps.
Source maps support is primarily useful for "decoding" minified JavaScript. Minified JavaScript results in mostly useless error stack traces on New Relic Browser's JS errors page. Uploading source maps converts these errors to understandable stack traces, with useful references to code lines. This feature might also be useful for bundled or transpiled JavaScript code.
Access to this feature depends on your subscription level.
New Relic
rpm.newrelic.com/browser > (select an app) > JS errors > (select a JS error group): You can easily drag and drop a source map into the New Relic Browser UI to associate source maps with specific JavaScript files. You can also upload source maps via the API.
You can drag and drop, or upload, a source map file into the New Relic Browser UI to associate it with a specific JavaScript file. New Relic will then convert minified stack traces into un-minified traces and source code visible on the JS errors page.
To drag in or upload a source map to New Relic Browser, the source map must be on your local machine.
To associate a source map with an error stack trace on the JS errors page:
- Go to rpm.newrelic.com/browser > (select an app) > JS Errors, then select a JS error group. (Don't select an Errors without a stack trace group.)
From the selected JS error group, select the error instance details tab (next to the Overview tab).
- Generate source maps with UglifyJS
- Generate source maps using UglifyJS: When "uglifying" source files, specify a source map file name and include the original source content:
uglifyjs [source files] \ -o bundle.min.js \ --source-map bundle.min.js.map \ --source-map-include-sources \ -c -m
- Generate source maps with webpack
Generate source maps using webpack: In your production webpack config, simply specify source-map for the config.devtool property. The sourceMapFilename property of config.output is optional and defaults to [name].js.map.
devtool: 'source-map', output: { path: path.join(__dirname, 'dist'), filename: '[name].js', sourceMapFilename: '[name].js.map', }, | https://docs.newrelic.com/docs/browser/new-relic-browser/browser-pro-features/upload-source-maps-un-minify-js-errors | 2019-06-16T02:54:43 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.newrelic.com |
Tools Installation¶
Platforms - Computers with the following Operating Systems
- Windows 10 or 7
- Ubuntu version 14.0 or later
- MAC OS
Visual Studio Code
Visual Studio Code - can be downloaded from here:
ST-LINK Debugger Driver
- MacOS - ST-LINK drivers are automatically installed for MAC OS.
- Windows - ST-LINK V2 driver must be manually installed for Windows. The Windows driver is downloaded from the following page link:
- Ubuntu - please see step 5.
Installation of OpenIMU development platform
To install OpenIMU development platform:
Start Visual Studio Code.
On leftmost toolbar find “Extensions” icon and click on it.
In the text box “Search extensions on Marketplace” type “Aceinna” and hit enter
Install Aceinna Extension and Follow prompts.
First steps
After installation of “Aceinna” extension click on “Home” icon at the bottom of the screen. It will bring up Aceinna OpenIMU platform homepage. Click on “Custom IMU examples”, chose desired example and click “Import”.
The required example will be imported into working directory in folder:
C:\Users\<username>\Documents\platformio\Projects\ProjectName
Now you can edit, build and test the project. All your changes will remain in the above-mentioned directory and subdirectories. Next time when you return to development - open Aceinna “Home” page and click “Open Project”, choose “Projects” and select required project from the list.
The source tree of the imported project has the following structure:

project directory
├── .pioenvs          (build directory)
├── .piolibdeps       (platform library directory)
│   └── <library name>
│       ├── lib1
│       │   ├── src
│       │   └── include
│       ├── lib2
│       │   ├── src
│       │   └── include
│       └── ...
├── include           (user include files)
├── lib               (optional user library directory)
└── src               (user source files)
Compile and JTAG Code Loading
Once you have imported an example project, a good first step is to compile and download this application using your ST-LINK. At the bottom of the VS Code window is the shortcut toolbar shown below. To load an application to the OpenIMU with JTAG, simply click the Install/Download button while the ST-LINK is connected to your EVB.
The OpenIMU development environment uses PlatformIO’s powerful open-source builder and IDE. This on-line manual focuses on on OpenIMU specific information, and it does not attempt to fully discuss all of the IDE’s powerful features in depth. For more information on PlatformIO builder and IDE features include command line interface, scripting and more please see the PlatformIO
5. ST-LINK Install for Ubuntu (Manual Version)
Go to and read instructions carefully.
On your local Ubuntu machine, you will clone the aforementioned repository and make the project. This requires the following packages to be installed:
- CMake > v2.8.7
- Gcc compiler
- Libusb v1.0

# Run from source directory stlink/
$ make release
$ cd build/Release
$ sudo make install
# Plug ST-LINK/V2 into USB, and check the device is present
$ ls /dev/stlink-v2
Storage¶
See also
Unit Systems and Conventions
Create Function¶
pandapower.create_storage(net, bus, p_mw, max_e_mwh, q_mvar=0, sn_mva=nan, soc_percent=nan, min_e_mwh=0.0, name=None, index=None, scaling=1.0, type=None, in_service=True, max_p_mw=nan, min_p_mw=nan, max_q_mvar=nan, min_q_mvar=nan, controllable=nan)¶
- INPUT:
p_mw (float) - The momentary real power of the storage (positive for charging, negative for discharging)
max_e_mwh (float) - The maximum energy content of the storage (maximum charge level)
- OPTIONAL:
q_mvar (float, default 0) - The reactive power of the storage
sn_mva (float, default None) - Nominal power of the storage
soc_percent (float, NaN) - The state of charge of the storage
min_e_mwh (float, 0) - The minimum energy content of the storage (minimum charge level)
max_p_mw (float, NaN) - Maximum active power injection - necessary for a controllable storage in OPF
min_p_mw (float, NaN) - Minimum active power injection - necessary for a controllable storage in OPF
max_q_mvar (float, NaN) - Maximum reactive power injection - necessary for a controllable storage in OPF
min_q_mvar (float, NaN) - Minimum reactive power injection - necessary for a controllable storage in OPF
controllable (bool, NaN) - Whether this storage is controllable with the optimal power flow
- EXAMPLE:
create_storage(net, 1, p_mw=-30, max_e_mwh=60, soc_percent=1.0, min_e_mwh=5)
The state of charge soc_percent and the storage capacity max_e_mwh are provided as additional information for usage in controllers or other applications based on pandapower. They are not considered in the power flow!
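A minimal, self-contained sketch showing how a storage element fits into a small network (values are illustrative; positive p_mw means the storage is charging, as described above):

import pandapower as pp

net = pp.create_empty_network()
b1 = pp.create_bus(net, vn_kv=0.4)
b2 = pp.create_bus(net, vn_kv=0.4)
pp.create_ext_grid(net, bus=b1)
pp.create_line(net, from_bus=b1, to_bus=b2, length_km=0.1, std_type="NAYY 4x50 SE")

# 60 kWh storage at bus b2, currently charging with 30 kW, at 50% state of charge
pp.create_storage(net, bus=b2, p_mw=0.03, max_e_mwh=0.06, soc_percent=50, min_e_mwh=0.005)

pp.runpp(net)
print(net.res_storage)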
There are currently two CueServer 2 models available:
CS-900 CueServer 2 Pro
The CueServer 2 Pro is housed in an enclosure with removable brackets suitable for either 19” rack-mounting or desktop use. It features dual LAN ports, four field-replaceable DMX module slots and customizable front-panel button caps. See the CS-900 CueServer 2 Pro section for more details for this model.
CS-940 CueServer 2 DIN
The CueServer 2 DIN is housed in an enclosure with replaceable side brackets suitable for DIN-Rail or surface mounting. It features a single LAN port, two DMX input ports and two DMX output ports. See the CS-940 CueServer 2 DIN section for more details for this model.
Caching page output
Output caching is the most effective way to increase page performance. The output cache stores the full source code of pages, i.e. the HTML and client script that the server sends to browsers for rendering. When a visitor views a page, the server caches the output code in the application's memory. Until the cache expires, the system displays the page using the cached output. As a result, the application can quickly serve requests without running the page code or communicating with SQL servers.
We recommend using output caching for all possible pages. The drawback is that cached output can become outdated on dynamic pages, or if the website's content is updated. Output caching is not suitable for pages that need to refresh frequently (for example pages that update based on user input, or have randomly generated content). You cannot disable output caching for parts of pages.
However, the system provides several solutions that allow you to cache output even for pages with dynamic content:
- Set dependencies for the output cache (allows automatic clearing of the cache)
- Adjust how the system caches multiple versions of pages
- Use substitution macros to create dynamic content inside cached pages
- Disable full page output caching, but use partial caching for sections of the page (individual web parts)
Caching page output on the IIS level
You can also configure IIS Output caching for your web application. IIS Output caching is completely separate from the caching mechanisms in Kentico. When clients request a page whose valid output is cached on the IIS level, the server returns the response without requiring any processing from Kentico.
Note: Kentico only supports the User-mode policy for IIS output caching. You cannot use Kernel-mode caching.
Configuring full page output caching
To use output caching for your website's pages:
Allow output caching globally:
- Open the Settings application.
- Select the System -> Performance category.
- Check Enable output caching.
- Save the settings.
Enable output caching for individual pages:
- Open the Pages application.
Select the page in the content tree.
Open the Properties -> General tab.
- Set the Use output cache property to Yes (in the Output cache section).
- Specify the number of Cache minutes. The value must be greater than 0, and determines the expiration time of the page's output cache.
Save the page.
To enable output caching for all pages on your site, edit the properties of the root page. The underlying pages need to have the Use output cache property set to Inherit (this is the default state).
When a visitor requests the page, the system saves the full output code into the cache. Until the cache expires, the system displays the cached output without processing the page.
Setting output cache dependencies
The output cache of pages can become outdated when users update the website's pages and objects. Dependencies allow the system to automatically delete cached content whenever related objects are modified. The output cache provides default dependencies that clear the cache for pages:
- When a user modifies the given page's content, properties or page template
- Whenever a postback occurs on the page
By default, pages do NOT delete their output cache when you modify other pages or objects whose data is displayed on the page. For example, if a page contains a list of news articles and you update one of the news pages, the page displays the old content until the output cache expires.
To ensure that pages are always up-to-date, you need to set your own output cache dependencies:
- Edit the page in the Pages application.
- Switch to the Design tab.
- Add the Output cache dependencies web part anywhere on the page.
- Define the dependencies in the web part's Cache dependencies property. For more information and examples, refer to Setting cache dependencies.
- Click OK.
The system clears the page's output cache whenever an object specified by the dependencies is modified. With the cache cleared, the system fully processes the page on the next request, and caches the new output.
Note: If the page uses a shared page template, the dependencies apply to all pages with the given template.
Using persistent storage for the output cache
By default, the application stores the output cache of pages only in the memory. Websites lose all cached content if the application restarts. You can configure the system to save the output cache in the server's file system, which provides more persistent storage. The output cache then stores the content of the selected pages in both the application's memory and file system. The file system cache is used if the output of a requested page is not found in the memory cache.
Warning: Enabling the file system output cache may cause your website to serve outdated content. When clearing invalid output cache due to changes made to pages or other dependencies, the system can only remove cache files if the corresponding cache item also exists in the memory. The file system cache remains outdated if the memory cache is not available when the invalidation occurs (for example after an application restart or if you have different expiration times set for the two caching types).
As a workaround, you can manually clear the file system output cache by clicking Clear cache on the General tab of the System application.
To allow storage of output cache in the file system:
- Go to Settings -> System -> Performance.
- Type a number of minutes into the Cache output in file system (minutes) setting. The value must be greater than 0, and determines the expiration time of the output cache files.
- Save the settings.
To enable persistent output caching for pages:
- Open the Pages application.
- Edit individual pages on the Properties -> General tab.
- Set the Allow file system cache property to Yes (in the Output cache section).
Save the page.
To enable file system output caching for all pages on your site, edit the properties of the root page. The underlying pages need to have the Allow file system cache property set to Inherit (this is the default state).
Deleting the output cache of pages
To manually clear the output cache of a page:
- Open the Pages application.
Select the page in the content tree.
- Open the Properties -> General tab.
- Click Clear output cache in the Output cache section.
The page's output is removed from the cache in the server's memory. The next time a visitor opens the page, the system runs the page's code and saves the output into the cache again.
Note: The Clear output cache action does not delete the file system output cache. To manually clear the file system output cache, you need to delete all data cached by the application – open the System application and click Clear cache on the General tab.
Caching portions of the page output
If you cannot use full output caching on a page, consider enabling Partial caching for sections of the page. The partial cache stores the HTML output of individual web parts (components that make up the content of pages). Partial caching allows the system to refresh pages on every request, but still improves performance by caching at least portions of the page output. You can set up partial caching for any number of web parts on a single page.
Note: Partial caching is not available on precompiled websites.
Web parts do not use partial caching by default. You need to allow partial caching globally for your website, and then enable it for individual web part instances.
To allow partial caching:
- Open the Settings application.
- Select the System -> Performance category.
- Check the Enable partial caching setting.
- Save the settings.
To enable partial caching for web part instances:
- Open the Pages application.
- Edit the page containing the web part on the Design tab.
- Configure the web part instance (double-click).
- Expand the Performance category, and type a number of minutes into the Partial cache minutes property. The value must be greater than 0, and determines the expiration time of the web part's partial cache.
- Click OK.
When a visitor opens the page, the system saves the HTML output generated by the web part into the cache. Until the cache expires, the system renders the page without processing the web part, and displays the cached output instead.
Automatic clearing of partial cache
By default, the system deletes the partial cache of web part instances:
- When you modify the property configuration of the given web part instance.
- Whenever a postback occurs on the page containing the web part.
- Editable web parts (Editable text, Editable image) automatically clear their partial cache when the content of the parent page is updated.
You can set additional cache dependencies for web parts through the Partial cache dependencies property (in the Performance category). The system uses the dependencies to automatically clear the partial cache when the related objects are modified.
To disable the default dependencies listed above, uncheck Use default cache dependencies and enable the Preserve partial cache on postback property.
Tip: If you wish to use partial caching on pages that are not based on the portal engine, you can leverage standard ASP.NET output caching for controls.
Caching multiple output versions of pages
Web pages often display different content based on variable parameters, such as user permissions, language preferences, or the device and browser used to view the page. To compensate, the output cache stores multiple versions of pages. The system then automatically serves the appropriate cache version to visitors.
The output cache always creates separate cache items for pages according to the following variables:
- The page path (including the virtual directory and extension)
- The protocol in the request URL
You can configure the conditions that determine when the system uses separate or shared output cache:
- Open the Settings application.
- Select the System -> Performance category.
- Click Edit next to the Output cache variables setting.
Enable or disable the following output cache variables:
- Click Save.
The system adds the selected variables and their values into the names of output cache keys. As a result, the cache keys are unique for all combinations of the required conditions.
For example, the username variable ensures that the system stores a separate version of each page's output cache for every logged in user. If you disable the username option, the output cache does not distinguish between users — if a page's output is cached for one user, the system may load the same content for all other users (depending on the remaining output cache variables).
Tip: You can configure how the system shares partial output cache through the Partial cache variables setting. The variables apply globally to the partial cache of all web parts.
Caching of personalized content
Personalized content (for example content contained in web part zones or personalized web parts) is cached per personalization variant.
For example, when a page containing a web part zone with multiple personalization variants is served, the system caches only the personalization variant fitting the current request. When another request, meeting different personalization criteria, is made, the system reuses cached data for the static segments of the page (if still available) and caches only the different personalization variant of the web part zone. This way, the system avoids storing the static segments of the page multiple times. | https://docs.kentico.com/k82/configuring-kentico/optimizing-website-performance/configuring-caching/caching-page-output | 2019-06-16T03:43:45 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.kentico.com |
Potential issues and resolutions
Applies to Dynamics 365 for Customer Engagement apps version 9.x
Troubleshoot We can’t find any apps for your role error message on Dynamics 365 for phones and tablets.
When using Dynamics 365 for phones and tablets, you encounter the following message:
We can’t find any apps for your role. To check for recently-added apps, select Refresh. If you can’t find your app, change your search criteria and try again.
For more information, see Troubleshoot "We can’t find any apps for your role" error message.
Troubleshoot issues where user does not have the Dynamics 365 for mobile privilege
Make sure you have these required privileges before using the mobile app.
Troubleshoot error code 800c0019 on Windows Phones
If you get error code 800c0019 when you try to sign in to your Microsoft account while using the Dynamics 365 for phones or CRM for phones express apps, chances are that you have the wrong date and time settings on your Windows 8 phone. This can occur after updating your Windows 8 phone, removing and replacing the battery, or after a time change.
In most cases, your phone’s date and time is set automatically by your mobile operator. If it’s not, you need to set it manually so you can sign in to your Microsoft account successfully. Here’s how:
On Start, flick left to the App list and tap Settings.
Tap Date+time.
Turn off Set automatically.
Set the correct values for Time zone, Date, and Time.
Troubleshoot a Windows app start-up error
If you receive this error:
Additional steps may be needed to configure Dynamics 365 for Customer Engagement for this organization. Please contact your system administrator.
If you’re using a computer or tablet
If you’re using a Windows phone
You received this error because you’re trying to connect to an on-premises deployment of Customer Engagement, which is not supported for your version on Windows Phones. Windows Phone connection to Microsoft Dynamics 365 on-premises requires the Dynamics 365 for Customer Engagement apps for Windows app built for Windows 10. More information: Support for Dynamics 365 for phones and tablets
Error message: “This record is unavailable.”
If this message appears when a user starts the mobile app, taps the Home button, or selects Dashboards from the menu, the user likely doesn’t have access to the expected dashboards.
If you’re an admin, you can avoid users getting this error by making sure all mobile users have access to the sales dashboard:
In the web app, go to Settings > Customizations > Customize the System.
Click Dashboards.
Select Sales Dashboard.
Click Enable Security Roles.
Select Display to everyone and then click OK. If you prefer to display only to select security roles, be sure to select your user’s security role.
Click Publish.
Have your user close and open the mobile app so your dashboard changes will download.
If you’re an end user and you’re seeing this message on your home page, you can choose a different dashboard and set it as your home page:
From the mobile app, tap the menu and then tap Dashboards.
On the command bar, tap Select Dashboard and then select the dashboard you would like to use as your home page.
On the command bar, tap Set as Home.
If you’re an end user and you’re seeing this message on the dashboards page, you can create a personal dashboard through the web app and enable it for mobile:
In the web app, go to Sales > Dashboards.
Click New.
Click Properties.
Enter a name for your dashboard and select Enable for mobile.
Add the components you want on your dashboard and click Save.
In the mobile app, follow the previous procedure to select your new dashboard and set it as your home page.
Error message: “Your server is not available or does not support this application.”
Cause 1: The Customer Engagement server is down. Verify that the server is on and connected to your network.
Sample Trace Message for Cause 1:
“Dynamics CRM [Error] | Connection error: 404”
Cause 2: Your Dynamics 365 for Customer Engagement apps version is not supported. See What's supported for version support information.
Cause 4: This error can also occur if you enter an invalid URL. Make sure the same URL you have provided works to access Dynamics 365 for Customer Engagement in your browser on your device.
Sample Trace Messages for Cause 4:
“XMLHttpRequest: Network Error 0x2ee7, Could not complete the operation due to error 00002ee7.”
“Dynamics CRM [Error] | Connection error: 0”
Error message: "You haven't been authorized to use this app. Check with your system administrator to update your settings."
Cause 1: Verify that your Dynamics 365 for Customer Engagement security role includes the Use Dynamics 365 for tablets privilege. See "Required privileges" in Get started with Dynamics 365 for phones and Dynamics 365 for tablets.
Cause 2: This error can occur if you have a Dynamics 365 for Customer Engagement organization and your user has not been assigned a Dynamics 365 for Customer Engagement license. If you add a Dynamics 365 for Customer Engagement subscription to an existing Office 365 tenant, your user may not have a Dynamics 365 for Customer Engagement license assigned. If the user has the Global Administrator or Service Administrator role in the Microsoft Online Service Portal, you’re able to sign in to the Dynamics 365 for Customer Engagement web application to perform certain administrative actions, but you can’t perform end user tasks, such as creating records (for example, accounts, contacts, and leads) or configuring Dynamics 365 for tablets. When you sign in to the web application, you may notice that not all areas appear within the navigation (for example, Sales and Marketing are missing):
Access the Users and Groups section within the Microsoft Online Service Portal and verify you have a Dynamics 365 for Customer Engagement license assigned to your user record.
Error message: "You need an internet connection to use this app. Reconnect and try again."
Cause 1: This error can occur if you do not have an Internet connection. Verify you are connected to the Internet and can access the same URL in your web browser.
Cause 2: Check if you are using a preview build of Windows 8.1. So far this issue has only been reported with the preview version of Windows 8.1.
Error message: "Sorry, something went wrong while initializing the app. Please try again, or restart the app."
Cause 1: Permissions might not be set properly. See "Required privileges" in Get started with Dynamics 365 for phones and Dynamics 365 for tablets.
Cause 2: See the following KB article:
An error occurs in the Dynamics 365 for Customer Engagement app for users in child business units. For more information, see Sorry, something went wrong while initializing the app.
Sample Trace Message for Cause 2:
Error Message:System.NullReferenceException: Object reference not set to an instance of an object.
Microsoft.Crm.Application.WebServices.ApplicationMetadataService.<>c__DisplayClass30.<UserRolesChanged>b__2d(Entity role)
at System.Linq.Enumerable.Any[TSource](IEnumerable`1 source, Func`2 predicate)
at Microsoft.Crm.Application.WebServices.ApplicationMetadataService.UserRolesChanged(Guid[] clientUserRoles, DateTime syncTime, ExecutionContext context)
at Microsoft.Crm.Application.WebServices.ApplicationMetadataService.RetrieveUserContext(UserContextRetrieveRequest userContextRetrieveRequest)
Cause 3: This can occur if the download of the metadata failed. The next attempt to connect will fully regenerate the metadata and successfully connect. Microsoft is aware of an issue where metadata may fail to download due to a timeout and plans to address this issue in a future update.
Sample Trace Messages for Cause 3:
“Error occurred during complete refresh of Application/Entity/Attribute metadata”
“XMLHttpRequest: Network Error 0x2ef3, Could not complete the operation due to error 00002ef3.”
Error message: “The language installed on your company’s system isn’t available on the app. Please contact your system administrator to set up a supported language.”
Cause: This error will occur if one of the supported languages is not enabled in Dynamics 365 for Customer Engagement. For more information on the supported languages, see Dynamics 365 for tablets: Set up and use and expand What you need to use Dynamics 365 for tablets and Supported Languages.
Error message: “The process assigned to this record is unavailable or has been deleted.”
If you receive this message for a record which has a non-deleted process assigned to it, you should manually synchronize Dynamics 365 for tablets with your Dynamics 365 for Customer Engagement data. Close the Dynamics 365 for tablets app, reopen, and then choose to download the latest customizations. This procedure forces Dynamics 365 for tablets to check for updated customizations. Recently viewed data while you were connected is cached and synched. Record data like Accounts or Contacts are not synched. You can’t choose which data synchronizes to the device like you can with Microsoft Dynamics 365 for Outlook.
Error message: “This operation failed because you’re offline. Reconnect and try again.”
This error may occur for the following scenarios when you are using a Windows 8 device and you have a Dynamics 365 for Customer Engagement organization that uses Microsoft account (formerly named Live ID). This issue doesn’t occur for organizations provisioned through Office 365.
Cause 1: You are automatically authenticated as a different Microsoft account that is not a member of the Dynamics 365 for Customer Engagement organization. This may happen if you sign into your Windows 8 device and your domain account is connected to a Microsoft account. For example: you sign in to your device as <userid>@contoso.com (your domain account) and that account is connected to <userid>@live.com (a Microsoft account). If your connected account (for example, <userid>@live.com) is not a member of the Dynamics 365 for Customer Engagement organization, you will encounter this error. In this scenario, the error occurs after providing your URL, but you are never prompted for credentials. When you connect your domain account to a Microsoft account, that account will be used to automatically sign in to apps and services that use Microsoft account for authentication. If you’re using a Windows 8 device, use the steps listed here to check if your domain account is connected to a Microsoft account. If you’re using a Windows RT device, see the Windows RT section.
Windows 8
Swipe from the right side of the screen to access the charms bar and then tap Settings.
Tap Change PC settings.
Tap Users.
Check to see if under the Your Account section it says “This domain account is connected to <Your Microsoft account>”
Windows RT
If you are using a Windows RT device and need to authenticate as a Microsoft account that is different than the one you use to log on to your device, you must create another account and switch to that account when using the app. For example: you currently sign in to your Windows RT device as <userid>@live.com, but want to access your Dynamics 365 for Customer Engagement organization via the tablet app as <userid>@outlook.com. For more information on how to create a new account on your device, see Video: Create a user account.
Sample Trace Message for Cause 1:
The app couldn’t navigate to<OrganizationId> because of this error: FORBIDFRAMING.
Cause 2: This error may occur if you previously authenticated to the app as a different Microsoft account and chose the option “Keep me signed in”. Even after uninstalling and reinstalling the app, the token for the previous credentials is still stored on your device. If you are trying to connect as a different user, you will need to remove the token. To completely clear the app, after you uninstall the app, you must clear the Indexed DB folder (Drive:\Users\%USERNAME%\AppData\Local\Microsoft\Internet Explorer\Indexed DB). You may have to sign in as a different user and use the command prompt as an administrator to clear the Indexed DB folder. That is because some files in this folder can be held by the Host Process for Windows Tasks. Once the token is successfully removed, you should see the sign-in page after you enter your URL in the app.
The same error as Cause 1 may be found in the traces.
Cause 3: You have not accepted your invitation to the Dynamics 365 for Customer Engagement organization. If you attempt to access the same URL through your browser, you see a notification that you are invited to the organization but need to accept the invitation. Once you accept the invitation, you are able to configure the app successfully.
Sample Trace Message for Cause 3:
The app couldn’t navigate to<username>%40live.com&lc=1033 because of this error: FORBIDFRAMING.
For each of the causes listed previously, you may also see the following event logged in the traces:
“Authentication: Failed - cookie setup”
Cause 4: If you connect to a Dynamics 365 for Customer Engagement organization on an Android device, this error can occur if the certificate from the Dynamics 365 for Customer Engagement website or the federated server, such as AD FS, is not trusted by the device. To avoid this scenario, make sure to use a publicly trusted certificate or add the Certificate Authority certificate to the device. For more information, see KB article: While configuring Dynamics CRM for phones and tablets, you receive an error message.
Error message, Dynamics 365 for Good: “We’re sorry. Your server is not available or does not support this application.” See the causes and resolutions for this error message above.
Event 10001 messages appear in the Event Log when you run Dynamics 365 for Windows 8
The following event may be recorded multiple times to the Event Log, when Show Analytic and Debug Logs is enabled, on the device where Dynamics 365 for Windows 8 is running. Notice that, by default, Show Analytic and Debug Logs is disabled in Event Viewer and these messages won’t be recorded. More information: Enable Analytic and Debug Logs
Event Id: 10001
SEC7131 : Security of a sandboxed iframe is potentially compromised by allowing script and same origin access.
Verify the source of the messages. If the source is Dynamics 365 Server, these events don’t pose a security threat and can be ignored.
By design: “—d” added to URL
For Dynamics 365 for Customer Engagement users
To improve the reliability of DNS resolutions to Dynamics 365 for Customer Engagement organizations, Dynamics 365 for tablets modifies the organization URL used when signing in. When a user signs in, Dynamics 365 for tablets adds “—d” (two dashes + d) to the URL. For example, if the organization URL is, Dynamics 365 for tablets will change the URL to.
If a user needs to retry signing in, they’ll see “—d” in the web address. They can sign in with the modified URL or reset it to the URL normally used.
After providing credentials the app appears to load indefinitely and never completes
This can occur if the time on the device is not within a certain variance of the Dynamics 365 for Customer Engagement server. For example: you may encounter this issue if the time on the server is 2 PM on November 11th but the device is set to 2 PM on November 12th.
You may see events like the following logged multiple times in the trace files:
Dynamics CRM [PAL] | Authentication: Token Expired with Token Timeout value (-255674015) --- Retrieving new Auth Token from shim
For possible resolution, see Microsoft Dynamics CRM for Phone and Tablets cannot connect to Dynamics CRM organization due to length of TokenLifetime
Dynamics 365 for tablets users are repeatedly prompted for sign-in credentials and can’t sign in
Cause: This can occur if certain directories under the Dynamics 365 for Customer Engagement website have Windows Authentication enabled. For Dynamics 365 for tablets to successfully connect to a new deployment of Dynamics CRM Server 2013 or Dynamics CRM Server 2015, you must run a Repair of Dynamics CRM Server 2013 or Dynamics CRM Server 2015, on the server running IIS where the Web Application Server role is installed after the Internet-Facing Deployment Wizard is successfully completed.
Important
To resolve this issue by running Repair, the Dynamics 365 for Customer Engagement deployment must already be configured for claims-based authentication and IFD.
Note
When the logon prompt appears, it is an Active Directory logon prompt instead of the sign-in page of your Secure Token Service (STS) such as Active Directory Federation Services (AD FS). The prompt looks like the one shown here.
After you tap Cancel or enter credentials 3 times, you see the correct sign-in prompt.
Redirected URLs do not work when you configure Dynamics 365 for tablets or Dynamics 365 for phones
URLs that redirect, such as IIS host headers or link-shortening websites such as tinyurl or bitly, do not work when you use the URL in the Dynamics 365 for Customer Engagement apps web address field with Dynamics 365 for tablets or Dynamics 365 for phones during configuration.
For example, an host header for a Dynamics 365 for Customer Engagement apps online website URL that is actually, will not work and will display an error message. To work around this behavior, you must enter the actual web address for the Dynamics 365 for Customer Engagement organization. When this issue occurs and you have enabled logging, the information logged is similar to the following. Notice that the URLs in lines 2 and 3 are different. That difference indicates a redirected URL.
User entered URL:
Constructed server URL:
HTTP Response location:
Users not getting customizations
Users will not get customizations made to Customer Engagement if there are draft records present. Users should be encouraged to save records as soon as they go online.
Data cached for offline viewing remains after the entity is no longer enabled for Dynamics 365 for tablets
In Dynamics 365 for tablets, record data is cached as the user visits the record so the user can access the data when going offline.
This cached data persists after the entity is no longer enabled for Dynamics 365 for tablets (Settings > Customizations > Customize the System > [select an entity] > under Outlook & Mobile, deselect Dynamics 365 for tablets).
To remove the cached data, the user must sign out of Dynamics 365 for tablets, or Dynamics 365 for tablets must be reconfigured or uninstalled.
Customization changes do not appear in Dynamics 365 for tablets
Cause 1: The customizations (metadata) from your Dynamics 365 for Customer Engagement organization are cached on your device. The app checks for updated metadata after 24 hours or any time you reopen the app. For customization changes to become available immediately, you must completely close and then reopen the app. If new metadata is found, you will be prompted to download it. For more information on how to completely close an app, refer to the help for your operating system or reference one of the articles provided:
Windows 8: How do I close an app?
iPad: Force an app to close
Android: How to force close Android apps
Cause 2: You may be seeing a different form than the one you customized. If you have multiple forms for an entity, you will see the first form in the form order that you have access to. This is different than the web application where you see the last form you used and have the ability to change between forms.
Private Browsing not supported in Safari
If you enable Private Browsing on your iPad in your Safari browser, you will see the following error message when you attempt to connect to your Customer Engagement organization: “Dynamics 365 for Customer Engagement has encountered an error.” You will need to disable Private Browsing. Tap the address bar, and then tap Private.
Web app differences in mobile browsers
For differences you can expect to find in the web app when you’re accessing it from a mobile device, see Support for Dynamics 365 for phones and Dynamics 365 for tablets.
Clipboard data – available to admins and customizers
Dynamics 365 for Customer Engagement System Administrators or System Customizers can access other users’ Clipboard data for users of Windows 8 and 8.1 devices.
Users can view queue items in another person’s queue
A user viewing records in Dynamics 365 for tablets can view records in another user’s queue.
Update the Dynamics 365 for Good app before updating to Dynamics CRM Online 2015 Update 1
App restart required after reconfiguring Dynamics 365 for Good
After you reconfigure Dynamics 365 for Good, the app can get stuck in a loop. You need to close and reopen the app.
On your iPad, press the Home button two times quickly. You'll see small previews of your recently used apps.
Swipe to find the Dynamics 365 for Good app.
Swipe up on the app's preview to close it.
Tap the Dynamics 365 for Good app icon to launch the app and configure for the new org.
Prevent click for mapping and Dynamics CRM Online 2015 Update 1
For users of version 1.0 (1.0.0) of the Dynamics 365 for Good app that have updated to Dynamics CRM Online 2015 Update 1, note that the Prevent click for mapping setting does not work.
To prevent click for mapping in version 1.0 (1.0.0), admins should enable the Require a secure browser for opening URLs setting in the Good Control server, as shown here.
The Prevent click for mapping setting works as expected in Dynamics 365 for Good app version 1.1 (1.1.0). We recommend updating to the latest version of the Dynamics 365 for Good app rather than applying this workaround.
Issue still not resolved?
If the information provided previously doesn’t resolve your issue, either Post your issue in the Dynamics CRM Community or Contact Technical Support.
The following are some suggested details to provide:
What are the specific symptoms you encounter? For example, if you encounter an error, what is the exact error message?
Does the issue only occur for users with certain Dynamics 365 for Customer Engagement security roles?
Does the issue only occur on certain devices but works correctly for the same user on another device?
If you attempt to connect to a different Dynamics 365 for Customer Engagement organization that does not include your customizations, does the same issue occur? If the issue only occurs with your customizations, provide a copy of the customizations if possible.
Does the issue still occur after uninstalling the app and reinstalling it?
- What type of device (ex. iPad 4th Generation, Microsoft Surface, etc…) are you using and what is the version of the operating system (ex. iOS 6.0, Windows 8, etc…)?
See also
Set up Dynamics 365 for phones and tablets
Dynamics 365 for phones and tablets User's Guide
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/dynamics365/customer-engagement/mobile-app/troubleshooting-things-know-about-phones-tablets | 2019-06-16T03:28:31 | CC-MAIN-2019-26 | 1560627997533.62 | [array(['../admin/media/mobile-app-sales-marketing-missing.png',
'Sales and Marketing tabs missing Sales and Marketing tabs missing'],
dtype=object)
array(['../admin/media/mobile-app-social-engagement-icense.png',
"Terry Adam's Dynamics 365 for Customer Engagement apps License Terry Adam's Dynamics 365 for Customer Engagement apps License"],
dtype=object)
array(['../admin/media/mobile-app-welcometimeout.png',
'Welcome screen timeout Welcome screen timeout'], dtype=object)
array(['../admin/media/mobile-app-adfs-login.png',
'Active Directory Sign in Active Directory Sign in'], dtype=object)
array(['../admin/media/mobile-app-adfs-login-2.png',
'ADFS Sign-in prompt ADFS Sign-in prompt'], dtype=object)
array(['../admin/media/good-click-mapping.png',
'Require a secure browser for opening URLs Require a secure browser for opening URLs'],
dtype=object) ] | docs.microsoft.com |
CSS expressions no longer supported for the Internet zone
Starting with Internet Explorer 11, CSS expressions are no longer enabled for webpages loaded in the Internet zone. CSS expressions are supported for pages loaded in other zones (such as the intranet zone) when pages are rendered in IE7 standards mode or IE5 quirks mode.
For recent versions of Internet Explorer, CSS expressions (also called dynamic properties) are deprecated and work only in IE7 standards mode or IE5 quirks mode. (CSS expressions were deprecated in Internet Explorer 8.)
Starting with IE11, CSS expressions no longer work in any document mode for webpages loaded in the Internet zone.
Update affected pages to use features defined by modern standards, such as HTML5, CSS3, and others. | https://docs.microsoft.com/en-us/previous-versions/windows/internet-explorer/ie-developer/compatibility/dn384050(v=vs.85) | 2019-06-16T03:02:01 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.microsoft.com |
Released on:
Thursday, October 20, 2016 - 15:05
New featuresand
NEW_RELIC_DATASTORE_DATABASE_NAME_REPORTING_ENABLED
Bug fixes
- The agent will no longer crash the process when an express param handler is executed when a transaction is not active. | https://docs.newrelic.com/docs/release-notes/agent-release-notes/nodejs-release-notes/node-agent-1320 | 2019-06-16T02:54:08 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.newrelic.com |
Difference between revisions of "Allow anydesk"
Latest revision as of 16:49, 7 June 2019
Contents, First SafeSquid checks for that user and decide whether this user is allowed to access remote application or not, if yes then SafeSquid gives access to that user, before giving the access it will check for user-agent..
Note: Remote applications like Any desk and Ammy admin does not supporting SSO authentication.If SSO authentication is enabled you have to bypass it.
Remote applications like Remote desktop application, Download managers, etc.(Anydesk and Teamviewer ) should get automatically block if HTTPS inspection is enabled. No need to configure any policy for blocking purpose.
Configuration on anydesk
- Set proxy on anydesk application
- If authentication is enabled you have to specify Username and Password on any desk application.
- Anydesk should not take auto proxy settings : If you set proxy in IE browser or chrome browser and you select "Try to detect the proxy server" option on anydesk, it should not take proxy automatically. You must have to configure proxy on anydesk application.
- Any desk and ammy admin is not supporting SSO authentication.If SSO authentication is enabled you have to bypass it. | https://docs.safesquid.com/index.php?title=Allow_anydesk&diff=6699&oldid=6087 | 2019-06-16T03:35:34 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.safesquid.com |
User Menu
A user can click his name at top right corner to open user menu:
Linked Accounts
Linked accounts are used to link multiple accounts to each other. In this way, a user can easily navigate through his/her accounts using this feature.
User can link new accounts or delete already linked accounts by clicking the "Manage accounts" link.
In order to link a new account, user must enter login credentials of related account.
UserLinkAppService class is used to manage application logic for account linking, UserLinkManager class is used to manage domain logic for account linking.
Profile Settings
My settings is used to change user profile settings:
As shown here, admin user name can not be changed. It's considered a special user name since it's used in database migration seed. Other users can change their usernames. ProfileAppService is used to get/change settings.
Login Attempts
All login attempts (success of failed) are logged in the application. A user can see last login attempts for his/her account. UserLoginAppService is used to get login attempts from server.
Change Picture
A user can change own profile picture. ProfileController is used to upload and get user profile pictures. Currently, JPG, JPEG, GIF and PNG files are supported, you can extend it.
Change Password
ProfileAppService is used to change password.
Download Collected Data
A user can download his/her collected data using this menu item.
AccountController is used to logout the user and redirect to Login page. | https://docs.aspnetzero.com/documents/aspnet-core-mvc/latest/Features-Mvc-Core-User-Menu | 2019-06-16T03:40:56 | CC-MAIN-2019-26 | 1560627997533.62 | [array(['https://raw.githubusercontent.com/aspnetzero/documents/master/docs/en/images/user-menu-4.png',
'User menu'], dtype=object)
array(['https://raw.githubusercontent.com/aspnetzero/documents/master/docs/en/images/linked-accounts-3.png',
'User menu'], dtype=object)
array(['https://raw.githubusercontent.com/aspnetzero/documents/master/docs/en/images/link-new-account-1.png',
'link new account'], dtype=object)
array(['https://raw.githubusercontent.com/aspnetzero/documents/master/docs/en/images/user-settings-3.png',
'User settings'], dtype=object)
array(['https://raw.githubusercontent.com/aspnetzero/documents/master/docs/en/images/login-attempts-1.png',
'Login attempts'], dtype=object)
array(['https://raw.githubusercontent.com/aspnetzero/documents/master/docs/en/images/gdpr_download_item.png',
'Login attempts'], dtype=object) ] | docs.aspnetzero.com |
Configuration Options Reference
This section lists and describes, by container and then by domain, the configuration settings found in the <Genesys Softphone Installation Directory>/Genesys Softphone/GenesysSoftphone/Softphone.config file. For an example of the configuration file, see Configuring Genesys Softphone.
Basic Container
ImportantYour environment can have up to six SIP URIs (Connectivity sections) that represent six endpoint connections with SIP Server.
Genesys Container
The second Container ("Genesys") holds a number of configurable settings that are organized into domains and sections. These settings do not have to be changed, but can be customized.
An overview of the settings in this container and the valid values for these settings is provided here:
For more information about these options, see SIP Endpoint SDK for .NET Developer's Guide.
This page was last modified on December 8, 2017, at 10:37.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/SP/8.5.2/dep/Options | 2019-06-16T02:37:12 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.genesys.com |
Pseudo-destinations
Some of the Destinations do not refer to data that will be stored, but exist only to collect together values for other operations. While this may seem to add verbosity, it allows your control file to use the full data transformation and mapping features.
When viewing a Model in the admin UI, pseudo-destinations will be flagged, and their name will describe their purpose.
To load an existing Destination for modification, or adding data in dependent destinations, a ‘load’ pseudo-destination must be set with information that identifies the specific object, followed by a
load Instruction.
Note that control files for the User sync should not
load or
new any Destinations.
Structured values
Some values are structured, in that they are formed from multiple fields. For example, names of people are formed of their first and last names.
The model will include pseudo-destinations for each of the structured values used in the model, which are used to assemble the data, then store it in the actual Destination with the
field-structured Instruction. | https://docs.haplo.org/import/model/pseudo | 2019-06-16T03:22:08 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.haplo.org |
The "Who's Online" module displays information about users that are visiting your site at a particular moment.
To 'add' a new Who's Online module or 'edit' an existing Who's Online module, navigate to the Module Manager:
Click the 'New' button and click on Who's Online in the modal popup window.
To 'Edit' an existing Who's Online module, in the Module Manager click on the Who's Online Module's Title or click the Who's Online module's check box and then click the Edit button in the Toolbar.
The Who's Online Module displays the number of Anonymous Users (e.g. Guests) and Registered Users (ones logged-in) that are currently accessing the Web site.: | http://docs.joomla.org/index.php?title=Help32:Extensions_Module_Manager_Who_Online&diff=prev&oldid=106417 | 2014-07-10T07:28:45 | CC-MAIN-2014-23 | 1404776405824.29 | [] | docs.joomla.org |
Platform design for modern standby
To modern connected standby, a PC hardware platform must meet a specific set of requirements. These requirements govern the selection of the SoC chip, DRAM, networking device, and other key hardware components.
Enabling modern standby on a PC platform requires careful planning and engineering. The primary reason for additional engineering is to deliver the low power consumption that the end user expects when the system is in a sleep state and the screen is turned off. Users will not tolerate excessive battery drain, particularly relative to the very good battery life of most smartphones.
The second largest engineering investment for modern standby is to enable low-power communications (Wi-Fi, mobile broadband, and Ethernet). Each communications device includes a significant amount of autonomous processing capability and firmware to allow the platform's SoC or core silicon to power off while maintaining connectivity.
Low-power core silicon (CPU, SoC, DRAM)
The modern standby power state requires frequent transitions between a low-power idle mode and short periods of activity. Through all these transitions, the system is in standby and the screen stays turned off. This model allows the operating system and apps to be always on and running while the hardware delivers low idle power. This combination results in low average power and long battery life during standby.
A modern standby platform with long battery life includes low-power core silicon (or SoC) and DRAM that have the following characteristics:
- The capability to switch between idle and active modes in less than 100 milliseconds. The active mode allows code to run on the CPU(s), but does not necessarily allow accessing the storage device or other host controllers or peripherals. The idle mode can be a clock-gated or power-gated state, but should be the state that has the lowest power consumption for the SoC and DRAM.
- DRAM technology and size to minimize power consumption while in self-refresh mode. Current modern connected standby PCs typically use mobile DRAM (LP-DDR) or low-voltage PC DRAM (PC-DDR3L, PC-DDR3L-RS).
- A power engine plug-in (PEP) that coordinates the low-power state of host controllers on the SoC with SoC-wide power states. The PEP is a small, lightweight driver that abstracts the SoC-specific power dependencies. All modern connected standby platforms must include a PEP that, at minimum, communicates to Windows when the SoC is ready to enter the lowest-power idle mode. For Intel based platforms, the PEP is already present as an inbox driver where SoC specific information is conveyed directly through ACPI FW tables.
Communications and networking devices
The networking device(s) in a modern connected standby-capable platform are responsible for maintaining connectivity to the cloud while the SoC remains in a low-power idle mode. This capability is achieved by offloading basic network maintenance to the networking device.
The network devices in a modern connected standby-capable platform must be capable of protocol offloads. Specifically, the network device must be capable of offloading Address Resolution Protocol (ARP), Name Solicitation (NS), and several other Wi-Fi-specific protocols. To offload protocol processing, the small microcontroller on the networking device responds to network requests while the SoC remains in a low-power idle mode, saving battery power during sleep.
The network devices in a modern connected standby-capable platform must also be capable of detecting important incoming network packets and waking the SoC if necessary. The ability to detect these packets is called wake-on-LAN (WoL) patterns. With WoL patterns, the network device wakes the SoC or core silicon only when an important network packet is detected, which allows the SoC to otherwise remain in a low-power idle mode. The list of important packets to detect is provided to the networking device by Windows and correspond to the system services or apps on the lock screen.
For example, Windows always asks the network adapter to listen for incoming packets from the Windows Notification Service (WNS). Apps that are pinned to the lock screen can also request that the network device listen for app-specific packets for real-time communications, such as Skype.
For more information about protocol offloads, see Protocol Offloads for NDIS Power Management. For more information about WoL patterns, see WOL Patterns for NDIS Power Management.
System designers who build modern connected standby-capable PCs are highly encouraged to build a deep working relationship with their networking hardware vendors.
Platform requirements for modern standby
To support modern standby, a PC platform must meet the technical requirements summarized in the following table. | https://docs.microsoft.com/en-us/windows-hardware/design/device-experiences/platform-design-for-modern-standby | 2020-08-03T18:18:19 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.microsoft.com |
Customizing Appearance
This article points to the RadComboBox API that can be used to customize its appearance using styles and templates.
ItemTemplate
To customize the appearance of the items when RadComboBox's ItemsSource is set, use the ItemTemplate property. An example can be found in the Binding to Collection article.
Additionally, you can use the ItemTemplateSelector property to implement a DataTemplateSelector for the items.
Styling the TextBox Input
RadComboBox shows a text input area when in edit mode (IsEditable=True) represented by a TextBox control. The control can be customized via the TextBoxStyle property of RadComboBox. Read more, in the TextBoxStyle article.
Styling the Selection Box
The Selection Box part of RadComboBox is customized via the SelectionBoxTemplate, MultipleSelectionBoxTemplate and EmptySelectionBoxTemplate properties. Read more, in the Selection Box Template article.
Editing the Control Template
You can edit the control template of RadComboBox in order to achieve visualization and functionality that is not provided out of the box or via the built-in API. To do this, extract the ControlTemplate of the control and modify the elements in it. Read more about extracting templates in the Editing Control Templates article.
To apply the customized ControlTemplate, use the NonEditableTemplate and EditableTemplate properties of RadComboBox. NonEditableTemplate is applied when the IsEditable property of the control is set to False (default). EditableTemplate is applied when IsEditable is True.
All elements in the ControlTemplate that contains "PART_" in their Name should be present in the template. Those are required elements used in code by the control. Removing the required elements will lead to missing functionality or runtime errors. | https://docs.telerik.com/devtools/silverlight/controls/radcombobox/styles-and-templates/combobox-styles-templates-overview | 2020-08-03T18:15:36 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.telerik.com |
An Act to amend 49.45 (23) (a) and 49.471 (4) (a) 4. b.; and to create 49.471 (1) (cr), 49.471 (4g), 118.60 (10m), 119.23 (10m) and 946.94 of the statutes; Relating to: fraud in parental choice programs, Medicaid expansion, eligibility for BadgerCare Plus and BadgerCare Plus Core, and providing a criminal penalty. | http://docs.legis.wisconsin.gov/2015/proposals/ab855 | 2020-08-03T17:11:32 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.legis.wisconsin.gov |
To remove a resource hierarchy from a single server in the LifeKeeper cluster, do the following:
- On the Edit menu, select Resource, then Unextend Resource Hierarchy.
- Select the Target Server where you want to unextend the Postfix resource. It cannot be the server where the Postfix resource is currently in service. (This dialog box will not appear if you selected the Unextend task by right clicking on a resource instance in the right pane.) Click Next.
- Select the Postfix hierarchy to unextend and click Next. (This dialog will not appear if you selected the Unextend task by right clicking on a resource instance in either pane).
- An information box appears confirming the target server and the Postfix resource hierarchy you have chosen to unextend. Click Unextend.
- Another information box appears confirming that the Postfix resource was unextended successfully. Click Done to exit the Unextend Resource Hierarchy menu selection.
Feedback
Thanks for your feedback.
Post your comment on this topic. | http://docs.us.sios.com/spslinux/9.4.1/en/topic/unextending-a-postfix-resource-hierarchy | 2020-08-03T18:12:50 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.us.sios.com |
The following image illustrates the flow of the tracking tool, and how the access is given to the recipient to the web site. Later on we will explain how you can also embed this experience in your own site or mobile app.
The detail of the steps are :
A request of delivery is generated in our system via API or DASH tools and is broadcasted to our drivers network.
Driver confirms the request of delivery and starts the journey to pickup at the time required.
A unique token is generated to gain access to the tracking link. A notification is sent to the recipient through SMS or EMAIL with the URL and token of the tracking site, when the driver picks up the packages.
The recipient opens the tracking link in the web site and is able to fix the address, contact information, report a problem and ultimately start a conversation with the driver to solve any problems on the field.
Businesses can provide real time tracking for their customers giving them access to live information and instant updates about their product delivery.
A token is generated in our system given a metadata that contains either the email or the phone of the recipient. Depending on your configurations we send the URL access of the real time tracking web site through 2 channels:
SMS : If your recipient information contains a phone number and you configure your account to send SMS and charge you the country fee.
EMAIL : If your recipient information contains an email address . This option is set by default for all accounts. You can customize your email and subject.
WHATSAPP : Ask to our sales representative about this new channel to send notifications.
The secure url of the tracking page is sent through these mediums once the delivery changes its status. For example you can configure to send the notification to the recipient each time the driver marks in his app that "is going" to the client address.
Check the basic customization you can configure in our DASH APP here
When the recipient opens the real time tracking site, he will be able to:
Report a problem The recipient is able to report a missing information or a bad geolocation of his address. We provide a drag and drop tool to fix the location.
Real time chat with driver: The recipient can establish a conversation via live chat with the driver in order to send instant relevant information that will help fulfill an effective delivery.
We will describe upcoming features and tools you can access. Using our technology, you can create deep integrated and customized look and feel that will give your customer a whole different digital experience.
To integrate real time tracking within your app or your website contact our team at developers@shippify.co and we can help you over with our latest SDKs and Widgets. | https://docs.shippify.co/deliveries/general/real-time-tracking | 2020-08-03T17:56:19 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.shippify.co |
Join our Slack workspace to get the latest updates on the app development, provide feedback, and vote on features in the backlog.
Smart Attachments is a document management app for Confluence. It allows you to create a space storage for storing important project documents in one place. You manage and organize documents within folders, you can quickly browse through folders and locate files at once. | https://docs.stiltsoft.com/display/public/SAFC/Managing+folders | 2020-08-03T17:53:57 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.stiltsoft.com |
administrator password as soon as possible. Follow the steps below:
- Log in to the application as an administrator.
- Select the “Users & Groups -> Users” menu item.
In the resulting list of users, select your user name.
Click the “Modify” button below the user profile details.
Enter a new password in the “Password” field.
Click “Save” to save your changes. | https://docs.bitnami.com/installer/apps/dolibarr/administration/change-admin-password/ | 2020-08-03T18:26:07 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.bitnami.com |
We use JWT specifications to generate your access keys. You can read more about it on :
To generate a new API key, log into your MyCustomizer dashboard. In the left side bar, click on your account name, then click on "Account Settings" and "API keys".. Ex: keep it in a separate configuration instead of hardcoding it in your codebase. : | https://docs.mycustomizer.com/api/accessing-the-api-endpoints | 2020-08-03T17:09:27 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.mycustomizer.com |
Create a link to a scorecard Users with the admin role can create UI actions that allow users to view scorecards from tables. Before you beginRole required: assessment_admin or admin Procedure Generate assessable records you want to evaluate. For example, you might create a metric type called Project to assess project management records. Navigate to System Definition > UI Actions. Open the View Scorecard record. Right-click the header bar and select Insert and Stay from the context menu to create a duplicate record. Change the Table name to the table on which you want the UI action to appear. For example, you might select Project [pm_project]. Do not edit the Action name field or the Condition script. Save the record. Navigate to the table on which you created the UI action and open an assessable record. In this example, navigate to Project > Projects > All. Open any record in the list. Click View Scorecard under Related Links to open the scorecard for that assessable record. The scorecard appears with the title in the form of <table display name> Scorecard. For example, a scorecard for an assessable record in the Project [pm_project] table is named Project Scorecard.Note: Content does not appear in the scorecard unless the associated assessable record has assessment results or related live feed conversations. Insert a new View Scorecard UI action record for each table where you want the related link to appear. Related tasksExport a quiz scorecard as an imageRelated referenceAssessment scorecard averagesAssessment scorecard categoriesAssessment scorecard category metricsAssessment scorecard head-to-head compare viewAssessment scorecard historyLive feed view of assessable recordsAssessment scorecard ratings | https://docs.servicenow.com/bundle/orlando-servicenow-platform/page/administer/assessments/task/t_CreateALinkToAScorecard.html | 2020-08-03T18:24:23 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.servicenow.com |
Table of Contents
Product Index
This product is a fantasy based skin texture and simple hair set. It will work well with alien, mutant, creepy or scary scenes. Included are 8 full skin textures, 4 eye sclera textures, 6 eyelash textures, 12 eye textures, 5 lip textures, 5 mouth/tongue textures and 5 teeth textures. Also included is 1 hair figure with 6 shiny texture presets and 1 white matte texture for re-coloring.
The textures and hair in this set will work with Genesis 8. | http://docs.daz3d.com/doku.php/public/read_me/index/52647/start | 2020-08-03T19:37:57 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.daz3d.com |
TOPICS×
Live Copy Overview Console
The Live Copy Overview enables you to:
- View/manage inheritance across a site:
- View the blueprint tree and corresponding live copy structure, together with their inheritance status
- Change the inheritance status; for example, suspend, resume
- View blueprint and live copy properties
- Perform rollout actions
Opening the Live Copy Overview
You can open the Live Copy Overview from the:
Opening Live Copy Overview - References for a Blueprint Page
The Live Copy Overview can be opened from the References side panel of the Sites console:
- In the Sites console, navigate to your blueprint page and select it .
- Open the References panel and select Live Copies .You can also open References first and then select the blueprint.
- Select Live Copy Overview to show and use the overview of all live copies related to the selected blueprint.
- Use Close to exit and return to the Sites console.
Opening Live Copy Overview - Properties of a Blueprint Page
The Live Copy Overview can be opened when viewing properties of a blueprint page:
- Open Properties for the appropriate blueprint page.
- Open the Blueprint tab - the Live Copy Overview option will be shown in the top toolbar:
- Select Live Copy Overview to show and use the overview of all live copies related to the current blueprint.For further details see also the Knowledge Base article Livecopy status message - Up-to-date/Green/In Sync .
- Use Close to exit and return to the Sites console.
Using the Live Copy Overview
The Live Copy Overview can also be used to peform actions on the live copy:
- Open the Live Copy Overview .
Actions for a Live Copy Page
When you select a live copy page, the following actions are available:
- Edit
- Open the live copy page for editing.
- View information about the status and inheritance.
- Synchronize a live copy to pull changes from the source to the livecopy.
- Reset a live copy page to remove all inheritance cancellations and return the page to the same state as the source page.
- Temporarily deactivates the live relationship between a live copy and its blueprint page.
- Resume allows you to reinstate a suspended relationship.
- Permanently removes the live relationship between a live copy and its blueprint page.
Relationship Status
The Relationship Status console has two tabs providing a range of functionality:
Relationship Status Information
This tab provides detailed information about the status of the relationship between the blueprint and live copy:
Live Copy Information
This tab allows you to view and edit the live copy configuration:
| https://docs.adobe.com/content/help/en/experience-manager-64/administering/introduction/msm-livecopy-overview.html | 2020-08-03T18:47:21 | CC-MAIN-2020-34 | 1596439735823.29 | [array(['/content/dam/help/experience-manager-64.en/help/sites-administering/assets/chlimage_1-362.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-administering/assets/chlimage_1-363.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-administering/assets/chlimage_1-364.png',
None], dtype=object) ] | docs.adobe.com |
Speeding
2: {
3: public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
4: {: }
13:
14: public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
15: {.
Avi | https://docs.microsoft.com/en-us/archive/blogs/dditweb/speeding-up-image-loading-in-wpf-using-thumbnails | 2020-08-03T18:37:49 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.microsoft.com |
Authentication infrastructure¶
-
-
QgsAuthManager the entry point
Adapt plugins to use Authentication infrastructure
-
Aviso
Despite our constant efforts, information beyond this line may not be updated for QGIS 3. Refer to for the python API documentation or, give a hand to update the chapters you know about. Thanks.
Introdução¶
User.
QgsAuthManager the entry point¶
The
QgsAuthManager singleton
is the entry point to use the credentials stored in the QGIS encrypted
Authentication DB, i.e. the
qgis-auth.db file under the
active user profile folder.
This class takes care of the user interaction: by asking to set master password or by transparently using it to access crypted stored info.
Init the manager and set the master password¶
The following snippet gives an example to set master password to open the access to the authentication settings. Code comments are important to understand the snippet..
Remove entry from authdb¶
We can remove an entry from Authentication Database using it’s
authcfg identifier with the following snippet:
authMgr = QgsAuthManager.instance() authMgr.removeAuthenticationConfig( "authCfg_Id_to_remove" )
Leave authcfg expansion to QgsAuthManager¶
The best way to use an Authentication Config stored in the
Authentication DB is referring it with the unique identifier
authcfg. Expanding, means convert it from an identifier to a complete
set of credentials.
The best practice to use stored Authentication Configs, is to leave it
managed automatically by the Authentication manager.
The common use of a stored configuration is to connect to an authentication
enabled service like a WMS or WFS or to a DB connection.
Nota
Take into account that not all QGIS data providers are integrated with the
Authentication infrastructure. Each authentication method, derived from the
base class
QgsAuthMethod
and support a different set of Providers. For example the
certIdentity () method supports the following list
of providers:
In [19]: authM = QgsAuthManager.instance():
In the upper case, the
wms provider will take care to expand
authcfg
URI parameter with credential just before setting the HTTP connection.
Aviso
The developer would have to leave
authcfg expansion to the
QgsAuthManager, in this way he will be sure that expansion is not done too early.
Usually an URI string, built using the
QgsDataSourceURI
class, is used to set a data source in the following way:
rlayer = QgsRasterLayer( quri.uri(False), 'states', 'wms')
Nota. | https://docs.qgis.org/3.4/pt_PT/docs/pyqgis_developer_cookbook/authentication.html | 2020-08-03T18:04:37 | CC-MAIN-2020-34 | 1596439735823.29 | [array(['../../_images/QgsAuthConfigSelect.png',
'../../_images/QgsAuthConfigSelect.png'], dtype=object)
array(['../../_images/QgsAuthEditorWidgets.png',
'../../_images/QgsAuthEditorWidgets.png'], dtype=object)] | docs.qgis.org |
AVE (PEL)
This function returns the average exchange rate between the source currency and the currency to which it is being converted. The average rate is associated with a specified time period and a specified scenario. Optionally, you can override the exchange rate by entity.
Remarks
This function may only be used in Currency rules. In addition, you must create a model-to-model association between the financial model of interest and the Exchange Rate assumption model.
The PEL compiler cannot generate SQL code for this function.
See Also
Other Resources
PEL currency conversion functions | https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint-2007/bb795374(v=office.12) | 2021-05-06T14:37:45 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.microsoft.com |
User Guide Version 1.0.0 Table of Contents
1. INTRODUCTION
- Magento Chase payment gateway extension specializes for capturing payment from credit and debit card to your Chase account.
- Rootways Chase Extension is very easy to install and configure. Just enable payment method, add your Chase information and you can use Chase Chase Payment Gateway Extension. It’s very easy and fast!
2.1 License Key Settings:
- Log in to Admin Panel and then click STORES -> Configuration -> ROOTWAYS EXTENSIONS -> Chase Payment -> Settings
- Below is the screen shot of Chase Payment Gateway license key settings. Please enter license key for this extension that was emailed by us to you after your purchase.
2.2 Chase Payment Method Management:
- Go to STORES -> Configuration -> SALES -> Payment Methods -> Chase Payment Method -> By Rootways Inc.
- You can see below screen. Fill up all required detail and save it.
- Enable: You can enable/disable this payment method using this option.
- Title: Title of the payment method to be visible at front side.
- Gateway URL: Add the Gateway URL of the Chase account.
- Username: Add the username for the Chase account.
- Password: Add the password for the Chase account.
- Merchant ID: Add Merchant ID of the Chase account.
- Terminal ID: Add Terminal ID of the Chase account.
- Bin: Add Bin of the Chase Chase Payment Method you can see it on the checkout page, as in the screenshot below. Now you can capture ample amount of orders using Chase.
- That’s how easy it is to use Chase Payment Gateway extension by Rootways. Please contact us for any queries regarding Magento and custom web-development of E-commerce websites. | https://docs.rootways.com/magento-2-chase-payment-module-user-guide/ | 2021-05-06T13:02:03 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.rootways.com |
After you connect the appliance, configure the nodes in your Mac or Windows terminal emulator. Follow the steps in this checklist.
If you completed ThoughtSpot’s site survey form and returned it to ThoughtSpot Support before ThoughtSpot shipped the appliance, the appliance may be pre-configured for your network environment and ready to install and connect to your network.
If the network configuration was not pre-set, then this step must be done as part of the installation process.
Follow these steps to determine the configuration status of your appliance.
- SSH into your cluster. Run
ssh [email protected]<nodeIP>.
Replace
nodeIPwith your specific network information.
- Run
tscli cluster status.
$ tscli cluster status
- If the output shows READY, and looks like the cluster status output in the next article, your appliance is configured.
If your status is not READY, continue with the installation process outlined below.
Step 1: SSH into your cluster
SSH into your cluster with admin credentials.
- Run the command
ssh [email protected]<cluster-IP>or
ssh [email protected]<hostname>on the command line.
Replace
clusterIPor
hostnamewith your specific network information.
- Enter your admin password when prompted.
Ask your network administrator if you don’t know the password.
Step 2: Change to the
install directory
In your terminal, change directory to
/home/admin/install by running the command
cd /home/admin/install. If your
/install subdirectory does not exist, you may have to use the
/home/admin directory.
$ cd /home/admin/install
Step 3: Get a template for network configuration
Run the
tscli cluster get-config command to get a template for network configuration. Redirect it to the file
nodes.config. You can find more information on this process in the nodes.config file reference.
$ tscli cluster get-config |& tee nodes.config
Step.
Edit only the parts of the nodes.config file that are explicitly discussed in Parameters of
nodes.config. If you delete quotation marks, commas, or other parts of the code, it may cause setup to fail.
Step 5: Configure the nodes
Configure the nodes in the
nodes.config file using the
set-config command.
Run
$ cat nodes.config | tscli cluster set-config in your terminal.
If the command returns an error, refer to set-config error recovery.
$ cat nodes.config | tscli cluster set-config Connecting to local node-scout Setting up hostnames for all nodes Setting up networking interfaces on all nodes Setting up hosts file on all nodes Setting up IPMI configuration Setting up NTP Servers Setting up Timezone Done setting up ThoughtSpot
Step the cluster
Next, install your cluster.
Additional resources
As you develop your expertise in network configuration, we recommend the following ThoughtSpot U course:
See other training resources at
| https://docs.thoughtspot.com/6.1/appliance/hardware/configure-nodes-smc.html | 2021-05-06T13:17:21 | CC-MAIN-2021-21 | 1620243988753.97 | [array(['/6.1/images/ts-u.png', 'ThoughtSpot U'], dtype=object)] | docs.thoughtspot.com |
Task Progress window
The Task Progress window displays detailed information about the task progress. For example, when performing the Direct Network Scan of a Site, you can monitor the progress status of each discovery or audit operation, the number of succeeded and failed operations, the number of operations that remain to perform, the total number of operations including succeeded, remained and failed ones, and the number of discovered (enumerated) Network Nodes.
In case of any operation failures, Alloy Discovery shows the corresponding messages in the Status column (for details, see Audit Troubleshooting). The Skipped status indicates that the operation for the same Network Node was invoked by multiple tasks at the same time. To evade the failure during the data processing, one task is automatically assigned to perform the operation, when all the other operations for the Node are skipped. Alloy Discovery can also automatically resume operations that were forcefully interrupted. For example, if the Inventory Server was restarted during Direct Network Scan Audit, the operations will also be restarted from the point of interruption. For other tasks, the Interrupted status will be shown in the Task List and a new task will be added automatically. Alloy Discovery will update the title of a newly-added task with the (Restarted) ending displayed in the grid. For example, Check for New Snapshots (Restarted).
Additionally, the Task Progress window shows the Computer Name, Start Time, and Total Time of the operation.
In the Task Progress window, the following actions are available:
To start performing the operation from the beginning:
Click Restart. You can also restart all the operations in a task by clicking Restart All or only the failed once, by clicking Restart Failed from a drop-down list.
To stop performing the operation:
Click Cancel. You can also cancel all the operations in a task by clicking Cancel All from a drop-down list.
NOTE: You can not restart or cancel the operation in the Uploading status.
Right-click a conflict record in the grid and choose Resolve Conflicts. The Conflict Resolution section opens, where you can handle the conflict as needed. For details, see Conflict Resolution.
NOTE: You can use the Resolve Conflicts option only if you have the administrative privileges.
To change the quick filter of the progress data grid:
Alloy Discoveryhas several predefined quick filters for the progress data grid: Show All, Show Detected (default filter), Show Succeeded, Show Failed. You can change the applied filter as follows:
Click the filter down arrow above the grid and choose the quick filter from the drop-down list.
To copy the text from the grid:
Right-click the selected row, choose Copy Text to Clipboard from a drop-down list, and paste the data to the desired file by pressing CTRL+V. For example, you can copy information about the occurred failure or error.
Right-click in the grid and choose Find from the drop-down list (or press CTRL+F). Enter a keyword or phrase into the Find Text field, and specify other search criteria. Select the Case Sensitive check box to search for text that matches the capitalization of the text you entered. You can narrow the search by selecting a certain direction.
To find next occurrence of the text:
Right-click in the grid and choose Find Next from a drop-down list (or press F3).
To hide the Task Progress window:
Click Close. | https://docs.alloysoftware.com/alloydiscovery/8/help/using-alloy-discovery/task-progress-window.htm | 2021-05-06T12:47:35 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.alloysoftware.com |
A newer version of this page is available. Switch to the current version.
BandedGridHitInfo.Band Property
Namespace: DevExpress.XtraGrid.Views.BandedGrid.ViewInfo
Assembly: DevExpress.XtraGrid.v18.2.dll
Declaration
Property Value
Remarks
The Band property returns a band object in the following cases:
- the test point is over a band header;
- the test point is over a data cell displayed within a column that belongs to a band.
In all other cases, the Band property returns null (Nothing in Visual Basic).
See Also
Feedback | https://docs.devexpress.com/WindowsForms/DevExpress.XtraGrid.Views.BandedGrid.ViewInfo.BandedGridHitInfo.Band?v=18.2 | 2021-05-06T13:00:07 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.devexpress.com |
It is possible to customize themes, or even to create your own from scratch, with Mautic. To do this go to the Theme Manager, open the drop down menu next to the pre-installed theme you want to modify and download it.
To customize the downloaded theme, unzip the files to an empty folder. Name the folder as the new name of your theme. Then edit the following files to amend the theme paths and name:
Mautic themes are written in HTML and TWIG, to make amendments to the structure or layout simply edit the files in the new theme and upload them to your instance. Ensure that your hosting provider does not have caching enabled, as this can sometimes prevent changes to CSS files being replicated instantly. | https://docs.mautic.org/en/themes/customizing-themes | 2021-05-06T13:08:38 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.mautic.org |
Windows file system backups use the default Snapshot copy:
When creating a backup, you can also add a descriptive tag to help identify the backup. In contrast, if you want to use a customized backup naming convention, you need to rename the backup after the backup operation is complete. | https://docs.netapp.com/ocsc-40/topic/com.netapp.doc.ocsc-con/GUID-628C811E-846E-471E-A714-2E1D0AB52233.html?lang=en | 2021-05-06T13:01:01 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.netapp.com |
Release Orchestration means the management, direction, and automation of a software release (or Go Live) process. All the steps, checks, gates and validations a typical software release go through can be managed by a Release Orchestration system like Vamp. Steps can include a certain percentage of visitors, and/or a specific visitor-segment using Layer7 elements like: geo-location, device-type, headers, cookies or IP addresses. Multiple steps can be chained one after the other. For each step, a Release Orchestration system can validate for both technical and business metrics if the release is still within acceptable quality-range. These ranges are typically defined by a combination of DevOps and SRE “golden metrics”, thresholds, and error-budgets, SLA/SLO based targets, and ML-powered anomaly detection. There can also be rolled-up top-line metrics, like Vamp Health (tm) that give an “early warning signal” when the quality of a release decreases. A Release Orchestration system can orchestrate releases within environments (intra-cluster), and over different environments and clusters (inter-cluster). A Release Orchestration system typically uses traffic-shaping mechanisms like proxies, loadbalancers, ingresses and service-meshes for it’s segmentation.
Continuous Delivery or Deployment is the technical act of making sure runtime artifacts (like containerised services and applications), that come out of a CI pipeline, are being deployed to a specific environment, and are running correctly. Releasing is the process of “going live” with the new software version, in a controlled and safe way. It’s customer, business and value focused, instead of technically focused.
Vamp is a Release Orchestration system. It extends and enriches your current CI/CD solution with automated and smart features to get a controlled, safe and scalable release process in place. So: not really, allthough Vamp Vamp has some elements of very advanced CD tools
No, Vamp’s DNA is very much “cloud native” and in such it works tightly integrated with Kubernetes and k8s-related cloud-native components. But Vamp can also apply it’s release-automation with non-k8s traffic-shaping mechanisms like HAProxy and Nginx, or Cloud-vendor loadbalancers like AWS ALB. Vamp’s event and metric detection, validation and ML processes can also be fed with non-k8s metric stores, datasources and other systems like AWS Cloudwatch, Google Stackdriver, event-streams like Kafka, SQS or NATS, Google Analytics, Sentry.io etc.
You can configure a service mesh to create a static routing or traffic-split situation. But the service-mesh knows nothing about when it’s safe to go to the next release-step (or rollback), and also nothing about what this step needs to be. In other words, a service-mesh is an abstraction layer on top of proxies. A Release Orchestration system like Vamp is a smart engine that makes use of and drives a service-mesh (and or other components like Ingress and external loadbalancers) by providing it with dynamic configuration, and continuously observing the effects and results of that configuration.
Feature flags and toggles are awesome, and perfect to use when you are still developing certain features, but you don’t want to expose them to your audience yet, but you want to be able to build and deploy your code. Or if you want to have semi/permanent feature packs to customer segments. Canary Releasing, which is an important pattern and feature in Release Orchestration systems like Vamp, are all about creating a safe. fast and automated release process of going live with your new software functionalities or updates. We believe that Layer4 and Layer7 are a better solution to create traffic splits, conditional routing and dynamic releasing, because you don’t need to instrument your code, there is no code overhead by adding additional logic in your code, and the L4/L7 mechanisms are already there anyway.
Vamp doesn’t add any components inside your application end-2-end flow, it simply dynamically configures the L4/L7 traffic shaping mechanisms that you already have in your infrastructure anyway. More complex rules can add negligible overhead, but solutions like HAproxy, Nginx and Envoy are super-efficient.
Very. We use best-practices like mTLS, RBAC, Auth0 and Hashicorp Vault time-based tokens. Only rolled-up and sanitized high-level metrics are passed from the client infrastructure to our central management environment. All data is isolated and encrypted. Vamp is very Enterprise savvy and has this onboard by design.
Yes, we can provide a managed private Vamp Cloud environment for you to deploy and run on-premises on your own infrastructure. This would only be a non-standard implementation, so ask sales for how this looks commercially versus the Vamp Cloud standard implementation.
No, Vamp can also work with plain Ingress solutions like Nginx and Contour (Envoy based). Vamp’s release agent will check if the selected ingress solution is already installed and running, and if not, will automatically install and configure it for you.
Yes, Vamp also has support for “headless” services. We have support for KNative and AWS Fargate in Q1 2021
See our pricing page. Reach out to us for special discounts for opensource, community and non-profit projects.
Yes you can. Vamp offers upgrades to service levels to enterprise level. Go to for the details | https://docs.vamp.cloud/faq | 2021-05-06T12:24:07 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.vamp.cloud |
Assistance Portal - Activate/Deactivate Organization [EN]
Dokumentation auf Deutsch
Deactivate
As an Organization Owner, you can deactivate the Organization by clicking on the "DEACTIVATE" button.
If you are sure to deactivate this Organization, click on the "DEACTIVATE" button. After doing this, all Users from the this Organization will receive an email, that the Organization was deactivated.
The Organization is deactivated. Now the users cannot create sessions from this Organization.
Activate
If you want to activate the Organization, click on the "ACTIVATE" button.
All Users from the this Organization will receive an email, that the Organization was activated again. | http://docs.xrgo.io.s3-website-us-east-1.amazonaws.com/english/applications-%5Ben%5D/assistance-portal-%5Ben%5D/assistance-portal-user-manual-%5Ben%5D/assistance-portal-activate%252Fdeactivate-organization-%5Ben%5D/ | 2021-05-06T12:27:15 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.xrgo.io.s3-website-us-east-1.amazonaws.com |
cellrank.datasets.lung¶
cellrank.datasets.
lung(path='datasets/lung_regeneration.h5ad', **kwargs)[source]¶
Regeneration of murine lung epithelial cells at 13 time points from [Lung20].
scRNA-seq dataset comprising 24,051 cells recorded using Dropseq [Macosko15] at 13 time points spanning days 2-15 past lung bleomycin injury. Data was filtered to remove control cells as well as later time points which are more spaced out. We wanted to focus on the densely sampled days where RNA velocity [Manno18] [Bergen20] can be used to predict the future cellular state.
Contains raw spliced and un-spliced count data, low-dimensional embedding coordinates as well as original cluster annotations. | https://cellrank.readthedocs.io/en/stable/api/cellrank.datasets.lung.html | 2021-05-06T13:05:10 | CC-MAIN-2021-21 | 1620243988753.97 | [] | cellrank.readthedocs.io |
).
The notification contains:
- The vads_subscription field that indicates the recurring payment reference.
- The vads_recurrence number field that indicates the installment number.
- The vads_occurrence_type field that indicates the installment in question (first, nth or last installment).
- The vads_trans_status field that indicates the payment status (accepted or refused).
If the payment has been refused:
- description (vads_payment_error) is set to 107 - The payment method associated with the token is no longer valid.
Special case of daily recurrences
- The payment from the previous day (that corresponds to the effective date)
- And the payment from the current day
To avoid this, it is recommended to transmit an effective date the day after the recurring payment is created. | https://docs.lyra.com/en/collect/form-payment/subscription-token/lifecycle-of-a-recurring-payment.html | 2021-05-06T13:42:37 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.lyra.com |
Step 1
Install modman -
Step 2
Run the following modman command:
$ modman clone -b 'APIv2'
Step 3
Configure the plugin by going to "System" -> "Configuration" -> "Metrilo Analytics"
You can take the credentials from the Settings/Installation page of your Metrilo project.
Step 4
Flush your magento cache.
Step 5
The last step importing your historical orders by choosing the store on the "Current Configuration Scope", while still on the Configuration page -> Metrilo Analytics, and hit the Import button (if you use a Magento MultiStore installation, you will need to make the import for each store separately). | https://docs.metrilo.com/en/articles/1846137-integrating-metrilo-with-magento-1 | 2021-05-06T12:05:07 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.metrilo.com |
Extract (export) a list of customers
If you want to use a list of customers and emails with another email tool or for Facebook remarketing, for example, you can export a CSV file from the Metrilo database.
The CSV button is on the top of the page in the Customer Database tab. It works with the current selection of customers so the CSV will include only those people. For more, remove (all) filters.
Important note
This is the format you need to adhere to:
2011-01-31 09:30:00
2011/01/31 09:30:00
January 31 2011 09:30:00
31 January 2011 09:30:00
Jan 31 2011 09:30:00
31 Jan 2011 1 09:30:00 | https://docs.metrilo.com/en/articles/944435-how-to-export-customer-lists-to-use-with-other-email-marketing-softwares | 2021-05-06T13:36:13 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.metrilo.com |
Newton Series Release Notes¶
1.11.0¶
New Features¶
Add list, show, and reset-status admin commands for snapshot instances.
Returns
access_keyas part of
access_listAPI response for API microversions >= ‘2.21’.
Added parameter to change share type when performing migration.
Share Migration now has parameters to force share migration procedure to maintain the share writable, preserve its metadata and be non-disruptive when migrating.
Added parameter to change share network when performing migration.
Deprecation Notes¶
Renamed Share Migration ‘force_host_copy’ parameter to ‘force_host_assisted_migration’, to better represent the parameter’s functionality.
API version 2.22 is now required for all Share Migration APIs. | https://docs.openstack.org/releasenotes/python-manilaclient/newton.html | 2021-05-06T13:17:38 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.openstack.org |
How can I connect an Instagram account without logging in to Instagram?
If you don't wish to log in to an Instagram account to connect it to Spotlight, or if you're working with a client who, rightfully so, doesn't want to provide their Instagram login details to you, Spotlight offers its own Access Token Generator. You or your client will be able to generate an access token (and user ID for Business accounts) that can then be used to connect the Instagram account to Spotlight. | https://docs.spotlightwp.com/article/784-how-can-i-connect-an-instagram-account-without-logging-in-to-instagram | 2021-05-06T13:45:08 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.spotlightwp.com |
Overview
Open Source Library for cross-platform development library.
Version: SDL2-2.0.8
Reference :
Simple DirectMedia Layer(SDL) is a cross-platform development library designed to provide low level hardware abstraction layer to computer multimedia hardware components.
SDL on Tizen manages video, audio, input devices, threads and timers.
For 3D graphics, it can use OpenGL ES or Vulkan.
The following parts of SDL are not included in the Tizen Native APIs.
- SDL_system.h
: This header contains functions for advanced, platform-specific functionality, such as Android, iOS, WinRT, and DirectX
- SDL_render.h, SDL_pixels.h, SDL_rect.h
: This header contains functions for 2D rendering. SDL on Tizen only provides rendering using OpenGL ES or Vulkan.
- SDL_filesystem.h, SDL_rwops.h, SDL_loadso.h, SDL_cpuinfo.h, SDL_endian.h, SDL_bits.h, SDL_system.h, SDL_haptic.h, SDL_shape.h, SDL_touch.h, SDL_gesture.h, SDL_messagebox.h, SDL_joystick.h, SDL_gamecontroller.h, SDL_clipboard.h
: In Tizen, some APIs related to file access, shared object loading, platform and CPU information, and power management are not supported.
SDL on Tizen can provide the rotation degree when the device orientation changes.
In some cases, such as internet access, your application requires privileges. For more information, please consult libsdl.org.
Tutorial: Build and Test a Serverless Application with AWS Lambda
You can build a serverless Lambda application by using an AWS Toolkit for Visual Studio
template.
The Lambda project templates include one for an AWS Serverless Application, which is the AWS Toolkit for Visual Studio implementation of the AWS Serverless Application Model (AWS SAM).
For prerequisites and information about setting up the AWS Toolkit for Visual Studio, see Using the AWS Lambda Templates in the AWS Toolkit for Visual Studio.
Topics
Create a New AWS Serverless Application Project
Open Visual Studio, and on the File menu, choose New, Project.
For Visual Studio 2017:
In the New Project dialog box, expand Installed, expand Visual C#, and select AWS Lambda.
For Visual Studio 2019:
In the New Project dialog box, ensure that the Language, Platform, and Project type drop-down boxes are set to "All ..." and type aws lambda in the Search field.
There are two types of project to choose from:
AWS Lambda projects for creating a project to develop and deploy an individual Lambda function.
AWS Serverless Applications projects for creating Lambda functions with a serverless AWS CloudFormation template. AWS serverless applications enable you to define more than just the function. For example, you can simultaneously create a database, add IAM roles, etc., with serverless deployment. AWS serverless applications also enable you to deploy multiple functions at one time.
Select the AWS Serverless Application with Tests (.NET Core - C#) template.
For Visual Studio 2017:
Enter "Blogger" for the Name, enter the desired Location, etc., and then click OK.
For Visual Studio 2019:
Click Next. In the next dialog, enter "Blogger" for the Name, enter the desired Location, etc., and then click Create.
The Select Blueprint page shows several Lambda function templates.
Choose the Blog API using DynamoDB blueprint, and then choose Finish to create the Visual Studio project.
Examine the Files in the Serverless Application
Blog.cs
Blog.cs is a simple class used to represent the blog items that are stored in Amazon DynamoDB.
Functions.cs
Functions.cs defines the C# functions to expose as Lambda functions. There are four functions
defined to manage a blog platform:
GetBlogsAsync: gets a list of all the blogs.
GetBlogAsync: gets a single blog identified by the query parameter ID or by the ID added to the URL resource path.
AddBlogAsync: adds a blog to DynamoDB table.
RemoveBlogAsync: removes a blog from the DynamoDB table.
Each of these functions accepts an APIGatewayProxyRequest and returns an APIGatewayProxyResponse.
You expose these Lambda functions as HTTP APIs by using Amazon API Gateway.
The
APIGatewayProxyRequest contains all the information representing the HTTP request. The
GetBlogAsync task finds the blog ID in the resource path or query string.
public async Task<APIGatewayProxyResponse> GetBlogAsync(APIGatewayProxyRequest request, ILambdaContext context)
{
    string blogId = null;
    if (request.PathParameters != null && request.PathParameters.ContainsKey(ID_QUERY_STRING_NAME))
        blogId = request.PathParameters[ID_QUERY_STRING_NAME];
    else if (request.QueryStringParameters != null && request.QueryStringParameters.ContainsKey(ID_QUERY_STRING_NAME))
        blogId = request?.QueryStringParameters[ID_QUERY_STRING_NAME];
    ...
}
The default constructor for this class passes the name of the DynamoDB table storing the blogs as an environment variable. This environment variable is set when Lambda deploys the function.
public Functions()
{
    // Check if a table name was passed in through environment variables and, if so,
    // add the table mapping
    var tableName = System.Environment.GetEnvironmentVariable(TABLENAME_ENVIRONMENT_VARIABLE_LOOKUP);
    if(!string.IsNullOrEmpty(tableName))
    {
        AWSConfigsDynamoDB.Context.TypeMappings[typeof(Blog)] = new Amazon.Util.TypeMapping(typeof(Blog), tableName);
    }

    var config = new DynamoDBContextConfig { Conversion = DynamoDBEntryConversion.V2 };
    this.DDBContext = new DynamoDBContext(new AmazonDynamoDBClient(), config);
}
serverless.template
The
serverless.template is the AWS CloudFormation template used to deploy the four functions.
The parameters for the template enable you to set the name of the DynamoDB table,
and choose whether you want DynamoDB to create the table or to assume the table is
already created.
The template defines four resources of type
AWS::Serverless::Function.
This is a special meta resource defined as part of the AWS SAM specification.
The specification is a transform that is applied to the template as part of the AWS CloudFormation deployment.
The transform expands the meta resource type into the more concrete resources, like
AWS::Lambda::Function and
AWS::IAM::Role.
The transform is declared at the top of the template file, as follows.
{ "AWSTemplateFormatVersion" : "2010-09-09", "Transform" : "AWS::Serverless-2016-10-31", ... }
The
GetBlogs declaration is similar to the function declarations.
"GetBlogs" : { "Type" : "AWS::Serverless::Function", "Properties": { "Handler": "Blogger::Blogger.Functions::GetBlogsAsync", "Runtime": "dotnetcore1.0", "CodeUri": "", "Description": "Function to get a list of blogs", "MemorySize": 256, "Timeout": 30, "Role": null, "Policies": [ "AWSLambdaFullAccess" ], "Environment" : { "Variables" : { "BlogTable" : { "Fn::If" : ["CreateBlogTable", {"Ref":"BlogTable"}, { "Ref" : "BlogTableName" } ] } } }, "Events": { "PutResource": { "Type": "Api", "Properties": { "Path": "/", "Method": "GET" } } } } }
Many of the fields are similar to those of a Lambda project deployment.
In the
Environment property, the name of the DynamoDB table is passed in as an environment variable.
The
CodeUri property tells CloudFormation where your application bundle is stored in Amazon S3.
Leave this property blank.
The toolkit fills it in during deployment, after it uploads the application bundle
to S3 (it won't change the template file on disk when it does so).
The
Events section is where the HTTP bindings are defined for your Lambda function.
This is all the API Gateway setup you need for your function.
You can also set up other types of event sources in this section.
One of the benefits of using AWS CloudFormation to manage the deployment is you can also add and configure any other AWS resources necessary for your application in the template, and let AWS CloudFormation take care of creating and deleting the resources.
Deploy the Serverless Application
Deploy the serverless application by right-clicking the project and choosing Publish to AWS Lambda.
This launches the deployment wizard, and because all the Lambda configuration was
done in the
serverless.template file, all you need to supply are the following:
The name of the CloudFormation stack, which will be the container for all the resources declared in the template.
The S3 bucket to upload your application bundle to.
These must exist in the same AWS Region.
Because the serverless template has parameters, an additional page is displayed in
the wizard so you can specify the values for the parameters.
You can leave the
BlogTableName property blank and let CloudFormation generate a unique name for the table.
You do need to set
ShouldCreateTable to
true so that DynamoDB will create the table.
To use an existing table, enter the table name and set the
ShouldCreateTable parameter to
false.
You can leave the other fields at their default values and choose Publish.
Once the publish step is complete, the CloudFormation stack view is displayed in AWS Explorer. This view shows the progress of the creation of all the resources declared in your serverless template.
Test the Serverless Application
When the stack creation is complete, the root URL for the API Gateway is displayed on the page. If you click that link, it returns an empty JSON array because you haven't added any blogs to the table. To get blogs in the table, you need to make an HTTP PUT method to this URL, passing in a JSON document that represents the blog. You can do that in code or in any number of tools. This example uses the Postman tool, which is a Chrome browser extension, but you can use any tool you like. In this tool, you set the URL and change the method to PUT. In the Body tab, you put in some sample content. When you make the HTTP call, you can see the blog ID is returned.
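If you prefer to script this check instead of using Postman, a short Python sketch like the following exercises the same endpoints. It is only an illustration: the base URL is a placeholder for your own AWS Serverless URL, and the Name and Content field names are assumed from the blueprint's Blog class.

import requests

# Placeholder -- replace with the AWS Serverless URL shown for your stack.
BASE_URL = "https://example123.execute-api.us-east-1.amazonaws.com/Prod"

# PUT a new blog; the AddBlogAsync function returns the generated blog ID.
create = requests.put(BASE_URL, json={"Name": "First post", "Content": "Hello from Lambda"})
create.raise_for_status()
blog_id = create.text.strip('"')
print("Created blog with ID:", blog_id)

# GET the list of blogs; the new entry should appear in the JSON array.
listing = requests.get(BASE_URL)
listing.raise_for_status()
print(listing.json())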
Go back to the browser with the link to the AWS Serverless URL and you can see you are getting back the blog you just posted.
Using the AWS Serverless Application template, you can manage a collection of Lambda functions and the application's other AWS resources. Also, with the AWS SAM specification, you can use a simplified syntax to declare a serverless application in the AWS CloudFormation template.
dtype=object)
array(['images/template-addresources.png', 'Add a resource'], dtype=object)
array(['images/serverless-publishmenu.png', 'Publish the project'],
dtype=object)
array(['images/serverless-first-page.png', 'Enter publish details'],
dtype=object)
array(['images/serverless-second-page.png', 'Edit template parameters'],
dtype=object)
array(['images/serverless-stack-view.png', 'CloudFormation stack view'],
dtype=object)
array(['images/procman-addpost.png', 'Post the blog'], dtype=object)
array(['images/get-post.png', 'Get the blog'], dtype=object)] | docs.aws.amazon.com |
Using additional features
To obtain a custom form adapted to your needs, you can use additional optional features from the list below:
- Defining the currency for creating or updating a token
- Defining a different amount for the first n installments
These features are described in the following chapters. They will allow you to easily build your payment form. | https://docs.lyra.com/en/collect/form-payment/subscription-token/using-additional-features.html | 2021-05-06T12:01:02 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.lyra.com |
<frameset> is used to contain
<frame> elements.
Because the use of frames is now discouraged in favor of <iframe>, this element is not typically used by modern web sites.
Like all other HTML elements, this element supports the global attributes.
cols: Specifies the number and relative widths of the columns in the frameset.
rows: Specifies the number and relative heights of the rows in the frameset.
<frameset cols="50%,50%"> <frame src="" /> <frame src="" /> </frameset>
<frame>
<iframe>
© 2005–2020 Mozilla and individual contributors.
Licensed under the Creative Commons Attribution-ShareAlike License v2.5 or later. | https://docs.w3cub.com/html/element/frameset | 2021-05-06T12:40:51 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.w3cub.com |
Using Git¶
The following helpful git tips, as so many others, are attributed to Ian Schneider and have been stolen from his email.
Git Repository¶
The GeoTools git repository holds the source code for GeoTools:
This repository is setup with the following branches of GeoTools:
Typical Development Environment¶
Typically, a developer will create a local
geotools directory, move into
that directory, and do a checkout of the GeoTools repository. Within the
repository switching between branches is trivial:
[geotools]% git branch 2.7.x 8.x * main [geotools]% git checkout main [geotools]% git checkout 8.x [geotools]% git checkout 2.7.x
Developers generally do most work on a branch, even small scale bug fixes. This is typical with Git since branches are so cheap:
[geotools]% git checkout -b bugfix main [geotools]% git branch 2.7.x 8.x * bugfix main
A developers repository will usually contain several of these “feature branches” but they generally never get pushed into the main/canonical git repository.
Git Workflow¶
Git is unlike version control systems such as Subversion and CVS in that commits are always local. It is not until one pushes local commits up to a repository that they become “public”. The basic workflow of a single change in git is:
Make a change
Stage the change
Commit the change
Push the change
While extra steps may seem cumbersome at first they are actually very handy in many cases. For example consider making a
documentation change.
git status gives the current state of the local repository:
[geotools]% git status # On branch main # Changes not staged for commit: # (use "git add <file>..." to update what will be committed) # (use "git checkout -- <file>..." to discard changes in working directory) # # modified: advanced/build/source.rst # no changes added to commit (use "git add" and/or "git commit -a")
Which reports that we have modified a single file but have yet to stage the file for commit. We use
git add to
stage the file for commit:
[geotools]% git add advanced/build/source.rst [geotools]% git status # On branch main # Changes to be committed: # (use "git reset HEAD <file>..." to unstage) # # modified: advanced/build/source.rst #
Status now reports the file is ready for commit. The syntax of
git commit is identical to that of Subversion:
[geotools]% git commit -m "updating documentation for change to git"
Doing another status reports no more local changes:
[geotools]% git status # On branch main # Your branch is ahead of 'geotools/main' by 1 commit. # nothing to commit (working directory clean)
But also reports that our local branch is ahead of the remote branch by 1 commit. This is because we have yet to push the commit. Before pushing it is always a good idea to first pull in case we have any commits that conflict with other commits that have already been pushed up to the repository:
[geotools]% git pull geotools main [geotools]% git push geotools main
At this point the change has been pushed up to the remote git repository.
Ignoring Files¶
Git uses a special file named
.gitignore that contains a list of files and patterns to ignore:
.project .classpath .settings target *.patch *.class
These settings ignore files not suitable to be committed into the repository such as eclipse project files, compiled class files, and patch files.
Typically a
.gitignore file is located in the root of the repository but such a file can be
contained in any project directory.
Reverting Changes¶
How to rollback a change depends on which stage of the workflow the change is at. For changes that have yet to be staged it is
simply a matter of using
git checkout:
git checkout /path/to/file/to/rollback
If the change has been staged but not yet committed:
git reset HEAD /path/to/file/to/rollback git checkout /path/to/file/to/rollback
If the change has been committed but not pushed it gets interesting. If the change to rollback is at the tip of the branch (i.e. is the most recent commit) you can use git reset:
git reset <previous_commit>
Where
previous_commit is the commit that is directly before the commit to rollback.
If the change has been committed but not pushed and the change is not at the tip of the branch then an interactive rebase can be used:
git rebase -i <previous_commit>
Where
previous_commit is the commit that is directly before the commit to rollback. An interactive rebase provides
an editor that allows us to delete commits from history. With it we can simply delete the commit(s) we wish to revert
and it will be as if it never happened. Again it is important to note that this can only be done on local commits that
have yet to pushed up to a remote repository.
If the change has been committed and pushed up to a remote repository then the only option is to manually roll it back by
applying a revert commit. Thankfully git provides the
git revert command for just this purposes:
git revert <commit>
Where
commit is the commit we wish to roll back. | https://docs.geotools.org/latest/userguide/build/git.html | 2021-05-06T13:35:38 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.geotools.org |
Adding OSX colour tags to files using a Worker action
This document provides the user with guidance on setting up a Worker action (OSX only) to add colour tags to files via a Python script.
As a prerequisite to this exercise you must have Python installed on your workstation.
You will also require a copy of the Python script that will be called by the Worker to tag the files.
The first step is to create an appropriate tag field; in this example user defined field “User1” is configured to read “FileTag”.
Create a pick list for this field; in the example shown the colours “Red”, “Green” and “Blue” were added to the pick list.
Select three media files and assign them each a unique colour FileTag from the pick list that you previously created.
Publish the changes to the server.
Review the files in the Finder utility to check that they have not already been tagged.
Open CatDV Worker and create a new Worker job if one does not exist already.
Configure your server settings if required.
Now create a new Server Query and add job name and description details, click the visible tick box.
On the Server Query tab, under Query Terms add your user defined field from the pick list, in this instance “User 1”, click the “NOT” tick box to the left of it and set the option to the right of it to “is blank”. This is effectively telling the Worker job to ignore all files where the user defined variable is blank.
In the Processing Steps tab add another processing step by clicking the plus button. This will open up another window. Select “Execute Command” from the pick list.
Key in the additional command details in the “Command Line“ window as demonstrated below; where tagfile.py is the Python script which carries out the tagging routine, ${U1} represents User Field 1 and $b points the Worker to the original directory path.
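For reference, a tagging script along these lines can be written in a few lines of Python. The sketch below is illustrative only and is not the exact tagfile.py from the download; it assumes the third-party xattr package is installed, and it takes the tag name and file path as its two arguments, matching the ${U1} and $b values the Worker passes in.

#!/usr/bin/env python3
# Illustrative sketch of a Finder tagging script (not the exact tagfile.py).
# Usage: tagfile.py <TagName> <path-to-file>
import plistlib
import sys

import xattr  # third-party package: pip install xattr

FINDER_TAGS_ATTR = "com.apple.metadata:_kMDItemUserTags"

def set_finder_tags(path, tags):
    # Finder stores its colour tags as a binary plist array of strings
    # inside an extended attribute on the file. This simple version
    # replaces any existing tags rather than appending to them.
    payload = plistlib.dumps(list(tags), fmt=plistlib.FMT_BINARY)
    xattr.setxattr(path, FINDER_TAGS_ATTR, payload)

if __name__ == "__main__":
    tag_name, target_path = sys.argv[1], sys.argv[2]
    set_finder_tags(target_path, [tag_name])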
In the Publish tab set the “Publish to Server” option to Publish changes, set “Media Path = “to last copied/moved file” and set “Extra fields” to the name of the user defined variable. Click OK to go back to the main Edit Configuration window.
In the Watch Actions tab click apply.
The Worker will now process the newly created job and add tags to the files that were selected earlier in this tutorial.
You can verify the changes by opening up the directory with the Finder Utility to see whether the tags were added to the files.
| https://docs.squarebox.com/tutorials/getting-organised/Adding-OSX-colour-tags-to-files-using-a-Worker-action.html | 2021-05-06T13:43:56 | CC-MAIN-2021-21 | 1620243988753.97 | [array(['img/image60.jpeg', 'Screen Shot 2015-07-09 at 16.26.21'],
dtype=object)
array(['img/image61.jpeg', 'Screen Shot 2015-07-09 at 16.26.31'],
dtype=object)
array(['img/image62.jpeg', 'Screen Shot 2015-07-09 at 16.26.29'],
dtype=object)
array(['img/image63.jpeg', 'Screen Shot 2015-07-09 at 16.28.27'],
dtype=object)
array(['img/image64.jpeg', 'Screen Shot 2015-07-09 at 16.27.30'],
dtype=object)
array(['img/image65.jpeg', 'Screen Shot 2015-07-09 at 16.29.04'],
dtype=object)
array(['img/image66.jpeg', 'Screen Shot 2015-07-09 at 16.29.12'],
dtype=object)
array(['img/image67.jpeg', 'Screen Shot 2015-07-09 at 16.29.37'],
dtype=object)
array(['img/image68.jpeg', 'Screen Shot 2015-07-09 at 16.29.49'],
dtype=object)
array(['img/image69.jpeg', 'Screen Shot 2015-07-09 at 16.29.57'],
dtype=object)
array(['img/image70.png', 'Execute command eg'], dtype=object)
array(['img/image71.jpeg', 'Screen Shot 2015-07-09 at 16.30.24'],
dtype=object)
array(['img/image72.jpeg', 'Screen Shot 2015-07-09 at 16.30.49'],
dtype=object)
array(['img/image73.jpeg', 'Screen Shot 2015-07-09 at 16.31.04'],
dtype=object)
array(['img/image74.jpeg', 'Screen Shot 2015-07-09 at 16.31.12'],
dtype=object) ] | docs.squarebox.com |
# Configuration reference
# Sample configuration file
The sample configuration file shows all possible configuration options with their defaults for the
ymir.yml file:
id: 1
name: project-name
type: wordpress
environments:
    environment-name:
        build:
            commands: []
            include: []
        cdn:
            caching: enabled
            cookies_whitelist: ['comment_*', 'wordpress_*', 'wp-settings-*']
            default_expiry: 300
            excluded_paths: ['/wp-admin/*', '/wp-login.php']
            invalidate_paths: []
        concurrency: 10
        cron: true
        database:
            server: database-server
            name: database-name
            user: database-user
        domain: []
        log_retention_period: 7
        network: network-name
        php: 7.4
        warmup: true
        website:
            memory: 256
            timeout: 30
        console:
            memory: 1028
            timeout: 60
# Project configuration
Project configuration options are global to a project and apply to all environments.
# id
type:
int required
The ID of the project in Ymir.
# name
type:
string required
The name of the project.
# type
type:
string required
The project type.
The possible values are:
bedrockfor Bedrock (opens new window) projects
wordpressfor WordPress projects
# environments
type:
array required
All the project environments. Configuration values for environments are covered in the next section.
# Environment configuration
Each key under the
environments option is the name of the environment. Everything under that key is the configuration options for that environment.
# build
type:
array
This is the array of values to configure the environment build. If the build value is a simple array, it'll be used as the
commands value.
# commands
type:
array
This is an array of build commands that the Ymir CLI will run when building your project. These commands are executed on the computer performing the build and not on the Ymir platform. If your build commands generate files, they'll get packaged along with the rest of your project files during deployment.
# include
type:
array
This is an array of paths that you want to include in the environment build artifact. By default, Ymir will only add
.mo and
.php files to the build artifact to ensure a small build artifact. This lets you specify additional files or folders to add the build artifact besides the defaults.
# cdn
type:
array | string
This is the array of values to configure the environment's CloudFront distribution. If the
cdn value is a string, it'll be used as the
caching value.
# caching
type:
string default:
enabled
This option controls the CloudFront distribution.
The possible values are:
enabledenables CloudFront distribution caching
assetsonly caches assets
disableddisables CloudFront distribution caching
Provisioning delay
Switching the
caching value to
enabled can cause your WordPress site to not load certain assets while the CloudFront distribution updates. This process can take as long as 40 minutes.
Full CloudFront caching disabled with REST API
If you have the
caching set to
enabled and the
gateway option set to
rest, the caching level will be downgraded to
assets automatically. That's because REST APIs already have CloudFront page caching by default.
# cookies_whitelist
type:
array default:
['comment_*', 'wordpress_*', 'wp-settings-*']
The list of cookies that are ignored by CloudFront and always forwarded to your WordPress site. Supports
* wildcard character.
# default_expiry
type:
int default:
300
The default time (in seconds) that CloudFront will keep something cached.
# excluded_paths
type:
array default:
['/wp-admin/*', '/wp-login.php']
The list of paths that are ignored by CloudFront and always forwarded to your WordPress site. Supports
* wildcard character.
Works with uploads directory
By default, CloudFront caches files in the
/uploads directory for 24h. But some plugins use the
/uploads directory to store dynamic files since it's the only writeable directory on a server. You can add these directories to have CloudFront exclude them from the cache.
Tailored to all project types
The project
type will change default paths for non-WordPress projects. So you don't need to edit this for
bedrock projects.
# invalidate_paths
type:
array
The list of paths that are cleared from the CloudFront distribution cache during the project deployment. Supports
* wildcard character.
# concurrency
type:
int | false default:
10
This option controls the maximum number of
website Lambda functions that can exist at the same time. (AWS calls this reserved concurrency (opens new window).) Setting this option to
false removes the limit and allows unrestricted scaling.
Overwhelming your database server
If your
concurrency value is too high or disabled, your database server could get overwhelmed when a traffic spike hits your WordPress site. If this happens, you'll want to increase the capacity of your database server.
# cron
type:
int | false default:
1
The interval (in minutes) that WP-Cron (opens new window) gets called by CloudWatch. Also controls the
DISABLE_WP_CRON constant. If set to
false, it disables the CloudWatch rule and renables the standard WP-Cron behaviour.
# database
type:
array | string
This is the array of values to configure the environment's database. If the
database value is a string, it'll be used as the
server value.
# server
type:
string
The database server used by the WordPress site. It can be the name of the database server if it's managed by Ymir or the host name of the database server otherwise.
# name
type:
string default:
wordpress
The name of the database used by the WordPress site.
# user
type:
string
The user used by the WordPress site to connect to the database server.
# domain
type:
array
The list of domain names mapped to the environment.
Domain name requirement
All domains names must either be managed by a DNS zone or have an issued certificate in the project region.
# gateway
type:
string
The gateway type used by the environment. Allowed values are
http for HTTP APIs and
rest for REST APIs.
DNS changes when switching gateway types
Whenever you switch gateway types, the DNS records pointing to your environment will change. If Ymir manages the DNS zone used by your environment, it'll update your DNS records automatically. Otherwise, you will have to do it yourself. That said, even with a managed DNS zone, your environment will be briefly unavailable while the DNS changes propagate.
CloudFront page caching disabled with REST API.
When using a REST API, it isn't possible to use CloudFront for page caching. That's because the REST API already caches response using CloudFront. If your CloudFront caching is set to
enabled, it'll get downgraded to
assets automatically.
# log_retention_period
type:
int default:
7
Controls the duration (in days) that the environment's logs are retained in CloudWatch. Allowed values are 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827 and 3653.
# network
type:
string
The network to use with your environment.
Overrides private database configuration
If your project environment uses a private database, Ymir will automatically connect your environment Lambda functions to the database's private network. However, setting the
network option will override this. So if you're using a private database, it's important it be accessible from the configured
network.
Can create a NAT gateway
If the configured
network doesn't have a NAT gateway, a NAT gateway will be configured during deployment. A NAT gateway costs about $32/month billed by the hour.
# php
type:
string default:
7.4
The PHP version used by the environment. The supported versions are
7.2,
7.3 and
7.4.
# warmup
type:
int | false default:
1
The number of
website functions that CloudWatch will keep warmed up. If set to
false, it disables the warming up CloudWatch rule.
# console / website
type:
array
This is the array of values to configure the environment's Lambda functions. There are two function nodes:
console and
website.
console is the function used when running WP-CLI commands or any other background tasks.
website is the function connected to the API gateway and that handles all the HTTP traffic.
# memory
type:
int default:
256 for
website and
1024 for
console
The amount of memory used by the Lambda function. The
website function has lower memory needs by default because most memory heavy tasks are handled by the
console function. Both default memory values are the lowest amount possible. If you lower them more, there might be issues during your function execution.
Low memory termination
If a function goes over the memory limit during its execution, it gets terminated automatically. So it's important to configure to give it enough memory to execute the worst case scenarios.
Memory cost
Lambda charges based on the configured memory used. So more memory, means a higher Lambda bill. This isn't much of a concern for the
console function since it won't get called a lot by default. But it's something to keep in mind when configuring the amount of memory that the
website function has.
# timeout
type:
int default:
30 for
website and
60 for
console
The maximum amount of time (in seconds) that the Lambda function can run before Lambda terminates it. The maximum allowed values depend on the function type.
website cannot have a timeout larger than 30 seconds.
console can have a maximum timeout of 900 seconds (15 minutes).
API gateway timeout
The 30 second timeout limit for the
website function is due to a limit with AWS API gateways. An API gateway will terminate a connection after 30 seconds. This isn't modifiable.
This can be a significant technical hurdle if your WordPress site has long running operations that take more than 30 seconds to complete. In that scenario, these long operations need to be offloaded to a WP-CLI command or some external service. | https://docs.ymirapp.com/reference/configuration.html | 2021-05-06T13:50:54 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.ymirapp.com |
Modifying Site Properties
To modify the properties of a Site:
Do either of the following:
Right-click the Site in the Sites section of the Sidebar and select Properties from the pop-up menu.
Click Sites in the Sidebar, select the Site in the list on the right, and click Edit in the Module menu.
Edit the Site name and description, if needed.
NOTE: You cannot edit the tag of an existing Site.
The Inventory Server field displays the Inventory Server associated with the Site. When there are more than one server registered in the database, you can associate the Site with another server available in the drop-down list, if needed.
On the Audit Sources tab, manage the audit sources by using the following methods:
To add a new source, click New, then select the audit method for the new source. Fill out the audit source properties (for details, see Configuring Direct Network Scan Audit Sources, Configuring Network Folder Audit Sources, Configuring E-mail Audit Sources, Configuring FTP Audit Sources and Configuring Portable Audit Sources) and click OK. Alternatively, you can create a copy of an existing audit source: select the source to copy and click Actions > Copy , then edit the source details as needed, and click OK.
To edit an existing audit source, select the source in the list and click Edit. Edit the audit source properties as needed and click OK.
To delete an audit source, select it in the list and click the Delete Selected Item(s) icon.
On the Excluded Nodes tab, manage the list of excluded nodes. For details, see Defining Excluded Nodes.
Click OK. | https://docs.alloysoftware.com/alloydiscovery/8/help/using-alloy-discovery/modifying-site-properties.htm | 2021-05-06T13:12:59 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.alloysoftware.com |
Different sunshade designs are available and downloadable for free on the Airbot Systems website. These files are STL and can be printed on any 3D printer.
Version 1:
- Basic sunshade short version
- Basic sunshade long version
Version 1 can be attached to the Herelink by using thin strips of double-sided foam tape or thin strips of Velcro hooks (to make it removable).
Version 2:
- Self-tight sunshade short version
- Self-tight sunshade long version
This version is an improved one on which two brackets have been added. This makes it quick and easy to remove by using a 1/4 screw.
Manage X509 certificates
New in version 2015.8.0.
M2Crypto
salt.modules.x509.
create_certificate(path=None, text=False, overwrite=True, ca_server=None, **kwargs)¶
Create an X509 certificate.
Path to write the certificate to.
If
True, return the PEM text without writing to a file.
Default
False.
If
True (default), create_certificate will overwrite the entire PEM
file. Set False to preserve existing private keys and dh params that
may exist in the PEM file.
Any of the properties below can be included as additional keyword arguments.
Request a remotely signed certificate from ca_server. For this to work, a signing_policy must be configured on the ca_server (see the signing_policy argument below for details). Also, the salt master must permit peers to call the sign_remote_certificate function.
Example:
/etc/salt/master.d/peer.conf
peer:
  .*:
    - x509.sign_remote_certificate
Any of the values below can be included to set subject properties Any other subject properties supported by OpenSSL should also work.
2 letter Country code
Certificate common name, typically the FQDN.
Given Name
Locality
Organization
Organization Unit
SurName
State or Province
A path or string of the private key in PEM format that will be used
to sign this certificate. If neither
public_key,
or
csr are included, it will be assumed that this is a self-signed
certificate, and the public key matching signing_private_key will
be used to create the certificate.
Passphrase used to decrypt the signing_private_key.
A certificate matching the private key that will be used to sign this certificate. This is used to populate the issuer values in the resulting certificate. Do not include this value for self-signed certificates.
The public key to be included in this certificate. This can be sourced
from a public key, certificate, CSR or private key. If a private key
is used, the matching public key from the private key will be
generated before any processing is done. This means you can request a
certificate from a remote CA using a private key file as your
public_key and only the public key will be sent across the network to
the CA. If neither
public_key or
csr are specified, it will be
assumed that this is a self-signed certificate, and the public key
derived from signing_private_key will be used. Specify either
public_key or
csr, not both. Because you can input a CSR as a
public key or as a CSR, it is important to understand the difference.
If you import a CSR as a public key, only the public key will be added
to the certificate, subject or extension information in the CSR will
be lost.
If the public key is supplied as a private key, this is the passphrase used to decrypt it.
A file or PEM string containing a certificate signing request. This will be used to supply the subject, extensions and public key of a certificate. Any subject or extensions specified explicitly will overwrite any in the CSR.
X509v3 Basic Constraints extension.
The following arguments set X509v3 Extension values. If the value
starts with
critical, the extension will be marked as critical.
Some special extensions are
subjectKeyIdentifier and
authorityKeyIdentifier.
subjectKeyIdentifier can be an explicit value or it can be the
special string
hash.
hash will set the subjectKeyIdentifier
equal to the SHA1 hash of the modulus of the public key in this
certificate. Note that this is not the exact same hashing method used
by OpenSSL when using the hash value.
authorityKeyIdentifier Use values acceptable to the openssl CLI
tools. This will automatically populate
authorityKeyIdentifier
with the
subjectKeyIdentifier of signing_cert. If this is a
self-signed cert these values will be the same.
X509v3 Basic Constraints
X509v3 Key Usage
X509v3 Extended Key Usage
X509v3 Subject Key Identifier
X509v3 Issuer Alternative Name
X509v3 Subject Alternative Name
X509v3 CRL Distribution Points
X509v3 Issuing Distribution Point
X509v3 Certificate Policies
X509v3 Policy Constraints
X509v3 Inhibit Any Policy
X509v3 Name Constraints
X509v3 OCSP No Check
Netscape Comment
Netscape Certificate Type
The number of days this certificate should be valid. This sets the
notAfter property of the certificate. Defaults to 365.
The version of the X509 certificate. Defaults to 3. This is
automatically converted to the version value, so
version=3
sets the certificate version field to 0x2.
The serial number to assign to this certificate. If omitted a random
serial number of size
serial_bits is generated.
The number of bits to use when randomly generating a serial number. Defaults to 64.
The hashing algorithm to be used for signing this certificate. Defaults to sha256.
An additional path to copy the resulting certificate to. Can be used to maintain a copy of all certificates issued for revocation purposes.
If set to True, the CN and a dash will be prepended to the copypath's filename.
/etc/pki/issued_certs/
A signing policy that should be used to create this certificate.
Signing policies should be defined in the minion configuration, or in
a minion pillar. It should be a YAML formatted list of arguments
which will override any arguments passed to this function. If the
minions key is included in the signing policy, only minions
matching that pattern (see match.glob and match.compound) will be
permitted to remotely request certificates from that policy.
In order for match.compound to work, the salt master must permit peers to call it.
Example:
/etc/salt/master.d/peer.conf
peer:
  .*:
    - match.compound
Example:
x509_signing_policies:
  www:
    - minions: 'www*'
    - signing_private_key: /etc/pki/ca.key
    - signing_cert: /etc/pki/ca.crt
    - C: US
    - ST: Utah
    - L: Salt Lake City
    - basicConstraints: "critical CA:false"
    - keyUsage: "critical cRLSign, keyCertSign"
    - subjectKeyIdentifier: hash
    - authorityKeyIdentifier: keyid,issuer:always
    - days_valid: 90
    - copypath: /etc/pki/issued_certs/
The above signing policy can be invoked by passing signing_policy=www.
CLI Example:
salt '*' x509.create_certificate path=/etc/pki/myca.crt signing_private_key='/etc/pki/myca.key' csr='/etc/pki/myca.csr'
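The same call can also be driven from the salt master through the Python client API instead of the CLI. The sketch below is a hedged example: the minion IDs, file paths, CN and the 'www' policy name are placeholders that assume a signing policy like the one shown above is configured on the CA minion.

# Hedged example -- adjust minion IDs, paths and the policy name for your setup.
import salt.client

local = salt.client.LocalClient()
result = local.cmd(
    "www1",                          # minion that should receive the certificate
    "x509.create_certificate",
    kwarg={
        "path": "/etc/pki/www1.crt",
        "ca_server": "ca",           # minion configured with the signing policy
        "signing_policy": "www",
        "public_key": "/etc/pki/www1.key",
        "CN": "www.example.com",
    },
)
print(result)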
salt.modules.x509.
create_crl(path=None, text=False, signing_private_key=None, signing_private_key_passphrase=None, signing_cert=None, revoked=None, include_expired=False, days_valid=100, digest='')¶
Create a CRL
PyOpenSSL Python module
Path to write the CRL to.
If
True, return the PEM text without writing to a file.
Default
False.
A path or string of the private key in PEM format that will be used to sign the CRL. This is required.
Passphrase to decrypt the private key.
A certificate matching the private key that will be used to sign the CRL. This is required.
A list of dicts containing all the certificates to revoke. Each dict
represents one certificate. A dict must contain either the key
serial_number with the value of the serial number to revoke, or
certificate with either the PEM encoded text of the certificate,
or a path to the certificate to revoke.
The dict can optionally contain the
revocation_date key. If this
key is omitted, the revocation date will be set to now. It should be a
string in the format "%Y-%m-%d %H:%M:%S".
The dict can also optionally contain the
not_after key. This is
redundant if the
certificate key is included. If the
Certificate key is not included, this can be used for the logic
behind the
include_expired parameter. It should be a string in
the format "%Y-%m-%d %H:%M:%S".
The dict can also optionally contain the
reason key. This is the
reason code for the revocation. Available choices are
unspecified,
keyCompromise,
CACompromise,
affiliationChanged,
superseded,
cessationOfOperation and
certificateHold.
Include expired certificates in the CRL. Default is
False.
The number of days that the CRL should be valid. This sets the Next Update field in the CRL.
The digest to use for signing the CRL. This has no effect on versions of pyOpenSSL less than 0.14
CLI Example:
salt '*' x509.create_crl path=/etc/pki/mykey.key \
    signing_private_key=/etc/pki/ca.key \
    signing_cert=/etc/pki/ca.crt \
    revoked="{'compromized-web-key': {'certificate': '/etc/pki/certs/www1.crt', 'revocation_date': '2015-03-01 00:00:00'}}"
salt.modules.x509.
create_csr(path=None, text=False, **kwargs)¶
Create a certificate signing request.
Path to write the certificate to.
If
True, return the PEM text without writing to a file.
Default
False.
The hashing algorithm to be used for signing this request. Defaults to sha256.
The subject, extension and version arguments from
x509.create_certificate
can be used.
CLI Example:
salt '*' x509.create_csr path=/etc/pki/myca.csr public_key='/etc/pki/myca.key' CN='My Cert'
salt.modules.x509.
create_private_key(path=None, text=False, bits=2048, passphrase=None, cipher='aes_128_cbc', verbose=True)¶
Creates a private key in PEM format.
The path to write the file to, either
path or
text
are required.
If
True, return the PEM text without writing to a file.
Default
False.
Length of the private key in bits. Default 2048
Passphrase for encrypting the private key
Cipher for encrypting the private key. Has no effect if passphrase is None.
Provide visual feedback on stdout. Default True
New in version 2016.11.0.
CLI Example:
salt '*' x509.create_private_key path=/etc/pki/mykey.key
salt.modules.x509.
expired(certificate)¶
Returns a dict containing limited details of a certificate and whether the certificate has expired.
New in version 2016.11.0.
The certificate to be read. Can be a path to a certificate file, or a string containing the PEM formatted text of the certificate.
CLI Example:
salt '*' x509.expired "/etc/pki/mycert.crt"
salt.modules.x509.
get_pem_entries(glob_path)¶
Returns a dict containing PEM entries in files matching a glob
A path to certificates to be read and returned.
CLI Example:
salt '*' x509.get_pem_entries "/etc/pki/*.crt"
salt.modules.x509.
get_pem_entry(text, pem_type=None)¶
Returns a properly formatted PEM string from the input text fixing any whitespace or line-break issues
Text containing the X509 PEM entry to be returned or path to a file containing the text.
If specified, this function will only return a pem of a certain type, for example 'CERTIFICATE' or 'CERTIFICATE REQUEST'.
CLI Example:
salt '*' x509.get_pem_entry "-----BEGIN CERTIFICATE REQUEST-----MIICyzCC Ar8CAQI...-----END CERTIFICATE REQUEST"
salt.modules.x509.
get_private_key_size(private_key, passphrase=None)¶
Returns the bit length of a private key in PEM format.
A path or PEM encoded string containing a private key.
CLI Example:
salt '*' x509.get_private_key_size /etc/pki/mycert.key
salt.modules.x509.
get_public_key(key, passphrase=None, asObj=False)¶
Returns a string containing the public key in PEM format.
A path or PEM encoded string containing a CSR, Certificate or Private Key from which a public key can be retrieved.
CLI Example:
salt '*' x509.get_public_key /etc/pki/mycert.cer
salt.modules.x509.
get_signing_policy(signing_policy_name)¶
Returns the details of a names signing policy, including the text of the public key that will be used to sign it. Does not return the private key.
CLI Example:
salt '*' x509.get_signing_policy www
salt.modules.x509.
read_certificate(certificate)¶
Returns a dict containing details of a certificate. Input can be a PEM string or file path.
The certificate to be read. Can be a path to a certificate file, or a string containing the PEM formatted text of the certificate.
CLI Example:
salt '*' x509.read_certificate /etc/pki/mycert.crt
salt.modules.x509.
read_certificates(glob_path)¶
Returns a dict containing details of all certificates matching a glob
A path to certificates to be read and returned.
CLI Example:
salt '*' x509.read_certificates "/etc/pki/*.crt"
salt.modules.x509.
read_crl(crl)¶
Returns a dict containing details of a certificate revocation list. Input can be a PEM string or file path.
OpenSSL command line tool
A path or PEM encoded string containing the CRL to read.
CLI Example:
salt '*' x509.read_crl /etc/pki/mycrl.crl
salt.modules.x509.
read_csr(csr)¶
Returns a dict containing details of a certificate request.
OpenSSL command line tool
A path or PEM encoded string containing the CSR to read.
CLI Example:
salt '*' x509.read_csr /etc/pki/mycert.csr
salt.modules.x509.
sign_remote_certificate(argdic, **kwargs)¶
Request a certificate to be remotely signed according to a signing policy.
A dict containing all the arguments to be passed into the create_certificate function. This will become kwargs when passed to create_certificate.
kwargs delivered from publish.publish
CLI Example:
salt '*' x509.sign_remote_certificate argdic="{'public_key': '/etc/pki/', 'signing_policy': 'www'}" __pub_id='www1'
salt.modules.x509.
verify_crl(crl, cert)¶
Validate a CRL against a certificate. Parses openssl command line output, this is a workaround for M2Crypto's inability to get them from CSR objects.
The CRL to verify
The certificate to verify the CRL against
CLI Example:
salt '*' x509.verify_crl crl=/etc/pki/myca.crl cert=/etc/pki/myca.crt
salt.modules.x509.
verify_private_key(private_key, public_key, passphrase=None)¶
Verify that 'private_key' matches 'public_key'
The private key to verify, can be a string or path to a private key in PEM format.
The public key to verify, can be a string or path to a PEM formatted certificate, CSR, or another private key.
Passphrase to decrypt the private key.
CLI Example:
salt '*' x509.verify_private_key private_key=/etc/pki/myca.key \
    public_key=/etc/pki/myca.crt
salt.modules.x509.
verify_signature(certificate, signing_pub_key=None, signing_pub_key_passphrase=None)¶
Verify that
certificate has been signed by signing_pub_key.
The certificate to verify. Can be a path or string containing a PEM formatted certificate.
The public key to verify, can be a string or path to a PEM formatted certificate, CSR, or private key.
Passphrase to the signing_pub_key if it is an encrypted private key.
CLI Example:
salt '*' x509.verify_signature /etc/pki/mycert.pem \
    signing_pub_key=/etc/pki/myca.crt
salt.modules.x509.
will_expire(certificate, days)¶
Returns a dict containing details of a certificate and whether the certificate will expire in the specified number of days. Input can be a PEM string or file path.
New in version 2016.11.0.
The certificate to be read. Can be a path to a certificate file, or a string containing the PEM formatted text of the certificate.
CLI Example:
salt '*' x509.will_expire "/etc/pki/mycert.crt" days=30
salt.modules.x509.
write_pem(text, path, overwrite=True, pem_type=None)¶
Writes out a PEM string fixing any formatting or whitespace issues before writing.
PEM string input to be written out.
Path of the file to write the PEM out to.
If
True (default), write_pem will overwrite the entire PEM file.
Set False to preserve existing private keys and dh params that may
exist in the PEM file.
The PEM type to be saved, for example
CERTIFICATE or
PUBLIC KEY. Adding this will allow the function to take
input that may contain multiple PEM types.
CLI Example:
salt '*' x509.write_pem "-----BEGIN CERTIFICATE-----MIIGMzCCBBugA..." path=/etc/pki/mycert.crt | https://docs.saltproject.io/en/latest/ref/modules/all/salt.modules.x509.html | 2021-05-06T13:46:07 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.saltproject.io |
How to set a custom profile photo and bio text
Instagram does not provide a profile photo and bio text for Personal accounts.
Since we believe that this shouldn't limit the use of Spotlight's features, you can upload your own photo and add a custom bio in Spotlight itself to match your Instagram profile.
Account-level
At the account level, found by going to Spotlight > Settings > Accounts and clicking on the account name or the info icon button, you can set up a custom bio and profile photo for the account. This will apply to all feeds where the account is used, but it can also be overridden per feed (see below).
When connecting a Personal account for the first time you will also see a prompt to set up these custom details which looks like this:
You can click on "No" and do this later as explained above. If you click "Yes", it proceeds to the Account details modal as seen below:
From here, use the Change profile picture and Edit bio options to create your custom profile. This can be made to match your Instagram account or can be something completely different just for your site.
Note: The custom bio and profile photo applied in Spotlight do not have any effect on those used in your official Instagram account. They will not override your Instagram account's details. The custom bio and profile photo will only appear in Spotlight feeds on your website.
Here's a look at the Account details modal with custom details set up to be used only in Spotlight feeds:
Feed-level
When creating or editing a feed, you have the option to override the original bio and profile photo, as well as the custom ones set in the Spotlight account, for just that feed.
This can be useful when creating feeds to, for example, display posts featuring a particular product or location. You can change the profile details to match the feed's topic to make it look more consistent.
These feed-level custom bio options can be set in the Header options of the Design step. | https://docs.spotlightwp.com/article/624-custom-profile-photo-and-bio-text | 2021-05-06T11:51:45 | CC-MAIN-2021-21 | 1620243988753.97 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/59de4aa92c7d3a40f0ed5f1e/images/5efca4802c7d3a10cba9dad8/file-RkQ82Tj0qQ.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/59de4aa92c7d3a40f0ed5f1e/images/5efca4882c7d3a10cba9dad9/file-b7hTErHJ49.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/59de4aa92c7d3a40f0ed5f1e/images/5efca48f2c7d3a10cba9dada/file-n8YYr1WnOB.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/59de4aa92c7d3a40f0ed5f1e/images/5efca4972c7d3a10cba9dadc/file-dZuXY8Iqi7.png',
None], dtype=object) ] | docs.spotlightwp.com |
Changelog
Version 1.4.11
Feb 11, 2021
- Fixed missing attachments block for guest customers when customer group restriction config was saved with empty value.
Version 1.4.10
Jan 13, 2021
- Restrict order attachment for specific customer groups.
- Convert section “Swissup Checkout” into item “Checkout” under section “Swissup” at System Config page.
Version 1.4.9
Dec 28, 2020
- Restrict order attachment for specific customer groups.
- Remove not needed index.
- Fixed php error on frontend.
Version 1.4.5
Oct 30, 2020
- Fixed not working search field in backend order attachments grid.
Version 1.3.11
Oct 19, 2020
- Fixed missing “Attachments” tab at admin order view page.
Version 1.4.3
Aug 5, 2020
- Magento 2.3.5 and 2.4.0 compatibilty (Email was missing attachments block).
- Added order number to the backend grid.
Version 1.4.0
Jan 8, 2020
- Fixed serialize error in full page cache module when downloading attachment
- 16 locales added to translate backend and frontend phrases:
- Arabic
- Chinese
- Dutch
- Hebrew
- Italian
- German
- Japanese
- Norwegian
- Korean
- Polish
- Portuguese
- Russian
- Spanish
- Swedish
- Ukrainian
Version 1.3.4
Oct 16, 2019
- Remove direct ‘jquery/ui’ dependency to improve js performance.
Version 1.3.3
Jun 4, 2019
- Fixed broken order view page when Amasty_RewardPoints is installed
Version 1.3.2
May 24, 2019
- Fixed invalid data in the grid after filters reset.
Version 1.3.0
Mar 21, 2019
- RTL languages support added
Version 1.3.0
- Added ability to show attachments at the CheckoutSuccess page
Version 1.1.1
- Fixed not working upload functionality on slow internet connections
Version 1.1.0
- Magento EQP errors fixed
- Duplicated code removed
- Small composer.json fixes
Version 1.0.7
- Magento 2.2 compatibility
Version 1.0.6
- Fixed backend download and preview action urls
Version 1.0.5
- Fixed possible js error inside uploaded files loop
Version 1.0.4
- Acl fixes
Version 1.0.3
- Magento 2.0.x compatibility added | https://docs.swissuplabs.com/m2/extensions/order-attachments/changelog/ | 2021-05-06T11:55:08 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.swissuplabs.com |
In a strange series of circumstances, you can end up with an Enterprise Agent showing 0% loss, without the path visualization showing connectivity to the destination.
There is a disconnect between the loss statistics of the Enterprise Agent (showing 0% loss), and the gateway for the Enterprise Agent (showing 100% loss).
This occurs specifically when a Enterprise Agent is connected to the internet behind an Apple Airport gateway, running an up to date firmware revision. This happens due to the fact that Apple Airport routers rewrite the IP header on outbound packets to ensure their routability back to the host which generated them. This disrupts the flow of packets from ThousandEyes, which uses the Identifier field to identify packet sequence.
The example below shows the path visualization of a test which is 100% successful. Note the agent is green, and shows 0% loss, and the endpoint (showing 100% loss) is an Apple Airport device (which is the default gateway for the agent), with 100% loss.
We are working on a workaround for these specific circumstances, but for now, any Enterprise Agent which shows as green, and indicates 0% loss, with a route which terminates on the edge (or gateway) which is an Apple Airport device can be ignored as a troubleshooting point. The fact that we are showing 100% loss at the edge is an indicator of this behavior, rather than an indicator of loss. | https://docs.thousandeyes.com/product-documentation/global-vantage-points/enterprise-agents/troubleshooting/agent-unable-to-trace-path-to-destination | 2021-05-06T12:03:58 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.thousandeyes.com |
Contact block: Get customer input
Description
It plays a prompt to get a response from the customer. For example, "For Sales, press one. For Support, press two."
When customers enter DTMF input (touch-tone keypad or telephone input), the prompt is interruptible.
When an Amazon Lex bot plays a voice prompt, customers can interrupt it with their voice. To set this up, use the
barge-in-enabled session attribute.
It then branches based on the customer's input.
This block works for chat only when Amazon Lex is used.
Contact flow types
You can use this block in the following contact flow types:
Inbound contact flow
Customer queue flow
Transfer to Agent flow
Transfer to Queue flow
Properties
You can configure this block to accept DTMF input, a chat response, or an Amazon Lex intent.
DTMF tab properties
Audio prompt: Select from a list of default audio prompts, or upload your own audio prompt.
Set timeout: Specify how long to wait while the user decides how they want to respond to the prompt. The maximum timeout you can set is 179 seconds.
Amazon Lex tab properties
Lex bot properties: After you create your Lex bot, enter the name and alias of the bot here. Only published bots appear in the drop-down list.
Important
In a production environment, always use a different alias than $LATEST. Only use $LATEST for manual testing. If the bot alias is $LATEST Lex invocations are throttled. For more information, see Runtime Service Quotas.
Session attributes: Specify attributes that apply to the current contact's session only.
Configurable time-outs for voice input
To configure time-out values for voice contacts, use the following session attributes in the Get customer input block that calls your Lex bot. These attributes allow you to specify how long to wait for the customer to finish speaking before Amazon Lex collects speech input from callers, such as answering a yes/no question, or providing a date or credit card number.
Max Speech Duration
x-amz-lex:max-speech-duration-ms:[intentName]:[slotToElicit]
How long the customer speaks before the input is truncated and returned to Amazon Connect. You can increase the time when a lot of input is expected or you want to give customers more time to provide information.
Default = 13000 milliseconds (13 seconds). The maximum allowed value is 15000 milliseconds.
Important
If you set Max Speech Duration to more than 15000 milliseconds, the contact is routed down the Error branch.
Start Silence Threshold
x-amz-lex:start-silence-threshold-ms:[intentName]:[slotToElicit]
How long to wait before assuming that the customer isn't going to speak. You can increase the allotted time in situations where you’d like to allow the customer more time to find or recall information before speaking. For example, you might want to give customers more time to get out their credit card so they can enter the number.
Default = 4000 milliseconds (4 seconds).
End Silence Threshold
x-amz-lex:end-silence-threshold-ms:[intentName]:[slotToElicit]
How long to wait after the customer stops speaking before assuming the utterance has concluded. You can increase the allotted time in situations where periods of silence are expected while providing input.
Default = 600 milliseconds (0.6 seconds)
Barge-in configuration and usage for Amazon Lex
You can allow customers to interrupt the Amazon Lex bot mid-sentence using their voice, without waiting for it to finish speaking. Customers familiar with choosing from a menu of options, for example, can now do so without having to listen to the entire prompt.
To set up this functionality, set the
barge-in-enabled session
attribute in the Get customer input block that calls your
Lex bot. This attribute only controls Amazon Lex barge-in; it doesn't control DTMF
barge-in.
Barge-in
x-amz-lex:barge-in-enabled:[intentName]:[slotToElicit]
Barge-in is disabled globally by default. You must set the session attribute to enable it at the global, bot, or slot levels. For more information, see How to use Lex session attributes.
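For example, to turn on voice barge-in for every bot and every slot, you could add the following session attribute to the Get customer input block. The attribute name follows the pattern above; the value shown (true) is an assumption based on the attribute acting as an on/off flag, so confirm it against the current Amazon Connect documentation.
Name = x-amz-lex:barge-in-enabled:*:*
Value = true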
Configurable fields for DTMF input
Use the following session attributes to specify how your Lex bot responds to DTMF input.
End character
x-amz-lex:dtmf:end-character:[IntentName]:[SlotName]
The DTMF end character that ends the utterance.
Default = #
Deletion character
x-amz-lex:dtmf:deletion-character:[IntentName]:[SlotName]
The DTMF character that clears the accumulated DTMF digits and ends the utterance.
Default = *
End timeout
x-amz-lex:dtmf:end-timeout-ms:[IntentName]:[SlotName]
The idle time (in milliseconds) between DTMF digits to consider the utterance as concluded.
Default = 5000 milliseconds (5 seconds)
Maximum number of allowed DTMF digits per utterance
x-amz-lex:dtmf:max-length:[IntentName]:[SlotName]
The maximum number of DTMF digits allowed in a given utterance. This cannot be increased.
Default = 1024 characters
For more information, see How to use Lex session attributes.
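For example, to give callers ten seconds of idle time between digits while they enter a value for the AccountNumber slot of a GetAccount intent (both names here are hypothetical placeholders for your own bot), you could set:
Name = x-amz-lex:dtmf:end-timeout-ms:GetAccount:AccountNumber
Value = 10000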
Intents
Enter the intents you created in Amazon Lex. They are case sensitive!
Configuration tips
When you use text, either for text-to-speech or chat, you can use a maximum of 3,000 billed characters (6,000 total characters).
Amazon Lex bots support both spoken utterances and keypad input when used in a contact flow.
For both voice and DTMF, there can be only one set of session attributes per conversation. Following is the order of precedence:
Lambda provided session attributes: Overrides to session attributes during customer Lambda invocation.
Amazon Connect console provided session attributes: Defined in the Get customer input block.
Service defaults: These are used only if no attributes are defined.
You can prompt contacts to end their input with a pound key # and to cancel it using the star key *. When you use a Lex bot, if you don't prompt customers to end their input with #, they will end up waiting five seconds for Lex to stop waiting for additional key presses. It's not possible to configure Lex to wait a shorter length of time.
To control time-out functionality, you can use Lex session attributes in this block, or set them in your Lex Lambda function. If you choose to set the attributes in a Lex Lambda function, the default values are used until the Lex bot is invoked. For more information, see Using Lambda Functions in the Amazon Lex Developer Guide.
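As a rough illustration, a Lex (classic/V1) Lambda function can pass these attributes back in its response. The sketch below is a minimal Python example; the GetAccount intent and AccountNumber slot are hypothetical placeholders, and the exact event and response shapes should be checked against the Amazon Lex Developer Guide for the Lex version you use.

def lambda_handler(event, context):
    # Keep any session attributes that are already set so they are not lost.
    session_attributes = event.get("sessionAttributes") or {}

    # Give callers more time to finish speaking an account number (15000 ms is the allowed maximum).
    session_attributes["x-amz-lex:max-speech-duration-ms:GetAccount:AccountNumber"] = "15000"

    return {
        "sessionAttributes": session_attributes,
        "dialogAction": {
            "type": "Delegate",
            "slots": event["currentIntent"]["slots"],
        },
    }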
When you specify one of the session attributes described in this article, you can use wildcards. They let you set multiple slots for an intent or bots.
Following are some examples of how you can use wildcards:
To set all slots for a specific intent, such as PasswordReset, to 2000 milliseconds:
Name =
x-amz-lex:max-speech-duration-ms:PasswordReset:*
Value = 2000
To set all slots for all bots to 4000 milliseconds:
Name =
x-amz-lex:max-speech-duration-ms:*:*
Value = 4000
Wildcards apply across bots but not across blocks in a contact flow.
For example, you have a Get_Account_Number bot. In the contact flow, you have two Get customer input blocks. The first block sets the session attribute with a wildcard. The second one doesn't set the attribute. In this scenario, the change in behavior for the bot applies only to the first Get customer input block, where the session attribute is set.
Because you can specify that session attributes apply to the intent and slot level, you can specify that the attribute is set only when you're collecting a certain type of input. For example, you can specify a longer Start Silence Threshold when you're collecting an account number than when you're collecting a date.
If DTMF input is provided to a Lex bot using Amazon Connect, the customer input is made available as a Lex request attribute. The attribute name is
x-amz-lex:dtmf-transcript and the value can be a maximum of 1024 characters.
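If your Lex bot uses a Lambda function, the transcript can be read from the incoming event's request attributes. A minimal Python sketch (field names per the Lex V1 Lambda event format; verify against your Lex version) might look like:

dtmf_transcript = (event.get("requestAttributes") or {}).get("x-amz-lex:dtmf-transcript")
if dtmf_transcript:
    # The caller entered digits rather than speaking.
    print("DTMF digits received:", dtmf_transcript)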
Following are different DTMF input scenarios:
Where:
[DEL] = Deletion character (Default is * )
[END] = End character (Default is # )
Configured block
When this block is configured, it looks similar to the following image:
Timeout: What to do when the time in the Set timeout property has elapsed.
Default: What to do if a customer enters a value other than 1 or 2.
Sample flows
See these sample flows for scenarios that use this block:
Scenarios
See these topics for scenarios that use this block: | https://docs.aws.amazon.com/connect/latest/adminguide/get-customer-input.html | 2021-05-06T13:58:48 | CC-MAIN-2021-21 | 1620243988753.97 | [array(['images/get-customer-input.png', None], dtype=object)
array(['images/get-customer-input-configured.png', None], dtype=object)] | docs.aws.amazon.com |
IBM Cloud Container Registry
To scan images in IBM Cloud Container Registry, first create a credential so Prisma Cloud can access the registry. Go to Manage > Authentication > Credentials Store.
Click Add credential.
Enter a name.
In Type, select IBM Cloud.
In Account GUID, enter the GUID for your IBM Cloud account. See the IBM Cloud Docs to learn how to get the GUID of an account
In API Key, enter your API key. See the IBM Cloud Docs to learn how to create a service ID for Prisma Cloud, and then create an API key for the service ID.
Click Save.
Go to Defend > Vulnerabilities > Registry.
Click Add registry.
In the dialog, enter the following information:
From the Version drop-down list, select IBM Cloud Container Registry.
In Registry, enter the registry address for your region.
For example, if you use the us-south registry, enter registry.ng.bluemix.net.
In Namespace, Repository name, and Tag, optionally specify which images to scan.
In Credential, select the credential you just created.
Cap the number of images to scan.
Specify the maximum number of images to scan in the given repository, sorted according to last modified date. To scan all images in a repository, set Cap to 0. For a complete explanation of Cap, see the table in registry scan settings.. | https://docs.twistlock.com/docs/enterprise_edition/vulnerability_management/registry_scanning/scan_ibm_cloud_container_registry.html | 2021-05-06T13:20:28 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.twistlock.com |
# Introduction
# What is Ymir?
Ymir is a WordPress serverless DevOps platform for AWS. While the description sounds like something you'd hear playing buzzword bingo, that's actually a really good description of what Ymir does for you! So let's expand on all these terms and explain how they tie to Ymir.
First, Ymir is a DevOps platform because it handles a lot of the tasks that fall under the DevOps umbrella. With Ymir, you get a platform that:
- Manages all your AWS infrastructure
- Creates and configures environments with dedicated resources for all your projects
- Packages and deploys your WordPress projects to these environments
- Monitors the performance of your WordPress sites and AWS infrastructure
- Ensures your WordPress site can scale to handle any traffic sent at it
This last point is where the serverless element of Ymir comes into play. While serverless makes it sound like there are no servers, that isn't the case. There are still servers that have to execute your WordPress code. It's just that there are no more servers to manage.
With serverless, you upload your code to AWS Lambda and that's it! There are no more system updates to worry about. You don't have to worry about your server going down when you get thousands of requests hitting your server. AWS Lambda will scale on demand to handle any traffic situation without you having to do anything.
# What does Ymir do for you?
Going more into the specifics, here are some of the things that Ymir will handle for you:
- Auto-scaling WordPress infrastructure that handles any amount of traffic
- Database server management including backups
- Zero-downtime deployments and rollbacks
- All your assets stored on S3 and served using CloudFront CDN
- Unlimited project environments with their own unique URLs
- DNS management including custom domain support
- SSL certificate management
# Why is all this important?
As a WordPress professional, you know how important hosting is. It's a critical part of your client's experience with your service. They'll call you if their site is slow, down or even hacked. They'll question the quality of your work because of it.
You want to offer the best hosting experience to your client, but that comes with a price tag. You either have to pay a professional to manage servers for you or pay a premium hosting company. And it can be hard for your clients to justify the price.
With Ymir, you get to run your WordPress infrastructure on AWS. No one knows their stuff better than the people who work there. This lets you not have to worry about server security, performance, updates, etc.
Instead, you can focus on building the best possible WordPress solution for your clients. | https://docs.ymirapp.com/introduction.html | 2021-05-06T12:09:23 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.ymirapp.com |
Queens Series Release Notes¶
8.5.1-11¶
8.
8.4.
8.4.0¶
8.3.0¶
New Features¶
sriov_pf and sriov_vf object types are added to allow configuration and usage of the SR-IOV PF and VF devices in the host machine. The sriov_config service file shall be used to reconfigure the SR-IOV devices after reboots. VLAN suport for SR-IOV VF device is not available in this patch.
8.2.0¶
Bug Fixes¶
Getting the ordered, active, and available nics has traditionally been one of the parts of os-net-config that can cause issues due to either user error or bugs. There are helpful log messages to log what is going on in the logic, but it's all hidden behind the DEBUG log level. These messages should be INFO instead of DEBUG. It's much more user friendly when there are issues instead of having to rerun with --debug.
os-net-config always logs the message “Using mapping file at: /etc/os-net-config/mapping.yaml” even if the file does not exist, in which case it’s not actually being used at all. Move the log message to only be shown if the mapping file actually exists and is used. In the case where no mapping file is used, also log a message indicating such. | https://docs.openstack.org/releasenotes/os-net-config/queens.html | 2021-05-06T13:00:17 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.openstack.org |
The role of the Slang Voice Assistant is to enable the users of the app (with which it's integrated) to complete one or more User Journeys by voice, either fully, or partially, aided by touch. Technically, this is achieved through back-and-forth interactions between the Assistant and the app, in which the Assistant repeatedly notifies the app of the User Journey requested by the user, along with the accompanying data, while the app responds by informing the Assistant of the various App State transitions it makes, which allows the Assistant to speak appropriate messages or request more information from the user.
In order to effectively integrate Slang's Voice Assistant with your app, you will need to be familiar with the following concepts.
Slang CONVA provides Voice Assistants that are specific to a Domain, which corresponds to a real-world domain in which businesses and apps exist, such as Retail, Travel, Hospitality, Finance, etc. Each Voice Assistant in a specific Domain, supports a specific set of User Journeys for which it has been pre-trained.
Each domain consists of one or more Sub-domains. A Sub-domain corresponds to a data variant that could exist within a Domain. For example, within the Retail Domain, we could have Sub-domains for Grocery and Pharmacy. Similarly, within the Travel Domain, we could have Sub-domains for Flights, Trains, and Buses.
Typically, Slang's Voice Assistants for each Domain provide out-of-the-box support for one or more Sub-domains, and in some cases also allow Custom Sub-domains, where the customer is expected to upload data specific to that sub-domain.
A user journey is a path a user may take to reach their goal when using a web or mobile app.
User journeys, in Slang's context, represent various journeys or functionalities that are supported by the Assistant, which would result in the user reaching some logical end-point inside an app from that domain.
A User Journey can consist of an initial App State, a final App State, and zero or more intermediate App States.
The user journeys begin in one of two ways -
The user speaks a command that the Assistant is able to recognize as the beginning of a new user journey. Eg: "show me onions" or "where is my last order". The Assistant will analyze the utterance and determine which user journey belongs to it and will call an appropriate method (eg onSearch or onOrderManagement) that the app is expected to instantiate (more details in the Getting Started section)
The app launches an explicit user journey programmatically, eg to fill a form optionally by voice, as soon as the user lands on a relevant page.
Sample user journeys: Search, Order management, Offers
The data associated with every user journey. It's the data that the Assistant extracted from what the user spoke or from what the user did on the UI or something that the app wants to pre-inform the Assistant.
Sample Context items: Item, brand, quantity, unit, source & destination stations
An app state is a point in the user journey path that represents a specific form and context the app is in or wants to be in, based on the data it received from the Assistant or directly from the end-user via the screen.
That's a mouthful. Let's try to simplify it.
From a developer's perspective, in its simplest form, you can think of the various screens through which a user transitions while completing a User Journey as the various App States for that journey.
As an example, for the Search User journey, here are some sample app-states.
The search user journey will be triggered when the user is trying to search for an item and can optionally give enough details to uniquely identify an item (eg: aashirvaad 2 kg aata - here the brand and the size variant is also specified and the app can potentially map this to a specific SKU item in a retail e-commerce app)
SEARCH_RESULTS - when the app is showing the search listing based on the input it received from the user
PRODUCT_DETAILS - when the app is showing a product detail page that is specific to the item that the user searched for
CART_DETAILS - when the app wants to automatically add the item to cart based on what the user asked for
An App State is fully self-sufficient, which means that it is descriptive enough to provide all the necessary information to the Slang Assistant by itself, but it can optionally be enhanced with more details (conditions) to provide more fine-grained information that better represents the true state of the application at that point.
For example, in the SEARCH_RESULTS app state (which should be triggered when the app wants to show a search listing, based on a search command that the user gave), the following additional conditions are available for the app to inform the Assistant of a more precise state of the app.
SearchSuccess - when the app is showing the user the search listing
ItemNotFound - when the app is not able to find the item being searched
ItemInvalid - when the app is not able to match to any existing SKU item
For another example, in the PRODUCT_DETAILS app state (which should be triggered when the app was able to narrow down an item explicitly based on the search command that the user gave), the following conditions are available for the app -
QuantityRequired - when the app wants to know the quantity the user is interested in
AddToCartSuccess - when the app has added the item to cart
In the QuantityRequired case, the Assistant will automatically prompt the user to specify the quantity and it will inform the app with the details but also preserve details of the user journey up to that point.
Every App State has a default condition.
Non-terminal Conditions - Conditions that conceptually represent points in the app where the journey is not complete. Some input from the user is required to move ahead, and the Assistant will collect it. The app could also allow collecting the same information from the screen. An example is when the app needs the user to specify the quantity of an item they have asked to add to the cart.
Terminal Condition - Conditions that conceptually represent points in the app where the journey has ended. The Assistant will speak out the details of the state to the user. The app would also be showing something related to that app state on the screen. An example is when the app is showing a search result and wants the Assistant to speak out that the listing was shown successfully.
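To make the flow concrete, here is an illustrative sketch, in Python-style pseudocode, of how an app might react to the Search user journey and report app states back to the Assistant. The function and method names below are hypothetical; the real Slang SDK exposes its own callback and app-state APIs, so treat this purely as a conceptual outline.

# Hypothetical callback invoked when the Assistant detects the Search user journey.
def on_search(search_info, search_user_journey):
    item = search_info.item          # e.g. "aashirvaad aata", extracted by the Assistant
    results = catalog.search(item)   # the app's own search logic

    if not results:
        # Terminal condition: nothing matched, the Assistant speaks the corresponding prompt.
        search_user_journey.set_app_state("SEARCH_RESULTS", condition="ItemNotFound")
    elif len(results) == 1 and search_info.quantity is None:
        # Non-terminal condition: the Assistant will prompt the user for the quantity.
        search_user_journey.set_app_state("PRODUCT_DETAILS", condition="QuantityRequired")
    else:
        show_listing(results)
        # Terminal condition: the Assistant confirms the listing was shown.
        search_user_journey.set_app_state("SEARCH_RESULTS", condition="SearchSuccess")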
The final piece of the puzzle is the things that the Assistant speaks back to the user. There are 4 types of prompts that the Assistant handles today -
The message that the Assistant will speak when the user starts using the Assistant. Eg like "Welcome to Retail Assistant. What item do you want to buy?".
There are different types of Greeting prompts
Global greeting - The greeting that is spoken when the Assistant is invoked without any explicit user journey affinity. Eg: "Welcome to BigBasket. What do you want to do today?"
User journey specific greeting - The greeting that is spoken when the Assistant is invoked having an affinity for a specific user journey. Eg: "What item do you want to buy?"
The message that the Assistant will speak whenever it does not understand what the user is saying. Eg: "Sorry I did not understand. Please try again".
There are again two types of Clarification prompts
Global clarification - The clarification message that is spoken when the Assistant is invoked without any explicit user journey affinity. Eg: "Sorry I did not understand. Please try again"
User journey specific clarification - The clarification that is spoken when the Assistant is invoked having an affinity for a specific user journey. Eg: "Sorry I did not understand that item. Please try again" - notice the mention of the word item which is what it would be if the user journey was Search
The message that the Assistant will speak when asking for very specific input from the user. Eg: "Please mention the amount you want to buy?".
This would happen when the app sets a non-terminal app state condition.
The message that the Assistant will speak when it wants to tell some information to the user. Eg: "Showing you onions"
This would happen when the app sets a terminal app state condition. | https://docs.slanglabs.in/slang/user-journey-and-app-states | 2021-05-06T12:16:47 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.slanglabs.in |
Primary and Secondary Server¶
Set up a primary SMTP server to send email notifications. You can set up an optional secondary SMTP server for Deadline to use if the primary server is unavailable.
- Automatically Generate Email Addresses for New Users: Generates new email address for new users in the form username@postfix, where ‘postfix’ is the value entered in the Email Address Postfix field.
Note that if you have SSL enabled, you may need to configure your Linux and Mac OS X machines for SSL to work. The process for doing this is explained in Mono’s Security Documentation.
If you using Google Mail to send emails (smtp.gmail.com), you will typically use port 25 if SSL is disabled, and port 465 if SSL is enabled. See Google’s documentation on Sending Emails for more information. | https://docs.thinkboxsoftware.com/products/deadline/7.2/1_User%20Manual/manual/repository-config.html | 2021-05-06T11:45:27 | CC-MAIN-2021-21 | 1620243988753.97 | [array(['../_images/cro_en_primary.png', '../_images/cro_en_primary.png'],
dtype=object) ] | docs.thinkboxsoftware.com |
Trusted images
- 1. Overview
- 2. Feature overview
- 3. Trust indicators in the Console UI
- 4. Events Viewer
- 5. Establishing trust with rules
- 6. Creating trust groups manually
- 7. Creating trust groups based on what’s running in your environment
- 8. Writing policy
1. Overview
Trusted images is a security control that lets you declare, by policy, which registries, repositories, and images you trust, and how to respond when untrusted images are started in your environment.
Image provenance is a core security concern. In NIST SP 800-190 (Application Container Security Guide), the section on countermeasures for major risks (Section 4) says:
"Organizations should maintain a set of trusted images and registries and ensure that only images from this set are allowed to run in their environment, thus mitigating the risk of untrusted or malicious components being deployed."
Container runtimes, such as Docker Engine, will run, by default, any container you ask it to run. Trusted images lets you explicitly define which images are permitted to run in your environment. If an untrusted image runs, Prisma Cloud emits an audit, raises an alert, and optionally blocks the container from running.
Modern development has made it easy to reuse open source software. Pulling images from public registries, such as Docker Hub, is simple and fast, and it removes a lot of friction in operations. Retrieving and executing software with such ease, however, runs contrary to many organizations' security policies, which mandate that software originates from approved providers and distribution points. The Trusted Images rule engine lets you specify registries, repositories, and images that are considered trustworthy.
2. Feature overview
Trusted images is disabled by default. To enable it, go to Defend > Compliance > Trusted Images > Policy.
After enabling the feature, you must specify the images you trust. Declare trust using objects called trust groups. Trust groups collect related registries, repositories, and images in a single entity. Then use those entities for writing policy rules.
The default policy consists of a single rule that alerts on all images started in your environment. Build out your policy by writing new rules. Rules let you define:
Explicitly allowed trust groups.
Explicitly denied trust groups
An action to take when an image isn’t trusted.
When a container starts in your environment, Defender assesses the event against your trust policy, and then acts accordingly. Rules in a policy are evaluated top-down. The criteria for matching an event to a rule is the cluster or the hostname. When a matching rule is found, the rule is processed. No subsequent rules are processed. The first rule that matches the cluster or hostname holds the verdict for all images that can run on that cluster/host. If the image being started matches an explicitly denied trust group, the rule effect is applied. If an image doesn’t match either the list of explicitly allowed trust groups or explicitly denied trust groups, the rule effect is also applied.
Audits are created when the effect of a rule is alert or block. You can review audits in Monitor > Events. When reviewing audits, you can optionally add the image to a trust group to quickly adjust your policy and clean up false positives.
The Console UI provides a number of features to surface trust in your environment.
Image scan reports have indicators in the report header to show whether an image is trusted or not. See:
Monitor > Compliance > Containers and Images
Monitor > Vulnerabilities > Images
A dedicated page in Monitor > Compliance > Trusted Images, shows a snapshot of all running images in your environment and their trust status. The table is updated at scan-time, which is once per 24 hours by default. However, the page lets you force a re-scan and refresh the results.
Also note that updated policies aren’t automatically reflected in the view. If you change a rule in your Trusted Images policy, re-scan the images in your environment to update the view.
3. Trust indicators in the Console UI
Badges are shown throughout Console to clearly delineate between trusted and unstrusted images. The following badges are used:
Trusted — Explicitly trusted by a user-defined rule (Defend > Compliance > Trusted Images > Policy).
Untrusted. An image is considered untrusted if it’s untrusted on at least one host.
Badges are shown in the following pages:
Scan reports (click on a row in the table to open an image scan report):
Monitor > Compliance > Containers and Images
Monitor > Vulnerabilities > Images
Snapshot of all running containers and their trust status. The table is updated at scan-time.
Monitor > Compliance > Trusted Images
4. Events Viewer
Prisma Cloud generates an audit for every image that is started in your environment, but fails to comply with your trust policy. Audits can be reviewed under Monitor > Events > Trust Audits. When reviewing audits, you can optionally add the image to a trust group.
5. Establishing trust with rules
Prisma Cloud monitors the origin of all containers on the hosts it protects.
Policies are built on rules, and rules reference trust groups. Trust groups are collections of related registries and repositories.
Trust is established by one of the following factors:
Point of origin (registry and/or repository),
Base layer(s).
5.1. Establishing trust by registry and repository
Prisma Cloud lets you specify trust groups by registry and repository. If you specify just a registry, all images in the registry are trusted.
5.2. Establishing trust by base layer
Images can have layers in common. If your organization builds and approves specific base images for use in containerized apps, then you can use this mechanism to enforce compliance.
For example, consider the ubuntu:16.04 image. If you run docker inspect, the layers are:
" ]
Now consider a new image, where ubuntu:16.04 is the base OS. The following Dockerfile shows how such an image is constructed:
FROM ubuntu:16.04
RUN apt-get update
ADD hello.txt /home/hello.txt
WORKDIR /home
After building the image, and inspecting the layers, you can see that both images share the same first five layers.
", "sha256:29d16833b7ef90fcf63466967c58330bd513d4dfe1faf21bb8c729e69084058f", "sha256:1d622b0ae83a00049754079a2bbbf7841321a24cfd2937aea2d57e6e3b562ab9" ]
6. Creating trust groups manually
Trust groups are collections of related registries and repositories. Policies are built on rules, and rules reference trust groups.
When setting up a trust group, you can explicitly specify registries and repositories to trust.
Prisma Cloud supports leading and trailing wildcard matches as described in the following table:
Examples:
All repos under a parent repo:
reg: reg
repo: parent-repo/*
A nested repo:
reg: reg
repo: parent-repo/some-repo
All registries ending with "gcr.io":
reg: *gcr.io
repo: <unspecified>
Prerequisites:
You’ve enabled the trusted images feature in Defend > Compliance > Trusted Images > Policy.
Open Console.
Go to Defend > Compliance > Trusted Images > Trust Groups.
Click Add New Group.
In Name, enter a group name.
In Type, select how you want to specify an image.
By Image:
There are two ways to specify images:
Method 1 - Choose from a list of containers already running in your environment. In the table, select the images you trust, and click Add To Group.
Method 2 - Specify a registry address and/or repository, and click Add To Group. If you specify just a registry, then all images in the registry are trusted. If you specify just a repository, the registry is assumed to be Docker Hub.
As you add entries to the trust group, the entries are enumerated in the Group Images table at the bottom of the dialog.
By Base Layer:
Prisma Cloud lets you import the base layers from any image in your environment. If Prisma Cloud has seen and scanned an image, it is available in the Image drop-down list.
Select an image, import it, and then review the SHA256 hashes for the base layers. For example, if the secteam/ubuntu:16.04 is your trusted base OS, select it from the Image drop-down list, and click Import.
Click Save.
7. Creating trust groups based on what’s running in your environment
When setting up a trust group, Prisma Cloud shows you all running images in your environment. You can use the filters to narrow the set, and add them all to a trust group.
Filtering images by cluster is the most convenient option. For example, consider an environment with two clusters called "prod" and "dev". To create a trust group called "production images", select all the images running on the "prod" cluster. You would type "prod" in the filter line, and click Enter to filter. Then you could select all images on cluster and add them to the trust group. Later, you could create a rule for this prod cluster by specifying the cluster resource as "prod", and add the new trust group to the allowed groups. For more specific needs, you can also filter the running images by hosts.
8. Writing policy
After declaring the images you trust with trust groups, write the rules that make up your policy.
Prisma Cloud evaluates the rules in your trusted images policy from top to bottom until a match is found based on cluster and hostname. If the image being started in your environment matches a cluster/hostname in a rule, Prisma Cloud applies the actions in the rule and stops processing any further rules. If no match is found, no action is taken.
You should never delete the default rule, Default - alert all, and it should always be the last rule in your policy. The default rule matches all clusters and hosts (*). It serves as a catchall, alerting you to images that aren’t captured by any other rule in your policy.
Assuming the default rule is in place, policy is evaluated as follows:
A rule is matched — The rule is evaluated.
A rule is matched, but no trust group is matched — The image is considered untrusted. Prisma Cloud takes the same action as if the image were explicitly denied.
No rule match is found — The default rule is evaluated, and an alert is raised for the image that was started. The default rule is always matched because the cluster and hostname are set to a wildcard
Open Console.
Go to Defend > Compliance > Trusted Images > Policy.
Click Add Rule.
Enter a rule name.
In Effect, specify how Prisma Cloud responds when it detects an explicitly denied image starting in your environment. This action is also used when a rule is matched (by cluster/hostname), but no trust group in the rule is matched.
Ignore — Do nothing if an untrusted image is detected.
Alert — Generate an audit and raise an alert.
Block — Prevent the container from running on the affected host. Blocking isn’t supported for Windows containers.
Specify the rule’s scope.
By default, the rule applies to all clusters and hosts in your environment. Pattern matching is supported.
Explicitly allow or deny images by trust group.
(Optional) Append a custom message to the block action message.
Custom messages help the operator better understand how to handle a blocked action. You can enhance Prisma Cloud’s default response by appending a custom message to the default message. For example, you could tell operators where to go to open a ticket.
Click Save.
Your rule is added to the top of the rule list. Rules are evaluated from top to bottom. The rule at the top of the table has the highest priority. The rule at the bottom of the table should be your catch-all rule. | https://docs.twistlock.com/docs/compute_edition/compliance/trusted_images.html | 2021-05-06T12:00:54 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.twistlock.com |
New in version 0.7.
Graphs disk IO stats. It automatically converts the running totals of Writes and Reads into rates of the values. The time based fields are left as running totals of the amount of time doing IO. Expects to receive messages with disk IO data embedded in a particular set of message fields which matches what is generated by Linux Disk Stats Decoder: WritesCompleted, ReadsCompleted, SectorsWritten, SectorsRead, WritesMerged, ReadsMerged, TimeWriting, TimeReading, TimeDoingIO, WeightedTimeDoingIO, TickerInterval.
Config:
Sets the size of the sliding window, e.g., 1440 rows at 60 seconds per row gives a 24-hour sliding window with 1 minute resolution.
anomaly_config (string) - (see Anomaly Detection Module)
Example Heka Configuration
[DiskStatsFilter] type = "SandboxFilter" filename = "lua_filters/diskstats.lua" preserve_data = true message_matcher = "Type == 'stats.diskstats'" ticker_interval = 10 | https://hekad.readthedocs.io/en/v0.10.0/config/filters/disk_stats.html | 2018-11-12T19:49:07 | CC-MAIN-2018-47 | 1542039741087.23 | [] | hekad.readthedocs.io |
New in version 0.9.
Plugin Name: SandboxInput
The SandboxInput provides a flexible execution environment for data ingestion and transformation without the need to recompile Heka. Like all other sandboxes it needs to implement a process_message function. However, it doesn't have to return until shutdown. If you would like to implement a polling interface, process_message can return zero when complete and it will be called again the next time TickerInterval fires (if ticker_interval was set to zero it would simply exit after running once). See Sandbox.
Config:
All of the common input configuration parameters are ignored since the data processing (splitting and decoding) should happen in the plugin.
instruction_limit is always set to zero for SandboxInputs.
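For illustration only, a minimal polling input could look like the following Lua sketch. It is not the meminfo.lua used in the example below; it only demonstrates the process_message contract, and it assumes the standard sandbox functions inject_payload and read_config are available.

-- heartbeat.lua: emits one payload per TickerInterval
local msg = read_config("message") or "input is alive"

function process_message()
    inject_payload("txt", "heartbeat", msg)
    return 0 -- returning 0 ends this run; Heka calls process_message again on the next tick
end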
Example
[MemInfo] type = "SandboxInput" filename = "meminfo.lua" [MemInfo.config] path = "/proc/meminfo" | https://hekad.readthedocs.io/en/v0.10.0/config/inputs/sandbox.html | 2018-11-12T19:46:57 | CC-MAIN-2018-47 | 1542039741087.23 | [] | hekad.readthedocs.io |
Use your free Azure Active Directory subscription in Office 365.
Before you begin
Use a private browsing session (not a regular session) to access the Azure portal (in step 1 below) because this will prevent the credential that you are currently logged on with from being passed to Azure. To open an InPrivate Browsing session in Internet Explorer or a Private Browsing session in Mozilla FireFox, just press CTRL+SHIFT+P. To open a private browsing session in Google Chrome (called an incognito window), press CTRL+SHIFT+N.
Access Azure Active Directory
Go to portal.azure.com and sign in with your Office 365 work or student account.
In the left navigation pane in the Azure portal, click Azure Active Directory.
The Azure Active Directory admin center is displayed.
More information
You can also access the Azure Active Directory admin center from the Office 365 admin center. In the left navigation pane of the Office 365 admin center, click Admin centers > Azure Active Directory.
For information about managing users and groups and performing other directory management tasks, see Manage your Azure AD directory. | https://docs.microsoft.com/en-us/office365/securitycompliance/use-your-free-azure-ad-subscription-in-office-365?redirectSourcePath=%252fen-ie%252farticle%252fuse-your-free-azure-active-directory-subscription-in-office-365-d104fb44-1c42-4541-89a6-1f67be22e4ad | 2018-11-12T20:47:48 | CC-MAIN-2018-47 | 1542039741087.23 | [] | docs.microsoft.com |
page setup on a Raspberry Pi or a separate development server.
Fortunately, the latest versions of Synology DSM make this easy.
How to configure Reverse Proxy settings with Synology DSM
- Control Panel > Application Portal > Reverse Proxy
- Click Create and in the window that appears, fill in the description field.
- Then, under Source, select the protocol for your website HTTP or HTTPS
- Fill in the Hostname of your internal website (i.e. mysite.freedns-domain.org).
- Fill in the port – i.e. 80.
- Next under Destination – Select the protocol, the internal IP address of your web server and the port.
- Now, from a separate internet connection, provide mysite.freedns-domain.org is pointing your your Synology’s External IP – Your website should load up. | https://docs.autechnet.com/2017/12/27/configuring-reverse-proxy-with-synology-dsm/ | 2018-11-12T19:50:53 | CC-MAIN-2018-47 | 1542039741087.23 | [] | docs.autechnet.com |
Header file for
AzTypeInfo, which uniquely describes a type so that it can be identified across Lumberyard modules and serialized into different contexts.
More...
Detailed Description
Header file for
AzTypeInfo, which uniquely describes a type so that it can be identified across Lumberyard modules and serialized into different contexts.
For a given type,
AzTypeInfo stores a name and a type ID. For a type to be uniquely identified across Lumberyard, its type ID must be unique. The default implementation of
AzTypeInfo attempts to generate a unique type ID by accessing static functions inside the type. For types for which this is not an option, you must specialize
AzTypeInfo.
- Note
- We recommend against using
AZ_FUNCTION_SIGNATUREto generate type IDs, because it does not always generate a unique type ID. Some compilers fail to expand the template type, and thus provide the same result for multiple types.
To add
AzTypeInfo to a class, use AZ_TYPE_INFO or AZ_RTTI within the class declaration or use AZ_TYPE_INFO_SPECIALIZE externally. When referencing the class from other code, be sure to include the header file that contains the class's
AzTypeInfo information. This is usually the header file of the class.
If you use AZ Code Generator, unique type IDs are automatically generated for you.
Macro Definition Documentation
◆ AZ_TYPE_INFO
A macro that you add to your class to enable the class to be identified across modules and serialized into different contexts.
You must pass in at least a type name and a unique type ID.
Usage: AZ_TYPE_INFO(_ClassName,_ClassUuid, (optional template parameters))
Templates and type IDs are special cases. If possible, prefer AZ_TYPE_INFO_SPECIALIZE for these types.
◆ AZ_TYPE_INFO_SPECIALIZE
AzTypeInfo specialization macro that is used to externally add AzTypeInfo to a type that cannot use the
AZ_RTTI/
AZ_TYPE_INFO macros internally.
Used for library types, middleware, and so on. | https://docs.aws.amazon.com/lumberyard/latest/apireference/_type_info_8h.html | 2018-11-12T20:25:22 | CC-MAIN-2018-47 | 1542039741087.23 | [] | docs.aws.amazon.com |
Setting Up JMX Authentication for GemFire Management and Monitoring
Setting Up JMX Authentication for GemFire Management and Monitoring
To force JMX clients such as gfsh and GemFire Pulse to authenticate into the GemFire management system, you must configure the JMX Manager node.
By default, the JMX manager allows clients without credentials to connect. To set up JMX authentication for the management system:
- Verify that the jmx-manager GemFire property is set to true on any node that you want to be able to become a JMX Manager and authenticate clients. If this property is set to false or not specified, then all other jmx-manager-* properties are ignored.
- Create a password file that contains entries for the user names and passwords you want to grant access to GemFire's management and monitoring system. For example:
#the gemfiremonitor user has password Abc!@# #the gemfiremanager user has password 123Gh2! gemfiremonitor Abc!@# gemfiremanager 123Gh2!
- On each of your JMX Manager-enabled nodes, set the GemFire property jmx-manager-password-file to the name of the file you created in step 2. This will require clients to authenticate when connecting to a JMX Manager node in GemFire.
- If you wish to further restrict access to system operations, you can also set up an access file for the JMX Manager. The access file indicates whether the users listed in the password file have the ability to read system MBeans (monitor the system) or whether they can additionally modify MBeans (perform operations). For example, you can define the following:
#the gemfiremonitor user has readonly access #the gemfiremanager user has readwrite access gemfiremonitor readonly gemfiremanager readwrite
- On each of your JMX Manager-enabled nodes, set the GemFire property jmx-manager-access-file to the name of the file you created in step 4. This will associate MBean permissions to the users who authenticate to the JMX Manager node in GemFire.
- If desired, enable SSL for your JMX Manager connections. If you enable SSL for GemFire peer-to-peer connections, then by default the same SSL configuration is applied to the JMX manager. You can override the SSL configuration for JMX clients by configuring JMX manager configuration properties (such as jmx-manager-ssl) in the gfsecurity.properties file. See Configuring SSL.
For more information on the format of the password and access file, see. | http://gemfire81.docs.pivotal.io/docs-gemfire/managing/security/management_system_jmx_authentication.html | 2018-11-12T20:29:22 | CC-MAIN-2018-47 | 1542039741087.23 | [] | gemfire81.docs.pivotal.io |
Expanded state is the state when a cell is open, i.e. the user can see both header and body of the cell.
Collapsed state is the state when a cell is closed, i.e. the user can see only the header of the cell.
If the last visible cell in a row/column becomes collapsed, the nearest collapsed cell will be expanded instead. The component expands either the nearest left cell or the nearest right cell (if there is no nearest left one).
In other words, if there are two or more cells in a row/column, at least one of them will always be expanded.
You can localize the tooltip of the "Expand/Collapse" button into the desired language by using the following technique:
dhtmlXLayoutCell.prototype.i18n.expand = "Expand/Collapse";
To expand an item, use the expand method:
myLayout.cells("a").expand();
To collapse an item, use the collapse method:
myLayout.cells("a").collapse();
Related sample: Expanding/collapsing cellsBack to top | https://docs.dhtmlx.com/layout__expanding.html | 2018-11-12T20:34:46 | CC-MAIN-2018-47 | 1542039741087.23 | [] | docs.dhtmlx.com |
Making trees¶
This is a short tutorial on how to make trees and other types of vegetation from scratch.
The aim is to not focus on the modelling techniques (there are plenty of tutorials about that), but how to make them look good in Godot.
Start with a tree¶
I took this tree from SketchFab:
and opened it in Blender.
Paint with vertex colors¶
The first thing you may want to do is to use the vertex colors to paint how much the tree will sway when there is wind. Just use the vertex color painting tool of your favorite 3D modelling program and paint something like this:
This is a bit exaggerated, but the idea is that color indicates how much sway affects every part of the tree. This scale here represents it better:
Write a custom shader for the leaves¶
This is a simple example of a shader for leaves:
shader_type spatial; render_mode depth_draw_alpha_prepass, cull_disabled, world_vertex_coords;
This is a spatial shader. There is no front/back culling (so leaves can be seen from both sides), and alpha prepass is used, so there are less depth arctifacts that result from using transparency (and leaves cast shadow). Finally, for the sway effect, world coordinates are recommended, so the tree can be duplicated, moved, etc. and it will still work together with other trees.
uniform sampler2D texture_albedo : hint_albedo; uniform vec4 transmission : hint_color;
Here, the texture is read, as well as a transmission color, which is used to add some back-lighting to the leaves, simulating subsurface scattering.
uniform float sway_speed = 1.0; uniform float sway_strength = 0.05; uniform float sway_phase_len = 8.0; void vertex() { float strength = COLOR.r * sway_strength; VERTEX.x += sin(VERTEX.x * sway_phase_len * 1.123 + TIME * sway_speed) * strength; VERTEX.y += sin(VERTEX.y * sway_phase_len + TIME * sway_speed * 1.12412) * strength; VERTEX.z += sin(VERTEX.z * sway_phase_len * 0.9123 + TIME * sway_speed * 1.3123) * strength; }
This is the code to create the sway of the leaves. It’s basic (just uses a sinewave multiplying by the time and axis position, but works well). Notice that the strength is multiplied by the color. Every axis uses a different small near 1.0 multiplication factor so axes don’t appear in sync.
Finally all that is left is the fragment shader:
void fragment() { vec4 albedo_tex = texture(texture_albedo,UV); ALBEDO = albedo_tex.rgb; ALPHA = albedo_tex.a; METALLIC = 0.0; ROUGHNESS = 1.0; TRANSMISSION = transmission.rgb; }
And this is pretty much it.
The trunk shader is similar, except it does not write to the alpha channel (thus no alpha prepass is needed) and does not require transmission to work. Both shaders can be improved by adding normal mapping, AO and other maps.
Improving the shader¶
There are many more resources on how to do this that you can read. Now that you know the basics, a recommended read is the chapter from GPU Gems3 about how Crysis does this (focus mostly on the sway code, as many other techniques shown there are obsolete): | https://docs.godotengine.org/en/latest/content/3d/making_trees.html | 2018-11-12T19:40:24 | CC-MAIN-2018-47 | 1542039741087.23 | [array(['../../_images/tree_sway.gif', '../../_images/tree_sway.gif'],
dtype=object)
array(['../../_images/tree_base.png', '../../_images/tree_base.png'],
dtype=object)
array(['../../_images/tree_vertex_paint.png',
'../../_images/tree_vertex_paint.png'], dtype=object)
array(['../../_images/tree_gradient.png',
'../../_images/tree_gradient.png'], dtype=object)] | docs.godotengine.org |
Timeline Service 2.0 Installation
Timeline Service 2.0 is automatically installed as part of the YARN installation process when deploying an HDP cluster using Ambari.
Timeline Service 2.0 uses HBase as the primary backing storage. Therefore, you can use either of the following approaches to configure HBase for Timeline Service 2.0:
- Enable the automatic installation of HBase along with Timeline Service 2.0 as part of the YARN installation process. This is different from the HBase service that you install using Ambari.
- Configure YARN to point to an external instance of HBase service installed using Ambari. | https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/data-operating-system/content/dosg_timeline_service_2.0_installation.html | 2018-11-12T20:55:16 | CC-MAIN-2018-47 | 1542039741087.23 | [] | docs.hortonworks.com |
DMA_CB_TypeDef Struct ReferenceEMLIB > DMA. | https://docs.silabs.com/mcu/latest/efm32lg/structDMA-CB-TypeDef | 2018-11-12T20:29:02 | CC-MAIN-2018-47 | 1542039741087.23 | [] | docs.silabs.com |
Installation and Configuration Guide
Local Navigation
Troubleshooting
- BlackBerry Social Networking Application Proxy is not running
- Download of the BlackBerry Client for Microsoft SharePoint fails with HTTP Error 500
- Access denied. Insecure SSL request
- Object reference not set to an instance of an object
- Users cannot preview a Microsoft Office 2010 file
- Users cannot upload a Microsoft Office 2007 file
- Users cannot access content on a site
- Link redirection is disabled for this site
- HTTP Error 400, Unknown host hostname:port number
- Parameter webUrl is missing or invalid with HTTP error 500 Internal Server Error
- Request could not be understood by the server with HTTP error 400 Bad Request
- Users want to log in as a different user but they cannot clear the credentials cache
- Users cannot use the features of the BlackBerry Client for Microsoft SharePoint or the BlackBerry Client for Microsoft SharePoint does not start
- This application has been blocked by your system administrator
- Your device is not connected to the Internet. Please try again or contact the service provider for more information.
Previous topic: System logging properties
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/admin/deliverables/31148/Troubleshooting_671621_11.jsp | 2015-07-28T03:48:10 | CC-MAIN-2015-32 | 1438042981525.10 | [] | docs.blackberry.com |
Revision history of "JDatabaseMySQL::updateObject/1.6"
View logs for this page
There is no edit history for this page.
This page has been deleted. The deletion and move log for the page are provided below for reference.
- 06:23, 3 May 2013 Wilsonge (Talk | contribs) deleted page JDatabaseMySQL::updateObject/1.6 (content was: "__NOTOC__ =={{JVer|1.6}} JDatabaseMySQL::updateObject== ===Description=== Description. {{Description:JDatabaseMySQL::updateObject}} <span class="editsection" st..." (and the only contributor was "Doxiki2")) | https://docs.joomla.org/index.php?title=JDatabaseMySQL::updateObject/1.6&action=history | 2015-07-28T04:57:58 | CC-MAIN-2015-32 | 1438042981525.10 | [] | docs.joomla.org |
Information for "Upgrading Versions" Basic information Display titlePortal:Upgrading Versions Default sort keyUpgrading Versions Page length (in bytes)5,704 Page ID29722 Page content languageEnglish (en) Page content modelwikitext Indexing by robotsAllowed Number of redirects to this page4 Number of subpages of this page5 (0 redirects; 5 non-redirects) Page protection EditAllow all users MoveAllow all users Edit history Page creatorTom Hutchison (Talk | contribs) Date of page creation19:34, 8 September 2013 Latest editorSandra97 (Talk | contribs) Date of latest edit14:48, 10 July 2015 Total number of edits43 Total number of distinct authors5 Recent number of edits (within past 30 days)1 Recent number of distinct authors1 Page properties Transcluded templates (2)Templates used on this page: Template:CurrentSTSVer (view source) (protected)Template:Note (view source) Retrieved from ‘’ | https://docs.joomla.org/index.php?title=Portal:Upgrading_Versions&action=info | 2015-07-28T03:42:38 | CC-MAIN-2015-32 | 1438042981525.10 | [] | docs.joomla.org |
Changes related to "How to Organize a JoomlaDay™"
← How to Organize a JoomlaDay™
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
No changes during the given period matching these criteria. | https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&from=20140404161728&target=How_to_Organize_a_JoomlaDay%E2%84%A2 | 2015-07-28T04:55:26 | CC-MAIN-2015-32 | 1438042981525.10 | [] | docs.joomla.org |
Analyze Test Results
The Test Results panel allows you to traverse test execution results, drilling down to the individual test step and back up again to the test list level.
- To see the results for a test list, double-click the result in the calendar.
- The Test Results panel appears on the right. The test results panel shows results for the entire test list.
The **Passed/Total** column shows that all steps of both tests passed. Double-clicking a test in this list drills down to show the steps of that test.
Double-click a test in the Test Results view to see the result of each test step. The bread crumb trail at the top of the panel now shows the test list followed by the test name.
'Step as step results'], dtype=object) ] | docs.telerik.com |
Third-Party Plugins
Altis is built atop the open-source WordPress CMS. In addition to custom Altis modules, you can also use third-party WordPress plugins to take advantage of the open-source ecosystem.
There are two ways to add plugins to your project: either by managing them via Composer, or by committing the plugins to your project directly (or using submodules).
Managing Plugins via Composer
Plugins can be managed via Composer dependencies, similar to how Altis modules are added.
Some plugins are natively available via Composer, and can be found on Packagist via the
wordpress-plugin type.
The starter project automatically configures Composer to install plugins into the correct place for your project, but if you started from scratch you may need to configure this yourself. First, install
composer/installers as a dependency, which will allow installing these types of projects. Next, configure Composer under
extra.installer-paths in your
composer.json to place them into the correct directory:
"extra": { "installer-paths": { "content/plugins/{$name}/": [ "type:wordpress-plugin" ], "content/themes/{$name}/": [ "type:wordpress-theme" ] } }
Some plugins are not available natively on Packagist, and are available only via the WordPress.org Plugin Repository. These can be installed via a third-party Composer repository called WordPress Packagist.
To set up and configure WordPress Packagist, first add the custom repository to your project's
composer.json under a
repositories key:
"repositories": [ { "type":"composer", "url":"" } ]
To install plugins, you can now use
composer require wpackagist-plugin/{plugin-name}, where
{plugin-name} is the "slug" of the plugin from the WordPress.org Plugin Repository to install.
For example, Akismet is available in the WordPress.org Plugin Repository, so the "slug" of the plugin is akismet. You can install this with:
composer require wpackagist-plugin/akismet
Managing Plugins Manually
Third-party plugins should be placed inside the
content/plugins/ directory. Each plugin should be contained in its own directory within the plugins directory, with one file containing a comment header.
Generally speaking, we recommend using Git submodules if you're installing modules manually. This reduces the amount of code necessary in your repository, and makes managing updates much easier.
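For example, a plugin hosted in Git could be added as a submodule with a command along these lines (the repository URL and plugin directory name are placeholders):

git submodule add https://github.com/example/some-plugin.git content/plugins/some-plugin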
Pre-Approved Plugins
We maintain a list of trusted 3rd party plugins that you can refer to for recommendations, read more about our pre-approved plugins here. | https://docs.altis-dxp.com/v5/getting-started/third-party-plugins/ | 2022-09-24T19:43:52 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.altis-dxp.com |
Objective-C SDK defaults to a User Profile Service that stores this state directly on the device. See the Objective-C SDK reference for more details.
No op in Objective-C
You can overwrite the User Profile Service by setting the User Profile Service to
OPTLYUserProfileServiceNoOp.init(). Here is an example:
let optimizelyManager = OPTLYManager.init { (builder) in
    builder!.projectId = "12345678"
    builder?.userProfileService = OPTLYUserProfileServiceNoOp.init()
}

optimizelyManager?.initialize(callback: { (err, client) in
    // activate an experiment
    let variation = client?.activate("Test", userId: "user_test_1")
    NSLog((variation?.variationKey)!)
})
The bucketing will no longer be sticky for returning visitors: new traffic allocation changes will be applied for all visitors. The default user profile in iOS stores the bucketing decision for every visitor, so if the traffic allocation changes, this change is not applied to visitors who already have bucketing in storage.
Implement a User Profile Service
If you want to implement a custom User Profile Service rather than using the default provided by Objective-C SDK, then your User Profile | https://docs.developers.optimizely.com/full-stack/docs/implement-a-user-profile-service-objective-c | 2022-09-24T20:30:05 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.developers.optimizely.com |
PopupPanel¶
Inherits: Popup < Control < CanvasItem < Node < Object
Class for displaying popups with a panel background.
Description¶
Class for displaying popups with a panel background. In some cases it might be simpler to use than Popup, since it provides a configurable background. If you are making windows, better check WindowDialog.
If any Control node is added as a child of this
PopupPanel, it will be stretched to fit the panel's size (similar to how PanelContainer works).
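As a quick illustration, a GDScript snippet along the following lines shows a label inside a PopupPanel; treat it as an untested sketch rather than canonical usage.

var popup = PopupPanel.new()
var label = Label.new()
label.text = "Hello!"
popup.add_child(label)  # the child Control is stretched to fit the panel
add_child(popup)
popup.popup_centered()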
Theme Properties¶
Theme Property Descriptions¶
The background panel style of this
PopupPanel. | https://docs.godotengine.org/pt_BR/latest/classes/class_popuppanel.html | 2022-09-24T20:20:46 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.godotengine.org |
This app demonstrates how to make a secure (TLS) HTTPS request from a Gecko OS device by installing the target website's CA certificate on the module. This example uses the Silabs website.
Save the site's CA cert, in a Base-64 encoded file, using the following steps in a Chrome web browser:

- In an up-to-date Chrome web browser, navigate to the target website.
- View the site's certificate information and select a CA cert in the certificate chain.
- Click the View Certificate button
- A Certificate dialog appears for the selected certificate.
- Select the Details tab
- Click the Copy to File... button
- Another dialog appears
- Click the Next button
- Select the Base-64 encoded X.509 (.CER) format, then click the Next button
- Save the CA cert to your desktop with a file name of your choice. This example uses the name site_cert_chain_ca. Chrome appends a .cer extension.
At this point the website's CA certificate is saved to the desktop.

Double-click on the site_cert_chain_ca.cer file on your desktop to view the certificate's information.
Upload CA Certificate to Wi-Fi Module
The next step is to upload the CA certificate to the Gecko OS module.
The easiest way to do this is to use the Gecko OS Web App provided with Gecko OS. To start the webapp, issue the setup_web command to the device, then open the webapp in a browser and use it to transfer files from your computer to the file system on the Gecko OS device.
On your computer, find the CA certificate that you just created:
site_cert_chain_ca.cer
Drag this file onto the Gecko OS webapp target area where it says Drop files here. Alternatively, click the button labelled Click to add files.
That's it! The CA certificate is now stored in non-volatile memory on the Gecko OS device flash file system.
Issue HTTPS GET Request
Now that the Wi-Fi module has the website CA cert, the module can issue secure HTTPS requests to the website.
First, configure the following setting so that the CA certificate is always used by default:
set network.tls.ca_cert site_cert_chain_ca.cer
save
For details see the network.tls.ca_cert variable.
Set the module's network credentials so that the module can connect to the Internet. See Configuration and Setup, Wi-Fi Setup.
Now, issue an HTTPS request to the server:
http_get
Assuming the Wi-Fi network's SSID/password are set, this issues a secure HTTPS request to the server and downloads the encrypted webpage.
Once the connection is open, read the webpage data with the command:
read 0 1000
Keep issuing this command until all the webpage data is read.
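For reference, the module-side commands used in this walkthrough are, in order, the following. The <https URL> placeholder stands for the address of the site you are fetching, which is not fixed by this example; check the http_get command reference for the exact argument order on your Gecko OS version:

set network.tls.ca_cert site_cert_chain_ca.cer
save
http_get <https URL>
read 0 1000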
You can store multiple CA certs on the Gecko OS flash file system.
Instead of setting the cert in the network.tls.ca_cert variable, you can specify the CA cert as an argument to the http_get command:
http_get site_cert_chain_ca.cer
Supporting Gecko OS Versions
- Gecko OS 2 | https://docs.silabs.com/gecko-os/2/amw007-w00001/2.3/cmd/apps/web-page-tls-cert | 2022-09-24T18:41:25 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.silabs.com |
Delete a relationship
You can delete a relationship between tables through the ThoughtSpot application or TQL.
You must have either the Can administer ThoughtSpot privilege or edit access to the tables whose join you want to delete.
The list of existing joins appears.
In a worksheet, you must click Joins within this worksheet and choose Joins from this worksheet.
Find the relationship you want to delete, and click the delete icon.
Repeat these steps until all the joins you want to remove have been deleted.
Private Dependencies
Altis automatically configures the build server to access your primary repository (the one you provided when your first environment was created).
If you need to add additional private dependencies to your environment, further configuration will be required.
This configuration is not necessary for public repositories. If you experience permission errors with public repositories, ensure you are using an https URL for your repository rather than the git@host:repo format.
GitHub is the only officially supported repository host; however, it is possible to successfully use dependencies from other hosts.
Note: Altis Support is unable to assist with configuring access to private dependencies on third-party hosts.
GitHub
If you have dependencies in private GitHub repositories, you can use the pre-configured Altis accounts to provide access to these.
To use these repositories, provide the @humanmade-cloud user with read access to the repository using GitHub's access management tools.
BitBucket
If you have dependencies in private Bitbucket repositories, you will need to configure access manually.
We recommend following Composer's configuration guide for Bitbucket repositories. In particular, note the following:
- Composer requires the use of https repository URLs for Bitbucket authentication.
- Bitbucket authentication must be set up following Composer's instructions for Bitbucket repositories (see the example below).
- Note that when applying this configuration, it must be saved into the auth.json file within your repository; i.e. do not use the --global option.
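For illustration, an auth.json using Composer's documented bitbucket-oauth mechanism looks roughly like this; the consumer key and secret are placeholders for the values you generate from an OAuth consumer in your Bitbucket settings:

{
    "bitbucket-oauth": {
        "bitbucket.org": {
            "consumer-key": "your-consumer-key",
            "consumer-secret": "your-consumer-secret"
        }
    }
}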
If you must use git@bitbucket.org-style URLs (SSH URLs), you will need to follow the guide below for other Git hosts.
Other Git hosts
For other Git hosts, you will typically need to set up SSH access keys for these servers. This requires creating a private key for this purpose and configuring the external service to accept it.
Note: You may not use the pre-configured Altis authentication keys for these purposes. Any attempt to access these keys is a violation of the terms of service, and will lead to account termination.
As Altis already provisions keys onto the build server, you will need to add a secondary set of SSH keys, and configure SSH (used by Git and Composer) to use them.
Generate a new SSH keypair locally. We recommend these instructions from GitHub. Please note that this keypair must not have a passphrase set. This will create a new file called id_rsa or similar; this is your private key.
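For example, a passphrase-less key can be generated with the command below. The output file name and the comment are arbitrary choices; the rest of this guide assumes the file is named bitbucket_key:

ssh-keygen -t ed25519 -f bitbucket_key -N "" -C "private-dependency-deploy-key"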
Commit this private key to your primary repository; .config is a convenient place to put this file.
Configure the Git host to provide access to your private dependencies using this key; these instructions depend on your Git host.
In your build script, copy the private key file and set the permissions to avoid any warnings from SSH. (Copying the file ensures you avoid potential conflicts from the repository.)
For example, if your key file is at .config/bitbucket_key:

cp .config/bitbucket_key ~/.ssh/bitbucket_key
chmod 0600 ~/.ssh/bitbucket_key
These steps must be before your composer install step.
Register the key with SSH.
ssh-add ~/.ssh/bitbucket_key
This step must be before your composer install step, and after the key is copied into place.
SSH will now be able to use your key to connect to your Git host.
For example, your build script may look like:
cp .config/bitbucket_key ~/.ssh/bitbucket_key
chmod 0600 ~/.ssh/bitbucket_key
ssh-add ~/.ssh/bitbucket_key
composer install
Composer should now be able to install your private dependencies from the other Git hosts.
Note: Altis Support is unable to assist with configuring access to private dependencies on third-party hosts. | https://docs.altis-dxp.com/v12/cloud/build-scripts/private-dependencies/ | 2022-09-24T19:04:42 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.altis-dxp.com |
Pass in audience attributes
In the React SDK, you can provide the user id and user attributes together as part of the user prop, which is passed to the OptimizelyProvider component.

import React from 'react';
import {
  createInstance,
  OptimizelyProvider,
} from '@optimizely/react-sdk';

const optimizely = createInstance({
  sdkKey: '<Your_SDK_Key>', // TODO: Update to your SDK Key
});

// You can pass strings, numbers, Booleans, and null as user attribute values.
// The example below shows how to pass in attributes.
const user = {
  id: 'myuser123',
  attributes: {
    device: 'iPhone',
    lifetime: 24738388,
    is_logged_in: true,
    app_version: '4.3.0-beta',
  },
};

function App() {
  return (
    <OptimizelyProvider optimizely={optimizely} user={user}>
      {/* … your application components here … */}
    </OptimizelyProvider>
  );
}
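Once the provider is set up this way, components inside it are evaluated against the user id and attributes you supplied. As a sketch, in newer versions of the React SDK a child component can read a flag decision with the useDecision hook; the flag key below is a placeholder:

import React from 'react';
import { useDecision } from '@optimizely/react-sdk';

function MyComponent() {
  // The decision is evaluated against the user and attributes from OptimizelyProvider
  const [decision] = useDecision('my_flag_key');
  return <div>{decision.enabled ? 'Flag on' : 'Flag off'}</div>;
}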
fires after a task branch has been loaded to the page (only when dynamic loading is enabled)
Available only in PRO Edition
gantt.attachEvent("onAfterBranchLoading", function(settings){ console.log(settings.url); });
The settings object contains two properties: the id of the task and the request url:
{ taskId: 1, url:"/data?parent_id=1" }
This event fires only when Dynamic loading is enabled. | https://docs.dhtmlx.com/gantt/api__gantt_onafterbranchloading_event.html | 2022-09-24T19:47:23 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.dhtmlx.com |
Contribute
Pando documentation is open source. You are very welcome to translate it into other languages to make it accessible to a greater population!
Preparation
Pando documentation is based on Docusaurus, a static site generator for React.js.
- You need to ensure that you have Yarn installed
- Clone the repository of the document
- Run yarn in the root of the repo
- Run yarn start to preview the documentation
Document Structure
The documentation is organized in the following way:
- all source is in the docs directory and the developer directory
- the docs directory contains the following subdirectories:
  - lake contains all the Pando Lake & 4swap documentation
  - leaf contains all the Pando Leaf documentation
  - rings contains all the Pando Rings documentation
  - wallets contains all documentation about wallets on Mixin Network
  - 3rd-party-apps lists applications built on top of Pando
  - security contains documentation concerning security issues
  - community contains all the community documentation
- the developer directory contains the following subdirectories:
  - lake contains the dev documentation of Pando Lake and 4swap
  - leaf contains the dev documentation of Pando Leaf
  - rings contains the dev documentation of Pando Rings
  - resources contains the resources
- the sidebar of docs is defined in sidebar.docs.js, and the sidebar of developer is defined in sidebar.developer.js
Translation

Translate at Crowdin
If you're not familiar with GitHub and the i18n setup of Pando, we recommend using Crowdin to help us translate the docs.

- Sign up for a Crowdin account
- Browse the translation status and progress on Pando's page at Crowdin
- Get familiar with the Crowdin translation UI, as you will need to use it to translate JSON and Markdown files
info
Please note that none of the code in the documentation should be translated.
Initialize the Translation

Generate new translation files for new languages
If you're the maintainer of this project, please follow the instructions in the i18n tutorial to add a new language.
Translate the index page
Please follow the Docusaurus i18n tutorial to translate your index page and React components.
Generate/Update json files
yarn run write-translations --locale $LANG_CODE
The $LANG_CODE is the language code of the language you want to generate. For example, if you want to generate the translation files for the French language, you should use fr.

The translation files are generated in the i18n/$LANG_CODE/ directory.
Generate Markdown files
Copy Markdown files in docs to i18n/$LANG_CODE/docusaurus-plugin-content-docs/current, and translate them:

mkdir -p i18n/$LANG_CODE/docusaurus-plugin-content-docs/current
cp -r docs/** i18n/$LANG_CODE/docusaurus-plugin-content-docs/current
Translate the documents
All the documents are placed in the i18n/$LANG_CODE/ directory according to the language.

- i18n/$LANG_CODE/code.json: the translation of the index page and the text used by Docusaurus.
- i18n/$LANG_CODE/docusaurus-theme-classic/footer.json: the translation of the footer.
- i18n/$LANG_CODE/docusaurus-theme-classic/navbar.json: the translation of the navbar.
- i18n/$LANG_CODE/docusaurus-plugin-content-docs/current.json: the category labels on the sidebar.
- i18n/$LANG_CODE/docusaurus-plugin-content-docs/current/**: the Markdown files of the documents.
Preview the translation
yarn run start --locale $LANG_CODE | https://docs.pando.im/es/docs/community/contribute/ | 2022-09-24T18:39:12 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.pando.im |
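To check the final static output for a single locale, you can also run a production build restricted to that locale. This assumes a standard Docusaurus 2 setup, which accepts the same flag on the build command:

yarn run build --locale $LANG_CODE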
NGLess: NGS Processing with Less Work¶
NGLess is a domain-specific language for NGS (next-generation sequencing data) processing.
For questions, you can also use the ngless mailing list.
Note
If you are using NGLess for generating results in a scientific publication, please cite
NG-meta-profiler: fast processing of metagenomes using NGLess, a domain-specific language by Luis Pedro Coelho, Renato Alves, Paulo Monteiro, Jaime Huerta-Cepas, Ana Teresa Freitas, Peer Bork - Microbiome 2019 7:84;
NG-meta-profiler¶
For metagenomics profiling, consider using ng-meta-profiler, which is a collection of predefined pipelines developed using NGLess.
NGLess¶
NGLess is best illustrated by an example:
ngless "1.4" input = paired('ctrl1.fq', 'ctrl2.fq', singles='ctrl-singles "1 "1 "1) | https://ngless.readthedocs.io/en/latest/ | 2022-09-24T20:27:19 | CC-MAIN-2022-40 | 1664030333455.97 | [] | ngless.readthedocs.io |
Mutation Annotation Format (MAF)
Description
Mutation Annotation Format (MAF) files are tab-delimited files that contain somatic and/or germline mutation annotations. MAF files containing any germline mutation annotations are protected and distributed in the controlled access portion of the GDC Data Portal. MAF files containing only somatic mutations are publicly available.
Overview
Variants are discovered by aligning DNA sequences derived from tumor samples and sequences derived from normal samples to a reference sequence [1]. A MAF file identifies, for all samples in a project, the discovered putative or validated mutations and categorizes those mutations (polymorphism, deletion, or insertion) as somatic (originating in the tumor tissue) or germline (originating from the germline). Mutation annotations are also reported. Note that VCF files report on all transcripts affected by a mutation, while MAF files only report on the most affected one.
MAF files are generated at the GDC using the Variant Aggregation pipeline [2] by aggregating case-level VCF files on a project and pipeline level (one protected MAF per project per pipeline). Protected MAF files are further processed into open-access somatic MAF files by removing low-quality or germline mutations.
Structure
The structure of the MAF is available in the GDC MAF Specification, along with descriptions for each field [3].
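Because MAF files are plain tab-delimited text whose header/comment lines begin with a # character (for example, the version line), they can be loaded with standard tools. Below is a minimal sketch in Python using pandas; the file name is a placeholder, and the exact columns present should be checked against the GDC MAF Specification for your data release:

import pandas as pd

# MAF files are tab-delimited; lines starting with '#' are headers/comments
maf = pd.read_csv("example.somatic.maf", sep="\t", comment="#", low_memory=False)

# Inspect the available annotation columns before relying on specific names
print(maf.shape)
print(list(maf.columns)[:10])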
References